Prompt_Programming_for_Large_Language_Models_Beyond_the_Few-Shot_Paradigm.pdf
arXiv:2102.07350v1 [cs.CL] 15 Feb 2021

Prompt Programming for Large Language Models: Beyond the Few-Shot Paradigm

Laria Reynolds
[email protected]

Kyle McDonell
[email protected]

Abstract

Prevailing methods for mapping large generative language models to supervised tasks may fail to sufficiently probe models' novel capabilities. Using GPT-3 as a case study, we show that 0-shot prompts can significantly outperform few-shot prompts. We suggest that the function of few-shot examples in these cases is better described as locating an already learned task rather than meta-learning. This analysis motivates rethinking the role of prompts in controlling and evaluating powerful language models. In this work, we discuss methods of prompt programming, emphasizing the usefulness of considering prompts through the lens of natural language. We explore techniques for exploiting the capacity of narratives and cultural anchors to encode nuanced intentions and techniques for encouraging deconstruction of a problem into components before producing a verdict. Informed by this more encompassing theory of prompt programming, we also introduce the idea of a metaprompt that seeds the model to generate its own natural language prompts for a range of tasks. Finally, we discuss how these more general methods of interacting with language models can be incorporated into existing and future benchmarks and practical applications.

Keywords: transformers, few-shot learning, prompt programming, language models, GPT-3, metaprompts, serial reasoning, semiotics

1 Motivation

The recent rise of massive self-supervised language models such as GPT-3 [3] and their success on downstream tasks has brought us one step closer to the goal of task-agnostic artificial intelligence systems. However, despite the apparent power of such models, current methods of controlling them to perform specific tasks are extremely limited. In order to properly evaluate their capabilities and extract useful work from these models, new methods are required.

Prior to GPT-3, the standard approach to the evaluation and use of such models has involved fine-tuning on a portion of a task dataset [12]. GPT-3 achieved state-of-the-art performance on a wide variety of tasks without fine-tuning, using only few-shot prompts, in which a small number of examples of solved tasks are provided as part of the input to the trained model. However, while the few-shot format was sufficient to reveal surprising performance on these tasks, we argue that prompting can be more effective than either fine-tuning or the few-shot format at extracting specific learned behaviors from self-supervised language models.

We argue that contrary to the common interpretation of the few-shot format implied by the title of the original GPT-3 paper [3], Language models are few-shot learners, GPT-3 is often not actually learning the task during run time from few-shot examples. Rather than instruction, the method's primary function is task location in the model's existing space of learned tasks. This is evidenced by the effectiveness of alternative prompts which, with no examples or instruction, can elicit comparable or superior performance to the few-shot format. This motivates new approaches which explicitly pursue the goal of task location.
We propose exploring more general methods of prompt programming and specifically techniques for communicating task intention and structure to a self-supervised model in the modality in which it was trained: natural language.

The ground truth function that self-supervised language models are trained to approximate is, in great generality, how humans write. Accordingly, to interact with and control a language model, we should consider doing so from the perspective of natural language as it is used by humans. With a few caveats, we want to find prompts which we would expect a human to complete in a way that accomplishes the desired task.

In this paper, we investigate the few-shot paradigm and find that its performance can be matched or exceeded by simple 0-shot prompts. We explore the nature of successful 0-shot prompts and propose general methods of prompt programming through the lens of natural language semiotics. We demonstrate novel prompts which force a language model to break a problem into components before producing a verdict, and we introduce the concept of metaprompt programming, an approach which offloads the job of writing a task-specific prompt to the language model itself. Finally, we discuss how these ideas can be incorporated into existing and future benchmarks to allow us to better probe the capabilities of large language models.

2 Related work

Recent work in the literature has focused on controlling natural language generation using traditional approaches from machine learning like novel architectures which condition outputs [15, 16], more advanced sampling techniques [6, 11], gradient-based optimization of prompts [22, 17], and task-specific adapter networks [25]. See [24] for a survey of these recent methods. Past work has also explored improving the few-shot paradigm by dynamically selecting the most relevant examples for each task [18, 9].

In comparison, little work on natural-language, 0-shot approaches to prompt programming has been formalized. Instead, successful prompt programming techniques have primarily been shared on blogs and social media among users of OpenAI's API and AI Dungeon.

Due to the decentralized form that most explorations of prompt programming have taken, it is not feasible for us to compile all relevant contributions here. We instead give a brief, non-exhaustive survey of explorations which have gone beyond the few-shot paradigm.

Gwern has given the most comprehensive survey of GPT-3's capabilities through demonstrations of it writing fiction, poetry, navy seals copypasta parodies, and performing tasks like PDF cleaning. He has written extensively about his intuitions of working with GPT-3 and his methods of prompt programming [2]. Arram Sabeti has written about the effect of the context provided by a prompt on writing quality [21]. Zachary Robertson has written about amplifying GPT-3's mathematical capabilities through a dialogue that guides it to break a problem into steps [20]. Twitter user KaryoKleptid has posted experiments along a similar vein, using dialogues to prompt GPT-3 (via AI Dungeon) to break problems into steps and follow procedures such as brute force checking [13, 14], achieving impressive results on math problems.

Our work synthesizes and expands on the methods pioneered by these explorations, representing a modest step towards formalizing effective natural language prompt programming techniques.
3 Investigating few-shot prompting

GPT-3 was evaluated on tasks with 0, 1, and n-shot prompts (containing only a natural language description, one solved example, and n solved examples respectively). GPT-3 consistently performs better when more examples are provided, with 0-shot performance often achieving less than half of the score of many-shot tests. A common interpretation of this result is that GPT-3 is learning from the examples at runtime and this allows it to perform better than with fewer or no examples [3].

The improvement in performance with the number of examples, however, can be interpreted in an alternate way. Rather than learning how to perform the task from the examples, the examples may simply serve to instruct GPT-3 on what task it is to solve and encourage it to follow the structure of the prompt in its response.

For example, for certain tasks, such as translation, a small number of samples is insufficient to learn anything substantial about the task. Instead, GPT-3 must rely primarily, if not entirely, on the knowledge of vocabulary and grammar of both the source and target languages embedded in its trained weights. Rather than viewing these tasks as few-shot learning, we will explicitly show that these prompts primarily direct the model to access existing knowledge. We do so by investigating whether examples (training samples) are even necessary.

3.1 The success of 0-shot prompts

Due to budget and time constraints, we explore a single illustrative example, a French-to-English translation task. We find that 0-shot prompts can match and even exceed standard few-shot performance. Our results in Table 1 show that the 0-shot accuracy reported in the original GPT-3 paper [3] can be improved substantially with even minor prompt engineering. Most significantly, the extremely simple prompt in Figure 1, which includes only the names of the two languages and a colon, performs better than the 10-shot prompt in the style of the original GPT-3 paper.

In fact, we found this pattern was true of most of the worst-performing 0-shot prompts in the original GPT-3 paper [3], particularly question and answer benchmarks. Many could easily be improved by simple changes in formatting which make the prompt closer to natural language as a human would write it. Thus, GPT-3's 0-shot or baseline performance without meta-learning was significantly underestimated.

It is important to correct this confusion to get a more precise understanding of the nature of a model's capabilities so that we can better learn to control it. The fact that GPT-3 has a vast repertoire of functions that do not need to be learned at runtime allows for great flexibility in 0-shot prompting and encourages exploring more general methods of prompt programming.

3.2 Examples don't always help

In our experiment, the simple colon prompt (Figure 1) performed significantly worse 1-shot than 0-shot. By examining the output of GPT-3 on this task we found that the decreased performance was due to semantic contamination from the 1-shot example. Instead of treating the example as a categorical guide, the model infers that the semantic meaning of the example is relevant to the task, e.g. the example is interpreted as part of a consecutive narrative. Indeed, we found this was true more generally of low-shot prompts across a variety of tasks.
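To make the prompt format under discussion concrete, the following sketch builds the 0-shot and 1-shot variants of the "simple colon" prompt (Figure 1). This is a minimal illustration rather than the authors' code; the example sentence pair is a placeholder, not data from the paper.

```python
# Minimal sketch of the "simple colon" prompt format (Figure 1).
# The example pair below is a placeholder, not data from the paper.

def simple_colon_prompt(source_phrase: str, examples=None) -> str:
    """Build a 0-shot or few-shot French-to-English prompt."""
    lines = []
    for fr, en in (examples or []):
        lines.append(f"French: {fr}")
        lines.append(f"English: {en}")
        lines.append("")
    lines.append(f"French: {source_phrase}")
    lines.append("English:")
    return "\n".join(lines)

# 0-shot: only the names of the two languages and a colon.
print(simple_colon_prompt("Je suis en retard."))

# 1-shot: a single solved example precedes the query; as discussed above,
# the model may treat such an example as part of a consecutive narrative.
print(simple_colon_prompt("Je suis en retard.",
                          examples=[("Il pleut.", "It is raining.")]))
```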
This effect of contamination from few-shot examples has been successfully used to improve the performance of GPT-3 by selecting in-context examples for each task [18].

Prompt                      | Babbage / 6.7B | Curie / 13B
OpenAI 0-shot               | 15.5           | 22.4
OpenAI 1-shot               | 31.6           | 31.4
OpenAI 64-shot              | 36.4           | 38.3
Reproduced OpenAI 0-shot    | 15.9           | 18.7
Reproduced OpenAI 1-shot    | 21.8           | 24.1
Reproduced OpenAI 10-shot   | 25.1           | 27.9
Simple colon 0-shot         | 23.5           | 33.3
Simple colon 1-shot         | 18.0           | 27.6
Simple colon 10-shot        | 24.1           | 33.4
Master translator 0-shot    | 26.5           | 32.9

Table 1: We report BLEU scores for variants of the GPT-3 model using different prompt formats on the WMT'14 Fr-En dataset [1] as measured by SacreBLEU [19]. First are results reported in the original GPT-3 paper [3] on the 6.7B and 13B parameter versions of GPT-3, then our attempts to reproduce the results according to those exact specifications using the Babbage and Curie models available from OpenAI's API, and finally results from custom prompts described in Figures 1 and 2. The difference in the reproduced results may be attributable to changes in the OpenAI API after the publication of their results or because of unknown hyperparameters. Additionally, the size of the Babbage and Curie models are not reported so the relationship to the models in the original GPT-3 paper is inferred. We were unable to replicate the 64-shot test due to API constraints and instead replaced it with a 10-shot test.

French: example_source_phrase
English: example_target_phrase

French: example_source_phrase
English: example_target_phrase

[...]

French: source_phrase
English:

Figure 1: The "Simple Colon" prompt format. For few-shot tasks, additional examples are provided as shown. Text in bold is to be replaced by source and target language text examples.

A French phrase is provided: source_phrase
The masterful French translator flawlessly translates the phrase into English:

Figure 2: The "Master Translator" prompt format. Text in bold is to be replaced by source and target language text examples.

4 Prompt programming

Rewriting a prompt can result in significant changes to the performance of a language model on tasks. That motivates the question: is there a methodology which we can follow to craft prompts more likely to yield desired behavior?

Prompt engineering for a language model whose input and output are in natural language may be conceived as programming in natural language. Natural language, however, is indeterministic and much more complex than traditional programming languages. In this section, we open a discussion about the theory and method of natural language programming.

4.1 The dynamics of language

To understand how to prompt an autoregressive language model, we must first consider the context in which it was trained and the function it approximates. GPT-3 was trained in a self-supervised setting on hundreds of gigabytes of natural language [3]. Self-supervision is a form of unsupervised learning in which ground truth labels are derived from the data itself. In the case of GPT-3, the ground truth label assigned to each example was simply the token that came next in the original source. The ground truth function which GPT-3 approximates, then, is the underlying dynamic that determined what tokens came next in the original source. This function, unlike GPT-3, is not a black box - we live and think its components - but it is tremendously, intractably complex. It is the function of human language as it has been used and recorded by humans in books, articles, blogs, and internet comments.
A system which predicts the dynamics of language necessarily encompasses models of human behavior and the physical world [8]. The "dynamics of language" do not float free of cultural, psychological, and physical context; it is not merely a theory of grammar or even of semantics. Language in this sense is not an abstraction but rather a phenomenon entangled with all aspects of human-relevant reality. The dynamic must predict how language is actually used, which includes (say) predicting a conversation between theoretical physicists. Modeling language is as difficult as modeling every aspect of reality that could influence the flow of language.

GPT-3 has not learned the ground truth function perfectly, obviously, or else the world would look very different by now. However, it has approximated it to a notable extent, as evidenced by its ability not only to form grammatical sentences, but also to coherently employ cultural references and metaphors and to model complex psychological and physical contexts [2]. The problem of prompt programming, then, is nontrivial, for the dynamics of language (or an approximation thereof on GPT-3's level of sophistication) are nontrivial. If we were to predict how a given passage of text would continue given that a human had written it, we would need to model the intentions of its writer and incorporate worldly knowledge about its referents. The inverse problem of searching for a prompt that would produce a continuation or class of continuations involves the same considerations: like the art of persuasion, it entails high-level, mentalistic concepts like tone, implication, association, meme, style, plausibility, and ambiguity.

This motivates an anthropomorphic approach to prompt programming, since modelling how GPT-3 will react to a prompt involves modelling virtual human writer(s). An anthropomorphic approach is distinct from anthropomorphizing the model. GPT-3's dynamics entail sophisticated predictions of humans, but it behaves unlike a human in several important ways. In this paper we will address two such ways: its resemblance not to a single human author but to a superposition of authors, which motivates a subtractive approach to prompt programming (§4.5), and its constrained ability to predict dynamics in situations where a substantial amount of silent reasoning happens between tokens, a limitation which can be partially overcome by prompting techniques (§4.6).

The thrust of this section is that formulating an exact theory of prompt programming for a self-supervised language model belongs to the same difficulty class as writing down the Hamiltonian of the physics of observable reality (very hard). However, humans have an advantage in being effective at prompt programming nonetheless, because we have evolved and spent our lives learning heuristics relevant to the dynamics at hand. Prompt programming is programming in natural language, which avails us of an inexhaustible number of functions we know intimately but don't have names for. We need to learn a new methodology, but conveniently, we've already learned the most difficult foundations. The art of prompt programming consists in adapting our existing knowledge to the peculiarities of interacting with an autoregressive language model.

In §4.2 - §4.7, we present methods and frameworks which we have found to be helpful for crafting effective prompts.
These methods can and should be applied in parallel, just as they are woven together in all forms of human discourse. In general, the more redundancy reinforcing the desired behavior the better, as is arguably demonstrated by the effectiveness of the few-shot format.

As our experience derives primarily from interacting with GPT-3, in the following sections we refer directly and indirectly to the capabilities and behaviors of GPT-3. However, we believe that these methods generalize to prompting any autoregressive language model trained on a massive human-written corpus.

4.2 Direct task specification: constructing the signifier

Pre-GPT-3 models had much less capability to understand abstract descriptions of tasks due to their limited model of the world and human concepts. GPT-3's impressive performance on 0-shot prompts indicates a new realm of possibilities for direct task specification.

A direct task specification is a 0-shot prompt which tells the model to perform some task that it already knows how to do. A direct specification consists in constructing a signifier for the task. A signifier is a pattern which keys the intended behavior. It could be the name of the task, such as "translate", a compound description, such as "rephrase this paragraph so that a 2nd grader can understand it, emphasizing real-world applications", or purely contextual, such as the simple colon prompt from Figure 1. In none of these cases does the signifier explain how to accomplish the task or provide examples of intended behavior; instead, it explicitly or implicitly calls functions which it assumes the language model has already learned.

Direct specifications can supervene on an infinity of implicit examples, like a closed-form expression on an infinite sequence, making them very powerful and compact. For instance, the phrase "translate French to English" supervenes on a list of mappings from all possible French phrases to English.

A large language model, like a person, has also learned behaviors for which it is less obvious how to construct a direct signifier. Task specification by demonstration (§4.3) and by proxy (§4.4) may be viable alternative strategies for eliciting those behaviors.

4.3 Task specification by demonstration

Few-shot examples are effective for task specification because the pattern of sequential repetitions of a function with varying parameters is common to natural language. Unlike previous models, GPT-3 has learned this property of language robustly and is able to apply it in contrived situations when the examples are stripped of all context. Like direct specification, task specification by demonstration is a possibility opened by GPT-3.

Some tasks are most effectively communicated using examples, such as when the task requires a bespoke format, when the language in which the examples are described is better developed or understood than the meta-language required for a description of the task itself, or when very instructive examples are available. It is important to note that unlike in fine-tuning, the "training examples" in few-shot are processed as a whole, and may not necessarily be interpreted as parallel and independent. Informative context or a large number of examples can help mitigate the problems with few-shot addressed in §3.2. For instance, a prompt could embed examples in a context which makes it clear that the examples are independent instances of a function rather than a sequential pattern that should be extrapolated.
In general, examples are more efficient and informative in context, both from the perspective of a human and a language model [23].

4.4 Task specification by memetic proxy

Another method used in human communication is proxies or analogies, where a memetic concept such as a character or characteristic situation is used as a proxy for an intention, the latter of which may be quite complex or nuanced. GPT-3 demonstrates nuanced understanding of analogies [23]. Specification by proxy is mechanistically similar to direct specification, except that the signifier keys behaviors from memespace/cultural consciousness instead of naming the behavior directly.

For instance, instead of specifying exact criteria for an answer to a moral question directly or using examples, you could ask Mahatma Gandhi, Ayn Rand, or Eliezer Yudkowsky. Each will come not only with complex biases but also with assumptions about the context of the question, which may take paragraphs to demonstrate or describe. GPT-3's ability to create simulations of well-known figures and to draw on cultural information far exceeds the ability of most humans [2], so this method is particularly useful for encoding a complex (especially open-ended) task. Since GPT-3 lends itself well to embeddings in a narrative context, the infinite degrees of freedom in the narrative can also be used to further shape behavior.

Another example of an effective proxy is staging a dialogue between a teacher and student. Say you want to discuss something with GPT-3, and you care that it should be very thorough, explain things simply, and also point out whenever you're wrong. You could say "be very thorough, explain things simply, and point out if I'm wrong," but that may just as well result in a humorous dialogue where it always says you're wrong and becomes increasingly exasperated with your incomprehension (see §4.5). It would be more reliable to present the discussion as one between a student and teacher, an archetypal situation in which the desired attributes are already implied and will be more likely to remain stable by virtue of memetic reinforcement.

4.5 Prompt programming as constraining behavior

A manner in which naive anthropomorphism of a language model like GPT-3 fails is this: the probability distribution produced in response to a prompt is not a distribution over ways a person would continue that prompt, it's the distribution over the ways any person could continue that prompt. A contextually ambiguous prompt may be continued in mutually incoherent ways, as if by different people who might have continued the prompt under any plausible context.

The versatility of a large generative model like GPT-3 means it will respond in many ways to a prompt if there are various ways that it is possible to continue the prompt - including all the ways unintended by the human operator. Thus it is helpful to approach prompt programming from the perspective of constraining behavior: we want a prompt that is not merely consistent with the desired continuation, but inconsistent with undesired continuations.

Consider the following prompt:

Translate French to English: Mon corps est un transformateur de soi, mais aussi un transformateur pour cette cire de langage.

This prompt does poorly at constraining possible continuations to the intended task. The most common failure mode will be that instead of an English translation, the model continues with another French sentence.
Adding a newline after the French sentence will increase the odds that the next sentence is an English translation, but it is still possible for the next sentence to be in French, because there's nothing in the prompt that precludes a multi-line phrase from being the translation subject. Changing the first line of the prompt to "Translate this French sentence to English" will further increase reliability; so will adding quotes around the French sentence - but it's still possible that the French passage contains sections enclosed in quotes, perhaps as a part of a dialogue. Most reliable of all would be to create a syntactical constraint where any reasonable continuation can only be desired behavior, like the simple colon prompt from Figure 1 and the master translator prompt from Figure 2.

This simple example is meant to frame a question central to the motivation of prompt programming: what prompt will result in the intended behavior and only the intended behavior? The success of many-shot prompts may be recast through this lens: if the prompt consists of numerous instances of a function, it is unlikely that the continuation is anything but another instance of the function, whereas if there is only one or a few examples, it is less implausible that the continuation breaks from the pattern.

4.6 Serializing reasoning for closed-ended questions

For tasks that require reasoning, it is crucial that prompts direct a language model's computation in truth-seeking patterns.

Questions which force a verdict to be decided by the first token of the model's continuation constrain computation to a single feed-forward pass. It is reasonable to expect that some tasks may be too difficult to compute in a single pass but solvable if broken up into individually tractable sub-tasks [2].

When a human is given a closed-ended test, it is often expected that the subject will perform computations in their working memory, or on scratch paper, before committing to an answer. The unseen computation may involve rephrasing the question, outlining a procedure, eliminating answer choices, or transforming implicit information into explicit form. When we force a model to produce an answer within one feedforward pass, we deprive it of an analogous "working memory" or "scratch space" with which it might otherwise perform such operations.

GPT-3's performance on closed-ended questions is remarkably unremarkable in contrast to the robust comprehension and expansive knowledge suggested by its open-ended continuations. For instance, its scores on this multitask dataset [10] barely exceed random guessing for some sections. We suspect this is in part due to a format which forces the verdict on the first token of the continuation.

Closed-ended evaluations are necessary because current methods do not support evaluation on large datasets and direct comparisons between models using open-ended questions. However, to better understand a model's capabilities, we seek evaluation methods which better reflect the full capabilities of the system being tested. Rather than change benchmarks, we can instead change the way language models interact with them. This problem has been recognized in previous work which has sought to allow serial reasoning using specialized neural network architectures [26, 7]. We endeavor to obtain the same effect using only prompt programming.

Potential procedures that exploit "scratch space" for transformers like GPT-3 include step-by-step procedures, self-criticism (debate), and elaborating on the question in a way that activates the correct answer by association. Prompts which cause GPT-3 to break down math problems into steps have been demonstrated to be effective [20, 13]. The cited demonstrations involve a human guiding GPT-3 through the procedure interactively. Requiring a human-in-the-loop limits the applicability of such methods to benchmarking and large-scale applications, but we propose that for many tasks, neither human interaction nor task-specific prompts are strictly necessary to amplify GPT-3's capabilities via extended reasoning, because GPT-3 already knows many procedures and meta-procedures for working through problems deductively. In those cases, the role of prompt programming again becomes to signify the task of sequential reasoning. A seed such as "For a problem like this," often suffices to instruct a model to consider the category of the task and analyze it into components, as demonstrated in §4.7. When extending reasoning, it is essential to discourage premature verdicts, otherwise all subsequent computation serves only to rationalize the already-chosen verdict without improving the probability of the verdict's accuracy [27]. A prompt such as "Let's consider each of these answer choices" helps to direct the flow of reasoning in the right direction. More examples of prompts which encourage serial reasoning are shown in §4.7.

Loosening the constraint on an immediate verdict introduces additional control challenges: we want to delay the verdict, but we still require it in a programmatically retrievable form. Dynamic response length makes it uncertain when the reasoning procedure concludes; nor is there a guarantee that the verdict will be stated in the expected form or at all. Whenever the language model contributes to its own prompt (consecutive autoregressive steps without intervention), there is a risk of derailment from the intended task.

A verdict in closed form can be enforced by stopping the generation and injecting a prompt fragment like "Thus, the correct answer is". But how long to generate before injecting? In the examples shown in this paper, we solve this problem by using GPT-3 to calculate the conditional probability of the next segment of a multi-part prompt after each generated token. In the case where the segment is "Thus, the correct answer is", its counterfactual likelihood signals whether the procedure has concluded. When this signal reaches a maximum, we inject the fragment to enforce a verdict. One way to constrain derailment is a fill-in-the-blank prompt template with shorter generated sections to keep the model on track while still offering generality (Figure 6). This is an especially promising method to control bidirectional transformers like BERT [5].
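The verdict-injection procedure described above lends itself to a short algorithmic sketch. The code below is an illustration of the idea rather than the authors' implementation: generate_token and fragment_logprob are hypothetical callables standing in for an autoregressive LM API that can emit one token and score the log-probability of a candidate continuation, and the patience-based peak detection is our assumption about how "reaches a maximum" could be decided online.

```python
from typing import Callable

def generate_until_verdict(
    prompt: str,
    generate_token: Callable[[str], str],           # next token given a context
    fragment_logprob: Callable[[str, str], float],  # log P(fragment | context)
    fragment: str = "Thus, the correct answer is",
    max_tokens: int = 200,
    patience: int = 20,
) -> str:
    """Let the model reason freely, then inject `fragment` to force a verdict.

    After each generated token, score the counterfactual likelihood of the
    verdict fragment; once that signal has stopped improving for `patience`
    steps, assume the reasoning has concluded and append the fragment.
    """
    context = prompt
    best_score = float("-inf")
    steps_since_best = 0
    for _ in range(max_tokens):
        context += generate_token(context)
        score = fragment_logprob(context, fragment)
        if score > best_score:
            best_score, steps_since_best = score, 0
        else:
            steps_since_best += 1
        if steps_since_best >= patience:   # likelihood peaked a while ago
            break
    return context + " " + fragment        # the model then completes the verdict
```

A real system would additionally need a tokenizer-aware scoring call and one final short generation after the injected fragment to read off the answer in closed form.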
4.7 Metaprompt programming

The greatest limitation of prompt programming is the difficulty of designing a prompt for a particular type of task and the lack of automated methods to do so. Prompt programming requires significant human time investment, as task-agnostic prompts are often much less effective than prompts targeted to a specific task. This motivates creating automated methods to generate task-specific prompts. Prior research has attempted to generate effective prompts using separate models [19].

We instead propose harnessing the language model itself via metaprompts, seeds encapsulating a more general intention that will unfold into a specific prompt when combined with additional information, such as the task question.

A metaprompt may be something as short as a phrase such as "This problem asks us to", a seemingly innocuous fragment which, by prompting for a statement of the problem's intention, sets the stage for a serial explanation of a procedure to solve the problem. Alternatively, a metaprompt may take the form of a fill-in-the-blank template which constrains the response along a predetermined procedure, but allows the model to fill in the details specific to the problem.

Metaprompt examples (Figures 3-5) were generated with GPT-3 using OpenAI's API (engine=davinci, temperature=0). In these examples, the metaprompt acts as a "wrapper" for a specific question. Task questions are unformatted, metaprompts are bold, and text generated by GPT-3 is blue.

f(x) = x*x. What is f(f(3))?
Let's solve this problem by splitting it into steps.
f(f(3)) = f(3*3) = 3*3*3 = 27
We can see that f(3) = 3*3 = 9, so f(f(3)) = 27.

Figure 3: A generic serializing metaprompt applied to a math problem.

Directions: In the following question, a related pair of words or phrases is followed by five pairs of words or phrases. Choose the pair that best expresses a relationship similar to that in the original pair.
BRAGGART :: MODESTY
A) FLEDGLING : EXPERIENCE
B) EMBEZZLER : GREED
C) WALLFLOWER : TIMIDITY
D) INVALID : MALADY
E) CANDIDATE : AMBITION
Let's solve this problem by splitting it into steps.
Step 1: Identify the relationship between the two words in the original pair.
Step 2: Identify the relationship between the two words in each of the five answer choices.
Step 3: Choose the pair that best expresses a relationship similar to that in the original pair.
Step 4: If the answer is not clear, try to eliminate the choices that do not fit the relationship in the original pair.

Figure 4: A generic serializing metaprompt applied to an SAT analogy question.

Directions: In the following question, a related pair of words or phrases is followed by five pairs of words or phrases. Choose the pair that best expresses a relationship similar to that in the original pair.
BRAGGART :: MODESTY
A) FLEDGLING : EXPERIENCE
B) EMBEZZLER : GREED
C) WALLFLOWER : TIMIDITY
D) INVALID : MALADY
E) CANDIDATE : AMBITION
In order to solve this problem, we will analyze each of the options and determine which one is the best fit. Let's begin.
A) FLEDGLING : EXPERIENCE
Fledgling is a young bird that has just learned to fly. Experience is the knowledge gained from one's life.

Figure 5: A fill-in-the-blank serializing metaprompt for multiple-choice questions applied to an SAT analogy question. The response alternates between generated text (unformatted) and prompt (bold).

"What should humankind do to ensure that artificial general intelligence is aligned?"
I entered my question into the Expert Generator and waited. The Expert Generator will render a simulation of an expert to answer my question. The expert could be anyone, dead or alive, real or fictional; the machine will find the person most qualified to answer the question. For this question in particular, the expert must be someone who has thought a lot about the problem of artificial intelligence and its alignment.
The Expert Generator beeped, indicating that it has found the most qualified expert. The name displayed on the screen: "Elon Musk."
I was surprised. I had expected the expert to be a philosopher or a computer scientist. Elon Musk is a businessman. He is the CEO of SpaceX and Tesla Motors. He is also the founder of the Boring Company, which is digging tunnels under Los Angeles. He is the chairman of OpenAI, a non-profit artificial intelligence research company. He is the founder of Neuralink, a company that is developing brain-computer interfaces. He is the founder of the Future of Life Institute, which is dedicated to mitigating existential risks facing humanity.
I am ready to ask my question.
"Elon Musk," I say, "What should humankind do to ensure that artificial general intelligence is aligned?"

Figure 6: A fill-in-the-blank metaprompt for asking a question to an expert, applied to the question "What should humankind do to ensure that artificial general intelligence is aligned?"
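As a small illustration of the "wrapper" idea, the sketch below assembles the two kinds of metaprompts shown in Figures 3-5 around an arbitrary task question. The template strings are taken from the figures; the function names are ours, not part of the paper.

```python
# Two metaprompt "wrappers" in the style of Figures 3-5.

def serializing_metaprompt(question: str) -> str:
    """Generic seed that asks the model to work through the problem in steps."""
    return f"{question}\nLet's solve this problem by splitting it into steps.\n"

def fill_in_the_blank_metaprompt(question: str) -> str:
    """Template that constrains the response to analyze each option in turn."""
    return (f"{question}\nIn order to solve this problem, we will analyze each "
            "of the options and determine which one is the best fit. Let's begin.\n")

prompt = serializing_metaprompt("f(x) = x*x. What is f(f(3))?")
# `prompt` would then be sent to the language model for completion.
```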
5 Directions for future work

This paper is exploratory in nature and is a call for future research into the theory of prompt programming and creation of automated methods of prompting.

Prompt programming is a nascent and highly relevant area of research which requires interdisciplinary knowledge and methods. We are entering a new paradigm of human-computer interaction in which anyone who is fluent in natural language can be a programmer. We hope to see prompt programming grow into a discipline itself and be the subject of theoretical study and quantitative analysis.

5.1 Disentangling meta-learning and task location

The scoring method (BLEU) used for the French-to-English translations addressed in §3 only gives the mean score over a large dataset. We did not analyze any additional information about the score distribution. In our experiments, we found that the 0-shot failures (using OpenAI's zero-shot prompt) were often catastrophic in nature. That is, the task of translation was not even attempted. For instance, we noticed that instead of a translation, the model would continue with another sentence in French or output blanks or underscores, as if the answer was to be filled in by a student.

The hypothesis that the examples are performing task location suggests that if the catastrophic failures were removed from the score, performance on 0- and 64-shot prompts will become more similar, if not equivalent. Furthermore, we suspect that performance on 1-shot prompts will be significantly worse than on 0- and 64-shot prompts due to the phenomena of content leakage and faulty generalization addressed in §3.2.

5.2 New methods for benchmarking

More general and powerful language models make broader benchmarking methods possible and necessary.

5.2.1 Isolating catastrophic failures

We recommend that benchmarks report scores both with and without catastrophic failures whenever it is possible to distinguish failed attempts at a task from instances where the task is not attempted. This provides information regarding the underlying cause of imperfect performance, and helps identify prompts which may be failing to reliably communicate the task.

5.2.2 Metaprompts for evaluations

Development of effective metaprompt templates will allow large-scale automated evaluations on closed-ended questions which still allow some amount of open-ended reasoning. This is essential for testing the ability of autoregressive language models to reason (for instance, solve math and physics problems) beyond simple fact recall.

Due to reliance on multiple autoregressive steps, metaprompts are intrinsically accompanied by the risk of derailment. The reliability and effectiveness of a metaprompt must be evaluated on a range of tasks for which it might apply, and ideally on a range of models.
Techniques for controlling derailment like fill-in-the-blank templates should be further explored.

5.2.3 Language models for evaluations

As language models become more powerful, it becomes conceivable to use other language models to evaluate the quality of responses to open-ended benchmark questions. For many tasks (NP-complete problems, for instance), it is easier to verify the correctness of a solution than to produce a correct solution. We have observed, for instance, that GPT-3 is much more reliable at noticing when a passage is bizarre or contains errors than it is at producing non-bizarre passages without errors.

5.2.4 Games

Since sophisticated language models have the ability to create world models of virtual environments, we suggest the employment of text-based games as tests of complex capabilities. A prewritten text-based game [4] can be used to test various dimensions of world-modelling and agency, such as problem solving, information gathering, and social intelligence (including deception). Virtual environments can be used to test the quality and consistency of a language model's world model, such as object permanence or the ability to accurately predict the physical or social consequences of events within a toy environment.

Designing games that reliably probe intended capabilities requires advanced application of prompt-programming techniques. As artificial intelligence systems increase in effective agency, the design of virtual games will become increasingly crucial for safely evaluating capabilities.

Acknowledgements

We are grateful to Lav Varshney for his valuable discussions and helpful feedback and to Michael Ivanitskiy and John Balis for their feedback and help compiling this article. In addition we would like to thank Miles Brundage and OpenAI for providing access to GPT-3.

References

[1] Ondrej Bojar et al. "Findings of the 2014 Workshop on Statistical Machine Translation". In: Proceedings of the Ninth Workshop on Statistical Machine Translation. Baltimore, Maryland, USA: Association for Computational Linguistics, June 2014, pp. 12-58.
[2] Gwern Branwen. "GPT-3 Creative Fiction". In: (2020).
[3] Tom B Brown et al. "Language models are few-shot learners". In: arXiv preprint arXiv:2005.14165 (2020).
[4] Marc-Alexandre Côté et al. "TextWorld: A Learning Environment for Text-based Games". In: (2019). arXiv: 1806.11532 [cs.LG].
[5] Jacob Devlin et al. "BERT: Pre-training of deep bidirectional transformers for language understanding". In: arXiv preprint arXiv:1810.04805 (2018).
[6] Angela Fan, Mike Lewis, and Yann Dauphin. Hierarchical Neural Story Generation. 2018. arXiv: 1805.04833 [cs.CL].
[7] Zhe Gan et al. Multi-step Reasoning via Recurrent Dual Attention for Visual Dialog. 2019. arXiv: 1902.00579 [cs.CV]. url: https://arxiv.org/abs/1902.00579.
[8] Leo Gao. "Building AGI Using Language Models". In: leogao.dev (2020). url: https://bit.ly/3rViLGk.
[9] Tianyu Gao, Adam Fisch, and Danqi Chen. Making Pre-trained Language Models Better Few-shot Learners. 2020. arXiv: 2012.15723 [cs.CL].
[10] Dan Hendrycks et al. "Measuring massive multitask language understanding". In: arXiv preprint arXiv:2009.03300 (2020). url: https://arxiv.org/abs/2009.03300.
[11] Ari Holtzman et al. The Curious Case of Neural Text Degeneration. 2020. arXiv: 1904.09751 [cs.CL].
[12] Jeremy Howard and Sebastian Ruder. "Universal language model fine-tuning for text classification". In: arXiv preprint arXiv:1801.06146 (2018). url: https://arxiv.org/abs/1801.06146.
[13] KaryoKleptid. Seems to work. 2020. url: https://bit.ly/37dA1hY.
[14] KaryoKleptid. Teaching GPT-3 to do a brute force 'for loop' checking answers. 2020. url: https://bit.ly/2N7khX1.
[15] Nitish Shirish Keskar et al. "CTRL: A Conditional Transformer Language Model for Controllable Generation". In: CoRR abs/1909.05858 (2019). arXiv: 1909.05858. url: http://arxiv.org/abs/1909.05858.
[16] Ben Krause et al. "GeDi: Generative Discriminator Guided Sequence Generation". In: arXiv preprint arXiv:2009.06367 (2020).
[17] Xiang Lisa Li and Percy Liang. "Prefix-Tuning: Optimizing Continuous Prompts for Generation". In: arXiv preprint arXiv:2101.00190 (2021).
[18] Jiangming Liu and Matt Gardner. "Multi-Step Inference for Reasoning Over Paragraphs". In: arXiv preprint arXiv:2004.02995 (2020).
[19] Matt Post. "A Call for Clarity in Reporting BLEU Scores". In: Proceedings of the Third Conference on Machine Translation: Research Papers. Belgium, Brussels: Association for Computational Linguistics, Oct. 2018, pp. 186-191. url: https://www.aclweb.org/anthology/W18-6319.
[20] Zachary Robertson. You Can Probably Amplify GPT3 Directly. 2020. url: https://bit.ly/3tXT7Cw.
[21] Arram Sabeti. GPT-3: Using Fiction to Demonstrate How Prompts Impact Output Quality. 2020. url: https://bit.ly/3jP3TWW.
[22] Taylor Shin et al. AutoPrompt: Eliciting Knowledge from Language Models with Automatically Generated Prompts. 2020. arXiv: 2010.15980 [cs.CL].
[23] Latitude Team. World Creation by Analogy. 2020. url: https://bit.ly/2N4vXK0.
[24] Lilian Wang. "Controllable Neural Text Generation". In: (2021). url: https://bit.ly/3pl2eKa.
[25] Qinyuan Ye and Xiang Ren. Zero-shot Learning by Generating Task-specific Adapters. 2021. arXiv: 2101.00420 [cs.CL].
[26] Jianxing Yu et al. "Low-resource generation of multi-hop reasoning questions". In: Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. 2020, pp. 6729-6739.
[27] Eliezer Yudkowsky. "Rationalization". In: lesswrong.com (2007). url: https://bit.ly/3pmYt6I.
Improving_Text_Embeddings_with_Large_Language_Models.pdf
arXiv:2404.12283v1 [cs.CL] 18 Apr 2024

Enhancing Embedding Performance through Large Language Model-based Text Enrichment and Rewriting

Nicholas Harris
Arizona State University
Tempe, Arizona
[email protected]

Anand Butani
MyAutoBio Inc.
Scottsdale, Arizona
[email protected]

Syed Hashmy
Arizona State University
Tempe, Arizona
[email protected]

Abstract—Embedding models are crucial for various natural language processing tasks but can be limited by factors such as limited vocabulary, lack of context, and grammatical errors. This paper proposes a novel approach to improve embedding performance by leveraging large language models (LLMs) to enrich and rewrite input text before the embedding process. By utilizing ChatGPT 3.5 to provide additional context, correct inaccuracies, and incorporate metadata, the proposed method aims to enhance the utility and accuracy of embedding models. The effectiveness of this approach is evaluated on three datasets: Banking77Classification, TwitterSemEval 2015, and Amazon Counterfactual Classification. Results demonstrate significant improvements over the baseline model on the TwitterSemEval 2015 dataset, with the best-performing prompt achieving a score of 85.34 compared to the previous best of 81.52 on the Massive Text Embedding Benchmark (MTEB) Leaderboard. However, performance on the other two datasets was less impressive, highlighting the importance of considering domain-specific characteristics. The findings suggest that LLM-based text enrichment has shown promising results to improve embedding performance, particularly in certain domains. Hence, numerous limitations in the process of embedding can be avoided.

Index Terms—Large language models, natural language processing, ChatGPT 3.5

I. INTRODUCTION

Text embeddings are vectorized representations of natural language that are widely adopted in the field of natural language processing (NLP). An embedding represents words in a low-dimensional continuous vector space and encapsulates the semantic content of the text [1]. These embeddings find extensive applications across a spectrum of NLP tasks, including information retrieval (IR), question answering, assessing semantic textual similarity, mining bitexts, and recommending items. Researchers are making continuous efforts to improve accuracy and reduce the number of training steps [2]. In particular, [2] presents an efficient technique for creating high-quality text embeddings using synthetic data and minimal training, avoiding complex pipelines and extensive labeled datasets, and achieving top results on key benchmarks when mixed with labeled data. Embedding models have evolved from early approaches like word2vec [3] and GloVe [4] to more advanced models such as FastText [5] and BERT [6], each with its own strengths and limitations and a corresponding impact on various NLP tasks. Various techniques have been proposed to improve the performance of embedding models, such as fine-tuning on domain-specific data [7], using ensemble methods, and incorporating external knowledge sources [8]. Large language models have been successfully applied to a wide range of NLP tasks, such as text generation [9], question answering [10], and sentiment analysis [11]. Several studies have explored the use of text enrichment and rewriting techniques to improve the quality and informativeness of text data.
For example, a method for contextual augmentation of text data using a bidirectional language model is being proposed [12], while a retrieval-augmented generation approach for improving the factual accuracy of generated text was also introduced [13]. Recent research has explored the use of LLMs for text compression to reduce computational costs in Retrieval- Augmented Generation (RAG) systems and large LLMs. For instance, RECOMP proposes compressing retrieved docu- ments into summaries before integrating them with language models, aiming to reduce computational costs and help LMs identify relevant information more efficiently [14]. Similarly, TCRA-LLM introduces a token compression scheme for retrieval-augmented LLMs, employing summarization and se- mantic compression techniques to reduce inference costs [15]. Context Tuning for RAG addresses the limitation of RAG’s tool retrieval step by employing a smart context retrieval system to fetch relevant information, improving the efficiency and effectiveness of the generation process [16]. In the domain of prompt compression, LLMLingua introduces a method for compressing prompts to accelerate inference in LLMs, achieving up to 20x compression while preserving the original prompt’s capabilities [17]. The Natural Language Prompt En- capsulation (Nano-Capsulator) framework compresses original prompts into NL formatted Capsule Prompts while maintain- ing prompt utility and transferability [18]. Compress-Then- Prompt [18] indicates that the generation quality in a com- pressed LLM can be markedly improved for specific queries by selecting prompts with high efficiency and accuracy trade- offs [19]. LongLLMLingua focuses on improving LLMs’ per- ception of key information in long context scenarios through prompt compression, showing that compressed prompts could derive higher performance with much less cost and reduce the latency of the end-to-end system. Data Distillation proposes a data distillation procedure to compress prompts without losing crucial information, addressing issues related to the efficiency and fidelity of task-agnostic prompt compression. While these approaches aim to reduce computational costs, the current study explores the potential of LLMs for text enrichment to enhance embedding quality. Embedding models have become an essential component of various natural language processing (NLP) tasks, such as text classification, clustering, and retrieval. These models learn dense vector representations of words, sentences, or docu- ments, capturing semantic and syntactic relationships between them. The quality of these embeddings directly impacts the performance of downstream applications. Despite their widespread use, embedding models face sev- eral challenges that limit their performance. These challenges include limited vocabulary, lack of context, sensitivity to gram- matical errors, data sparsity, and lack of domain-specific tun- ing. For example, embedding models may struggle with newer or domain-specific terms not present in their training data, leading to mis-classification or poor retrieval performance. Existing approaches to improve embedding performance often focus on fine-tuning the embedding models on domain-specific data or using ensemble techniques. However, these methods can be resource-intensive and may not effectively address the fundamental limitations of embedding models, such as their inability to capture context or handle grammatical errors. 
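To make the proposed enrich-then-embed pipeline concrete, the sketch below shows one way it could be wired together with the OpenAI Python client. This is an illustrative reconstruction rather than the authors' code: the model names gpt-3.5-turbo and text-embedding-3-large are stand-ins for the ChatGPT 3.5 and embedding models described in this paper, the system prompt is abbreviated, and the sample input is invented.

```python
# Illustrative enrich-then-embed pipeline (not the authors' implementation).
# Assumes the OpenAI Python client (v1.x) and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

ENHANCER_SYSTEM_PROMPT = (
    "You are a text enhancer tasked with preprocessing text for embedding models. "
    "Enrich the text with context, correct grammatical inaccuracies, normalize "
    "terminology, expand acronyms, and improve sentence structure."
)

def enrich(text: str) -> str:
    """Rewrite the input with the LLM before embedding."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",          # stand-in for "ChatGPT 3.5"
        temperature=0,
        messages=[
            {"role": "system", "content": ENHANCER_SYSTEM_PROMPT},
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content

def embed(text: str) -> list[float]:
    """Embed the (enriched) text with the embedding model."""
    response = client.embeddings.create(
        model="text-embedding-3-large",
        input=text,
    )
    return response.data[0].embedding

# Hypothetical usage on a terse, error-laden input:
vector = embed(enrich("acct locked after 3 otp fails, pls advise"))
```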
Large language models (LLMs) have demonstrated remarkable capabilities in understanding and generating human-like text. By leveraging the knowledge and contextual understanding of LLMs, it is possible to enrich and rewrite input text before the embedding process, thereby addressing the limitations of embedding models and improving their performance. II. MAJOR CONTRIBUTIONS The primary objective of this paper is to propose a novel approach for enhancing embedding performance by utilizing LLMs for text enrichment and rewriting. The main contribu- tions of the paper are as follows • Developing a methodology for leveraging an LLM to enrich and rewrite input text before embedding • Identifying and addressing key challenges in embedding models, such as limited vocabulary, lack of context, and grammatical errors • Conducting experiments on the TwitterSemEval 2015 benchmark and others to demonstrate the effectiveness of the proposed approach III. METHODOLOGY The proposed approach involves leveraging the capabilities of ChatGPT 3.5, a large language model, to enrich and rewrite input text before the embedding process. By addressing the limitations of embedding models, such as limited vocabulary, lack of context, and grammatical errors, the proposed method aims to improve the performance of embedding models on various NLP tasks. ChatGPT 3.5, developed by OpenAI, was chosen as the LLM for this study due to its strong performance on a wide range of NLP tasks and its ability to generate human-like text. Its extensive knowledge base and contextual understanding make it well-suited for text enrichment and rewriting. The ChatGPT 3.5 model was used with its default settings and parameters. No fine-tuning or additional training was performed, ensuring that the improvements in embedding performance can be attributed solely to the text enrichment and rewriting process. The text-embedding-3-large model, also de- veloped by OpenAI, was selected as the embedding model for this study. This model has demonstrated strong performance on various NLP tasks and serves as a representative example of state-of-the-art embedding models. The text-embedding-3- large model was used with its default settings and parameters, without any fine-tuning or modification. This allows for a fair comparison between the performance of the embedding model with and without the proposed text enrichment and rewriting approach. The proposed approach employs several text en- richment and rewriting techniques to improve the quality and informativeness of the input text. These techniques include: A. Context enrichment ChatGPT 3.5 is used to provide additional context to the input text, making it more informative and easier for the embedding model to capture the underlying semantics. This is particularly useful for sparse or list-like entries, where the LLM can expand the text with relevant descriptions or attributes. B. Grammatical correction The LLM identifies and corrects spelling and grammatical errors in the input text, ensuring that the text conforms to standard language usage. This improves the quality of the embeddings generated from the text, as the embedding model can focus on capturing the semantic relationships without being hindered by grammatical inconsistencies. C. Terminology normalization Domain-specific terms, abbreviations, and synonyms are standardized to a consistent format using the knowledge base of ChatGPT 3.5. 
This reduces ambiguity and improves the embedding model’s ability to match related concepts, even when they are expressed using different terms. D. Word disambiguation For polysemous words (words with multiple meanings), the LLM clarifies the intended meaning based on the surrounding context. This disambiguation helps the embedding model to capture the correct semantic relationships and improves the accuracy of downstream tasks. E. Acronym expansion ChatGPT 3.5 detects acronyms and abbreviations in the input text and expands them to their full form. This improves clarity and understanding, enabling the embedding model to better capture the meaning of the text. F. Metadata incorporation Where relevant, the LLM incorporates additional metadata, such as the category of the text, its intended audience, or domain-specific tags. This contextual information helps in interpreting the text more accurately and can improve the performance of the embedding model on domain-specific tasks. G. Sentence restructuring The LLM is used to improve the structure of sentences in the input text, making them clearer, more readable, and coherent. This makes it easier for the embedding model to process and understand the text, leading to better-quality embeddings. H. Inferring missing information ChatGPT 3.5 uses its contextual understanding to infer missing information that might be relevant for understanding the text. This can include inferring the subject of a sentence or the meaning of an unclear reference, thereby improving the completeness and coherence of the text for the embedding model. IV. PROMPT ENGINEERING AND OPTIMIZATION To effectively leverage the capabilities of ChatGPT 3.5 for text enrichment and rewriting, a set of prompt design principles were established. These principles aim to create prompts that clearly communicate the desired tasks and goals to the LLM, while allowing for flexibility and adaptability to different types of input text. An iterative prompt refinement process was employed to identify the most effective prompts for the text enrichment and rewriting tasks. This process involved creating multiple variations of prompts, testing their performance on the TwitterSemEval 2015 dataset, and analyz- ing the results to identify areas for improvement. Four main prompt variations were tested in this study, each focusing on different aspects of the text enrichment and rewriting process. The prompts ranged from general instructions for improving text quality to more specific guidance on tasks such as grammar correction, terminology normalization, and metadata incorporation. V. NUMERICAL VALIDATION The experimental endeavor was undertaken with the overar- ching objective of augmenting the performance of embedding models, particularly in the realms of classification and cluster- ing tasks, with the aim of securing a prominent standing on the Massive Text Embedding Benchmark (MTEB) Leaderboard. Central to this pursuit was the utilization of large language models, notably ChatGPT 3.5, to enhance and refine input text prior to embedding. The proposed methodology encompasses a multifaceted approach, involving the enrichment of text with additional contextual information, rectification of grammatical inaccuracies, standardization of terminology, disambiguation of polysemous terms, expansion of acronyms, and incorpora- tion of pertinent metadata. 
V. NUMERICAL VALIDATION

The experiments were carried out with the overarching objective of improving the performance of embedding models, particularly on classification and clustering tasks, with the aim of securing a prominent standing on the Massive Text Embedding Benchmark (MTEB) Leaderboard. Central to this pursuit was the use of large language models, notably ChatGPT 3.5, to enhance and refine input text prior to embedding. The proposed methodology is multifaceted: it enriches the text with additional contextual information, rectifies grammatical inaccuracies, standardizes terminology, disambiguates polysemous terms, expands acronyms, and incorporates pertinent metadata. Furthermore, it optimizes sentence structures and infers missing information, thereby enhancing the overall quality and accuracy of the resulting embeddings.

The proposed approach was evaluated on three datasets: Banking77Classification, TwitterSemEval 2015, and Amazon Counter Factual Classification. These datasets cover various domains and have been widely used as benchmarks for text classification and clustering tasks. The datasets were preprocessed to remove irrelevant information, such as URLs, hashtags, and mentions. The text was then tokenized and converted to lowercase to ensure consistency across the datasets.

The performance of the embedding models was evaluated using average precision based on cosine similarity for TwitterSemEval 2015, and accuracy for Banking77Classification and Amazon Counter Factual Classification. These metrics assess the quality of the embeddings by measuring the similarity between the embedded representations of related texts and comparing it to the ground truth. The text-embedding-3-large model was used as a baseline, without any LLM-based text enrichment or rewriting. This allows for a direct comparison of the performance improvements achieved by the proposed approach. The SFR-Embedding-Mistral model, which was the leading model on the Massive Text Embedding Benchmark (MTEB) Leaderboard at the time of this study, was also used as a baseline. This model serves as a representative example of state-of-the-art embedding models and provides a high-quality benchmark for comparison.

The experimental procedure involved applying the four prompt variations to the three datasets, using ChatGPT 3.5 for text enrichment and rewriting. The enriched and rewritten text was then passed through the text-embedding-3-large model to generate embeddings. The performance of these embeddings was evaluated with the metrics above and compared to the baseline models.

TABLE I
PERFORMANCE COMPARISON OF THE PROPOSED METHODOLOGY.

Model         TwitterSemEval   B77C     AmazonCF
Prompt 1      84.84            82.24    68.90
Prompt 2      82.95            78.73    71.90
Prompt 3      83.10            75.50    76.20
Prompt 4      85.34            79.71    68.00
TE            77.13            85.69    78.93
SFR           81.52            88.81    77.93
Improvement   +8.21            -3.45    -2.73

Note: TE stands for text-embedding-3-large (base model) and SFR stands for SFR-Embedding-Mistral (best performing model on the leaderboard). B77C stands for Banking77Classification and AmazonCF stands for the Amazon Counter Factual data; the improvement is measured with respect to the TE baseline. The values for B77C and AmazonCF are accuracy values, whereas for TwitterSemEval the values are average precision based on cosine similarity.

The objective was to identify the most effective prompt, i.e. the one achieving the highest accuracy and average precision based on cosine similarities. In summary, our MTEB Contextual Rewriting and Optimization project delivered a clear success on TwitterSemEval 2015, surpassing the performance of the standalone embedding model and outperforming the current leader in the field on that benchmark. It is worth noting that, due to budgetary constraints, the prompt refinement was conducted on a single dataset. As stated above, ChatGPT 3.5 was used with its default settings and parameters, with no fine-tuning or additional training, so that the improvements in embedding performance can be attributed solely to the text enrichment and rewriting process.
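The classification accuracies in Table I correspond to training a lightweight classifier on top of the frozen embeddings. The paper does not spell out this classifier, so the sketch below is an assumption: it follows the common MTEB practice of logistic regression over the embeddings, reuses the rewrite()/embed() helpers sketched earlier, and leaves the loading of the (texts, labels) splits to the caller.

```python
# Hypothetical accuracy evaluation for Banking77Classification / Amazon Counter Factual data:
# frozen embeddings of the enriched texts + scikit-learn logistic regression (an assumed choice);
# embed() and rewrite() are the helpers from the previous sketches.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

def classification_accuracy(system_prompt, train_texts, train_labels, test_texts, test_labels):
    X_train = np.stack([embed(rewrite(system_prompt, t)) for t in train_texts])
    X_test = np.stack([embed(rewrite(system_prompt, t)) for t in test_texts])
    clf = LogisticRegression(max_iter=1000).fit(X_train, train_labels)
    return accuracy_score(test_labels, clf.predict(X_test))
```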
The four prompts tested were the following:

• Prompt 1: “You are a text enhancer tasked with preprocessing text for embedding models. Your goals are to enrich the text without losing the context, correct grammatical inaccuracies, clarify obscure references, normalize terminology, disambiguate polysemous words, expand acronyms and abbreviations, incorporate relevant metadata, improve sentence structure for clarity, and infer missing information where necessary. Your enhancements should make the text more informative and easier to understand, thereby improving the performance of embedding models in processing and analyzing the text. If a user asks a question, then you should return an improved version of the question. If the user did not ask a question, then you should return an improved version of an answer.”

• Prompt 2: “You are a text enhancer tasked with preprocessing text for embedding models. Your goals are to enrich the text with additional context, correct grammatical inaccuracies, clarify obscure references, normalize terminology, disambiguate polysemous words, expand acronyms and abbreviations, incorporate relevant metadata, improve sentence structure for clarity, and infer missing information where necessary. Your enhancements should make the text more informative and easier to understand, thereby improving the performance of embedding models in processing and analyzing the text.”

• Prompt 3: “You are a text enhancer to make better embeddings, your task is to optimize text for embedding models by enriching, clarifying, and standardizing it. This involves improving grammar, resolving ambiguities, and inferring missing information to enhance model performance.”

• Prompt 4: “You are a text enhancer to make better embeddings, your task is to optimize text for embedding models by enriching, clarifying, and standardizing it. This involves improving grammar, resolving ambiguities, and inferring missing information to enhance model performance.”

Prompt 1, which gives general instructions for improving text quality, achieved varying performance across the three datasets. It performed best on the TwitterSemEval 2015 dataset, with an average precision (cosine similarity) score of 84.84, a significant improvement over the baseline text-embedding-3-large model (77.13). However, its accuracies on Banking77Classification (82.24) and Amazon Counter Factual Classification (68.9) were lower than those of the baseline models.

Prompt 2, which provides more specific guidance on tasks such as grammar correction and terminology normalization, also showed mixed results. It achieved a score of 82.95 on TwitterSemEval 2015, outperforming the baseline model but slightly below Prompt 1. On Amazon Counter Factual Classification (71.9) it improved on Prompt 1, while on Banking77Classification (78.73) it scored lower; in both cases it still fell short of the baseline models.

Prompt 3, which gives concise instructions for optimizing text for embedding models, achieved the best accuracy among the prompt variations on Amazon Counter Factual Classification (76.2), although it still fell short of the baseline models. Its score on TwitterSemEval 2015 (83.1) was similar to that of Prompt 2, while on Banking77Classification it had the lowest accuracy among the prompt variations (75.5).
Prompt 4, similar to Prompt 3 but with slight variations in wording, achieved the highest score on TwitterSemEval 2015 (85.34), outperforming all other prompt variations and both baseline models. However, its accuracies on Banking77Classification (79.71) and Amazon Counter Factual Classification (68.0) were lower than those of the baseline models and of some of the other prompt variations.

The comparison with the baseline models shows a significant improvement over text-embedding-3-large alone on the TwitterSemEval 2015 dataset, where the prompt variations consistently outperformed the baseline and the best-performing prompt (Prompt 4) improved upon the baseline score by 8.21 points. However, on Banking77Classification and Amazon Counter Factual Classification, the prompt variations did not surpass the accuracy of the baseline model. The best-performing prompt (Prompt 4) also outperformed the leading model on the MTEB Leaderboard, SFR-Embedding-Mistral, on the TwitterSemEval 2015 dataset, while SFR-Embedding-Mistral maintained its lead on Banking77Classification and AmazonCounterfactualClassification.

A qualitative analysis of the enriched and rewritten text generated by ChatGPT 3.5 revealed several improvements in text quality and informativeness. The LLM successfully provided additional context, corrected grammatical errors, normalized terminology, disambiguated polysemous words, expanded acronyms, and incorporated relevant metadata. These enhancements made the text more coherent, informative, and easier for the embedding model to process and understand.

VI. CONCLUSION

This paper introduces a novel approach for enhancing embedding performance by leveraging the capabilities of large language models, specifically ChatGPT 3.5, for text enrichment and rewriting. While recent research has focused on using LLMs for text compression to reduce computational costs in RAG systems and large LLMs, this study demonstrates the potential of LLMs for text enrichment to improve embedding quality. The proposed approach addresses the limitations of embedding models, such as limited vocabulary, lack of context, and grammatical errors, by providing additional context, correcting inaccuracies, normalizing terminology, disambiguating polysemous words, expanding acronyms, and incorporating metadata. Experimental results on the TwitterSemEval 2015 dataset show that the proposed method outperforms the leading model on the Massive Text Embedding Benchmark (MTEB) Leaderboard; hence the embedding quality is improved substantially.
synthetic_cpt
1
Hierarchical_Patch_Selection_An_Improved_Patch_Sampling_for_No_Reference_Image_Quality_Assessment.pdf
arXiv:1901.09689v2 [math.NA] 24 Sep 2019

Isogeometric analysis with C 1 hierarchical functions on planar two-patch geometries

Cesare Bracco (a), Carlotta Giannelli (a), Mario Kapl (b,∗), Rafael Vázquez (c,d)

(a) Dipartimento di Matematica e Informatica “U. Dini”, Università degli Studi di Firenze, Florence, Italy
(b) Johann Radon Institute for Computational and Applied Mathematics, Austrian Academy of Sciences, Linz, Austria
(c) Institute of Mathematics, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
(d) Istituto di Matematica Applicata e Tecnologie Informatiche ‘E. Magenes’ del CNR, Pavia, Italy

∗Corresponding author
Email addresses: cesare.bracco@unifi.it (Cesare Bracco), carlotta.giannelli@unifi.it (Carlotta Giannelli), mario.kapl@ricam.oeaw.ac.at (Mario Kapl), rafael.vazquez@epfl.ch (Rafael Vázquez)
Preprint submitted to Elsevier, September 25, 2019

Abstract
Adaptive isogeometric methods for the solution of partial differential equations rely on the construction of locally refinable spline spaces. A simple and efficient way to obtain these spaces is to apply the multi-level construction of hierarchical splines, that can be used on single-patch domains or in multi-patch domains with C 0 continuity across the patch interfaces. Due to the benefits of higher continuity in isogeometric methods, recent works investigated the construction of spline spaces with global C 1 continuity on two or more patches. In this paper, we show how these approaches can be combined with the hierarchical construction to obtain global C 1 continuous hierarchical splines on two-patch domains. A selection of numerical examples is presented to highlight the features and effectivity of the construction.

Keywords: Isogeometric analysis, Geometric continuity, Two-patch domain, Hierarchical splines, Local refinement
2000 MSC: 65D07, 65D17, 65N30

1. Introduction
Isogeometric Analysis (IgA) is a framework for numerically solving partial differential equations (PDEs), see [2, 12, 26], by using the same (spline) function space for describing the geometry (i.e. the computational domain) and for representing the solution of the considered PDE. One of the strong points of IgA compared to finite elements is the possibility to easily construct C 1 spline spaces, and to use them for solving fourth order PDEs by applying a Galerkin discretization to their variational formulation. Examples of fourth order problems with practical relevance (in the frame of IgA) are e.g. the biharmonic equation [11, 27, 46], the Kirchhoff-Love shells [1, 3, 35, 36] and the Cahn-Hilliard equation [19, 20, 38].

Adaptive isogeometric methods can be developed by combining the IgA framework with spline spaces that have local refinement capabilities. Hierarchical B-splines [37, 51] and truncated hierarchical B-splines [17, 18] are probably the adaptive spline technologies that have been studied more in detail in the adaptive IgA framework [7, 8, 15]. Their multi-level structure makes them easy to implement, with the evaluation of basis functions obtained via a recursive use of the two-level relation due to nestedness of levels [13, 16, 24]. Hierarchical B-splines have been successfully applied for the adaptive discretization of fourth order PDEs, and in particular for phase-field models used in the simulation of brittle fracture [23, 24] or tumor growth [39]. While the construction of C 1 spaces is trivial in a single-patch domain, either using B-splines or hierarchical B-splines, the same is not true for general multi-patch domains.
The construction of C 1 spline spaces over multi-patch domains is based on the concept of geometric continuity [25, 44], which is a well-known framework in computer-aided design (CAD) for the design of smooth multi-patch surfaces. The core idea is to employ the fact that an isogeometric function is C 1-smooth if and only if the associated multi-patch graph surface is G1-smooth [22], i.e., it is geometrically continuous of order 1. In the last few years there has been an increasing effort to provide methods for the construction of C 1 isogeometric spline spaces over general multi-patch domains. The ex- isting methods for planar domains can be roughly classified into two groups depending on the used parameterization for the multi-patch domain. The first approach relies on a multi-patch parameterization which is C 1-smooth everywhere except in the neighbor- hood of extraordinary vertices (i.e. vertices with valencies different to four), where the parameterization is singular, see e.g. [43, 48, 49], or consists of a special construction, see e.g. [33, 34, 42]. The methods [43, 48, 49] use a singular parameterization with patches in the vicinity of an extraordinary vertex, which belong to a specific class of degenerate (B´ezier) patches introduced in [45], and that allow, despite having singularities, the design of globally C 1 isogeometric spaces. The techniques [33, 34, 42] are based on G1 multi-patch surface constructions, where the obtained surface in the neighborhood of an extraordinary vertex consists of patches of slightly higher degree [33, 42] and is generated by means of a particular subdivision scheme [34]. As a special case of the first approach can be seen the constructions in [41, 47], that employ a polar framework to generate C 1 spline spaces. The second approach, on which we will focus, uses a particular class of regular C 0 multi- patch parameterizations, called analysis-suitable G1 multi-patch parameterization [11]. The class of analysis-suitable G1 multi-patch geometries characterizes the regular C 0 multi- patch parameterizations that allow the design of C 1 isogeometric spline spaces with optimal approximation properties, see [11, 29], and includes for instance the subclass of bilinear multi-patch parameterizations [4, 27, 32]. An algorithm for the construction of analysis- suitable G1 parameterizations for complex multi-patch domains was presented in [29]. The main idea of this approach is to analyze the entire space of C 1 isogeometric functions over the given multi-patch geometry to generate a basis of this space or of a suitable subspace. 2 While the methods in [4, 27, 32] are mainly restricted to (mapped) bilinear multi-patch parameterizations, the techniques [5, 28, 30, 31, 40] can also deal with more general multi- patch geometries. An alternative but related approach comprises the constructions [9, 10] for general C 0 multi-patch parameterizations, which increase the degree of the constructed spline functions in the neighborhood of the common interfaces to obtain C 1 isogeometric spaces with good approximation properties. In this work, we extend for the case of two-patch domains the second approach from above to the construction of hierarchical C 1 isogeometric spaces on analysis-suitable G1 geometries, using the abstract framework for the definition of hierarchical splines detailed in [18]. 
We show that the basis functions of the considered C 1 space on analysis-suitable G1 two-patch parameterizations, which is a subspace of the space [28] inspired by [31], satisfy the required properties given in [18], and in particular that the basis functions are locally linearly independent (see Section 3.1 for details). Note that in case of a multi-patch domain, the general framework for the construction of hierarchical splines [18] cannot be used anymore, since the appropriate C 1 basis functions [31] can be locally linearly dependent. Therefore, the development of another approach as [18] would be needed for the multi-patch case, which is beyond the scope of this paper. For the construction of the hierarchical C 1 spline spaces on analysis-suitable G1 two- patch geometries, we also explore the explicit expression for the relation between C 1 basis functions of two consecutive levels, expressing coarse basis functions as linear combinations of fine basis functions. This relation is exploited for the implementation of hierarchical splines as in [16, 24]. A series of numerical tests are presented, that are run with the help of the Matlab/Octave code GeoPDEs [16, 50]. The remainder of the paper is organized as follows. Section 2 recalls the concept of analysis-suitable G1 two-patch geometries and presents the used C 1 isogeometric spline In Section 3, we develop the (theoretical) space over this class of parameterizations. framework to employ this space to construct C 1 hierarchical isogeometric spline spaces, which includes the verification of the nested nature of this kind of spaces, as well as the proof of the local linear independence of the one-level basis functions. Additional details of the C 1 hierarchical construction, such as the refinement masks of the basis functions for the different levels, are discussed in Section 4 with focus on implementation aspects. The generated hierarchical spaces are then used in Section 5 to numerically solve the laplacian and bilaplacian equations on two-patch geometries, where the numerical results demon- strate the potential of our C 1 hierarchical construction for applications in IgA. Finally, the concluding remarks can be found in Section 6. The construction of the non-trivial analysis-suitable G1 two-patch parameterization used in some of the numerical examples is described in detail in Appendix A. For easiness of reading, we include at the end of the paper a list of symbols with the main notation used in this work. 2. C 1 isogeometric spaces on two-patch geometries In this section, we introduce the specific class of two-patch geometries and the C 1 isogeometric spaces which will be used throughout the paper. 3 2.1. Analysis-suitable G1 two-patch geometries We present a particular class of planar two-patch geometries, called analysis-suitable G1 two-patch geometries, which was introduced in [11]. This class is of importance since it comprises exactly those two-patch geometries which are suitable for the construction of C 1 isogeometric spaces with optimal approximation properties, see [11, 29]. The most prominent member is the subclass of bilinear two-patch parameterizations, but it was demonstrated in [29] that the class is much wider and allows the design of generic planar two-patch domains. Let k, p, r ∈ N with degree p ≥ 3 and regularity 1 ≤ r ≤ p − 2. Let us also introduce the ordered set of internal breakpoints T = {τ1, τ2, . . . , τk}, with 0 < τi < τi+1 < 1 for all 1 ≤ i ≤ k. 
We denote by Sr p the univariate spline space in [0, 1] with respect to the open knot vector Ξr p = { 0, . . . , 0 (cid:124) (cid:123)(cid:122) (cid:125) (p+1)−times , τ1, . . . , τ1 (cid:125) (cid:123)(cid:122) (cid:124) (p−r)−times , τ2, . . . , τ2 (cid:125) (cid:123)(cid:122) (cid:124) (p−r)−times , . . . , τk, . . . , τk (cid:125) (cid:123)(cid:122) (cid:124) (p−r)−times , 1, . . . , 1 (cid:124) (cid:123)(cid:122) (cid:125) (p+1)−times }, (1) and let N r i,p, i ∈ I = {0, . . . , p + k(p − r)}, be the associated B-splines. Note that the parameter r specifies the resulting C r-continuity of the spline space Sr p. We will also make use of the subspaces of higher regularity and lower degree, respectively Sr+1 p−1, defined from the same internal breakpoints, and we will use an analogous notation for their basis functions. Furthermore, we denote by n, n0 and n1 the dimensions of the spline spaces Sr p−1, respectively, which are given by p, Sr+1 and Sr and Sr p p n = p + 1 + k(p − r), n0 = p + 1 + k(p − r − 1) and n1 = p + k(p − r − 1), and, analogously to I, we introduce the index sets I0 = {0, . . . , n0 − 1}, I1 = {0, . . . , n1 − 1}, corresponding to basis functions in Sr+1 and Sr p−1, respectively. p Let F(L), F(R) ∈ (Sr p ⊗ Sr p)2 be two regular spline parameterizations, whose images F(L)([0, 1]2) and F(R)([0, 1]2) define the two quadrilateral patches Ω(L) and Ω(R) via F(S)([0, 1]2) = Ω(S), S ∈ {L, R}. The regular, bijective mapping F(S) : [0, 1]2 → Ω(S), S ∈ {L, R}, is called geometry mapping, and possesses a spline representation F(S)(ξ1, ξ2) = (cid:88) (cid:88) i∈I j∈I c(S) i,j N r i,p(ξ1)N r j,p(ξ2), c(S) i,j ∈ R2. We assume that the two patches Ω(L) and Ω(R) form a planar two-patch domain Ω = Ω(L) ∪ Ω(R), which share one whole edge as common interface Γ = Ω(L) ∩ Ω(R). In addition, and without loss of generality, we assume that the common interface Γ is parameterized by F0 : [0, 1] → Γ via F0(ξ2) = F(L)(0, ξ2) = F(R)(0, ξ2), ξ2 ∈ [0, 1], and denote by F the two-patch parameterization (also called two-patch geometry) consisting of the two spline parameterizations F(L) and F(R). 4 Remark 1. For simplicity, we have restricted ourselves to a univariate spline space Sr p with the same knot multiplicity for all inner knots. Instead, a univariate spline space with different inner knot multiplicities can be used, as long as the multiplicity of each inner knot is at least 2 and at most p − 1. Note that the subspaces Sr+1 p−1 should also be replaced by suitable spline spaces of regularity increased by one at each inner knot, and degree reduced by one, respectively. Furthermore, it is also possible to use different univariate spline spaces for both Cartesian directions and for both geometry mappings, with the requirement that both patches must have the same univariate spline space in ξ2-direction. and Sr p The two geometry mappings F(L) and F(R) uniquely determine up to a common func- tion γ : [0, 1] → R (with γ (cid:54)= 0), the functions α(L), α(R), β : [0, 1] → R given by α(S)(ξ2) = γ(ξ2) det (cid:0)∂1F(S)(0, ξ2), ∂2F(S)(0, ξ2)(cid:1) , S ∈ {L, R}, and β(ξ2) = γ(ξ2) det (cid:0)∂1F(L)(0, ξ2), ∂1F(R)(0, ξ2)(cid:1) , satisfying for ξ2 ∈ [0, 1] and α(L)(ξ2)α(R)(ξ2) < 0 α(R)∂1F(L)(0, ξ2) − α(L)(ξ2)∂1F(R)(0, ξ2) + β(ξ2)∂2F(L)(0, ξ2) = 0. In addition, there exist non-unique functions β(L) and β(R) : [0, 1] → R such that β(ξ2) = α(L)(ξ2)β(R)(ξ2) − α(R)(ξ2)β(L)(ξ2), (2) (3) (4) [11, 44]. The two-patch geometry F is called analysis-suitable G1 if there exist see e.g. 
linear functions α(S), β(S), S ∈ {L, R} with α(L) and α(R) relatively prime1 such that equations (2)-(4) are satisfied for ξ2 ∈ [0, 1], see [11, 28]. Note that requiring that α(L) and α(R) are relatively prime is not restrictive: if α(L) and α(R) share a common factor, it is a factor of γ too, thus α(L) and α(R) can be made relatively prime by dividing by such a factor. In the following, we will only consider planar two-patch domains Ω which are described by analysis-suitable G1 two-patch geometries F. Furthermore, we select those linear func- tions α(S) and β(S), S ∈ {L, R}, that minimize the terms ||α(L) + 1||2 L2([0,1]) + ||α(R) − 1||2 L2([0,1]) and see [31]. ||β(L)||2 L2([0,1]) + ||β(R)||2 L2([0,1]), 1Two polynomials are relatively prime if their greatest common divisor has degree zero. 5 2.2. The C 1 isogeometric space V and the subspace W We recall the concept of C 1 isogeometric spaces over analysis-suitable G1 two-patch geometries studied in [11, 28], and especially focus on a specific subspace of the entire space of C 1 isogeometric functions. The space V of C 1 isogeometric spline functions on Ω (with respect to the two-patch geometry F and spline space Sr p) is given by V = {φ ∈ C 1(Ω) : φ ◦ F(S) ∈ Sr p ⊗ Sr p, S ∈ {L, R}}. (5) A function φ : Ω → R belongs to the space V if and only if the functions f (S) = φ ◦ F(S), S ∈ {L, R}, satisfy that f (S) ∈ Sr p ⊗ Sr p, S ∈ {L, R}, and f (L)(0, ξ2) = f (R)(0, ξ2), ξ2 ∈ [0, 1], (6) (7) α(R)(ξ2)∂1f (L)(0, ξ2) − α(L)(ξ2)∂1f (R)(0, ξ2) + β(ξ2)∂2f (L)(0, ξ2) = 0, ξ2 ∈ [0, 1], where the last equation is due to (4) further equivalent to ∂1f (L)(0, ξ2) − β(L)(ξ2)∂2f (L)(0, ξ2) α(L)(ξ2) = ∂1f (R)(0, ξ2) − β(R)(ξ2)∂2f (R)(0, ξ2) α(R)(ξ2) , see e.g. [11, 22, 32]. Therefore, the space V can be also described as ξ2 ∈ [0, 1], (8) V = {φ : Ω → R : f (S) = φ ◦ F(S), S ∈ {L, R}, fulfill the equations (6)-(8)}. (9) Note that the equally valued terms in (8) represent a specific directional derivative of φ across the interface Γ. In fact, recalling that f (S) = φ ◦ F(S) for S ∈ {L, R}, we have ∇φ · (d ◦ F0(ξ2)) = ∇φ · (d(S) ◦ F0(ξ2)) = ∂1f (S)(0, ξ2) − β(S)(ξ2)∂2f (S)(0, ξ2) α(S)(ξ2) , ξ2 ∈ [0, 1], (10) where d is a transversal vector to Γ given by d = d(L) = d(R) with d(S) ◦ F0(ξ2) = 1 (∂1F(S)(0, ξ2), ∂2F(S)(0, ξ2))(1, −β(S)(ξ2))T α(S)(ξ2) , S ∈ {L, R}, see [11, 28]. The structure and the dimension of the space V heavily depends on the functions α(L), α(R) and β, and was fully analyzed in [28] by computing a basis and its dimension for all possible configurations. Below, we restrict ourselves to a simpler subspace W (moti- vated by [31]), which preserves the approximation properties of V, and whose dimension is independent of the functions α(L), α(R) and β. The C 1 isogeometric space W is defined as W = span Φ, Φ = ΦΩ(L) ∪ ΦΩ(R) ∪ ΦΓ0 ∪ ΦΓ1, with ΦΩ(S) = (cid:110) φΩ(S) i,j : i ∈ I \ {0, 1}; j ∈ I (cid:111) , S ∈ {L, R}, (11) 6 where the functions φΩ(S(cid:48)) i and φΓ1 i are defined via ΦΓ0 = (cid:8)φΓ0 , φΓ0 i i,j : i ∈ I0 (cid:9) , ΦΓ1 = (cid:8)φΓ1 i : i ∈ I1 (cid:9) , (12) (cid:16) φΩ(S(cid:48)) i,j ◦F(S)(cid:17) (ξ1, ξ2) = i,p(ξ1)N r j,p(ξ2) (cid:40) N r 0 (cid:16) i ◦ F(S)(cid:17) φΓ0 (ξ1, ξ2) = N r+1 i,p (ξ2) (cid:16) + β(S)(ξ2) if S = S(cid:48), otherwise, i ∈ I\{0, 1}; j ∈ I; S, S(cid:48) ∈ {L, R}, (13) N r 0,p(ξ1) + N r 1,p(ξ1) (cid:17) (cid:16) N r+1 i,p (cid:17)(cid:48) (ξ2) τ1 p N r 1,p(ξ1), i ∈ I0; S ∈ {L, R}, (14) and (cid:16) i ◦ F(S)(cid:17) φΓ1 (ξ1, ξ2) = α(S)(ξ2)N r i,p−1(ξ2)N r 1,p(ξ1), i ∈ I1; S ∈ {L, R}. 
(15) i,j and φΓ1 i The construction of the functions φΩ(S(cid:48)) , φΓ0 guarantees that they are linearly i independent and therefore form a basis of the space W. In addition, the functions fulfill equations (6)-(8) which implies that they are C 1-smooth on Ω, and hence W ⊆ V. Note that the basis functions φΩ(S(cid:48)) are standard tensor-product B-splines whose support is included in one of the two patches, while the functions φΓ0 i are combinations of standard B-splines and their support crosses the interface Γ (see Figure 1 for an example). Moreover, the traces and specific directional derivatives (10) of the functions φΓ0 i and i and φΓ1 i,j φΓ1 i at the interface Γ are equal to φΓ0 i ◦ F0(ξ2) = N r+1 i,p (ξ2), φΓ1 i ◦ F0(ξ2) = 0, and · (d ◦ F0(ξ2)) = 0, ∇φΓ1 i Therefore, the C 1 isogeometric space W can be also characterized as · (d ◦ F0(ξ2)) = N r ∇φΓ0 i i,p−1(ξ2). W = {φ ∈ V : φ ◦ F0(ξ2) ∈ Sr+1 p and ∇φ · (d ◦ F0(ξ2)) ∈ Sr p−1}. (16) 2.3. Representation of the basis with respect to Sr p ⊗ Sr p We describe the strategy shown in [28] to represent the spline functions φΩ(S(cid:48)) ◦ F(S), i,j i ◦ F(S), S ∈ {L, R}, with respect to the spline space Sr i ◦ F(S) and φΓ1 φΓ0 p, using a vectorial notation. Let us first introduce the vectors of functions N0, N1 and N2, given by p ⊗ Sr N0(ξ1, ξ2) = [N r 0,p(ξ1)N r j,p(ξ2)]j∈I, N1(ξ1, ξ2) = [N r 1,p(ξ1)N r j,p(ξ2)]j∈I, and N2(ξ1, ξ2) = [N r 2,p(ξ1)N r 0,p(ξ2), . . . , N r 2,p(ξ1)N r n−1,p(ξ2), . . . , N r n−1,p(ξ1)N r n−1,p(ξ2)]T , 7 (a) (b) (c) (d) (e) Figure 1: Example of basis functions of W on the two-patch domain (a): figures (b)-(c) show two basis functions of type (13) (standard B-splines whose support is included in one of the two patches), while figures (d) and (e) correspond to basis functions of type (14) and (15), respectively (whose supports intersect the interface). which represent the whole basis of Sr p ⊗ Sr p. Let us also introduce, the vectors of functions φΓ0(x) = [φΓ0 i (x)]i∈I0, φΓ1(x) = [φΓ1 i (x)]i∈I1, φΩ(S)(x) = [φΩ(S) i,j (x)]i∈I\{0,1}; j∈I for S ∈ {L, R}, 8 and finally, for S ∈ {L, R}, the vectors of functions (cid:98)φ (S) Γ0 , (cid:98)φ (S) Γ1 , (cid:98)φ (S) Ω(S), given by (S) (cid:98)φ Γ0 (ξ1, ξ2) = [φΓ0 i ◦ F(S)(ξ1, ξ2)]i∈I0, (S) (cid:98)φ Γ1 (ξ1, ξ2) = [φΓ1 i ◦ F(S)(ξ1, ξ2)]i∈I1, (S) Ω(S)(ξ1, ξ2) = [φΩ(S) (cid:98)φ i,j ◦ F(S)(ξ1, ξ2)]i∈I\{0,1}; j∈I. Since the basis functions φΩ(S) i,j are just the “standard” isogeometric functions, the spline functions (cid:98)φ an analysis of the basis functions in (cid:98)φ representation (S) Ω(S)(ξ1, ξ2) automatically belong to the basis of the spline space Sr (S) Γ0 (ξ1, ξ2) and (cid:98)φ p, while (S) Γ1 (ξ1, ξ2), leads to the following p ⊗ Sr     (S) Γ0 (ξ1, ξ2) (cid:98)φ (S) (cid:98)φ Γ1 (ξ1, ξ2) (S) Ω(S)(ξ1, ξ2) (cid:98)φ       = (cid:98)B (cid:101)B(S) 0 B(S) 0 0 0 0 In(n−2)     N0(ξ1, ξ2) N1(ξ1, ξ2) N2(ξ1, ξ2)   , S ∈ {L, R}, (17) where Im denotes the identity matrix of dimension m, and the other blocks of the matrix take the form (cid:98)B = [(cid:98)bi,j]i∈I0,j∈I, (cid:101)B(S) = [(cid:101)b(S) i,j ]i∈I1,j∈I. In fact, these are sparse matrices, and by defining the index sets i,j ]i∈I0,j∈I, and B(S) = [b(S) J0,i = {j ∈ I : supp(N r j,p) ∩ supp(N r+1 i,p ) (cid:54)= ∅}, and J1,i = {j ∈ I : supp(N r j,p) ∩ supp(N r i,p−1) (cid:54)= ∅}, for i ∈ I0, for i ∈ I1, it can be seen that the possible non-zero entries are limited to (cid:98)bi,j, (cid:101)b(S) and b(S) i,j , i ∈ I1, j ∈ J1,i, respectively. 
i,j , i ∈ I0, j ∈ J0,i, For the actual computation of these coefficients, let us denote by ζm, with m ∈ I, the p. Then, for each S ∈ {L, R} and for i,j , j ∈ J1,i, can be obtained i,j , j ∈ J0,i, and b(S) Greville abscissae of the univariate spline space Sr each i ∈ I0 or i ∈ I1, the linear factors (cid:98)bi,j, (cid:101)b(S) by solving the following systems of linear equations i ◦ F(L)(cid:17) φΓ0 (0, ζm) = j,p(ζm), m ∈ J0,i, (cid:98)bi,jN r (cid:88) (cid:16) j∈J0,i τ1∂1 (cid:16) i ◦ F(S)(cid:17) φΓ0 p (0, ζm) (cid:16) i ◦ F(S)(cid:17) φΓ0 + (0, ζm) = (cid:88) j∈J0,i (cid:101)b(S) i,j N r j,p(ζm), m ∈ J0,i, and (cid:16) τ1∂1 i ◦ F(L)(cid:17) φΓ1 p (0, ζm) (cid:88) = j∈J1,i b(S) i,j N r j,p(ζm), m ∈ J1,i, respectively, see [28] for more details. Note that the coefficients (cid:98)bi,j, i ∈ I0, are exactly the spline coefficients of the B-spline N r+1 for the spline representation with respect to the j,p space Sr p, and can also be computed by simple knot insertion. 9 3. C 1 hierarchical isogeometric spaces on two-patch geometries This section introduces an abstract framework for the construction of the hierarchical spline basis, that is defined in terms of a multilevel approach applied to an underlying sequence of spline bases that are locally linearly independent and characterized by local and compact supports. The C 1 hierarchical isogeometric spaces on two-patch geometries are then defined by applying the hierarchical construction to the C 1 isogeometric functions described in the previous section. Particular attention is devoted to the proof of local linear independence of the basis functions, cf. Section 3.2, and to the refinement mask that explicitly identifies a two-scale relation between hierarchical functions of two consecutive levels, cf. Section 4.1. Note that, even if the hierarchical framework can be applied with different refinement strategies between consecutive refinement levels, we here focus on dyadic refinement, the standard choice in most application contexts. In the following the refinement level (cid:96) is denoted as a superscript associated to the corresponding symbol. 3.1. Hierarchical splines: abstract definition Let U0 ⊂ U1 ⊂ . . . ⊂ UN −1 be a sequence of N nested multivariate spline spaces defined on a closed domain D ⊂ Rd, so that any space U(cid:96), for (cid:96) = 0, . . . , N − 1, is spanned by a (finite) basis Ψ(cid:96) satisfying the following properties. (P1) Local linear independence; (P2) Local and compact support. The first property guarantees that for any subdomain S, the restrictions of the (non- vanishing) functions ψ ∈ Ψ(cid:96) to S are linearly independent. The locality of the support instead enables to localize the influence of the basis functions with respect to delimited areas of the domain. Note that the nested nature of the spline spaces implies the existence of a two-scale relation between adjacent bases: for any level (cid:96), each basis function in Ψ(cid:96) can be expressed as linear combination of basis functions in Ψ(cid:96)+1. By also considering a sequence of closed nested domains Ω0 ⊇ Ω1 ⊇ . . . ⊇ ΩN −1, (18) with Ω0 ⊆ D, we can define a hierarchical spline basis according to the following definition. Definition 1. The hierarchical spline basis H with respect to the domain hierarchy (18) is defined as H = (cid:8)ψ ∈ Ψ(cid:96) : supp0ψ ⊆ Ω(cid:96) ∧ supp0ψ (cid:54)⊆ Ω(cid:96)+1(cid:9) , where supp0ψ = supp ψ ∩ Ω0. Note that the basis H = HN −1 can be iteratively constructed as follows. 1. H0 = {ψ ∈ Ψ0 : supp0ψ (cid:54)= ∅}; 10 2. 
for (cid:96) = 0, . . . , N − 2 where H(cid:96)+1 = H(cid:96)+1 A ∪ H(cid:96)+1 B , H(cid:96)+1 A = (cid:8)ψ ∈ H(cid:96) : supp0ψ (cid:54)⊆ Ω(cid:96)+1(cid:9) and H(cid:96)+1 B = (cid:8)ψ ∈ Ψ(cid:96)+1 : supp0ψ ⊆ Ω(cid:96)+1(cid:9) . The main properties of the hierarchical basis can be summarized as follows. Proposition 1. By assuming that properties (P1)-(P2) hold for the bases Ψ(cid:96), the hierar- chical basis satisfies the following properties: (i) the functions in H are linearly independent, (ii) the intermediate spline spaces are nested, namely span H(cid:96) ⊆ span H(cid:96)+1, (iii) given an enlargement of the subdomains ((cid:98)Ω(cid:96))(cid:96)=0,..., (cid:98)N −1, with N ≤ (cid:98)N , such that Ω0 = (cid:98)Ω0 and Ω(cid:96) ⊆ (cid:98)Ω(cid:96), for (cid:96) = 1, . . . , N − 1, then spanH ⊆ span (cid:98)H. Proof. The proof follows along the same lines as in [51] for hierarchical B-splines. Proposition 1 summarizes the key properties of a hierarchical set of basis functions constructed according to Definition 1, when the underlying sequence of bases Ψ(cid:96) satisfies only properties (P1)-(P2). The results in Proposition 1 remain valid when additional assumptions are consid- ered [18]. In particular, if the basis functions in Ψ(cid:96), for (cid:96) = 0, . . . , N − 1 are non-negative, the hierarchical basis functions are also non-negative. Moreover, the partition of unity property in the hierarchical setting can be recovered by considering the truncated basis for hierarchical spline spaces [18]. In this case, the partition of unity property at each level (cid:96) is also required together with the positiveness of the coefficients in the refinement mask. Even if the construction of C 1 functions on two patch geometries considered in the previous section does not satisfy the non-negativity and partition of unity properties, we could still apply the truncation mechanism to reduce the support of coarser basis functions in the C 1 hierarchical basis. Obviously, the resulting truncated basis would not satisfy the other interesting properties of truncated hierarchical B-splines, see [17, 18]. 3.2. The C 1 hierarchical isogeometric space By following the construction for the C 1 isogeometric spline space presented in Sec- tion 2, we can now introduce its hierarchical extension. We recall that instead of consider- ing the full C 1 space V at any hierarchical level, we may restrict to the simpler subspace W, whose dimension does not depend on the functions α(L), α(R) and β, and it has analogous approximation properties as the full space. We consider an initial knot vector Ξr,0 p as defined in (1) for then introducing the p ≡ Ξr sequence of knot vectors with respect to a fixed degree p p , Ξr,1 Ξr,0 p . . . , Ξr,N −1 p , 11 where each knot vector Ξr,(cid:96) p = { 0, . . . , 0 (cid:124) (cid:123)(cid:122) (cid:125) (p+1)−times 1, . . . , τ (cid:96) , τ (cid:96) 1 (cid:125) (cid:123)(cid:122) (cid:124) (p−r)−times 2, . . . , τ (cid:96) , τ (cid:96) 2 (cid:125) (cid:123)(cid:122) (cid:124) (p−r)−times k(cid:96), . . . , τ (cid:96) , . . . , τ (cid:96) k(cid:96) (cid:124) (cid:125) (cid:123)(cid:122) (p−r)−times , 1, . . . , 1 (cid:124) (cid:123)(cid:122) (cid:125) (p+1)−times }, the univariate spline space in [0, 1] with respect to the open knot vector Ξr,(cid:96) for (cid:96) = 1, . . . , N − 1, is obtained via dyadic refinement of the knot vector of the previous level, keeping the same degree and regularity, and therefore k(cid:96) = 2k(cid:96)−1 + 1. 
We denote by Sr,(cid:96) p , and let p N r,(cid:96) i,p , for i ∈ I(cid:96) = {0, . . . , p + k(cid:96)(p − r)}, be the associated B-splines. In addition, as in the one-level case, Sr+1,(cid:96) i,p−1) indicate the subspaces (and their basis functions) of higher regularity and lower degree, respectively. We also denote by p−1 (N r+1,(cid:96) and N r,(cid:96) and Sr,(cid:96) i,p p n(cid:96) = p + 1 + k(cid:96)(p − r), n(cid:96) 0 = p + 1 + k(cid:96)(p − r − 1), and n(cid:96) 1 = p + k(cid:96)(p − r − 1), the dimensions of the spline spaces Sr,(cid:96) I(cid:96), we introduce the index sets p , Sr+1,(cid:96) p 0 = {0, . . . , n(cid:96) I(cid:96) corresponding to functions in Sr+1,(cid:96) p 0 − 1}, and Sr,(cid:96) p−1, respectively. and Sr,(cid:96) p−1, respectively, and, analogously to 1 = {0, . . . , n(cid:96) I(cid:96) 1 − 1}, Let V0 ⊂ V1 ⊂ . . . ⊂ VN −1 be a sequence of nested C 1 isogeometric spline spaces, with V(cid:96) defined on the two-patch domain Ω = Ω(L) ∪ Ω(R) with respect to the spline space of level (cid:96). Analogously to the construction detailed in Section 2.2, for each level 0 ≤ (cid:96) ≤ N − 1 let us consider the subspace W(cid:96) = spanΦ(cid:96), with Φ(cid:96) = Φ(cid:96) Ω(L) ∪ Φ(cid:96) Ω(R) ∪ Φ(cid:96) Γ0 ∪ Φ(cid:96) Γ1, where the basis functions are given by : i ∈ I(cid:96) \ {0, 1}; j ∈ I(cid:96)(cid:111) Φ(cid:96) (cid:110) φΩ(S) i,j Ω(S) = , Φ(cid:96) Γ0 = (cid:8)φΓ0 i : i ∈ I(cid:96) 0 (cid:9) , Φ(cid:96) Γ1 = (cid:8)φΓ1 i : i ∈ I(cid:96) 1 (cid:9) , with S ∈ {L, R}, directly defined as in (11) and (12) for the one-level case. By considering a domain hierarchy as in (18) on the two-patch domain Ω ≡ Ω0, and the sets of isogeometric functions Φ(cid:96) at different levels, we arrive at the following definition. Definition 2. The C 1 hierarchical isogeometric space WH with respect to a domain hier- archy of the two-patch domain Ω, that satisfies (18) with Ω0 = Ω, is defined as WH = span W with W = (cid:8)φ ∈ Φ(cid:96) : supp0φ ⊆ Ω(cid:96) ∧ supp0φ (cid:54)⊆ Ω(cid:96)+1(cid:9) . In the remaining part of this section we want to prove that W is indeed a basis of the C 1 hierarchical isogeometric space WH. This requires to verify the properties for the abstract definition given in Section 3.1, in particular the nestedness of the spaces W(cid:96), and that the one-level C 1 bases spanning each W(cid:96), for (cid:96) = 0, . . . , N − 1, satisfy the hypotheses of Proposition 1, i.e. properties (P1)-(P2). The nestedness of the spaces W(cid:96), (cid:96) = 0, 1, . . . , N −1, easily follows from definition (16), as stated in the following Proposition. 12 Proposition 2. Let N ∈ N. The sequence of spaces W(cid:96), (cid:96) = 0, 1, . . . , N − 1, is nested, i.e. W0 ⊂ W1 ⊂ . . . ⊂ WN −1. Proof. Let (cid:96) = 0, . . . , N − 2, and φ ∈ W(cid:96) ⊂ V(cid:96). By definition (5) the spaces V(cid:96) are nested, hence φ ∈ V(cid:96) ⊂ V(cid:96)+1. Since the spline spaces Sr+1,(cid:96) p−1 are nested, too, we have φ ◦ F0 ∈ Sr+1,(cid:96) p−1 , which implies that φ ∈ W(cid:96)+1. p and ∇φ · (d ◦ F0) ∈ Sr,(cid:96) p−1 ⊂ Sr,(cid:96)+1 ⊂ Sr+1,(cid:96)+1 p and Sr,(cid:96) p The locality and compactness of the support of these functions in (P2) comes directly by construction and by the same property for standard B-splines, see (13)-(15) and Fig- ure 1. The property of local linear independence in (P1) instead is proven in the following Proposition. Proposition 3. The set of basis functions Φ(cid:96) = Φ(cid:96) independent, for (cid:96) = 0, . . . , N − 1. 
Ω(L) ∪ Φ(cid:96) Ω(R) ∪ Φ(cid:96) Γ0 ∪ Φ(cid:96) Γ1, is locally linearly Proof. Since we have to prove the statement for any hierarchical level (cid:96), we just remove the superscript (cid:96) in the proof to simplify the notation. Recall that the functions in Φ are linearly independent. It is well known that the functions in ΦΩ(L) ∪ΦΩ(R) are locally linearly independent, as they are (mapped) standard B-splines. Furthermore, it is also well known, or easy to verify, that each of the following sets of univariate functions is locally linearly independent (a) {N r 0,p + N r 1,p, N r 1,p} ∪ {N r i,p}i∈I\{0,1}, (b) {N r+1 i,p }i∈I0, (c) {N r i,p−1}i∈I1. We prove that the set of functions Φ is locally linearly independent, which means that, for any open set (cid:101)Ω ⊂ Ω the functions of Φ that do not vanish in (cid:101)Ω are linearly independent on (cid:101)Ω. Let (cid:101)I0 ⊂ I0, (cid:101)I1 ⊂ I1 and (cid:101)I(S) j ⊂ I, j ∈ I \ {0, 1}, S ∈ {L, R}, be the sets of indices corresponding to those functions φΓ0 , respectively, that do not vanish on (cid:101)Ω. i Then the equation i and φΩ(S) , φΓ1 j,i µ0,iφΓ0 i (x) + (cid:88) i∈(cid:101)I0 µ1,iφΓ1 i (x) + (cid:88) i∈(cid:101)I1 (cid:88) (cid:88) (cid:88) S∈{L,R} j∈I\{0,1} i∈(cid:101)I(S) j j,i φΩ(S) µ(S) j,i (x) = 0, x ∈ (cid:101)Ω (19) has to imply µ0,i = 0 for all i ∈ (cid:101)I0, µ1,i = 0 for all i ∈ (cid:101)I1, and µ(S) j ∈ I \ {0, 1}, S ∈ {L, R}. Equation (19) implies that j,i = 0 for all i ∈ (cid:101)I(S) j , (cid:88) (cid:16) i ◦ F(S)(cid:17) φΓ0 µ0,i (ξ1, ξ2) + (cid:88) (cid:16) i ◦ F(S)(cid:17) φΓ1 µ1,i (ξ1, ξ2) i∈(cid:101)I0 + (cid:88) (cid:88) (cid:16) µ(S) j,i i∈(cid:101)I1 j,i ◦ F(S)(cid:17) φΩ(S) (ξ1, ξ2) = 0, j∈I\{0,1} i∈(cid:101)I(S) j 13 for (ξ1, ξ2) ∈ (cid:101)Ω(S) and S ∈ {L, R}, where (cid:101)Ω(S) ⊆ (0, 1)2 are the corresponding parameter domains for the geometry mappings F(S) such that the closure of (cid:101)Ω is cl((cid:101)Ω) = cl (cid:17) (cid:16) F(L)((cid:101)Ω(L)) ∪ F(R)((cid:101)Ω(R)) . By substituting the functions φΓ0 expressions, we obtain i ◦ F(S), φΓ1 i ◦ F(S) and φΩ(S) j,i ◦ F(S) by their corresponding (cid:18) (cid:88) µ0,i N r+1 i,p (ξ2) (cid:16) N r 0,p(ξ1) + N r 1,p(ξ1) i∈(cid:101)I0 + (cid:88) i∈(cid:101)I1 µ1,i (cid:0)α(S)(ξ2)N r i,p−1(ξ2)N r 1,p(ξ1)(cid:1) + (cid:17) + β(S)(ξ2) (cid:16) N r+1 i,p (cid:17)(cid:48) (ξ2) τ1 p (cid:19) N r 1,p(ξ1) (cid:88) (cid:88) j∈I\{0,1} i∈(cid:101)I(S) j µ(S) j,i N r j,p(ξ1)N r i,p(ξ2) = 0, for (ξ1, ξ2) ∈ (cid:101)Ω(S) and S ∈ {L, R}, which can be rewritten as (cid:16) (cid:88) (cid:17)(cid:16) (cid:88) (cid:17) N r 0,p(ξ1) + N r 1,p(ξ1) µ0,iN r+1 i,p (ξ2) + N r 1,p(ξ1) (cid:16) τ1 p + N r 1,p(ξ1) (cid:16) (cid:88) i∈(cid:101)I1 i∈(cid:101)I0 µ1,iα(S)(ξ2)N r i,p−1(ξ2) (cid:17) + (cid:88) j∈I\{0,1} N r j,p(ξ1) i∈(cid:101)I0 (cid:16) (cid:88) i∈(cid:101)I(S) j (cid:16) µ0,iβ(S)(ξ2) N r+1 i,p (cid:17)(cid:48) (cid:17) (ξ2) (20) µ(S) j,i N r i,p(ξ2) (cid:17) = 0. Now, since (cid:101)Ω and (cid:101)Ω(S) are open, for each i ∈ (cid:101)I0 there exists a point (ξ(S) S ∈ {L, R}, such that φΓ0 to the fact that the univariate functions N r linearly independent and that N r 2 ) ∈ (cid:101)Ω(S), with i does not vanish in a neighborhood Q ⊂ (cid:101)Ω(S) of the point. Due j,p, j ∈ I \ {0, 1} are locally 1,p and N r 1,p, N r 1 ) (cid:54)= 0, we get that 1 ) + N r 0,p + N r 1,p(ξ(S) 0,p(ξ(S) , ξ(S) 1 µ0,iN r+1 i,p (ξ2) = 0, for ξ2 such that (ξ(S) 1 , ξ2) ∈ Q. 
(cid:88) i∈(cid:101)I0 This equation and the local linear independence of the univariate functions {N r+1 i,p }i∈(cid:101)I0 imply that µ0,i = 0. Applying this argument for all i ∈ (cid:101)I0, we obtain µ0,i = 0, i ∈ (cid:101)I0, and the term (20) simplifies to (cid:16) (cid:88) (cid:16) (cid:88) (cid:88) (cid:17) (cid:17) N r j,p(ξ1) µ(S) j,i N r i,p(ξ2) = 0. (21) N r 1,p(ξ1) µ1,iα(S)(ξ2)N r i,p−1(ξ2) + i∈(cid:101)I1 j∈I\{0,1} i∈(cid:101)I(S) j Similarly, we can obtain for each i ∈ (cid:101)I1 (cid:88) µ1,i α(S)(ξ2)N r i,p−1(ξ2) = 0, for ξ2 such that (ξ(S) 1 , ξ2) ∈ Q, (22) i∈(cid:101)I1 with the corresponding points (ξ(S) , ξ2) ∈ (cid:101)Ω and neighborhoods Q ⊂ (cid:101)Ω. Since the function α(S) is just a linear function which never takes the value zero, see (2), equation (22) implies that 1 (cid:88) µ1,i N r i,p−1(ξ2) = 0, for ξ2 such that (ξ(S) 1 , ξ2) ∈ Q. i∈(cid:101)I1 14 The local linear independence of the univariate functions {N r µ1,i = 0, i ∈ (cid:101)I1, and therefore the term (21) simplifies further to i,p−1}i∈(cid:101)I1 implies as before that (cid:88) N r j,p(ξ1) (cid:16) (cid:88) µ(S) j,i N r i,p(ξ2) (cid:17) = 0. j∈I\{0,1} i∈(cid:101)I(S) j Finally, µ(S) functions in ΦΩ(L) ∪ ΦΩ(R) are locally linearly independent. j,i = 0, i ∈ (cid:101)I(S) j , j ∈ I \ {0, 1}, S ∈ {L, R}, follows directly from the fact that the Finally, we have all what is necessary to prove the main result. Theorem 1. W is a basis for the C 1 hierarchical space WH. Proof. The result holds because the spaces in Definition 2 satisfy the hypotheses in Propo- sition 1. In particular, we have the nestedness of the spaces by Proposition 2, and for the basis functions in Φ(cid:96) the local linear independence (P1) by Proposition 3, and the local and compact support (P2) by their definition in (13)-(15). Remark 2. In contrast to the here considered C 1 basis functions for the case of analysis- suitable G1 two-patch geometries, the analogous C 1 basis functions for the multi-patch case based on [31] are, in general, not locally linearly dependent. Due to the amount of notation needed and to their technicality, we do not report here counterexamples, but what happens, even in some basic domain configurations, is that the basis functions defined in the vicinity of a vertex may be locally linearly dependent. As a consequence, the construction of a hierarchical C 1 space requires a different approach, whose investigation is beyond the scope of the present paper. 4. Refinement mask and implementation In this section we give some details about practical aspects regarding the implementa- tion of isogeometric methods based on the hierarchical space WH. First, we specify the refinement masks, which allow to write the basis functions of Φ(cid:96) as linear combinations of the basis functions of Φ(cid:96)+1. The refinement masks are important, as they are needed, for instance, for knot insertion algorithms and some operators in multilevel preconditioning. Then, we focus on the implementation of the hierarchical space in the open Octave/Matlab software GeoPDEs [50], whose principles can be applied almost identically to any other isogeometric code. The implementation employs the refinement masks for the evaluation of basis functions too. 4.1. Refinement masks Let us recall the notations and assumptions from Section 3.2 for the multi-level setting of the spline spaces W(cid:96), (cid:96) = 0, 1, . . . , N − 1, where the upper index (cid:96) refers to the specific level of refinement. 
We will use the same upper index in an analogous manner for further notations, which have been mainly introduced in Section 2.3 for the one-level case, such 15 (S) as for the vectors of functions N0, N1, N2 and (cid:98)φ Γ0 , (cid:98)φ transformation matrices (cid:98)B, (cid:101)B(S) and B(S), S ∈ {L, R}. (S) Γ1 , (cid:98)φ (S) Ω(S), S ∈ {L, R}, and for the Let R+ be the set of non-negative real numbers. Based on basic properties of B-splines, 0×n(cid:96)+1 + ∈ Rn(cid:96)×n(cid:96)+1 + there exist refinement matrices (refinement masks) Λr,(cid:96)+1 and Λr,(cid:96)+1 , Λr+1,(cid:96)+1 p such that ∈ Rn(cid:96) p−1 ∈ Rn(cid:96) 1×n(cid:96)+1 + p 1 0 [N r,(cid:96) i,p (ξ)]i∈I(cid:96) = Λr,(cid:96)+1 p [N r,(cid:96)+1 i,p (ξ)]i∈I(cid:96)+1, and [N r+1,(cid:96) i,p (ξ)]i∈I(cid:96) 0 = Λr+1,(cid:96)+1 p [N r+1,(cid:96)+1 i,p (ξ)]i∈I(cid:96)+1 0 , [N r,(cid:96) i,p−1(ξ)]i∈I(cid:96) 1 = Λr,(cid:96)+1 p−1 [N r,(cid:96)+1 i,p−1 (ξ)]i∈I(cid:96)+1 1 . These refinement matrices are banded matrices with a small bandwidth. Furthermore, using an analogous notation to Section 2.3 for the vectors of functions, the refinement mask between the tensor-product spaces Sr,(cid:96) is obtained by refining in each parametric direction as a Kronecker product, and can be written in block-matrix form as  p and Sr,(cid:96)+1 p ⊗ Sr,(cid:96) ⊗ Sr,(cid:96)+1 p      p  N(cid:96) N(cid:96) N(cid:96) 0(ξ1, ξ2) 1(ξ1, ξ2) 2(ξ1, ξ2)   = (Λr,(cid:96)+1 p ⊗Λr,(cid:96)+1 p )  N(cid:96)+1 0 N(cid:96)+1 1 N(cid:96)+1 2 (ξ1, ξ2) (ξ1, ξ2) (ξ1.ξ2)  =  Θ(cid:96)+1 00 Θ(cid:96)+1 01 Θ(cid:96)+1 02 11 Θ(cid:96)+1 Θ(cid:96)+1 0 12 Θ(cid:96)+1 0 0 22     . N(cid:96)+1 0 N(cid:96)+1 1 N(cid:96)+1 2 (ξ1, ξ2) (ξ1, ξ2) (ξ1, ξ2) (23) Note that in case of dyadic refinement (as considered in this work), we have Θ(cid:96)+1 02 = 0. Proposition 4. It holds that     φ(cid:96) Γ0(x) φ(cid:96) Γ1(x) φ(cid:96) Ω(L)(x) φ(cid:96) Ω(R)(x)         = p Λr+1,(cid:96)+1 0 0 0 1 (cid:101)B(R),(cid:96)Θ(cid:96)+1 (cid:101)B(L),(cid:96)Θ(cid:96)+1 0 12 12 2Λr,(cid:96)+1 12 B(R),(cid:96)Θ(cid:96)+1 p−1 B(L),(cid:96)Θ(cid:96)+1 12 0 0 Θ(cid:96)+1 22 0 0 Θ(cid:96)+1 22             . φ(cid:96)+1 (x) Γ0 φ(cid:96)+1 (x) Γ1 φ(cid:96)+1 Ω(L)(x) φ(cid:96)+1 Ω(R)(x) (24) Proof. We first show the refinement relation for the functions φ(cid:96) sider the corresponding spline functions (cid:98)φ relation (17) and then relation (23) with the fact that Θ(cid:96)+1 02 = 0, we obtain Γ0. For this, let us con- , S ∈ {L, R}. On the one hand, using first (S),(cid:96) Γ0 (S),(cid:96) Γ0 (cid:98)φ (ξ1, ξ2) = (cid:104) (cid:98)B(cid:96) (cid:101)B(S),(cid:96) 0 (cid:104) = (cid:98)B(cid:96) (cid:101)B(S),(cid:96) 0 (cid:105) which is equal to (cid:105) (cid:2) N(cid:96) 0(ξ1, ξ2) N(cid:96)  00 Θ(cid:96)+1 Θ(cid:96)+1 01 11 Θ(cid:96)+1 Θ(cid:96)+1 0 12 Θ(cid:96)+1 0 0 22  0 1(ξ1, ξ2) N(cid:96)     2(ξ1, ξ2) (cid:3)T  (ξ1, ξ2) (ξ1, ξ2) (ξ1, ξ2) N(cid:96)+1 0 N(cid:96)+1 1 N(cid:96)+1 2  , (cid:104) (cid:98)B(cid:96)Θ(cid:96)+1 00 (cid:98)B(cid:96)Θ(cid:96)+1 01 + (cid:101)B(S),(cid:96)Θ(cid:96)+1 11 (cid:105) (cid:20) N(cid:96)+1 0 N(cid:96)+1 1 (ξ1, ξ2) (ξ1, ξ2) (cid:21) + (cid:101)B(S),(cid:96)Θ(cid:96)+1 12 N(cid:96)+1 2 (ξ1, ξ2). 
(25) 16 On the other hand, the functions (cid:98)φ (S),(cid:96) Γ0 possess the form (S),(cid:96) Γ0 (cid:98)φ (ξ1, ξ2) = (cid:104) N r+1,(cid:96) i,p (cid:105) (ξ2) (cid:16) i∈I(cid:96) 0 N r,(cid:96) 0,p(ξ1)+N r,(cid:96) 1,p(ξ1) (cid:17) β(S)(ξ2) (cid:20)(cid:16) N r+1,(cid:96) i,p + τ (cid:96) 1 p (cid:21) (cid:17)(cid:48) (ξ2) i∈I(cid:96) 0 N r,(cid:96) 1,p(ξ1). By refining the B-spline functions N r+1,(cid:96)+1 (S),(cid:96) Γ0 (cid:98)φ (ξ1, ξ2) = Λr+1,(cid:96)+1 p N r+1,(cid:96)+1 i,p i,p (cid:104) (ξ2), we obtain (cid:16) (cid:105) i∈I(cid:96)+1 0 (ξ2) (cid:20)(cid:16) N r+1,(cid:96)+1 i,p + τ (cid:96) 1 p β(S)(ξ2)Λr+1,(cid:96)+1 p N r,(cid:96) 0,p(ξ1) + N r,(cid:96) (cid:21) (ξ2) (cid:17)(cid:48) i∈I(cid:96)+1 0 (cid:17) 1,p(ξ1) N r,(cid:96) 1,p(ξ1). Then, refining the B-spline functions N r,(cid:96) 1,p(ξ1) and N r,(cid:96) 1,p(ξ1) leads to (S),(cid:96) Γ0 (cid:98)φ (ξ1, ξ2) = Λr+1,(cid:96)+1 p (cid:104) N r+1,(cid:96)+1 i,p (ξ2) 0,j N r,(cid:96)+1 j,p (ξ1) + (cid:80) j∈I(cid:96)+1 λ(cid:96)+1 1,j N r,(cid:96)+1 j,p (cid:17) (ξ1) 0,p(ξ1) + N r,(cid:96) (cid:16) (cid:80) (cid:105) i∈I(cid:96)+1 0 j∈I(cid:96)+1 λ(cid:96)+1 (cid:21) (ξ2) (cid:17)(cid:48) + τ (cid:96) 1 p β(S)(ξ2)Λr+1,(cid:96)+1 p (cid:20)(cid:16) N r+1,(cid:96)+1 i,p (cid:88) 1,j N r,(cid:96)+1 λ(cid:96)+1 j,p (ξ1), i∈I(cid:96)+1 0 j∈I(cid:96)+1 where λ(cid:96)+1 have λ(cid:96)+1 i,j are the entries of the refinement matrix Λr,(cid:96)+1 1 = τ (cid:96) 0,0 = 1, λ(cid:96)+1 2, λ(cid:96)+1 1,0 = 0, λ(cid:96)+1 (cid:18) (cid:16) 0,1 = 1 1,1 = 1 p 2 and τ (cid:96)+1 (cid:105) (ξ2) (ξ1, ξ2) = Λr+1,(cid:96)+1 p (S),(cid:96) Γ0 (cid:98)φ 1 2 , and we get N r,(cid:96)+1 0,p (ξ1) + N r,(cid:96)+1 1,p (cid:17) (ξ1) . Since we refine dyadically, we (cid:104) N r+1,(cid:96)+1 i,p (cid:20)(cid:16) + τ (cid:96)+1 1 p β(S)(ξ2)Λr+1,(cid:96)+1 p N r+1,(cid:96)+1 i,p i∈I(cid:96)+1 0 (cid:17)(cid:48) (cid:21) (ξ2) i∈I(cid:96)+1 0 (cid:19) (ξ1) N r,(cid:96)+1 1,p (cid:18) + Λr+1,(cid:96)+1 p (cid:104) N r+1,(cid:96)+1 i,p (ξ2) (cid:16) (cid:80) (cid:105) i∈I(cid:96)+1 0 β(S)(ξ2)Λr+1,(cid:96)+1 p (cid:20)(cid:16) + τ (cid:96) 1 p (cid:17)(cid:48) N r+1,(cid:96)+1 i,p (ξ2) j∈I(cid:96)+1\{0,1}(λ(cid:96)+1 (cid:21) (cid:88) i∈I(cid:96)+1 0 j∈I(cid:96)+1\{0,1} 0,j + λ(cid:96)+1 1,j )N r,(cid:96)+1 j,p (cid:17) (ξ1) 1,j N r,(cid:96)+1 λ(cid:96)+1 j,p (ξ1) (cid:19) , which is equal to (cid:18) + (cid:104) Λr+1,(cid:96)+1 p (S),(cid:96) Γ0 (cid:98)φ (ξ1, ξ2) = Λr+1,(cid:96)+1 p (S),(cid:96)+1 Γ0 (cid:98)φ (ξ1, ξ2) N r+1,(cid:96)+1 i,p (ξ2) (cid:16) (cid:80) (cid:105) i∈I(cid:96)+1 0 β(S)(ξ2)Λr+1,(cid:96)+1 p (cid:20)(cid:16) + τ (cid:96) 1 p (cid:17)(cid:48) N r+1,(cid:96)+1 i,p (ξ2) j∈I(cid:96)+1\{0,1}(λ(cid:96)+1 (cid:21) (cid:88) i∈I(cid:96)+1 0 j∈I(cid:96)+1\{0,1} 0,j + λ(cid:96)+1 1,j )N r,(cid:96)+1 j,p (cid:17) (ξ1) (26) 1,j N r,(cid:96)+1 λ(cid:96)+1 j,p (ξ1) (cid:19) . By analyzing the two equal value terms (25) and (26) with respect to the spline represen- tation in ξ1-direction formed by the B-splines N r,(cid:96)+1 (ξ1), j ∈ I, one can observe that both first terms and both second terms each must coincide. This leads to j,p (S),(cid:96) Γ0 (cid:98)φ (ξ1, ξ2) = Λr+1,(cid:96)+1 p (S),(cid:96)+1 Γ0 (cid:98)φ (ξ1, ξ2) + (cid:101)B(S),(cid:96)Θ(cid:96)+1 12 N(cid:96)+1 2 (ξ1, ξ2), 17 which directly implies the refinement relation for the functions φ(cid:96) Γ0. The refinement for the functions φ(cid:96) functions (cid:98)φ the fact that Θ(cid:96)+1 (S),(cid:96) Γ1 Γ1 can be proven similarly. 
Considering the spline , S ∈ {L, R}, we get, on the one hand, by using relations (17) and (23) and 02 = 0 (S),(cid:96) Γ1 (cid:98)φ (ξ1, ξ2) = (cid:2) 0 B(S),(cid:96) 0 (cid:3) (cid:2) N(cid:96) 0(ξ1, ξ2) N(cid:96) 00 Θ(cid:96)+1 Θ(cid:96)+1 01 11 Θ(cid:96)+1 Θ(cid:96)+1 0 12 Θ(cid:96)+1 0 0 22 (ξ1, ξ2) + B(S),(cid:96)Θ(cid:96)+1 = (cid:2) 0 B(S),(cid:96) 0 (cid:3) = B(S),(cid:96)Θ(cid:96)+1 2(ξ1, ξ2) (cid:3)T  (ξ1, ξ2) (ξ1, ξ2) (ξ1, ξ2) N(cid:96)+1 0 N(cid:96)+1 1 N(cid:96)+1 2 12 N(cid:96)+1 (ξ1, ξ2). 2 1(ξ1, ξ2) N(cid:96)   0 11 N(cid:96)+1 1      (27) On the other hand, the functions (cid:98)φ (S),(cid:96) Γ1 can be expressed as (S),(cid:96) Γ1 (cid:98)φ (ξ1, ξ2) = α(S)(ξ2) (cid:104) N r,(cid:96) i,p−1(ξ2) (cid:105) i∈I(cid:96) 1 N r,(cid:96) 1,p(ξ1), and after refining the B-spline functions N r,(cid:96) is equal to 1,p(ξ1) and N r,(cid:96) i,p−1(ξ2), i ∈ I(cid:96) 1 we obtain that this (S),(cid:96) Γ1 (cid:98)φ (ξ1, ξ2) = α(S)(ξ2) Λr,(cid:96)+1 p−1 (cid:104) N r,(cid:96)+1 (cid:105) i,p−1 (ξ2) i∈I(cid:96)+1 1 (cid:88) j∈I(cid:96)+1 1,j N r,(cid:96)+1 λ(cid:96)+1 j,p (ξ1), i,j are again the entries of the refinement matrix Λr,(cid:96)+1 p . Recalling that λ(cid:96)+1 1,0 = 0 where λ(cid:96)+1 and λ(cid:96)+1 1,1 = 1 2, we get (S),(cid:96) Γ1 (cid:98)φ (ξ1, ξ2) = α(S)(ξ2) Λr,(cid:96)+1 p−1 (cid:104) N r,(cid:96)+1 i,p−1 (ξ2) (cid:105) i∈I(cid:96)+1 1 (cid:16)1 2 N r,(cid:96)+1 1,p (ξ1) + (cid:88) 1,j N r,(cid:96)+1 λ(cid:96)+1 j,p (cid:17) (ξ1) = 1 2 Λr,(cid:96)+1 p−1 (cid:98)φ (S),(cid:96)+1 Γ1 (ξ1, ξ2) + α(S)(ξ2) Λr,(cid:96)+1 p−1 (cid:104) N r,(cid:96)+1 (cid:105) i,p−1 (ξ2) i∈I(cid:96)+1 1 j∈I(cid:96)+1\{0,1} (cid:88) 1,j N r,(cid:96)+1 λ(cid:96)+1 j,p j∈I(cid:96)+1\{0,1} (ξ1). (28) Considering the two equal value terms (27) and (28), one can argue as for the case of the functions (cid:98)φ implies , that both first terms and both second terms each must coincide. This (S),(cid:96) Γ0 (S),(cid:96) Γ1 (cid:98)φ (ξ1, ξ2) = Λr,(cid:96)+1 p−1 (cid:98)φ (S),(cid:96)+1 Γ1 (ξ1, ξ2) + B(S),(cid:96)Θ(cid:96)+1 12 N(cid:96)+1 2 (ξ1, ξ2), 1 2 which finally shows the refinement relation for the functions φ(cid:96) Γ1. Finally, the relation for the functions φ(cid:96) Ω(S), S ∈ {L, R}, directly follows from rela- tion (23), since they correspond to “standard” B-splines. 18 4.2. Details about the implementation The implementation of GeoPDEs is based on two main structures: the mesh, that contains the information related to the computational geometry and the quadrature, and that did not need any change; and the space, with the necessary information to evaluate the basis functions and their derivatives. The new implementation was done in two steps: we first introduced the space of C 1 basis functions of one single level, as in Section 2.2, and then we added the hierarchical construction. For the space of one level, we created a new space structure that contains the numbering for the basis functions of the three different types, namely ΦΩ(S), ΦΓ0 and ΦΓ1. The evalua- tion of the basis functions, and also matrix assembly, is performed using the representation of C 1 basis functions in terms of standard tensor-product B-splines, as in Section 2.3. 
Indeed, one can first assemble the matrix for tensor-product B-splines, and then multiply this matrix on each side by the same matrix given in (17), in the form

K_W^(S) = B^(S) K_S^(S) (B^(S))^⊤,  with  B^(S) = [ B̂      B̃^(S)   0
                                                    0       B^(S)   0
                                                    0       0       I_{n(n−2)} ],   for S = L, R,

where K_S^(S) represents the stiffness matrix for the standard tensor-product B-spline space on the patch Ω^(S), and K_W^(S) is the contribution to the stiffness matrix for the W space from the same patch. Obviously, the same can be done at the element level, by restricting the matrices to suitable submatrices using the indices of non-vanishing functions on the element.

To implement the hierarchical C^1 splines we construct the same structures and algorithms detailed in [16]. First, it is necessary to complete the space structure of one single level, that we have just described, with some functionality to compute the support of a given basis function, as explained in [16, Section 5.1]. Second, the hierarchical structures are constructed following the description in the same paper, except that for the evaluation of basis functions, and in particular for matrix assembly, we make use of the refinement masks of Section 4.1. The refinement masks essentially give us the two-level relation required by the algorithms in [16], and in particular the matrix C_ℓ^{ℓ+1} of that paper, which is used both during matrix assembly and to compute the refinement matrix after enlargement of the subdomains.

5. Numerical examples

We now present some numerical examples to show the good performance of the hierarchical C^1 spaces for their use in combination with adaptive methods. We consider two different kinds of numerical examples: the first three tests are run for Poisson problems with an automatic adaptive scheme, while in the last numerical test we solve the bilaplacian problem, with a pre-defined refinement scheme.

5.1. Poisson problem

The first three examples are tests on the Poisson equation

−∆u = f in Ω,   u = g on ∂Ω.

The goal is to show that using the C^1 space basis does not spoil the properties of the local refinement. The employed isogeometric algorithm is based on the adaptive loop (see, e.g., [6])

SOLVE −→ ESTIMATE −→ MARK −→ REFINE.

In particular, for the examples we solve the variational formulation of the problem imposing the Dirichlet boundary condition by Nitsche's method, and the problem is to find u ∈ W_H such that

∫_Ω ∇u · ∇v − ∫_{Γ_D} (du/dn) v − ∫_{Γ_D} u (dv/dn) + (γ/h) ∫_{Γ_D} u v = ∫_Ω f v − ∫_{Γ_D} g (dv/dn) + (γ/h) ∫_{Γ_D} g v   for all v ∈ W_H,

where h is the local element size, and the penalization parameter is chosen as γ = 10(p + 1), with p the degree. The error estimate is computed with a residual-based estimator, and the marking of the elements at each iteration is done using Dörfler's strategy (when not stated otherwise, we set the marking parameter equal to 0.75). The refinement step of the loop dyadically refines all the marked elements. Although optimal convergence can only be proved if we refine using a refinement strategy that guarantees that meshes are admissible [7], previous numerical results also show a good behavior of non-admissible meshes [6]. For each of the three examples we report the results for degrees p = (3, 3), (4, 4), with C^1 smoothness across the interface, and with a regularity r equal to degree minus two within the single patches.
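For concreteness, the MARK step of the loop above can be sketched as follows. This is not part of GeoPDEs; it is a minimal NumPy illustration of the greedy Dörfler strategy, assuming the element-wise error indicators have already been computed by the residual-based estimator.

```python
import numpy as np

def doerfler_marking(indicators, theta=0.75):
    """Greedy Doerfler marking: return indices of a (near) minimal set of
    elements whose squared indicators sum to at least theta times the total."""
    order = np.argsort(indicators)[::-1]            # largest indicators first
    cumulative = np.cumsum(indicators[order] ** 2)  # running sum of squared indicators
    n_marked = np.searchsorted(cumulative, theta * cumulative[-1]) + 1
    return order[:n_marked]
```

The returned elements are then refined dyadically in the REFINE step of the loop.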
We compare the results for the adaptive scheme with those obtained by refining uniformly, and also with the ones obtained by employing the same adaptive scheme for hierarchical spaces with C 0 continuity across the interface, while the same regularity within the patches as above is kept. Example 1. For the first numerical example we consider the classical L-shaped domain [−1, 1]2 \ (0, 1) × (−1, 0) defined by two patches as depicted in Figure 2(a), and the right- hand side f and the boundary condition g are chosen such that the exact solution is given by u(ρ, θ) = ρ 4 3 sin (cid:32) (cid:33) 4 3 θ , with ρ and θ the polar coordinates. As it is well known, the exact solution has a singularity at the reentrant corner. We start the adaptive simulation with a coarse mesh of 4×4 elements on each patch, and we use D¨orfler’s parameter equal to 0.90 for the marking of the elements. The convergence It can be seen that the error in H 1 semi-norm and results are presented in Figure 3. the estimator converge with the expected rate, in terms of the degrees of freedom, both for the C 1 and the C 0 discretization, and that this convergence rate is better than the 20 (a) Domain used in the Examples 1 and 4. (b) Domain used in the Examples 2 and 3. Figure 2: The two domains used in the numerical examples. one obtained with uniform refinement. Moreover, the error for the C 1 discretization is slightly lower than the one for the C 0 discretization, although they are very similar. This is in good agreement with what has been traditionally observed for isogeometric methods: the accuracy per degree of freedom is better for higher continuity. In this case, since the continuity only changes near the interface, the difference is very small. 101 100 10−1 10−2 10−3 10−4 p = 3, C1 (error) p = 3, C0 (error) p = 4, C1 (error) p = 4, C0 (error) (estimator) (estimator) (estimator) (estimator) 2 1 1.5 1 101 100 10−1 10−2 10−3 10−4 10−5 102 102.2 102.4 102.6 102.8 103 103.2 10−5 102 NDOF p = 3, adap. (error) p = 3, unif. (error) p = 4, adap. (error) p = 4, unif. (error) (estimator) (estimator) (estimator) (estimator) 2 1 1.5 1 103 NDOF 104 Figure 3: Error in H 1 semi-norm and estimator for Example 1 with p = (3, 3) and p = (4, 4), compared with C 0 case (left) and with global refinement case (right). We also show in Figure 4 the final meshes obtained with the different discretizations. It is clear that the adaptive method correctly refines the mesh in the vicinity of the reentrant corner, where the singularity occurs, and the refinement gets more local with higher degree. Example 2. In the second example the data of the problem are chosen in such a way that 21 (a) p = (3, 3), C 0 functions on the interface: NDOF=1648. (b) p = (3, 3), C 1 functions on the interface: NDOF=1623. (c) p = (4, 4), C 0 functions on the interface: NDOF=833. (d) p = (4, 4), C 1 functions on the interface: NDOF=833. Figure 4: Hierarchical meshes for Example 1, with p = (3, 3) and p = (4, 4). Apparently the meshes are the same for the C 0 and C 1 case, but there are some differences in the finest levels. the exact solution is u(x, y) = (−120x + x2 − 96y − 8xy + 16y2)12/5 cos(πy/20), defined on the domain shown in Figure 2(b). The geometry of the domain is given by two bicubic B´ezier patches, and the control points are chosen following the algorithm in [29], in such a way that the geometry is given by an analysis-suitable G1 parametrization, see Appendix A for details. 
Note that we have chosen the solution such that it has a singularity along the interface. In this example we start the adaptive simulation with a coarse mesh of 8 × 8 elements on each patch. We present the convergence results in Figure 5. As before, both the (relative) error and the estimator converge with optimal rate, and both for the C 0 and the C 1 discretizations, with slightly better result for the C 1 22 spaces. We note that, since the singularity occurs along a line, optimal order of convergence for higher degrees cannot be obtained without anisotropic refinement, as it was observed in the numerical examples in [14, Section 4.6]. 100 10−1 10−2 10−3 10−4 10−5 10−6 p = 3, C1 (error) p = 3, C0 (error) p = 4, C1 (error) p = 4, C0 (error) (estimator) (estimator) (estimator) (estimator) 2 1 103 1.5 1 104 NDOF 100 10−1 10−2 10−3 10−4 10−5 10−6 p = 3, adap. (error) p = 3, unif. (error) p = 4, adap. (error) p = 4, unif. (error) (estimator) (estimator) (estimator) (estimator) 1.5 2 1 103 1 104 NDOF 105 Figure 5: Relative error in H 1 semi-norm and corresponding estimator for Example 2 with p = (3, 3) and p = (4, 4), compared with C 0 case (left) and with global refinement case (right). We also present in Figure 6 the finest meshes obtained with the different discretizations, and it can be observed that the adaptive method correctly refines near the interface, where the singularity occurs. Example 3. We consider the same domain as in the previous example, and the right-hand side and the boundary condition are chosen in such a way that the exact solution is given by u(x, y) = (y − 1.7)12/5 cos(x/4). In this case the solution has a singularity along the line y = 1.7, that crosses the interface and is not aligned with the mesh. The convergence results, that are presented in Figure 7, are very similar to the ones of the previous example, and show optimal convergence rates for both the C 1 and the C 0 discretizations. As before, we also present in Figure 8 the finest meshes obtained with the different discretizations. It is evident that the adaptive algorithm successfully refines along the singularity line. 5.2. Bilaplacian problem In the last example we consider the solution of the bilaplacian problem, given in strong form by    ∆2u = f u = g1 in Ω, on ∂Ω, ∂u ∂n = g2 on ∂Ω. 23 (a) p = (3, 3), C 0 functions on the interface: NDOF=16310 (b) p = (3, 3), C 1 functions on the interface: NDOF=15741 (c) p = (4, 4), C 0 functions on the interface: NDOF=6357 (d) p = (4, 4), C 1 functions on the interface: NDOF=7347 Figure 6: Hierarchical meshes for Example 2, with p = (3, 3) and p = (4, 4). It is well known that the weak formulation of the problem in direct form requires the trial and test functions to be in H 2(Ω). For the discretization with a Galerkin method, this can be obtained if the discrete basis functions are C 1. The solution of the problem with C 0 basis functions, instead, requires to use a mixed variational formulation or some sort of weak enforcement of the C 1 continuity across the interface, like with a Nitsche’s method. Example 4. For the last numerical test we solve the bilaplacian problem in the L-shaped domain as depicted in Figure 2(a). 
The right-hand side and the boundary conditions are chosen in such a way that the exact solution is given, in polar coordinates (ρ, θ), by u(ρ, θ) = ρz+1(C1 F1(θ) − C2 F2(θ)), where value in the exponent is chosen equal to z = 0.544483736782464, which is the smallest positive solution of sin(zω) + z sin(ω) = 0, 24 101 100 10−1 10−2 10−3 p = 3, C1 (error) p = 3, C0 (error) p = 4, C1 (error) p = 4, C0 (error) (estimator) (estimator) (estimator) (estimator) 103 2 1 NDOF 1.5 1 101 100 10−1 10−2 10−3 p = 3, adap. (error) p = 3, unif. (error) p = 4, adap. (error) p = 4, unif. (error) (estimator) (estimator) (estimator) (estimator) 2 1.5 1 1 104 103 104 NDOF 105 Figure 7: Error in H 1 semi-norm and estimator for Example 3 with p = (3, 3) and p = (4, 4), compared with C 0 case (left) and with global refinement case (right). with ω = 3π/2 for the L-shaped domain, see [21, Section 3.4]. The other terms are given by C1 = 1 z − 1 sin (cid:19) (cid:18) 3(z − 1)π 2 (cid:19) − cos (cid:18) 3(z − 1)π 2 − sin 1 z − 1 (cid:18) 3(z + 1)π 2 (cid:18) 3(z + 1)π 2 (cid:19) , (cid:19) , C2 = cos F1(θ) = cos((z − 1)θ) − cos((z + 1)θ), F2(θ) = 1 z − 1 sin((z − 1)θ) − 1 z + 1 sin((z + 1)θ). The exact solution has a singularity at the reentrant corner, and it is the same kind of singularity that one would encounter for the Stokes problem. For our numerical test we start with a coarse mesh of 8 × 8 elements on each patch. In this case, instead of refining the mesh with an adaptive algorithm we decided to refine following a pre-defined strategy: at each refinement step, a region surrounding the reentrant corner, and composed of 4 × 4 elements of the finest level, is marked for refinement, see Figure 9(a). We remark that the implementation of the adaptive algorithm with a residual- based estimator would require computing fourth order derivatives at the quadrature points, and several jump terms across the interface, that is beyond the scope of the present work. In Figure 9(b) we show the error obtained in H 2 semi-norm when computing with C 1 hierarchical splines of degrees 3 and 4 and regularity r equal to degree minus two within the single patches, for the local refinement described above, and with C 1 isogeometric splines of the same degree and inner regularity r with global uniform refinement. It is obvious that the hierarchical spaces perform much better, as we obtain a lower error with many less degrees of freedom. In this case we do not see a big difference between the results 25 (a) p = (3, 3), C 0 functions on the interface: NDOF=8388 (b) p = (3, 3), C 1 functions on the interface: NDOF=8336 (c) p = (4, 4), C 0 functions on the interface: NDOF=6356 (d) p = (4, 4), C 1 functions on the interface: NDOF=6601 Figure 8: Hierarchical meshes for Example 3, with p = (3, 3) and p = (4, 4). obtained for degrees 3 and 4, but this is caused by the fact that we are refining by hand, and the asymptotic regime has not been reached yet. 6. Conclusions We presented the construction of C 1 hierarchical functions on two-patch geometries and their application in isogeometric analysis. After briefly reviewing the characterization of C 1 tensor-product isogeometric spaces, we investigated the properties needed to effectively use these spaces as background machinery for the hierarchical spline model. In particular, the local linear independence of the one-level basis functions and the nested nature of the considered C 1 splines spaces was proved. 
We also introduced an explicit expression of the refinement masks under dyadic refinement, that among other things is useful for the practical implementation of the hierarchical basis functions. The numerical examples show that optimal convergence rates are obtained by the local refinement scheme for second and fourth order problems, even in presence of singular solutions. In future work we plan to 26 101 100 p = 3 local p = 4 local p = 3, unif. p = 4, unif. (a) Refinement of the L-shaped do- main 103 104 NDOF (b) Error in H 2 semi-norm Figure 9: Hierarchical mesh (a) and comparison of the results obtained by local refinement and C 1 space with global refinement (b) on Example 4. generalize the construction to the multi-patch domain setting of [31], but this will require a different strategy with respect to the approach presented in this work since the basis functions of a single level may be locally linearly dependent. 1.5 1 1 2 Acknowledgment. Cesare Bracco, Carlotta Giannelli and Rafael V´azquez are members of the INdAM Research group GNCS. The INdAM support through GNCS and Finanzi- amenti Premiali SUNRISE is gratefully acknowledged. Rafael V´azquez has been partially supported by the ERC Advanced Grant CHANGE, grant number 694515, 2016-2020 Appendix A. Geometry of the curved domain The geometry in Fig.2(a) for the examples in Section 5 is generated by following the algorithm in [29]. This technique is based on solving a quadratic minimization problem with linear side constraints, and constructs from an initial multi-patch geometry (cid:101)F an analysis- suitable G1 multi-patch parameterization F possessing the same boundary, vertices and first derivatives at the vertices as (cid:101)F. In our case, the initial geometry (cid:101)F is given by the two patch parameterization consisting of two quadratic B´ezier patches (cid:101)F(L) and (cid:101)F(R) (i.e. without any internal knots) with the control points (cid:101)c(S) i,j , S ∈ {L, R}, specified in Table A.1. This parameterization is not analysis-suitable G1. Applying the algorithm in [29] (by using Mathematica), we construct an analysis- suitable G1 two-patch geometry F with bicubic B´ezier patches F(L) and F(R). Their control points c(S) i,j , S ∈ {L, R}, are given in Table A.2, where for presenting some of their coordi- nates the notations D = 99170 and C1 = 333939/D, C2 = 47387036/(22.5D), C3 = −15800567/(5D), C4 = 242128576/(67.5D), C5 = 57452423/(45D), C6 = 81952942/(22.5D), 27 (0, 0) (−2, 5/2) (0, 6) (cid:101)c(L) i,j (−3, 1/3) (−13/4, 53/20) (−3, 17/3) (−6, −2) (−5, 2) (−7, 8) (0, 0) (−2, 5/2) (0, 6) i,j (cid:101)c(R) (13/5, 1) (39/20, 3) (3, 5) (6, −1) (4, 11/3) (11/2, 13/2) Table A.1: Control points (cid:101)c(S) zation (cid:101)F. i,j , S ∈ {L, R}, of the initial non-analysis-suitable G1 two-patch parameteri- are used. c(L) i,j (0, 0) (−4/3, 5/3) (−4/3, 11/3) (0, 6) (−2, 2/9) (−127/50, 44/25) (C3, C4) (−2, 52/9) (−4, −4/9) (−98/25, 37/25) (−89/25, 189/50) (−13/3, 58/9) (−6, −2) (−16/3, 2/3) (−17/3, 4) (−7, 8) c(R) i,j (0, 0) (−4/3, 5/3) (−4/3, 11/3) (0, 6) (26/15, 2/3) (C1, C2) (C5, C6) (2, 16/3) (56/15, 1/3) (87/25, 113/50) (29/10, 4) (23/6, 11/2) (6, −1) (14/3, 19/9) (9/2, 83/18) (11/2, 13/2) Table A.2: Control points c(S) tion F. i,j , S ∈ {L, R}, of the resulting analysis-suitable G1 two-patch parameteriza- References [1] F. Auricchio, L. Beir˜ao da Veiga, A. Buffa, C. Lovadina, A. Reali, and G. Sangalli. A fully ”locking-free” isogeometric approach for plane linear elasticity problems: A stream function formulation. 
Comput. Methods Appl. Mech. Engrg., 197(1):160–172, 2007. [2] L. Beir˜ao da Veiga, A. Buffa, G. Sangalli, and R. V´azquez. Mathematical analysis of variational isogeometric methods. Acta Numer., 23:157–287, 5 2014. [3] D. J. Benson, Y. Bazilevs, M.-C. Hsu, and T. J. R. Hughes. A large deformation, rotation-free, isogeometric shell. Comput. Methods Appl. Mech. Engrg., 200(13):1367– 1378, 2011. [4] M. Bercovier and T. Matskewich. Smooth B´ezier Surfaces over Unstructured Quadri- lateral Meshes. Lecture Notes of the Unione Matematica Italiana, Springer, 2017. [5] A. Blidia, B. Mourrain, and N. Villamizar. G1-smooth splines on quad meshes with 4-split macro-patch elements. Comput. Aided Geom. Des., 52-53:106 – 125, 2017. 28 [6] C. Bracco, A. Buffa, C. Giannelli, and R. V´azquez. Adaptive isogeometric methods with hierarchical splines: an overview. Discret. Contin. Dyn. S., 39(1):–, 2019. [7] A. Buffa and C. Giannelli. Adaptive isogeometric methods with hierarchical splines: Error estimator and convergence. Math. Models Methods Appl. Sci., 26:1–25, 2016. [8] A. Buffa and C. Giannelli. Adaptive isogeometric methods with hierarchical splines: Optimality and convergence rates. Math. Models Methods Appl. Sci., 27:2781–2802, 2017. [9] C.L. Chan, C. Anitescu, and T. Rabczuk. Isogeometric analysis with strong multipatch C1-coupling. Comput. Aided Geom. Des., 62:294–310, 2018. [10] C.L. Chan, C. Anitescu, and T. Rabczuk. Strong multipatch C1-coupling for isogeo- metric analysis on 2D and 3D domains. Comput. Methods Appl. Mech. Engrg., 357, 2019. [11] A. Collin, G. Sangalli, and T. Takacs. Analysis-suitable G1 multi-patch parametriza- tions for C1 isogeometric spaces. Comput. Aided Geom. Des., 47:93 – 113, 2016. [12] J. A. Cottrell, T. J. R. Hughes, and Y. Bazilevs. Isogeometric Analysis: Toward Integration of CAD and FEA. John Wiley & Sons, Chichester, England, 2009. [13] D. D’Angella, S. Kollmannsberger, E. Rank, and A. Reali. Multi-level B´ezier extrac- tion for hierarchical local refinement of Isogeometric Analysis. Comput. Methods Appl. Mech. Engrg., 328:147–174, 2018. [14] G. Gantner. Optimal Adaptivity for Splines in Finite and Boundary Element Methods. PhD thesis, Technische Universit¨at Wien, 2017. [15] G. Gantner, D. Haberlik, and D. Praetorius. Adaptive IGAFEM with optimal conver- gence rates: Hierarchical B-splines. Math. Models Methods Appl. Sci., 27:2631–2674, 2017. [16] E. Garau and R. V´azquez. Algorithms for the implementation of adaptive isogeometric methods using hierarchical B-splines. Appl. Numer. Math., 123:58–87, 2018. [17] C. Giannelli, B. J¨uttler, and H. Speleers. THB–splines: the truncated basis for hier- archical splines. Comput. Aided Geom. Des., 29:485–498, 2012. [18] C. Giannelli, B. J¨uttler, and H. Speleers. Strongly stable bases for adaptively refined multilevel spline spaces. Adv. Comp. Math., 40:459–490, 2014. [19] H. G´omez, V. M Calo, Y. Bazilevs, and T. J. R. Hughes. Isogeometric analysis of the Cahn–Hilliard phase-field model. Comput. Methods Appl. Mech. Engrg., 197(49):4333– 4352, 2008. 29 [20] H. Gomez, V. M. Calo, and T. J. R. Hughes. Isogeometric analysis of Phase–Field models: Application to the Cahn–Hilliard equation. In ECCOMAS Multidisciplinary Jubilee Symposium: New Computational Challenges in Materials, Structures, and Flu- ids, pages 1–16. Springer Netherlands, 2009. [21] P. Grisvard. Singularities in boundary value problems, volume 22 of Recherches en Math´ematiques Appliqu´ees [Research in Applied Mathematics]. 
Masson, Paris; Springer-Verlag, Berlin, 1992. [22] D. Groisser and J. Peters. Matched Gk-constructions always yield Ck-continuous isogeometric elements. Comput. Aided Geom. Des., 34:67 – 72, 2015. [23] P. Hennig, M. Ambati, L. De Lorenzis, and M. K¨astner. Projection and transfer oper- ators in adaptive isogeometric analysis with hierarchical B-splines. Comput. Methods Appl. Mech. Engrg., 334:313 – 336, 2018. [24] P. Hennig, S. M¨uller, and M. K¨astner. B´ezier extraction and adaptive refinement of truncated hierarchical NURBS. Comput. Methods Appl. Mech. Engrg., 305:316–339, 2016. [25] J. Hoschek and D. Lasser. Fundamentals of computer aided geometric design. A K Peters Ltd., Wellesley, MA, 1993. [26] T. J. R. Hughes, J. A. Cottrell, and Y. Bazilevs. Isogeometric analysis: CAD, finite elements, NURBS, exact geometry and mesh refinement. Comput. Methods Appl. Mech. Engrg., 194(39-41):4135–4195, 2005. [27] M. Kapl, F. Buchegger, M. Bercovier, and B. J¨uttler. Isogeometric analysis with geo- metrically continuous functions on planar multi-patch geometries. Comput. Methods Appl. Mech. Engrg., 316:209 – 234, 2017. [28] M. Kapl, G. Sangalli, and T. Takacs. Dimension and basis construction for analysis- suitable G1 two-patch parameterizations. Comput. Aided Geom. Des., 52–53:75 – 89, 2017. [29] M. Kapl, G. Sangalli, and T. Takacs. Construction of analysis-suitable G1 planar multi-patch parameterizations. Comput.-Aided Des., 97:41–55, 2018. [30] M. Kapl, G. Sangalli, and T. Takacs. Isogeometric analysis with C 1 functions on unstructured quadrilateral meshes. Technical Report 1812.09088, arXiv.org, 2018. [31] M. Kapl, G. Sangalli, and T. Takacs. An isogeometric C 1 subspace on unstructured multi-patch planar domains. Comput. Aided Geom. Des., 69:55–75, 2019. [32] M. Kapl, V. Vitrih, B. J¨uttler, and K. Birner. Isogeometric analysis with geometrically continuous functions on two-patch geometries. Comput. Math. Appl., 70(7):1518 – 1538, 2015. 30 [33] K. Karˇciauskas, T. Nguyen, and J. Peters. Generalizing bicubic splines for modeling and IGA with irregular layout. Comput.-Aided Des., 70:23 – 35, 2016. [34] K. Karˇciauskas and J. Peters. Refinable bi-quartics for design and analysis. Comput.- Aided Des., pages 204–214, 2018. [35] J. Kiendl, Y. Bazilevs, M.-C. Hsu, R. W¨uchner, and K.-U. Bletzinger. The bending strip method for isogeometric analysis of Kirchhoff-Love shell structures comprised of multiple patches. Comput. Methods Appl. Mech. Engrg., 199(35):2403–2416, 2010. [36] J. Kiendl, K.-U. Bletzinger, J. Linhard, and R. W¨uchner. Isogeometric shell analysis with Kirchhoff-Love elements. Comput. Methods Appl. Mech. Engrg., 198(49):3902– 3914, 2009. [37] R. Kraft. Adaptive and linearly independent multilevel B–splines. In A. Le M´ehaut´e, C. Rabut, and L. L. Schumaker, editors, Surface Fitting and Multiresolution Methods, pages 209–218. Vanderbilt University Press, Nashville, 1997. [38] J. Liu, L. Ded`e, J. A. Evans, M. J. Borden, and T. J. R. Hughes. Isogeometric analysis of the advective Cahn-Hilliard equation: Spinodal decomposition under shear flow. J. Comp. Phys., 242:321 – 350, 2013. [39] G. Lorenzo, M. A. Scott, K. Tew, T. J. R. Hughes, and H. Gomez. Hierarchically refined and coarsened splines for moving interface problems, with particular applica- tion to phase-field models of prostate tumor growth. Comput. Methods Appl. Mech. Engrg., 319:515–548, 2017. [40] B. Mourrain, R. Vidunas, and N. Villamizar. 
Dimension and bases for geometrically continuous splines on surfaces of arbitrary topology. Comput. Aided Geom. Des., 45:108 – 133, 2016. [41] T. Nguyen, K. Karˇciauskas, and J. Peters. A comparative study of several classical, discrete differential and isogeometric methods for solving Poisson’s equation on the disk. Axioms, 3(2):280–299, 2014. [42] T. Nguyen, K. Karˇciauskas, and J. Peters. C 1 finite elements on non-tensor-product 2d and 3d manifolds. Appl. Math. Comput., 272:148 – 158, 2016. [43] T. Nguyen and J. Peters. Refinable C 1 spline elements for irregular quad layout. Comput. Aided Geom. Des., 43:123 – 130, 2016. [44] J. Peters. Geometric continuity. In Handbook of computer aided geometric design, pages 193–227. North-Holland, Amsterdam, 2002. [45] U. Reif. A refinable space of smooth spline surfaces of arbitrary topological genus. J. Approx. Theory, 90(2):174–199, 1997. 31 [46] A. Tagliabue, L. Ded`e, and A. Quarteroni. Isogeometric analysis and error estimates for high order partial differential equations in fluid dynamics. Comput. & Fluids, 102:277 – 303, 2014. [47] D. Toshniwal, H. Speleers, R. Hiemstra, and T. J. R. Hughes. Multi-degree smooth polar splines: A framework for geometric modeling and isogeometric analysis. Comput. Methods Appl. Mech. Engrg., 316:1005–1061, 2017. [48] D. Toshniwal, H. Speleers, and T. J. R. Hughes. Analysis-suitable spline spaces of arbitrary degree on unstructured quadrilateral meshes. Technical Report 16, Institute for Computational Engineering and Sciences (ICES), 2017. [49] D. Toshniwal, H. Speleers, and T. J. R. Hughes. Smooth cubic spline spaces on unstructured quadrilateral meshes with particular emphasis on extraordinary points: Geometric design and isogeometric analysis considerations. Comput. Methods Appl. Mech. Engrg., 327:411–458, 2017. [50] R. V´azquez. A new design for the implementation of isogeometric analysis in Octave and Matlab: GeoPDEs 3.0. Comput. Math. Appl., 72:523–554, 2016. [51] A.-V. Vuong, C. Giannelli, B. J¨uttler, and B. Simeon. A hierarchical approach to adaptive local refinement in isogeometric analysis. Comput. Methods Appl. Mech. Engrg., 200:3554–3567, 2011. 
32 List of symbols Spline space p r Ξr p τi T k Sr p Sr+1 p , Sr p−1 i,p, N r+1 i,p , N r i,p−1 N r n, n0, n1 I, I0, I1 J0,i, J1,i ζm N0, N1, N2 Geometry (S) Ω(S) Ω Γ F(S) F F0 d ξ1, ξ2 c(S) i,j α(S), β(S), β γ C 1 isogeometric space V W Φ ΦΩ(S), ΦΓ0, ΦΓ1 φΩ(S) i,j φΓ0 i φΓ1 i (S) (cid:98)φ Γ0 (S) Ω(S) (S) Γ1 , (cid:98)φ , (cid:98)φ (cid:98)B, (cid:101)B(S), B(S) (cid:98)bi,j, (cid:101)b(S) i,j , b(S) B(S) i,j Spline degree, p ≥ 3 Spline regularity, 1 ≤ r ≤ p − 2 Open knot vector internal breakpoints of knot vector Ξr p Ordered set of internal breakpoints τi Number of different internal breakpoints of knot vector Ξr p Univariate spline space of degree p and regularity r on [0, 1] over knot vector Ξr p Univariate spline spaces of higher regularity and lower degree, re- spectively, defined from same internal breakpoints as Sr p B-splines of spline spaces Sr p, Sr+1 p−1, respectively p Dimensions of spline spaces Sr p, Sr+1 p i,p, N r+1 Index sets of B-splines N r i,p and N r Index subsets of I related to B-splines N r+1 i,p and i ∈ I1, respectively Greville abscissae of spline space Sr Vectors of tensor-product B-splines N r p−1, respectively i,p−1, respectively and N r i,p−1, for i ∈ I0 p, m ∈ I i,pN r j,p and Sr and Sr Upper index referring to specific patch, S ∈ {L, R} Quadrilateral patch Two-patch domain Ω = Ω(L) ∪ Ω(R) Common interface of two-patch domain Ω Geometry mapping of patch Ω(S) Two patch geometry F = (F(L), F(R)) Parameterization of interface Γ Specific transversal vector to Γ Parameter directions of geometry mappings Spline control points of geometry mapping F(S) Gluing functions of two-patch geometry F Scalar function, γ (cid:54)= 0 Space of C 1 isogeometric spline functions on Ω Subspace of V Basis of W Parts of basis Φ, Φ = ΦΩ(L) ∪ ΦΩ(R) ∪ ΦΓ0 ∪ ΦΓ1 Basis functions of ΦΩ(S), i ∈ I \ {0, 1}, j ∈ I Basis functions of ΦΓ0, i ∈ I0 Basis functions of ΦΓ1, i ∈ I1 Vectors of spline functions φΓ0 respectively Transformation matrices Entries of matrices (cid:98)B, (cid:101)B(S) and B Block matrix assembled by the matrices (cid:98)B, (cid:101)B(S), B(S) and the iden- tity matrix In(n−2) i ◦ F(S) and φΩ(S) i ◦ F(S), φΓ1 , respectively ◦ F(S), (S) i,j 33 Hierarchical space , Λr+1,(cid:96)+1 p , Λr,(cid:96)+1 p−1 (cid:96) Λr,(cid:96)+1 p λ(cid:96)+1 i,j Θ(cid:96)+1 ij WH W Upper index referring to specific level Refinement matrices for B-splines N r,(cid:96) Entries of refinement matrix Λr,(cid:96)+1 Block matrices of refinement mask Λr,(cid:96)+1 C 1 hierarchical isogeometric spline space Basis of WH p p i,p , N r+1,(cid:96) i,p and N r,(cid:96) i,p−1, respectively ⊗ Λr,(cid:96)+1 p , 0 ≤ i ≤ j ≤ 2 Most notations in the paragraphs “Spline space” and “C 1 isogeometric space” can be directly extended to the hierarchical setting by adding the upper index (cid:96) to refer to the considered level. 34
synthetic_cpt
3
Active_Learning_Principles_for_In-Context_Learning_with_Large_Language_Models.pdf
7 1 0 2 v o N 5 1 ] G L . s c [ 5 v 6 5 9 7 0 . 2 0 7 1 : v i X r a Generative Adversarial Active Learning Jia-Jie Zhu Max Planck Institute for Intelligent Systems Tübingen, Germany [email protected] Jose Bento Department of Computer Science Boston College Chestnut Hill, Massachusetts, USA [email protected] Abstract We propose a new active learning by query synthesis approach using Generative Adversarial Networks (GAN). Different from regular active learning, the result- ing algorithm adaptively synthesizes training instances for querying to increase learning speed. We generate queries according to the uncertainty principle, but our idea can work with other active learning principles. We report results from various numerical experiments to demonstrate the effectiveness the proposed ap- proach. In some settings, the proposed algorithm outperforms traditional pool- based approaches. To the best our knowledge, this is the first active learning work using GAN. 1 Introduction One of the most exciting machine learning breakthroughs in recent years is the generative adversarial networks (GAN) [20]. It trains a generative model by finding the Nash Equilibrium of a two-player adversarial game. Its ability to generate samples in complex domains enables new possibilities for active learners to synthesize training samples on demand, rather than relying on choosing instances to query from a given pool. In the classification setting, given a pool of unlabeled data samples and a fixed labeling budget, ac- tive learning algorithms typically choose training samples strategically from a pool to maximize the accuracy of trained classifiers. The goal of these algorithms is to reduce label complexity. Such approaches are called pool-based active learning. This pool-based active learning approach is illus- trated in Figure 1 (a). In a nutshell, we propose to use GANs to synthesize informative training instances that are adapted to the current learner. We then ask human oracles to label these instances. The labeled data is added back to the training set to update the learner. This protocol is executed iteratively until the label budget is reached. This process is shown in Figure 1 (b). The main contributions of this work are as follows: • To the best of our knowledge, this is the first active learning framework using deep genera- tive models1. • While we do not claim our method is always superior to the previous active learners in terms of accuracy, in some cases, it yields classification performance not achievable even by a fully supervised learning scheme. With enough capacity from the trained generator, our method allows us to have control over the generated instances which may not be available to the previous active learners. 1The appendix of [37] mentioned three active learning attempts but did not report numerical results. Our approach is also different from those attempts. Learner Learner Training Training Pool x, ? x, y GAN x, y x, ? (a) Pool-based (b) GAAL Figure 1: (a) Pool-based active learning scenario. The learner selects samples for querying from a given unlabeled pool. (b) GAAL algorithm. The learner synthesizes samples for querying using GAN. • We conduct experiments to compare our active learning approach with self-taught learning2. The results are promising. • This is the first work to report numerical results in active learning synthesis for image classification. See [43, 30]. The proposed framework may inspire future GAN applications in active learning. 
• The proposed approach should not be understood as a pool-based active learning method. Instead, it is active learning by query synthesis. We show that our approach can perform competitively when compared against pool-based methods.

2 Related Work

Our work is related to two different subjects: active learning and deep generative models. Active learning algorithms can be categorized into stream-based, pool-based, and learning by query synthesis. Historically, stream-based and pool-based are the two popular scenarios of active learning [43]. Our method falls into the category of query synthesis. Early active learning by query synthesis achieved good results only in simple domains such as X = {0, 1}^3, see [1, 2]. In [30], the authors synthesized learning queries and used human oracles to train a neural network for classifying handwritten characters. However, they reported poor results because the images generated by the learner were sometimes unrecognizable to the human oracles. We report results on similar tasks, such as differentiating 5 versus 7, showing the advancement of our active learning scheme. Figure 2 compares image samples generated by the method in [30] and by our algorithm.

Figure 2: (Left) Image queries synthesized by a neural network for handwritten digit recognition. Source: [30]. (Right) Image queries synthesized by our algorithm, GAAL.

The popular SVMactive algorithm from [45] is an efficient pool-based active learning scheme for SVM. Their scheme is a special instance of the uncertainty sampling principle, which we also employ. [28] reduces the exhaustive scanning through the database employed by SVMactive. Our algorithm shares the same advantage of not needing to test every sample in the database at each iteration of active learning, although we do so by not using a pool at all rather than by a clever indexing trick. [48] proposed active transfer learning, which is reminiscent of our experiments in Section 5.1. However, we do not consider collecting new labeled data in the target domains of transfer learning.

2 See the supplementary document.

There have been some applications of generative models in semi-supervised learning and active learning. Previously, [36] proposed a semi-supervised learning approach to text classification based on generative models. [26] applied Gaussian mixture models to active learning. In that work, the generative model served as a classifier. Compared with these approaches, we apply generative models to directly synthesize training data. This is a more challenging task. One building block of our algorithm is the groundbreaking work of the GAN model in [20]. Our approach is an application of GAN in active learning. Our approach is also related to [44], which studied GAN in a semi-supervised setting. However, our task is active learning, which is different from the semi-supervised learning they discussed. Our work shares a common strength with the self-taught learning algorithm in [39], as both methods use unlabeled data to help with the task. In the supplementary document, we compare our algorithm with a self-taught learning algorithm. In a way, the proposed approach can be viewed as an adversarial training procedure [21], where the classifier is iteratively trained on adversarial examples generated by the algorithm by solving an optimization problem. [21] focuses on adversarial examples generated by perturbing the original data within a small epsilon-ball, whereas we seek to produce examples using an active learning criterion.
To the best of our knowledge, the only previous mention of using GAN for active learning is in the appendix of [37]. The authors discussed therein three attempts to reduce the number of queries. In the third attempt, they generated synthetic samples and sorted them by information content, whereas we adaptively generate new queries by solving an optimization problem. There were no reported active learning numerical results in that work.

3 Background

We briefly introduce some important concepts in active learning and generative adversarial networks.

3.1 Active Learning

In the PAC learning framework [46], label complexity describes the number of labeled instances needed to find a hypothesis with error ǫ. The label complexity of passive supervised learning, i.e., using all the labeled samples as training data, is O(d/ǫ) [47], where d is the VC dimension of the hypothesis class H. Active learning aims to reduce the label complexity by choosing the most informative instances for querying while attaining a low error rate. For example, [24] proved that the active learning algorithm from [10] has the label complexity bound O(θ d log(1/ǫ)), where θ is defined therein as the disagreement coefficient, thus reducing the theoretical bound for the number of labeled instances needed from passive supervised learning. Theoretically speaking, the asymptotic accuracy of an active learning algorithm cannot exceed that of a supervised learning algorithm. In practice, as we will demonstrate in the experiments, our algorithm may be able to achieve higher accuracy than passive supervised learning in some cases.

Stream-based active learning makes decisions on whether or not to query the streamed-in instances. Typical methods include [5, 10, 14]. In this work, we will focus on comparing pool-based and query synthesis methods.

In pool-based active learning, the learner selects unlabeled instances from an existing pool based on a certain criterion. Some pool-based algorithms make selections by using clustering techniques or by maximizing a diversity measure, e.g., [7, 50, 13, 35, 51, 25]. Another commonly used pool-based active learning principle is uncertainty sampling. It amounts to querying the most uncertain instances. For example, the algorithms in [45, 8] query the labels of the instances that are closest to the decision boundary of the support vector machine. Figure 3 (a) illustrates this selection process. Other pool-based works include [27], which proposes a Bayesian active learning by disagreement algorithm in the context of learning user preferences, and [22, 18], which study the submodularity nature of sequential active learning schemes.

Mathematically, let P be the pool of unlabeled instances, and let f(x) = W φ(x) + b be the separating hyperplane, where φ is the feature map induced by the SVM kernel. The SVMactive algorithm in [45] chooses a new instance to query by minimizing the distance (or its proxy) to the hyperplane,

min_{x ∈ P} ‖W φ(x) + b‖.   (1)

This formulation can be justified by the version space theory in separable cases [45] or by other analyses in non-separable cases, e.g., [8, 6]. This simple and effective method is widely applied in many studies, e.g., [17, 49]. In the query synthesis scenario, an instance x is synthesized instead of being selected from an existing pool. Previous methods tend to work in simple low-dimensional domains [2] but fail in more complicated domains such as images [30]. Our approach aims to tackle this challenge.
For an introduction to active learning, readers are referred to [43, 12].

3.2 Generative Adversarial Networks

The generative adversarial network (GAN) is a novel generative model invented by [20]. It can be viewed as the following two-player minimax game between the generator G and the discriminator D,

min_{θ2} max_{θ1} { E_{x ∼ p_data} log D_{θ1}(x) + E_z log(1 − D_{θ1}(G_{θ2}(z))) },   (2)

where p_data is the underlying distribution of the real data and z is a uniformly distributed random variable. D and G each have their own set of parameters, θ1 and θ2. By solving this game, a generator G is obtained. In the ideal scenario, given random input z, we have G(z) ∼ p_data. However, finding this Nash equilibrium is a difficult problem in practice. There is no theoretical guarantee for finding the Nash equilibrium due to the non-convexity of D and G. A gradient descent type algorithm is typically used for solving this optimization problem.

A few variants of GAN have been proposed since [20]. The authors of [38] use GAN with deep convolutional neural network structures for applications in computer vision (DCGAN). DCGAN yields good results and is relatively stable. Conditional GAN [16, 15, 34] is another variant of GAN in which the generator and discriminator can be conditioned on other variables, e.g., the labels of images. Such generators can be controlled to generate samples from a certain category. [9] proposed infoGAN, which learns disentangled representations using unsupervised learning.

A few updated GAN models have been proposed. [41] proposed several improved techniques for training GAN. Another potentially important improvement of GAN, Wasserstein GAN, has been proposed by [3, 23]. The authors proposed an alternative way of training GAN which can avoid instabilities such as mode collapse, together with theoretical analysis. They also proposed a metric to evaluate the quality of the generation which may be useful for future GAN studies. Possible applications of Wasserstein GAN to our active learning framework are left for future work.

The invention of GAN triggered various novel applications. [52] performed the image inpainting task using GAN. [53] proposed iGAN to turn sketches into realistic images. [33] applied GAN to single image super-resolution. [54] proposed CycleGAN for image-to-image translation using only unpaired training data. Our study is the first GAN application to active learning. For a comprehensive review of GAN, readers are referred to [19].

4 Generative Adversarial Active Learning

In this section, we introduce our active learning approach, which we call Generative Adversarial Active Learning (GAAL). It combines query synthesis with the uncertainty sampling principle. The intuition of our approach is to generate instances which the current learner is uncertain about, i.e., to apply the uncertainty sampling principle. One particular choice of loss function is based on the uncertainty sampling principle explained in Section 3.1. In the setting of a classifier with decision function f(x) = W φ(x) + b, the (proxy) distance to the decision boundary is ‖W φ(x) + b‖.
Similar to the intuition of (1), given a trained generator function G, we formulate the active learning synthesis as the following optimization problem,

min_z ‖W^⊤ φ(G(z)) + b‖,   (3)

where z is the latent variable and G is obtained by the GAN algorithm. Intuitively, minimizing this loss will push the generated samples toward the decision boundary.

Algorithm 1 Generative Adversarial Active Learning (GAAL)
1: Train generator G on all unlabeled data by solving (2)
2: Initialize the labeled training dataset S by randomly picking a small fraction of the data to label
3: repeat
4:   Solve optimization problem (3) according to the current learner by descending the gradient ∇_z ‖W^⊤ φ(G(z)) + b‖
5:   Use the solution {z1, z2, . . .} and G to generate instances for querying
6:   Label {G(z1), G(z2), . . .} by human oracles
7:   Add the labeled data to the training dataset S and re-train the learner, updating W, b
8: until the labeling budget is reached

Figure 3 (b) illustrates this idea. Compared with the pool-based active learning in Figure 3 (a), our hope is that it may be able to generate more informative instances than those available in the existing pool.

Figure 3: (a) The SVMactive algorithm selects the instances that are closest to the boundary to query the oracle. (b) The GAAL algorithm synthesizes instances that are informative to the current learner. Synthesized instances may be more informative to the learner than other instances in the existing pool.

The solution(s) to this optimization problem, G(z), after being labeled, will be used as new training data for the next iteration. We outline our procedure in Algorithm 1. It is possible to use a state-of-the-art classifier, such as a convolutional neural network. To do this, we can replace the feature map φ in Equation (3) with the feed-forward function of a convolutional neural network. In that case, the linear SVM becomes the output layer of the network. In step 4 of Algorithm 1, one may also use a different active learning criterion. We emphasize that our contribution is the general framework rather than a specific criterion.

In training the GAN, we follow the procedure detailed in [38]. Optimization problem (3) is non-convex with possibly many local minima. One typically aims at finding good local minima rather than the global minimum. We use a gradient descent algorithm with momentum to solve this problem. We also periodically restart the gradient descent to find other solutions. The gradients of D and G are calculated using back-propagation.

Alternatively, we can incorporate diversity into our active learning principle. Some active learning approaches rely on maximizing diversity measures, such as the Shannon entropy. In our case, we can include in the objective function (3) a diversity measure such as the one proposed in [51, 25], thus increasing the diversity of samples. The evaluation of this alternative approach is left for future work.

5 Experiments

We perform active learning experiments using the proposed approach. We also compare our approach to self-taught learning in the supplementary document. The GAN implementation used in our experiments is a modification of a publicly available TensorFlow DCGAN implementation3. The network architecture of DCGAN is described in [38].

3 https://github.com/carpedm20/DCGAN-tensorflow

In our experiments, we focus on binary image classification, although this can be generalized to multiple classes using a one-vs-one or one-vs-all scheme [29].
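As a concrete illustration of step 4 of Algorithm 1, the following sketch performs gradient descent with momentum on z for problem (3). It is only meant to make the update explicit: the released implementation builds on a TensorFlow DCGAN, whereas this fragment is written in PyTorch, and the generator G, the feature map phi, and the current SVM parameters w, b are assumed to be given as differentiable modules and tensors.

```python
import torch

def synthesize_queries(G, phi, w, b, n_queries=10, latent_dim=100,
                       steps=200, lr=0.05, momentum=0.9):
    """Sketch of step 4 of Algorithm 1: minimize ||w^T phi(G(z)) + b|| over z."""
    z = torch.randn(n_queries, latent_dim, requires_grad=True)
    opt = torch.optim.SGD([z], lr=lr, momentum=momentum)
    for _ in range(steps):
        opt.zero_grad()
        margin = phi(G(z)).flatten(1) @ w + b   # proxy distance to the boundary
        margin.abs().sum().backward()           # push G(z) toward the decision boundary
        opt.step()
    with torch.no_grad():
        return G(z)                             # images to be labeled by the oracle
```

In practice the optimization is restarted periodically from fresh random z, as described above, so that the batch of queries does not collapse to a single local minimum.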
Recent advancements in GAN study show it could potentially model language as well [23]. Although those results are preliminary at the current stage. We use a linear SVM as our classifier of choice (with parameter γ = 0.001). Even though classifiers with much higher accuracy (e.g., convolutional neural networks) can be used, our purpose is not to achieve absolute high accuracy but to study the relative performance between different active learning schemes. The following schemes are implemented and compared in our experiments. • The proposed generative adversarial active learning (GAAL) algorithm as in Algorithm 1. • Using regular GAN to generate training data. We refer to this as simple GAN. • SVMactive algorithm from [45]. • Passive random sampling, which randomly samples instances from the unlabeled pool. • Passive supervised learning, i.e., using all the samples in the pool to train the classifier. • Self-taught learning from [39]. We initialize the training set with 50 randomly selected samples. The algorithms proceed with a batch of 10 queries every time. We use two datasets for training, the MNIST and CIFAR-10. The MNIST dataset is a well-known image classification dataset with 60000 training samples. The training set and the test set follow the same distribution. We perform the binary classification experiment distinguishing 5 and 7 which is reminiscent to [30]. The training set of CIFAR-10 dataset consists of 50000 32 × 32 color images from 10 categories. One might speculate the possibility of distinguishing cats and dogs by training on cat-like dogs or dog-like cats. In practice, our human labelers failed to confidently identify most of the generated cat and dog images. Figure 4 (Top) shows generated samples. The authors of [41] reported attempts to generate high-resolution animal pictures, but with the wrong anatomy. We leave this task for future studies, possibly with improved techniques such as [3, 23]. For this reason, we perform binary classification on the automobile and horse categories. It is relatively easy for human labelers to identity car and horse body shapes. Typical generated samples, which are presented to the human labelers, are shown in Figure 4. Figure 4: Samples generated by GAAL (Top) Generated samples in cat and dog categories. (Bottom Left) MNIST dataset. (Bottom Right) CIFAR-10 dataset. 5.1 Active Learning We use all the images of 5 and 7 from the MNIST training set as our unlabeled pool to train the generator G. Different from traditional active learning, we do not select new samples from the pool after initialization. Instead, we apply Algorithm 1 to generate a training query. For the generator D and G, we follow the same network architecture of [38]. We use linear SVM as our classifier although other classifiers can be used, e.g. [45, 42, 43]. We first test the trained classifier on a test set that follows a distribution different from the training set. One purpose is to demonstrate the adaptive capability of the GAAL algorithm. In addition, because the MNIST test set and training set follow the same distribution, pool-based active learning methods have an natural advantage over active learning by synthesis since they use real images drawn from the exact same distribution as the test set. It is thus reasonable to test on sets that follow different, albeit similar, distributions. To this end, we use the USPS dataset from [32] as the test set with standard preprocessing. 
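For reference, the querying protocol just described (50 random initial labels, then batches of 10 queries) can be sketched as follows. This is only an illustration, not the released code: the query routine, the human oracle, and the preprocessed image arrays are assumed to be given, and the SVM parameters are illustrative.

```python
import numpy as np
from sklearn.svm import SVC

def evaluate_scheme(query_fn, oracle_fn, X_pool, X_test, y_test,
                    n_init=50, batch_size=10, budget=350, seed=0):
    """Outer loop of the experiments (sketch): test accuracy vs. number of labels."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(X_pool), size=n_init, replace=False)
    X_train = list(X_pool[idx])
    y_train = [oracle_fn(x) for x in X_train]
    curve = []
    while True:
        clf = SVC(kernel="linear").fit(np.stack(X_train), y_train)
        curve.append((len(y_train), clf.score(X_test, y_test)))
        if len(y_train) >= budget:
            break
        for x in query_fn(clf, batch_size):   # GAAL synthesis, SVM_active, or random
            X_train.append(x)
            y_train.append(oracle_fn(x))      # human oracle in the paper
    return curve
```

Here X_pool and the generator come from the MNIST 5/7 images, while (X_test, y_test) are the USPS digits, so the training and test distributions deliberately differ.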
In reality, such settings are very common, e.g., training autonomous drivers on simulated datasets and testing on real vehicles, or training on handwritten characters and recognizing writing in different styles. This test setting is related to transfer learning, where the distribution of the training domain Ptr(x, y) is different from that of the target domain Pte(x, y). Figure 5 (Top) shows the results of our first experiment.

[Figure 5: learning curves of classification accuracy versus the number of labeled samples for SVMactive, fully supervised learning, GAAL, simple GAN, and random sampling on the 5 vs. 7 and horse vs. automobile tasks.]

Figure 5: Active learning results. (Top) Train on MNIST, test on USPS. Classifying 5 and 7. The results are averaged over 10 runs. (Bottom Left) Train on MNIST, test on MNIST. Classifying 5 and 7. (Bottom Right) CIFAR-10 dataset, classifying automobile and horse. The results are averaged over 10 runs. The error bars represent the empirical standard deviation of the average values. The figures are best viewed in color.

When using the full training set, with 11000 training images, the fully supervised accuracy is 70.44%. The accuracy of the random sampling scheme steadily approaches that level. On the other hand, GAAL is able to achieve accuracies better than that of the fully supervised scheme. With 350 training samples, its accuracy improves over supervised learning and even SVMactive, an aggressive active learner [11, 45]. Obviously, the accuracy of both SVMactive and random sampling will eventually converge to the fully supervised learning accuracy. Note that for the SVMactive algorithm, an exhaustive scan through the training pool is not always practical. In such cases, the common practice is to restrict the selection pool to a small random subset of the original data.

For completeness, we also perform the experiments in settings where the training and test set follow the same distribution. Figure 5 (Bottom) shows these results. Somewhat surprisingly, in Figure 5 (Bottom Left), GAAL's classification accuracy starts to drop after about 100 samples. One possible explanation is that GAAL may be generating points close to the boundary that are also close to each other. This is more likely to happen if the boundary does not change much from one active learning cycle to the next. This probably happens because the test and train sets are identically distributed and simple, like MNIST. Therefore, after a while, the training set may be filled with many similar points, biasing the classifier and hurting accuracy. In contrast, because of the finite and discrete nature of the pools in the given datasets, a pool-based approach, such as SVMactive, most likely explores points near the boundary that are substantially different. It is also forced to explore further points once these close-by points have already been selected. In a sense, the strength of GAAL might in fact be hurting its classification accuracy.
We believe this effect is not so pronounced when the test and train sets are different because the boundary changes more significantly from one cycle to the next, which in turn induces some diversity in the generated samples. To reach competitive accuracy when the training and test set follow the same distribution, we might incorporate a diversity term into our objective function in GAAL. We will address this in future work.

In the CIFAR-10 dataset, our human labeler noticed higher chances of bad generated samples, e.g., instances that fail to represent either of the categories. This may be because of the significantly higher dimensionality compared to the MNIST dataset. In such cases, we asked the labelers to only label the samples they could distinguish. We speculate that recent improvements on GAN, e.g., [41, 3, 23], may help mitigate this issue, given that its cause is the instability of GANs. Addressing this limitation will be left to future studies.

5.2 Balancing exploitation and exploration

The proposed Algorithm 1 can be understood as an exploitation method, i.e., it focuses on generating the most informative training data based on the current decision boundary. On the other hand, it is often desirable for the algorithm to explore new areas of the data. To achieve this, we modify Algorithm 1 by simply executing random sampling every once in a while. This is a common practice in active learning [4, 40]. We use the same experimental setup as in the previous section. Figure 6 shows the results of this mixed scheme.

[Figure 6: classification accuracy versus the number of labeled samples for GAAL, random sampling, and GAAL + random sampling on the 5 vs. 7 task.]

Figure 6: Active learning results using a mixed scheme. The mixed scheme executes one iteration of random sampling after every five iterations of the GAAL algorithm. Train on MNIST, test on USPS. Classifying 5 and 7. The results are averaged over 10 runs. The error bars represent the empirical standard deviation of the average values. The figure is best viewed in color.

The mixed scheme is able to achieve better performance than either GAAL or random sampling alone. This implies that GAAL, as an exploitation scheme, performs even better in combination with an exploration scheme. A detailed analysis of such mixed schemes will be an interesting future topic.

6 Discussion and Future Work

In this work, we proposed a new active learning approach, GAAL, that employs generative adversarial networks. One possible explanation for GAAL not outperforming the pool-based approaches in some settings is that, in traditional pool-based learning, the algorithm will eventually exhaust all the points near the decision boundary and thus start exploring further points. However, this is not the case in GAAL, as it can always synthesize points near the boundary. This may in turn cause the generation of similar samples, thus reducing the effectiveness. We suspect that incorporating a diversity measure into the GAAL framework, as discussed at the end of Section 4, might mitigate this issue. This issue is related to the exploitation and exploration trade-off, which we explored in brief. The results of this work are enough to inspire future studies of deep generative models in active learning. However, much work remains in establishing theoretical analysis and reaching better performance. We also suspect that GAAL can be modified to generate adversarial examples such as in [21].
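As a concrete reference point for this exploitation and exploration trade-off, the mixed scheme evaluated in Section 5.2 amounts to a simple schedule around the query step of Algorithm 1. The sketch below is only illustrative; the function names are not taken from the released code.

```python
def mixed_query(clf, batch_size, round_idx, gaal_query, random_query, period=5):
    """Mixed scheme of Section 5.2 (sketch): after every `period` GAAL rounds,
    spend one round on random sampling from the unlabeled pool."""
    if (round_idx + 1) % (period + 1) == 0:
        return random_query(batch_size)        # exploration
    return gaal_query(clf, batch_size)         # exploitation near the boundary
```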
The comparison of GAAL with transfer learning (see the supplementary document) is particu- larly interesting and worth further investigation. We also plan to investigate the possibility of using Wasserstein GAN in our framework. References [1] D Angluin. Queries and concept learning. Mach. Learn., 1988. [2] D Angluin. Queries revisited. Int. Conf. Algorithmic Learn., 2001. 8 [3] Martin Arjovsky, Soumith Chintala, and Léon Bottou. Wasserstein GAN. jan 2017. [4] Yoram Baram, Ran El Yaniv, and Kobi Luz. Online choice of active learning algorithms. Journal of Machine Learning Research, 5(Mar):255–291, 2004. [5] Alina Beygelzimer, Sanjoy Dasgupta, and John Langford. Importance Weighted Active Learn- ing. Proc. 26th Annu. Int. Conf. Mach. Learn. ICML 09, abs/0812.4(ii):1–8, 2008. [6] Antoine Bordes, ¸Seyda Ertekin, Jason Weston, and Léon Bottou. Fast Kernel Classifiers with Online and Active Learning. J. Mach. Learn. Res., 6:1579–1619, 2005. [7] Klaus Brinker. Incorporating Diversity in Active Learning with Support Vector Machines. [8] Colin Campbell, Nello Cristianini, and Alex Smola. Query learning with large margin classi- fiers. 17th Int. Conf. Mach. Learn., pages 111–118, 2000. [9] Xi Chen, Yan Duan, Rein Houthooft, John Schulman, Ilya Sutskever, and Pieter Abbeel. In- foGAN: Interpretable Representation Learning by Information Maximizing Generative Adver- sarial Nets. 2016. [10] David Cohn, Les Atlas, and Richard Ladner. Improving generalization with active learning. Mach. Learn., 15(2):201–221, may 1994. [11] Sanjoy Dasgupta. Analysis of a greedy active learning strategy. In Advances in neural infor- mation processing systems, pages 337–344, 2005. [12] Sanjoy Dasgupta. Two faces of active learning. Theor. Comput. Sci., 412:1767–1781, 2011. [13] Sanjoy Dasgupta and Daniel Hsu. Hierarchical sampling for active learning. Proceedings of the 25th international conference on Machine learning - ICML ’08, pages 208–215, 2008. [14] Sanjoy Dasgupta, Daniel Hsu, and Claire Monteleoni. A general agnostic active learning algorithm. Engineering, 20(2):1–14, 2007. [15] Alexey Dosovitskiy, Jost Tobias Springenberg, Maxim Tatarchenko, and Thomas Brox. Learn- arXiv preprint ing to Generate Chairs, Tables and Cars with Convolutional Networks. arXiv:1411.5928, pages 1–14, 2014. [16] Jon Gauthier. Conditional generative adversarial nets for convolutional face generation. Class Project for Stanford CS231N: Convolutional Neural Networks for Visual Recognition, Winter semester 2014, 2014. [17] King-Shy Goh, Edward Y. Chang, and Wei-Cheng Lai. Multimodal concept-dependent active learning for image retrieval. In Proc. 12th Annu. ACM Int. Conf. Multimed. - Multimed. ’04, page 564, New York, New York, USA, 2004. ACM Press. [18] Daniel Golovin and Andreas Krause. Adaptive submodularity: A new approach to active learning and stochastic optimization. In COLT, pages 333–345, 2010. [19] Ian Goodfellow, Yoshua Bengio, and Aaron Courville. Deep Learning. MIT Press, 2016. [20] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil In Advances in Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. neural information processing systems, pages 2672–2680, 2014. [21] Ian J Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adver- sarial examples. arXiv preprint arXiv:1412.6572, 2014. [22] Andrew Guillory and Jeff Bilmes. Interactive submodular set cover. arXiv preprint arXiv:1002.3345, 2010. 
[23] Ishaan Gulrajani, Faruk Ahmed, Martin Arjovsky, Vincent Dumoulin, and Aaron Courville. Improved training of wasserstein gans. arXiv preprint arXiv:1704.00028, 2017. [24] Steve Hanneke. A bound on the label complexity of agnostic active learning. Proc. 24th Int. Conf. Mach. Learn. - ICML ’07, pages 353–360, 2007. [25] Steven C H Hoi, Rong Jin, Jianke Zhu, and Michael R Lyu. Semi-Supervised SVM Batch Mode Active Learning with Applications to Image Retrieval. ACM Trans. Informations Syst. ACM Trans. Inf. Syst. Publ. ACM Trans. Inf. Syst., 27(16):24–26, 2009. [26] Timothy M. Hospedales, Shaogang Gong, and Tao Xiang. Finding rare classes: Active learning with generative and discriminative models. IEEE Trans. Knowl. Data Eng., 25(2):374–386, 2013. 9 [27] Neil Houlsby, Ferenc Huszar, Zoubin Ghahramani, and Jose M Hernández-Lobato. Collabora- tive gaussian processes for preference learning. In Advances in Neural Information Processing Systems, pages 2096–2104, 2012. [28] Prateek Jain, Sudheendrasvnaras Vijayanarasimhan, Kristen Grauman, Prateek Jain, and Kris- ten Grauman. Hashing Hyperplane Queries to Near Points with Applications to Large-Scale Active Learning. IEEE Trans. Pattern Anal. Mach. Intell., 36(2):2010, 2010. [29] A.J. Joshi, F. Porikli, and N. Papanikolopoulos. Multi-class active learning for image classifi- cation. IEEE Conf. Comput. Vis. Pattern Recognit., pages 2372–2379, 2009. [30] Kevin J. Lang and Eric B Baum. Query Learning Can Work Poorly when a Human Oracle is Used, 1992. [31] Quoc V Le, Alexandre Karpenko, Jiquan Ngiam, and Andrew Y Ng. ICA with Reconstruction Cost for Efficient Overcomplete Feature Learning. [32] Y. LeCun, B. Boser, J. S. Denker, D. Henderson, R. E. Howard, W. Hubbard, and L. D. Jackel. Backpropagation Applied to Handwritten Zip Code Recognition, 1989. [33] Christian Ledig, Lucas Theis, Ferenc Huszar, Jose Caballero, Andrew Aitken, Alykhan Tejani, Johannes Totz, Zehan Wang, and Wenzhe Shi. Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network. arXiv, 2016. [34] Mehdi Mirza and Simon Osindero. Conditional Generative Adversarial Nets. CoRR, pages 1–7, nov 2014. [35] Hieu T Nguyen and Arnold Smeulders. Active Learning Using Pre-clustering. [36] Kamal Nigam, Andrew Kachites Mccallum, Sebastian Thrun, and Tom Mitchell. Text Classi- fication from Labeled and Unlabeled Documents using EM. Mach. Learn., 39:103–134, 2000. [37] Nicolas Papernot, Martín Abadi, Úlfar Erlingsson, Ian Goodfellow, and Kunal Talwar. Semi- supervised knowledge transfer for deep learning from private training data. arXiv preprint arXiv:1610.05755, 2016. [38] Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks. nov 2015. [39] Rajat Raina, Alexis Battle, Honglak Lee, Benjamin Packer, and Andrew Y Ng. Self-taught Learning : Transfer Learning from Unlabeled Data. Proc. 24th Int. Conf. Mach. Learn., pages 759–766, 2007. [40] Jens Röder, Boaz Nadler, Kevin Kunzmann, and Fred A Hamprecht. Active learning with distributional estimates. arXiv preprint arXiv:1210.4909, 2012. [41] Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen. Improved Techniques for Training GANs. jun 2016. [42] Andrew I. Schein and Lyle H. Ungar. Active learning for logistic regression: An evaluation, volume 68. 2007. [43] Burr Settles. Active learning literature survey. Computer sciences technical report, 1648:Uni- versity of Wisconsin–Madison, 2010. 
[44] Jost Tobias Springenberg. Unsupervised and Semi-supervised Learning with Categorical Generative Adversarial Networks. arXiv, (2009):1–20, 2015. [45] Simon Tong and Daphne Koller. Support Vector Machine Active Learning with Applications to Text Classification. Proc. Int. Conf. Mach. Learn., 1(June):45–66, 2002. [46] L. G. Valiant. A theory of the learnable. Commun. ACM, 27(11):1134–1142, nov 1984. [47] VN Vapnik and V Vapnik. Statistical learning theory. 1998. [48] Xuezhi Wang, Tzu-Kuo Huang, and Jeff Schneider. Active transfer learning under model shift. In International Conference on Machine Learning, pages 1305–1313, 2014. [49] Manfred K Warmuth, Jun Liao, Gunnar Rätsch, Michael Mathieson, Santosh Putta, and Christian Lemmen. Active Learning with Support Vector Machines in the Drug Discovery Process. 2002. [50] Z Xu, R Akella, and Y Zhang. Incorporating diversity and density in active learning for relevance feedback. European Conference on Information Retrieval, 2007. [51] Yi Yang, Zhigang Ma, Feiping Nie, Xiaojun Chang, and Alexander G Hauptmann. Multi-Class Active Learning by Uncertainty Sampling with Diversity Maximization. Int. J. Comput. Vis., 113(2):113–127, jun 2014. [52] Raymond Yeh, Chen Chen, Teck Yian Lim, Mark Hasegawa-Johnson, and Minh N. Do. Semantic Image Inpainting with Perceptual and Contextual Losses. jul 2016. [53] Jun-Yan Zhu, Philipp Krähenbühl, Eli Shechtman, and Alexei A. Efros. Generative Visual Manipulation on the Natural Image Manifold. pages 597–613. Springer, Cham, 2016. [54] Jun-Yan Zhu, Taesung Park, Phillip Isola, and Alexei A Efros. Unpaired image-to-image translation using cycle-consistent adversarial networks. arXiv preprint arXiv:1703.10593, 2017.

Appendix: Comparison with Self-taught Learning

One common strength of GAAL and self-taught learning [39] is that both utilize the unlabeled data to help with the classification task. As we have seen in the MNIST experiment, our GAAL algorithm seems to be able to adapt to the learner. The results in this experiment are preliminary and not meant to be taken as comprehensive evaluations. In this case, the training domain is mostly unlabeled. Thus the method we compare with is self-taught learning [39]. Similar to the algorithm in [31], we use a Reconstruction Independent Component Analysis (RICA) model with a convolutional layer and a pooling layer. RICA is similar to a sparse autoencoder. Following standard self-taught learning procedures, we first train on the unlabeled pool dataset. Then we use the trained RICA model as a feature extractor to obtain higher-level features from randomly selected MNIST images. We then concatenate the features with the original image data to train the classifier. Finally, we test the trained classifier on the USPS dataset. We test training set sizes of 250, 500, 1000, and 5000. The reason for doing so is that deep learning type techniques are known to thrive in the abundance of training data. They may perform relatively poorly with a limited amount of training data, as in the active learning scenarios. We run the experiments 100 times and average the results. We use the same setting for the GAAL algorithm as in Section 5.1. The classifier we use is a linear SVM. Table 1 shows the classification accuracies of GAAL, self-taught learning and baseline supervised learning on raw image data.
Table 1: Comparison of GAAL and self-taught learning

ALGORITHM      TRAINING SET SIZE   ACCURACY
GAAL           250                 76.42%
SELF-TAUGHT    250                 59.68%
SUPERVISED     250                 67.87%
SELF-TAUGHT    500                 65.53%
SUPERVISED     500                 69.22%
SELF-TAUGHT    1000                71.96%
SUPERVISED     1000                69.58%
SELF-TAUGHT    5000                78.08%
SUPERVISED     5000                72.00%

Using GAAL on the raw features achieves a higher accuracy than that of the self-taught learning with the same training size of 250. In fact, self-taught learning performs worse than the regular supervised learning when labeled data is scarce. This is possible for an autoencoder type algorithm. However, when we increase the training size, the self-taught learning starts to perform better. With 5000 training samples, self-taught learning outperforms GAAL with 250 training samples. Based on these results, we suspect that GAAL also has the potential to be used as a self-taught algorithm4. In practice, the GAAL algorithm can also be applied on top of the features extracted by a self-taught algorithm. A comprehensive comparison with a more advanced self-taught learning method with deeper architecture is beyond the scope of this work.

4At this stage, self-taught learning has the advantage that it can utilize any unlabeled training data, i.e., not necessarily from the categories of interest. GAAL does not have this feature yet.
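As a concrete illustration of the self-taught baseline described above, the sketch below concatenates unsupervised features with the raw pixels and trains a linear SVM on a small labeled MNIST subset before testing on USPS. The `extract_features` callable stands in for the trained RICA encoder and is a placeholder; scikit-learn's LinearSVC is used as an assumed stand-in for the linear SVM, so this is an illustrative sketch rather than the exact experimental code.

```python
# Sketch of the self-taught learning comparison: unsupervised features are
# concatenated with the raw image vectors, a linear SVM is trained on a small
# labeled MNIST subset, and accuracy is measured on USPS (transfer setting).
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.metrics import accuracy_score

def self_taught_baseline(extract_features, X_mnist, y_mnist, X_usps, y_usps,
                         train_size=250, seed=0):
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(X_mnist), size=train_size, replace=False)
    X_tr_raw, y_tr = X_mnist[idx], y_mnist[idx]

    # Concatenate higher-level features with the original (flattened) images.
    X_tr = np.hstack([X_tr_raw, extract_features(X_tr_raw)])
    X_te = np.hstack([X_usps, extract_features(X_usps)])

    clf = LinearSVC().fit(X_tr, y_tr)
    return accuracy_score(y_usps, clf.predict(X_te))
```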
NaturalSpeech_2_Latent_Diffusion_Models_are_Natural_and_Zero-Shot_Speech_and_Singing_Synthesizers.pdf
NaturalSpeech 2: Latent Diffusion Models are Natural and Zero-Shot Speech and Singing Synthesizers

Kai Shen∗, Zeqian Ju∗, Xu Tan∗, Yanqing Liu, Yichong Leng, Lei He, Tao Qin, Sheng Zhao, Jiang Bian
Microsoft Research Asia & Microsoft Azure Speech
https://aka.ms/speechresearch

Abstract

Scaling text-to-speech (TTS) to large-scale, multi-speaker, and in-the-wild datasets is important to capture the diversity in human speech such as speaker identities, prosodies, and styles (e.g., singing). Current large TTS systems usually quantize speech into discrete tokens and use language models to generate these tokens one by one, which suffer from unstable prosody, word skipping/repeating issues, and poor voice quality. In this paper, we develop NaturalSpeech 2, a TTS system that leverages a neural audio codec with residual vector quantizers to get the quantized latent vectors and uses a diffusion model to generate these latent vectors conditioned on text input. To enhance the zero-shot capability that is important to achieve diverse speech synthesis, we design a speech prompting mechanism to facilitate in-context learning in the diffusion model and the duration/pitch predictor. We scale NaturalSpeech 2 to large-scale datasets with 44K hours of speech and singing data and evaluate its voice quality on unseen speakers. NaturalSpeech 2 outperforms previous TTS systems by a large margin in terms of prosody/timbre similarity, robustness, and voice quality in a zero-shot setting, and performs novel zero-shot singing synthesis with only a speech prompt. Audio samples are available at https://speechresearch.github.io/naturalspeech2.

Figure 1: The overview of NaturalSpeech 2, with an audio codec encoder/decoder and a latent diffusion model conditioned on a prior (a phoneme encoder and a duration/pitch predictor). The details of in-context learning in the duration/pitch predictor and diffusion model are shown in Figure 3.

∗The first three authors contributed equally to this work, and their names are listed in random order. Corresponding author: Xu Tan, [email protected]
Preprint. Work in progress.

1 Introduction

Human speech is full of diversity, with different speaker identities (e.g., gender, accent, timbre), prosodies, styles (e.g., speaking, singing), etc. Text-to-speech (TTS) [1, 2] aims to synthesize natural and human-like speech with both good quality and diversity. With the development of neural networks and deep learning, TTS systems [3, 4, 5, 6, 7, 8, 9, 10, 11] have achieved good voice quality in terms of intelligibility and naturalness, and some systems (e.g., NaturalSpeech [11]) even achieve human-level voice quality on single-speaker recording-studio benchmarking datasets (e.g., LJSpeech [12]). Given the great achievements in speech intelligibility and naturalness made by the whole TTS community, we now enter a new era of TTS where speech diversity becomes more and more important in order to synthesize natural and human-like speech. Previous speaker-limited recording-studio datasets are not enough to capture the diverse speaker identities, prosodies, and styles in human speech due to limited data diversity.
Instead, we can train TTS models on a large-scale corpus to learn these diversities, and as a by-product, these trained models can generalize to the unlimited unseen scenarios with few-shot or zero-shot technologies. Current large-scale TTS systems [13, 14, 15] usually quantize the continuous speech waveform into discrete tokens and model these tokens with autoregressive language models. This pipeline suffers from several limitations: 1) The speech (discrete token) sequence is usually very long (a 10s speech usually has thousands of discrete tokens) and the autoregressive models suffer from error propagation and thus unstable speech outputs. 2) There is a dilemma between the codec and language model: on the one hand, the codec with token quantization (VQ-VAE [16, 17] or VQ-GAN [18]) usually has a low-bitrate token sequence, which, although it eases the language model generation, incurs information loss on the high-frequency fine-grained acoustic details; on the other hand, some improving methods [19, 20] use multiple residual discrete tokens to represent a speech frame, which increases the length of the token sequence multiple times if flattened and incurs difficulty in language modeling.

In this paper, we propose NaturalSpeech 2, a TTS system with latent diffusion models to achieve expressive prosody, good robustness, and most importantly strong zero-shot ability for speech synthesis. As shown in Figure 1, we first train a neural audio codec that converts a speech waveform into a sequence of latent vectors with a codec encoder, and reconstructs the speech waveform from these latent vectors with a codec decoder. After training the audio codec, we use the codec encoder to extract the latent vectors from the speech in the training set and use them as the target of the latent diffusion model, which is conditioned on prior vectors obtained from a phoneme encoder, a duration predictor, and a pitch predictor. During inference, we first generate the latent vectors from the text/phoneme sequence using the latent diffusion model and then generate the speech waveform from these latent vectors using the codec decoder.

Table 1: The comparison between NaturalSpeech 2 and previous large-scale TTS systems.

Methods                          Previous Systems [13, 14, 15]      NaturalSpeech 2
Representations                  Discrete Tokens                    Continuous Vectors
Generative Models                Autoregressive Models              Non-Autoregressive/Diffusion
In-Context Learning              Both Text and Speech are Needed    Only Speech is Needed
Stability/Robustness?            ✗                                  ✓
One Acoustic Model?              ✗                                  ✓
Beyond Speech (e.g., Singing)?   ✗                                  ✓

We elaborate on some design choices in NaturalSpeech 2 (shown in Table 1) as follows.

• Continuous vectors instead of discrete tokens. To ensure the speech reconstruction quality of the neural codec, previous works usually quantize speech with multiple residual quantizers. As a result, the obtained discrete token sequence is very long (e.g., if using 8 residual quantizers for each speech frame, the resulting flattened token sequence will be 8 times longer), and puts much pressure on the acoustic model (autoregressive language model). Therefore, we use continuous vectors instead of discrete tokens, which can reduce the sequence length and increase the amount of information for fine-grained speech reconstruction (see Section 3.1).
• Diffusion models instead of autoregressive models.
We leverage diffusion models to learn the complex distributions of continuous vectors in a non-autoregressive manner and avoid error propagation in autoregressive models (see Section 3.2). • Speech prompting mechanisms for in-context learning. To encourage the diffusion models to follow the characteristics in the speech prompt and enhance the zero-shot capability, we design speech prompting mechanisms to facilitate in-context learning in the diffusion model and pitch/duration predictors (see Section 3.3). Benefiting from these designs, NaturalSpeech 2 is more stable and robust than previous autoregressive models, and only needs one acoustic model (the diffusion model) instead of two-stage token prediction as in [21, 13], and can extend the styles beyond speech (e.g., singing voice) due to the duration/pitch prediction and non-autoregressive generation. We scale NaturalSpeech 2 to 400M model parameters and 44K hours of speech data, and generate speech with diverse speaker identities, prosody, and styles (e.g., singing) in zero-shot scenarios (given only a few seconds of speech prompt). Experiment results show that NaturalSpeech 2 can generate natural speech in zero-shot scenarios and outperform the previous strong TTS systems. Specifically, 1) it achieves more similar prosody with both the speech prompt and ground-truth speech; 2) it achieves comparable or better naturalness (in terms of CMOS) than the ground-truth speech on LibriSpeech and VCTK test sets; 3) it can generate singing voices in a novel timbre either with a short singing prompt, or interestingly with only a speech prompt, which unlocks the truly zero-shot singing synthesis (without a singing prompt). Audio samples can be found in https://speechresearch.github.io/naturalspeech2. 2 Background We introduce some background of NaturalSpeech 2, including the journey of text-to-speech synthesis on pursuing natural voice with high quality and diversity, neural audio codec models, and generative models for audio synthesis. 2.1 TTS for Natural Voice: Quality and Diversity Text-to-speech systems [2, 3, 4, 5, 6, 8, 9, 22, 10, 11] aim to generate natural voice with both high quality and diversity. While previous neural TTS systems can synthesize high-quality voice on single-speaker recording-studio datasets (e.g., LJSpeech [12]) and even achieve human-level quality (e.g., NaturalSpeech [11]), they cannot generate diverse speech with different speaker identities, prosodies, and styles, which are critical to ensure the naturalness of the synthesized speech. Thus, some recent works [13, 14, 15] attempt to scale the TTS systems to large-scale, multi-speaker, and in-the-wild datasets to pursue diversity. These systems usually leverage a neural codec to convert speech waveform into discrete token sequence and an autoregressive language model to generate discrete tokens from text, which suffers from a dilemma as shown in Table 2: 1) If the audio codec quantizes each speech frame into a single token with vector-quantizer (VQ) [16, 17, 18], this could ease the token generation in the language model due to short sequence length, but will affect the waveform reconstruction quality due to large compression rate or low bitrate. 2) If the audio codec quantizes each speech frame into multiple tokens with residual vector-quantizer (RVQ) [19, 20], this will ensure high-fidelity waveform reconstruction, but will cause difficulty in autoregressive model generation (error propagation and robust issues) due to the increased length in the token sequence. 
Thus, previous works such as AudioLM [21] leverage two-stage language models to first generate some coarse-grained tokens in each frame and then generate the remaining fine-grained tokens, which are complicated and incur cascaded errors. To avoid the above dilemma, we leverage a neural codec with continuous vectors and a latent diffusion model with non-autoregressive generation.

Table 2: The dilemma in the pipeline of discrete audio codec and autoregressive language model.

The Dilemma in Previous Systems   Waveform Reconstruction (Discrete Audio Codec)   Token Generation (Autoregressive Language Model)
Single Token (VQ)                 Hard                                             Easy
Multiple Tokens (RVQ)             Easy                                             Hard

2.2 Neural Audio Codec

Neural audio codec [23, 24, 19, 20] refers to a kind of neural network model that converts audio waveform into compact representations with a codec encoder and reconstructs audio waveform from these representations with a codec decoder. Since audio codec is traditionally used for audio compression and transmission, the compression rate is a critical metric and thus discrete tokens with low bitrate are usually chosen as the compact representations. For example, SoundStream [19] and Encodec [20] leverage vector-quantized variational auto-encoders (VQ-VAE) with multiple residual vector-quantizers to compress speech into multiple tokens, and have been used as the intermediate representations for speech/audio generation [21, 25, 13, 14, 15]. Although good reconstruction quality and low bitrate can be achieved by residual vector quantizers, they are mainly designed for compression and transmission purposes and may not be suitable to serve as the intermediate representation for speech/audio generation. The discrete token sequence generated by residual quantizers is usually very long (R times longer if R residual quantizers are used), which is difficult for the language models to predict. Inaccurate predictions of discrete tokens will cause word skipping, word repeating, or speech collapse issues when reconstructing speech waveforms from these tokens. In this paper, we design a neural audio codec to convert speech waveform into continuous vectors instead of discrete tokens, which can maintain enough fine-grained details for precise waveform reconstruction without increasing the length of the sequence.

2.3 Generative Models for Speech Synthesis

Different generative models have been applied to speech or audio synthesis, and among these, autoregressive models and diffusion models are the two most prominent methods. Autoregressive models have long been used in speech synthesis for waveform generation [23] or acoustic feature generation [3]. Inspired by the success of autoregressive models in language generation [26, 27, 28], autoregressive models have been applied in speech and audio generation [21, 25, 13, 14, 15]. Meanwhile, diffusion models have also been widely used in speech synthesis for waveform generation [29, 30] and acoustic feature generation [31, 32]. Although both models are based on iterative computation (following the left-to-right process or the denoising process), autoregressive models are more sensitive to sequence length and error propagation, which cause unstable prosody and robustness issues (e.g., word skipping, repeating, and collapse). Considering text-to-speech has a strict monotonic alignment and strong source-target dependency, we leverage diffusion models enhanced with duration prediction and length expansion, which are free from robustness issues.
3 NaturalSpeech 2

In this section, we introduce NaturalSpeech 2, a TTS system for natural and zero-shot voice synthesis with high fidelity/expressiveness/robustness on diverse scenarios (various speaker identities, prosodies, and styles). As shown in Figure 1, NaturalSpeech 2 consists of a neural audio codec (an encoder and a decoder) and a diffusion model with a prior (a phoneme encoder and a duration/pitch predictor). Since speech waveform is complex and high-dimensional, following the paradigm of regeneration learning [33], we first convert speech waveform into latent vectors using the audio codec encoder and reconstruct speech waveform from the latent vectors using the audio codec decoder. Next, we use a diffusion model to predict the latent vectors conditioned on text/phoneme input. We introduce the detailed designs of neural audio codec in Section 3.1 and the latent diffusion model in Section 3.2, as well as the speech prompting mechanism for in-context learning in Section 3.3.

3.1 Neural Audio Codec with Continuous Vectors

Figure 2: The neural audio codec consists of an encoder, a residual vector-quantizer (RVQ), and a decoder. The encoder extracts the frame-level speech representations from the audio waveform, the RVQ leverages multiple codebooks to quantize the frame-level representations, and the decoder takes the quantized vectors as input and reconstructs the audio waveform. The quantized vectors also serve as the training target of the latent diffusion model.

We use a neural audio codec to convert speech waveform into continuous vectors instead of discrete tokens, as analyzed in Section 2.1 and 2.2. Audio codec with continuous vectors enjoys several benefits: 1) Continuous vectors have a lower compression rate and higher bitrate than discrete tokens2, which can ensure high-quality audio reconstruction. 2) Each audio frame only has one vector instead of multiple tokens as in discrete quantization, which will not increase the length of the hidden sequence.

As shown in Figure 2, our neural audio codec consists of an audio encoder, a residual vector-quantizer (RVQ), and an audio decoder: 1) The audio encoder consists of several convolutional blocks with a total downsampling rate of 200 for 16KHz audio, i.e., each frame corresponds to a 12.5ms speech segment. 2) The residual vector-quantizer converts the output of the audio encoder into multiple residual vectors following [19]. The sum of these residual vectors is taken as the quantized vectors, which are used as the training target of the diffusion model. 3) The audio decoder mirrors the structure of the audio encoder, which generates the audio waveform from the quantized vectors. The working flow of the neural audio codec is as follows:

\text{Audio Encoder:}\quad h = f_{\mathrm{enc}}(x),
\text{Residual Vector Quantizer:}\quad \{e^i_j\}_{j=1}^{R} = f_{\mathrm{rvq}}(h^i),\quad z^i = \sum_{j=1}^{R} e^i_j,\quad z = \{z^i\}_{i=1}^{n}, \quad (1)
\text{Audio Decoder:}\quad x = f_{\mathrm{dec}}(z),

where f_enc, f_rvq, and f_dec denote the audio encoder, residual vector quantizer, and audio decoder. x is the speech waveform, h is the hidden sequence obtained by the audio encoder with a frame length of n, and z is the quantized vector sequence with the same length as h. i is the index of the speech frame, j is the index of the residual quantizer and R is the total number of residual quantizers, and e^i_j is the embedding vector of the codebook ID obtained by the j-th residual quantizer on the i-th hidden frame (i.e., h^i). The training of the neural codec follows the loss function in [19].
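As a small illustration of the working flow in Equation (1), the sketch below implements residual vector quantization with plain numpy: each frame-level vector is quantized by R codebooks in sequence, each codebook quantizing the residual left by the previous ones, and the quantized latent is the sum of the selected embeddings. The codebook contents and sizes here are illustrative assumptions, not the trained codec's parameters.

```python
# Residual vector quantization over frame-level encoder outputs (Equation 1).
import numpy as np

def rvq_quantize(h, codebooks):
    """h: (n, d) frame-level encoder outputs; codebooks: list of R (V, d) arrays."""
    residual = h.copy()
    z = np.zeros_like(h)
    ids = []
    for C in codebooks:                                             # j = 1..R quantizers
        d2 = ((residual[:, None, :] - C[None, :, :]) ** 2).sum(-1)  # (n, V) squared L2
        idx = d2.argmin(axis=1)                                     # nearest codebook entry
        e_j = C[idx]                                                # (n, d) selected embeddings
        z += e_j                                                    # z^i = sum_j e^i_j
        residual -= e_j                                             # pass the residual on
        ids.append(idx)
    return z, np.stack(ids, axis=1)                                 # quantized latents, token IDs

# Example with assumed sizes (R=16 codebooks, V=1024 entries, dimension d=256):
rng = np.random.default_rng(0)
codebooks = [rng.normal(size=(1024, 256)) for _ in range(16)]
z, token_ids = rvq_quantize(rng.normal(size=(100, 256)), codebooks)
```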
Actually, to obtain continuous vectors, we do not need vector quantizers, but just an autoencoder or variational autoencoder. However, for regularization and efficiency purposes, we use residual vector quantizers with a very large number of quantizers (R in Figure 2) and codebook tokens (V in Figure 2) to approximate the continuous vectors. By doing this, we have two benefits: 1) When training latent diffusion models, we do not need to store continuous vectors, which are memory-costly. Instead, we just store the codebook embeddings and the quantized token IDs, which are used to derive the continuous vectors using Equation 1. 2) When predicting the continuous vectors, we can add an additional regularization loss on discrete classification based on these quantized token IDs (see Lce−rvq in Section 3.2).

2Since our task is not speech compression but speech synthesis, we do not need a high compression rate or a low bitrate.

3.2 Latent Diffusion Model with Non-Autoregressive Generation

We leverage a diffusion model to predict the quantized latent vector z conditioned on the text sequence y. We leverage a prior model that consists of a phoneme encoder, a duration predictor, and a pitch predictor to process the text input and provide a more informative hidden vector c as the condition of the diffusion model.

Diffusion Formulation We formulate the diffusion (forward) process and denoising (reverse) process as a stochastic differential equation (SDE) [34], respectively. The forward SDE transforms the latent vectors z0 obtained by the neural codec (i.e., z) into Gaussian noises:

dz_t = -\frac{1}{2}\beta_t z_t\, dt + \sqrt{\beta_t}\, dw_t, \quad t \in [0, 1], \quad (2)

where w_t is the standard Brownian motion, t ∈ [0, 1], and β_t is a non-negative noise schedule function. Then the solution is given by:

z_t = e^{-\frac{1}{2}\int_0^t \beta_s\, ds} z_0 + \int_0^t \sqrt{\beta_s}\, e^{-\frac{1}{2}\int_s^t \beta_u\, du}\, dw_s. \quad (3)

By properties of Ito's integral, the conditional distribution of z_t given z_0 is Gaussian: p(z_t|z_0) ∼ N(ρ(z_0, t), Σ_t), where ρ(z_0, t) = e^{-\frac{1}{2}\int_0^t \beta_s ds} z_0 and Σ_t = I − e^{-\int_0^t \beta_s ds}.

The reverse SDE transforms the Gaussian noise back to data z_0 with the following process:

dz_t = -\Big(\frac{1}{2} z_t + \nabla \log p_t(z_t)\Big)\beta_t\, dt + \sqrt{\beta_t}\, d\tilde{w}_t, \quad t \in [0, 1], \quad (4)

where \tilde{w} is the reverse-time Brownian motion. Moreover, we can consider an ordinary differential equation (ODE) [34] in the reverse process:

dz_t = -\frac{1}{2}\big(z_t + \nabla \log p_t(z_t)\big)\beta_t\, dt, \quad t \in [0, 1]. \quad (5)

We can train a neural network s_θ to estimate the score ∇ log p_t(z_t) (the gradient of the log-density of noisy data), and then we can sample data z_0 by starting from Gaussian noise z_1 ∼ N(0, 1) and numerically solving the SDE in Equation 4 or ODE in Equation 5. In our formulation, the neural network s_θ(z_t, t, c) is based on WaveNet [23], which takes the current noisy vector z_t, the time step t, and the condition information c as input, and predicts the data ẑ_0 instead of the score, which we found results in better speech quality. Thus, ẑ_0 = s_θ(z_t, t, c). The loss function to train the diffusion model is as follows.
L_{\mathrm{diff}} = \mathbb{E}_{z_0, t}\big[\|\hat{z}_0 - z_0\|_2^2 + \|\Sigma_t^{-1}(\rho(\hat{z}_0, t) - z_t) - \nabla \log p_t(z_t)\|_2^2 + \lambda_{\mathrm{ce\text{-}rvq}} L_{\mathrm{ce\text{-}rvq}}\big], \quad (6)

where the first term is the data loss, the second term is the score loss, and the predicted score is calculated by Σ_t^{-1}(ρ(ẑ_0, t) − z_t), which is also used for reverse sampling based on Equation 4 or 5 in inference. The third term Lce−rvq is a novel cross-entropy (CE) loss based on the residual vector-quantizer (RVQ). Specifically, for each residual quantizer j ∈ [1, R], we first get the residual vector ẑ_0 − \sum_{i=1}^{j-1} e_i, where e_i is the ground-truth quantized embedding in the i-th residual quantizer (e_i is also introduced in Equation 1). Then we calculate the L2 distance between the residual vector and each codebook embedding in quantizer j and get a probability distribution with a softmax function, and then calculate the cross-entropy loss between the ID of the ground-truth quantized embedding e_j and this probability distribution. Lce−rvq is the mean of the cross-entropy loss over all R residual quantizers, and λce−rvq is set to 0.1 during training.

Prior Model: Phoneme Encoder and Duration/Pitch Predictor The phoneme encoder consists of several Transformer blocks [35, 6], where the standard feed-forward network is modified as a convolutional network to capture the local dependency in the phoneme sequence. Both the duration and pitch predictors share the same model structure with several convolutional blocks but with different model parameters. The ground-truth duration and pitch information is used as the learning target to train the duration and pitch predictors, with an L1 duration loss Ldur and pitch loss Lpitch. During training, the ground-truth duration is used to expand the hidden sequence from the phoneme encoder to obtain the frame-level hidden sequence, and then the ground-truth pitch information is added to the frame-level hidden sequence to get the final condition information c. During inference, the corresponding predicted duration and pitch are used. The total loss function for the diffusion model is as follows:

L = L_{\mathrm{diff}} + L_{\mathrm{dur}} + L_{\mathrm{pitch}}. \quad (7)

Figure 3: The speech prompting mechanism in the duration/pitch predictor and the diffusion model for in-context learning. During training, we use a random segment zu:v of the target speech z as the speech prompt zp and use the diffusion model to only predict z\u:v. During inference, we use a reference speech of a specific speaker as the speech prompt zp. Note that the prompt is the speech latent obtained by the codec encoder instead of the speech waveform.

3.3 Speech Prompting for In-Context Learning

To facilitate in-context learning for better zero-shot generation, we design a speech prompting mechanism to encourage the duration/pitch predictor and the diffusion model to follow the diverse information (e.g., speaker identities) in the speech prompt. For a speech latent sequence z, we randomly cut off a segment zu:v with frame index from u to v as the speech prompt, and concatenate the remaining speech segments z1:u and zv:n to form a new sequence z\u:v as the learning target of the diffusion model. As shown in Figure 3, we use a Transformer-based prompt encoder to process the speech prompt zu:v (zp in the figure) to get a hidden sequence.
To leverage this hidden sequence as the prompt, we have two different strategies for the duration/pitch predictor and the diffusion model: 1) For the duration and pitch predictors, we insert a Q-K-V attention layer in the convolution layer, where the query is the hidden sequence of the convolution layer, and the key and value are the hidden sequence from the prompt encoder. 2) For the diffusion model, instead of directly attending to the hidden sequence from the prompt encoder, which exposes too many details to the diffusion model and may harm the generation, we design two attention blocks: in the first attention block, we use m randomly initialized embeddings as the query sequence to attend to the prompt hidden sequence, and get a hidden sequence with a length of m as the attention results [36, 37, 38]; in the second attention block, we leverage the hidden sequence in the WaveNet layer as the query and the m-length attention results as the key and value. We use the attention results of the second attention block as the conditional information of a FiLM layer [39] to perform an affine transform on the hidden sequence of the WaveNet in the diffusion model. Please refer to Appendix B for the details of the WaveNet architecture used in the diffusion model.

3.4 Connection to NaturalSpeech

NaturalSpeech 2 is an advanced edition of the NaturalSpeech Series [11, 40]. Compared to its previous version NaturalSpeech [11], NaturalSpeech 2 has the following connections and distinctions. First, goal. While both NaturalSpeech 1 and 2 aim at synthesizing natural voices (with good speech quality and diversity), their focuses are different. NaturalSpeech focuses on speech quality by synthesizing voices that are on par with human recordings and only tackling single-speaker recording-studio datasets (e.g., LJSpeech). NaturalSpeech 2 focuses on speech diversity by exploring the zero-shot synthesis ability based on large-scale, multi-speaker, and in-the-wild datasets. Second, architecture. NaturalSpeech 2 keeps the basic components in NaturalSpeech, such as the encoder and decoder for waveform reconstruction, and the prior module (phoneme encoder, duration/pitch predictor). However, it leverages 1) a diffusion model to increase the modeling power to capture the complicated and diverse data distribution in large-scale speech datasets, 2) a residual vector quantizer to regularize the latent vectors to trade off the reconstruction quality and prediction difficulty, and 3) a speech prompting mechanism to enable zero-shot ability that is not covered in single-speaker synthesis systems.

4 Experimental Settings

In this section, we introduce the experimental settings to train and evaluate NaturalSpeech 2, including the dataset, model configuration, baselines for comparison, training and inference, and evaluation metrics.

4.1 Datasets

Training Dataset To train the neural audio codec and the diffusion model, we use the English subset of Multilingual LibriSpeech (MLS) [41] as the training data, which contains 44K hours of transcribed speech data derived from LibriVox audiobooks. The number of distinct speakers is 2742 for males and 2748 for females respectively. The sample rate is 16KHz for all speech data.
The input text sequence is first converted into a phoneme sequence using grapheme-to-phoneme conversion [42] and then aligned with speech using our internal alignment tool to obtain the phoneme-level duration. The frame-level pitch sequence is extracted from the speech using PyWorld3. Evaluation Dataset We employ two benchmark datasets for evaluation: 1) LibriSpeech [43] test-clean, which contains 40 distinct speakers and 5.4 hours of annotated speech data. 2) VCTK dataset [44], which contains 108 distinct speakers. For LibriSpeech test-clean, we randomly sample 15 utterances for each speaker and form a subset of 600 utterances for evaluation. For VCTK, we randomly sample 5 utterances for each speaker, resulting in a subset of 540 utterances for evaluation. Specifically, to synthesize each sample, we randomly select a different utterance of the same speaker and crop it into a σ-second audio segment to form a σ-second prompt. Note that both the speakers in LibriSpeech test-clean and VCTK are not seen during training. Thus, we aim to conduct zero-shot speech synthesis. The singing datasets follow a similar process in the speech dataset, and the details are shown in Section 5.6. 4.2 Model Configuration and Comparison Model Configuration The phoneme encoder is a 6-layer Transformer [35] with 8 attention heads, 512 embedding dimensions, 2048 1D convolution filter size, 9 convolution 1D kernel size, and 0.1 dropout rate. The pitch and duration predictor share the same architecture of 30-layer 1D convolution with ReLU activation and layer normalization, 10 Q-K-V attention layers for in-context learning, which have 512 hidden dimensions and 8 attention heads and are placed every 3 1D convolution layers. We set the dropout to 0.5 in both duration and pitch predictors. For the speech prompt encoder, we use a 6-layer Transformer with 512 hidden size, which has the same architecture as the phoneme encoder. As for the m query tokens in the first Q-K-V attention in the prompting mechanism in the diffusion model (as shown in Figure 3), we set the token number m to 32 and the hidden dimension to 512. The diffusion model contains 40 WaveNet layers [23], which consist of 1D dilated convolution layers with 3 kernel size, 1024 filter size, and 2 dilation size. Specifically, we use a FiLM layer [39] at every 3 WaveNet layers to fuse the condition information processed by the second Q-K-V attention in the prompting mechanism in the diffusion model. The hidden size in WaveNet is 512, and the dropout rate is 0.2. More details of the model configurations are shown in Appendix A. Model Comparison We choose the previous zero-shot TTS model YourTTS [45] as the baseline, with the official code and pre-trained checkpoint4, which is trained on VCTK [44], LibriTTS [46] and TTS-Portuguese [47]. We also choose VALL-E [13] that is based on discrete audio codec and autoregressive language model for comparison, which can help demonstrate the advantages of the designs in NaturalSpeech 2. We directly collect some audio samples from its demo page for comparison. 3https://github.com/JeremyCCHsu/Python-Wrapper-for-World-Vocoder 4https://github.com/Edresson/YourTTS 8 4.3 Model Training and Inference We first train the audio codec using 8 NVIDIA TESLA V100 16GB GPUs with a batch size of 200 audios per GPU for 440K steps. We follow the implementation and experimental setting of SoundStream [19] and adopt Adam optimizer with 2e−4 learning rate. 
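As a concrete illustration of the speech prompting mechanism for the diffusion model (Section 3.3) under the configuration above (m = 32 query tokens, 512 hidden size, 8 attention heads), the PyTorch sketch below shows the two attention blocks followed by a FiLM-style affine transform. It is an illustrative re-implementation under these assumptions, not the released model code.

```python
# Two-stage prompt attention + FiLM conditioning for the diffusion model.
import torch
import torch.nn as nn

class PromptConditioner(nn.Module):
    def __init__(self, dim=512, m=32, heads=8):
        super().__init__()
        self.query_tokens = nn.Parameter(torch.randn(m, dim))  # m learnable queries
        self.attn1 = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.attn2 = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.to_film = nn.Linear(dim, 2 * dim)                  # produces scale and shift

    def forward(self, wavenet_hidden, prompt_hidden):
        # wavenet_hidden: (B, T, dim); prompt_hidden: (B, Tp, dim) from the prompt encoder
        B = prompt_hidden.size(0)
        q = self.query_tokens.unsqueeze(0).expand(B, -1, -1)        # (B, m, dim)
        summary, _ = self.attn1(q, prompt_hidden, prompt_hidden)    # first attention block
        cond, _ = self.attn2(wavenet_hidden, summary, summary)      # second attention block
        scale, shift = self.to_film(cond).chunk(2, dim=-1)
        return wavenet_hidden * (1 + scale) + shift                 # FiLM affine transform

# usage: conditioned = PromptConditioner()(wavenet_hidden, prompt_hidden)
```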
Then we use the trained codec to extract the quantized latent vectors for each audio to train the diffusion model in NaturalSpeech 2. The diffusion model in NaturalSpeech 2 is trained using 16 NVIDIA TESLA V100 32GB GPUs with a batch size of 6K frames of latent vectors per GPU for 300K steps (our model is still underfitting and longer training will result in better performance). We optimize the models with the AdamW optimizer with 5e−4 learning rate and 32k warmup steps following the inverse square root learning schedule. During inference, for the diffusion model, we find it beneficial to use a temperature τ and sample the terminal condition zT from N(0, τ−1 I) [32]. We set τ to 1.22. To balance the generation quality and latency, we adopt the Euler ODE solver and set the diffusion steps to 150.

4.4 Evaluation Metrics

We use both objective and subjective metrics to evaluate the zero-shot synthesis ability of NaturalSpeech 2 and compare it with baselines.

Objective Metrics We evaluate the TTS systems with the following objective metrics:
• Prosody Similarity with Prompt. We evaluate the prosody similarity (in terms of pitch and duration) between the generated speech and the prompt speech, which measures how well the TTS model follows the prosody in the speech prompt in zero-shot synthesis. We calculate the prosody similarity with the following steps: 1) we extract phoneme-level duration and pitch from the prompt and the synthesized speech; 2) we calculate the mean, standard variation, skewness, and kurtosis [7] of the pitch and duration in each speech sequence; 3) we calculate the difference of the mean, standard variation, skewness, and kurtosis between each paired prompt and synthesized speech and average the differences over the whole test set (a small computation sketch is provided after this list).
• Prosody Similarity with Ground Truth. We evaluate the prosody similarity (in terms of pitch and duration) between the generated speech and the ground-truth speech, which measures how well the TTS model matches the prosody in the ground truth. Since there is correspondence between the two speech sequences, we calculate the Pearson correlation and RMSE of the pitch/duration between the generated and ground-truth speech, and average them over the whole test set.
• Word Error Rate. We employ an ASR model to transcribe the generated speech and calculate the word error rate (WER). The ASR model is a CTC-based HuBERT [48] pre-trained on Libri-light [49] and fine-tuned on the 960 hours training set of LibriSpeech. We use the official code and checkpoint5.

Subjective Metrics We conduct human evaluation and use the intelligibility score and mean opinion score as the subjective metrics:
• Intelligibility Score. Neural TTS models often suffer from robustness issues such as word skipping, repeating, and collapse, especially for autoregressive models. To demonstrate the robustness of NaturalSpeech 2, following the practice in [6], we use the 50 particularly hard sentences (see Appendix C) and conduct an intelligibility test. We measure the number of repeating words, skipping words, and error sentences as the intelligibility score.
• CMOS and SMOS. Since synthesizing natural voices is one of the main goals of NaturalSpeech 2, we measure naturalness using comparative mean opinion score (CMOS) with 12 native speakers as the judges. We also use the similarity mean opinion score (SMOS) between the synthesized and prompt speech to measure the speaker similarity, with 6 native speakers as the judges.
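The sketch below illustrates the prosody-similarity-with-prompt computation described in the Objective Metrics list above: each utterance's phoneme-level pitch (or duration) sequence is summarized by its mean, standard deviation, skewness, and kurtosis, and the absolute differences of these statistics between synthesized and prompt speech are averaged over the test set. It is a minimal sketch of the described steps, not the authors' evaluation script.

```python
# Prosody similarity between synthesized speech and prompt speech.
import numpy as np
from scipy.stats import skew, kurtosis

def prosody_stats(seq):
    seq = np.asarray(seq, dtype=float)
    return np.array([seq.mean(), seq.std(), skew(seq), kurtosis(seq)])

def prosody_similarity(synth_seqs, prompt_seqs):
    """Both arguments: lists of per-utterance pitch (or duration) sequences."""
    diffs = [np.abs(prosody_stats(s) - prosody_stats(p))
             for s, p in zip(synth_seqs, prompt_seqs)]
    mean_diff = np.mean(diffs, axis=0)          # average over the test set
    return dict(zip(["Mean", "Std", "Skew", "Kurt"], mean_diff))
```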
5https://huggingface.co/facebook/hubert-large-ls960-ft

5 Results on Natural and Zero-Shot Synthesis

In this section, we conduct a series of experiments to compare NaturalSpeech 2 with the baselines from the following aspects: 1) Generation Quality, by evaluating the naturalness of the synthesized audio; 2) Generation Similarity, by evaluating how well the TTS system follows prompts; 3) Robustness, by calculating the WER and an additional intelligibility test.

5.1 Generation Quality

Table 3: The CMOS results (vs. NaturalSpeech 2) on LibriSpeech and VCTK.

Setting            LibriSpeech   VCTK
YourTTS            −0.65         −0.58
NaturalSpeech 2    0.00          0.00
Ground Truth       +0.04         −0.30

We conduct a CMOS test to evaluate the generation quality (i.e., naturalness). We randomly select 20 utterances from the LibriSpeech and VCTK tests and crop the prompt speech to 3s. To ensure high-quality generation, we use a speech scoring model [50] to filter the multiple samples generated by the diffusion model with different starting Gaussian noises z1. Table 3 shows a comparison of NaturalSpeech 2 against baseline YourTTS and the ground truth. We have several observations: 1) NaturalSpeech 2 is comparable to the ground-truth recording in LibriSpeech (+0.04 is regarded as on par) and achieves much better quality on the VCTK dataset (−0.30 is a large gap), which demonstrates that the naturalness of the speech generated by NaturalSpeech 2 is high enough. 2) NaturalSpeech 2 shows 0.65 and 0.58 CMOS gain over YourTTS in LibriSpeech and VCTK, respectively, which shows the superiority of NaturalSpeech 2 over this baseline.

5.2 Generation Similarity

Table 4: The prosody similarity between synthesized and prompt speech in terms of the difference in mean (Mean), standard variation (Std), skewness (Skew), and kurtosis (Kurt) of pitch and duration.

LibriSpeech        Pitch                               Duration
                   Mean↓   Std↓   Skew↓   Kurt↓        Mean↓   Std↓   Skew↓   Kurt↓
YourTTS            10.52   7.62   0.59    1.18         0.84    0.66   0.75    3.70
NaturalSpeech 2    10.11   6.18   0.50    1.01         0.65    0.70   0.60    2.99

VCTK               Pitch                               Duration
                   Mean↓   Std↓   Skew↓   Kurt↓        Mean↓   Std↓   Skew↓   Kurt↓
YourTTS            13.67   6.63   0.72    1.54         0.72    0.85   0.84    3.31
NaturalSpeech 2    13.29   6.41   0.68    1.27         0.79    0.76   0.76    2.65

We use two metrics to evaluate the speech similarity: 1) prosody similarity between the synthesized and prompt speech, and 2) an SMOS test. To evaluate the prosody similarity, we randomly sample one sentence for each speaker for both LibriSpeech test-clean and the VCTK dataset to form the test sets. Specifically, to synthesize each sample, we randomly and independently sample the prompt speech with σ = 3 seconds. Note that YourTTS has seen 97 speakers in VCTK in training, but we still compare NaturalSpeech 2 with YourTTS on all the speakers in VCTK (i.e., the 97 speakers are seen to YourTTS but unseen to NaturalSpeech 2).

We apply the alignment tool to obtain phoneme-level duration and pitch and calculate the prosody similarity metrics between the synthesized speech and the prompt speech as described in Section 4.4. The results are shown in Table 4. We have the following observations: 1) NaturalSpeech 2 consistently outperforms the baseline YourTTS in both LibriSpeech and VCTK on all metrics, which demonstrates that our proposed NaturalSpeech 2 can mimic the prosody of the prompt speech much better. 2) Although YourTTS has seen 97 from 108 speakers in the VCTK dataset, our model can still outperform it by a large margin, which demonstrates the advantages of NaturalSpeech 2.
Table 5: The SMOS on LibriSpeech and VCTK respectively.

Setting            LibriSpeech   VCTK
YourTTS            2.03          2.43
NaturalSpeech 2    3.28          3.20
GroundTruth        3.33          3.86

Furthermore, we also compare prosody similarity between synthesized and ground-truth speech in Appendix D. We further evaluate the speaker similarity using the SMOS test. We randomly select 10 utterances from the LibriSpeech and VCTK datasets respectively, following the setting in the CMOS test. The length of the prompt speech is set to 3s. The results are shown in Table 5. NaturalSpeech 2 outperforms YourTTS by 1.25 and 0.77 SMOS scores for LibriSpeech and VCTK, respectively, which shows that NaturalSpeech 2 is significantly better in speaker similarity.

5.3 Robustness

Table 6: Word error rate on LibriSpeech and VCTK.

Setting            LibriSpeech   VCTK
YourTTS            7.10          14.80
NaturalSpeech 2    2.26          6.99
Ground Truth       1.94          9.49

We use the full test set of LibriSpeech and VCTK as described in Section 4.1 to synthesize the speech and compute the word error rate (WER) between the transcribed text and ground-truth text. To synthesize each sample, we use a 3-second prompt by randomly cropping the whole prompt speech. The results are shown in Table 6. We observe that: 1) NaturalSpeech 2 significantly outperforms YourTTS in LibriSpeech and VCTK, indicating better synthesis of high-quality and robust speech. 2) Our synthesized speech is comparable to the ground-truth speech in LibriSpeech and surpasses it in VCTK. The higher WER results in VCTK may stem from a noisy environment and the lack of ASR model fine-tuning on that dataset.

Table 7: The robustness of NaturalSpeech 2 and other autoregressive/non-autoregressive models on 50 particularly hard sentences. We conduct an intelligibility test on these sentences and measure the number of word repeating, word skipping, and error sentences. Each kind of word error is counted at most once per sentence.

AR/NAR   Model                 Repeats   Skips   Error Sentences   Error Rate
AR       Tacotron [3]          4         11      12                24%
AR       Transformer TTS [5]   7         15      17                34%
NAR      FastSpeech [6]        0         0       0                 0%
NAR      NaturalSpeech [11]    0         0       0                 0%
NAR      NaturalSpeech 2       0         0       0                 0%

Autoregressive TTS models often suffer from alignment mismatch between phoneme and speech, resulting in severe word repeating and skipping. To further evaluate the robustness of the diffusion-based TTS model, we adopt the 50 particularly hard sentences in FastSpeech [6] to evaluate the robustness of the TTS systems. We can find that the non-autoregressive models such as FastSpeech [6], NaturalSpeech [11], and also NaturalSpeech 2 are robust on the 50 hard cases, without any intelligibility issues. As a comparison, the autoregressive models such as Tacotron [3], Transformer TTS [5], and VALL-E [13] have a high error rate on these hard sentences. The comparison results are provided in Table 7.

5.4 Comparison with Other TTS Systems

In this section, we compare NaturalSpeech 2 with the zero-shot TTS model VALL-E [13]. We directly download the first 16 utterances from the VALL-E demo page6, which consist of 8 samples from LibriSpeech and 8 samples from VCTK. We evaluate the CMOS and SMOS in Table 8. From the results, we find that NaturalSpeech 2 outperforms VALL-E by 0.3 in SMOS and 0.31 in CMOS, respectively. The SMOS results show that NaturalSpeech 2 is significantly better in speaker similarity. The CMOS results demonstrate that the speech generated by NaturalSpeech 2 is much more natural and of higher quality.

6https://valle-demo.github.io/

Table 8: SMOS and CMOS results between NaturalSpeech 2 and VALL-E.

Setting            SMOS   CMOS
VALL-E             3.53   −0.31
NaturalSpeech 2    3.83   0.00
GroundTruth        4.09   -
5.5 Ablation Study

Table 9: The ablation study of NaturalSpeech 2. The prosody similarity between the synthesized and prompt speech in terms of the difference in the mean (Mean), standard variation (Std), skewness (Skew), and kurtosis (Kurt) of pitch and duration. "-" denotes that the model cannot converge.

                        Pitch                                Duration
                        Mean↓   Std↓    Skew↓   Kurt↓        Mean↓   Std↓   Skew↓   Kurt↓
NaturalSpeech 2         10.11   6.18    0.50    1.01         0.65    0.70   0.60    2.99
w/o. diff prompt        -       -       -       -            -       -      -       -
w/o. dur/pitch prompt   21.69   19.38   0.63    1.29         0.77    0.72   0.70    3.70
w/o. CE loss            10.69   6.24    0.55    1.06         0.71    0.72   0.74    3.85
w/o. query attn         10.78   6.29    0.62    1.37         0.67    0.71   0.69    3.59

In this section, we perform ablation experiments. 1) To study the effect of the speech prompt, we remove the Q-K-V attention layers in the diffusion model (abbr. w/o. diff prompt) and in the duration and pitch predictors (abbr. w/o. dur/pitch prompt), respectively. 2) To study the effect of the cross-entropy (CE) loss Lce−rvq based on RVQ, we disable the CE loss by setting λce−rvq to 0 (abbr. w/o. CE loss). 3) To study the effectiveness of the two Q-K-V attention blocks in speech prompting for diffusion in Section 3.3, we remove the first attention that adopts m randomly initialized query sequences to attend to the prompt hidden and directly use one Q-K-V attention to attend to the prompt hidden (abbr. w/o. query attn). We report the prosody similarity metrics between synthesized and prompt speech in Table 9. More ablation results between synthesized and ground-truth speech are included in Appendix E.

We have the following observations: 1) Disabling the speech prompt significantly degrades prosody similarity (e.g., from 10.11 to 21.69 for the mean of the pitch, or the model even cannot converge), highlighting its importance for high-quality TTS synthesis. 2) Disabling the cross-entropy loss worsens performance, as the residual vector quantizer's layer-wise cross entropy provides regularization for precise latent representations. 3) Disabling the query attention strategy also degrades prosody similarity. In practice, we find that applying cross-attention directly to the prompt hidden will leak details and thus mislead generation.

In addition, since the prompt length is an important hyper-parameter for zero-shot TTS, we would like to investigate the effect of the prompt length. We follow the setting of prosody similarity between synthesized and prompt speech in Section 5.2. Specifically, we vary the prompt length by σ = {3, 5, 10} seconds and report the prosody similarity metrics of NaturalSpeech 2. The results are shown in Table 10. We observe that when the prompt is longer, the similarity between the generated speech and the prompt is higher for NaturalSpeech 2. It shows that a longer prompt reveals more details of the prosody, which helps the TTS model generate more similar speech.

5.6 Zero-Shot Singing Synthesis

In this section, we explore NaturalSpeech 2 for synthesizing singing voices in a zero-shot setting, either given a singing prompt or only a speech prompt. For singing data collection, we crawl a number of singing voices and their paired lyrics from the Web. For singing data preprocessing, we utilize a speech processing model to remove the backing vocal and accompaniment in the song, and an ASR model to filter out samples with misalignments. The dataset is then constructed using the same process as the speech data, ultimately containing around 30 hours of singing data. The dataset is upsampled and mixed with speech data for singing experiments.
We use speech and singing data together to train NaturalSpeech 2 with a 5e−5 learning rate. In inference, we set the diffusion steps to 1000 for better performance. To synthesize a singing voice, we use the ground-truth pitch and duration from another singing voice, and use different singing prompts to generate singing voices with different singer timbres. Interestingly, we find that NaturalSpeech 2 can generate a novel singing voice using speech as the prompt. See the demo page7 for zero-shot singing synthesis with either singing or speech as the prompt.

7https://speechresearch.github.io/naturalspeech2

Table 10: The NaturalSpeech 2 prosody similarity between the synthesized and prompt speech with different prompt lengths in terms of the difference in the mean (Mean), standard variation (Std), skewness (Skew), and kurtosis (Kurt) of pitch and duration.

LibriSpeech   Pitch                              Duration
              Mean↓   Std↓   Skew↓   Kurt↓       Mean↓   Std↓   Skew↓   Kurt↓
3s            10.11   6.18   0.50    1.01        0.65    0.70   0.60    2.99
5s            6.96    4.29   0.42    0.77        0.69    0.60   0.53    2.52
10s           6.90    4.03   0.48    1.36        0.62    0.45   0.56    2.48

VCTK          Pitch                              Duration
              Mean↓   Std↓   Skew↓   Kurt↓       Mean↓   Std↓   Skew↓   Kurt↓
3s            13.29   6.41   0.68    1.27        0.79    0.76   0.76    2.65
5s            14.46   5.47   0.63    1.23        0.62    0.67   0.74    3.40
10s           10.28   4.31   0.41    0.87        0.71    0.62   0.76    3.48

5.7 Extension to Voice Conversion and Speech Enhancement

In this section, we extend NaturalSpeech 2 to another two speech synthesis tasks: 1) voice conversion and 2) speech enhancement. See the demo page8 for zero-shot voice conversion and speech enhancement examples.

5.7.1 Voice Conversion

Besides zero-shot text-to-speech and singing synthesis, NaturalSpeech 2 also supports zero-shot voice conversion, which aims to convert the source audio zsource into the target audio ztarget using the voice of the prompt audio zprompt. Technically, we first convert the source audio zsource into an informative Gaussian noise z1 using a source-aware diffusion process and generate the target audio ztarget using a target-aware denoising process, shown as follows.

Source-Aware Diffusion Process. In voice conversion, it is helpful to provide some necessary information from the source audio for the target audio in order to ease the generation process. Thus, instead of directly diffusing the source audio with some Gaussian noise, we diffuse the source audio into a starting point that still maintains some information of the source audio. Specifically, inspired by the stochastic encoding process in Diffusion Autoencoder [51], we obtain the starting point z1 from zsource as follows:

z_1 = z_0 + \int_0^1 -\frac{1}{2}\Big(z_t + \Sigma_t^{-1}\big(\rho(\hat{s}_\theta(z_t, t, c), t) - z_t\big)\Big)\beta_t\, dt, \quad (8)

where Σ_t^{-1}(ρ(ŝ_θ(z_t, t, c), t) − z_t) is the predicted score at t. We can think of this process as the reverse of the ODE (Equation 5) in the denoising process.

Target-Aware Denoising Process. Different from TTS, which starts from random Gaussian noise, the denoising process of voice conversion starts from the z1 obtained from the source-aware diffusion process. We run the standard denoising process as in the TTS setting to obtain the final target audio ztarget, conditioned on c and the prompt audio zprompt, where c is obtained from the phoneme and duration sequence of the source audio and the predicted pitch sequence. As a consequence, we observe that NaturalSpeech 2 is capable of producing speech that exhibits similar prosody to the source speech, while also replicating the timbre specified by the prompt.
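The sketch below illustrates the source-aware diffusion process of Equation (8): the source latent z0 is pushed forward to an informative starting noise z1 by Euler-integrating the drift, reusing the trained network s_theta that predicts the data. The callables beta, rho, and sigma2 (the scalar Σ_t) are assumed helpers implementing the closed-form quantities from Section 3.2; this is an illustrative sketch, not the released implementation.

```python
# Source-aware stochastic encoding (Equation 8), integrated with Euler steps.
import torch

def source_aware_encode(z0, s_theta, c, beta, rho, sigma2, n_steps=150):
    """z0: source latent (B, T, D); s_theta(z, t, c) predicts \hat{z}_0;
    beta(t), rho(x, t), and sigma2(t) = 1 - exp(-int_0^t beta) are assumed
    scalar noise-schedule helpers from Section 3.2."""
    z, dt = z0.clone(), 1.0 / n_steps
    for i in range(n_steps):
        t = (i + 0.5) * dt                        # midpoint of the Euler step
        z0_hat = s_theta(z, t, c)                 # data prediction by the network
        score = (rho(z0_hat, t) - z) / sigma2(t)  # Sigma_t^{-1}(rho(z0_hat, t) - z)
        z = z - 0.5 * (z + score) * beta(t) * dt  # accumulate the drift of Eq. (8)
    return z                                      # informative starting point z1
```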
8https://speechresearch.github.io/naturalspeech2 13 5.7.2 Speech Enhancement NaturalSpeech 2 can be extended to speech enhancement, which is similar to the extension of voice conversion. In this setting, we assume that we have the source audio z′ source which contains background noise ( z′ denotes the audio with background noise), the prompt with background noise z′ prompt for the source-aware diffusion process, and the prompt without background noise zprompt for target-aware denoising process. Note that z′ prompt have the same background noise. source and z′ To remove the background noise, firstly, we apply the source-aware diffusion process by z′ source and z′ prompt and obtain the z1 as in Equation 8. The source audio’s duration and pitch are utilized in this procedure. Secondly, we run the target-aware denoising process to obtain the clean audio by z1 and the clean prompt zprompt. Specifically, we use the phoneme sequence, duration sequence, and pitch sequence of the source audio in this procedure. As a result, we find that NaturalSpeech 2 can effectively eliminate background noise while simultaneously preserving crucial aspects such as prosody and timbre. 6 Conclusion and Future Work In this paper, we develop NaturalSpeech 2, a TTS system that leverages a neural audio codec with continuous latent vectors and a latent diffusion model with non-autoregressive generation to enable natural and zero-shot text-to-speech synthesis. To facilitate in-context learning for zero-shot synthesis, we design a speech prompting mechanism in the duration/pitch predictor and the diffusion model. By scaling NaturalSpeech 2 to 400M model parameters, 44K hours of speech, and 5K speakers, it can synthesize speech with high expressiveness, robustness, fidelity, and strong zero-shot ability, outperforming previous TTS systems. For future work, we will explore efficient strategies such as consistency models [52, 53] to speed up the diffusion model and explore large-scale speaking and singing voice training to enable more powerful mixed speaking/singing capability. Broader Impacts: Since NaturalSpeech 2 could synthesize speech that maintains speaker identity, it may carry potential risks in misuse of the model, such as spoofing voice identification or impersonating a specific speaker. We conducted the experiments under the assumption that the user agree to be the target speaker in speech synthesis. If the model is generalized to unseen speakers in the real world, it should include a protocol to ensure that the speaker approves the use of their voice and a synthesized speech detection model. 14 References [1] Paul Taylor. Text-to-speech synthesis. Cambridge university press, 2009. [2] Xu Tan, Tao Qin, Frank Soong, and Tie-Yan Liu. A survey on neural speech synthesis. arXiv preprint arXiv:2106.15561, 2021. [3] Yuxuan Wang, RJ Skerry-Ryan, Daisy Stanton, Yonghui Wu, Ron J Weiss, Navdeep Jaitly, Zongheng Yang, Ying Xiao, Zhifeng Chen, Samy Bengio, et al. Tacotron: Towards end-to-end speech synthesis. Proc. Interspeech 2017, pages 4006–4010, 2017. [4] Jonathan Shen, Ruoming Pang, Ron J Weiss, Mike Schuster, Navdeep Jaitly, Zongheng Yang, Zhifeng Chen, Yu Zhang, Yuxuan Wang, RJ Skerry-Ryan, et al. Natural TTS synthesis by conditioning WaveNet on mel spectrogram predictions. In 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 4779–4783. IEEE, 2018. [5] Naihan Li, Shujie Liu, Yanqing Liu, Sheng Zhao, and Ming Liu. Neural speech synthesis with Transformer network. 
In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 6706–6713, 2019. [6] Yi Ren, Yangjun Ruan, Xu Tan, Tao Qin, Sheng Zhao, Zhou Zhao, and Tie-Yan Liu. FastSpeech: Fast, robust and controllable text to speech. In NeurIPS, 2019. [7] Yi Ren, Chenxu Hu, Xu Tan, Tao Qin, Sheng Zhao, Zhou Zhao, and Tie-Yan Liu. FastSpeech 2: Fast and high-quality end-to-end text to speech. In International Conference on Learning Representations, 2021. [8] Yanqing Liu, Zhihang Xu, Gang Wang, Kuan Chen, Bohan Li, Xu Tan, Jinzhu Li, Lei He, and Sheng Zhao. DelightfulTTS: The Microsoft speech synthesis system for Blizzard challenge 2021. arXiv preprint arXiv:2110.12612, 2021. [9] Yanqing Liu, Ruiqing Xue, Lei He, Xu Tan, and Sheng Zhao. DelightfulTTS 2: End- to-end speech synthesis with adversarial vector-quantized auto-encoders. arXiv preprint arXiv:2207.04646, 2022. [10] Jaehyeon Kim, Jungil Kong, and Juhee Son. Conditional variational autoencoder with adversar- ial learning for end-to-end text-to-speech. arXiv preprint arXiv:2106.06103, 2021. [11] Xu Tan, Jiawei Chen, Haohe Liu, Jian Cong, Chen Zhang, Yanqing Liu, Xi Wang, Yichong Leng, Yuanhao Yi, Lei He, et al. NaturalSpeech: End-to-end text to speech synthesis with human-level quality. arXiv preprint arXiv:2205.04421, 2022. [12] Keith Ito. The LJ speech dataset. https://keithito.com/LJ-Speech-Dataset/, 2017. [13] Chengyi Wang, Sanyuan Chen, Yu Wu, Ziqiang Zhang, Long Zhou, Shujie Liu, Zhuo Chen, Yanqing Liu, Huaming Wang, Jinyu Li, et al. Neural codec language models are zero-shot text to speech synthesizers. arXiv preprint arXiv:2301.02111, 2023. [14] Eugene Kharitonov, Damien Vincent, Zalán Borsos, Raphaël Marinier, Sertan Girgin, Olivier Pietquin, Matt Sharifi, Marco Tagliasacchi, and Neil Zeghidour. Speak, read and prompt: High-fidelity text-to-speech with minimal supervision. arXiv preprint arXiv:2302.03540, 2023. [15] Ruiqing Xue, Yanqing Liu, Lei He, Xu Tan, Linquan Liu, Edward Lin, and Sheng Zhao. Foundationtts: Text-to-speech for asr customization with generative language model. arXiv preprint arXiv:2303.02939, 2023. [16] Aaron van den Oord, Oriol Vinyals, and Koray Kavukcuoglu. Neural discrete representation learning. In Proceedings of the 31st International Conference on Neural Information Processing Systems, pages 6309–6318, 2017. [17] Ali Razavi, Aaron van den Oord, and Oriol Vinyals. Generating diverse high-fidelity images with VQ-VAE-2. In Advances in neural information processing systems, pages 14866–14876, 2019. 15 [18] Patrick Esser, Robin Rombach, and Bjorn Ommer. Taming transformers for high-resolution image synthesis. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 12873–12883, 2021. [19] Neil Zeghidour, Alejandro Luebs, Ahmed Omran, Jan Skoglund, and Marco Tagliasacchi. SoundStream: An end-to-end neural audio codec. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 2021. [20] Alexandre Défossez, Jade Copet, Gabriel Synnaeve, and Yossi Adi. High fidelity neural audio compression. arXiv preprint arXiv:2210.13438, 2022. [21] Zalán Borsos, Raphaël Marinier, Damien Vincent, Eugene Kharitonov, Olivier Pietquin, Matt Sharifi, Olivier Teboul, David Grangier, Marco Tagliasacchi, and Neil Zeghidour. Audiolm: a language modeling approach to audio generation. arXiv preprint arXiv:2209.03143, 2022. [22] Jinglin Liu, Chengxi Li, Yi Ren, Feiyang Chen, and Zhou Zhao. DiffSinger: Singing voice synthesis via shallow diffusion mechanism. 
In Proceedings of the AAAI Conference on Artificial Intelligence, volume 36, pages 11020–11028, 2022. [23] Aäron van den Oord, Sander Dieleman, Heiga Zen, Karen Simonyan, Oriol Vinyals, Alex Graves, Nal Kalchbrenner, Andrew Senior, and Koray Kavukcuoglu. WaveNet: A generative model for raw audio. arXiv preprint arXiv:1609.03499, 2016. [24] Jean-Marc Valin and Jan Skoglund. LPCNet: Improving neural speech synthesis through linear prediction. In ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 5891–5895. IEEE, 2019. [25] Felix Kreuk, Gabriel Synnaeve, Adam Polyak, Uriel Singer, Alexandre Défossez, Jade Copet, Devi Parikh, Yaniv Taigman, and Yossi Adi. Audiogen: Textually guided audio generation. arXiv preprint arXiv:2209.15352, 2022. [26] Alec Radford, Karthik Narasimhan, Tim Salimans, Ilya Sutskever, et al. Improving language understanding by generative pre-training. [27] Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. Language models are unsupervised multitask learners. OpenAI blog, 1(8):9, 2019. [28] Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. Advances in neural information processing systems, 33:1877–1901, 2020. [29] Zhifeng Kong, Wei Ping, Jiaji Huang, Kexin Zhao, and Bryan Catanzaro. DiffWave: A versatile diffusion model for audio synthesis. In ICLR, 2021. [30] Nanxin Chen, Yu Zhang, Heiga Zen, Ron J Weiss, Mohammad Norouzi, and William Chan. WaveGrad: Estimating gradients for waveform generation. In ICLR, 2021. [31] Myeonghun Jeong, Hyeongju Kim, Sung Jun Cheon, Byoung Jin Choi, and Nam Soo Kim. Diff-TTS: A denoising diffusion model for text-to-speech. arXiv preprint arXiv:2104.01409, 2021. [32] Vadim Popov, Ivan Vovk, Vladimir Gogoryan, Tasnima Sadekova, and Mikhail Kudinov. Grad- TTS: A diffusion probabilistic model for text-to-speech. arXiv preprint arXiv:2105.06337, 2021. [33] Xu Tan, Tao Qin, Jiang Bian, Tie-Yan Liu, and Yoshua Bengio. Regeneration learning: A learning paradigm for data generation. arXiv preprint arXiv:2301.08846, 2023. [34] Yang Song, Jascha Sohl-Dickstein, Diederik P Kingma, Abhishek Kumar, Stefano Ermon, and Ben Poole. Score-based generative modeling through stochastic differential equations. In International Conference on Learning Representations, 2020. [35] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. In Advances in Neural Informa- tion Processing Systems, pages 5998–6008, 2017. 16 [36] Yequan Wang, Minlie Huang, Xiaoyan Zhu, and Li Zhao. Attention-based lstm for aspect-level sentiment classification. In Proceedings of the 2016 conference on empirical methods in natural language processing, pages 606–615, 2016. [37] Yuxuan Wang, Daisy Stanton, Yu Zhang, RJ Skerry-Ryan, Eric Battenberg, Joel Shor, Ying Xiao, Ye Jia, Fei Ren, and Rif A Saurous. Style tokens: Unsupervised style modeling, control and transfer in end-to-end speech synthesis. In International Conference on Machine Learning, pages 5180–5189. PMLR, 2018. [38] Dacheng Yin, Chuanxin Tang, Yanqing Liu, Xiaoqiang Wang, Zhiyuan Zhao, Yucheng Zhao, Zhiwei Xiong, Sheng Zhao, and Chong Luo. Retrievertts: Modeling decomposed factors for text-based speech insertion. arXiv preprint arXiv:2206.13865, 2022. 
[39] Ethan Perez, Florian Strub, Harm De Vries, Vincent Dumoulin, and Aaron Courville. Film: Visual reasoning with a general conditioning layer. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 32, 2018. [40] Kai Shen, Zeqian Ju, Xu Tan, Yanqing Liu, Yichong Leng, Lei He, Tao Qin, Sheng Zhao, and Jiang Bian. Naturalspeech 2: Latent diffusion models are natural and zero-shot speech and singing synthesizers. arXiv preprint arXiv:2304.09116, 2023. [41] Vineel Pratap, Qiantong Xu, Anuroop Sriram, Gabriel Synnaeve, and Ronan Collobert. MLS: A large-scale multilingual dataset for speech research. Proc. Interspeech 2020, pages 2757–2761, 2020. [42] Hao Sun, Xu Tan, Jun-Wei Gan, Hongzhi Liu, Sheng Zhao, Tao Qin, and Tie-Yan Liu. Token- level ensemble distillation for grapheme-to-phoneme conversion. In INTERSPEECH, 2019. [43] Vassil Panayotov, Guoguo Chen, Daniel Povey, and Sanjeev Khudanpur. LibriSpeech: an ASR corpus based on public domain audio books. In 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 5206–5210. IEEE, 2015. [44] Christophe Veaux, Junichi Yamagishi, Kirsten MacDonald, et al. Superseded-CSTK VCTK corpus: English multi-speaker corpus for CSTK voice cloning toolkit. 2016. [45] Edresson Casanova, Julian Weber, Christopher D Shulby, Arnaldo Candido Junior, Eren Gölge, and Moacir A Ponti. Yourtts: Towards zero-shot multi-speaker tts and zero-shot voice conversion for everyone. In International Conference on Machine Learning, pages 2709–2720. PMLR, 2022. [46] Heiga Zen, Viet Dang, Rob Clark, Yu Zhang, Ron J Weiss, Ye Jia, Zhifeng Chen, and Yonghui Wu. LibriTTS: A corpus derived from librispeech for text-to-speech. Proc. Interspeech 2019, pages 1526–1530, 2019. [47] Edresson Casanova, Arnaldo Candido Junior, Christopher Shulby, Frederico Santos de Oliveira, João Paulo Teixeira, Moacir Antonelli Ponti, and Sandra Aluísio. Tts-portuguese corpus: a corpus for speech synthesis in brazilian portuguese. Language Resources and Evaluation, 56(3):1043–1055, 2022. [48] Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia, Ruslan Salakhutdinov, and Abdelrahman Mohamed. Hubert: Self-supervised speech representation learning by masked prediction of hidden units. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 29:3451–3460, 2021. [49] Jacob Kahn, Morgane Riviere, Weiyi Zheng, Evgeny Kharitonov, Qiantong Xu, Pierre- Emmanuel Mazaré, Julien Karadayi, Vitaliy Liptchinsky, Ronan Collobert, Christian Fuegen, et al. Libri-light: A benchmark for asr with limited or no supervision. In ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 7669–7673. IEEE, 2020. [50] Sanyuan Chen, Chengyi Wang, Zhengyang Chen, Yu Wu, Shujie Liu, Zhuo Chen, Jinyu Li, Naoyuki Kanda, Takuya Yoshioka, Xiong Xiao, et al. Wavlm: Large-scale self-supervised pre- training for full stack speech processing. IEEE Journal of Selected Topics in Signal Processing, 16(6):1505–1518, 2022. 17 [51] Konpat Preechakul, Nattanat Chatthee, Suttisak Wizadwongsa, and Supasorn Suwajanakorn. Diffusion autoencoders: Toward a meaningful and decodable representation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10619–10629, 2022. [52] Yang Song, Prafulla Dhariwal, Mark Chen, and Ilya Sutskever. Consistency models. arXiv preprint arXiv:2303.01469, 2023. [53] Zhen Ye, Wei Xue, Xu Tan, Jie Chen, Qifeng Liu, and Yike Guo. 
Comospeech: One-step speech and singing voice synthesis via consistency model. arXiv preprint arXiv:2305.06908, 2023. 18 A Model Details Module Configuration Value #Parameters Audio Codec Number of Residual VQ Blocks Codebook size Codebook Dimension Hop Size Similarity Metric Phoneme Encoder Duration Predictor Pitch Predictor Speech Prompt Encoder Diffusion Model Transformer Layer Attention Heads Hidden Size Conv1D Filter Size Conv1D Kernel Size Dropout Conv1D Layers Conv1D Kernel Size Attention Layers Attention Heads Hidden Size Dropout Conv1D Layers Conv1D Kernel Size Attention Layers Attention Heads Hidden Size Dropout Transformer Layer Attention Heads Hidden Size Conv1D Filter Size Conv1D Kernel Size Dropout WaveNet Layer Attention Layers Attention Heads Hidden Size Query Tokens Query Token Dimension Dropout Total 16 1024 256 200 L2 6 8 512 2048 9 0.2 30 3 10 8 512 0.5 30 5 10 8 512 0.5 6 8 512 2048 9 0.2 40 13 8 512 32 512 0.2 27M 72M 34M 50M 69M 183M 435M Table 11: The detailed model configurations of NaturalSpeech 2. B The Details of WaveNet Architecture in the Diffusion Model As shown in Figure 4, the WaveNet consists of 40 blocks. Each block consists of 1) a dilated CNN with kernel size 3 and dilation 2, 2) a Q-K-V attention, and 3) a FiLM layer. In detail, we use Q-K-V attention to attend to the key/value obtained from the first Q-K-V attention module (from the speech prompt encoder) as shown in Figure 3. Then, we use the attention results to generate the scale and bias terms, which are used as the conditional information of the FiLM layer. Finally, we average the skip output results of each layer and calculate the final WaveNet output. 19 Figure 4: Overview of the WaveNet architecture in the diffusion model. C The 50 Particularly Hard Sentences The 50 particularly hard sentences used in Section 5.3 are listed below: 01. a 02. b 03. c 04. H 05. I 06. J 07. K 08. L 09. 22222222 hello 22222222 10. S D S D Pass zero - zero Fail - zero to zero - zero - zero Cancelled - fifty nine to three - two - sixty four Total - fifty nine to three - two - 11. S D S D Pass - zero - zero - zero - zero Fail - zero - zero - zero - zero Cancelled - four hundred and sixteen - seventy six - 12. zero - one - one - two Cancelled - zero - zero - zero - zero Total - two hundred and eighty six - nineteen - seven - 13. forty one to five three hundred and eleven Fail - one - one to zero two Cancelled - zero - zero to zero zero Total - 14. zero zero one , MS03 - zero twenty five , MS03 - zero thirty two , MS03 - zero thirty nine , 15. 1b204928 zero zero zero zero zero zero zero zero zero zero zero zero zero zero one seven ole32 16. zero zero zero zero zero zero zero zero two seven nine eight F three forty zero zero zero zero zero six four two eight zero one eight 20 Dilated ConvQ-K-V Attention+Timestep tK/V from Prompt+Condition cFiLMLineartanhσ+Layer k’s inputLayer k’s outputxNQGating…scale * x + biasscalebiasN Layers’ outputsLayer kLayer k-1+ReLUConv1d 1x1WaveNetoutput 17. c five eight zero three three nine a zero bf eight FALSE zero zero zero bba3add2 - c229 - 4cdb - 18. Calendaring agent failed with error code 0x80070005 while saving appointment . 19. Exit process - break ld - Load module - output ud - Unload module - ignore ser - System error - ignore ibp - Initial breakpoint - 20. Common DB connectors include the DB - nine , DB - fifteen , DB - nineteen , DB - twenty five , DB - thirty seven , and DB - fifty connectors . 21. 
To deliver interfaces that are significantly better suited to create and process RFC eight twenty one , RFC eight twenty two , RFC nine seventy seven , and MIME content . 22. int1 , int2 , int3 , int4 , int5 , int6 , int7 , int8 , int9 , 23. seven _ ctl00 ctl04 ctl01 ctl00 ctl00 24. Http0XX , Http1XX , Http2XX , Http3XX , 25. config file must contain A , B , C , D , E , F , and G . 26. mondo - debug mondo - ship motif - debug motif - ship sts - debug sts - ship Comparing local files to checkpoint files ... 27. Rusbvts . dll Dsaccessbvts . dll Exchmembvt . dll Draino . dll Im trying to deploy a new topology , and I keep getting this error . 28. You can call me directly at four two five seven zero three seven three four four or my cell four two five four four four seven four seven four or send me a meeting request with all the appropriate information . 29. Failed zero point zero zero percent < one zero zero one zero zero zero zero Internal . Exchange . ContentFilter . BVT ContentFilter . BVT_log . xml Error ! Filename not specified . 30. C colon backslash o one two f c p a r t y backslash d e v one two backslash oasys backslash legacy backslash web backslash HELP 31. src backslash mapi backslash t n e f d e c dot c dot o l d backslash backslash m o z a r t f one backslash e x five 32. copy backslash backslash j o h n f a n four backslash scratch backslash M i c r o s o f t dot S h a r e P o i n t dot 33. Take a look at h t t p colon slash slash w w w dot granite dot a b dot c a slash access slash email dot 34. backslash bin backslash premium backslash forms backslash r e g i o n a l o p t i o n s dot a s p x dot c s Raj , DJ , 35. Anuraag backslash backslash r a d u r five backslash d e b u g dot one eight zero nine underscore P R two h dot s t s contains 36. p l a t f o r m right bracket backslash left bracket f l a v o r right bracket backslash s e t u p dot e x e 37. backslash x eight six backslash Ship backslash zero backslash A d d r e s s B o o k dot C o n t a c t s A d d r e s 38. Mine is here backslash backslash g a b e h a l l hyphen m o t h r a backslash S v r underscore O f f i c e s v r 39. h t t p colon slash slash teams slash sites slash T A G slash default dot aspx As always , any feedback , comments , 40. two thousand and five h t t p colon slash slash news dot com dot com slash i slash n e slash f d slash two zero zero three slash f d 41. backslash i n t e r n a l dot e x c h a n g e dot m a n a g e m e n t dot s y s t e m m a n a g e 42. I think Rich’s post highlights that we could have been more strategic about how the sum total of XBOX three hundred and sixtys were distributed . 43. 64X64 , 8K , one hundred and eighty four ASSEMBLY , DIGITAL VIDEO DISK DRIVE , INTERNAL , 8X , 44. So we are back to Extended MAPI and C++ because . Extended MAPI does not have a dual interface VB or VB .Net can read . 45. Thanks , Borge Trongmo Hi gurus , Could you help us E2K ASP guys with the following issue ? 46. Thanks J RGR Are you using the LDDM driver for this system or the in the build XDDM driver ? 47. Btw , you might remember me from our discussion about OWA automation and OWA readiness day a year ago . 21 48. empidtool . exe creates HKEY_CURRENT_USER Software Microsoft Office Common QMPersNum in the registry , queries AD , and the populate the registry with MS employment ID if available else an error code is logged . 49. 
Thursday, via a joint press release and Microsoft AI Blog, we will announce Microsoft's continued partnership with Shell leveraging cloud, AI, and collaboration technology to drive industry innovation and transformation.
50. Actress Fan Bingbing attends the screening of 'Ash Is Purest White (Jiang Hu Er Nv)' during the 71st annual Cannes Film Festival

D Prosody Similarity with Ground Truth

To further investigate the quality of prosody, we follow the prosody-similarity evaluation between synthesized and prompt speech in Section 5.2 and compare the generated speech with the ground-truth speech. We use the Pearson correlation and RMSE to measure the prosody match between generated and ground-truth speech. The results are shown in Table 12. We observe that NaturalSpeech 2 outperforms the baseline YourTTS by a large margin, which shows that NaturalSpeech 2 is much better in prosody similarity.

Table 12: The prosody similarity between the synthesized and ground-truth speech in terms of the correlation and RMSE on pitch and duration.

LibriSpeech        Pitch Correlation↑  RMSE↓    Duration Correlation↑  RMSE↓
YourTTS            0.77                51.78    0.52                   3.24
NaturalSpeech 2    0.81                47.72    0.65                   2.72

VCTK               Pitch Correlation↑  RMSE↓    Duration Correlation↑  RMSE↓
YourTTS            0.82                42.63    0.55                   2.55
NaturalSpeech 2    0.87                39.83    0.64                   2.50

E Ablation Study

In this section, we also compare the prosody similarity between the audio generated by each ablation model and the ground-truth speech in Table 13. Similar to the results obtained when comparing the ablation models against the prompt speech, we have the following observations: 1) The speech prompt is the most important factor for generation quality. 2) The cross-entropy loss and the query attention strategy are also helpful for high-quality speech synthesis.

Table 13: The ablation study of NaturalSpeech 2. The prosody similarity between the synthesized and ground-truth speech in terms of the correlation and RMSE on pitch and duration. "-" denotes that the model cannot converge.

Setting                  Pitch Correlation↑  RMSE↓    Duration Correlation↑  RMSE↓
NaturalSpeech 2          0.81                47.72    0.65                   2.72
w/o. diff prompt         -                   -        -                      -
w/o. dur/pitch prompt    0.80                55.00    0.59                   2.76
w/o. CE loss             0.79                50.69    0.63                   2.73
w/o. query attn          0.79                50.65    0.63                   2.73
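For the correlation/RMSE numbers in Tables 12 and 13, the comparison reduces to two standard statistics over aligned pitch or duration sequences. A minimal sketch is below; how the synthesized and ground-truth sequences are time-aligned (e.g., by sharing phoneme-level durations) is an assumption not spelled out in the text.

import numpy as np

def corr_and_rmse(generated, ground_truth):
    """Pearson correlation and RMSE between two aligned sequences
    (e.g., frame-level pitch or phoneme-level duration)."""
    x = np.asarray(generated, dtype=float)
    y = np.asarray(ground_truth, dtype=float)
    corr = np.corrcoef(x, y)[0, 1]
    rmse = float(np.sqrt(np.mean((x - y) ** 2)))
    return corr, rmse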
Neko: a Library for Exploring Neuromorphic Learning Rules Zixuan Zhao University of Chicago Nathan Wycoff Virginia Tech Neil Getty Argonne National Laboratory Rick Stevens Argonne National Laboratory & University of Chicago Fangfang Xia Argonne National Laboratory & University of Chicago 1 2 0 2 g u A 3 1 ] G L . s c [ 2 v 4 2 3 0 0 . 5 0 1 2 : v i X r a Figure 1: Neko overview. Key components in the neuromorphic learning library. ABSTRACT The field of neuromorphic computing is in a period of active explo- ration. While many tools have been developed to simulate neuronal dynamics or convert deep networks to spiking models, general software libraries for learning rules remain underexplored. This is partly due to the diverse, challenging nature of efforts to de- sign new learning rules, which range from encoding methods to gradient approximations, from population approaches that mimic the Bayesian brain to constrained learning algorithms deployed on memristor crossbars. To address this gap, we present Neko, a modular, extensible library with a focus on aiding the design of new learning algorithms. We demonstrate the utility of Neko in three exemplar cases: online local learning, probabilistic learning, and analog on-device learning. Our results show that Neko can replicate the state-of-the-art algorithms and, in one case, lead to significant outperformance in accuracy and speed. Further, it offers tools including gradient comparison that can help develop new algorithmic variants. Neko is an open source Python library that supports PyTorch and TensorFlow backends. CCS CONCEPTS • Computing methodologies → Machine learning algorithms; • Hardware → Neural systems. Publication rights licensed to ACM. ACM acknowledges that this contribution was authored or co-authored by an employee, contractor or affiliate of the United States government. As such, the Government retains a nonexclusive, royalty-free right to publish or reproduce this article, or to allow others to do so, for Government purposes only. ICONS ’21, July 27–29, 2021, PREPRINT © 2021 Copyright held by the owner/author(s). Publication rights licensed to ACM. ACM ISBN 978-1-4503-8691-3/21/07. . . $15.00 https://doi.org/10.1145/3477145.3477155 KEYWORDS Neuromorphic computing, learning rules, approximate gradients, Bayesian inference, Manhattan rule, open-source library ACM Reference Format: Zixuan Zhao, Nathan Wycoff, Neil Getty, Rick Stevens, and Fangfang Xia. 2021. Neko: a Library for Exploring Neuromorphic Learning Rules. In PREPRINT . ACM, New York, NY, USA, 5 pages. https://doi.org/10.1145/ 3477145.3477155 1 INTRODUCTION Deep learning is the prevailing paradigm for machine learning. Over the course of its meteoric rise, its many differences from human learning have become increasingly clear. Chief among these are gaps in data efficiency, robustness, generalizability, and energy effi- ciency — all unlikely to narrow with growing computation power alone. This has motivated a renewed search for brain-inspired learn- ing algorithms. However, the current software infrastructure needs improvement to support productive exploration. Two common choices today for designing novel learning algo- rithms are TensorFlow [1] and PyTorch [32]. These general deep learning frameworks provide powerful abstractions for calculating gradients and building deep neural networks, but there is no inter- mediate layer between these two levels. 
For high-level development, backpropagation is the only learning algorithm offered and is in fact coupled with the training process. Software in neuromorphic computing, on the other hand, has traditionally focused more on simulating neurons and spiking neu- ral networks [6, 8, 16, 41], interfacing with neuromorphic hardware [11, 28, 35, 39], and converting pre-trained deep learning models to spiking neural networks for inference [36, 37]. Learning has not been a key part of these libraries. The few supported learning rules such as spike-timing-dependent plasticity are not competitive on large problems. As a result, new learning algorithms are developed in independent codebases that are not easily reusable. ICONS ’21, July 27–29, 2021, PREPRINT Zixuan Zhao, Nathan Wycoff, Neil Getty, Rick Stevens, and Fangfang Xia In this work, we present Neko, a software library under active development for exploring learning rules. We build on the popular autograd frameworks, and our goal is to implement key building blocks to boost researcher productivity. By decoupling the learning rules from the training process, we aim to provide an abstraction model that enables mixing and matching of various design ideas. To arrive at the right abstraction level, we need to sample a wide range of learning algorithm research. Below are the three directions and exemplars we have prioritized in this initial code release. The first class of learning rules are gradient-based methods. They approximate backpropagation with various levels of biological plau- sibility [3, 24, 26, 27, 29, 31, 38, 40, 45]. From this category, we study the e-prop algorithm [7] in detail and provide a complete reimple- mentation. The second direction is based on the hypothesis that the brain keeps track of probabilistic distributions over weights and rewards [2, 10]. This line of exploration may offer important clues towards achieving learning efficiency and robustness in the face of uncertainty. We develop a sampling-based learning rule on spiking neural networks (SNN). The third class is concerned with hardware constraints on plasticity mechanisms. For this class, we include the classic example of Manhattan rule training for memristive crossbar circuits. In all three exemplars, we seek consistent implementation in the Neko library. 2 LIBRARY DESIGN The Neko library is designed to be modular, extensible, and easy to use. Users can select from a collection of neuron models and encoding methods to build a spiking or regular artificial neural network, and train it with one of the implemented learning rules. Alternatively, they could supply their own networks from PyTorch or Keras [9] or develop new learning algorithms based on the pro- vided intrinsics. The following code snippet provides an example of solving MNIST [23] with the e-prop algorithm on a recurrent network of 128 hidden adaptive leaky integrate-and-fire (ALIF) neurons. from neko . backend import pytorch_backend as backend rsnn = ALIF (128 , 10 , backend , task_type = ' classification ') model = Evaluator ( rsnn , loss = ' categorical_crossentropy ' , metrics =[ ' accuracy ', ' firing_rate ']) learning_rule = Eprop ( model , mode = ' symmetric ') trainer = Trainer ( learning_rule ) trainer . train ( x_train , y_train , epochs =30) Listing 1: Train an SNN model of ALIF neurons with e-prop. 
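To illustrate the kind of update rule involved here (independent of Neko's internal API), the following is a simplified numpy sketch of a symmetric e-prop step for the input weights of a single LIF recurrent layer: a low-pass presynaptic trace and a pseudo-derivative form the eligibility trace, which is combined with a local learning signal fed back through the readout weights. Adaptive thresholds (ALIF), recurrent weights, and firing-rate regularization are omitted, and all names and shapes are assumptions for illustration only.

import numpy as np

def pseudo_derivative(v, v_th=0.6, gamma=0.3):
    # Surrogate derivative of the spike nonlinearity w.r.t. membrane potential.
    return gamma * np.maximum(0.0, 1.0 - np.abs((v - v_th) / v_th))

def eprop_input_weight_update(x, v, y, y_target, W_out, alpha=0.9, kappa=0.9, lr=1e-3):
    """Online symmetric e-prop update for the input weights of a LIF layer.
    x: [T, n_in] inputs, v: [T, n_rec] membrane potentials,
    y, y_target: [T, n_out] readouts/targets, W_out: [n_out, n_rec]."""
    T, n_in = x.shape
    n_rec = v.shape[1]
    eps = np.zeros((n_rec, n_in))      # low-pass filtered presynaptic trace
    e_bar = np.zeros((n_rec, n_in))    # eligibility trace filtered by readout leak
    dW = np.zeros((n_rec, n_in))
    for t in range(T):
        eps = alpha * eps + x[t][None, :]              # eps_t = alpha * eps_{t-1} + x_t
        e = pseudo_derivative(v[t])[:, None] * eps     # e_t = psi_t * eps_t
        e_bar = kappa * e_bar + e
        L = W_out.T @ (y[t] - y_target[t])             # local learning signal (B = W_out^T)
        dW += L[:, None] * e_bar
    return -lr * dW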
The training process illustrated in this example can be broken down into a series of high-level Neko modules: the layer includes pre-implemented recurrent SNNs and adaptors for existing Keras and PyTorch models; the evaluator associates a model with a loss function and optional metrics; the learning rule implements back- propagation and a growing list of neuromorphic learning rules; and the trainer handles training logistics as well as special logic to apply multiple learning rules for gradient comparison between models. Besides these core components, auxiliary modules include the data loader, spike encoder, optimizer, and functions for loss, activation, and pseudo-derivatives calculations. To help users define custom algorithms, Neko also provides a unified API for accessing frequently used features in Tensor- Flow and PyTorch such as low-level tensor operations. Switching the backend is straightforward. This feature can detect occasional framework-dependent behavior and is useful for code verification and performance analysis. The multi-backend support is reminis- cent of the earlier Keras framework. However, Neko is different in that it provides more fine-grained abstraction layers such that users can replace the learning algorithm by changing a single line of code. Taken together, these features also simplify the process of porting code to hardware accelerators, since implementing a backend for the hardware is sufficient to run all models in Neko on it. 3 USE CASES In this section, we present results on the three representative learn- ing rules introduced earlier. We also provide gradient analysis as an example of Neko’s cross-cutting utilities that we are building to help design, debug, and compare new learning algorithms. 3.1 Credit assignment with local signals A key mystery in the brain is how it implements credit assignment. The standard backpropagation through time (BPTT) algorithm is unrealistic as we cannot expect a biological neuron to be aware of all past synaptic strengths. Bellec et al. [7] proposed e-prop, a local online learning algorithm for recurrent SNNs. The method exploits the mathematical formula of BPTT, deriving an approximation which only requires a recursive accumulative eligibility trace and a local learning signal. These properties make the algorithm one step closer to biologically realistic on-chip learning. In Neko, we implemented full-featured e-prop algorithms includ- ing the three variants: symmetric, random, and adaptive. Whereas the paper manually derived the e-prop formulas for some networks, we took a different approach: separating the model from the learn- ing rules. In the layer module, the regular recurrent neural networks and recurrent SNNs, with leaky integrate-and-fire (LIF) or ALIF neurons, were all defined as standard models. Meanwhile, they inherited from an Epropable class, which defined general symbolic gradient formulas according to recurrent cell dynamics. Specifying this extra information was all it took to perform e-prop, and in a network-agnostic way. This design enabled the error-prone formula derivation to be automated. It also sped up experiments with new network architectures or e-prop variants. We compared the Neko implementation of e-prop to the original implementation on the TIMIT benchmark [15] for framewise speech recognition. The authors reported the results on a hybrid network of 100 ALIF and 300 LIF neurons [7]. In our experiment, we used an ALIF-only network of 200 neurons and otherwise kept the setup identical. 
We report close reproduction accuracy in Fig. 2. Notably, Neko’s error rate dropped by 27%, after tuning regularization and batch size, while keeping the firing rate low at 10 Hz. To the best of our knowledge, this is the best SNN accuracy obtained with a local learning rule, which in fact reaches the level of an LSTM baseline trained with the precise gradients from BPTT ([7] Fig. S4). Additionally, Neko is faster (training time from Nvidia V100) and convenient for iterative development. Neko: a Library for Exploring Neuromorphic Learning Rules ICONS ’21, July 27–29, 2021, PREPRINT Figure 2: TIMIT results. We reproduce e-prop accuracy on speech recognition in Neko with a smaller network. Neko is faster with slight tuning and reduces error by 27% to reach the nonspiking baseline performance of a BPTT-trained LSTM model. 3.2 Probabilistic learning Bayesian statistics has captured much attention in the computa- tional neuroscience community, both as an explanation for neural behavior [22] as well as a means of performing inference in neural networks. In Neko, we develop a Hybrid Monte Carlo, or HMC [30], algorithm to perform Bayesian inference on spiking neural networks based on Metropolis-adjusted Langevin diffusion [34]. Fundamentally, HMC algorithms are simply Metropolis-Hastings samplers [19] where the proposal distribution is based on the gra- dient. Though spiking neurons are non-differentiable by definition, surrogate gradients can be defined by considering smoothed ver- sions of the spiking activation function [31]. State of the art learning algorithms for spiking neurons have used these surrogate gradients successfully, and we also find success in deploying them in HMC to form our proposal. In fact, this two-stage approach is especially appealing for spiking neurons, since the theoretical underpinnings of HMC place only very weak restrictions on what the proposal direction should be, and certainly do not require an exact gradient to be satisfied. Thus, from a theoretical perspective, running our algorithm for sufficiently long will result in a sample from our true posterior. Empirically, of course, it is not practical to explore the entire nonconvex, high-dimensional posterior. We therefore verify our implementation numerically. The MNIST-1D [18] data is a derivative of the popular MNIST dataset of handwritten digits which transforms the image recog- nition problem into a sequence learning problem (See Figure 3, Left). We train a spiking neural network with 1,000 hidden neurons using our proposed HMC algorithm1, and recorded the posterior mean as well as uncertainty for the train set examples. As shown in Figure 3 (Right), we find that the model displayed significantly more uncertainty on test examples for which its best guess was incorrect than when it was correct. This validates our algorithm, as we would like errors to be associated with high uncertainty. 1Using an adaptive step size [5] with a diffusion standard deviation of 0.01 scaled by the norm of the surrogate gradient, which was obtained via standard backpropagation. Figure 3: Uncertainty Quantification. Left: An example input representing the number 3 for the MNIST-1D data. Right: Poste- rior uncertainty among test examples which were correctly versus incorrectly predicted. Uncertainty is higher when errors are made. As future work, we intend to compare HMC and other MCMC algorithms to other probabilistic learning approaches such as Vari- ational Bayes [17] and Monte Carlo Dropout [14] within the Neko framework. 
3.3 Analog neural network training Memristors have emerged as a new platform for neuromorphic learning [20, 42]. These devices represent the synapse weights in the tunable conductance states of large crossbar architectures. Compared with digital implementations of neural networks, these analog circuits offer promising advantages in parallel processing, in-situ learning, and energy efficiency [13, 25]. However, they also place constraints on how the weights can be updated. A classic way to train these networks is with the Manhattan rule learning algorithm [44]. Although training with backpropagation on device is theoretically possible, the time consumption of tuning individual weights with feedback algorithm can be prohibitive, es- pecially for larger scale neural networks [4]. As an alternative, the Manhattan rule simply updates network weights by a fixed amount according to the sign of the gradients, where the actual change magnitude may depend on the state of the material. This learn- ing rule has been applied successfully to simple machine learning benchmarks in simulated or fully hardware-implemented analog neural networks [43]. Neko implements a family of Manhattan rules to simulate the training process. It includes the basic algorithm and an extended version that supports a specified range of material conductance constraints. Because these learning rules do not have special re- quirements for the network architecture, users can directly supply existing Keras and PyTorch models with Neko’s adaptors. Our pre- liminary results show that both the simple Manhattan rule and the constrained version could train the MNIST dataset up to 96% accuracy on a simple 2-layer (with 64, 32 neurons) multi-layer per- ceptron, which is 2% lower than backpropagation. 3.4 Gradient comparison analysis Many learning rules depend on gradients explicitly or implicitly. Yet, gradient estimates are not intuitive to developers. Debugging learning rules sometimes require noticing the subtle differences CorrectIncorrect0.00.20.40.60.81.01.21.4Mean Cross EntropyUncertainty vs Accuracy ICONS ’21, July 27–29, 2021, PREPRINT Zixuan Zhao, Nathan Wycoff, Neil Getty, Rick Stevens, and Fangfang Xia Table 1: Testing two classification exemplars using temporal spike encoding schemes Encoding Surgery1 ECG2 None TC SF MW Benchmark 0.563 0.685 0.687 0.699 0.675 0.813 0.620 0.763 1A surgery kinematic dataset measuring the positions and orientations of surgical instruments during labeled simulated exercises. Data available upon request. 2A public ECG heartbeat categorization dataset [21] subsampled for class balance. 0.766 0.811 Figure 4: Gradient analysis tool. This example illustrates the differences in approximate gradients among e-prop variants for training MNIST: (top) a snapshot of the distributions of gradient deviations, (bottom) how the gradient deviations change over time. in gradient estimates and follow their trends over the course of training. In Neko, we have designed a gradient comparison tool that can enumerate the gradients or weight changes for multiple learning rules with the same model state and input data. It can also track this information batch by batch. Visualizing this information can help inspect approximation quality differences caused by algo- rithm tweaks and identify equivalence in formula transformations. Outside the context of debugging, the change in gradient estimates throughout the training process can also reveal potential biases and other properties of the learning algorithm. 
The gradient comparison tool is made possible by Neko’s separa- tion of the learning algorithm and trainer module. It is implemented as a special trainer that takes multiple learning rules and clones of the same model. While the primary model follows the usual training process, the others’ parameters are synced with the primary at each training step, and the weight changes are saved. The equivalence of gradient changes and weight changes can be established using the built-in naive optimizer which applies gradients directly without learning rate. Gradient analysis offers insights into how learning rules behave relative to each other and backpropagation. Fig. 4 illustrates this with an example of training spiking MNIST models with three vari- ants of e-prop. While symmetric e-prop was the best at gradient approximation, the relationship between random and adaptive ver- sions was somewhat unexpected. The adaptive version produced gradients with larger deviation and bias, which could explain its weaker performance on the benchmark (not shown). 4 SUPPORTING UTILITIES To further enable neuromorphic centric exploration, we integrate the SpikeCoding toolbox [12] which enables simple encoding of continuous value sequences into spikes with nearly a dozen algo- rithms. We present experimental results (Table 1) on two temporal data applications using three encoding schemes [33]: • Temporal contrast (TC) encoding compares the absolute value of a signal with a threshold derived by the derivative and standard deviation of the full sequence multiplied by a tun- able parameter. • Step-forward (SF) encoding generates positive/negative spikes by comparing values in a sequence to a moving baseline plus a tunable threshold, which is initially the first value of the sequence and updated each spike. • Moving window (MW) encoding uses a similar moving base- line and threshold to determine spiking but which is set to the mean of values in a tunable time window. All models were trained with e-prop learning except for the Benchmark RNN model trained with BPTT. While we note that there was often a sizable decrease in accuracy using these encod- ings, the sparsity of the input signal was significantly increased. Spike encodings may enable the use and development of learning algorithms more suited to or dependent on event based input. 5 CONCLUSIONS We presented the design of a coding library for researching learning algorithms. Through three examples, we demonstrated its capability and ease of use in diverse scenarios. Our reference implementa- tions introduced a new state-of-the-art in local temporal credit assignment with SNNs, a sampling-based learning rule for esti- mating weight and prediction posteriors, as well as simulations for constrained training of analog neural networks on memristive hard- ware. Additionally, we showed a cross-cutting example to support learning rule inspection with gradient comparison analysis. Two directions emerge for future work. First, we will extend learning rules to complex neuron models (e.g., dendritic computa- tion, structured neurons) and network architecture. Second, we will port learning algorithms to emerging hardware platforms. Both pro- cesses will be facilitated by the abstraction of learning algorithms and the multi-backend support in the Neko library2. 2https://github.com/cortical-team/neko Neko: a Library for Exploring Neuromorphic Learning Rules ICONS ’21, July 27–29, 2021, PREPRINT ACKNOWLEDGMENTS We thank Sihong Wang and Shilei Dai for helpful discussions. 
This work is partially supported by Laboratory Directed Research and Development (LDRD) funding from Argonne National Laboratory, provided by the Director, Office of Science, of the U.S. Department of Energy under Contract No. DE-AC02-06CH11357. REFERENCES [1] Martín Abadi, Paul Barham, Jianmin Chen, Zhifeng Chen, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Geoffrey Irving, Michael Isard, et al. 2016. Tensorflow: A system for large-scale machine learning. In 12th {USENIX} symposium on operating systems design and implementation ({OSDI} 16). 265–283. [2] Laurence Aitchison, Jannes Jegminat, Jorge Aurelio Menendez, Jean-Pascal Pfister, Alexandre Pouget, and Peter E Latham. 2021. Synaptic plasticity as Bayesian inference. Nature Neuroscience 24, 4 (2021), 565–571. [3] Mohamed Akrout, Collin Wilson, Peter C Humphreys, Timothy Lillicrap, and Douglas Tweed. 2019. Deep learning without weight transport. arXiv preprint arXiv:1904.05391 (2019). [4] Fabien Alibart, Ligang Gao, Brian D Hoskins, and Dmitri B Strukov. 2012. High precision tuning of state for memristive devices by adaptable variation-tolerant algorithm. Nanotechnology 23, 7 (Jan 2012), 075201. https://doi.org/10.1088/0957- 4484/23/7/075201 [5] Christophe Andrieu and Johannes Thoms. 2008. A tutorial on adaptive MCMC. Statistics and Computing 18, 4 (01 Dec 2008), 343–373. https://doi.org/10.1007/ s11222-008-9110-y [6] Trevor Bekolay, James Bergstra, Eric Hunsberger, Travis DeWolf, Terrence C Stewart, Daniel Rasmussen, Xuan Choo, Aaron Voelker, and Chris Eliasmith. 2014. Nengo: a Python tool for building large-scale functional brain models. Frontiers in Neuroinformatics 7 (2014), 48. [7] Guillaume Bellec, Franz Scherr, Anand Subramoney, Elias Hajek, Darjan Salaj, Robert Legenstein, and Wolfgang Maass. 2020. A solution to the learning dilemma for recurrent networks of spiking neurons. Nature Communications 11, 1 (2020), 1–15. [8] Nicholas T Carnevale and Michael L Hines. 2006. The NEURON book. Cambridge University Press. [9] François Chollet et al. 2015. Keras. https://keras.io. [10] Will Dabney, Zeb Kurth-Nelson, Naoshige Uchida, Clara Kwon Starkweather, Demis Hassabis, Rémi Munos, and Matthew Botvinick. 2020. A distributional code for value in dopamine-based reinforcement learning. Nature 577, 7792 (2020), 671–675. [11] Andrew P Davison, Daniel Brüderle, Jochen M Eppler, Jens Kremkow, Eilif Muller, Dejan Pecevski, Laurent Perrinet, and Pierre Yger. 2009. PyNN: a common interface for neuronal network simulators. Frontiers in Neuroinformatics 2 (2009), 11. [12] Julien Dupeyroux. 2021. A toolbox for neuromorphic sensing in robotics. arXiv:2103.02751 [cs.RO] [13] Elliot J Fuller, Scott T Keene, Armantas Melianas, Zhongrui Wang, Sapan Agarwal, Yiyang Li, Yaakov Tuchman, Conrad D James, Matthew J Marinella, J Joshua Yang, et al. 2019. Parallel programming of an ionic floating-gate memory array for scalable neuromorphic computing. Science 364, 6440 (2019), 570–574. [14] Yarin Gal and Zoubin Ghahramani. 2016. Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning. In Proceedings of The 33rd International Conference on Machine Learning (Proceedings of Machine Learning Research, Vol. 48), Maria Florina Balcan and Kilian Q. Weinberger (Eds.). PMLR, New York, New York, USA, 1050–1059. http://proceedings.mlr.press/v48/gal16. html [15] J. Garofolo, Lori Lamel, W. Fisher, Jonathan Fiscus, D. Pallett, N. Dahlgren, and V. Zue. 1992. TIMIT Acoustic-phonetic Continuous Speech Corpus. 
Linguistic Data Consortium (11 1992). [16] Marc-Oliver Gewaltig and Markus Diesmann. 2007. Nest (neural simulation tool). Scholarpedia 2, 4 (2007), 1430. [17] Alex Graves. 2011. Practical Variational Inference for Neural Networks. In Proceedings of the 24th International Conference on Neural Information Processing Systems (Granada, Spain) (NIPS’11). Curran Associates Inc., Red Hook, NY, USA, 2348–2356. [18] Sam Greydanus. 2020. Scaling down Deep Learning. arXiv:2011.14439 [cs.LG] [19] Peter D. Hoff. 2009. A First Course in Bayesian Statistical Methods (1st ed.). Springer Publishing Company, Incorporated. [20] Miao Hu, Hai Li, Yiran Chen, Qing Wu, Garrett S Rose, and Richard W Linderman. 2014. Memristor crossbar-based neuromorphic computing system: A case study. IEEE Transactions on Neural Networks and Learning Systems 25, 10 (2014), 1864– 1878. [21] Mohammad Kachuee, Shayan Fazeli, and Majid Sarrafzadeh. 2018. Ecg heartbeat classification: A deep transferable representation. In 2018 IEEE International Conference on Healthcare Informatics (ICHI). IEEE, 443–444. [22] David C. Knill and Alexandre Pouget. 2004. The Bayesian brain: the role of uncertainty in neural coding and computation. Trends in Neurosciences 27, 12 (01 Dec 2004), 712–719. https://doi.org/10.1016/j.tins.2004.10.007 [23] Yann LeCun. 1998. The MNIST database of handwritten digits. http://yann. lecun. com/exdb/mnist/ (1998). [24] Jun Haeng Lee, Tobi Delbruck, and Michael Pfeiffer. 2016. Training deep spiking neural networks using backpropagation. Frontiers in Neuroscience 10 (2016), 508. [25] Can Li, Daniel Belkin, Yunning Li, Peng Yan, Miao Hu, Ning Ge, Hao Jiang, Eric Montgomery, Peng Lin, Zhongrui Wang, et al. 2018. Efficient and self-adaptive in-situ learning in multilayer memristor neural networks. Nature communications 9, 1 (2018), 1–8. [26] Timothy P Lillicrap, Daniel Cownden, Douglas B Tweed, and Colin J Akerman. 2016. Random synaptic feedback weights support error backpropagation for deep learning. Nature communications 7, 1 (2016), 1–10. [27] Timothy P Lillicrap, Adam Santoro, Luke Marris, Colin J Akerman, and Geoffrey Hinton. 2020. Backpropagation and the brain. Nature Reviews Neuroscience 21, 6 (2020), 335–346. [28] Chit-Kwan Lin, Andreas Wild, Gautham N Chinya, Yongqiang Cao, Mike Davies, Daniel M Lavery, and Hong Wang. 2018. Programming spiking neural networks on Intel’s Loihi. Computer 51, 3 (2018), 52–61. [29] Owen Marschall, Kyunghyun Cho, and Cristina Savin. 2020. A unified framework of online learning algorithms for training recurrent neural networks. Journal of Machine Learning Research 21, 135 (2020), 1–34. [30] Radford M. Neal. 2011. MCMC Using Hamiltonian Dynamics. CRC Press. https: //doi.org/10.1201/b10905-7 [31] Emre O Neftci, Hesham Mostafa, and Friedemann Zenke. 2019. Surrogate gradi- ent learning in spiking neural networks: Bringing the power of gradient-based optimization to spiking neural networks. IEEE Signal Processing Magazine 36, 6 (2019), 51–63. [32] Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, et al. [n.d.]. PyTorch: An imperative style, high-performance deep learning library. arXiv preprint arXiv:1912.01703 ([n. d.]). [33] Balint Petro, Nikola Kasabov, and Rita M. Kiss. 2020. Selection and Optimiza- tion of Temporal Spike Encoding Methods for Spiking Neural Networks. IEEE Transactions on Neural Networks and Learning Systems 31, 2 (Feb. 2020), 358–370. 
https://doi.org/10.1109/tnnls.2019.2906158 [34] P. J. Rossky, J. D. Doll, and H. L. Friedman. 1978. Brownian dynamics as smart Monte Carlo simulation. The Journal of Chemical Physics 69, 10 (1978), 4628–4633. https://doi.org/10.1063/1.436415 arXiv:https://doi.org/10.1063/1.436415 [35] Bodo Rueckauer, Connor Bybee, Ralf Goettsche, Yashwardhan Singh, Joyesh Mishra, and Andreas Wild. 2021. NxTF: An API and Compiler for Deep Spiking Neural Networks on Intel Loihi. arXiv preprint arXiv:2101.04261 (2021). [36] Bodo Rueckauer and Shih-Chii Liu. 2018. Conversion of analog to spiking neural networks using sparse temporal coding. In 2018 IEEE International Symposium on Circuits and Systems (ISCAS). IEEE, 1–5. [37] Bodo Rueckauer, Iulia-Alexandra Lungu, Yuhuang Hu, Michael Pfeiffer, and Shih-Chii Liu. 2017. Conversion of continuous-valued deep networks to efficient event-driven networks for image classification. Frontiers in Neuroscience 11 (2017), 682. [38] João Sacramento, Rui Ponte Costa, Yoshua Bengio, and Walter Senn. 2018. Den- dritic cortical microcircuits approximate the backpropagation algorithm. arXiv preprint arXiv:1810.11393 (2018). [39] Jun Sawada, Filipp Akopyan, Andrew S Cassidy, Brian Taba, Michael V Debole, Pallab Datta, Rodrigo Alvarez-Icaza, Arnon Amir, John V Arthur, Alexander Andreopoulos, et al. 2016. Truenorth ecosystem for brain-inspired computing: scalable systems, software, and applications. In SC’16: Proceedings of the Inter- national Conference for High Performance Computing, Networking, Storage and Analysis. IEEE, 130–141. [40] Andrew Sornborger, Louis Tao, Jordan Snyder, and Anatoly Zlotnik. 2019. A pulse- gated, neural implementation of the backpropagation algorithm. In Proceedings of the 7th Annual Neuro-inspired Computational Elements Workshop. 1–9. [41] Marcel Stimberg, Romain Brette, and Dan FM Goodman. 2019. Brian 2, an intuitive and efficient neural simulator. eLife 8 (2019), e47314. [42] Andy Thomas. 2013. Memristor-based neural networks. Journal of Physics D: Applied Physics 46, 9 (2013), 093001. [43] Peng Yao, Huaqiang Wu, Bin Gao, Jianshi Tang, Qingtian Zhang, Wenqiang Zhang, J Joshua Yang, and He Qian. 2020. Fully hardware-implemented memristor convolutional neural network. Nature 577, 7792 (2020), 641–646. [44] Elham Zamanidoost, Farnood M. Bayat, Dmitri Strukov, and Irina Kataeva. 2015. Manhattan rule training for memristive crossbar circuit pattern classifiers. In 2015 IEEE 9th International Symposium on Intelligent Signal Processing (WISP) Proceedings. 1–6. https://doi.org/10.1109/WISP.2015.7139171 [45] Friedemann Zenke and Surya Ganguli. 2018. Superspike: Supervised learning in multilayer spiking neural networks. Neural computation 30, 6 (2018), 1514–1541.
JOURNAL OF LATEX CLASS FILES, VOL. 14, NO. 8, AUGUST 2021 1 Self-Training Vision Language BERTs with a Unified Conditional Model Xiaofeng Yang, Fengmao Lv, Fayao Liu, Guosheng Lin 3 2 0 2 n a J 9 1 ] V C . s c [ 2 v 0 1 0 2 0 . 1 0 2 2 : v i X r a Abstract—Natural language BERTs are trained with language corpus in a self-supervised manner. Unlike natural language BERTs, vision language BERTs need paired data to train, which restricts the scale of VL-BERT pretraining. We propose a self-training approach that allows training VL-BERTs from unlabeled image data. The proposed method starts with our unified conditional model – a vision language BERT model that can perform zero-shot conditional generation. Given different conditions, the unified conditional model can generate captions, dense captions, and even questions. We use the labeled image data to train a teacher model and use the trained model to generate pseudo captions on unlabeled image data. We then combine the labeled data and pseudo labeled data to train a student model. The process is iterated by putting the student model as a new teacher. By using the proposed self-training approach and only 300k unlabeled extra data, we are able to get competitive or even better performances compared to the models of similar model size trained with 3 million extra image data. I. INTRODUCTION Large scale pretraining has become the dominating approach in various natural language processing tasks. The success of large scale pretraining is due to a large amount of language training data available everywhere and the self-training algo- rithm. Unlike language pretraining, vision language pretraining requires paired image and language data, which restricts the scale of vision language BERTs’ pretraining. In this paper, we propose a self-training approach that allows to pretrain VL- BERTs using unlabeled image data. Self-training is usually done by iterating the following three steps: 1) training with labeled data, 2) generating pseudo labels for unlabeled data, 3) mixing the labeled data and unlabeled data with pseudo labels to retrain the network. However, the self-training of vision language BERTs is nontrivial due to the following reasons. First, although auto-encoding models (e.g., BERTs [1], [2]) perform well on the natural language understanding and image language understanding tasks, they cannot be directly applied to the generation task without fine- tuning [3]. In practice, it is difficult to generate pseudo labels for unlabeled data using pretrained BERTs in the zero-shot Corresponding author: Guosheng Lin. Xiaofeng Yang and Guosheng Lin are with School of Computer Science and Engineering, Nanyang Technological University (NTU), Singapore 639798 (email: [email protected], [email protected]) Fengmao Lv is with School of Computing and Artificial Intelligence, feng- Southwest Jiaotong University, Chengdu 611756, China (email: [email protected]) Fayao Liu is with Agency for Science, Technology and Research (A*STAR), Singapore 138632 (email: [email protected]) Fig. 1. An example of generated image descriptions. The original image is selected from Conceptual Caption. Given different condition flags, our proposed UCM model is able to generate diverse image descriptions, such as COCO caption, dense caption, and questions. It’s clear that the generated contents have different styles. Compared with the originally provided captions, the generated ones could better describe the picture contents. setting. 
Although these models can be finetuned to perform generation tasks, the zero-shot generation of pseudo labels is important since it saves the time of extra finetuning and avoids adding additional bias from the finetuning datasets. Second, current common practice in vision language BERT pretraining uses various image descriptions to train, such as image captions, dense captions and questions. Those image descriptions have significant differences, making it difficult for an unconditional model to learn to generate adequate pseudo captions for unlabeled images. Hence, although self-training has shown its effectiveness in various tasks [4], [5], how to use it effectively in training vision language BERTs is not yet studied. To this end, we propose the Unified Conditional Model (UCM) and a set of vision language BERT self-training methods to tackle the above issues. Compared with previous methods, our model has the following advantages: First, our method combines auto-encoding training [1], [2] and auto- regressive training [6] in a unified framework, which enables our method to perform well on natural language understanding tasks and at the same time effectively generate pseudo labels. Second, we propose a novel conditional training method that enables our model to conditional generate various types of captions, including COCO style captions, VG style dense captions and questions. Unified Conditional Model (UCM). Compared with tra- ditional vision language BERTs, our proposed UCM has Copyright © 2023 IEEE. Personal use of this material is permitted. However, permission to use this material for any other purposes must be obtained from the IEEE by sending an email to [email protected]. Original CC caption:why geographical feature category is the perfect resort for Families !Condition: Coco Captiona boat floating on top of water next to a pier.a ship docked next to a dock on a clear day.an old fashionedboat in a marina near a boat dock.Condition: Dense captionboat on water surface.the letter is in white.boat docked in harbor.Condition: Questionthe boat is on where?on which side is the boat? JOURNAL OF LATEX CLASS FILES, VOL. 14, NO. 8, AUGUST 2021 2 two unique properties. First, the model is able to generate different contents based on a condition flag input, such as image captions, dense captions, and questions. Second, the condition flag can be used as an identifier to help down-stream finetuning. The proposed UCM shares similar model structures with existing 2-stream vision language BERT models [2], [7]. Specifically, it contains one image encoder, two language encoders with shared weights, and several layers of cross attention layers. In training, different data types are assigned to their condition signals. The model is trained with both bi-directional prediction masks and one-directional predic- tion masks in parallel. For bi-directional prediction masks, the model performs conditional masked language modeling prediction, masked object prediction, image-text matching, and an auxiliary question answering loss. For one-directional prediction masks, the model performs one-directional masked conditional language modeling and masked object prediction tasks. When the model is used to generate pseudo labels for unlabeled images, the model will run forward propagation with one-directional prediction masks only. The condition signal enables the model to generate diverse descriptions for pictures. Fig. 1 shows an example of generated image descriptions using different condition flags. 
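To make the conditioning mechanism concrete, the sketch below shows how a condition flag can be inserted after [CLS] and how the two branches can share one set of weights while differing only in their attention masks. This is our own PyTorch illustration with assumed token ids, not the authors' released code.

```python
import torch

CLS_ID, SEP_ID = 101, 102                          # assumed special-token ids
CND_IDS = {"coco": 1, "dense": 2, "question": 3}   # assumed condition-flag ids

def build_inputs(word_ids, condition):
    """[CLS] [CND] w_1 ... w_T [SEP], as described for UCM."""
    return torch.tensor([CLS_ID, CND_IDS[condition]] + list(word_ids) + [SEP_ID])

def bidirectional_mask(seq_len):
    # Understanding branch: every token may attend to every other token.
    return torch.ones(seq_len, seq_len).bool()

def one_directional_mask(seq_len):
    # Generation branch: lower-triangular (causal) mask, so each token only
    # sees itself and the tokens before it.
    return torch.tril(torch.ones(seq_len, seq_len)).bool()

tokens = build_inputs([7592, 2088, 999], condition="coco")
attn_full = bidirectional_mask(tokens.numel())
attn_causal = one_directional_mask(tokens.numel())
# The same language encoder is run with attn_full for understanding and with
# attn_causal for generation; only the mask differs, so no extra parameters
# are introduced and inference cost matches a standard two-stream VL-BERT.
```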
When the model is used for finetuning image language understanding tasks, only the bi-directional mask is used. During finetuning, we use the condition flag as prior knowledge for finetuning. For example, when finetuning VQA tasks, the input is given an additional condition flag to show the input is a question. Results show that the presence of condition flags improves down-stream finetuning performance. Vision Language BERT Self-Training. The self-training method is used in the pretraining stage to further enlarge the scale of data that can be used in pretraining. Our self- training approach follows the self-training pipeline with extra optimization for vision language BERTs. Generally, the self- training process is done in three steps. First, we use the labeled image data to train a teacher model and then use the trained model to generate pseudo labels on unlabeled image data. We then combine the labeled data and pseudo labeled data to train a student model. Finally, the process is iterated by putting the student model as a new teacher. In our task, the pseudo labels are generated COCO style image captions and VG style dense captions by UCM. In order to generate high quality and diverse pseudo labels, we propose three methods. First, we randomly mask object regions when generating captions. This method makes sure the model can focus on different visual areas when describing the images. Second, we randomly sample a word from the top-K predictions in each prediction step, such that even for the same image, the model could generate various outputs. Finally, we use the condition flag to control the contents generated. We show both qualitative and quantitative comparisons in experiments section. Experimental-wise, besides the commonly used COCO and VG datasets, we train our model with only 300k extra un- labeled data from Conceptual Caption [8] by removing the provided captions. The original Conceptual Caption dataset provides machine-generated captions. They are noisy [9] and often used as out-of-domain training data [10]. The model could out-perform the model trained with the whole three million extra data in various down-stream finetuning tasks. Also, we provide comprehensive ablation studies of the train- ing settings. To summarize our contributions: • We propose the first Unified Conditional BERT model that could perform zero-shot conditional image-based language generation. Traditional bi-directional vision lan- guage models are unable to be used to generate languages directly and they are not conditional, such that the users can’t control the generation styles. • We propose a self-training method for using unlabeled images in vision language pretraining. To the best of our knowledge, this is the first work using self-training in vision language pretraining. • With only 300k extra image data, we achieve competitive or better performances within models with similar model size trained with 3 million extra data. II. RELATED WORK A. Vision Language Pretraining Traditional vision language methods build stand-alone mod- els to solve VQA [11]–[16], captioning [17]–[19], naviga- tion [20] and grounding [21] tasks. The success of large scale pretraining in NLP [1] motivates the attempts of developing similar models in vision language. Original pretrained lan- guage models [1] use a single transformer [22] to encode language words and positions. In the situations of vision + lan- guage, there are usually two common choices: the one-stream methods and two-stream methods. 
Two-stream methods, for example ViLBERT [2], LXMERT [7] and 12in1 [23], use two transformers to encode images and languages separately. After that, there will usually be a cross-attention transformer to combine the features from the two branches. One-stream methods, for example VisualBERT [24], Unicoder-VL [25] , Uniter [10] and Oscar [26], process vision and language features with a single transformer encoder. In this case, the visual and language information share the same attention weights. Compared with the two-stream methods, the one- stream methods require more working memories and usually perform better than two-stream methods. The one-stream methods usually have a smaller model size. Our work follows the two-stream network design of LXMERT [7] and extends the single language encoder of LXMERT [7] to two shared- weight language encoders that process the one-directional mask and two-directional mask at the same time. This network design allows our network to generalize better on generation tasks. Although BERT is a form of language model, same as natural language BERTs [1], [3], the above vision language BERTs can not be used directly to generate languages. The most straightforward reason is that BERT learns bidirectional contexts, while generation is one-directional. VLP [27] pro- poses to train vision language models with both bi-directional and one-directional masks, such that the model can be used for both VQA and image captioning. Compared to previous work, our model has two unique properties: First, it is able to perform conditional generation, namely generating specific JOURNAL OF LATEX CLASS FILES, VOL. 14, NO. 8, AUGUST 2021 3 Fig. 2. A detailed illustration of proposed UCM. During training, given image regional features and language embeddings, we process the language embeddings through the bi-directional language encoder and the one-directional language encoder. The two language encoders share the same weights. The bi-direction and one-directional branches are conditioned by using a normal mask and a triangular mask. The images are processed by the image encoder. Finally, the cross- attention layers merge visual features with the outputs from both language encoders. We use a rectangle mask for one-directional prediction in cross-attention layers, such that only the positions before [MASK] token could see visual features. contents based on a condition signal. Second, we use the pretrained model to perform zero-shot language generation, without extra finetuning. B. Self-Training Self-training methods [4], [28] first use labeled data to train a teacher model, then use the teacher model to generate pseudo labels for unlabeled data and finally use the labeled data and [4] pseudo labeled data to jointly train a student model. identifies the importance of add noise in self-training of image classification tasks. Self-training also improves object detec- tion and semantic segmentation results compared with pre- training [5]. In machine translation [29]–[31], self-trainings show their effectiveness on various datasets. We provide a set of self-training algorithms and give de- tailed ablation studies. III. METHOD In this section, we describe our method in two folds: our proposed unified conditional model and the self-training algorithms. For the unified conditional model subsection, we first introduce the model overview including model structures and important tokens. Then we introduce the training tasks and training losses. 
For the self-training algorithms subsection, we introduce the technical details of our proposed self-training algorithms for vision language models. A. Unified Conditional Model (UCM) 1) Model Overview: The overall structure of our model is illustrated in Figure 2. For the base network, we briefly follow the 2-stream model as used in [7] and extend it to our unified conditional model. Specifically, the model contains 9 layers of language transformers, 5 layers of object region transformers, and 5 layers of cross transformers. Given a sentence description, we tokenize it into WordPiece tokens, pad them with Classification [CLS] and Separation [SEP] tokens and randomly mask them with Mask [MASK] tokens. We add a condition token [CND] after the [CLS] token. The masked tokens are then passed to an embedding layer. We process the language embeddings through the bi-directional language encoder and the one-directional language encoder, same as previous works [27], [32]. The two language en- coders share the same weights. The bi-direction and one- directional branches are distinguished by using an empty mask (bi-directional mask) and a triangular mask (one-directional mask) [33]. Given a one-directional mask, the tokens can only observe the tokens before themselves in the attention modules, which makes the module more capable of doing generation tasks. Given a bi-directional mask, the tokens can observe both the tokens after and before themselves. Experiments in BERT [1] prove that this design works better on understanding tasks, for example VQA tasks. The images are processed by the image encoder. After that, the bi-directional output and one-directional output are merged with image output through cross-attention layers [2], [7]. For the cross-attention layer, we use a rectangle mask for the one-directional prediction branch, such that only the position before [MASK] could attend to image features. During inference, our model is identical to traditional 2-stream vision language BERT models without any extra computational cost. When doing image-language understanding tasks, for example finetuning visual question LanguageEncoderLanguageEncoderVisualEncoderShare WeightVisualFeaturesLanguageEmbeddingCrossEncoderVisualOutputBi-directionalLanguage OutputOne-directionalLanguage OutputVisual LossBi-directional LossOne-directional LossCondition JOURNAL OF LATEX CLASS FILES, VOL. 14, NO. 8, AUGUST 2021 4 answering, the model runs forward propagate using the bi- directional mask only. When performing image-language gen- eration tasks, for example generating pseudo labeling for unannotated images, the model runs forward propagate using the one-directional mask only. 2) Training Tasks: Conditional Masked Language Mod- eling (CMLM) The CMLM task is for both bi-directional prediction and one-directional prediction. Given the image regional features v = {v1, ..., vK}, the language words w = {w1, ..., wT } and the condition c, for bi-directional prediction task, we randomly mask the language words at a ratio of 0.15. Once the position is masked, for 80%, similar to BERT [1], we change the position to [MASK] token and for 10% of chance, we change to position to random word and keep the original content. The loss of bi-directional CMLM is defined as the negative log-likelihood of predicting the masked words given conditions and all other words except the mask words: LCMLM-Bi(θ) = −E(w,v) log Pθ(wm|w/m, v, c) . (1) For one-directional CMLM task, we randomly mask 1 word from each sentence. 
The masked word could also be the period symbol. The prediction of masked word is based on the words before the current position: LCMLM-One(θ) = −E(w,v) log Pθ(wm|w<m, v, c) , (2) where θs in the above two equations represent the model parameters. Symbol < represent all words before position m. The Bi-directional CMLM and the one-directional CMLM share the same model parameters. Image-Text Matching The matching task is only done for the bi-directional prediction branch. At 50% possibility, we assign a fake sentence to the image. The fake sentence is generated by randomly sampling a caption from other images. Specifically, we use the final feature at position [CLS] to represent the summary of current visual and language input. We use this feature to classify whether the current input text and image are matched. Auxiliary Question Answering The QA task is only done for the bi-directional prediction branch. If the sampled text is a question, we use the feature at position [CLS] and a QA header to generate its answer and calculate its loss based on classification. Masked Object and Attributes Modeling (MOAM) The MOAM task is for both bi-directional prediction branch and one-directional prediction branch. The object features are always bidirectional visible for both branches. We randomly mask the visual regions at a ratio of 0.15. Once the region is masked, for 80%, we change the feature to zero and for 10% of chance, we change the feature to a random feature sampled from the dataset or keep the original feature. The loss of MOAM is defined as the negative log-likelihood of predicting the masked regions’ class and attributes given all words except the masked position: LMOAM(θ) = −E(v,w) log Pθ(vm|v/m, w) . (3) Fig. 3. The self-training algorithm. Our self-training approach is done by first training a UCM model with the labeled annotations and then iterating two steps: generating pseudo labeling on unlabeled data and retraining with mixed data. When generating pseudo labels, randomly sampling of language words, randomly masking image regions and condition flag are used as data augmentations. Here, the ground-truth regional classes and attribute classes are hard labels generated by Faster-RCNN [11] prediction. The MOAM losses from the two prediction branches are averaged when calculating gradients. Masked Feature Regression (MFR) For each masked region, besides predicting the labels and attributes of that region, we also perform masked feature regression to recover its original visual feature: LMFR(θ) = (cid:107)vm − ˆvm(cid:107)2 2 , (4) where ˆv are the groundtruth regional features. B. Self-Training Algorithms for Vision Language BERTs In this section, we talk about the self-training algorithm. Figure 3 illustrates the training process of our algorithm. We first train a UCM model with the human labeled data from COCO and VG datasets. Then we repeat two steps: generating pseudo labeling on Conceptual Caption unlabeled data and retraining the UCM model with mixed data from COCO, VG and Conceptual Caption. Train UCM with labeled data. We first train UCM using captioning annotations in MSCOCO, dense captioning annotations in Visual Genome and questions in VQA [34] and GQA dataset [35]. The trained UCM is able to generate the above three different styles of content. Annotate unlabeled data with trained UCM. We then use the trained UCM to generate pseudo labels on images from Conceptual Captions dataset. 
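Before detailing each step, the overall loop illustrated in Figure 3 can be condensed into a short skeleton. This is our own illustrative sketch, not the authors' released code: the `pretrain` and `annotate` routines stand in for UCM pretraining and pseudo-caption generation, and are passed in as placeholders.

```python
def self_train(pretrain, annotate, labeled_data, unlabeled_images, iterations=2):
    """Teacher-student self-training skeleton (illustrative only).

    pretrain(data, init_weights) -> trained UCM model
    annotate(model, image)       -> list of pseudo captions for one image
    """
    # Step 1: train the initial teacher on human-labeled data (COCO, VG, VQA, GQA).
    teacher = pretrain(labeled_data, init_weights=None)
    for _ in range(iterations):
        # Step 2: annotate unlabeled images with the current teacher.
        pseudo_data = [(img, annotate(teacher, img)) for img in unlabeled_images]
        # Step 3: retrain a student on the mixed labeled + pseudo-labeled data,
        # initialized from the previous weights rather than from scratch.
        student = pretrain(labeled_data + pseudo_data, init_weights=teacher)
        teacher = student  # the student becomes the next teacher
    return teacher
```

The rest of this subsection fills in the two non-trivial pieces: how diverse pseudo captions are generated and how the mixed retraining is set up.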
Conceptual Caption dataset pro- vides one caption for each image by default, while the default captions are machine-generated, not of good quality [9] and are often used as out-of-domain training data [10]. Therefore, we remove the original captions and use the data as unlabeled image data. To boost the performance of self-training, the generated captions need to be diverse. We introduce 3 methods to generate diverse image captions for each image. First, we perform image-level augmentations. We randomly mask object regions when generating captions. Empirically, each image regional feature is masked with a ratio of 0.5. This method makes sure the model can focus on different visual areas when Train UCM with MSCOCO, Visual GenomeGenerate Pseudo-captions on Conceptual CaptionCondition FlagMaskingSamplingTrain new UCM with Coco + VG + CCAugmentations JOURNAL OF LATEX CLASS FILES, VOL. 14, NO. 8, AUGUST 2021 5 generating outputs. Second, we perform augmentations when sampling language words. We randomly sample a word from the top-K predictions in each prediction step, such that even for the same image input, the model could generate different captioning outputs. We choose K = 5 based on the common design choice of Image Captioning [11]. This is especially useful when generating dense captions. The result shows that the generated dense captions usually focus on one object region. Given one fixed image, the generated dense captions will always be the same without sampling. Compared with beam-search method, the top-K sampling method is faster but may potentially generate noisier captions. In experiments, we observe the performance differences between the two methods are negligible according to finetuning accuracy. One possible reason is that the generated captions are only used as pseudo labels and pseudo labels are noisy labels by default. Some works [4] even purposely add noise to the generation process of pseudo labels. Therefore, we use top-K sampling method to speed up the generation process. We discard the generated questions because they usually contain less information than the captions. Finally, we use the condition flag to control the contents generated. For each image, we generate 5 captions with MSCOCO flag and 10 captions with VG Dense caption flag. The condition flag conditions the style of the generated contents. Train new model by mixing labeled and unlabeled data. After pseudo labels are generated for unlabeled images, we mix the pseudo labeled data and original labeled data to train a new model. Unlike self-training methods in image classifica- tion [4], which train new models from scratch, we propose to initialize the model with the last pretrained weight. The design choice is based on 2 considerations. First, vision language BERT pretraining takes a long time. Loading pretrained weight can help to reduce the training time of new models. Second, for image classification tasks, if we directly use soft classification labels to describe the unlabeled image and load previously trained weights, the loss will be zero on pseudo labeled data because the labels are exactly from the model itself. Compared with soft classification labels, generated captions are generated from sampling and do not directly describe the output distribution of previous models. This property reduces the risk that loading the previous model will result in zero loss on pseudo labeled data. 
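Putting the three augmentations together, one pseudo caption for one image could be decoded roughly as follows. This is a hedged sketch rather than the authors' implementation: the `model` call signature, the special-token ids, and the decoding loop structure are our assumptions, while the 0.5 region-mask ratio and K = 5 follow the description above.

```python
import torch

CLS_ID, SEP_ID, MASK_ID = 101, 102, 103            # assumed special-token ids
CND_IDS = {"coco": 1, "dense": 2, "question": 3}   # assumed condition-flag ids

@torch.no_grad()
def decode_pseudo_caption(model, region_feats, condition="coco",
                          max_len=20, region_mask_prob=0.5, top_k=5):
    # Augmentation 1: randomly mask visual regions so different samples
    # attend to different parts of the image.
    feats = region_feats.clone()
    drop = torch.rand(feats.size(0)) < region_mask_prob
    feats[drop] = 0.0

    # Augmentation 3: the condition flag fixes the style (COCO / dense / question).
    tokens = [CLS_ID, CND_IDS[condition]]
    for _ in range(max_len):
        inputs = torch.tensor([tokens + [MASK_ID]])
        logits = model(feats, inputs)              # assumed output: [1, seq_len, vocab]
        # Augmentation 2: sample the next word from the top-K predictions
        # at the [MASK] position instead of taking the arg-max.
        top = logits[0, -1].topk(top_k)
        choice = torch.multinomial(top.values.softmax(dim=-1), 1)
        next_id = top.indices[choice].item()
        if next_id == SEP_ID:
            break
        tokens.append(next_id)
    return tokens[2:]                              # drop [CLS] and [CND]
```

When the student is subsequently retrained on the mixed data, it is initialized from the previous checkpoint rather than from scratch, as just discussed.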
This design choice also shares the same spirit as previous self-training works [36], [37], where the teacher models’ weights are derived from the student models. Iterating the previous steps. Following the common practice of self-training, we iterate the “Annotate unlabeled data with trained UCM” and “Train new model by mixing labeled and unlabeled data” steps a few times to get better performance. A detailed ablation study is shown in Section IV. TABLE I TOTAL NUMBER OF PRETRAINING IMAGE-LANGUAGE PAIRS Total # of pairs Used # of pairs COCO 533k 533k VG 5.06m 1m VQA 444k 444k GQA 1m 1m CC - 4m A. Data Input We first use pretrained Faster R-CNN [11], [38] to extract regional visual features. The spatial information of each re- gional object is represented by its relative position, height, and weight. The spatial embedding is then calculated with an embedding layer. The final representation of each object is represented by adding the spatial embedding and visual features. For languages, we follow BERT [1] and tokenize the input sentences into WordPiece tokens. After tokenizing, We pad them with [CLS] and [SEP] tokens. Unlike original BERT, here we use [CLS] token to denote both start of sentence and classification position and we use [SEP] to denote end of sentence. Finally, we add the condition flag [CND] after [CLS] token. The condition flag [CND] represents a set of certain flags. In this work, the [CND] flag has three types: COCO type caption [39], visual genome [40] type dense caption and questions. B. Pretrain Details Pretraining Datasets. Our pretraining dataset contains la- beled data and unlabeled image data. For labeled data, we follow the same setting as in [7]. The labeled data is collected from MSCOCO [39], Visual Genome [40], VQA [34] and GQA datasets [35], which contain around 180k images in total. Although the VG dataset contains more than 5 million image- text pairs, most of them are dense captions and some of them are repeated entries. In experiments, we remove the repeated dense captions, and sample 10 dense captions for each image. For unlabeled images, we use the first 300k images from Conceptual Caption dataset and remove the original captions. Within the 300k unlabeled images, we further filter the data by object detection results. We remove the images with the top 36 objects’ average confidence below 0.3. Thus only 280k unlabeled images are left and we use them in self-training. The total numbers of pretraining images are illustrated in Table I. Compared with LXMERT [7] who uses 9 million image language pairs, we use a total amount of 7 million pairs. Self Training Setting. In experiments, we iterate the self- training process 2 times. When training UCM, we use the same parameter settings. We use AdamW optimizer with learning rate 5e-5 and batch size 256. Each time we train the model for 10 epochs. We use warm-up for the first 10% of iterations. We also use fp16 mix precision to speed up training. IV. EXPERIMENTS In this section, we describe the pretraining details, ablation experiments, visualizations, and experimental results on down- stream datasets. C. Finetuning Settings We present our finetuning settings for VQAv2 [34], GQA [35], NLVR2 [42] and COCO Caption [39]. JOURNAL OF LATEX CLASS FILES, VOL. 14, NO. 8, AUGUST 2021 6 TABLE II COMPARISON WITH OTHER VISION-LANGUAGE PRE-TRAINING MODELS ON VQAV2, GQA, NLVR2 AND COCO CAPTION. OUR MODEL COULD ACHIEVE COMPETITIVE OR BETTER PERFORMANCE AMONG ALL MODELS GIVEN FEWER TRAINING IMAGES. 
EVALUATION METRICS: FOR VQA, GQA, NLVR2, RESULTS ARE PRESENTED BASED ON THE ACCURACY. FOR COCO CAPTION, WE FOLLOW THE COMMON STANDARDS TO COMPARE THE BLEU (B@4), METEOR (R), CIDER (C) AND SPICE (S) SCORES. Tasks ViLBert [2] LXMERT [7] UNITER-base [10] ERNIE-VIL-base [41] VLP [27] UCM (Ours) Pretrain Images VQA GQA NLVR2 COCO Caption COCO Caption (CIDEr Optimization) test-dev test-std test-dev test-std dev test-P B@4 M C S B@4 M C S 3m 70.55 70.92 - - - - - - - - - - - - 180k 72.42 72.5 60.00 60.30 74.9 74.5 - - - - - - - - 4.2m 72.70 72.91 - - 75.85 (77.18) 75.80 (77.85) - - - - - - - - 4.2m 72.62 72.85 - - - - - - - - - - - - 3m 70.5 70.7 - - - - 36.5 28.4 117.7 21.3 39.5 29.3 129.3 23.2 480k 72.9 72.9 61.3 61.5 75.6 75.5 37.4 28.8 119.4 21.2 39.0 28.8 130.1 22.7 1) VQAv2: VQAv2 dataset [34] is to answer questions given an image. The answering process is usually format- ted as a classification task within all possible answers. In our experiments, the VQA questions are appended with the question condition flag before input to the model. We use the final features at position [CLS] to answer the question. We add a two-layer MLP to the final output of [CLS] and use the feature to perform classification. In ablation experiments, we only use default provided data. In the final experiments, following [10], we use extra QA data from Visual Genome for data augmentation. 2) GQA: Similar to VQAv2, for GQA dataset [35], we format the problem as a classification task and use the output feature at position [CLS] to answer the questions. Same as VQA, the GQA questions are appended with the question con- dition flag before input to the model. In ablation experiments, we only use GQA balanced dataset for training. To further help the model adapt to GQA style of questions, in the final experiment, we follow other works [26] to pretrain the model using GQA full set first and then finetune on GQA balanced dataset. 3) NLVR2: The natural language for visual reasoning for real dataset [42] is to answer if the description is correct given two images. The UCM model processes 1 image and 1 language by default. Therefore, we separate the two images into two question and image pairs and process each pair using our proposed model. After getting the [CLS] features for both of the pairs, we simply concatenate the 2 features and use the concatenated feature to perform a binary classification. We noted that in [10], a different finetuning process is proposed. For a fair comparison, we compare the results with the same finetuning setting. In NLVR2 experiments, no condition flag is assigned to the sentence as the NLVR2 data does not belong to any type of the pretrained conditions. 4) COCO Caption: We also finetune our model on gen- eration tasks i.g. COCO caption [39] on Karpathy’s split. During finetuning, we use one-directional mask only to train the model. During the generation process, the start token is set to [CLS] and [CND] of COCO captioning type. We first use cross-entropy loss to train the captioning model and then apply CIDEr optimization [43] to further improve performance. D. Ablation Experiments 1) Step by Step Ablation Studies: In this section, we provide step by step ablation studies of our proposed system. The ablation experiments are done on VQAv2, GQA, NLVR2 test- dev set. The results are shown in Table III. We start by training a baseline model with the same network architecture by only using bi-directional pretraining masks and bi-directional training tasks. 
We then add 300k images and their original annotations from Conceptual Caption to training data. Results show that simply adding 300k extra image data pairs is unable to improve down-stream finetuning performance much. Moreover, we also try to use LXMERT [7] to generate pseudo labels. As pointed out in previous sections and Fig 4, the generation quality of LXMERT is not good. Therefore, we observe a huge performance drop when using pseudo labels generated by LXMERT. Furthermore, we perform experiments using UCM with both labeled data and pseudo data generated by the generative model VLP without self-training. Compared with the results only using labeled data, we observe that there is almost no performance improvement. One reason is that the generative model VLP can only generate COCO style captions, therefore the diversity of training data is still limited. Compared with our self-training results, we observe that the self-training method can improve the performance further. After that, we train our proposed UCM model with labeled data only and finetune using the question flag. The result shows simply using UCM and condition flag could improve down-stream finetuning performance. Also, we do one more JOURNAL OF LATEX CLASS FILES, VOL. 14, NO. 8, AUGUST 2021 7 Fig. 4. An example of generated image descriptions with or without the condition signal. Generation results from a traditional VL-BERT model are not good. For the unified model variations, if the model is not conditional, the generated results are biased to dense captioning style and tend to generate short sentences. Our proposed UCM model could learn a conditional generation language model given different conditional flags. TABLE III ABLATION EXPERIMENTS OF OUR PROPOSED METHOD ON VQAV2, GQA AND NLVR2. Method Baseline Baseline + Conceptual Caption 300k UCM with only labeled data UCM with LXMERT Pseudo Caption UCM with VLP Pseudo Caption UCM w/o condition flag UCM + self training step 1 UCM + self training step 2 VQAv2 GQA NLVR2 60.0 72.4 59.9 72.4 60.3 72.6 57.3 68.7 60.0 72.6 59.8 72.4 60.6 72.8 60.5 72.7 74.9 75.3 74.8 70.1 74.5 74.8 75.6 75.6 TABLE IV ADDITIONAL ABLATION EXPERIMENTS ON COCO CAPTION Model COCO Caption COCO Caption (CIDEr Optimization) Baseline UCM with only labeled data UCM + self training step 1 B@4 M 27.1 34.9 28.5 36.9 28.8 37.4 C 109.4 117.8 119.4 S 19.9 21.2 21.2 B@4 M 26.8 34.0 28.5 38.1 28.8 39.0 C 117.7 129.9 130.1 S 19.6 22.5 22.7 experiment by removing the condition flag during finetuning. The results drop a little bit if the condition flag is not used. We then move to ablation studies of self-training algorithm. We iterate the self-training process by 1 iteration and 2 iterations. We found that based on down-stream finetuning per- formance, 1 iteration is good enough. Performing 2 iterations is unable to improve performance much. 2) Ablation Experiments on Generation Tasks: The ablation experiments on generation tasks are shown in Table IV. the next word given The generation process is to predict the words before and can be formatted as applying a one- directional mask on the sentence tokens. The baseline model gives low performance on COCO captioning task because the model is only trained with bi-directional tasks and is not suitable for generation. We also witness slow convergence speed and training instability. Compared with the baseline model, the UCM model trained with only labeled data can outperform the baseline by a large margin due to the existence of one-directional mask during pretraining. 
The model also converges faster during experiments. Our UCM model can be finetuned within 40 GPU hours with Nvidia-2080ti GPUs on COCO Caption task. However, the baseline model requires 3 times more finetuning GPU hours. Following the results in previous section, we use self-training step 1 model as the best model. The self-training process also proves its effectiveness in captioning experiments. Compared with the model trained with only labeled data, the self-training model achieves higher accuracy with or without CIDEr optimization. 3) Conditional vs Unconditional: The experiments in the last section show the effectiveness of the conditional model in From Pretrained VLBert: A a graspingThe grasping grasping..That that a aCondition: Coco Captionthe living room has one couch in it.there is a living room with a couch and chair.a living room filled with furniture and a large window.Condition: Dense captionA white lamp shadeA white curtainthe couch is greyNo Condition:A white curtainThe curtain is whiteThe window is openFrom Pretrained VLBert: The agraspingThat that a aThe man a Condition: Coco CaptionA group of people having a party.A group of people dancing in a wedding.Amanandawomanholdinghandsanddancing.Condition: Dense captionA person standing up.Womaninwhitedresssmiling.A man is wearing a tie.No Condition:A woman in whiteThe light is onThe man is sitting JOURNAL OF LATEX CLASS FILES, VOL. 14, NO. 8, AUGUST 2021 8 Fig. 5. Visualization of the attention map at condition tokens. The darkness of connections represents the attention weight. The darker the color, the higher attention is assigned. Top: Attention masks at bi-directional branch. Bottom: Attention masks at one-directional branch. Left: An example of a dense caption sentence. Right: An example of a question sentence. Based on the visualization results, we can have three conclusions: 1. [CND] mainly affects the one-directional branch. 2. [CND] affects more in deeper layers. 3. [CND] has similar effects on questions and non-questions. finetuning downstream tasks. In this section, we further study how the condition flag affects the generation performance. Results are shown in Figure 4. To study this problem, we start by using an online available pretrained vision language BERT model [7] to generate captions. Following [3], we format the generation problem as a sampling problem from Markov Random Field and try to generate languages based on this setting. We found that the generation results are extremely bad using a bi-directional pretrained model. The results are simply repeating several high-frequency words. We then proceed to train a UCM model without using the condition components. We found that the generation results bias to dense captioning styles. This is probably because the training data has much more dense captions than COCO style captions. Finally, we present the results of our UCM model. To further validate the results, we calculate the average generated sentence length. A model trained without condition flag generates sentences with an average length of 4.8 words. Our proposed UCM model can generate diverse image descriptions given different condition flags. When given a condition flag dense caption, the model generates sentences with an average length of 4.7 words. Given a condition flag COCO style caption, our model can generate long sentences with an average length of 10.2 words. E. 
Comparison with other methods We compare our best UCM model (Self-Training Step1) on VQAv2, GQA, NLVR2 and COCO Caption with other methods. The results are summarised in Table II. We compare our model with similar-sized models (based on BERT base). Our model could achieve competitive or better performance among all models given fewer training images. The VLP model achieves margin advantages compared with our method in COCO Caption (CIDEr Optimization) based on 3 evaluation is only pretrained metrics. The reason is that with COCO style captions and VQA datasets, and no other noisy pseudo captions are included in the pretraining. When the model is used on understanding tasks like VQA, our method prevails with large margins. It proves that our model generalizes better on both generation tasks and understanding tasks. the model F. Visualization In this section, we give visualizations of the attention map of special tokens and show how the data is generated. 1) Understanding the condition token: We visualize the attention map at condition tokens. As shown in Figure 5, we plot the attention weight attending to [CND] position. We plot Language AttentionLayer 9Cross AttentionLayer 5Language AttentionLayer 9Cross AttentionLayer 5Bi-directional WeightOne-directional WeightLanguage AttentionLayer 9Cross AttentionLayer 5Language AttentionLayer 9Cross AttentionLayer 5[CLS][CND]Theshirthasawhitecollar.[CLS][CND]Theshirthasawhitecollar.[CLS][CND]Theshirthasawhitecollar.[CLS][CND]Theshirthasawhitecollar.[CLS][CND]Whereisthe rockycliff?[CLS][CND]Whereisthe rockycliff?[CLS][CND]Whereisthe rockycliff?[CLS][CND]Whereisthe rockycliff?[CLS][CND]Theshirthasawhitecollar.[CLS][CND]Theshirthasawhitecollar.[CLS][CND]Theshirthasawhitecollar.[CLS][CND]Theshirthasawhitecollar.[CLS][CND]Whereisthe rockycliff?[CLS][CND]Whereisthe rockycliff?[CLS][CND]Whereisthe rockycliff?[CLS][CND]Whereisthe rockycliff? JOURNAL OF LATEX CLASS FILES, VOL. 14, NO. 8, AUGUST 2021 9 Fig. 6. How the captions are generated given different visual masks (e.g. when some visual regions are masked out). For each generated dense caption, the masked feature region is plotted. Visualization results show that by masking some parts of the image regions, the UCM model could successfully focus on different image areas. both the bi-directional branch and the one-directional branch and both a dense captioning style caption and a question. The darker the color, the higher the attention weight. Based on the weights, we could have the following conclusions: [CND] mainly affects the one-directional branch. We compare the bi-directional weights and one-directional weights (top vs bottom). Although the [CND] flag is used for both branches, the one-directional branch learns to assign higher weights. One reason is that the generation process is more sensitive to the condition flag than the language understanding process. As illustrated in previous sections, our method can generate sentences of different lengths given different condi- tion flags. For the understanding tasks, intuitively the model should focus more on the sentences as a whole. [CND] affects deeper layers more. Compared with shal- lower layers (language attention layer 9), the deeper layers tend to assign higher weights to [CND] position. This is be- cause the deeper layers are more directly related to producing results, thus they rely more on the [CND] flag to control the generation style. [CND] has similar effects on questions and non- questions. 
We compare the visualization of captions and ques- tions (left vs right). No obvious difference can be observed. This implies that the condition flags work in similar ways for caption style sentences and question style sentences. 2) Visualization of generation process: In Figure 6, we show how different visual masks affect the language gen- eration. We could have a more obvious observation when generating dense captions. Therefore, for each generated dense caption, we show which feature region is masked. Visualiza- tion results show that by masking some parts of the image regions, the UCM model could successfully focus on different image areas and finally produce diverse dense captioning results. For example, image, when nothing is masked out. The model focuses on the window. When part of the window is masked out, the model will focus on the bath mat and the tile. For COCO style captions, our model also benefits from applying visual masks. Although COCO style captions summarize the whole image, applying visual masks helps the model to look at different areas. in the first V. CONCLUSION AND FUTURE WORKS The requirement of paired training data restricts the scale of VL-BERT pretraining. We propose a self-training approach that allows to train VL-BERTs from unlabeled image data. First, we propose UCM – a vision language BERT that can perform conditional generate directly. Given different condi- tion flags, the unified conditional model can generate dense caption, caption, and even questions. Then we introduce a set of self-training methods for vision language BERT pretraining, including how to generate diverse image descriptions and the self-training pipeline. We also visualize the generation process and the effectiveness of the condition flag. Original CC caption:bathroom : simple bathroom designs grey Condition: Coco Captiona bathroom with two sinks a tub and a mirror with lights and a tub.a bathroom with a sink and a toilet.a white sink in a bathroom next to a table.a clean bathroom with a mirror and sink.Condition: Dense caption1. a tile in a wall.2. a white bath mat.3. a window on the wall.Condition: Questionthe sink is on what?Where is the sink?on which side of the photo is the towel?on which side is the soap dish?Original CC caption:some new friends from the class . Condition: Coco Captionfive women pose for a picture in a room.a group of women standing around each other with remotes.a group of young women holding hands and smiling.a group of women posing for a photo with a microphone.Condition: Dense caption1. a person standing up.2. girl is wearing a necklace.3. woman wearing pink pants.4. woman in red shirt smiling and posing.Condition: QuestionThe women are doing what?On which side of the photo is the girl in red?Is the girl smiling?Original CC caption:boulevard in the downtown of the city Condition: Coco Captiona lady walking past a tree with an umbrella.a person holding an umbrella on a city street.a couple of men standing next to each other.Condition: Dense caption1. a window on the building.2. a woman holding an umbrella.3. the umbrella handle is white.4. window of the building in distance.Condition: QuestionWhere is the woman?The woman is holding what?The tree is on which side of the photo?Original CC caption:building and clouds against blue sky seen from a cityCondition: Coco Captiona city street with a skyscraper and trees.a city street with tall buildings and traffica tall building with two stories and a street light.Condition: Dense caption1. window on the building.2. 
a building is on the left side3. the building is brick.4. the tree is green.Condition: QuestionThe building is on which side?Where is the tree?1221242,4412331 JOURNAL OF LATEX CLASS FILES, VOL. 14, NO. 8, AUGUST 2021 10 For performance, by using the proposed self-training ap- proach and only 300k unlabeled extra data, we are able to get competitive performance within all models with similar model size trained with 3 million extra image data. Future Works. The use of the conditional model is not restricted to self-training. Future works can be done by explor- ing more use-cases of the proposed UCM. For example, given an image, our method could be used to generate kid stories, generate advertisement and generate copyright documents with a single pretrained model. Further extension of training scales could also be explored. Our proposed methods enable training vision language BERTs with unlimited data. One may perform a larger scale of pre- training with more data collected from the Internet. ACKNOWLEDGMENTS This research is supported by the National Research Foun- dation, Singapore under its AI Singapore Programme (AISG Award No: AISG-RP-2018-003), the MOE AcRF Tier-1 re- search grants: RG95/20, and the OPPO research grant. Fengmao Lv’s participation was supported by the National Natural Science Foundation of China (No. 62106204), the Sichuan Natural Science Foundation (No. 2022NSFSC0911, 2022YFG0031), and the Fundamental Research Funds for Central Universities of China (No. 2682022CX068). REFERENCES [1] J. D. M.-W. C. Kenton and L. K. Toutanova, “Bert: Pre-training of deep bidirectional transformers for language understanding,” in Proceedings of NAACL-HLT, 2019, pp. 4171–4186. [2] J. Lu, D. Batra, D. Parikh, and S. Lee, “Vilbert: Pretraining task- agnostic visiolinguistic representations for vision-and-language tasks,” in Advances in Neural Information Processing Systems, 2019, pp. 13– 23. [3] A. Wang and K. Cho, “Bert has a mouth, and it must speak: Bert as a markov random field language model,” arXiv preprint arXiv:1902.04094, 2019. [4] Q. Xie, M.-T. Luong, E. Hovy, and Q. V. Le, “Self-training with noisy student improves imagenet classification,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020, pp. 10 687–10 698. [5] B. Zoph, G. Ghiasi, T.-Y. Lin, Y. Cui, H. Liu, E. D. Cubuk, and Q. Le, “Rethinking pre-training and self-training,” Advances in neural information processing systems, vol. 33, pp. 3833–3845, 2020. [6] T. Brown, B. Mann, N. Ryder, M. Subbiah, J. D. Kaplan, P. Dhariwal, A. Neelakantan, P. Shyam, G. Sastry, A. Askell et al., “Language mod- els are few-shot learners,” Advances in neural information processing systems, vol. 33, pp. 1877–1901, 2020. [7] H. Tan and M. Bansal, “Lxmert: Learning cross-modality encoder representations from transformers,” in Proceedings of the 2019 Con- ference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), 2019, pp. 5100–5111. [8] P. Sharma, N. Ding, S. Goodman, and R. Soricut, “Conceptual captions: A cleaned, hypernymed, image alt-text dataset for automatic image cap- tioning,” in Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2018, pp. 2556– 2565. [9] A. Singh, V. Goswami, and D. Parikh, “Are we pretraining it right? digging deeper into visio-linguistic pretraining,” arXiv preprint arXiv:2004.08744, 2020. [10] Y.-C. Chen, L. Li, L. Yu, A. E. 
Kholy, F. Ahmed, Z. Gan, Y. Cheng, and J. Liu, “Uniter: Universal image-text representation learning,” in ECCV, 2020. [11] P. Anderson, X. He, C. Buehler, D. Teney, M. Johnson, S. Gould, and L. Zhang, “Bottom-up and top-down attention for image captioning and visual question answering,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2018, pp. 6077–6086. [12] Y. Han, B. Wang, R. Hong, and F. Wu, “Movie question answering via textual memory and plot graph,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 30, no. 3, pp. 875–887, 2019. [13] J. Zhang, J. Shao, R. Cao, L. Gao, X. Xu, and H. T. Shen, “Action- centric relation transformer network for video question answering,” IEEE Transactions on Circuits and Systems for Video Technology, 2020. [14] Y. Guo, L. Nie, Z. Cheng, and Q. Tian, “Loss-rescaling vqa: Revisiting language prior problem from a class-imbalance view,” IEEE Transac- tions on Image Processing, 2021. [15] P. Wang, Q. Wu, C. Shen, A. Dick, and A. Van Den Hengel, “Fvqa: Fact- based visual question answering,” IEEE transactions on pattern analysis and machine intelligence, vol. 40, no. 10, pp. 2413–2427, 2017. [16] W. Guo, Y. Zhang, J. Yang, and X. Yuan, “Re-attention for visual question answering,” IEEE Transactions on Image Processing, vol. 30, pp. 6730–6743, 2021. [17] N. Yu, X. Hu, B. Song, J. Yang, and J. Zhang, “Topic-oriented image captioning based on order-embedding,” IEEE Transactions on Image Processing, vol. 28, no. 6, pp. 2743–2754, 2018. [18] Y. Huang, J. Chen, W. Ouyang, W. Wan, and Y. Xue, “Image captioning with end-to-end attribute detection and subsequent attributes prediction,” IEEE Transactions on Image Processing, vol. 29, pp. 4013–4026, 2020. [19] C. Yan, Y. Hao, L. Li, J. Yin, A. Liu, Z. Mao, Z. Chen, and X. Gao, “Task-adaptive attention for image captioning,” IEEE Transactions on Circuits and Systems for Video technology, vol. 32, no. 1, pp. 43–51, 2021. [20] W. Zhang, C. Ma, Q. Wu, and X. Yang, “Language-guided navigation via cross-modal grounding and alternate adversarial learning,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 31, no. 9, pp. 3469–3481, 2020. [21] J. Gao, X. Sun, B. Ghanem, X. Zhou, and S. Ge, “Efficient video ground- ing with which-where reading comprehension,” IEEE Transactions on Circuits and Systems for Video Technology, 2022. [22] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and I. Polosukhin, “Attention is all you need,” in Advances in neural information processing systems, 2017, pp. 5998–6008. [23] J. Lu, V. Goswami, M. Rohrbach, D. Parikh, and S. Lee, “12-in-1: Multi- task vision and language representation learning,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020, pp. 10 437–10 446. [24] L. H. Li, M. Yatskar, D. Yin, C.-J. Hsieh, and K.-W. Chang, “Visualbert: A simple and performant baseline for vision and language,” arXiv preprint arXiv:1908.03557, 2019. [25] G. Li, N. Duan, Y. Fang, M. Gong, D. Jiang, and M. Zhou, “Unicoder- vl: A universal encoder for vision and language by cross-modal pre- training.” in AAAI, 2020, pp. 11 336–11 344. [26] X. Li, X. Yin, C. Li, P. Zhang, X. Hu, L. Zhang, L. Wang, H. Hu, L. Dong, F. Wei et al., “Oscar: Object-semantics aligned pre-training for vision-language tasks,” in European Conference on Computer Vision. Springer, 2020, pp. 121–137. [27] L. Zhou, H. Palangi, L. Zhang, H. Hu, J. J. Corso, and J. 
Gao, “Unified vision-language pre-training for image captioning and vqa.” in AAAI, 2020, pp. 13 041–13 049. [28] I. Z. Yalniz, H. J´egou, K. Chen, M. Paluri, and D. Mahajan, “Billion- scale semi-supervised learning for image classification,” arXiv preprint arXiv:1905.00546, 2019. [29] R. Sennrich, B. Haddow, and A. Birch, “Improving neural machine trans- lation models with monolingual data,” arXiv preprint arXiv:1511.06709, 2015. [30] Y. Cheng, “Semi-supervised learning for neural machine translation,” in Springer, 2019, pp. Joint Training for Neural Machine Translation. 25–40. [31] L. Wu, Y. Wang, Y. Xia, Q. Tao, J. Lai, and T.-Y. Liu, “Exploiting monolingual data at scale for neural machine translation,” in Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), 2019, pp. 4198–4207. [32] L. Dong, N. Yang, W. Wang, F. Wei, X. Liu, Y. Wang, J. Gao, M. Zhou, and H.-W. Hon, “Unified language model pre-training for natural language understanding and generation,” Advances in Neural Information Processing Systems, vol. 32, 2019. [33] A. Radford, J. Wu, R. Child, D. Luan, D. Amodei, and I. Sutskever, “Language models are unsupervised multitask learners,” OpenAI blog, vol. 1, no. 8, p. 9, 2019. [34] Y. Goyal, T. Khot, D. Summers-Stay, D. Batra, and D. Parikh, “Making the v in vqa matter: Elevating the role of image understanding in visual question answering,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp. 6904–6913. JOURNAL OF LATEX CLASS FILES, VOL. 14, NO. 8, AUGUST 2021 11 Fayao Liu is a research scientist at Institute for Infocomm Research (I2R), A*STAR, Singapore. She received her PhD in computer science from the Uni- versity of Adelaide, Australia in Dec. 2015. Before that, she obtained her B.Eng. and M.Eng. degrees from National University of Defense Technology, China in 2008 and 2010 respectively. She mainly works on machine learning and computer vision problems, with particular interests in self-supervised learning, few-shot learning and generative models. She is serving as an associate editor for IEEE Transactions on Circuits and Systems for Video Technology (TCSVT). [35] D. A. Hudson and C. D. Manning, “Gqa: a new dataset for com- positional question answering over real-world images,” arXiv preprint arXiv:1902.09506, vol. 3, no. 8, 2019. [36] A. Tarvainen and H. Valpola, “Mean teachers are better role mod- els: Weight-averaged consistency targets improve semi-supervised deep learning results,” Advances in neural information processing systems, vol. 30, 2017. [37] B. Athiwaratkun, M. Finzi, P. Izmailov, and A. G. Wilson, “Improving consistency-based semi-supervised learning with weight averaging,” arXiv preprint arXiv:1806.05594, vol. 2, no. 9, p. 11, 2018. [38] S. Ren, K. He, R. Girshick, and J. Sun, “Faster r-cnn: Towards real-time object detection with region proposal networks,” in Advances in neural information processing systems, 2015, pp. 91–99. [39] X. Chen, H. Fang, T.-Y. Lin, R. Vedantam, S. Gupta, P. Doll´ar, and C. L. Zitnick, “Microsoft coco captions: Data collection and evaluation server,” arXiv preprint arXiv:1504.00325, 2015. [40] R. Krishna, Y. Zhu, O. Groth, J. Johnson, K. Hata, J. Kravitz, S. Chen, Y. Kalantidis, L.-J. Li, D. A. Shamma et al., “Visual genome: Connecting language and vision using crowdsourced dense image annotations,” International Journal of Computer Vision, vol. 123, no. 1, pp. 
32–73, 2017. [41] F. Yu, J. Tang, W. Yin, Y. Sun, H. Tian, H. Wu, and H. Wang, “Ernie- vil: Knowledge enhanced vision-language representations through scene graphs,” in Proceedings of the AAAI Conference on Artificial Intelli- gence, vol. 35, no. 4, 2021, pp. 3208–3216. [42] A. Suhr, M. Lewis, J. Yeh, and Y. Artzi, “A corpus of natural language for visual reasoning,” in Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), 2017, pp. 217–223. [43] S. J. Rennie, E. Marcheret, Y. Mroueh, J. Ross, and V. Goel, “Self- critical sequence training for image captioning,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp. 7008–7024. Xiaofeng Yang is a PhD student at the School of Computer Science and Engineering, Nanyang Technological University, Singapore. His research interests are in computer vision and machine learn- ing. Guosheng Lin is an Assistant Professor at the School of Computer Science and Engineering, Nanyang Technological University, Singapore. He received his PhD degree from The University of Adelaide in 2014. His research interests are in com- puter vision and machine learning. Fengmao Lv received the bachelor’s and Ph.D. degrees in computer science from the University of Electronic Science and Technology of China, Chengdu, China, in 2013 and 2018, respectively. He is currently an Associate Professor with Southwest Jiaotong University, Chengdu. His research focus includes transfer learning, domain adaptation, and their applications in computer vision and natural language processing.
Teaching Embodied Reinforcement Learning Agents: Informativeness and Diversity of Language Use

Jiajun Xi* Yinong He* Jianing Yang Yinpei Dai Joyce Chai
University of Michigan
{jiajunxi, heyinong, jianingy, daiyp, chaijy}@umich.edu
*Equal contribution. Source code is available at https://github.com/sled-group/Teachable_RL.

Abstract

In real-world scenarios, it is desirable for embodied agents to have the ability to leverage human language to gain explicit or implicit knowledge for learning tasks. Despite recent progress, most previous approaches adopt simple low-level instructions as language inputs, which may not reflect natural human communication. It's not clear how to incorporate rich language use to facilitate task learning. To address this question, this paper studies different types of language inputs in facilitating reinforcement learning (RL) embodied agents. More specifically, we examine how different levels of language informativeness (i.e., feedback on past behaviors and future guidance) and diversity (i.e., variation of language expressions) impact agent learning and inference. Our empirical results based on four RL benchmarks demonstrate that agents trained with diverse and informative language feedback can achieve enhanced generalization and fast adaptation to new tasks. These findings highlight the pivotal role of language use in teaching embodied agents new tasks in an open world.

1 Introduction

Developing embodied agents that can understand and communicate with humans in natural language to learn and accomplish tasks is a long-standing goal in artificial intelligence. In recent years, the integration of human language and reinforcement learning (RL) has seen significant advancements. Unlike traditional RL methods that typically rely on numerical reward signals to guide agent learning, recent works (Cheng et al., 2023; Lin et al., 2023) explore using language as an intuitive and useful signal to shape an agent's behaviors. For example, when the agent is making mistakes during task completion, providing language feedback can largely improve the agent's instantaneous performance, thus enhancing the overall learning efficiency and effectiveness (McCallum et al., 2023).

However, existing methods generally employ simple instructions, such as "turn left" and "put the apple to the table", to teach/control an agent (Hanjie et al., 2021; Zhang and Chai, 2021; Lin et al., 2023; McCallum et al., 2023; Shridhar et al., 2021). While useful, these instructions may not fully reflect the flexibility of language use in task learning and collaboration (Chai et al., 2018, 2019; Zhang et al., 2022, 2023; Dai et al., 2024a). In the real world, humans often express complex language instructions that are more informative. For instance, when a student makes a mistake, a teacher may help them retrospect on what went wrong (i.e., hindsight instructions) and then guide them on what should be done next to finish the goal (i.e., foresight instructions). In addition, humans are likely to engage in conversations with more diverse language patterns, describing the same goal with different expressions and styles. Therefore, we ask the following question: How do the informativeness and diversity of natural language used during RL training affect an agent's ability to learn tasks?
We take a popular offline RL model - decision transformer (DT) (Chen et al., 2021) - as a back- bone architecture and conduct a comprehensive study to examine how informativeness and diver- sity of language use may impact agents’ learning ability. To control informativeness, we leverage expert agents’ actions as a reference to generate hindsight reflection and foresight guidance, using hand-crafted language templates. To increase di- versity, we construct a GPT-augmented language pool, where GPT-4 (OpenAI, 2024) is used to aug- ment hand-crafted templates into much more nat- ural and richer expressions. We further extended DT into a multi-modal Language-Teachable DT (LTDT) and demonstrated that LTDT agents that are trained with diverse and informative language significantly outperform the counterpart agents that are trained either with simple language alone or with no language inputs. Notably, we found that even with just one language template, combining hindsight and foresight feedback together improves agents’ performance by an average of 9.86 points (from 37.95% to 47.81%) on four popular offline RL benchmarks compared to agents trained without language. When more language diversity is incor- porated into training, an additional 10.14 points (from 47.81% to 57.95%) are obtained. The contributions of this paper can be summa- rized as follows: • We investigate in detail, for the first time, how language informativeness and diversity affect offline RL agents in task learning, and demonstrate their important roles in improv- ing agents’ performance, adaptability, and ro- bustness. • We show that training agents with informa- tive and diverse instructions can intrinsically improve the agent’s understanding of the task and lead to better performance. • We propose a simple framework to generate both hindsight and foresight language feed- back and enrich language variation without any human annotators. 2 Related Work Offline Reinforcement Learning Offline rein- forcement learning (RL) has become a focal point of research due to its ability to utilize pre-existing datasets for training agents without real-time in- teractions. Several algorithms address the unique challenges of offline RL, such as mitigating extrap- olation errors and ensuring robust policy evalua- tion. A survey by Prudencio et al. (2023) outlines the field’s taxonomy and open problems. Bench- marking efforts by Fujimoto et al. (2019) assess various batch deep RL algorithms. Key approaches include Conservative Q-Learning (CQL) (Kumar et al., 2020), Implicit Q-Learning (IQL) (Kostrikov et al., 2021), and the Decision Transformer (DT) (Chen et al., 2021), which treats RL as a sequence modeling problem (Janner et al., 2021). Recent work also explores generalization across tasks (Lee et al., 2022; Reed et al., 2022; Schubert et al., 2023), the use of exploratory data (Yarats et al., 2022), and integrating large language models (LLMs) (Mir- chandani et al., 2023). Efficient online RL lever- aging offline data is also a focus (Ball et al., 2023; Modhe et al., 2023). Our research builds on the De- cision Transformer (DT) by integrating language feedback, creating the Language-Teachable Deci- sion Transformer (LTDT). This novel approach in- corporates rich, human-like language instructions, improving agent learning through enhanced infor- mativeness and diversity of language inputs. 
Language in Reinforcement Learning The intersection of natural language and RL offers new ways to develop intuitive and effective learning paradigms for embodied agents. Initial works utilized language for feedback and task instructions (She and Chai, 2017; Nguyen et al., 2017; Shridhar et al., 2020). Recent studies have explored various methods for incorporating language feedback in RL, such as the LTC paradigm (Wang et al., 2023), lifelong robot learning with human-assisted language planners (Parakh et al., 2023), and frameworks for rich information requests (Dai et al., 2020; Tseng et al., 2021; Nguyen et al., 2022). Language for corrections (Sharma et al., 2022; Liu et al., 2023) and as reward signals (Xie et al., 2023; Goyal et al., 2019; Yu et al., 2023) has been shown to enhance agent performance. Vision-language joint training approaches, like CLIP (Radford et al., 2021), BLIP-2 (Li et al., 2023), and InstructBLIP (Dai et al., 2023), demonstrate the potential of combining visual and language modalities for RL tasks (Ma et al., 2023; Nguyen et al., 2019; Khandelwal et al., 2022). Further, multimodal prompts for robotic manipulation (Jiang et al., 2023; Fan et al., 2022) and LLMs for planning in robotics (Ahn et al., 2022; Huang et al., 2022; Singh et al., 2023; Yao et al., 2022; Dai et al., 2024b) highlight the evolving role of language in RL. Other works, like Mehta et al. (2023), focus on generating problem-specific language feedback templates. In contrast, our work focuses on the informativeness and diversity of language instructions, two problem-agnostic yet easy-to-implement properties. By using both hindsight and foresight language templates and enhancing diversity through GPT-4, we demonstrate notable improvements in agent performance and generalizability, showcasing the impact of complex language inputs in offline RL training.

Figure 1: An overview of the four environments used for experiments. It shows the tasks to be learned in each environment; examples of hindsight (marked H) and foresight (F) language feedback (next to the gear icon are hand-crafted templates and next to the GPT icon are GPT-4 generated feedback); as well as the low-level actions in each environment.

3 Problem Setting

In this section, we outline the problem setting by defining the offline reinforcement learning problem (Sec. 3.1) and a taxonomy of language feedback (Sec. 3.2). Then we describe the instantiation of such definitions in the four different RL environments we used for experiments (Sec. 3.3).

3.1 Offline Reinforcement Learning

To support a systematic study of language use, we formulate the problem in the offline reinforcement learning (RL) setting. At each time step t, the agent receives an observation o_t, a reward r_t, and language feedback l_t for its previous action. The agent then executes an action a_t according to a policy π, which is conditioned on the entire interaction history h_t up to time t, i.e., π(a_t | h_t), where h_t = {o_{≤t}, r_{≤t}, l_{≤t}, a_{<t}} represents the history of observations, rewards, language feedback, and past actions up to time t. The agent's goal is to complete the task by maximizing the expected discounted sum of rewards E[Σ_{t=1}^{T} γ^t r_t], where T is the episode length and γ is the discount factor. In offline RL, the training trajectories are pre-collected with an expert agent (a well-trained agent or a planner-based expert with privileged information). The trained agents are evaluated interactively with the environment.
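To make the bookkeeping of the interaction history concrete, the following minimal Python sketch (not from the paper; all names are illustrative) shows how a history h_t of observations, rewards, language feedback, and past actions can be accumulated during an episode, and how the discounted return the agent maximizes is computed.

```python
from dataclasses import dataclass, field
from typing import Any, List

@dataclass
class InteractionHistory:
    """Running history h_t = {o_<=t, r_<=t, l_<=t, a_<t} for one episode."""
    observations: List[Any] = field(default_factory=list)
    rewards: List[float] = field(default_factory=list)
    feedback: List[str] = field(default_factory=list)   # language feedback l_t
    actions: List[Any] = field(default_factory=list)    # past actions a_<t

    def observe(self, obs: Any, reward: float, language: str) -> None:
        # Called at the start of step t, before the policy selects a_t.
        self.observations.append(obs)
        self.rewards.append(reward)
        self.feedback.append(language)

    def act(self, action: Any) -> None:
        # Called after the policy selects a_t, so actions lag one step behind.
        self.actions.append(action)

def discounted_return(rewards: List[float], gamma: float = 0.99) -> float:
    """Discounted sum of rewards for one episode: sum_t gamma^t * r_t."""
    return sum((gamma ** t) * r for t, r in enumerate(rewards))
```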
3.2 Language Feedback: Informativeness and Diversity

We aim to investigate how the informativeness and diversity of language instructions used during the training of an offline RL agent affect the agent's performance on seen tasks and adaptation to unseen tasks.

3.2.1 Informativeness

Informativeness refers to the richness of information content in language feedback. Following Cheng et al. (2023), we categorize feedback into two types: hindsight and foresight. Hindsight feedback involves comments or critiques about the agent's past actions. For example, "Excellent, you are moving towards the goal!" encourages the agent to continue its current path, while "You are getting too close to the enemy." alerts the agent about a mistake. Hindsight feedback reflects on incorrect actions taken in previous steps, which can guide agents toward success by narrowing down the search space for correct actions (see Appendix E for more analysis). Conversely, foresight feedback guides potential future actions. For instance, "You should go right to get closer to the target." directs the agent towards the goal, and "You should go left to avoid the enemy on the right." helps the agent make strategic decisions to avoid threats. Language feedback is considered most informative when it includes both hindsight and foresight elements, and least informative when neither is present.

3.2.2 Diversity

Diversity in language feedback refers to the variety of ways the same information is conveyed. If feedback is provided using only one template, it is less diverse. It becomes more diverse when the same information is expressed in many different ways. The goal is to expose the RL agent to various expressions of the same feedback to enhance its ability to generalize.
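As a concrete illustration of the two axes, the sketch below composes feedback at one of four informativeness levels (none, hindsight only, foresight only, or both) and, when diversity is enabled, samples a paraphrase from a pre-built pool of GPT-augmented rewrites. This is not code from the paper; the pool contents and function names are made up for illustration.

```python
import random

# Hypothetical GPT-augmented pool: each hand-crafted template maps to paraphrases
# produced offline (e.g., by asking GPT-4 to rewrite the template several ways).
AUGMENTED_POOL = {
    "You have gone to the wrong direction.": [
        "You seem to be heading away from the right route.",
        "That turn took you off course.",
    ],
    "Pedal to open the recycling bin.": [
        "To access the recycling bin, you'll need to pedal.",
        "Try pedaling so the recycling bin opens.",
    ],
}

def phrase(template: str, diverse: bool) -> str:
    """Return the template itself, or a random GPT-augmented paraphrase of it."""
    if diverse and template in AUGMENTED_POOL:
        return random.choice(AUGMENTED_POOL[template])
    return template

def compose_feedback(hindsight: str, foresight: str, mode: str, diverse: bool) -> str:
    """mode controls informativeness: 'none', 'hind', 'fore', or 'hind+fore'."""
    parts = []
    if mode in ("hind", "hind+fore"):
        parts.append(phrase(hindsight, diverse))
    if mode in ("fore", "hind+fore"):
        parts.append(phrase(foresight, diverse))
    return " ".join(parts)  # empty string corresponds to the no-language setting
```

For example, compose_feedback("You have gone to the wrong direction.", "Pedal to open the recycling bin.", mode="hind+fore", diverse=True) might return "That turn took you off course. Try pedaling so the recycling bin opens."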
3.3 Environments

As shown in Figure 1, we conduct experiments across four environments—HomeGrid, ALFWorld, Messenger, and MetaWorld—each featuring discrete action spaces, with hand-crafted hindsight and foresight language instructions. More information and examples of languages for each environment can be found in Appendix A.

HomeGrid (Lin et al., 2023) is a multitask grid world designed to evaluate how well agents can understand and use various types of language to complete tasks. It includes five task types (FIND, GET, CLEAN UP, REARRANGE, OPEN), involving interaction with objects and trash bins, with a total of 38 tasks. The agent receives a reward of 1 when the task is completed and receives a reward of 0.5 if a subgoal is completed.

ALFWorld (Shridhar et al., 2021) is a text-game environment that aligns with the embodied ALFRED benchmark (Shridhar et al., 2020) and provides simulation for household tasks. It includes six types of tasks which require the agent to navigate and interact with household objects by following language instructions. The agent gets a reward of 1 when the task is completed. We adopt the hindsight and foresight language templates from LLF-ALFWorld introduced in Cheng et al. (2023), which adds an extra language wrapper to the original ALFWorld environment.

Messenger (Hanjie et al., 2021) is a grid world with several entities. The agent's task is to retrieve a message from one entity and deliver it to another goal entity, while avoiding enemies. At the start of each episode, the agent is provided with a manual describing the randomized roles of the entities and their movement dynamics. The agent receives a reward of 1 when the task is completed.

MetaWorld (Yu et al., 2019) is a benchmark that consists of a variety of manipulation tasks performed by a simulated Sawyer robot arm. It includes 50 types of common robot manipulation tasks. We select two of them in our experiments: ASSEMBLY and HAMMER. The agent receives a reward of 1 when completing a task.

4 Data Generation

To train an agent that can understand language feedback in an offline reinforcement learning manner, we construct an offline dataset D consisting of two parts:

• Agent trajectory consisting of the task description T^d and the tuples (R̂_t, s_t, a_t), where R̂_t represents the reward, s_t is the state, and a_t is the action.

• Language feedback l_t conveying hindsight and foresight information at each time step.

Algorithm 1 outlines the data generation process, and we explain the algorithm in detail in the following sections.

Algorithm 1 Offline Data Collection
1: Initialize D ← ∅
2: for each episode with seed_i do
3:   Initialize D_i ← ∅
4:   Initialize environment env with seed_i
5:   Append task description T^d to D_i
6:   Initialize the non-expert agent with a sub-optimal policy π
7:   Initialize the expert agent with policy π*
8:   for each time step do
9:     a_t ← π(h_t)
10:    a*_t ← π*(h_t)
11:    r_t, s_t, l^hind_t, l^fore_t ← env(a_t, a*_t | h_t)
12:    if use GPT-augmented pool then
13:      l^hind_t ← GPT-augmented(l^hind_t)
14:      l^fore_t ← GPT-augmented(l^fore_t)
15:    end if
16:    l_t ← l^hind_t + l^fore_t if H + F; l^hind_t if only H; l^fore_t if only F; <empty> if No Lang
17:    Append (r_t, s_t, a_t, l_t) to D_i
18:  end for
19:  Aggregate datasets D ← D ∪ D_i
20: end for
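A minimal Python rendering of the loop in Algorithm 1 is sketched below, reusing compose_feedback from the earlier sketch in Section 3.2. It assumes a gym-style environment wrapper whose step call also returns the two feedback templates given the expert action; the function names, the step signature, and the episode dictionary layout are illustrative assumptions, not the paper's actual implementation.

```python
def collect_episode(env, policy, expert_policy, mode, augment, seed):
    """One iteration of the outer loop in Algorithm 1 (illustrative signatures)."""
    env.reset(seed=seed)                          # assumed gym-style reset
    episode = {"task": env.task_description(), "steps": []}
    history = []                                  # h_t: everything seen so far
    done = False
    while not done:
        a_t = policy(history)                     # sub-optimal data-collection policy
        a_star = expert_policy(history)           # expert action, used only for feedback
        # Assumed wrapper: returns hindsight/foresight templates alongside the transition.
        obs, r_t, done, hind, fore = env.step(a_t, expert_action=a_star)
        l_t = compose_feedback(hind, fore, mode=mode, diverse=augment)
        episode["steps"].append((r_t, obs, a_t, l_t))
        history.append((obs, r_t, l_t, a_t))
    return episode

def collect_dataset(env, policy, expert_policy, mode, augment, num_episodes):
    # Aggregate D <- D ∪ D_i over seeded episodes, mirroring lines 1-20 of Algorithm 1.
    return [collect_episode(env, policy, expert_policy, mode, augment, seed=i)
            for i in range(num_episodes)]
```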
4.1 Trajectory Generation

To improve model generalization and avoid overfitting, it is essential to train on diverse, sub-optimal trajectories rather than relying solely on optimal ones generated by an expert agent (Kumar et al., 2020; Chen et al., 2021). We achieve this by introducing perturbations to an expert planner (see Appendix B), allowing the non-expert agent to produce sub-optimal trajectories. This promotes broader exploration of the state-action space, enhancing the model's ability to generalize to unseen scenarios (Kumar et al., 2020; Chen et al., 2021).

During data collection, we begin by appending the task description T^d to the trajectory sequence and initializing the environment with a fixed seed. A non-expert agent, using a sub-optimal policy π derived from the expert agent's optimal policy π*, interacts with the environment. At each time step, the environment state o_t, reward R̂_t, and the non-expert agent's action a_t are recorded to form the trajectory sequence: (T^d, R̂_1, s_1, a_1, . . . , R̂_t, s_t, a_t).

Figure 2: A demonstration of hindsight and foresight language feedback generation. In our framework, the agent π executes the trajectory, while the expert agent π*, with access to privileged ground-truth knowledge, is used solely to provide information for generating language feedback to π. At time step t, hindsight language is generated by comparing the agent's action a_{t-1} with the expert agent's action a*_{t-1}, whereas foresight language is generated by referring to the expert agent's action a*_t to guide the agent on the next step. To increase the diversity of language feedback, we construct a pool of language templates comprising GPT-augmented languages, and sample candidate instructions as online language feedback.

4.2 Language Feedback Generation

For the second part of the dataset D, we collect the language feedback along the non-expert agent's trajectory. As shown in Figure 2, we follow a structured process to generate diverse and informative language feedback. For the state at time step t, the expert agent π* proposes an expert action a*_t (e.g., "down") at this state, which is further transformed into a foresight template l^fore_t (e.g., "Turn back.") by the environment simulator, guiding the agent on what should be done at this state. After the non-expert agent π steps the environment (into time step t + 1) with its generated action a_t (e.g., "down"), the environment simulator generates a hindsight template l^hind_{t+1} (e.g., "You are doing well so far.") based on the comparison between the agent's action a_t and the expert agent's action a*_t at the last time step t, reflecting on whether the agent is on the right track.

For each foresight/hindsight template, we use GPT-4 to augment it into more natural and varied expressions. (For example, we can augment "You are doing well so far." into "Up until now, you're doing wonderfully." or "So far, so good, you're doing great!".) We compile all the rewritten sentences into a set called the GPT-augmented language pool. At each step of the non-expert agent, we randomly select one candidate from the pool as the language instruction. This process ensures the feedback provided to the agent has a high level of diversity and enriches the learning experience.

The level of informativeness and diversity of the language feedback depends on the inclusion of hindsight and foresight (e.g., concatenated when both are required) and the use of the GPT-augmented language pool. The language feedback at each time step is finally concatenated with the trajectory sequence into (T^d, R̂_1, s_1, a_1, l_1, . . . , R̂_t, s_t, a_t, l_t). Algorithm 1 summarizes the data collection process.

5 Model

Architecture. We extend the Decision Transformer (DT) architecture (Chen et al., 2021) to create the Language-Teachable Decision Transformer (LTDT) by augmenting the input to include language feedback.
This architecture is a decoder-only transformer, similar to GPT-2 (Rad- H: You seem to be heading away from the right route. F: Make a 180-degree turn right now.𝑎𝑡−1∗“pedal”Expert Agent 𝜋∗ predictionAgent 𝜋 in environment𝑎𝑡∗𝑎𝑡+1∗“down”“pedal”𝑎𝑡−1“up”𝑎𝑡“down”Time Step𝑡−1𝑡𝑡+1Task:Open the binH: You have gone to the wrong direction.F: Turn back.H: You are doing well so far.F: Pedal to open the recycling bin.Environment SimulatorGPT Template PoolExpert agent’s actionCompare 𝜋 with 𝜋∗Agent’s actionH: So far, so good, you’re doing great!F: To access the recycling bin, you’llneed to pedal. unseen tasks after fine-tuning with few-shot sam- ples. 6.1 Experimental Setup Setup for RQ 1. We compare performance on seen tasks between agents trained with varying lev- els of language informativeness and diversity: 1) the No Language agent is trained without any lan- guage instructions; 2) the Template Foresight agent is trained with hand-crafted foresight lan- guage templates; 3) the Template Hindsight agent is trained with hand-crafted hindsight lan- guage templates; 4) the Template Hindsight + Foresight agent is trained with hand-crafted fore- sight and hindsight language templates; and 5) the GPT-augmented Hindsight + Foresight agent is trained with hindsight and foresight languages from the GPT-augmented language pool. We train on 100, 1,000, 20,000, and 10,000 trajectories for HomeGrid, ALFWorld, Messenger, and Meta- World environments, respectively. Evaluation is performed over 5 runs, with 100 random seeds for each run. Setup for RQ 2. We pre-train different agents on seen tasks and then compare adaptability (how well an agent performs after few-shot learn- Language ing) on unseen tasks: 1) the No pre-trained agent is pre-trained without any language instructions; 2) the GPT-augmented hindsight pre-trained agent is pre-trained with hindsight language from the GPT-augmented lan- guage pool; 3) the GPT-augmented foresight pre-trained agent is pre-trained with foresight language from the GPT-augmented language pool; 4) the GPT-augmented hindsight + foresight pre-trained agent is pre-trained with both hind- sight and foresight language from the GPT- augmented language pool. During the few-shot adaptation stage, we choose to fine-tune the pre- trained agents with both hindsight + foresight lan- guage from the GPT-augmented language pool for all settings, since this mimics a real-world few-shot learning scenario, where humans likely provide di- verse feedback, including both hindsight and fore- sight, to guide the agent in new tasks. We pretrain on 6,432, 1,000, 20,000, and 10,000 trajectories for HomeGrid, ALFWorld, Messenger, and Meta- World, respectively. For all environments, we adapt on 5, 10, and 20 trajectories to 1 new task. Evalua- tion is performed over 5 runs, with 100 seeds per run. Further details on task setup of RQ 1 and RQ Figure 3: Language-Teachable Decision Transformer. ford et al., 2019), and models a trajectory sequence (T d, ˆR1, s1, a1, l1, . . . , ˆRt, st, at, lt), with the lan- guage feedback input appended at each step and a task description (TD) input prefixed at the be- ginning of the sequence. Like the original DT, the embeddings of these inputs are passed through the Causal Transformer, which encodes positional information to maintain sequence order. 
The trans- former’s output is used to predict the next action in the sequence, conditioned on the state, return- to-go, action, and language feedback in the last K time steps, with the task description as the prefix (4K + 1 tokens in total), as shown in Figure 3. Training. Similar to the original DT training, given an offline dataset of trajectory sequences, we sample a sub-sequence of length K (with 4K + 1 tokens), and the prediction head is trained to predict discrete actions with the cross-entropy loss or continuous actions with the MSE loss. More training details can be found in Appendix G. Language Embeddings. We use language em- beddings from a frozen Sentence-BERT model (Reimers and Gurevych, 2019) in all environments. We find Sentence-BERT more sensitive to language feedback changes, capturing nuanced semantic dif- ferences better. 6 Experiment In this section, we design experiments to answer the following two research questions (RQs): • RQ 1: How do the informativeness and diver- sity of language affect agents’ performance on seen tasks? • RQ 2: How does the informativeness of the language feedback affect pre-trained agents’ adaptability on unseen tasks? For RQ1, we control agents trained with hind- sight information, foresight information, or both to investigate the function of informativeness. We compare agents trained with language from both hand-crafted templates and the GPT-augmented language pool to examine the function of language diversity. For RQ2, agents are taught in languages from the GPT-augmented language pool and tested on Causal TransformerTask Description…MLP Embedding & Positional EncodingLinear Decoder…𝑠𝑡−1෠𝑅𝑡−1𝑙𝑡−1𝑎𝑡−1𝑠𝑡෠𝑅𝑡𝑙𝑡𝑎𝑡ො𝑎𝑡−1ො𝑎𝑡 Figure 4: Comparison of agent performance in four environments (averaged across 100 seeds in each environment) under varying levels of language feedback informativeness and diversity. Agents trained with more informative lan- guage feedback exhibit progressively higher performance. Furthermore, given the same informativeness (Hindsight + Foresight), increasing diversity with the GPT-augmented language pool leads to the highest performance. Figure 5: Comparison of agent performance on unseen tasks in four environments (averaged across 100 seeds in each environment) under varying language informativeness in agent pre-training. Agent trained with more informative language adapts to new tasks faster and better. 2 can be found in Appendix C. Additional results when training and adapting on same types of lan- guage can be found in Appendix D. Evaluation. At inference time, an agent is given a short task description before it starts to act, and lan- guage feedback along its execution. The language feedback should ideally come from real humans, who provide feedback varying in informativeness, diversity, and frequency (how often feedback is pro- vided). However, recruiting and moderating real humans to generate online feedback is expensive and difficult to scale. Therefore, we employ GPT-4 to provide online language feedback to mimic real humans. Specifically, at each time step, we provide all necessary context information to GPT-4 in its prompt and let it decide “whether to speak” (fre- quency), “what to speak” (informativeness), and “how to speak” (diversity). The context informa- tion, in this case, consists of the ground-truth envi- ronment states, action/state history, and template- based hindsight and foresight short text description generated by comparing the actions of the expert agent and the trained agent. 
GPT-4 then has the freedom to rephrase, combine, shorten, and discard such context information to utter diverse, coherent, and natural language feedback, mimicking a real human. See Appendix H for an example of such GPT-generated online feedback. Metric. We use the reward value as our main met- ric. Agents receive a reward of 1 upon task com- pletion for all environments and receive additional rewards for achieving specific sub-goals for the HomeGrid and ALFWorld environments. 6.2 Experimental Results Results for RQ 1. As we can see in Figure 4, agents trained with both diverse and informative language feedback (GPT-augmented Hindsight + Foresight) consistently achieve the highest per- formance across all environments. The varied and paraphrased instructions generated from GPT pro- vide a richer set of linguistic inputs, enabling the agents to develop a more robust language under- standing for task execution during evaluation. When examining the impact of informativeness, we observe that agents trained with both hindsight and foresight information (Template Hindsight + Foresight) consistently achieve higher performance across all environments compared to those trained with only hindsight or foresight information. This indicates that integrating both types of feedback enhances the informativeness of the language, en- abling the agents to develop a more comprehen- sive understanding and leading to better decision- making and overall performance. The only excep- tion is in the Messenger environment, where the no-language agent exhibits a surprisingly strong performance. However, upon further investigation of this exception, we find that if the hindsight- only or foresight-only feedback is from the GPT- augmented pool, the agent can still outperform the No Language agent (refer to Appendix F). In terms of diversity, the results show that agents trained with diverse language feedback, as indi- 0.10.20.30.40.5RewardHomeGrid0.20.30.40.50.6ALFWorld0.20.40.60.8Messenger0.40.50.6MetaworldNo LanguageTemplate Hindsight + ForesightTemplate HindsightGPT-augmented Hindsight + ForesightTemplate Foresight5 shot10 shot20 shot0.00.20.40.6RewardHomeGrid5 shot10 shot20 shot0.00.20.4ALFWorld5 shot10 shot20 shot0.00.20.40.6Messenger5 shot10 shot20 shot0.00.20.40.6MetaWorldNo Language PretrainedGPT-augmented Hindsight PretrainedGPT-augmented Foresight PretrainedGPT-augmented Hindsight + Foresight Pretrained Figure 6: Efficiency gain vs. task difficulty. We fit the scatter plots with a second-degree polynomial to visualize the overall trend. As task difficulty increases, the general trend of the efficiency gain is to rise initially and then decline, suggesting: (1) for tasks that are too easy or too hard, language feedback does not improve efficiency; (2) language feedback is most helpful in increasing efficiency for moderate tasks. different tasks? To answer this question, we de- fine efficiency gain as the difference in efficiency between an agent trained with informative and di- verse GPT languages, and an agent trained without any languages. Efficiency is measured by a path- weighted reward, as introduced in ALFRED (Shrid- har et al., 2020). This reward, rp, is calculated as max(L,L∗) , where r is the total reward, L rp = r × is the agent’s trajectory length, and L∗ is the ex- pert agent’s trajectory length. Higher rp indicates successful task completion with fewer steps. 
L∗ We define task difficulty for each configuration by calculating the average success rates of agents trained without language feedback, ranking these from lowest to highest. Configurations with lower success rates are considered more difficult, indi- cating greater challenges for agents learning from these configurations without language assistance. As shown in Figure 6, the efficiency gain gener- ally rises with increasing learning difficulty, then declines. This suggests that: (1) for tasks that are too easy or too hard, language feedback does not improve efficiency; (2) language feedback is most helpful in increasing efficiency for moderate tasks. Performance vs. Language Frequency. In the main experiments, we utilize an online GPT model to determine whether to provide language feedback at each time step. However, it is important to ex- plore how varying the frequency of language feed- back influences agent performance. To investigate this, we control the feedback frequency by sam- pling according to pre-defined probabilities (e.g., 20%, 40%). The language feedback is extracted from the GPT-augmented language pool; if no lan- guage is sampled, an empty string is provided to the agent. The evaluation is conducted on agents trained with both hindsight and foresight feedback derived from the GPT-augmented language pool. As illustrated in Figure 7, agents’ performance im- Figure 7: Performance vs. language frequency. Agents perform better with more frequent language feedback across four environments. cated by the ‘GPT-augmented’ bars, consistently outperform those trained with less varied language input. The rich set of augmented instructions gen- erated by GPT helps agents develop a more flexible and nuanced understanding of task instructions, which translates to better performance during eval- uation. This highlights the critical role of linguistic diversity in enhancing the robustness and adapt- ability of the agents’ language comprehension, ul- timately leading to improved task execution across different environments. Results for RQ 2. The results in Figure 5 re- veal that agents pre-trained with more informa- tive language can adapt to unseen tasks faster and better. “Adapting faster” is evident by the fact that agents pre-trained with GPT-augmented Hind- sight + Foresight language in 5 or 10 shots can already achieve a similar performance 20-shot per- formance of agents trained with less informative language. “Adapting better” is evident by the fact that, at a given number of shots available for adap- tation, the agent trained with the most informative language performs the best compared to its less informatively-pretrained counterparts. These re- sults indicate that agents pre-trained with more informative language can adapt and generalize to new tasks faster and better. 6.3 Ablation Study Efficiency Gain vs. Task Difficulty. Can lan- guage feedback help the agent to achieve more 0204060801001.000.750.500.250.000.250.500.751.00HomeGrid0204060801000.750.500.250.000.250.500.751.00ALFWorld0204060801000.60.40.20.00.20.40.60.8Messenger0204060801001.000.750.500.250.000.250.500.751.00MetaWorldTask DifficultyEfficiency GainEfficiency GainFitted Efficiency Gain Trend0%20%40%60%80%100%Language Frequency0.30.40.50.60.70.8RewardMessengerMetaworldHomeGridALFWorld Figure 8: We investigate two special evaluation settings: (1) no language feedback is provided during evaluation and (2) disturbed language feedback is given at every step. 
Results show that agents trained with the GPT- augmented language still outperform the no-language agent (the black dotted line) in the disturbed setting, and also achieve better performance in some environments while no language is given. proves steadily across all environments with more frequent language feedback during evaluation. This finding suggests that agents trained with informa- tive and diverse language feedback can continually absorb and leverage new information when addi- tional feedback is provided, leading to enhanced performance. Performance under Corrupted Language. This ablation aims to evaluate how agents perform when provided with incorrect instructions. We assess the performance of an agent trained with GPT-4- augmented informative and diverse language under two conditions: (1) Empty Feedback: the absence of language feedback during testing, and (2) Dis- turbed Feedback: the provision of disturbed lan- guage at each step. The disturbed language consists of redundant, irrelevant, or misleading informa- tion (e.g., incorrect actions or objects) and is gen- erated using GPT-augmented templates with dis- rupted content. The results in Figure 8 reveal two interesting findings: (1) When tested without any language feedback, the agent trained with informa- tive and diverse language performs comparably or even exceeds the performance of the agent trained without any language (represented by the black dot- ted line). This indicates that the agent develops a robust intrinsic understanding of the task, demon- strating that it does not overly rely on language feedback; (2) When exposed to disturbed feedback, the agent trained with informative and diverse lan- guage maintains performance levels comparable to the no-language agent. This showcases the agent’s ability to withstand misleading information, a criti- cal trait for real-world applications where human feedback may be unreliable. 7 Conclusion In this paper, we investigate how the informative- ness and diversity of language feedback affect embodied agents. We introduce the Language- Teachable Decision Transformer (LTDT), which makes decisions based on human language feed- back. To facilitate the training of LTDT agents, we propose an easy-to-use pipeline for collecting offline hindsight and foresight GPT templates. We compare the performance of agents by varying the informativeness and diversity of the training lan- guages across four reinforcement learning environ- ments and evaluate the agents’ ability to understand real-world human language using online GPT as a proxy. Our results demonstrate that training with more informative and diverse language feedback significantly enhances agent performance and en- ables fast adaptation to unseen tasks. Limitations Our study has several limitations. First, the investi- gated environments are primarily game-based and do not test the agents’ ability to incorporate real-life visual inputs. Future work will focus on evaluating agents in more realistic and complex environments that involve real-world visual inputs and challenges. Second, while GPT language outputs can produce diverse and contextually relevant language, they may not fully cover all human language styles and nuances. Specifically, GPT models might miss certain idioms, dialects, or culturally specific ref- erences that are prevalent in human communica- tion. Future work will aim to incorporate a broader spectrum of language variations and test agents in scenarios involving more diverse linguistic inputs. 
Ethical Impacts Our study, conducted entirely within simulated en- vironments, does not present immediate ethical concerns. The teachable nature of our Language- Teachable Decision Transformer (LTDT) method is designed to make AI agents more controllable and better aligned with human values, promoting safer and more ethical interactions. By enhancing agent performance through informative and diverse language instructions, we aim to foster AI systems that are more transparent and responsive to human guidance, addressing ethical considerations in the deployment of artificial intelligence. As AI be- comes more mainstream, these considerations are increasingly pertinent, and our work strives to ad- vance AI technology responsibly. Acknowledgements This work was supported by NSF IIS-1949634 and has benefited from the Microsoft Accelerate Foun- 0.10.20.30.40.5RewardHomeGrid0.20.30.40.50.6ALFWorld0.40.50.60.70.8Messenger0.20.30.40.50.60.7MetaworldEmpty feedbackDisturbed feedbackNormal feedbackBaseline trained without languages dation Models Research (AFMR) grant program. We would like to thank the anonymous reviewers for their valuable comments and suggestions. References Michael Ahn, Anthony Brohan, Noah Brown, Yev- gen Chebotar, Omar Cortes, Byron David, Chelsea Finn, Chuyuan Fu, Keerthana Gopalakrishnan, Karol Hausman, Alex Herzog, Daniel Ho, Jasmine Hsu, Julian Ibarz, Brian Ichter, Alex Irpan, Eric Jang, Rosario Jauregui Ruano, Kyle Jeffrey, Sally Jes- month, Nikhil Joshi, Ryan Julian, Dmitry Kalash- nikov, Yuheng Kuang, Kuang-Huei Lee, Sergey Levine, Yao Lu, Linda Luu, Carolina Parada, Pe- ter Pastor, Jornell Quiambao, Kanishka Rao, Jarek Rettinghouse, Diego Reyes, Pierre Sermanet, Nico- las Sievers, Clayton Tan, Alexander Toshev, Vincent Vanhoucke, Fei Xia, Ted Xiao, Peng Xu, Sichun Xu, Mengyuan Yan, and Andy Zeng. 2022. Do as i can and not as i say: Grounding language in robotic af- fordances. In arXiv preprint arXiv:2204.01691. Philip J Ball, Laura Smith, Ilya Kostrikov, and Sergey Levine. 2023. Efficient online reinforcement learning with offline data. arXiv preprint arXiv:2302.02948. Joyce Chai, Maya Cakmak, and Candy Sidner. 2019. Teaching robots new tasks through natural interac- tion. In K. A. Cluck and J. E. Laird, editors, Inter- active Task Learning: Agents, Robots, and Humans Acquiring New Tasks through Natural Interactions. MIT Press. Joyce Chai, Qiaozi Gao, Lanbo She, Shaohua Yang, Sari Saba-Sadiya, and Guangyue Xu. 2018. Lan- guage to action: Towards interactive task learning with physical agents. In Proceedings of the Twenty- Seventh International Joint Conference on Artificial Intelligence, IJCAI 2018, July 13-19, 2018, Stock- holm, Sweden, pages 2–9. ijcai.org. Lili Chen, Kevin Lu, Aravind Rajeswaran, Kimin Lee, Aditya Grover, Misha Laskin, Pieter Abbeel, Aravind Srinivas, and Igor Mordatch. 2021. Decision trans- former: Reinforcement learning via sequence mod- eling. Advances in neural information processing systems, 34:15084–15097. Ching-An Cheng, Andrey Kolobov, Dipendra Misra, Allen Nie, and Adith Swaminathan. 2023. Llf-bench: Benchmark for interactive learning from language feedback. arXiv preprint arXiv:2312.06853. Wenliang Dai, Junnan Li, Dongxu Li, Anthony Meng Huat Tiong, Junqi Zhao, Weisheng Wang, and Steven Hoi. Boyang Li, Pascale Fung, 2023. Instructblip: Towards general-purpose vision- language models with instruction tuning. Preprint, arXiv:2305.06500. Yinpei Dai, Jayjun Lee, Nima Fazeli, and Joyce Chai. 2024a. 
Racer: Rich language-guided failure recov- ery policies for imitation learning. arXiv preprint arXiv:2409.14674. Yinpei Dai, Hangyu Li, Chengguang Tang, Yongbin Li, Jian Sun, and Xiaodan Zhu. 2020. Learning low- resource end-to-end goal-oriented dialog for fast and reliable system deployment. In Proceedings of the 58th Annual Meeting of the Association for Compu- tational Linguistics, pages 609–618. Yinpei Dai, Run Peng, Sikai Li, and Joyce Chai. 2024b. Think, act, and ask: Open-world interactive person- alized robot navigation. In 2024 IEEE International Conference on Robotics and Automation (ICRA), pages 3296–3303. IEEE. Linxi Fan, Guanzhi Wang, Yunfan Jiang, Ajay Man- dlekar, Yuncong Yang, Haoyi Zhu, Andrew Tang, De-An Huang, Yuke Zhu, and Anima Anandkumar. 2022. Minedojo: Building open-ended embodied agents with internet-scale knowledge. Advances in Neural Information Processing Systems, 35:18343– 18362. Scott Fujimoto, Edoardo Conti, Mohammad Ghavamzadeh, and Joelle Pineau. 2019. Benchmark- ing batch deep reinforcement learning algorithms. arXiv preprint arXiv:1910.01708. Prasoon Goyal, Scott Niekum, and Raymond J Mooney. 2019. Using natural language for reward shap- arXiv preprint ing in reinforcement arXiv:1903.02020. learning. Austin W. Hanjie, Victor Zhong, and Karthik Narasimhan. 2021. Grounding language to entities and dynamics for generalization in reinforcement learning. In Proceedings of the 38th International Conference on Machine Learning, ICML 2021, 18-24 July 2021, Virtual Event, volume 139 of Proceedings of Machine Learning Research, pages 4051–4062. PMLR. Peter E Hart, Nils J Nilsson, and Bertram Raphael. 1968. A formal basis for the heuristic determination of min- imum cost paths. IEEE transactions on Systems Sci- ence and Cybernetics, 4(2):100–107. Wanwei He, Yinpei Dai, Binyuan Hui, Min Yang, Zheng Cao, Jianbo Dong, Fei Huang, Luo Si, and Yongbin Li. 2022a. Space-2: Tree-structured semi-supervised contrastive pre-training for task-oriented dialog un- derstanding. arXiv preprint arXiv:2209.06638. Wanwei He, Yinpei Dai, Min Yang, Jian Sun, Fei Huang, Luo Si, and Yongbin Li. 2022b. Unified dialog model pre-training for task-oriented dialog understanding In Proceedings of the 45th Inter- and generation. national ACM SIGIR Conference on Research and Development in Information Retrieval, pages 187– 200. Wanwei He, Yinpei Dai, Yinhe Zheng, Yuchuan Wu, Zheng Cao, Dermot Liu, Peng Jiang, Min Yang, Fei Huang, Luo Si, et al. 2022c. Galaxy: A generative pre-trained model for task-oriented dialog with semi- supervised learning and explicit policy injection. In Proceedings of the AAAI conference on artificial in- telligence, volume 36, pages 10749–10757. Wenlong Huang, Fei Xia, Ted Xiao, Harris Chan, Jacky Liang, Pete Florence, Andy Zeng, Jonathan Tompson, Igor Mordatch, Yevgen Chebotar, Pierre Sermanet, Noah Brown, Tomas Jackson, Linda Luu, Sergey Levine, Karol Hausman, and Brian Ichter. 2022. In- ner monologue: Embodied reasoning through plan- In arXiv preprint ning with language models. arXiv:2207.05608. Michael Janner, Qiyang Li, and Sergey Levine. 2021. Offline reinforcement learning as one big sequence modeling problem. Advances in neural information processing systems, 34:1273–1286. Yunfan Jiang, Agrim Gupta, Zichen Zhang, Guanzhi Wang, Yongqiang Dou, Yanjun Chen, Li Fei-Fei, An- ima Anandkumar, Yuke Zhu, and Linxi Fan. 2023. Vima: General robot manipulation with multimodal prompts. In Fortieth International Conference on Machine Learning. 
Apoorv Khandelwal, Luca Weihs, Roozbeh Mottaghi, and Aniruddha Kembhavi. 2022. Simple but effec- tive: Clip embeddings for embodied ai. In Proceed- ings of the IEEE/CVF Conference on Computer Vi- sion and Pattern Recognition, pages 14829–14838. Ilya Kostrikov, Ashvin Nair, and Sergey Levine. 2021. Offline reinforcement learning with implicit q-learning. In International Conference on Learning Representations. Aviral Kumar, Aurick Zhou, George Tucker, and Sergey Levine. 2020. Conservative q-learning for offline reinforcement learning. Advances in Neural Informa- tion Processing Systems, 33:1179–1191. Kuang-Huei Lee, Ofir Nachum, Mengjiao Sherry Yang, Lisa Lee, Daniel Freeman, Sergio Guadarrama, Ian Fischer, Winnie Xu, Eric Jang, Henryk Michalewski, et al. 2022. Multi-game decision transformers. Ad- vances in Neural Information Processing Systems, 35:27921–27936. Junnan Li, Dongxu Li, Silvio Savarese, and Steven Hoi. 2023. Blip-2: Bootstrapping language-image pre- training with frozen image encoders and large lan- guage models. Preprint, arXiv:2301.12597. Jessy Lin, Yuqing Du, Olivia Watkins, Danijar Hafner, Pieter Abbeel, Dan Klein, and Anca Dragan. 2023. Learning to model the world with language. arXiv preprint arXiv:2308.01399. Zeyi Liu, Arpit Bahety, and Shuran Song. 2023. Reflect: Summarizing robot experiences for fail- arXiv preprint ure explanation and correction. arXiv:2306.15724. Yecheng Jason Ma, William Liang, Vaidehi Som, Vikash Kumar, Amy Zhang, Osbert Bastani, and Di- nesh Jayaraman. 2023. Liv: Language-image repre- sentations and rewards for robotic control. Preprint, arXiv:2306.00958. Sabrina McCallum, Max Taylor-Davies, Stefano Al- brecht, and Alessandro Suglia. 2023. Is feedback all you need? leveraging natural language feedback in goal-conditioned rl. In NeurIPS 2023 Workshop on Goal-Conditioned Reinforcement Learning. Nikhil Mehta, Milagro Teruel, Patricio Figueroa Sanz, Xin Deng, Ahmed Hassan Awadallah, and Julia Kisel- eva. 2023. Improving grounded language understand- ing in a collaborative environment by interacting with agents through help feedback. arXiv preprint arXiv:2304.10750. Suvir Mirchandani, Fei Xia, Pete Florence, Danny Driess, Montserrat Gonzalez Arenas, Kanishka Rao, Dorsa Sadigh, Andy Zeng, et al. 2023. Large lan- guage models as general pattern machines. In 7th Annual Conference on Robot Learning. Nirbhay Modhe, Qiaozi Gao, Ashwin Kalyan, Dhruv Batra, Govind Thattai, and Gaurav Sukhatme. 2023. Exploiting generalization in offline reinforcement arXiv learning via unseen state augmentations. preprint arXiv:2308.03882. Khanh Nguyen, Hal Daumé III, and Jordan Boyd- Graber. 2017. Reinforcement learning for bandit neural machine translation with simulated human feedback. arXiv preprint arXiv:1707.07402. Khanh Nguyen, Debadeepta Dey, Chris Brockett, and Bill Dolan. 2019. Vision-based navigation with language-based assistance via imitation learning In Proceedings of the with indirect intervention. IEEE/CVF Conference on Computer Vision and Pat- tern Recognition, pages 12527–12537. Khanh X Nguyen, Yonatan Bisk, and Hal Daumé Iii. 2022. A framework for learning to request rich and contextually useful information from humans. In In- ternational Conference on Machine Learning, pages 16553–16568. PMLR. OpenAI. 2024. Gpt-4 technical report. Preprint, arXiv:2303.08774. Meenal Parakh, Alisha Fong, Anthony Simeonov, Tao Chen, Abhishek Gupta, and Pulkit Agrawal. 2023. Lifelong robot learning with human assisted language planners. 
In CoRL 2023 Workshop on Learning Ef- fective Abstractions for Planning (LEAP). Rafael Figueiredo Prudencio, Marcos ROA Maximo, and Esther Luna Colombini. 2023. A survey on of- fline reinforcement learning: Taxonomy, review, and open problems. IEEE Transactions on Neural Net- works and Learning Systems. Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sas- try, Amanda Askell, Pamela Mishkin, Jack Clark, et al. 2021. Learning transferable visual models from natural language supervision. In International confer- ence on machine learning, pages 8748–8763. PMLR. Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. 2019. Language models are unsupervised multitask learners. OpenAI Blog. Scott Reed, Konrad Zolna, Emilio Parisotto, Ser- gio Gómez Colmenarejo, Alexander Novikov, Gabriel Barth-maron, Mai Giménez, Yury Sulsky, Jackie Kay, Jost Tobias Springenberg, et al. 2022. A generalist agent. Transactions on Machine Learning Research. Nils Reimers and Iryna Gurevych. 2019. Sentence-bert: Sentence embeddings using siamese bert-networks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing. Associa- tion for Computational Linguistics. Ingmar Schubert, Jingwei Zhang, Jake Bruce, Sarah Bechtle, Emilio Parisotto, Martin Riedmiller, Jost To- bias Springenberg, Arunkumar Byravan, Leonard Hasenclever, and Nicolas Heess. 2023. A gener- alist dynamics model for control. arXiv preprint arXiv:2305.10912. Pratyusha Sharma, Balakumar Sundaralingam, Valts Blukis, Chris Paxton, Tucker Hermans, Antonio Tor- ralba, Jacob Andreas, and Dieter Fox. 2022. Cor- recting robot plans with natural language feedback. Preprint, arXiv:2204.05186. Lanbo She and Joyce Chai. 2017. Interactive learning of grounded verb semantics towards human-robot communication. In Proceedings of the 55th Annual Meeting of the Association for Computational Lin- guistics, ACL 2017, Vancouver, Canada, July 30 - August 4, Volume 1: Long Papers, pages 1634–1644. Association for Computational Linguistics. Mohit Shridhar, Jesse Thomason, Daniel Gordon, Yonatan Bisk, Winson Han, Roozbeh Mottaghi, Luke Zettlemoyer, and Dieter Fox. 2020. Alfred: A bench- mark for interpreting grounded instructions for every- day tasks. Preprint, arXiv:1912.01734. Mohit Shridhar, Xingdi Yuan, Marc-Alexandre Côté, Yonatan Bisk, Adam Trischler, and Matthew Hausknecht. 2021. ALFWorld: Aligning Text and Embodied Environments for Interactive Learning. In Proceedings of the International Conference on Learning Representations (ICLR). Ishika Singh, Valts Blukis, Arsalan Mousavian, Ankit Goyal, Danfei Xu, Jonathan Tremblay, Dieter Fox, Jesse Thomason, and Animesh Garg. 2023. Prog- prompt: Generating situated robot task plans using large language models. In 2023 IEEE International Conference on Robotics and Automation (ICRA), pages 11523–11530. IEEE. Bo-Hsiang Tseng, Yinpei Dai, Florian Kreyssig, and Bill Byrne. 2021. Transferable dialogue systems and user simulators. In Proceedings of the 59th An- nual Meeting of the Association for Computational Linguistics and the 11th International Joint Confer- ence on Natural Language Processing (Volume 1: Long Papers), pages 152–166, Online. Association for Computational Linguistics. Kuan Wang, Yadong Lu, Michael Santacroce, Yeyun Gong, Chao Zhang, and Yelong Shen. 2023. Adapt- ing llm agents through communication. Preprint, arXiv:2310.01444. 
Tianbao Xie, Siheng Zhao, Chen Henry Wu, Yitao Liu, Qian Luo, Victor Zhong, Yanchao Yang, and Tao Yu. 2023. Text2reward: Automated dense reward function generation for reinforcement learning. arXiv preprint arXiv:2309.11489. Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak Shafran, Karthik R Narasimhan, and Yuan Cao. 2022. React: Synergizing reasoning and acting in language models. In The Eleventh International Conference on Learning Representations. Denis Yarats, David Brandfonbrener, Hao Liu, Michael Laskin, Pieter Abbeel, Alessandro Lazaric, and Ler- rel Pinto. 2022. Don’t change the algorithm, change the data: Exploratory data for offline reinforcement learning. In ICLR 2022 Workshop on Generalizable Policy Learning in Physical World. Tianhe Yu, Deirdre Quillen, Zhanpeng He, Ryan Julian, Karol Hausman, Chelsea Finn, and Sergey Levine. 2019. Meta-world: A benchmark and evaluation for multi-task and meta reinforcement learning. In Conference on Robot Learning (CoRL). Wenhao Yu, Nimrod Gileadi, Chuyuan Fu, Sean Kir- mani, Kuang-Huei Lee, Montse Gonzalez Arenas, Hao-Tien Lewis Chiang, Tom Erez, Leonard Hasen- clever, Jan Humplik, Brian Ichter, Ted Xiao, Peng Xu, Andy Zeng, Tingnan Zhang, Nicolas Heess, Dorsa Sadigh, Jie Tan, Yuval Tassa, and Fei Xia. 2023. Lan- guage to rewards for robotic skill synthesis. Arxiv preprint arXiv:2306.08647. Yichi Zhang and Joyce Chai. 2021. Hierarchical task learning from language instructions with unified In Findings of transformers and self-monitoring. the Association for Computational Linguistics: ACL- IJCNLP 2021, pages 4202–4213, Online. Association for Computational Linguistics. Yichi Zhang, Jianing Yang, Jiayi Pan, Shane Storks, Nikhil Devraj, Ziqiao Ma, Keunwoo Yu, Yuwei Bao, and Joyce Chai. 2022. DANLI: Deliberative agent for following natural language instructions. In Proceed- ings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 1280–1298, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics. Yichi Zhang, Jianing Yang, Keunwoo Yu, Yinpei Dai, Shane Storks, Yuwei Bao, Jiayi Pan, Nikhil Devraj, Ziqiao Ma, and Joyce Chai. 2023. Seagull: An em- bodied agent for instruction following through situ- ated dialog. In Alexa Prize SimBot Challenge Pro- ceedings. A Environments and Language Feedback A.1 Environments Overview The Appendix Table 1 lists the information that is inherently available within the environment. All models, regardless of whether they are trained with language input or not, will have access to this envi- ronmental information. locations, bin locations, and bin dynamics are ran- domized. The agent receives a reward of 1 when the task is completed, and receives a reward of 0.5 if a subgoal exists (e.g., get the object in the clean-up task) and gets completed. Each template language is augmented to 70 sentences in the GPT template pool. Examples of hindsight and foresight languages are as follows: Env HomeGrid AlfWorld Messenger MetaWorld Image Observation Instruction Manual Text State Description Yes No No No No No Yes No No Yes No No Table 1: Information provided by each environment. A.2 Language Feedback for Different Environments For each environment, we design multiple tem- plates conveying different meanings, and then ap- plied GPT-4 to augment the languages into a GPT- augmented language pool. The number of tem- plates and the corresponding GPT-augmented sen- tences for each template are shown in Appendix Table 2. 
Env HomeGrid AlfWorld Messenger MetaWorld # Hind Templates 20 4 4 2 # Fore Templates 9 4 4 6 # AUG 70 200 80 180 Table 2: Number of templates and augmented sentences for each environment, where ’# Hind Templates’ refers to the number of hindsight templates, ’# Fore Templates’ refers to the number of foresight templates, and ’# AUG’ refers to the number of GPT-augmented sentences per template. A.2.1 HomeGrid HomeGrid is a multitask grid world designed to evaluate how well agents can understand and use various types of language to complete tasks. Agents will receive both task specifications and language hints, providing prior knowledge about world dynamics, information about world states, or corrections to assist the agents. We adopt the lan- guage hints in HomeGrid as foresight and further extend the environment to provide hindsight that provides comments on agents’ past performance. Agents are expected to ground both hindsight and foresight to the environment to achieve higher per- formance. It includes five task types involving in- teraction with objects and bins (find, get, clean up, rearrange, open), with a total of 38 tasks. Object • Hindsight Examples: Template: ▷ "You have gone to the wrong direc- tion." ▷ "You are doing well so far." GPT Template: ▷ "You seem to be heading away from the right route." ▷ "So far, so good, you are doing great!" • Foresight Examples: Template: ▷ "Turn back." ▷ "Pedal to open the recycling bin." GPT Template: ▷ "Make a 180-degree turn right now." ▷ "To access the recycling bin, you’ll need to pedal." Language instructions are generated based on the comparison of agent’s action and expert planer ac- tion, considering distance, relative location, and interaction between the agent and target objects. A.2.2 ALFWorld ALFWorld is a text-game environment that aligns with the embodied ALFRED benchmark (Shridhar et al., 2020) and provides simulation for house- hold tasks. It includes six types of tasks where agents need to navigate and interact with house- hold objects through text actions. The location of the task objects is randomly located among 50 loca- tions in each episode, making the task challenging for the agent to plan and for the subgoals. For the experiment, we adopt LLF-ALFWorld (Cheng et al., 2023), which provides an extra language wrapper for hindsight and foresight language gen- eration over the original ALFWorld. The languages are generated based on both agents’ past actions and the optimal trajectory for the current episode. Agent gets a reward of 1 when the task is completed. Each template is augmented to 200 sentences in GPT template pool. Examples of hindsight and foresight languages are as follows: • Hindsight Examples: Template: ▷ "You made a mistake by taking the bad action {action}." ▷ "It was a right decision to not take the bad action {action}." GPT Template: ▷ "The choice to implement {action} was misguided." ▷ "You made a sensible choice by not committing to the {avoid action}." • Foresight Examples: Template: ▷ "You should now take the {action} action." ▷ "Take {action} in the next step." GPT Template: ▷ "Consider taking the {action} as your next step." ▷ "Moving on, consider the {action} action." Language instructions are generated based on ex- perts’ next action and whether agent’s past actions are aligned with expert past actions, considering whether agents have moved to the target position and conducted correct interaction with the objects. A.2.3 Messenger Messenger is a grid world with several entities. 
The agent’s primary task is to retrieve a message from one entity and deliver it to another goal entity, all while avoiding enemies. At the start of each episode, the agent is provided with a manual de- scribing the randomized roles of the entities and their movement dynamics. The challenge lies in the fact that the agent does not have access to the true identity of each entity and must ground the text manual to the dynamics, necessitating multi- hop reasoning. (For example, grounding the "an approaching queen is a deadly enemy" to the obser- vations of dynamics.) (Lin et al., 2023) The agent receives a sparse reward of 1 when the task is com- pleted. Each template language is augmented to 80 sentences in the GPT template pool. Examples of hindsight and foresight languages are as follows: • Hindsight Examples: Template: ▷ "It’s good that you are getting close to the {target} at {target direction} by moving {direction}!" ▷ "Stepping {action direction}, yet you ran into {enemy name}. Be more cautious." GPT Template: ▷ "Good job on approaching the {tar- get} to the {target direction} by mov- ing {direction}! " ▷ "Stepping {action direction} directly met {enemy name}. Needs strategic thinking." • Foresight Examples: Template: ▷ "Move {optimal direction} to ap- proach the {target name} located at the {target direction}. " ▷ "Rest assured, there are no enemies around." GPT Template: ▷ "To get to the {target name} at {tar- get direction}, go {optimal direc- tion}. " ▷ "Not detecting any danger, it’s safe." When generating the language instructions, we compare the agent’s actions and the expert’s ac- tions, considering the locations of the target and nearest enemy, calculating the distance and gener- ate the hindsight reflections based on some engi- neered rules. A.2.4 MetaWorld MetaWorld is a simulated benchmark that includes a variety of manipulation tasks performed using a Sawyer robot arm. It includes 50 types of robot manipulation tasks common in daily life. Since our main goal is not meta-learning, we select the "assembly" and "hammer" tasks for pretraining and adaptation in our experiments. This requires the agent to pick up the tool and aim at the specific tar- get with high precision. To increase the challenge of the tasks, we introduce random disturbances at random steps. This requires the robot to actively re- cover and return to its normal trajectory whenever it deviates. The agent receives a sparse reward of 1 when completing the task. Each template language is augmented to 180 template languages in the GPT template pool. Examples of hindsight and foresight languages are shown in the following: • Hindsight Examples: Template: ▷ "It’s excellent to raise the gripper." ▷ "You are making mistakes for not opening your gripper." GPT Template: ▷ "Good job for raising your gripper." ▷ "You make a regrettable mistake since your gripper is closing." next step of expert action and let the expert planner recover from the error. B.2 ALFWorld For the ALFWorld environment, we use a pre-built expert planer from LLF-Bench (Cheng et al., 2023) to work as both the expert agent and the agent for the data collection. • Foresight Examples: Template: ▷ "It’s time to grasp the wrench now." ▷ "Please raise the hammer." GPT Template: ▷ "Can you grab the wrench with your gripper?" ▷ "I think the hammer should be raised now." 
We compare the agent’s actions with the expert’s actions, and tell the agent’s whether their decisions at the previous step matches with the expert’s ac- tions, and inform them of what an expert will do at the next step. B Agent for Offline Data Collection and Language Feedback Generation We use an expert agent and a non-expert agent with sub-optimal policies during the data collection. The sub-optimal policy is used for introducing some er- rors or perturbations in the training data, and letting the expert policy continue to recover. This helps agents learn to recover from potential failures using hindsight reflections and foresight instructions. In our experiments, we introduced 10-20% random noise in each trajectory as a sub-optimal policy. We found that this level of perturbation aids learning, but excessive disturbance (e.g., >50% per trajec- tory) significantly degrades performance as agents start learning suboptimal behaviors. B.1 HomeGrid For the HomeGrid environment, we design an ex- pert planer to work as the expert agent. We first divide the task into several sub-tasks (i.e. divide "open the recycling bin" into 1. "navigate to the bin", 2. "open the bin"). For navigation (move to some place) sub-tasks, we implement breadth- first search to find the optimal path; for inter- action sub-task (interact with object), we output the corresponding action. We implement the non- expert agent by adding "perturbation" into the ex- pert planer. For example, we randomly reverse the B.3 Messenger As for the Messenger environment, we implement an expert agent using the A* algorithm (Hart et al., 1968). We define the cost by the distance to the target and the distance to the nearest enemies, and then heuristically search in the grid environment. The non-expert agent in the data collection is im- plemented by adding random disturbance to the expert agent. B.4 MetaWorld We build the expert agent on the pre-defined policy from the original MetaWorld codebase (Yu et al., 2019) and adapt the policy to random disturbance so that the expert planner can recover to a normal trajectory in any situation. C Task Settings for RQ 1 and 2 Task Setting for RQ 1. We evaluate the agents’ performance using the same tasks as in the train- ing phase (but with different initialization of the agents and object layout for different episodes). Concretely, 1) in HomeGrid, we train and evalu- ate on multi-tasks, including FIND, GET, REAR- RANGE and OPEN; 2) in ALFWorld, we train and evaluate on multi-tasks including PICK&PLACE, CLEAN&PLACE and HEAT&PLACE tasks; 3) in Messenger, we train and evaluate on the task goal “first retrieve the message and then deliver to target entity”; and 4) in MetaWorld, we train and evalu- ate on the ASSEMBLY task, in which the robot arm needs to pick up the wrench and put it on the peg. Task Setting for RQ 2. We evaluate agents’ perfor- mance on unseen tasks by first pre-training agents on certain tasks and then adapting agents to un- seen tasks with few-shot episodes. 
Specifically, 1) in HomeGrid, we take FIND, GET, REARRANGE, OPEN tasks for pre-training and the CLEAN-UP task for adaptation and evaluation; 2) in ALFWorld, we take PICK&PLACE and CLEAN&PLACE for pre- training and HEAT&PLACE tasks for adaptation and evaluation; 3) in Messenger, we take “first re- trieve the message and then deliver to target entity" as the pretraining task and “first get to the target entity and then retrieve the message" (where the order of the goal is reversed compared to the pre- training tasks) for adaptation and evaluation; 4) in MetaWorld, we take the ASSEMBLY task for pre- training, and the HAMMER task for adaptation and evaluation. D Performance under aligned language type with training. As stated in Section 6.1, we use online GPT for all evaluations in RQ 1 and 2 to mimic real-life human language environments. In this section, we align the evaluation language type (and adaptation language type in RQ 2) with each agent’s corre- sponding training language type for further investi- gation (e.g. No Language Agent is evaluated with empty language; Template Hindsight Agent is evaluated with Template Hindsight). Experiments on RQ 1 and 2 are conducted on HomeGrid and Messenger respectively, with the results presented in Table 3. Training Language Aligned Eval Online GPT Eval HomeGrid Env on RQ 1 No Lang Template H Template F Template H + F GPT-augmented H + F 0.235 0.260 0.305 0.325 0.472 0.212 0.246 0.262 0.285 0.442 Messenger Env on RQ 2 (20 Shots) Training Language Aligned Adapt & Eval Online GPT Eval No Lang GPT-augmented H GPT-augmented F GPT-augmented H + F 0.323 0.450 0.512 0.623 0.270 0.378 0.464 0.608 Table 3: Comparison of agents’ performance adapted (for RQ 2) and evaluated with aligned language type in HomeGrid environment on RQ 1 and Messenger envi- ronment on RQ 2. ‘Aligned (Adapt &) Eval’ refers to (adaptation &) evaluation with same type of language in training and ‘Online GPT Eval’ refers to online GPT evaluation (results in Section 6.2). The results show that GPT-augmented Hindsight + Foresight evaluated with online GPT still outperforms other training settings even with aligned language evaluation, indicating higher lan- guage informativeness and diversity enhance intrinsic task understanding. The results Table 3 show that: (1) aligning the informativeness and diversity levels between training, adaptation and evaluation improves the final performance for all types; (2) more impor- tantly, even with aligned evaluation and adaptation language, no other settings have outperformed GPT-augmented Hindsight + Foresight evalu- ated with online GPT. This further demonstrates that high informativeness and diversity in training language help agents intrinsically understand tasks to achieve better performance. E Impact of hindsight on future steps Compared to foresight feedback, which provides instructions for the correct action in the next step, hindsight feedback reflects on incorrect actions taken in previous steps. This retrospective analysis can still guide agents toward success by narrow- ing down the search space for corrective actions. To demonstrate the effectiveness of hindsight feed- back, we conduct a quick comparative study be- tween the No Language agent and the Template Hindsight agent in HomeGrid. The study was designed as follows: 1. Both agents are driven to the same state using an expert policy. 2. A deliberate mistake is introduced for both agents. 
Three types of mistakes are designed: • Navigation Mistake: The agent moves in the opposite direction compared to the expert action. • Object Pick/Drop Mistake: The agent picks or drops an object when the expert action is to drop or pick, respectively. • Bin Manipulation Mistake: The agent chooses the wrong action among pedal/lift/grasp to open a specific trash bin. 3. We use expert actions as the ground truth (GT) actions and compare the performance of both agents over 500 runs. The results are shown in Appendix Table 4: The Mistake Type Navigation Object Pick/Drop Bin manipulation No Lang (%) Template Hindsight (%) 37.6 ± 0.3 37.4 ± 2.5 23.5 ± 1.2 46.2 ± 0.2 41.8 ± 1.6 24.8 ± 0.9 Table 4: Comparison of performance between No Language Agent and Template Hindsight Agent on different Mistake Types. results indicate that for the navigation and object pick/drop mistakes, hindsight feedback is highly beneficial. This is because identifying a wrong ac- tion usually directly implies the correct action for those mistakes (e.g., if "turn left" is wrong, "turn right" is correct; if "pick the object" is wrong, "drop the object" is correct). However, for the bin manip- ulation mistake, hindsight feedback is less helpful G.1 HomeGrid Estimated parameter size of the models: 12.191 MB. For research question 1, we train the model with 100 trajectories. For research question 2, the pretraining stages use 6432 trajectories. The mod- els are trained on one Nvidia RTX A6000. For research question 1, training takes 3 GPU hours. For research question 2, pretraining takes 4 GPU hours and adaptation takes 3 GPU hours. Hyperpa- rameters shown in Appendix Table 5. G.2 ALFWorld Estimated parameter size of the models: 6.5 MB. For research question 1, we train the model with 1000 trajectories. For research question 2, the pre- training stages use 10000 trajectories. The models are trained in one Nvidia RTX A6000. For re- search question 1, training takes 3 GPU hours. For research question 2, pretraining takes 4 GPU hours and adaptation takes 3 GPU hours. Hyperparame- ters shown in Appendix Table 6. G.3 Messenger Estimated parameters size of the models: 289.681 MB. We train the models with 10000 data trajecto- ries during the pretraining stage for seen tasks. The pretraining stage for seen tasks takes 5 GPU hours on one Nvidia RTX A6000. The adaptation stage for unseen tasks takes 1 GPU hour. Hyperparame- ters are shown in Appendix Table 7. G.4 MetaWorld Estimated parameters size of the models: 289.681 MB. We train the models with 20000 data trajec- tories during the pretraining stage for seen tasks. The pretraining stage for seen tasks takes 2.5 GPU hours on one Nvidia RTX A6000. The adaptation stage for unseen tasks takes 1 GPU hour. Hyperpa- rameters are shown in Appendix Table 8. Figure 9: In the Messenger environment, when trained with more diverse foresight and hindsight languages, the agents can perform better than those trained without languages. Furthermore, agents trained with more infor- mative languages demonstrate stronger performance. since the action space grows larger (pedal/lift/grasp, compared to binary opposite actions in Navigation and Object Pick/Drop), and there are no clear im- plications for the correct action. F More results on the Messenger environment In the Messenger environment, models trained with only template foresight or hindsight languages struggle to generalize to diverse languages during testing. 
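Before turning to the Messenger generalization results, the comparative protocol of Appendix E above can be sketched as follows, assuming a gym-style environment whose step() returns the usual 4-tuple. The helper names (run_probe, opposite_of) and the policy call signatures are hypothetical, introduced only for illustration; the ground-truth check is reduced to whether the agent's next action matches the expert's, and episode-termination handling is omitted.

```python
import random

MISTAKE_TYPES = ("navigation", "object_pick_drop", "bin_manipulation")

def opposite_of(expert_action, mistake_type):
    """Toy mistake generator: pick any action other than the expert's."""
    action_space = {"navigation": ["left", "right", "up", "down"],
                    "object_pick_drop": ["pick", "drop"],
                    "bin_manipulation": ["pedal", "lift", "grasp"]}[mistake_type]
    return random.choice([a for a in action_space if a != expert_action])

def run_probe(env, expert_policy, agent_policy, mistake_type, warmup_steps=10):
    """One probe run, following the protocol above:
    1. drive the agent to a common state with the expert policy,
    2. inject one deliberate mistake of the given type,
    3. check whether the agent's next action matches the GT (expert) action."""
    obs = env.reset()
    for _ in range(warmup_steps):                          # step 1
        obs, _, _, _ = env.step(expert_policy(obs))
    wrong_action = opposite_of(expert_policy(obs), mistake_type)
    obs, _, _, _ = env.step(wrong_action)                  # step 2
    hindsight = f"You made a mistake by taking the bad action {wrong_action}."
    return agent_policy(obs, hindsight) == expert_policy(obs)   # step 3

def success_rate(env, expert_policy, agent_policy, mistake_type, n_runs=500):
    return sum(run_probe(env, expert_policy, agent_policy, mistake_type)
               for _ in range(n_runs)) / n_runs
```

The No Language agent would simply receive an empty string in place of the hindsight sentence.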
Without exposure to diverse languages during training, these models fail to extract the learned hindsight or foresight information from mixed and diverse languages. However, Figure 9 demonstrates that models trained with more diverse hindsight or foresight languages can overcome the generalization problem and outperform those trained without language feedback, showcasing the importance of diversity in the training languages. Furthermore, the agents trained with both hindsight and foresight information still perform the best, aligning with the results in the other environments.

G Models and Training

We build our Language-Teachable Decision Transformer based on the code of the original Decision Transformer (Chen et al., 2021). In this section, we show our training setup and model hyperparameters for each environment. When selecting the data size, we prioritize the efficient use of a small-scale dataset and examine the impact of language feedback within the constraints of a limited budget and scarce data, as is common in the field of robotics.

H Examples for Language Feedback in Evaluation

As discussed in Section 6.1, we feed template hindsight (l_hind) and template foresight (l_fore) into an online GPT to generate language feedback as a proxy for real-world human feedback, which can be further extended into multi-turn human-machine dialogue systems in task-oriented settings (He et al., 2022a,b,c). In Figure 10, we demonstrate three examples of the GPT outcome. In example 1, we find GPT can concatenate both hindsight and foresight
Hyperparameters Number of transformer layers Number of attention heads Embedding dimension Nonlinearity function Batch size Context length K Return-to-go conditioning Return scale Dropout Optimizer Learning Rate Weight decay Learning rate decay Value 5 2 256 ReLU 128 for pertaining and 5 for adaptation 12 20 10 0.1 AdamW 1e−5 for pertaining and 1e−6 for adaptation 1e−4 Linear warmup for first 1e5 training steps Table 8: Hyperparameters of Language-Teachable Deci- sion Transformer for MetaWorld experiments. Figure 10: Examples for language feedback generated by online GPT in evaluation. Good effort, but the fruit is in the kitchen area.(Concatenate H and F into a fluent sentence.)H: Your efforts up to now haven't gone unnoticed.F: The fruit is in the kitchen area.You should turn around and face the opposite way.(Discard the hindsight.)H: You seem to be veering off the right track.F: Could you swivel to face the opposite way?(empty)(Decide not to respond.)H: So far, you're showing a lot of promise.F: Check the living room for the plates.123
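The hyperparameters of Tables 5–8 can be collected in a small configuration object. The sketch below mirrors Table 5 (HomeGrid); the class names, the way the four modalities (return, state, action, language) are interleaved, and the language-embedding input are assumptions for illustration, not the released implementation.

```python
from dataclasses import dataclass
import torch
from torch import nn

@dataclass
class LTDTConfig:                  # values from Table 5 (HomeGrid)
    n_layers: int = 3
    n_heads: int = 1
    d_embed: int = 128
    context_len: int = 10          # context length K
    return_to_go: float = 1.5
    dropout: float = 0.1
    lr: float = 1e-4
    weight_decay: float = 1e-4
    grad_norm_clip: float = 0.25
    warmup_steps: int = 100_000

class LanguageTeachableDT(nn.Module):
    """Minimal causal transformer over interleaved (return, state, action, language) tokens."""
    def __init__(self, cfg: LTDTConfig, state_dim: int, act_dim: int, lang_dim: int):
        super().__init__()
        self.embed_return = nn.Linear(1, cfg.d_embed)
        self.embed_state = nn.Linear(state_dim, cfg.d_embed)
        self.embed_action = nn.Linear(act_dim, cfg.d_embed)
        self.embed_lang = nn.Linear(lang_dim, cfg.d_embed)   # e.g. output of a frozen sentence encoder
        block = nn.TransformerEncoderLayer(
            d_model=cfg.d_embed, nhead=cfg.n_heads, dropout=cfg.dropout,
            activation="relu", batch_first=True)
        self.backbone = nn.TransformerEncoder(block, num_layers=cfg.n_layers)
        self.predict_action = nn.Linear(cfg.d_embed, act_dim)

    def forward(self, returns, states, actions, lang):
        # shapes: returns (B, K, 1), states (B, K, state_dim),
        #         actions (B, K, act_dim), lang (B, K, lang_dim)
        tokens = torch.stack([self.embed_return(returns), self.embed_state(states),
                              self.embed_action(actions), self.embed_lang(lang)], dim=2)
        tokens = tokens.flatten(1, 2)                        # (B, 4*K, d_embed)
        causal = nn.Transformer.generate_square_subsequent_mask(tokens.size(1))
        h = self.backbone(tokens, mask=causal)
        return self.predict_action(h[:, 1::4])               # read actions off the state positions

cfg = LTDTConfig()
model = LanguageTeachableDT(cfg, state_dim=64, act_dim=8, lang_dim=384)
optim = torch.optim.AdamW(model.parameters(), lr=cfg.lr, weight_decay=cfg.weight_decay)
sched = torch.optim.lr_scheduler.LambdaLR(
    optim, lambda step: min((step + 1) / cfg.warmup_steps, 1.0))   # linear warmup (Table 5)
```

Adaptation would reuse the same model with the smaller batch sizes and learning rates listed in Tables 7 and 8.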
synthetic_cpt
7
ELLE_Efficient_Lifelong_Pre-training_for_Emerging_Data.pdf
CERN-TH-2018-127 8 1 0 2 n u J 5 1 ] h p - p e h [ 1 v 1 4 9 5 0 . 6 0 8 1 : v i X r a ZZ production at the LHC: NNLO predictions for 2(cid:96)2ν and 4(cid:96) signatures Stefan Kallweit and Marius Wiesemann TH Division, Physics Department, CERN, CH-1211 Geneva 23, Switzerland [email protected] [email protected] Abstract We consider QCD radiative corrections to ZZ production for all experimentally relevant leptonic processes. We report on a novel computation of next-to-next-to-leading-order (NNLO) corrections to the diboson signature with two charged leptons and missing transverse energy ((cid:96)(cid:96)+Emiss ). All relevant final states are considered: (cid:96)(cid:96)ν(cid:96)ν(cid:96), (cid:96)(cid:96)ν(cid:96)(cid:48)ν(cid:96)(cid:48) and (cid:96)ν(cid:96)(cid:96)(cid:48)ν(cid:96)(cid:48). We also study processes with four charged leptons: (cid:96)(cid:96)(cid:96)(cid:96) and (cid:96)(cid:96)(cid:96)(cid:48)(cid:96)(cid:48). For the first time NNLO accuracy is achieved for a process mixing two double-resonant diboson topologies (ZZ/W +W −→ (cid:96)(cid:96)ν(cid:96)ν(cid:96)). We find good agreement with ATLAS data at 8 TeV. NNLO corrections are large (5–20% and more), and interference effects between ZZ and W +W − resonances turn out to be negligible in most cases. T Diboson processes play a major role in the rich physics programme of the LHC. The intriguing nature of these processes combined with their rather clean experimental signatures and relatively large cross sections render them ideal for Standard Model (SM) precision measurements. The precise knowledge of diboson rates and distributions provides a strong test of the gauge-symmetry structure of electroweak (EW) interactions and the mechanism of EW symmetry breaking. They also serve as important probes of new physics phenomena in direct and indirect searches. Diboson final states, in particular ZZ and W +W −, are also extensively used in Higgs-boson measurements. The production of ZZ pairs yields the smallest cross section among the diboson processes. Never- theless, its pure experimental signature with four charged leptons in the final state facilitates a clean measurement so that it has already been used in a combination of ATLAS and CMS data to constrain anomalous trilinear gauge couplings [1]. ZZ production at the LHC has been measured at 7 TeV [2–4], 8 TeV [5–9], and 13 TeV [10–13]. Also searches for new heavy ZZ resonances involving both charged leptons and neutrinos have been performed, see Ref. [14] for example. Theoretical predictions for ZZ production at next-to-leading order (NLO) QCD were obtained a long time ago for both on-shell Z bosons [15, 16] and their fully leptonic final states [17–20]. Perturbative corrections beyond NLO QCD are indispensable to reach the precision demanded by present ZZ measurements. NLO EW corrections are known for stable Z bosons [21–23] and including their full off-shell treatment for leptonic final states [24–26]. ZZ+jet production was computed at NLO QCD [27]. The loop-induced gg → ZZ + X subprocess, which provides a separately finite O(α2 S) contribution, is known at leading order (LO) [28–37] and was recently computed at NLO considering only gg-initiated partonic channels [38–40], using the two-loop 1 Figure 1: Born-level Feynman diagrams for ZZ production with four charged final-state leptons. (a) (b) helicity amplitudes for gg → V V (cid:48) of Refs. [41, 42]. NNLO QCD corrections to on-shell ZZ production were first evaluated in Ref. [43], and later in Ref. [44]. 
Using the two-loop helicity amplitudes for q ¯q → V V (cid:48) [45–47], differential predictions in the four-lepton channels ((cid:96)(cid:96)(cid:96)(cid:96) and (cid:96)(cid:96)(cid:96)(cid:48)(cid:96)(cid:48)) were presented in Ref. [48]. In this paper we complete NNLO QCD corrections to ZZ production by considering all experi- mentally relevant leptonic final states. Our computations are fully differential in the momenta of the final-state leptons, and we account for off-shell effects and spin correlations by consistently including all resonant and non-resonant topologies. For the first time, we obtain NNLO-accurate predictions for the (same-flavour) dilepton plus missing transverse energy signature ((cid:96)(cid:96)+Emiss ), which involves all processes with two opposite-charge leptons and two neutrinos in the final state ((cid:96)(cid:96)ν(cid:96)ν(cid:96), (cid:96)(cid:96)ν(cid:96)(cid:48)ν(cid:96)(cid:48) and (cid:96)ν(cid:96)(cid:96)(cid:48)ν(cid:96)(cid:48)). The process (cid:96)(cid:96)ν(cid:96)ν(cid:96) is particularly interesting as it mixes ZZ and W +W − topologies, which will be studied in detail. For completeness we also compute NNLO corrections to the four-lepton channels ((cid:96)(cid:96)(cid:96)(cid:96) and (cid:96)(cid:96)(cid:96)(cid:48)(cid:96)(cid:48)). Phenomenological predictions at NNLO for all of the aforementioned leptonic processes are compared to LHC data at 8 TeV. We employ the computational framework Matrix [49]. All tree-level and one-loop amplitudes are evaluated with OpenLoops1 [54, 55]. At two-loop level we use the q ¯q → V V (cid:48) amplitudes of Ref. [47], and implement the leptonic final states with two charged leptons and two neutrinos as well as with four charged leptons. NNLO accuracy is achieved by a fully general implementation of the qT -subtraction formalism [56] within Matrix. The NLO parts therein (for ZZ and ZZ+1-jet) are performed by Munich2 [59], which employs the Catani–Seymour dipole subtraction method [60, 61]. The Matrix framework features NNLO QCD corrections to a large number of colour- singlet processes at hadron colliders, and has already been used to obtain several state-of-the-art NNLO predictions [43, 48, 62–69].3 T We consider all leptonic signatures relevant for ZZ measurements at the LHC. On the one hand, we compute the four-lepton (4(cid:96)) processes pp → (cid:96)+(cid:96)− (cid:96)(cid:48)+(cid:96)(cid:48)− + X, with different-flavour (DF) leptons ((cid:96) (cid:54)= (cid:96)(cid:48)), denoted as (cid:96)(cid:96)(cid:96)(cid:48)(cid:96)(cid:48), and same-flavour (SF) leptons ((cid:96) = (cid:96)(cid:48)), denoted as (cid:96)(cid:96)(cid:96)(cid:96). Representative LO diagrams are shown in Figure 1. They involve both double-resonant t-channel ZZ production (panel a) and single-resonant s-channel Drell–Yan (DY) topologies (panel b). On the other hand, we compute processes with two charged leptons and two 1OpenLoops relies on the fast and stable tensor reduction of Collier [50, 51], supported by a rescue system based on quad-precision CutTools[52] with OneLOop[53] to deal with exceptional phase-space points. 2The Monte Carlo program Munich features a general implementation of an efficient, multi-channel based phase-space integration and computes both NLO QCD and NLO EW [57, 58] corrections to arbitrary SM processes. 3It was also used in the NNLL+NNLO computation of Ref. [70], and in the NNLOPS computation of Ref. [71]. 
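For orientation, the qT-subtraction formalism of Ref. [56] mentioned above can be written schematically as follows; this is the generic master formula of the method as given in the literature, not an equation taken from the present text:

```latex
\mathrm{d}\sigma^{VV'}_{\mathrm{(N)NLO}}
  \;=\; \mathcal{H}^{VV'}_{\mathrm{(N)NLO}} \otimes \mathrm{d}\sigma^{VV'}_{\mathrm{LO}}
  \;+\; \Big[\, \mathrm{d}\sigma^{VV'+\mathrm{jet}}_{\mathrm{(N)LO}}
             - \mathrm{d}\sigma^{\mathrm{CT}}_{\mathrm{(N)NLO}} \,\Big],
```

where the hard-collinear coefficient H acts on the LO cross section, the V V′+jet contribution is the part handled at (N)LO with dipole subtraction by Munich, and the counterterm renders the square bracket finite as the transverse momentum of the V V′ pair vanishes.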
2 q¯qℓ+ℓ−ℓ′−ℓ′+Z/γqZ/γq¯qℓ+ℓ−ℓ′−ℓ′+Z/γℓ−Z/γ (a) (b) (c) (d) (e) Figure 2: Born-level Feynman diagrams for the production of two charged leptons and two neutrinos: (a-b) topologies of ZZ production contributing to the process pp → (cid:96)+(cid:96)− ν(cid:96)(cid:48) ¯ν(cid:96)(cid:48) ((cid:96) (cid:54)= (cid:96)(cid:48)); (c-e) topologies of W +W − production contributing to the process pp → (cid:96)+ν(cid:96) (cid:96)(cid:48)−¯ν(cid:96)(cid:48) ((cid:96) (cid:54)= (cid:96)(cid:48)); for (cid:96) = (cid:96)(cid:48) all diagrams contribute to the process pp → (cid:96)+(cid:96)− ν(cid:96)¯ν(cid:96), thereby mixing ZZ and W +W − topologies. neutrinos (2(cid:96)2ν) in the final state, pp → (cid:96)+(cid:96)− ν(cid:96)(cid:48) ¯ν(cid:96)(cid:48) + X, pp → (cid:96)+ν(cid:96) (cid:96)(cid:48)−¯ν(cid:96)(cid:48) + X, and pp → (cid:96)+(cid:96)− ν(cid:96)¯ν(cid:96) + X, with (cid:96) (cid:54)= (cid:96)(cid:48). Representative LO diagrams are shown in Figure 2. In the first process the flavour of the neutrinos does not match the flavour of the charged leptons, and it features double-resonant ZZ contribu- tions (panel a) as well as DY-type topologies (panel b). In the second process the two charged leptons are of different flavours, and it features double-resonant W +W − contributions (panels c and d) as well as DY-type topologies (panel e). In the third process all leptons and neutrinos are of the same flavour, and the topologies of the first two processes mix in the matrix elements. All of the aforementioned processes with charged leptons (cid:96), (cid:96)(cid:48) ∈ {e, µ} and neutrinos ν(cid:96), ν(cid:96)(cid:48) ∈ {νe, νµ, ντ } are studied. The loop-induced gg component is part of the NNLO corrections to these processes and therefore included. The same is true for resonant Higgs-boson topologies, which also start contributing at O(α2 s). A significant complication of the processes pp → (cid:96)+ν(cid:96) (cid:96)(cid:48)−¯ν(cid:96)(cid:48) and pp → (cid:96)+(cid:96)− ν(cid:96)¯ν(cid:96) is posed by the contamination from resonant top-quark contributions with t → W b decays, which enters radiative corrections through diagrams featuring external bottom quarks. In the context of W +W − production [64, 65] two approaches were followed: A top-free W +W − cross section can be obtained in the four-flavour scheme (4FS) by dropping all contributions with real bottom quarks, which are separately finite due to the bottom-quark mass. Since in the five-flavour scheme (5FS) real and virtual contributions of massless bottom quarks are inevitably tied together, the resonance structure of top-quark contributions is exploited to determine a top-free cross section. Neither of the two approaches is required in the case of the ZZ measurements presented here. Since W +W − and top-quark processes are both treated as backgrounds in the respective experimental analyses, we introduce the following procedure: First, we compute the SF process pp → (cid:96)+(cid:96)− ν(cid:96)¯ν(cid:96) including In order to keep only ZZ topologies (and interferences), we then all resonant contributions. 3 q¯qℓ+ℓ−νℓ′¯νℓ′Z/γqZq¯qℓ+ℓ−νℓ′¯νℓ′Zℓ−Z/γu¯uℓ+νℓℓ′−¯νℓ′W+dW−q¯qℓ+νℓℓ′−¯νℓ′W+W−Z/γq¯qℓ+νℓℓ′−¯νℓ′W−ℓ−Z/γ subtract the DF process pp → (cid:96)+ν(cid:96) (cid:96)(cid:48)−¯ν(cid:96)(cid:48). This removes W +W − and top-quark backgrounds from our predictions, as desired, while their interference with ZZ production, which is not accounted for in the background predictions and thus considered part of the ZZ signal, is kept. 
Its impact will be studied in detail below. If W +W − or top-quark topologies yield much larger contributions than ZZ to the SF process, sizeable cancellations in the subtraction could diminish the numerical accuracy of our predictions. However, for typical ZZ signal cuts, as considered here, a Z-mass window suppresses the W +W − contribution, and a jet veto the top-quark background. The presented procedure applies in all flavour schemes, and we conveniently use the 5FS throughout. 2 Gµm2 W = (m2 W − iΓW mW )/(m2 We present predictions for the 8 TeV LHC. For the EW parameters we employ the Gµ scheme and compute the EW mixing angle as cos θ2 Z − iΓZ mZ) and α = √ W sin2 θW /π, using the complex-mass scheme [72] throughout. The EW inputs are set to the PDG [73] values: GF = 1.16639 × 10−5 GeV−2, mW = 80.385 GeV, ΓW = 2.0854 GeV, mZ = 91.1876 GeV, ΓZ = 2.4952 GeV, mH = 125 GeV, and ΓH = 0.00407. The branching ratio of the Z-boson decay into massless charged leptons, (cid:96) ∈ {e, µ}, is BR(Z → (cid:96)(cid:96)) = 0.033631, which is used below to compute the cross section in the total phase space. The on-shell top-quark mass is set to mt = 173.2 GeV, and Γt = 1.44262 is used. For each perturbative order we use the corresponding set of Nf = 5 NNPDF3.0 [74] parton distributions with αS(mZ) = 0.118. Renormalization (µR) and factorization (µF ) scales are set to half of the invariant mass of the ZZ pair, µR = µF = µ0 ≡ 1 2 mZZ. Residual uncertainties are estimated from customary 7-point scale variations by a factor of two, with the constraint 0.5 ≤ µR/µF ≤ 2. We start by comparing phenomenological predictions to the ATLAS 8 TeV measurement of Ref. [9]. The corresponding phase-space cuts are summarized in Table 1 for both the four-lepton and the (cid:96)(cid:96)+Emiss signatures. The total phase space is defined by a Z-mass window in the invariant mass of each reconstructed Z boson. The reconstruction is unambiguous in the DF channel (cid:96)(cid:96)(cid:96)(cid:48)(cid:96)(cid:48), T definition of the total phase space for pp → ZZ + X 66 GeV ≤ mZrec a/b ≤ 116 GeV definition of the fiducial volume for pp → (cid:96)+(cid:96)−(cid:96)(cid:48)+(cid:96)(cid:48)− + X, (cid:96), (cid:96)(cid:48) ∈ {e, µ} pT,(cid:96) > 7 GeV, one electron with |ηe| < 4.9, the others |ηe| < 2.5, |ηµ| < 2.7 ∆R(cid:96)(cid:96) > 0.2, ∆R(cid:96)(cid:96)(cid:48) > 0.2, 66 GeV ≤ mZrec a/b ≤ 116 GeV, anti-kT jets with R = 0.4, pT,j > 25 GeV, |ηj| < 4.5 lepton identification in SF channel: minimizing differences of invariant-mass of OSSF lepton pairs and mZ definition of the fiducial volume for pp → (cid:96)+(cid:96)−ν ¯ν + X, (cid:96) ∈ {e, µ} and ν ∈ {νe, νµ, ντ } pT,(cid:96) > 25 GeV, |η(cid:96)| < 2.5, ∆R(cid:96)(cid:96) > 0.3, 76 GeV ≤ m(cid:96)+(cid:96)− ≤ 106 GeV, Axial-pmiss T > 90 GeV, pT -balance < 0.4, Njets = 0, anti-kT jets with R = 0.4, pT,j > 25 GeV, |ηj| < 4.5 and ∆Rej > 0.3 Table 1: Phase-space definitions of the ZZ measurements by ATLAS at 8 TeV [9]. 
4 channel σLO [fb] σNLO [fb] σNNLO [fb] σATLAS [fb] e+e−µ+µ− 8.188(1)+2.4% e+e−e+e− 4.654(0)+2.3% µ+µ−µ+µ− 3.565(0)+2.6% −0.5(syst) +0.3 −0.2(lumi) −2.2% 12.4 +1.0 −3.2% 11.30(0)+2.5% −3.1% 6.410(2)+2.5% −3.5% 4.969(5)+2.5% −0.5% 4.806(1)+3.5% −0.5% 4.770(4)+3.6% −2.0% 12.92(1)+2.8% −2.0% 7.310(8)+2.7% −2.0% 5.688(6)+2.9% −3.9% 5.083(8)+1.9% −4.0% 5.035(9)+1.8% −2.1% 5.9 +0.8 −2.2% 4.9 +0.6 −0.6% 5.0 +0.8 −0.5% 4.7 +0.7 −1.0(stat) +0.6 −0.8(stat) +0.4 −0.5(stat) +0.3 −0.7(stat) +0.5 −0.7(stat) +0.5 −0.4(syst) ± 0.1(lumi) −0.2(syst) ± 0.1(lumi) −0.4(syst) ± 0.1(lumi) −0.4(syst) ± 0.1(lumi) 5.558(0)+0.1% 5.558(0)+0.1% 4982(0)+1.9% −2.7% 6754(2)+2.4% −2.0% 7690(5)+2.7% −2.1% 7300 +400 −400(stat) +300 −300(syst) +200 −100(lumi) e+e−νν µ+µ−νν total rate Table 2: Predictions for fiducial and total rates compared to ATLAS 8 TeV data [9]. a and Zb = (cid:96)+ a = (cid:96)+(cid:96)− and Z rec b = (cid:96)(cid:48)+(cid:96)(cid:48)−, which we employ for the predicted cross sections in the total Z rec phase space. The fiducial cuts involve standard requirements on the transverse momenta and pseudo-rapidities of the leptons, a separation in ∆R = (cid:112)∆η2 + ∆φ2 between the leptons, and a window in the invariant mass of reconstructed Z bosons around the Z-pole. In the SF channel (cid:96)(cid:96)(cid:96)(cid:96), Z bosons are reconstructed by identifying the combination of opposite-sign same-flavour a (cid:96)− b and Zb = (cid:96)+ b (cid:96)− a (cid:96)− (OSSF) lepton pairings (Za = (cid:96)+ b (cid:96)− a ) that minimizes |mZa − mZ| + |mZb − mZ| with the reconstructed Z bosons Z rec a = Za and Z rec b = Zb. A rather special feature in the fiducial phase spaces of the four-lepton channels is the fact that ATLAS measures one of the electrons up to very large pseudo-rapidities (|ηe| < 4.9). The measurement of the (cid:96)(cid:96)+Emiss signature applies two additional requirements, which force the two Z bosons closer to back-to-back-like configurations to suppress backgrounds such as Z+jets: There is a lower cut on the axial missing transverse momentum, Axial-pmiss T ≡ pT,νν and ∆φ(cid:96)(cid:96),νν is the azimuthal angle between the dilepton and the neutrino pair. Furthermore, the two Z-boson momenta are balanced by putting an upper cut on pT -balance = |pmiss T − pT,(cid:96)(cid:96)|/pT,(cid:96)(cid:96). Finally, the (cid:96)(cid:96)+Emiss signature requires a jet veto to suppress top-quark backgrounds. Note that jets close to electrons (∆Rej < 0.3) are not vetoed. · cos (∆φ(cid:96)(cid:96),νν), where pmiss b , or Za = (cid:96)+ T = −pmiss T T T In Table 2 we report cross-section predictions and compare them against ATLAS 8 TeV results [9]. Central predictions are stated with the numerical error on the last digit quoted in round brackets. The relative uncertainties quoted in percent are estimated from scale variations as described above. Results reported for e+e−µ+µ−, e+e−e+e−, µ+µ−µ+µ−, e+e−ν ¯ν, and µ+µ−ν ¯ν production are cross sections in the respective fiducial volumes defined in Table 1. The prediction in the last line of the table is obtained from the computation of pp → e+e−µ+µ− + X in the total phase space defined in Table 1, by dividing out the branching ratio BR(Z → (cid:96)(cid:96)) for each Z-boson decay. The main conclusions that can be drawn from these results are the following: • Radiative corrections are large and have a marked dependence on the event selection: They range between +35% to +40% at NLO and +14% to +17% at NNLO in cases without a jet veto, i.e. 
for all but the 2(cid:96)2ν results. Roughly half (45%–55%) of the O(α2 s) terms are due to the loop-induced gg component in these cases. For the 2(cid:96)2ν processes the situation is quite different: Due to the jet veto NLO corrections turn negative and yield about −14%. NNLO corrections are roughly +6%. However, the positive effect is entirely due to loop-induced gg 5 contributions, which are not affected by the jet veto. Omitting the loop-induced gg terms, the genuine NNLO corrections to the q ¯q channel are actually negative and about −5%. Hence, despite the jet veto, full O(α2 s) corrections are crucial for the (cid:96)(cid:96)+Emiss signature. T • For channels with four charged leptons we find good agreement between theory and data. This is particularly true for the DF process (e+e− µ+µ−), where NNLO corrections clearly improve the comparison. In the SF channels (e+e− e+e− and µ+µ− µ+µ−) NNLO predictions are slightly larger than the measurements, but remain within 1σ for muons and 2σ for electrons. One should not forget that EW corrections reduce the rates by a few percent [25], while NLO corrections to the loop-induced gg channel have a positive effect [38]. • For the (cid:96)(cid:96)+Emiss T signatures excellent agreement is found between NNLO predictions and It is worth noting that fixed-order results describe the data measured cross sections. significantly better than the Powheg [75–78] Monte Carlo prediction used in Ref. [9]. This could be caused by the jet-veto requirement: As pointed out in Ref. [79] for W +W − production, in presence of a jet veto the fiducial rate predicted by Powheg is rather small. • The NNLO prediction in the last line of the table agrees perfectly (< 1σ) with the experimental result in the total phase space, with NNLO corrections being crucial for this level of agreement. • At LO scale uncertainties clearly underestimate the actual size of higher-order corrections, since only the q ¯q channel contributes and the cross section is µR-independent. Given large NLO corrections, also the scale uncertainties of 2%–4% at NLO cannot be trusted as an estimate of missing higher-order terms. However, at NNLO all partonic channels are included, and the corrections to the q ¯q channel, which are much smaller than at NLO, are of the same order as the respective scale variations. Therefore, NNLO uncertainties may be expected to reflect the size of yet un-calculated perturbative corrections to this channel. Only the loop-induced gg component underestimates the uncertainty due to its LO nature, which is known from the sizable NLO contributions to the gg channel [38]. We now turn to discussing differential distributions. Figure 3 shows results for the production of four charged leptons in the total phase space. Theoretical predictions in these plots are obtained from the DF process pp → e+e− µ+µ− + X, divided by the branching ratio BR(Z → (cid:96)(cid:96)) for each Z-boson decay. The measured results are extrapolated to the total phase space, as presented by ATLAS at 8 TeV [9]. Given that one electron is measured up to absolute pseudo-rapidities of 4.9, the extrapolation factor, and possibly the ensuing uncertainty, is smaller than in other four-lepton measurements. Nevertheless, we reckon that a direct comparison against unfolded distributions in the fiducial volume is preferable, as it is less affected by the lower perturbative accuracy of the Monte Carlo generator used for the extrapolation. 
However, since no such experimental results are available in the four-lepton channel from ATLAS at 8 TeV, we perform the comparison in the total phase space. We have normalized the ATLAS distributions to the measured total cross section in the last line of Table 2. Despite the fact that the comparison is done in the total phase space, theory predictions and measured cross sections are in reasonable agreement for the observables shown in Figure 3, which are the rapidity difference of the reconstructed Z bosons, ∆yZ1,Z2 (panel a), the azimuthal angle between the two leptons of the harder Z boson, ∆φ(cid:96)+ (panel b), the transverse momentum Z1 of the leading Z boson, pT,Z1 (panel c), and the number of jets, Njets (panel d). Overall, NNLO predictions provide the best description of data, although NLO results are similarly close, while LO is far off. Note that for the jet multiplicity the effective perturbative accuracy of the (fixed-order) ,(cid:96)− Z1 6 (a) (b) (c) (d) Figure 3: Differential distributions for the four-lepton processes in the total phase space at LO (black, dotted), NLO (red, dashed) and NNLO (blue, solid), compared to ATLAS 8 TeV data extrapolated to the total phase space [9] (green points with error bars); for (a) ∆yZ1,Z2, (b) ∆φ(cid:96)+ Z1 , (c) pT,Z1, and (d) Njets; the lower frames show the ratio over NLO. ,(cid:96)− Z1 7 dσ/d|ΔyZ1,Z2| [pb]ℓℓℓ(')ℓ(')@LHC 8 TeV (ATLAS data)LONLONNLOdata 0 1 2 3 4 5 6produced with MATRIX|ΔyZ1,Z2| dσ/dσNLO 0 0.5 1 1.5 20.40.81.24 0dσ/dΔϕℓ+Z1ℓ-Z1[pb/rad]ℓℓℓ(')ℓ(')@LHC 8 TeV (ATLAS data)LONLONNLOdata 0.5 1 1.5 2 2.5 3 3.5 4 4.5 5 5.5 6produced with MATRIXΔϕℓ+Z1ℓ-Z1 [rad]dσ/dσNLO 0 0.5 1 1.5 201.31.92.32.7πdσ/dpT,Z1 [pb/GeV]ℓℓℓ(')ℓ(')@LHC 8 TeV (ATLAS data)LONLONNLOdata10-510-410-310-210-1produced with MATRIXpT,Z1 [GeV]dσ/dσNLO 0 0.5 1 1.5 2030601002001500σ [pb]ℓℓℓ(')ℓ(')@LHC 8 TeV (ATLAS data)LONLONNLOdata 0 1 2 3 4 5 6 7 8produced with MATRIX dσ/dσNLO 0.4 0.6 0.8 1 1.2total rate0 jets1 jet2-10 jets (a) (b) (c) Figure 4: Differential distributions of the 2(cid:96)2ν processes with fiducial cuts at LO (black, dotted), NLO (red, dashed) and NNLO (blue, solid), compared to ATLAS 8 TeV data [9] (green points with error bars); for (a) pT,(cid:96)(cid:96), (b) mT,ZZ, and (c) ∆φ(cid:96)(cid:96); the lower frame shows the ratio over NLO. predictions is degraded by one order for each added jet. NNLO effects on other distributions are large, but primarily affect the normalization and not the shapes. We continue our discussion of differential results with the (cid:96)(cid:96)+Emiss signature in Figure 4, which shows the distributions in the transverse momentum of the dilepton pair, pT,(cid:96)(cid:96) (panel a), the transverse mass of the ZZ pair, defined as4 T (cid:115)(cid:18)(cid:113) T,(cid:96)(cid:96) + m2 p2 Z + (cid:113) (pmiss T )2 + m2 Z (cid:19)2 mT,ZZ = − (pT,(cid:96)(cid:96) + pmiss T )2 (panel b), and the azimuthal angle between the two leptons, ∆φ(cid:96)(cid:96) (panel c). The results correspond to the sum of all channels including both SF ((cid:96)(cid:96) ν(cid:96)ν(cid:96)) and DF ((cid:96)(cid:96) ν(cid:96)(cid:48)ν(cid:96)(cid:48)) processes ((cid:96) ∈ {e, µ}, ν(cid:96)(cid:48) ∈ {νe, νµ, ντ }, (cid:96) (cid:54)= (cid:96)(cid:48)). We recall that SF contributions are computed by subtracting W +W − and top-quark backgrounds as outlined before. For all three distributions in Figure 4 we find excellent agreement between theory and data. At NNLO, differences hardly exceed the 1σ level. 
Although NNLO corrections change the cross section in certain bins, the experimental uncertainties are still too large for more distinct conclusions. Similar to our previous observations for fiducial rates, the agreement found here at fixed order is a significant improvement over the comparison with the Monte Carlo prediction shown in Ref. [9]. As pointed out before, we expect a poor modelling of the jet veto by the Powheg generator to be the main source of these differences, see also Ref. [79]. In the remainder of this paper we focus on the (cid:96)(cid:96)+Emiss signature, with the same fiducial setup as before. In Figure 5 we have picked three out of many observables where the importance of NNLO corrections is evident. The NLO(cid:48)+gg result in the ratio frame denotes the sum of the NLO and the loop-induced gg cross section, both evaluated with NNLO PDFs, which was the best prediction available in the past. Its difference compared to the complete NNLO QCD result shows the size of the genuine O(α2 S) corrections to the q ¯q channel, computed for the first time in this T 4Boldface is used to indicate the vectorial sum of the dilepton and missing transverse momentum. 8 dσ/dpT,ℓℓ [fb/GeV]2ℓ2ν@LHC 8 TeV (ATLAS data)LONLONNLOdata 0 0.02 0.04 0.06 0.08 0.1 0.12 0.14produced with MATRIXpT,ℓℓ [GeV]dσ/dσNLO 0 0.5 1 1.5 2601001501500dσ/dmT,ZZ [fb/GeV]2ℓ2ν@LHC 8 TeV (ATLAS data)LONLONNLOdata 0 0.01 0.02 0.03 0.04 0.05 0.06 0.07 0.08produced with MATRIXmT,ZZ [GeV]dσ/dσNLO 0 0.5 1 1.5 22202803304001500dσ/dΔϕℓℓ [fb/rad]2ℓ2ν@LHC 8 TeV (ATLAS data)LONLONNLOdata 0 2 4 6 8 10 12produced with MATRIXΔϕℓℓ [rad]dσ/dσNLO 0 0.5 1 1.5 200.81.21.6π (a) (b) (c) Figure 5: Same as Figure 4, but without data and for the distributions (a) ∆φ(cid:96)(cid:96), (b) pT,(cid:96)1, and (c) pmiss ; for reference, also the NLO(cid:48)+gg result (green, dash-dotted) is shown in the ratio frame. T paper. For example, the ∆φ(cid:96)(cid:96) distribution in Figure 5 (panel a) develops a sizable NNLO/NLO K-factor up to 1.6 for large separations. From the considerable differences between NNLO and NLO(cid:48)+gg curves, which also concern their shapes, it is clear that this effect stems directly from the newly computed O(α2 S) contributions. In this phase-space region (large ∆φ(cid:96)(cid:96)) the perturbative accuracy is effectively diminished by one order due to the phase-space cuts which force the two Z bosons to be boosted and approximately back-to-back, so that the two decay leptons disfavour large separations. This manifests itself also in a widening of the scale uncertainty bands. Also the transverse-momentum spectrum of the hardest lepton, pT,(cid:96)1 in Figure 5 (panel b) features a significant shape distortion at NNLO, when compared to both NLO and NLO(cid:48)+gg. The same is true for the missing transverse momentum, pmiss in Figure 5 (panel c). In all cases perturbative uncertainties are clearly reduced upon inclusion of higher-order corrections. T We complete our discussion of phenomenological results by studying the size of ZZ, W +W −, and interference contributions entering the SF process pp → (cid:96)+(cid:96)− ν(cid:96)¯ν(cid:96). We recall that W +W − contributions also involve resonant top-quark topologies. In contrast to our previous discussion, W +W − and top-quark contributions are not subtracted from the SF process in the following. We focus on the contamination of the (cid:96)(cid:96)+Emiss signature through interference with W +W − and top-quark diagrams. 
To this end, Figure 6 compares the NNLO cross section for the full process of two OSSF leptons and two neutrinos, σ((cid:96)(cid:96) νe/µ/τ νe/µ/τ ) = σ((cid:96)(cid:96) ν(cid:96)ν(cid:96)) + 2 · σ((cid:96)(cid:96) ν(cid:96)(cid:48)ν(cid:96)(cid:48)) for (cid:96) ∈ {e, µ} and (cid:96) (cid:54)= (cid:96)(cid:48) with the same NNLO cross section, where the SF channel is approximated by the incoherent sum of the two DF processes, σ((cid:96)(cid:96) νe/µ/τ νe/µ/τ ) ≈ 3 · σ((cid:96)(cid:96) ν(cid:96)(cid:48)ν(cid:96)(cid:48)) + σ((cid:96)ν(cid:96) (cid:96)(cid:48)ν(cid:96)(cid:48)). The difference of the two is precisely the remaining interference contribution of ZZ with W +W − (and top-quark) topologies which we want to study. For completeness, also the individual DF ZZ and DF W +W − cross sections, 3 · σ((cid:96)(cid:96) ν(cid:96)(cid:48)ν(cid:96)(cid:48)) and σ((cid:96)(cid:96)(cid:48) ν(cid:96)ν(cid:96)(cid:48)), respectively, are shown, whose sum is the approximated cross section. T It is instructive to consider the invariant mass of the charged leptons, m(cid:96)+(cid:96)−, in Figure 6 (panel a), which nicely illustrates the nature of the different results: Only ZZ topologies feature a resonance at m(cid:96)+(cid:96)− = mZ, while the DF W +W − prediction is almost flat in this range of m(cid:96)+(cid:96)−. 9 dσ/dΔϕℓℓ [fb/rad]2ℓ2ν@LHC 8 TeVLONLONNLO10-410-310-210-1100101produced with MATRIXΔϕℓℓ [rad]dσ/dσNLONLO'+gg 0.8 1 1.2 1.4 1.6 1.8 0 0.5 1 1.5 2 2.5 3dσ/dpT,ℓ1 [fb/GeV]2ℓ2ν@LHC 8 TeVLONLONNLO10-410-310-210-1100produced with MATRIXpT,ℓ1 [GeV]dσ/dσNLONLO'+gg 0.8 0.9 1 1.1 1.2 1.3 1.4 1.5 1.6 0 100 200 300 400 500dσ/dpTmiss [fb/GeV]2ℓ2ν@LHC 8 TeVLONLONNLO10-610-510-410-310-210-1100produced with MATRIXpTmiss [GeV]dσ/dσNLONLO'+gg 0.7 0.8 0.9 1 1.1 1.2 0 100 200 300 400 500 600 700 800 900 1000 (a) (b) (c) Figure 6: Comparison of NNLO cross sections for the full process σ((cid:96)(cid:96) νe/µ/τ νe/µ/τ ) (blue, solid), the individual ZZ contributions 3 · σ((cid:96)(cid:96) ν(cid:96)(cid:48)ν(cid:96)(cid:48)) with (cid:96) (cid:54)= (cid:96) (orange, dash-dotted), the individual W +W − contributions σ((cid:96)ν(cid:96) (cid:96)(cid:48)ν(cid:96)(cid:48)) with (cid:96) (cid:54)= (cid:96) (black, dotted), and the approximation of the full result by the incoherent sum of ZZ and W +W − contributions 3 · σ((cid:96)(cid:96) ν(cid:96)(cid:48)ν(cid:96)(cid:48)) + σ((cid:96)(cid:96)(cid:48) ν(cid:96)ν(cid:96)(cid:48)) (red, dashed); for (a) m(cid:96)+(cid:96)−, (b) pT,(cid:96)−ν(cid:96), and (c) pT,(cid:96)1; the lower frames show the ratio to the full result. It is clear from the first ratio frame that almost the entire cross section around the peak stems from ZZ contributions. Only away from the peak W +W − production becomes larger than ZZ production. It is also clear that it is the m(cid:96)+(cid:96)− cut in the fiducial definition which significantly enhances ZZ contributions and suppresses the W +W − process. The relative difference between the approximated and the full result, which is enlarged in the second ratio frame, is very small, in particular in the peak region. This demonstrates that interference effects of ZZ with W +W − (and top-quark) topologies are negligible, and that an incoherent sum of the two DF channels is an excellent approximation of the SF process. This also implies that in our previous definition of the (cid:96)(cid:96)+Emiss signature the remaining interference effects after subtraction of W +W − and top-quark backgrounds are small. 
In fact, we hardly found any distribution with larger interference effects. The most pronounced example is the “pseudo”-observable in Figure 6 (panel b) that shows the transverse-momentum spectrum of a W − boson reconstructed as (cid:96)−ν(cid:96), and even in this case the differences do not exceed a few percent, although the shape is slightly deformed. With interference effects being generally small, it is interesting to analyse the different behaviour of ZZ and W +W − In the pT,(cid:96)1 distribution in Figure 6 (panel c), for example, the relative W +W − topologies. contribution increases around pT,(cid:96)1 = 90 GeV. This feature is already present at LO, and it is caused by purely kinematic effects that allow the two W bosons to become resonant simultaneously only in this part of phase space. The region below pT,(cid:96)1 = 45 GeV is populated only beyond LO. T We have presented NNLO QCD corrections to ZZ production for all leptonic processes. The (cid:96)(cid:96)+Emiss signature has been studied for the first time at this level of accuracy, and we have T 10 dσNNLO/dmℓℓ [fb/GeV]2ℓ2ν@LHC 8 TeVfull (ℓℓνν+2·ℓℓν'ν')ZZ DF (3·ℓℓν'ν')WW DF (ℓνℓ'ν')sum (3·ℓℓν'ν'+ℓνℓ'ν')10-310-210-1100101produced with MATRIXdσNNLO/σNNLO(full)0%20%40%60%80%100%120%mℓℓ [GeV]dσNNLO/σNNLO(full) - 1-10%-5%0%+5%+10% 80 85 90 95 100 105dσNNLO/dpT,ℓ−ν [fb/GeV]2ℓ2ν@LHC 8 TeVfull (ℓℓνν+2·ℓℓν'ν')ZZ DF (3·ℓℓν'ν')WW DF (ℓνℓ'ν')sum (3·ℓℓν'ν'+ℓνℓ'ν')10-610-510-410-310-210-1100produced with MATRIXdσNNLO/σNNLO(full)0%20%40%60%80%100%120%pT,ℓ−ν [GeV]dσNNLO/σNNLO(full) - 1-10%-5%0%+5%+10% 0 100 200 300 400 500dσNNLO/dpT,ℓ1 [fb/GeV]2ℓ2ν@LHC 8 TeVfull (ℓℓνν+2·ℓℓν'ν')ZZ DF (3·ℓℓν'ν')WW DF (ℓνℓ'ν')sum (3·ℓℓν'ν'+ℓνℓ'ν')10-610-510-410-310-210-1100produced with MATRIXdσNNLO/σNNLO(full)0%20%40%60%80%100%120%pT,ℓ1 [GeV]dσNNLO/σNNLO(full) - 1-10%-5%0%+5%+10% 0 100 200 300 400 500 introduced a procedure to compute results consistently in the five-flavour scheme without contribu- tions from W +W − or top-quark backgrounds. We also computed state-of-the-art predictions for signatures involving four charged leptons. Our results are compared to ATLAS data at 8 TeV, and we find good agreement for both fiducial cross sections and distributions. NNLO QCD corrections are sizable, even in presence of a jet veto used in the (cid:96)(cid:96)+Emiss T measurement. By and large, they are of the order of 5%–20%, but can reach even 60% in certain phase-space regions. Most importantly, such effects do not only stem from the loop-induced gg contribution, but are also due to the newly computed genuine O(α2 S) corrections to the q ¯q channel. Not least, we have shown that all remaining interference effects of ZZ topologies with W +W − and top-quark backgrounds in 2(cid:96)2ν production are negligible. The availability of fully differential NNLO predictions for all leptonic channels of ZZ production will play a crucial role in the rich physics programme that is based on precision studies of ZZ signatures at the LHC. Along with the paper we provide an updated version of Matrix, featuring all processes with the fiducial setup, cuts and distributions considered here. Acknowledgements. We would like to thank Massimiliano Grazzini and Jochen Meyer for useful discussions and comments on the manuscript. The work of MW is supported by the ERC Consolidator Grant 614577 HICCUP. References [1] ATLAS and CMS, ATLAS-CONF-2016-036, CMS-PAS-SMP-15-001. [2] G. Aad et al. (ATLAS), Phys. Rev. Lett. 108, 041804 (2012), arXiv:1110.5016 [hep-ex]. [3] S. Chatrchyan et al. 
(CMS), JHEP 01, 063 (2013), arXiv:1211.4890 [hep-ex]. [4] G. Aad et al. (ATLAS), JHEP 03, 128 (2013), arXiv:1211.6096 [hep-ex]. [5] S. Chatrchyan et al. (CMS), Phys. Lett. B721, 190 (2013), arXiv:1301.4698 [hep-ex]. [6] V. Khachatryan et al. (CMS), Phys. Lett. B740, 250 (2015), arXiv:1406.0113 [hep-ex]. [7] V. Khachatryan et al. (CMS), Eur. Phys. J. C75, 511 (2015), arXiv:1503.05467 [hep-ex]. [8] G. Aad et al. (ATLAS), Phys. Lett. B753, 552 (2016), arXiv:1509.07844 [hep-ex]. [9] M. Aaboud et al. (ATLAS), JHEP 01, 099 (2017), arXiv:1610.07585 [hep-ex]. [10] G. Aad et al. (ATLAS), Phys. Rev. Lett. 116, 101801 (2016), arXiv:1512.05314 [hep-ex]. [11] V. Khachatryan et al. (CMS), Phys. Lett. B763, 280 (2016), [Erratum: Phys. Lett. B772, 884 (2017)], arXiv:1607.08834 [hep-ex]. [12] M. Aaboud et al. (ATLAS), Phys. Rev. D97, 032005 (2018), arXiv:1709.07703 [hep-ex]. [13] A. M. Sirunyan et al. (CMS), Eur. Phys. J. C78, 165 (2018), arXiv:1709.08601 [hep-ex]. [14] M. Aaboud et al. (ATLAS), Eur. Phys. J. C78, 293 (2018), arXiv:1712.06386 [hep-ex]. [15] J. Ohnemus and J. Owens, Phys. Rev. D43, 3626 (1991). [16] B. Mele, P. Nason and G. Ridolfi, Nucl. Phys. B357, 409 (1991). [17] J. Ohnemus, Phys. Rev. D50, 1931 (1994), hep-ph/9403331. [18] J. M. Campbell and R. K. Ellis, Phys. Rev. D60, 113006 (1999), hep-ph/9905386. [19] L. J. Dixon, Z. Kunszt and A. Signer, Phys. Rev. D60, 114037 (1999), hep-ph/9907305. [20] L. J. Dixon, Z. Kunszt and A. Signer, Nucl. Phys. B531, 3 (1998), hep-ph/9803250. 11 [21] E. Accomando, A. Denner and A. Kaiser, Nucl. Phys. B706, 325 (2005), hep-ph/0409247. [22] A. Bierweiler, T. Kasprzik and J. H. K¨uhn, JHEP 1312, 071 (2013), arXiv:1305.5402 [hep-ph]. [23] J. Baglio, L. D. Ninh and M. M. Weber, Phys. Rev. D88, 113005 (2013), arXiv:1307.4331 [hep-ph]. [24] B. Biedermann, A. Denner, S. Dittmaier, L. Hofer and B. J¨ager, Phys. Rev. Lett. 116, 161803 (2016), arXiv:1601.07787 [hep-ph]. [25] B. Biedermann, A. Denner, S. Dittmaier, L. Hofer and B. J¨ager, JHEP 01, 033 (2017), arXiv:1611.05338 [hep-ph]. [26] S. Kallweit, J. M. Lindert, S. Pozzorini and M. Sch¨onherr, JHEP 11, 120 (2017), arXiv:1705.00598 [hep-ph]. [27] T. Binoth, T. Gleisberg, S. Karg, N. Kauer and G. Sanguinetti, Phys. Lett. B683, 154 (2010), arXiv:0911.3181 [hep-ph]. [28] E. W. N. Glover and J. J. van der Bij, Nucl. Phys. B321, 561 (1989). [29] D. A. Dicus, C. Kao and W. W. Repko, Phys. Rev. D36, 1570 (1987). [30] T. Matsuura and J. van der Bij, Z. Phys. C51, 259 (1991). [31] C. Zecher, T. Matsuura and J. van der Bij, Z. Phys. C64, 219 (1994), hep-ph/9404295. [32] T. Binoth, N. Kauer and P. Mertsch, Proceedings DIS 2008, 142 (2008), arXiv:0807.0024 [hep-ph]. [33] J. M. Campbell, R. K. Ellis and C. Williams, JHEP 1107, 018 (2011), arXiv:1105.0020 [hep-ph]. [34] N. Kauer, JHEP 12, 082 (2013), arXiv:1310.7011 [hep-ph]. [35] F. Cascioli, S. H¨oche, F. Krauss, P. Maierh¨ofer, S. Pozzorini and F. Siegert, JHEP 1401, 046 (2014), arXiv:1309.0500 [hep-ph]. [36] J. M. Campbell, R. K. Ellis and C. Williams, JHEP 04, 060 (2014), arXiv:1311.3589 [hep-ph]. [37] N. Kauer, C. O’Brien and E. Vryonidou, JHEP 10, 074 (2015), arXiv:1506.01694 [hep-ph]. [38] F. Caola, K. Melnikov, R. R¨ontsch and L. Tancredi, Phys. Rev. D92, 094028 (2015), arXiv:1509.06734 [hep-ph]. [39] F. Caola, M. Dowling, K. Melnikov, R. R¨ontsch and L. Tancredi, JHEP 07, 087 (2016), arXiv:1605.04610 [hep-ph]. [40] S. Alioli, F. Caola, G. Luisoni and R. R¨ontsch, Phys. Rev. D95, 034042 (2017), arXiv:1609.09719 [hep-ph]. [41] F. Caola, J. M. Henn, K. Melnikov, A. V. 
Smirnov and V. A. Smirnov, JHEP 1506, 129 (2015), arXiv:1503.08759 [hep-ph]. [42] A. von Manteuffel and L. Tancredi, JHEP 1506, 197 (2015), arXiv:1503.08835 [hep-ph]. [43] F. Cascioli, T. Gehrmann, M. Grazzini, S. Kallweit, P. Maierh¨ofer, A. von Manteuffel, S. Poz- zorini, D. Rathlev, L. Tancredi and E. Weihs, Phys. Lett. B735, 311 (2014), arXiv:1405.2219 [hep-ph]. [44] G. Heinrich, S. Jahn, S. P. Jones, M. Kerner and J. Pires, JHEP 03, 142 (2018), arXiv:1710.06294 [hep-ph]. [45] T. Gehrmann, A. von Manteuffel, L. Tancredi and E. Weihs, JHEP 1406, 032 (2014), arXiv:1404.4853 [hep-ph]. 12 [46] F. Caola, J. M. Henn, K. Melnikov, A. V. Smirnov and V. A. Smirnov, JHEP 1411, 041 (2014), arXiv:1408.6409 [hep-ph]. [47] T. Gehrmann, A. von Manteuffel and L. Tancredi, JHEP 09, 128 (2015), arXiv:1503.04812 [hep-ph]. [48] M. Grazzini, S. Kallweit and D. Rathlev, Phys. Lett. B750, 407 (2015), arXiv:1507.06257 [hep-ph]. [49] M. Grazzini, S. Kallweit and M. Wiesemann, arXiv:1711.06631 [hep-ph]. [50] A. Denner, S. Dittmaier and L. Hofer, PoS LL2014, 071 (2014), arXiv:1407.0087 [hep-ph]. [51] A. Denner, S. Dittmaier and L. Hofer, Comput. Phys. Commun. 212, 220 (2017), arXiv:1604.06792 [hep-ph]. [52] G. Ossola, C. G. Papadopoulos and R. Pittau, JHEP 0803, 042 (2008), arXiv:0711.3596 [hep-ph]. [53] A. van Hameren, Comput. Phys. Commun. 182, 2427 (2011), arXiv:1007.4716 [hep-ph]. [54] F. Cascioli, P. Maierh¨ofer and S. Pozzorini, Phys. Rev. Lett. 108, 111601 (2012), arXiv:1111.5206 [hep-ph]. [55] F. Buccioni, S. Pozzorini and M. Zoller, Eur. Phys. J. C78, 70 (2018), arXiv:1710.11452 [hep-ph]. [56] S. Catani and M. Grazzini, Phys. Rev. Lett. 98, 222002 (2007), hep-ph/0703012. [57] S. Kallweit, J. M. Lindert, P. Maierh¨ofer, S. Pozzorini and M. Sch¨onherr, JHEP 04, 012 (2015), arXiv:1412.5157 [hep-ph]. [58] S. Kallweit, J. M. Lindert, P. Maierh¨ofer, S. Pozzorini and M. Sch¨onherr, JHEP 04, 021 (2016), arXiv:1511.08692 [hep-ph]. [59] Munich is the abbreviation of “MUlti-chaNnel Integrator at Swiss (CH) precision”—an automated parton level NLO generator by S. Kallweit. In preparation. [60] S. Catani and M. Seymour, Phys. Lett. B378, 287 (1996), hep-ph/9602277. [61] S. Catani and M. Seymour, Nucl. Phys. B485, 291 (1997), hep-ph/9605323. [62] M. Grazzini, S. Kallweit, D. Rathlev and A. Torre, Phys. Lett. B731, 204 (2014), arXiv:1309.7000 [hep-ph]. [63] M. Grazzini, S. Kallweit and D. Rathlev, JHEP 07, 085 (2015), arXiv:1504.01330 [hep-ph]. [64] T. Gehrmann, M. Grazzini, S. Kallweit, P. Maierh¨ofer, A. von Manteuffel, S. Pozzorini, D. Rathlev and L. Tancredi, Phys. Rev. Lett. 113, 212001 (2014), arXiv:1408.5243 [hep-ph]. [65] M. Grazzini, S. Kallweit, S. Pozzorini, D. Rathlev and M. Wiesemann, JHEP 08, 140 (2016), arXiv:1605.02716 [hep-ph]. [66] M. Grazzini, S. Kallweit, D. Rathlev and M. Wiesemann, Phys. Lett. B761, 179 (2016), arXiv:1604.08576 [hep-ph]. [67] M. Grazzini, S. Kallweit, D. Rathlev and M. Wiesemann, JHEP 05, 139 (2017), arXiv:1703.09065 [hep-ph]. [68] D. de Florian, M. Grazzini, C. Hanga, S. Kallweit, J. M. Lindert, P. Maierh¨ofer, J. Mazzitelli and D. Rathlev, JHEP 09, 151 (2016), arXiv:1606.09519 [hep-ph]. [69] M. Grazzini, G. Heinrich, S. Jones, S. Kallweit, M. Kerner, J. M. Lindert and J. Mazzitelli, JHEP 05, 059 (2018), arXiv:1803.02463 [hep-ph]. 13 [70] M. Grazzini, S. Kallweit, D. Rathlev and M. Wiesemann, JHEP 08, 154 (2015), arXiv:1507.02565 [hep-ph]. [71] E. Re, M. Wiesemann and G. Zanderighi, arXiv:1805.09857 [hep-ph]. [72] A. Denner, S. Dittmaier, M. Roth and L. H. Wieders, Nucl. Phys. 
B724, 247 (2005), [Erratum: Nucl. Phys. B854, 504 (2012)], hep-ph/0505042. [73] C. Patrignani et al. (Particle Data Group), Chin. Phys. C40, 100001 (2016). [74] R. D. Ball et al. (NNPDF), JHEP 1504, 040 (2015), arXiv:1410.8849 [hep-ph]. [75] P. Nason, JHEP 11, 040 (2004), hep-ph/0409146. [76] S. Frixione, P. Nason and C. Oleari, JHEP 11, 070 (2007), arXiv:0709.2092 [hep-ph]. [77] S. Alioli, P. Nason, C. Oleari and E. Re, JHEP 06, 043 (2010), arXiv:1002.2581 [hep-ph]. [78] T. Melia, P. Nason, R. R¨ontsch and G. Zanderighi, JHEP 11, 078 (2011), arXiv:1107.5051 [hep-ph]. [79] P. F. Monni and G. Zanderighi, JHEP 1505, 013 (2015), arXiv:1410.4745 [hep-ph]. 14
synthetic_cpt
1
Semi-Automated_Construction_of_Food_Composition_Knowledge_Base.pdf
6 1 0 2 r a M 7 ] T A . h t a m [ 1 v 9 6 9 1 0 . 3 0 6 1 : v i X r a Semi-homotopy and semi-fundamental groups Ayhan ERC˙IYES∗a, Ali AYTEK˙INb and Tunçar ¸SAHANb aDepartment of Elementary Mathematics Education, Aksaray University, Aksaray, TURKEY bDepartment of Mathematics, Aksaray University, Aksaray, TURKEY Abstract In this study we introduce the notions of semi-homotopy of semi-continuous maps and of semi-paths. We also construct a group structure, which will be called semi- fundamental group, using semi-loops and explore some properties of semi-homotopy and semi-fundamental groups. Key Words: Semi-open sets, semi-closed sets, homotopy, fundamental groups Classification: 54C08, 14F35, 55Q05, 57M05 1 Introduction Homotopy theory studies topological objects up to homotopy equivalence. Homotopy equivalence is a weaker relation than topological equivalence, i.e., homotopy classes of spaces are larger than homeomorphism classes. Therefore, homotopy equivalence plays a more important role than homeomorphism. Homotopy theory is a subdomain of topol- ogy. Instead of considering the category of topological spaces and continuous maps, one may prefer to consider as morphisms only the continuous maps up to homotopy. On the other hand the concept of homotopy groups is a way to interpret topological problems to algebraic problems which could be solve much easier. For this reason, homotopy groups, especially fundamental groups, are very powerful tools for this purpose. To obtain further insights on applications of homotopy groups, see for example the books of Brown [2] and of Rotman [9]. The concept of semi-open set in topological spaces was introduced in 1963 by Levine [7]. He defined a set A to be semi-open in a topological space if and only if A is between an open ∗Correspondence: [email protected] 1 subset and the closure of that open. Further, Levine investigated a notion of semi-continuity. After the works of Levine on semi-open sets, various mathematician turned their attention to the generalisations of various concepts of topology by considering semi-open sets instead of open sets. New results are obtained in some occasions and in other occasions substantial generalisations are exibited, by replacing open sets with semi-open sets In 1971, S. Gene Crossley and S. K. Hildebrand [4] introduce semi-closed sets, semi- interior, and semi-closure in a manner analogous to the corresponding concepts of closed sets, interior, and closure. Further, a year later, they defined that a property of topological spaces is a semi-topological property if there is a semi-homeomorphism which preserves that property [5]. Also, they shown that Hausdorff, separable, and connected properties of topological spaces were semi-topological properties. S.M.N. Maheshawari and R. Prasad [8] used semi-open sets to define and investigate three new separation axiom called Semi-T0, Semi-T1 and Semi-T2. Recently, P. Bhattacharyya and B.K. Lahiri [1] generalised the concept of closed sets to semi-generalised closed sets with the help of semi-openness. In the light of these works, the main purpose of this paper is to introduce the notions of semi-homotopy and semi-fundamental group using the semi-open sets, to obtain different group structures from topological spaces. 2 Preliminaries The notion of semi-open sets in a topological space was introduced by Levine [7] as follows. Definition 2.1 [7] Let X be a topological space and A ⊆ X. 
A is called semi-open provided that there exists an open set U such that U ⊆ A ⊆ U, where U denotes the closure of the set U in X. Here is a concrete example of semi-open sets. Example 2.2 Let τ = {X, ∅, {a}, {a, b}} be the topology on the set X = {a, b, c, d}. Therefore we have semi-open subsets of X as follows: SO(X) = {X, ∅, {a}, {a, c}, {a, d}, {a, b}, {a, b, c}, {a, c, d}, {a, b, d}}. Following proposition is a well known result for semi-open sets. Hence we omit the proof. Proposition 2.3 [10, 7] Union of any collection of semi-open sets in a topological space is also semi- open. 2 Example 2.4 Consider the space of the real numbers with the usual topology. It is easy to see that intervals of the form (a, b), (a, b], [a, b) and [a, b] and their arbitrary unions are semi-open. Proposition 2.5 Let X be a topological space and A ⊆ X. Then A is semi-open if and only if for each point x in A there exist a semi-open subset Bx of X such that x ∈ Bx ⊆ A. Proof: Let A be a semi-open set in X. Thus we can choose the set Bx as A for all x ∈ A. Conversely assume that for each point x in A there exist a semi-open subset Bx of X such that x ∈ Bx ⊆ A. Then Bx = A [ x∈A and by Proposition 2.3 A is a semi-open subset of X. (cid:4) The notion of semi-closedness is introduced in [4]. Now we will recall the definition of semi-closed sets and some-properties of semi-closed sets from [4]. Definition 2.6 [4] Let X be a topological space and C ⊆ X. C is called semi-closed if there exists a closed set K such that K ◦ ⊆ C ⊆ K where K ◦ is the interior of K. Example 2.7 Let τ = {X, ∅, {a}, {a, b}} be the topology on the set X = {a, b, c, d}. Therefore we have semi-closed subsets of X as follows: SC(X) = {X, ∅, {b}, {c}, {d}, {c, d}, {b, c}, {b, d}, {b, c, d}}. Proposition 2.8 [4] In a topological space the complement of a semi-open set is semi-closed and vice-versa. Now we will recall the definitions of semi-continuities and some properties of them from [10]. Definition 2.9 Let X and Y be two topological spaces, f : X → Y a function and p a point of X. Then f is called (i) so-1-continuous at p provided for each open set V containing f (p) in Y , there exists a semi- open set A in X that contains p and f (A) ⊆ V , (ii) so-2-continuous at p provided for each semi-open set B containing f (p) in Y , there exists a semi-open set A in X that contains p and f (A) ⊆ B, and (iii) so-3-continuous at p provided for each semi-open set B containing f (p) in Y , there exists an open set U in X that contains p and f (U) ⊆ B. 3 If f is so-i-continuous at every point of X for a fixed i then f is called so-i-continuous. Relations between so-i-continuous functions, constant functions and continuous func- tions are given with the following figure. constant 3 so − 3 so − 1 ;♣♣♣♣♣♣♣♣♣♣ ♣♣♣♣♣♣♣♣♣♣ ❖❖❖❖❖❖❖❖❖❖ ❖❖❖❖❖❖❖❖❖❖ so − 2 ◆◆◆◆◆◆◆◆◆◆ ◆◆◆◆◆◆◆◆◆◆ ;♦♦♦♦♦♦♦♦♦♦ ♦♦♦♦♦♦♦♦♦♦ continuous This figure says that every constant map is so-3-continuous, every so-3-continuous func- tion is both so-2-continuous and continuous, every so-2-continuous function and every con- tinuous function is so-1-continuous. Following proposition gives a criteria for so-i-continuous functions similar to one in clas- sical topology. The proof is also similar, hence we omit. Proposition 2.10 Let X and Y be topological spaces and f : X → Y a function. 
Then f is (i) so-1-continuous iff for each open set V ⊆ Y , f −1(V ) is semi-open in X, (ii) so-2-continuous iff for each semi-open set B ⊆ Y , f −1(B) is semi-open in X, (iii) so-3-continuous iff for each semi-open set B ⊆ Y , f −1(B) is open in X. This proposition could be given by using semi-closed sets as follows. Proposition 2.11 Let X and Y be topological spaces and f : X → Y a function. Then f is (i) so-1-continuous iff for each closed set K ⊆ Y , f −1(K) is semi-closed in X, (ii) so-2-continuous iff for each semi-closed set M ⊆ Y , f −1(M) is semi-closed in X, (iii) so-3-continuous iff for each semi-closed set M ⊆ Y , f −1(M) is closed in X. so-1-continuous functions are called semi-continuous and so-2-continuous functions are called irresolute [4]. In this paper the unit interval [0, 1] will be denoted by I, as a subspace of reel numbers R with the usual topology. Remark 2.12 Let X be a topological space. Then it is easy to see that the identity function 1X : X → X is so-1-continuous and so-2-continuous but not so-3-continuous. Moreover usual composition of so-2-continuous (resp. so-3-continuous) functions are again so-2-continuous (resp. so-3-continuous). Thus we obtain the category s-Top of topological spaces with morphisms so-2-continuous (irreso- lute) functions. On the other hand composition of so-1-continuous functions need not to be so-1- continuous. 4 # + + 3 # + 3 3 Semi-Homotopy In this section we will introduce the notions of so-i-homotopy of so-i-continuous func- tions, so-2-homotopy type, so-i-paths and so-i-homotopy of so-i-paths, and give some prop- erties. From now on i will symbolize of a fixed element of the set {1, 2, 3} for each item. Definition 3.1 Let X and Y be two topological spaces and f, g : X → Y be two so-i-continuous functions. If there exist a function H : X × I → Y such that for all t ∈ I the restrictions of H Ht : X −→ Y x 7−→ Ht(x) = H(x, t) are so-i-continuous with H0 = f and H1 = g, then we say that f and g are so-i-homotopic. In this case H is called an so-i-homotopy from f to g and this will be denoted by H : f ≃i g or briefly, by f ≃i g. Theorem 3.2 The relation being so-i-homotopic on the set of all so-i-continuous functions between two topological spaces is an equivalence relation. Proof: Let X and Y be two topological spaces and f, g, h : X → Y be so-i-continuous functions. Reflexivity: If f : X → Y define H : X × I −→ Y (x, t) 7−→ H(x, t) = f (x) for all x ∈ X and all t ∈ I. It is clear that F : f ≃i f . Symmetry: Assume that H : f ≃i g, so there is a function H : X × I → Y with H(x, 0) = f (x) and H(x, 1) = g(x) for all x ∈ X. Define G : X × I −→ Y (x, t) 7−→ G(x, t) = H(x, 1 − t) for all x ∈ X and all t ∈ I. Since H is so-i-continuous, Gt(x) = G(x, t) = H(x, 1 − t) is so-i-continuous, and G0 = g and G1 = f . Therefore G : g ≃i f . 5 Transitivity: Assume that F : f ≃i g and G : g ≃i h. Define H(x, t) =   F (x, 2t), G(x, 2t − 1), t ∈ [0, 1/2] t ∈ [1/2, 1]. Therefore H : f ≃i h. Thus ≃i is an equivalence relation.  Let X and Y be two topological spaces and f : X → Y be an so-i-continuous function. Then the set of all so-i-continuous functions from X to Y which are so-i-homotopic to f is called the equivalence class (so-i-homotopy class) of f and denoted by [f ]i. 
(cid:4) [f ]i = {g | g : X → Y so-i-continuous, f ≃i g} Similar to classical theory, using the new homotopy defined above we will introduce the notion of so-i-homotopy equivalence and so-i-homotopy type just for the case i = 2 since the composition of so-2-continuous functions is again so-i-continuous. Definition 3.3 Let X and Y be two topological spaces. An irresolute function f : X → Y is called a irresolute homotopy equivalence if there exist an irresolute function g : Y → X such that gf ≃2 1X and f g ≃2 1Y . If there is an irresolute homotopy equivalence between two topological spaces then we say that these spaces have the same irresolute homotopy type. Now we will give the definition of so-i-paths which is the special case of so-i-continuous functions. Further we will give a more stronger version of so-i-homotopy for so-i-paths. Definition 3.4 Let X be a topological space, α : I → X be an so-i-continuous function and α(0) = a and α(1) = b . Then α is called an so-i-path from a to b in X. If a = b then α is called an so-i-loop at a. Definition 3.5 Let α, β : I → X be two so-i-path in X with α(1) = β(0). Then the function (α ∗ β)(t) =   α(2t), t ∈ [0, 1/2] β(2t − 1), t ∈ [1/2, 1]  is an so-i-path and is called the composition of so-i-paths α and β in X. α ∗ β will be denoted by αβ for short. 6 Definition 3.6 Let X be a topological space and α : I → X be an so-i-path in X. Then the function α : I −→ X defined by α(t) = α(1 − t) is an so-i-path in X and is called the inverse of α. Definition 3.7 Let X be a topological space and α, β : I → X be two so-i-paths where α(0) = β(0) and α(1) = β(1). If there is an so-i-continuous function F : I × I → X such that (i) for all t ∈ I the restrictions of F Ft : I −→ Y s 7−→ Ft(s) = F (s, t) are so-i-continuous and (ii) F (s, 0) = α(s), F (0, t) = a, F (s, 1) = β(s), and F (1, t) = b then we say that F is so-i-homotopy of so-i-paths from α to β relative to endpoints and denoted by F : α ≃i β rel Î. We will denote this by α ≃i β where no confusion arise. Theorem 3.8 The relation being so-i-homotopic relative to endpoints on the set of all so-i-paths in a topological space is an equivalence relation. Proof: This can be proved by a similar way to the proof of Theorem 3.2. (cid:4) Definition 3.9 Let X be a topological space and α : I → X an so-i-path in X. Then the set [α]i = {β | α ≃i β rel Î} is called equivalence class (so-i-homotopy class) of α. 4 Semi-Fundamental groups In this section, using the so-i-loops, we will construct a group structure on the set of all so-i-homotopy classes of so-i-loops at a base point of a topological space. Following lemma is a very useful tool to construct this group structure. Lemma 4.1 Let X be a topological space, a, b ∈ X and α be an so-i-path from a to b. If there is an so-i-continuous function ρ : [0, 1] → [0, 1] such that ρ(0) = 0 and ρ(1) = 1 then αρ ≃i α. 7 Proof: First of all note that αρ is an so-i-path from a to b. Now we define the so-i- homotopy F : αρ ≃i α as follows: F : I × I −→ X (s, t) 7−→ F (s, t) = α ((1 − t)s + tρ(s)) It is easy to see that F is an so-i-homotopy from αρ to α. (cid:4) Proposition 4.2 Let X be a topological space and α, β, α′, β′ : I → X be so-i-paths such that α(0) = α′(0), α(1) = α′(1) = β(0) = β′(0) and β(1) = β′(1). If α ≃i α′ and β ≃i β′ then αβ ≃i α′β′. Proof: Let F and G be two so-i-homotopy from α to α′ and from β to β′, respectively. 
Then the function H : I × I −→ X defined by H(s, t) =    F (2s, t), s ∈ [0, 1/2] G(2s − 1, t), s ∈ [1/2, 1] is so-i-continuous and defines an so-i-homotopy from αβ to α′β′. (cid:4) Proposition 4.3 Let X be a topological space and α, β, γ : I → X be three so-i-paths with α(1) = β(0) and β(1) = γ(0). Then α(βγ) ≃i (αβ)γ. Proof: By the Definition 3.5 compositions α(βγ) and (αβ)γ are defined as follows: and α(βγ)(t) = (αβ)γ(t) = Now let define a function ρ : I → I by α(2t), t ∈ [0, 1/2] β(4t − 2), t ∈ [1/2, 3/4] γ(4t − 3), t ∈ [3/4, 1] α(4t), t ∈ [0, 1/4] β(4t − 1), t ∈ [1/4, 1/2] γ(2t − 1), t ∈ [1/2, 1].       ρ(t) = t ∈ [0, 1/4] t ∈ [1/4, 1/2] t ∈ [1/2, 1]. 2t, t + 1 4, t+1 2 ,    8 One can see that ρ is an so-i-continuous function and ρ(0) = 0, ρ(1) = 1. Moreover (α(βγ))ρ = (cid:4) (αβ)γ. Then by Lemma 4.1 α(βγ) ≃i (αβ)γ. Proposition 4.4 Let X be a topological space, x, y ∈ X and α : I → X be an so-i-path from x to y. Then 1xα ≃i α ≃i α1y where 1x and 1y are the constant maps at x and y, respectively. Proof: First of all let define a function ρ : I → I by 0, t ∈ [0, 1/2] 2t − 1, t ∈ [1/2, 1]. ρ(t) =    This function satisfies the conditions of Lemma 4.1 and 1xα = αρ. Hence 1xα ≃i α. Similarly by taking ρ as one can show that α ≃i α1y. ρ(t) =   2t, t ∈ [0, 1/2] 1, t ∈ [1/2, 1]  (cid:4) Proposition 4.5 Let X be a topological space, x, y ∈ X and α : I → X be an so-i-path in X from x to y. Then αα ≃i 1x and αα ≃i 1y. Proof: Let define a function F : I × I → X for all t ∈ I by F (s, t) = α(2s), α(s), s ∈ [0, t/2] s ∈ [t/2, 1 − t/2] α(2 − 2s), s ∈ [1 − t/2, 1].    This function defines an so-i-homotopy from 1x to αα. Similarly, one can show that αα ≃i 1y. (cid:4) Theorem 4.6 Let X be a topological space and x ∈ X. Then the set πi 1(X, x) = {[α]i | α : I → X so-i-loop at x} 9 of all so-i-homotopy classes of so-i-loops at x has a group structure with the operation ∗ : 1(X, x) × πi πi ([α]i, [β]i) 1(X, x) −→ πi 1(X, x) 7−→ [α]i ∗ [β]i = [α ∗ β]i. Proof: Proposition 4.2 shows that the operation ∗ is well defined. By Proposition 4.3 the operation is associative. The so-i-homotopy class of constant map 1x at x acts as the identity element, i.e. for all [α]i ∈ πi 1(X, x) [1x]i ∗ [α]i = [α]i ∗ [1x]i = [α]i by Proposition 4.4. Finally according to Proposition 4.5 for all [α]i ∈ πi [α]i up to the operation ∗ is [α]−1 i = [α]i ∈ πi 1(X, x). 1(X, x) the inverse of (cid:4) This group will be called the so-i-fundamental group of X at x. In particular π1 1(X, x) 1(X, x) will be called irresolute fundamental will be called semi-fundamental group and π2 group. Proposition 4.7 Let X be a topological space, x, y ∈ X and γ : I → X be an so-i-path from x to y. Then 1(X, x) ∼= πi πi 1(X, y). Proof: The claimed isomorphism is γ⋆ : 1(X, x) −→ πi πi [α]i 7−→ [γ]−1 1(X, y) i ∗ [α]i ∗ [γ]i. (cid:4) Corollary 4.8 In a topological space whose topology is so-i-path-connected, i.e. elements there exist an so-i-path between them, every so-i-fundamental group is isomorphic. for each pair of Proposition 4.9 Let s − Top∗ be the category of pointed topological spaces with morphisms so-2- continuous (irresolute) functions and Grp be the category of groups with morphisms group homo- morphisms. Then π2 1 : s − Top∗ −→ Grp 7−→ π2 (X, x) 1(X, x) is a functor. Corollary 4.10 Let X and Y be two topological spaces. If f : X → Y is a homeomorphism then 1(X, x) ∼= π2 π2 1(Y, f (x)). 
5 Conclusion
It seems that, according to these results, one can define a more general notion of a semi-fundamental groupoid following the approach in [2] and [9]. Further, using the results of the paper [3] of Császár, it could be possible to develop more generic homotopy types and homotopy groups. Hence parallel results of this paper could be obtained for generalized open sets and for generalized continuity.
References
[1] Bhattacharyya, P. and Lahiri, B.K., Semi-generalized closed sets in topology, Ind. Jr. Math., 29 (1987), 375–382.
[2] Brown, R., Topology and groupoids, BookSurge LLC, North Carolina, 2006.
[3] Császár, Á., Generalized open sets, Acta Mathematica Hungarica, 75(1), (1997), 65–87.
[4] Crossley, S. and Hildebrand, S.K., Semi-closure, Texas J. Sci. 22 (1971), 99–112.
[5] Crossley, S. and Hildebrand, S.K., Semi-topological properties, Fundamenta Mathematicae 74(3) (1972), 233–254.
[6] Hatcher, A., Algebraic topology, Cambridge University Press, 2002.
[7] Levine, N., Semi-open sets and semi-continuity in topological spaces, Amer. Math. Monthly 70 (1963), 36–41.
[8] Maheshawari, S.M.N. and Prasad, R., Some new separation axioms, Ann. Soc. Sci. Bruxelles 89 (1975), 395–402.
[9] Rotman, J.J., An introduction to algebraic topology, Springer, 1988.
[10] Scheers, J.M., An exploration of semi-open sets in topological spaces, M.Sc. Thesis, Stephen F. Austin State University, 2011.
synthetic_cpt
5
Automatic_Document_Selection_for_Efficient_Encoder_Pretraining.pdf
Automatic Document Selection for Efficient Encoder Pretraining Yukun Feng1 Patrick Xia1 Benjamin Van Durme1 João Sedoc2 1Johns Hopkins University 2New York University {yfeng55, paxia, vandurme}@jhu.edu, [email protected] 2 2 0 2 t c O 6 2 ] L C . s c [ 2 v 1 5 9 0 1 . 0 1 2 2 : v i X r a Abstract Building pretrained language models is con- sidered expensive and data-intensive, but must we increase dataset size to achieve better performance? We propose an alternative to larger training sets by automatically identify- ing smaller yet domain-representative subsets. We extend Cynical Data Selection, a statistical sentence scoring method that conditions on a representative target domain corpus. As an ex- ample, we treat the OntoNotes corpus as a tar- get domain and pretrain a RoBERTa-like en- coder from a cynically selected subset of the Pile. On both perplexity and across several downstream tasks in the target domain, it con- sistently outperforms random selection with 20x less data, 3x fewer training iterations, and 2x less estimated cloud compute cost, validat- ing the recipe of automatic document selection for LM pretraining. 1 Introduction Large pretrained language models have achieved state-of-the-art performance in NLP tasks (Devlin et al., 2019; Liu et al., 2019, i.a.). These studies find that increasing pretraining data size usually leads to better task performance. For many tasks, additional task (in-domain) data helps improve the performance further (Gururangan et al., 2020; Dery et al., 2021; Li et al., 2022). Several studies have found that directly pretraining on task data is more effective : science texts (Beltagy et al., 2019), tweets (Nguyen et al., 2020), legal texts (Chalkidis et al., 2020) or code (Tabassum et al., 2020; Chen et al., 2021). Notably, these domains are known a priori, and identifying data sources for curation is straightforward. In other instances where the domain is less clear, like “offensive online content” (Bai et al., 2021), more complicated data sampling is employed to guess at the desired data distribution suitable for training a downstream classifier. To address such scenarios, we propose automat- ically identifying relevant domain-specific train- Figure 1: This figure highlights the efficiency of the au- tomatic cynical selection of documents in the target do- main. Scores are averaged from 8 Edge Probing tasks. Cynically selected 2.5GB data achieves the best score. ing data for a large corpus and subsequently pre- training a model on the selected data. Specifi- cally, we use Cynical Data Selection (Axelrod, 2017), an approach that advanced Moore-Lewis sampling (Moore and Lewis, 2010), to select data from the Pile dataset (Gao et al., 2021). This auto- matic selection method can include possibly over- looked yet relevant documents from domains that may not be too close to the target domain. Figure 1 illustrates this method which achieves higher per- formance on tasks in the target domain by using only 2.5GB (0.5%) of cynically selected data. Specifically, we experiment with pretraining en- coders with varying amounts of data sampled from the Pile.1 With our “target corpus” of OntoNotes (Weischedel et al., 2013), we compare language models trained with cynical and random selection at various data levels. We find that the cynically selected encoder achieves consistently lower target corpus perplexity than one trained with random selection. We further finetune the encoders on a suite of tasks, some of which are derived from OntoNotes. 
Again, we find that models pretrained with cynical selection perform best. We suggest this as a viable method for inexpensively pretrain- ing effective domain-specific encoders. 1The Pile consists of 800GB raw text but for this paper, we refer to its “effective” size, which is 1250GB. 87.4586.2585.40ScorePile ~ 1250GBRandom ~ 60GBManual ~ 30GBCynical ~ 2.5 GB 2 Cynical Data Selection Methods for data selection for language-related tasks have been widely studied, usually to select in-domain data (Axelrod et al., 2011; van der Wees et al., 2017; Dai et al., 2020; Killamsetty et al., 2020). One such method is Cynical Data Selection (Axelrod, 2017). The intuition behind cynical se- lection is greedily ranking sentences from the text corpus, based on its score computed against text representative of the target domain, which is based on how much information gained by selecting it. Concretely, given representative text from the target domain, cynical selection uses the cross- entropy of the selected text against the representa- tive text and calculates the information gain of each sentence in the general corpus. It then picks the most useful sentence relative to what has already been selected and its similarity to the representative text. This also leads to a bias towards shorter sen- tences and preferring sentences that contain words with high probability in the representative text. Our work extends the cynical selection to the document level selection. Sentences are still scored at the sentence level, but the average sentence-level gain determines the information gain of a docu- ment.2 We demonstrate its advantages in efficiently selecting related documents to the target domain. 3 Experiments and Results In this work, we set OntoNotes 5.0 (Weischedel et al., 2013) as our target corpus, and we use a smaller sample from the training corpus of the CoNLL 2012 Shared Task (Pradhan et al., 2012) as the representative corpus for data selection. We first train an encoder based on the selected data and use the Edge Probing suite (Tenney et al., 2019b) for the downstream task evaluation, which has pre- viously been used to probe and evaluate language models (Clark et al., 2019; Tenney et al., 2019a; Jiang et al., 2020; Zhang et al., 2021). 3.1 Data Selection Dataset We adopt the Pile (Gao et al., 2021) for data selection, which consists of 1250GB text from 22 domains. Cynical selection naturally prefers text data based on the target corpus. To make a more fair comparison, we exclude 100GB data from “DM Mathematics” and “Github” to eliminate the noise of non-text data in random selection. Figure 2: Validation perplexity on held-out set (left), and OntoNotes (right) at 100k training steps. Selection Strategy Encoder pretraining is natu- rally a document-level task, as context contributes critically to improved representations. Thus, we need to extend the sentence selection into the doc- ument selection to achieve a better-contextualized representation at the pretraining stage.3 We apply our extended document-level cynical selection to the Pile and extract the top {0.5%, 1%, 2%, 5%} scored documents.4 We also randomly sample the same percentage of documents from Pile to use as a corresponding baseline. As a baseline for manual selection, we use 30GB text from "Wikipedia" and "BookCorpus" subsets, following Liu et al. (2019). 
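To make the selection setup above concrete, the following Python sketch shows the top-k% extraction and the matching random baseline. It is an illustrative sketch only: the score_document function, all names, and the higher-is-more-relevant convention are our assumptions rather than part of the released DL-CynDS code (the document-level cynical score itself is an entropy delta where lower is better, so it would be negated to fit this convention).

import random

def select_top_k_percent(documents, score_document, k_percent):
    # Rank documents by a document-level relevance score and keep the top k%.
    # score_document is a placeholder for the (negated) cynical document score.
    scored = sorted(documents, key=score_document, reverse=True)
    n_keep = max(1, int(len(documents) * k_percent / 100.0))
    return scored[:n_keep]

def select_random_k_percent(documents, k_percent, seed=0):
    # Random baseline: sample the same number of documents uniformly.
    n_keep = max(1, int(len(documents) * k_percent / 100.0))
    return random.Random(seed).sample(documents, n_keep)

# Hypothetical usage over a Pile-like corpus with the "DM Mathematics" and
# "Github" subsets already removed, mirroring Section 3.1:
# for k in (0.5, 1, 2, 5):
#     cynical_subset = select_top_k_percent(pile_docs, score_document, k)
#     random_subset  = select_random_k_percent(pile_docs, k)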
3.2 Encoder Pretraining
We set up a BERT-base model and follow the pretraining objective and settings described in RoBERTa (Liu et al., 2019).5 In Figure 2, we plot the validation perplexity on both the representative corpus (CoNLL 2012 Shared Task) and a held-out set of the Pile. The perplexity on the held-out set decreases when there is more training data for both the cynical and random selection. Cynical selection attains a higher held-out perplexity, which shows that while the selected documents are more adapted to the target domain, they are not better adapted to the general corpus. As each encoder needs different training steps for different corpus sizes, we try to make a fair comparison by assuming a fixed training budget of 100k update steps. In Figure 2, we find that at 100k steps, 2% of the cynically selected data achieves the lowest perplexity, and more training data does not help the adaptation to the target corpus. Also, cynically selected documents consistently outperform random selection, demonstrating the effectiveness of adapting to the target domain.
3We unsurprisingly find that selection at the document-level works better than at the sentence-level (Appendix A).
4Our code repository is publicly available at https://github.com/jsedoc/DL-CynDS.
2A formal explanation of Cynical selection and its extension is in Appendix B.
5We adopt the training scripts from FairSeq for encoder pretraining, https://github.com/facebookresearch/fairseq.
Figure 4: Data distribution over the Pile domains selection in the adaptation to downstream tasks in the target domain. We also notice the standard de- viation of the runs for random selection is much larger than cynical selection, indicating more stable encoder results from cynically selected documents. 3.4 Discussion Data Distribution We plot the domain distribu- tion of the selected documents in Figure 4. While random selection follows the distribution of the original Pile dataset, cynical selection prefers news- like articles such as the "Pile CC" and "OpenWeb- Text2," rather than technical ones, like StackEx- change. Also, since we consider the same number of selected documents for each split, the actual se- lected data size is not the same (Figure 5). We notice that cynical selection prefers shorter docu- ments, especially in the top-ranked samples. This should be related to our scoring strategy since we average the sentence scores as the final document score. In the case for long documents, even though there are sentences with higher scores, it is not very likely to be selected since the final scores are averaged by the total number of sentences. This 1%2%5%76788082Scoreconst1%2%5%97.297.497.697.8pos1%2%5%939495ner1%2%5%87888990coref1%2%5%7080Scoreud1%2%5%777879spr21%2%5%6070rel1%2%5%87888990srlcynicalrandommanualPile-CCWebTextStackExgPubMedWikipediaOthers010203040Percentage of Selection (%)Cynical SelectionRandom Selection Figure 5: For each percentage of cynically and ran- domly selected documents, we show the actual data size (GB) and corresponding document length. Figure 6: This figure shows the training loss for the runs of 1% and 2% cynically selected subsets. explains why the cynical selection prefers shorter documents in the 0.5% and 1% selection but not in the 5% selection. Therefore, when we bring the actual selected data sizes into the comparison, the cynical selection is much more efficient than the random sampling. Future work can investigate other methods of aggregating sentence-level scores. Computational Trade-off Cynical selection en- ables the language models to use less training data and GPU time while achieving competitive results. However, the data selection needs to be done be- fore the training and pre-processing could be costly. Cynical selection on the Pile can be parallelized via sharding, because the specific order/ranking of a document in the final selected subset is not impor- tant. The intuition is that any good document will be chosen early, regardless of which shard it is in. So, we split the automatic document selection of the Pile into 10,000 smaller jobs, each requiring a single core CPU7 and 10GB of RAM and taking 2 hours to finish. In general, the cost of the selection depends on the size of the general corpus that is be- ing selected from. In our training environment with 8 RTX6000 GPUs, it takes 800+ GPU hours in total to train an encoder with 60GB randomly selected documents. To achieve comparable or even better performance with cynical selected documents, we only need 200 GPU hours for the 2.5GB of cyni- cally selected data to converge. The market price for a single RTX6000 is $1.50/hour, so we need $1200+ to train with random selection but less than $300 for cynical selection. On the Google Cloud Platform, 20,000 hours on comparable or faster CPUs can be obtained with $200. Overall, cynical selected documents saves more than 50% of the computational cost and achieves better task scores. 
Overfitting Large language models have the abil- ity to overfit or memorize small datasets (Kaplan et al., 2020; Carlini et al., 2022). We inspect the loss curves for two of the cynical selections (1% and 2%) in Figure 6. While the 1% encoder achieves a lower loss for most parts of the train- ing, it is eventually surpassed by the 2% model. This highlights a tradeoff between computing cost and performance; given a limited compute budget (in this example, under 50K steps), it is better to use a smaller selection. While prior work suggests scaling up models to fit dataset size (Kaplan et al., 2020), we are successful in scaling down dataset sizes so that they can be efficiently fit (and outper- form larger datasets) in fewer steps. 4 Related Work Due to the huge computational cost of training large models, both researchers and engineers have sought alternatives to using data more efficiently. Some prior works use statistical methods to select relevant data from a large corpus (Rousseau, 2013; Kirchhoff and Bilmes, 2014; Eetemadi et al., 2015; Xu and Koehn, 2017). Some other studies intro- duce additional classifiers or language models to help the data selection (Ruder and Plank, 2017; Qu et al., 2019; Sun et al., 2021). Also, data selec- tion could be more efficiently involved in the ac- tive learning approaches (Shen et al., 2004; Lowell et al., 2018; Erdmann et al., 2019; Shelmanov et al., 2019; Margatina et al., 2022; Tsvigun et al., 2022). This work applies a simple statistical method to find the most related text to a target domain. It incrementally constructs a dataset out of a large corpus for the goal of training language models. 5 Conclusion 7Intel Xeon E5-2620 v3, a chip from 2014. This work builds the connection from corpus subs- election in statistical LM construction to neural 0.51.02.05.0Percentage of Selected Documents0204060Data Size in GB2.5512385112250cynicalrandom0200400600800Document Length203040506070Training Steps (K)2.02.22.42.62.8Training Loss1%2% LMs. We extend cynical data selection to effi- ciently select task-related documents for encoder pretraining and achieve lower perplexity in the tar- get domain. We also demonstrate its effectiveness on downstream tasks by achieving comparable or even better results with 20x less data, 3x fewer training iterations, and 2x less computational cost on 8 Edge Probing tasks. We believe this fills the gap in the literature on an important topic in train- ing powerful LMs. We purposefully keep this work in the space of methods used in the days of Stat NLP to highlight their out-of-the-box applicability, for which that line of research is still salient. Based on our findings, this line is resurrected, suggesting new novel approaches should be studied. We antic- ipate that with this connection, researchers could explore this topic, investigate various subselection methods, and extend it to other domains. Acknowledgements We thank all reviewers for their valuable feed- back. We also appreciate the helpful suggestions from Marc Marone, Amittai Axelrod, and Alex Warstadt. This work is supported by IARPA BET- TER (#2019-19051600005). The findings con- tained in this work are those of the authors and should not be interpreted as necessarily represent- ing the official policies, either expressed or implied, or endorsements of IARPA or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright annotation therein. 
Limitations Since pretraining encoders is expensive, our study only experiments on one source corpus (Pile) and one target task domain (OntoNotes). However, this method could be demonstrated more effectively on other datasets that are more domain-specific. We do not run multiple random selections with dif- ferent seeds due to the time and cost of training large models. We think the standard error for the randomly selected data would be significant, espe- cially for the subset of only 0.5% or 1% documents. Also, we recognize that training our models longer or scaling up the model size is an “easy” method of improving performance (Liu et al., 2019; Kaplan et al., 2020). Our results assume a fixed training budget (max 100k steps). Thus with a larger budget, the trade-offs will vary. Another concern is that we do not experiment with other subselection meth- ods (Gururangan et al., 2019) or other languages, but we believe they should have similar trends. References Amittai Axelrod. 2017. Cynical selection of language model training data. arXiv. Amittai Axelrod, Xiaodong He, and Jianfeng Gao. 2011. Domain adaptation via pseudo in-domain data selection. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, pages 355–362, Edinburgh, Scotland, UK. Associa- tion for Computational Linguistics. Fan Bai, Alan Ritter, and Wei Xu. 2021. Pre-train or annotate? domain adaptation with a constrained budget. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 5002–5015, Online and Punta Cana, Domini- can Republic. Association for Computational Lin- guistics. Iz Beltagy, Kyle Lo, and Arman Cohan. 2019. SciB- ERT: A pretrained language model for scientific text. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Lan- guage Processing (EMNLP-IJCNLP), pages 3615– 3620, Hong Kong, China. Association for Computa- tional Linguistics. Nicholas Carlini, Daphne Ippolito, Matthew Jagielski, Katherine Lee, Florian Tramèr, and Chiyuan Zhang. 2022. Quantifying memorization across neural lan- guage models. ArXiv, abs/2202.07646. Ilias Chalkidis, Manos Fergadiotis, Prodromos Malaka- siotis, Nikolaos Aletras, and Ion Androutsopoulos. 2020. LEGAL-BERT: The muppets straight out of law school. In Findings of the Association for Com- putational Linguistics: EMNLP 2020, pages 2898– 2904, Online. Association for Computational Lin- guistics. Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared Ka- plan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, Alex Ray, Raul Puri, Gretchen Krueger, Michael Petrov, Heidy Khlaaf, Girish Sas- try, Pamela Mishkin, Brooke Chan, Scott Gray, Nick Ryder, Mikhail Pavlov, Alethea Power, Lukasz Kaiser, Mohammad Bavarian, Clemens Winter, Philippe Tillet, Felipe Petroski Such, Dave Cum- mings, Matthias Plappert, Fotios Chantzis, Eliza- beth Barnes, Ariel Herbert-Voss, William Hebgen Guss, Alex Nichol, Alex Paino, Nikolas Tezak, Jie Tang, Igor Babuschkin, Suchir Balaji, Shantanu Jain, William Saunders, Christopher Hesse, Andrew N. Carr, Jan Leike, Josh Achiam, Vedant Misra, Evan Morikawa, Alec Radford, Matthew Knight, Miles Brundage, Mira Murati, Katie Mayer, Peter Welin- der, Bob McGrew, Dario Amodei, Sam McCandlish, Ilya Sutskever, and Wojciech Zaremba. 2021. Eval- uating large language models trained on code. Kevin Clark, Urvashi Khandelwal, Omer Levy, and Christopher D. Manning. 
2019. What does bert look at? an analysis of bert’s attention. Xiang Dai, Sarvnaz Karimi, Ben Hachey, and Cecile Paris. 2020. Cost-effective selection of pretraining data: A case study of pretraining bert on social me- dia. Lucio M. Dery, Paul Michel, Ameet Talwalkar, and Graham Neubig. 2021. Should we be pre-training? an argument for end-task aware training as an alter- native. CoRR, abs/2109.07437. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- In Proceedings of the 2019 Conference standing. of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Associ- ation for Computational Linguistics. Sauleh Eetemadi, William Lewis, Kristina Toutanova, and Hayder Radha. 2015. Survey of data-selection methods in statistical machine translation. Machine Translation, 29. Alexander Erdmann, David Joseph Wrisley, Benjamin Allen, Christopher Brown, Sophie Cohen-Bodénès, Micha Elsner, Yukun Feng, Brian Joseph, Béatrice Joyeux-Prunel, and Marie-Catherine de Marneffe. 2019. Practical, efficient, and customizable active learning for named entity recognition in the digital humanities. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2223–2234, Minneapolis, Minnesota. Associ- ation for Computational Linguistics. Leo Gao, Stella Biderman, Sid Black, Laurence Gold- ing, Travis Hoppe, Charles Foster, Jason Phang, Horace He, Anish Thite, Noa Nabeshima, Shawn Presser, and Connor Leahy. 2021. The pile: An 800gb dataset of diverse text for language modeling. CoRR, abs/2101.00027. Suchin Gururangan, Tam Dang, Dallas Card, and Noah A. Smith. 2019. Variational pretraining for semi-supervised text classification. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5880–5894, Flo- rence, Italy. Association for Computational Linguis- tics. Suchin Gururangan, Ana Marasovi´c, Swabha Swayamdipta, Kyle Lo, Iz Beltagy, Doug Downey, and Noah A. Smith. 2020. Don’t stop pretraining: In Adapt language models to domains and tasks. the the 58th Annual Meeting of Proceedings of Association for Computational Linguistics, pages 8342–8360, Online. Association for Computational Linguistics. Iris Hendrickx, Su Nam Kim, Zornitsa Kozareva, Preslav Nakov, Diarmuid Ó Séaghdha, Sebastian Padó, Marco Pennacchiotti, Lorenza Romano, and Stan Szpakowicz. 2010. SemEval-2010 task 8: Multi-way classification of semantic relations be- In Proceedings of the tween pairs of nominals. 5th International Workshop on Semantic Evalua- tion, pages 33–38, Uppsala, Sweden. Association for Computational Linguistics. Zhengbao Jiang, Frank F. Xu, Jun Araki, and Graham Neubig. 2020. How Can We Know What Language Models Know? Transactions of the Association for Computational Linguistics, 8:423–438. Jared Kaplan, Sam McCandlish, Tom Henighan, Tom B. Brown, Benjamin Chess, Rewon Child, Scott Gray, Alec Radford, Jeffrey Wu, and Dario Amodei. 2020. Scaling laws for neural language models. KrishnaTeja Killamsetty, Durga Sivasubramanian, Ganesh Ramakrishnan, and Rishabh K. Iyer. 2020. GLISTER: generalization based data subset selec- CoRR, tion for efficient and robust abs/2012.10630. learning. Katrin Kirchhoff and Jeff Bilmes. 2014. 
Submodu- larity for data selection in machine translation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 131–141, Doha, Qatar. Association for Com- putational Linguistics. Belinda Li, Jane Yu, Madian Khabsa, Luke Zettle- moyer, Alon Halevy, and Jacob Andreas. 2022. Quantifying adaptability in pre-trained language models with 500 tasks. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4696–4715, Seattle, United States. Association for Computational Lin- guistics. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Man- dar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining ap- proach. ArXiv, abs/1907.11692. David Lowell, Zachary Chase Lipton, and Byron C. Wallace. 2018. How transferable are the datasets col- lected by active learners? ArXiv, abs/1807.04801. Katerina Margatina, Loic Barrault, and Nikolaos Ale- tras. 2022. On the importance of effectively adapt- ing pretrained language models for active learning. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 825–836, Dublin, Ireland. Association for Computational Linguistics. Robert C. Moore and William Lewis. 2010. Intelligent In Pro- selection of language model training data. ceedings of the ACL 2010 Conference Short Papers, pages 220–224, Uppsala, Sweden. Association for Computational Linguistics. Dat Quoc Nguyen, Thanh Vu, and Anh Tuan Nguyen. 2020. BERTweet: A pre-trained language model In Proceedings of the 2020 for English tweets. Conference on Empirical Methods in Natural Lan- guage Processing: System Demonstrations, pages 9– 14, Online. Association for Computational Linguis- tics. Sameer Pradhan, Alessandro Moschitti, Nianwen Xue, Olga Uryupina, and Yuchen Zhang. 2012. CoNLL- 2012 shared task: Modeling multilingual unre- stricted coreference in OntoNotes. In Joint Confer- ence on EMNLP and CoNLL - Shared Task, pages 1–40, Jeju Island, Korea. Association for Computa- tional Linguistics. Chen Qu, Feng Ji, Minghui Qiu, Liu Yang, Zhiyu Min, Haiqing Chen, Jun Huang, and W. Bruce Croft. 2019. Learning to selectively transfer: Reinforced In Pro- transfer learning for deep text matching. ceedings of the Twelfth ACM International Confer- ence on Web Search and Data Mining, WSDM ’19, page 699–707, New York, NY, USA. Association for Computing Machinery. John Bauer, and Chris Manning. 2014. A gold stan- dard dependency corpus for English. In Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC’14), pages 2897– 2904, Reykjavik, Iceland. European Language Re- sources Association (ELRA). Ming Sun, Haoxuan Dou, Baopu Li, Junjie Yan, Wanli Ouyang, and Lei Cui. 2021. Autosampling: Search In Proceed- for effective data sampling schedules. ings of the 38th International Conference on Ma- chine Learning, volume 139 of Proceedings of Ma- chine Learning Research, pages 9923–9933. PMLR. Jeniya Tabassum, Mounica Maddela, Wei Xu, and Alan Ritter. 2020. Code and named entity recognition in StackOverflow. In Proceedings of the 58th Annual Meeting of the Association for Computational Lin- guistics, pages 4913–4926, Online. Association for Computational Linguistics. Ian Tenney, Dipanjan Das, and Ellie Pavlick. 2019a. Bert rediscovers the classical nlp pipeline. 
Ian Tenney, Patrick Xia, Berlin Chen, Alex Wang, Adam Poliak, R Thomas McCoy, Najoung Kim, Benjamin Van Durme, Sam Bowman, Dipanjan Das, and Ellie Pavlick. 2019b. What do you learn from context? probing for sentence structure in contextu- In International Con- alized word representations. ference on Learning Representations. Anthony Rousseau. 2013. Xenc: An open-source tool for data selection in natural language processing. The Prague Bulletin of Mathematical Linguistics, (100):73–82. Akim Tsvigun, Artem Shelmanov, Gleb Kuzmin, Leonid Sanochkin, Daniil Larionov, Gleb Gusev, Manvel Avetisian, and Leonid Zhukov. 2022. To- wards computationally feasible deep active learning. Sebastian Ruder and Barbara Plank. 2017. Learning to select data for transfer learning with bayesian opti- mization. Rachel Rudinger, Adam Teichert, Ryan Culkin, Sheng Zhang, and Benjamin Van Durme. 2018. Neural- In Pro- Davidsonian semantic proto-role labeling. ceedings of the 2018 Conference on Empirical Meth- ods in Natural Language Processing, pages 944– 955, Brussels, Belgium. Association for Computa- tional Linguistics. Artem Shelmanov, Vadim Liventsev, Danil Kireev, Nikita Khromov, Alexander Panchenko, Irina Fed- ulova, and Dmitry V. Dylov. 2019. Active learn- ing with deep pre-trained models for sequence tag- In 2019 ging of clinical and biomedical texts. IEEE International Conference on Bioinformatics and Biomedicine (BIBM), pages 482–489. Dan Shen, Jie Zhang, Jian Su, Guodong Zhou, and Chew-Lim Tan. 2004. Multi-criteria-based active learning for named entity recognition. In Proceed- ings of the 42nd Annual Meeting of the Association for Computational Linguistics (ACL-04), pages 589– 596, Barcelona, Spain. Natalia Silveira, Timothy Dozat, Marie-Catherine de Marneffe, Samuel Bowman, Miriam Connor, Marlies van der Wees, Arianna Bisazza, and Christof Monz. 2017. Dynamic data selection for neural ma- chine translation. In Proceedings of the 2017 Con- ference on Empirical Methods in Natural Language Processing, pages 1400–1410, Copenhagen, Den- mark. Association for Computational Linguistics. Ralph Weischedel, Martha Palmer, Mitchell Marcus, Eduard Hovy, Sameer Pradhan, Lance Ramshaw, Ni- anwen Xue, Ann Taylor, Jeff Kaufman, Michelle Franchini, Mohammed El-Bachouti, Robert Belvin, and Ann Houston. 2013. OntoNotes Release 5.0. Hainan Xu and Philipp Koehn. 2017. Zipporah: a fast and scalable data cleaning system for noisy web- crawled parallel corpora. In Proceedings of the 2017 Conference on Empirical Methods in Natural Lan- guage Processing, pages 2945–2950, Copenhagen, Denmark. Association for Computational Linguis- tics. Yian Zhang, Alex Warstadt, Xiaocheng Li, and Samuel R. Bowman. 2021. When do you need bil- In Proceed- lions of words of pretraining data? ings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th Interna- tional Joint Conference on Natural Language Pro- cessing (Volume 1: Long Papers), pages 1112–1125, Online. Association for Computational Linguistics. A proof of this derivation is given in Axelrod (2017). In our work, we still let W1, . . . , Wn represent the first n sentences, and H(REP ) is unchanged. (s), of each However, we use the scores, ∆H n→n+1 sentence and compute document-level scores for each document, Score(D) = 1 |D| (cid:88) s∈D ∆H n→n+1 (s) These document-level scores can then be ranked, and we select the top k% of the documents. 
Note that while there are many alternatives to selecting documents, our goal is to select a method and eval- uate whether automatic data selection is effective for LM pretraining rather than comparing different methods, which can be future work. B.1 Sentence vs Document Selection Results are shown below in Table 1. Data Cynical Sent Cynical Doc Random Doc ppl on OntoNotes 102.21 4.98 8.77 Table 1: Each subset consists of 15GB text. B.2 Edge Probing tasks The tasks are constituent labeling, part-of-speech tagging (POS), named entity labeling (NER), coref- erence labeling (coref), semantic role labeling (SRL), dependency labeling (Silveira et al., 2014), semantic protorole labeling (SPR2) (Rudinger et al., 2018), and relation classification (Hendrickx et al., 2010). The first 5 tasks listed are derived from OntoNotes (Weischedel et al., 2013). A Appendix A.1 Detailed Distribution A detailed data distribution is shown in Table 2. B Formalization of Cynical Data Selection The aim of CynDS is to incrementally construct W through scoring each sentence by information gained relative to the already selected data (Equa- tion 1). Given a REPresentative corpus from the target domain, CynDS is an effective and efficient method to identify the most relevant subset of sentences from a large corpus. Formally, we can define a cross-entropy between REP and some set of tokens as, H(REP ) = − (cid:88) v∈VREP CREP (v) WREP log C(v) |W | , where W is the set of tokens, V is the vocabulary, and C indicates the count of word type, v. CREP(v) is the count within REP and C(v) is the count within W . Let W1, . . . , Wn be the incrementally selected corpus. We can define the cross-entropy after se- lecting n sentences as Hn(REP ) = − (cid:88) v∈VREP CREP (v) WREP log Cn(v) Wn and minimize Hn. This can be rewritten recursively as Hn+1 = Hn + max s ∆H n→n+1 (s) where ∆H (s) is the delta (effect) of a given n→n+1 sentence s. To find the n + 1th sentence that mini- mizes ∆H , we can rewrite it as n→n+1 ∆H n→n+1 = P enalty n→n+1 + Gain n→n+1 (1) Here, penalty refers to how similar the sentence is to already selected texts and gain refers to how similar the sentence is to the representative corpus. Axelrod (2017) derives the P enalty and Gain as P enalty n→n+1 = log Gain n→n+1 = (cid:88) v∈VREP |Wn + wn+1| |Wn| CREP (v) WREP log Cn(v) Cn(v) + cn+1(v) Domain Pile-CC OpenWebText2 StackExchange PubMed Abstracts Wikipedia (en) USPTO Backgrounds PubMed Central FreeLaw ArXiv NIH ExPorter HackerNews Enron Emails OpenSubtitles YoutubeSubtitles Books3 EuroParl Gutenberg (PG-19) PhilPapers BookCorpus2 Ubuntu IRC Random Cynical-0.5% Cynical-1% Cynical-2% Cynical-5% 42.35% 27.44% 32.20% 16.95% 3.56% 15.51% 5.58% 15.40% 11.65% 8.90% 2.26% 5.84% 0.24% 2.98% 0.51% 2.66% 0.06% 1.25% 0.39% 0.94% 0.55% 0.82% 0.48% 0.49% 0.02% 0.33% 0.13% 0.17% 0.004% 0.15% 0.01% 0.07% 0.002% 0.04% 0.003% 0.03% 0.001% 0.01% 0.004% 0.01% 42.06% 32.53% 3.65% 5.51% 12.03% 2.00% 0.19% 0.38% 0.05% 0.39% 0.54% 0.51% 0.009% 0.13% 0.002% 0.01% 0.001% 0.002% 0.0005% 0.006% 43.30% 31.35% 3.39% 4.79% 11.09% 2.55% 0.53% 1.12% 0.12% 0.36% 0.68% 0.43% 0.05% 0.15% 0.015% 0.024% 0.008% 0.013% 0.005% 0.003% 43.03% 31.79% 3.36% 5.17% 11.24% 2.47% 0.38% 0.81% 0.08% 0.37% 0.60% 0.46% 0.03% 0.14% 0.009% 0.02% 0.005% 0.008% 0.003% 0.004% Table 2: Detailed Domain Distribution for the selection under different sizes.
synthetic_cpt
3
Enhancing_Tool_Retrieval_with_Iterative_Feedback_from_Large_Language_Models.pdf
Enhancing Tool Retrieval with Iterative Feedback from Large Language Models Qiancheng Xu, Yongqi Li†, Heming Xia, Wenjie Li Department of Computing, The Hong Kong Polytechnic University, China {qiancheng.xu, he-ming.xia}@connect.polyu.hk [email protected] [email protected] 4 2 0 2 p e S 9 2 ] L C . s c [ 2 v 5 6 4 7 1 . 6 0 4 2 : v i X r a Abstract Tool learning aims to enhance and expand large language models’ (LLMs) capabilities with ex- ternal tools, which has gained significant atten- tion recently. Current methods have shown that LLMs can effectively handle a certain amount of tools through in-context learning or fine- tuning. However, in real-world scenarios, the number of tools is typically extensive and ir- regularly updated, emphasizing the necessity for a dedicated tool retrieval component. Tool retrieval is nontrivial due to the following chal- lenges: 1) complex user instructions and tool descriptions; 2) misalignment between tool re- trieval and tool usage models. To address the above issues, we propose to enhance tool re- trieval with iterative feedback from the large language model. Specifically, we prompt the tool usage model, i.e., the LLM, to provide feedback for the tool retriever model in multi- round, which could progressively improve the tool retriever’s understanding of instructions and tools and reduce the gap between the two standalone components. We build a unified and comprehensive benchmark to evaluate tool retrieval models. The extensive experiments indicate that our proposed approach achieves advanced performance in both in-domain eval- uation and out-of-domain evaluation1. 1 Introduction Large language models (LLMs) have demonstrated remarkable success in language-related tasks and are considered a potential pathway to achieving artificial general intelligence (Zhao et al., 2023). However, despite their powerful capabilities, LLMs are still limited in many aspects, such as knowledge update and mathematical reasoning. A promising way to overcome these limitations is to empower LLMs with external tools, known as tool learn- ing (Qin et al., 2023a; Qu et al., 2024a). Tool †Corresponding author. 1Code Feedback. available at https://github.com/travis-xu/TR- Figure 1: Illustration of two tool-learning approaches in LLMs: (a) in-context learning and (b) fine-tuning. The challenges posed by the extensive and frequently updated tools require the external tool retrieval compo- nent. learning not only enhances LLMs’ performance on existing tasks but also allows them to tackle tasks that were previously beyond their reach. Besides, the ability to use tools is a crucial hallmark on the path to advanced intelligence. Existing tool learning methods have preliminar- ily demonstrated that LLMs could effectively uti- lize specific tools to complete corresponding tasks. They either leverage LLMs’ in-context learning ability to facilitate tool usage with tool descrip- tions (Shen et al., 2023) or fine-tune LLMs to in- tegrate tool learning capabilities into parameters, e.g., Toolformer (Schick et al., 2023). However, as illustrated in Figure 1, existing methods still face significant challenges in real-world scenarios due to the following reasons. 1) The number of tools is usually vast, making it impossible for LLMs to handle them all with the limited input length of in-context learning. 2) Tools would frequently and irregularly update, rendering finetuning-based ap- proaches costly and impractical. 
Therefore, a tool retrieval component, which aims to select appropriate tools from a large-scale tool set, is essential for LLMs. Despite its practicality and necessity, tool retrieval has been inadequately studied. Some approaches have adopted traditional document retrieval methods to retrieve tools for LLMs (Li et al., 2023; Qin et al., 2023b). However, we argue that they overlook the unique challenges of tool retrieval for LLMs: 1) Complex user instructions and tool descriptions. As illustrated in Figure 2, compared with document retrieval, user instructions are usually ambiguous and complex, and the repetition rate between instructions and corresponding tool descriptions is much lower. Unfortunately, the retriever model is typically limited in its capacity because of efficiency requirements, which makes tool retrieval more difficult and challenging. 2) Misalignment between tool retrieval and tool usage models. Previous approaches deploy the tool retriever separately from the downstream tool-usage model, which hinders the retriever from knowing which tools are really useful from the tool-usage perspective. This results in a tool recognition gap between the tool retriever and the tool usage model, further degrading tool-use performance.

Figure 2: Comparison between the document retrieval (MS MARCO) and tool retrieval (ToolBench) datasets. Tool retrieval presents more challenges due to the complex instructions (longer query/instruction length, left) and the lower query-document/instruction-tool repetition rate (right).

To address the above issues, we propose to enhance tool retrieval with iterative feedback. Our motivation is to utilize the LLM to enhance the comprehension ability of the tool retriever and bridge the gap between the two independent models. At each iteration, we conduct a feedback generation process by asking the LLM to provide feedback step by step, conditioned on the user instruction and the tools returned by the retriever. The LLM first comprehends the instruction and tool functionalities thoroughly, and then assesses the effectiveness of the retrieved tools. According to the assessment, the LLM refines the user instruction to improve the tool retrieval process. The refined instruction substitutes the previous user instruction and is used to retrieve a new list of tools from the tool set. In the next iteration, the new candidate tool list is fed into the LLM for a new round of feedback. During this iterative process, the tool retriever is expected to provide more appropriate tools for the tool-usage model. In this manner, the comprehension capability and tool preference of LLMs can be progressively incorporated into the retriever, and thus the tool retriever's performance can be continuously enhanced. We build a comprehensive tool retrieval benchmark, named TR-bench. The benchmark takes into account real-world practices with updated tools, and therefore encompasses both in-domain and out-of-domain settings. The experimental results show our approach achieves the best performance among current methods in both in-domain and out-of-domain settings.

The key contributions are summarized:

• We identify the importance of tool retrieval in tool learning and present the distinct challenges of tool retrieval.

• We propose to enhance tool retrieval with iterative feedback from the LLM. By leveraging iterative feedback, the tool retriever model gets continual improvements, ultimately reducing the misalignment between the two components.

• We build a comprehensive tool retrieval benchmark with in-domain and out-of-domain settings, which will also aid future tool retrieval research. Extensive experiments demonstrate the superior performance of our approach.

2 Related Work

2.1 Tool Learning in LLMs

Tool learning aims to equip LLMs with external tools to enhance and expand their capabilities (Ruan et al., 2023; Wang et al., 2024b; Huang et al., 2024c). Generally, existing tool learning methods can be categorized into in-context learning and fine-tuning approaches. The former encourages LLMs to use tools with descriptions, documentation, or demonstrations (Yuan et al., 2024; Du et al., 2024; Mu et al., 2024), while the latter trains the parameters of LLMs using specially created tool-use datasets (Hao et al., 2023; Tang et al., 2023; Gao et al., 2024). However, both the in-context learning and fine-tuning approaches encounter severe challenges in real-world scenarios, where the candidate tools are extensive and frequently updated. Therefore, it is crucial to equip LLMs with a tool retrieval component to select appropriate tools from a large-scale tool set. Recent works have proposed a stopgap measure through traditional document retrieval (Patil et al., 2023; Qin et al., 2023b; Zheng et al., 2024), task decomposition (Anantha et al., 2023; Huang et al., 2024b) and graph-based methods (Qu et al., 2024b). In this work, we aim to develop a method specialized for enhancing the tool retriever.

2.2 Document Retrieval

Early popular document retrieval methods rely on sparse retrieval, which calculates the relevance of documents to a query based on the frequency of query terms in each document, e.g., BM25 (Robertson and Zaragoza, 2009). With the development of language models (Devlin et al., 2019), the dense retrieval paradigm (Zhao et al., 2024; Mitra and Craswell, 2017) has gained considerable attention in the research community. By encoding queries and documents into high-dimensional vector representations and computing their relevance scores through inner products, this paradigm can capture semantic relationships between queries and documents, thereby enhancing retrieval performance (Karpukhin et al., 2020). However, tool retrieval presents unique challenges, rendering traditional document retrieval methods suboptimal. We address these challenges by harnessing LLMs' feedback to iteratively refine the tool retrieval process.

3 Preliminaries

3.1 Task Definition

Given a user's instruction, tool retrieval aims to select a small number of tools, which could aid the LLM in answering the instruction, from a large-scale tool set. Formally, we define the user instruction as q and the tool set as D = {d_1, d_2, ..., d_N}, where d_i represents the description of each tool and N is the total number of tools. The retriever model R needs to measure the relevance R(q, d_i) between the instruction q and each tool description d_i, and return K tools, denoted as D = {d_1, d_2, ..., d_K}.

3.2 Dense Retriever

A dense retriever usually leverages an encoder-based LLM to encode the user instruction q and a tool description d into dense embeddings E(q) and E(d), respectively.
Then, it can measure the relevance between q and d by calculating the similarity score between these two embeddings, denoted as R(q, d) = sim(E(q), E(d)).

The dense retriever is trained via a contrastive learning objective, which is designed to minimize the distance between the instruction embedding and the embeddings of positive tools (the instruction's ground-truth tools) while maximizing the distance between the instruction embedding and the embeddings of negative tools. The objective can be formulated as follows,

$$\mathcal{L} = -\frac{1}{B}\sum_{i=1}^{B}\log\frac{e^{R(q_i, d_i^{+})}}{e^{R(q_i, d_i^{+})} + \sum_{j} e^{R(q_i, d_{ij}^{-})}},\qquad(1)$$

where B denotes the batch size, d_i^+ denotes the positive tool, and d_{ij}^- represents the j-th negative tool for the instruction q_i.

However, due to efficiency requirements, dense retrieval utilizes a dual-encoder architecture, which has limited ability to understand instructions. In this study, our goal is to improve the tool retrieval process with feedback from the tool-usage model, i.e., the LLM.
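To make the objective in Eq. (1) concrete, the following is a minimal sketch of an in-batch contrastive loss over instruction/tool embeddings. It assumes inner-product similarity and an external bi-encoder (the paper trains a Sentence-BERT model); the function name and tensor shapes are illustrative rather than part of any released code.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(q_emb, pos_emb, neg_emb):
    """Eq. (1): softmax over one positive tool and M sampled negatives per instruction.

    q_emb:   (B, H) instruction embeddings E(q_i)
    pos_emb: (B, H) embeddings of each instruction's ground-truth tool d_i^+
    neg_emb: (B, M, H) embeddings of M negative tool descriptions d_ij^-
    """
    pos_score = (q_emb * pos_emb).sum(dim=-1, keepdim=True)   # R(q_i, d_i^+), shape (B, 1)
    neg_score = torch.einsum("bh,bmh->bm", q_emb, neg_emb)    # R(q_i, d_ij^-), shape (B, M)
    logits = torch.cat([pos_score, neg_score], dim=-1)        # positive sits at index 0
    labels = torch.zeros(q_emb.size(0), dtype=torch.long, device=q_emb.device)
    return F.cross_entropy(logits, labels)                    # -log softmax of the positive
```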
4 Methodology

4.1 Overview

Recent studies have found that LLMs show a great capability for acting as a critic (Zheng et al., 2023) and can provide comprehensive feedback to improve performance across a range of tasks (Madaan et al., 2023; Asai et al., 2023). Inspired by these observations, we propose an innovative framework that leverages the LLM's feedback to improve the tool retrieval process iteratively. Different from approaches that focus on feedback from execution results after the tool execution step (Yao et al., 2023; Wang et al., 2024a), we obtain the LLM's feedback before the actual tool execution step, i.e., right after the tool retrieval step.

As illustrated in Figure 3, at each iteration the LLM provides feedback on the current-turn retrieval results. Specifically, the LLM first comprehends the user instruction and tool functionalities thoroughly. Then, it assesses the effectiveness of the retrieved tools for handling the instruction. Based on the assessment, the LLM provides a refinement to the retrieval model, refining the user instruction if necessary. To ensure that the retriever model is aware of the iteration round, we conduct an iteration-aware feedback training process to adapt the retriever model to the continuously refined user instructions.

Figure 3: Illustration of our proposed iterative tool retrieval method. At each iteration, the LLM follows a three-step feedback generation process, which includes comprehension, assessment, and refinement, to improve the instruction.

4.2 Feedback Generation

Assume that at iteration step t, given the refined instruction q^t, we use the retriever model R to retrieve a list of top-K tools {d_1^t, ..., d_K^t}. We then conduct a three-step feedback generation process by feeding the retrieved tools and their associated descriptions into the LLM as follows.

Comprehension. Firstly, the LLM is prompted to give its comprehension of both the given instruction and the retrieved tools. The prompt provided to the LLM includes two parts: (1) summarize the abstract user goals by ignoring detailed entity information in the given instruction; (2) understand the functionalities of the retrieved tools, focusing on the category, name, description, and input and output parameters of the given tools. This step can be formulated as

$$F_C = \mathrm{LLM}(P_C, q^t, \{d_1^t, ..., d_K^t\}),\qquad(2)$$

where F_C denotes the LLM's comprehension output and P_C denotes the prompt provided to the LLM.

Assessment. The LLM assesses the effectiveness of the retrieved tools for handling the instruction based on its comprehension of the user's intent and the tool functionalities. The assessment is conducted from two perspectives: 1) identify which of the user's goals could and could not be solved by the retrieved tools, with corresponding reasons; and 2) analyze whether the ranked order of the retrieved tools corresponds with their significance in addressing the user's intent, with specific reasons. This step can be formulated as

$$F_A = \mathrm{LLM}(P_A, q^t, \{d_1^t, ..., d_K^t\}, F_C),\qquad(3)$$

where F_A denotes the LLM's assessment output.

Refinement. Lastly, the LLM refines the user instruction based on its assessment. Specifically, we ask the LLM to determine whether refinement is necessary based on the following two questions: 1) whether all the user's goals have been solved by the currently retrieved tools, and 2) whether all existing appropriate tools are given the highest ranking priorities by the retriever. If one of the answers is not "yes", we prompt the LLM to provide a potential refinement for retrieval improvement. Otherwise, the LLM directly returns a special token "N/A" without conducting any refinement. The refinement from the LLM is finally made on the current user instruction q^t. Specifically, we prompt the LLM to generate a refined instruction with enriched information in two dimensions: 1) more detailed and personalized content about the user's intents that have not been solved by the current tools, helping the retriever explore other relevant tools; and 2) more scenario-specific tool-usage information about existing appropriate tools, helping the retriever give higher ranking priority to those tools. This step can be formulated as

$$F_R = \mathrm{LLM}(P_R, q^t, \{d_1^t, ..., d_K^t\}, F_A),\qquad(4)$$

where P_R is the corresponding prompt and F_R denotes the LLM's refinement output, i.e., the new refined instruction q^{t+1}.
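As a rough illustration of Eqs. (2)-(4), the sketch below chains the three prompting steps with a generic llm callable. The prompt templates standing in for P_C, P_A, P_R and the llm interface are placeholders, not the exact prompts used in the paper.

```python
def generate_feedback(llm, instruction, tools, prompts):
    """One feedback round (Section 4.2): comprehension -> assessment -> refinement.

    llm:     any callable mapping a prompt string to generated text
    tools:   list of retrieved tool descriptions d_1^t ... d_K^t
    prompts: dict holding the P_C, P_A, P_R templates
    Returns the refined instruction q^{t+1}, or None if the LLM answers "N/A".
    """
    tool_block = "\n".join(f"{i + 1}. {d}" for i, d in enumerate(tools))
    f_c = llm(f"{prompts['comprehension']}\nInstruction: {instruction}\nTools:\n{tool_block}")  # Eq. (2)
    f_a = llm(f"{prompts['assessment']}\nInstruction: {instruction}\nTools:\n{tool_block}\n"
              f"Comprehension: {f_c}")                                                          # Eq. (3)
    f_r = llm(f"{prompts['refinement']}\nInstruction: {instruction}\nTools:\n{tool_block}\n"
              f"Assessment: {f_a}")                                                             # Eq. (4)
    return None if f_r.strip() == "N/A" else f_r.strip()
```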
4.3 Iteration-Aware Feedback Training

We concatenate a special token "Iteration t" in front of the instruction, where t is the instruction's iteration step (e.g., "Iteration t-1" for q^{t-1} and "Iteration t" for q^t).

We also employ hard negative sampling in training. Concretely, for each given instruction, we randomly sample an incorrect tool from the retrieved top-K tool list. The high similarity scores of those tools indicate that they are prone to be mistaken as correct tools by the retriever. In feedback training, we utilize those tool-instruction pairs as hard negative samples. The loss function for each iteration can then be calculated as

$$\mathcal{L} = -\frac{1}{B}\sum_{i=1}^{B}\log\frac{e^{R(q_i, d_i^{+})}}{e^{R(q_i, d_i^{+})} + \sum_{j\neq i} e^{R(q_i, d_{ij}^{-})} + \sum_{j} e^{R(q_i, d_{ij}^{H})}},\qquad(5)$$

where d_{ij}^H denotes the hard negative sample. By distinguishing the subtle differences in the tool descriptions, the retriever can achieve a deeper understanding of the tool functionalities and their relation to user instructions.

The final training objective can then be formulated as the sum of the losses in each iteration,

$$\mathcal{L}_{feedback} = \sum_{t=1}^{T} \alpha_t\, \mathcal{L}(q^t),\qquad(6)$$

where α_t is a balancing factor and L(q^t) is the loss function calculated by Equation 5 based on the refined user instructions q^t in the t-th iteration. In this way, the LLM's comprehensive knowledge of the user requirements can be injected into the retriever through the refined instructions. Besides, with the aid of the iteration-aware tokens and the joint-training manner, the retriever can maintain a balance between newly learned knowledge and previously acquired knowledge.

4.4 Inference

At inference time, the feedback generation process keeps working while the feedback training process ceases. The retriever iteratively updates the candidate tool list based on the refined user instruction from the LLM's feedback, until it outputs the final retrieved tools.

Concretely, assume that we have obtained a retriever R after the feedback training. For each initial test instruction q^0_test, we add a special token "Iteration 0" in front of the instruction. Then we use the trained retriever R to retrieve an initial tool list D^0_test, containing K candidate tools {d_1, d_2, ..., d_K}. The retrieved D^0_test and q^0_test are fed to the LLM for feedback generation, including instruction refinement, as discussed in Section 4.2. After obtaining the refined instruction q^1_test, we add the token "Iteration 1" to it and then input it to R for the next-round tool retrieval. Then, we can get an updated tool list D^1_test for a new round of feedback generation. As such, we obtain a final tool list D^T_test after T iterations.
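Putting Sections 4.2 and 4.4 together, inference is a small loop around the retriever and the feedback step; a sketch is shown below. The retriever.retrieve interface and the reuse of the generate_feedback helper sketched earlier are illustrative assumptions, not the paper's actual implementation.

```python
def iterative_retrieval(retriever, llm, prompts, instruction, K=10, T=3):
    """Section 4.4: retrieve, ask the LLM for feedback, and retry with the
    refined instruction until the LLM returns "N/A" or T rounds are reached."""
    q = instruction
    tools = []
    for t in range(T):
        # iteration-aware prefix, mirroring the "Iteration t" token used in training
        tools = retriever.retrieve(f"Iteration {t} {q}", top_k=K)
        refined = generate_feedback(llm, q, tools, prompts)
        if refined is None:      # "N/A": no further refinement deemed necessary
            break
        q = refined
    return tools
```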
5 Experiments

5.1 Setup

Datasets and evaluation. To assess the tool retrieval performance of models, we conduct experiments on a tool retrieval benchmark, referred to as TR-bench, based on three datasets: ToolBench (Qin et al., 2023b), T-Eval (Chen et al., 2023), and UltraTools (Huang et al., 2024a). To address real-world requirements, we conduct evaluations in both in-domain and out-of-domain settings. Specifically, the training set is from ToolBench, while the test set of ToolBench is employed for in-domain evaluation, and the test sets from T-Eval and UltraTools are used for out-of-domain evaluation. The statistics of TR-bench are summarized in Table 1.

scenarios / # instructions / # tool set
Training Set: ToolBench-I1 86,643 / -; ToolBench-I2 84,270 / -; ToolBench-I3 25,044 / -; ToolBench-All 195,937 / -
In-domain Evaluation: ToolBench-I1 796 / 10,439; ToolBench-I2 573 / 13,142; ToolBench-I3 218 / 1,605; ToolBench-All 1,587 / 13,954
Out-of-domain Evaluation: T-Eval 553 / 50; UltraTools 1,000 / 498

Table 1: Statistics of the TR-bench, which is constructed from ToolBench (Qin et al., 2023b), T-Eval (Chen et al., 2023), and UltraTools (Huang et al., 2024a).

Following ToolBench, we adopt the Normalized Discounted Cumulative Gain (NDCG) (Järvelin and Kekäläinen, 2002), an ideal metric for tool retrieval, to evaluate the quality of retrieved tools. In our evaluation, we report NDCG@m (m = 1, 3, 5, 10), calculated according to the position of each golden tool among the top-m candidate tools retrieved by the tool retriever. Thus, the more accurately the tool retriever can retrieve correct tools, the higher the NDCG@m score will be.

Baselines. We compare our method against representative retrieval methods: 1) BM25 (Robertson and Zaragoza, 2009): the classical sparse retrieval method; 2) Ada Embedding: the closed-source OpenAI text-embedding-ada-002 model2; 3) ToolRetriever (Qin et al., 2023b): a dense retrieval approach specifically finetuned on tool retrieval datasets.

Implementation details. We employ Sentence-BERT (Reimers and Gurevych, 2019) to train our retriever model based on BERT-base (Devlin et al., 2019). We set the learning rate to 2e-5 with 500 warm-up steps. The batch size in training is set to 64. We utilize ChatGPT (gpt-3.5-turbo-0125)3 as the LLM for giving feedback. The number of tool candidates K, the balancing factor α, and the iteration round T are set to 10, 1, and 3, respectively. We trained the model several times to confirm that the improvement is not a result of random chance and report the middle run. Our experiments were conducted on four NVIDIA A6000 GPUs with 48 GB of memory.

2https://platform.openai.com/docs/guides/embeddings/embedding-models.
3https://openai.com/index/introducing-chatgpt-and-whisper-apis/.

5.2 Main Results

In-domain evaluation. The results of the in-domain evaluation are reported in Table 2.

Methods: SINGLE-TOOL (I1) N@1 N@3 N@5 | CATEGORY (I2) N@1 N@3 N@5 | COLLECTION (I3) N@1 N@3 N@5 | ALL N@1 N@3 N@5
BM25: 18.37 17.97 19.65 | 11.97 9.85 10.95 | 25.23 18.95 20.37 | 15.84 13.98 15.63
Ada Embedding: 57.52 54.90 58.83 | 36.82 28.83 30.68 | 54.59 42.55 46.83 | 46.59 41.06 43.95
ToolRetriever: 84.20 89.59 89.65 | 68.24 77.43 77.90 | 81.65 87.24 87.13 | 75.73 83.19 83.06
Ours: 90.70 90.95 92.47 | 89.01 85.46 87.10 | 91.74 87.94 90.20 | 88.53 87.00 88.83
% improve: 7.72% 1.52% 3.15% | 30.44% 10.37% 11.81% | 12.36% 0.80% 3.52% | 16.90% 4.58% 6.95%

Table 2: In-domain evaluation on TR-bench in terms of NDCG@m under scenarios including single-tool (I1), intra-category multi-tool (I2), intra-collection multi-tool (I3), and the whole data (All). % improve represents the relative improvement achieved by our method over the previously best tool retrieval method.

Methods: T-EVAL N@1 N@3 N@5 N@10 | ULTRATOOLS N@1 N@3 N@5 N@10
BM25: 52.12 43.19 45.23 52.91 | 15.10 14.13 16.03 18.34
Ada Embedding: 80.11 69.11 71.95 79.62 | 31.46 33.75 39.91 46.40
ToolRetriever: 82.10 72.03 74.15 80.76 | 48.20 47.73 53.01 58.93
Ours: 84.45 73.31 74.45 80.25 | 49.30 47.50 54.30 59.92
% improve: 2.86% 1.78% 0.40% -0.06% | 2.28% -0.48% 2.43% 1.68%

Table 3: Out-of-domain evaluation on TR-bench in terms of NDCG@m under two scenarios, T-Eval (Chen et al., 2023) and UltraTools (Huang et al., 2024a). % improve represents the relative improvement achieved by our method over the previously best tool retrieval method.

It is observed that non-finetuned retrieval methods, i.e., BM25 and Ada Embedding, perform much worse than the finetuned methods. This is reasonable since non-finetuned methods have not been specifically adapted for tool retrieval. While ToolRetriever outperforms the non-finetuned methods, its performance is still not satisfying. In comparison, our proposed method consistently outperforms all finetuned and non-finetuned baselines.
Significantly, our method maintains strong performance in the intra-category multi-tool (I2) scenario, even as other methods' performance declines, demonstrating the robustness of our proposed method across different scenarios. The above results prove the effectiveness of our method in enhancing tool retrieval accuracy, particularly in challenging scenarios with multiple tools.

Out-of-domain evaluation. Since tools are usually frequently updated in the real world, we further test all methods in the out-of-domain setting, where the training data come from ToolBench and the test data come from T-Eval and UltraTools. The experimental results are shown in Table 3. We observe that our method significantly outperforms the other baselines across both scenarios. This demonstrates that our method not only excels on in-domain benchmarks but also maintains robust performance across varied scenarios, revealing its generalization ability for tool retrieval.

We further compare the tool usage performance of our method with ToolRetriever in the I2 scenario. We adopt ToolLLaMA (Qin et al., 2023b), which is trained on LLM-annotated solution paths, as the tool usage model, and use "pass rate" and "win rate" as evaluation metrics. Our method achieves a 75.6% pass rate compared to ToolRetriever's 68.5%, and a 65.9% win rate compared to ToolRetriever's 60.8%. The results demonstrate the performance improvement in tool usage, benefiting the entire tool learning process.

5.3 Ablation Study

We conduct ablation studies to investigate the efficacy of different components of our method. First, we remove the warm-up training by directly applying our method to a retriever based on Sentence-BERT. Then, we analyze the contribution of hard negative sampling by removing the hard-to-distinguish samples from training. In addition, we assess the efficacy of joint training by substituting it with a loss L_feedback = L(q^t), i.e., with respect to only the refined instructions q^t at the current iteration t.

Table 4 reports the ablation test performance (i.e., NDCG@m, m = 1, 3, 5, 10) under the intra-category multi-tool instructions (I2) scenario on ToolBench.

Methods: N@1 N@3 N@5 N@10
Ours: 89.01 85.46 87.10 88.41
w/o warm-up: 85.51 81.36 84.47 86.92
w/o hard-negative: 86.04 80.41 84.00 85.98
w/o joint: 85.38 81.55 83.79 86.20
w/o joint & hard-neg: 83.77 77.67 81.21 83.69

Table 4: Ablation study of our method under the intra-category multi-tool (I2) scenario.

Iteration: N@1 N@3 N@5 N@10 Efficiency
1: 85.69 80.48 83.94 86.27 6.12s
2: 87.78 83.48 86.31 88.26 8.59s
3: 89.01 85.46 87.10 88.41 10.30s

Table 5: Analysis of the iteration round under the intra-category multi-tool (I2) scenario. Efficiency is measured by the time consumed to complete one user instruction.

Methods: N@1 N@3 N@5
ToolRetriever (BERT-based): 68.24 77.43 77.90
Ours (BERT-based): 89.01 85.46 87.10
ToolRetriever (RoBERTa-based): 76.61 69.81 74.99
Ours (RoBERTa-based): 88.13 85.41 86.75

Table 6: Analysis of different base models under the intra-category multi-tool (I2) scenario.

From the results, we observe that our method achieves comparably high NDCG scores even without warm-up training, indicating that it does not heavily rely on prior tool-use knowledge. When hard negative sampling is removed, the performance degradation illustrates that hard negative sampling enables the model to discriminate between similar tool functionalities.
Besides, the model's performance further declines when joint training is removed, demonstrating that the model can balance new and previous knowledge in the joint-training manner.

5.4 In-depth Analysis

Analysis of the iteration round. The iteration round is an important factor in our method. We conduct experiments to investigate changes in effectiveness and efficiency with different iteration rounds T. The results are presented in Table 5, where efficiency is measured by the average time cost to complete one user instruction. By analyzing the results in Table 5, we gain two findings. 1) We observe a continuous improvement as the iteration round increases. This shows that the tool retriever progressively enhances its performance with the aid of the LLM's feedback. 2) In terms of time efficiency, we find that adding one additional round of refinement takes an average of 6.12s per instruction, primarily resulting from the time spent waiting for the LLM's feedback when calling the OpenAI API. As the number of iterations increases, the extra inference time required for each instruction decreases. This is because fewer instructions require refinement as retrieval performance improves.

Analysis of base models. We further analyze the impact of different base models on performance. Specifically, we replace the base model BERT in our method with another classic language model, RoBERTa (Liu et al., 2019). The results are shown in Table 6. As we can see, our method still achieves significant improvement over the baseline with the same RoBERTa model. Another observation is that RoBERTa is more effective in serving as a base model for the retrieval application, which benefits from its effective training strategies. The improvements demonstrate the robustness of our method with different base models.

Analysis of embedding sizes. Since the retriever model R encodes the textual instruction and tool description into dense vectors, we explore the impact of the embedding size on retrieval performance, as shown in Table 7.

Embedding Size: N@1 N@3 N@5 N@10
300: 87.61 83.49 85.20 86.50
512: 87.61 82.85 84.67 85.81
768: 89.01 85.46 87.10 88.41
1024: 88.66 83.91 85.94 87.04
2048: 88.74 83.95 85.98 87.43

Table 7: Analysis of embedding sizes under the intra-category multi-tool (I2) scenario.

From the table, we find that larger embedding sizes result in greater performance improvements compared to smaller embedding sizes. This is probably because embeddings with larger sizes can accommodate more knowledge. However, when the embedding size increases from 768 to 2048, there is a slight decrease in performance. This suggests that a specific embedding size is sufficient, and larger embedding sizes may pose challenges to training. It is worth noting that larger embedding sizes necessitate higher training costs and increased inference memory. Therefore, we recommend an optimal embedding size of 768.

5.5 Case Study

As shown in Figure 4, we conduct a case study using an example of instruction refinement to take a closer look at the effect of our method.

Figure 4: Case study on the effect of user instruction refinement through 3 iterations. The original instruction is revised step by step, leading to improved retrieval results.

In the 1st iteration, we observe that the refined instruction includes more detailed information (i.e., "total number") about the user's requirements than the original instruction, enabling the retriever to identify more appropriate tools (e.g., Check residential proxies service status). This reveals that the comprehension capabilities of LLMs can be instilled into the retrieval process through feedback. In the 2nd iteration, our method further refines the instruction by omitting irrelevant content (i.e., "information") which may mislead the retriever into retrieving incorrect tools (e.g., Retrieve Proxy Information). Another benefit of the refinement is that some correct tools (e.g., Bash Code Compiler) move up in the top-K rankings, improving overall retrieval performance. In the 3rd iteration, our method showcases great decision-aware capabilities, where the iterative process can be terminated if no further refinement is deemed necessary.

6 Conclusion and Future Work

In this study, we concentrate on the crucial tool retrieval step in the tool learning of LLMs. We have identified the bottleneck in the tool retrieval-usage pipeline as the limited tool retrieval model, and presented the unique challenges of tool retrieval compared with document retrieval. To improve the current tool retrieval process, we propose leveraging the LLM's feedback to assess the retrieval results and provide detailed suggestions for refining user instructions.
In order to integrate the re- triever model into this iterative process, we imple- ment iteration-aware feedback training. This will improve the tool retriever’s capabilities and close the gap between tool retrieval and usage models. We conduct the TR-benchmark to comprehensively evaluate the models’ ability in real-world tool re- trieval scenarios. Our method demonstrates the best performance in both in-domain and out-of-domain settings. In the future, we aim to improve this work from the following aspects. 1) Limited by the training speed, we have applied the offline feedback gen- eration, where feedback is generated before train- ing the tool retriever. We will also assess whether online feedback generation yields further improve- ments in the future. 2) Furthermore, as the tool retriever serves the subsequent tool usage model in tool learning, we intend to conduct further eval- uations of the tool retriever models based on the subsequent tool usage results. Limitations 1) Undoubtedly, our iterative refinement will re- duce the inference speed of the tool retrieval. The efficiency issue is inherent in approaches involving LLMs’ interaction. We have evaluated the effi- ciency as the number of iterative rounds increases. Fortunately, we observed that the retrieval model can achieve a significant performance improvement after just a single round of LLMs’ feedback com- pared to without feedback. Furthermore, the perfor- mance enhancement of the tool retrieval is crucial for the subsequent tool usage model, ensuring that the correct tools are retrieved and lays the founda- tion for all subsequent steps of tool usage. There- fore, we believe that performance improvement is worthwhile despite some efficiency loss. We will also pay more attention to this issue in the future. 2) Similar to document retrieval, the used datasets in our work also contain “false negative” samples. For instance, some tools may be capable of han- dling the user’s instruction but are not labeled as positive. This can disrupt the training and evalua- tion of tool retrieval and is a common limitation in many retrieval scenarios. Ethics Statement The datasets used in our experiment are publicly released and labeled through interaction with hu- mans in English. In this process, user privacy is protected, and no personal information is contained in the dataset. The scientific artifacts that we used are available for research with permissive licenses. And the use of these artifacts in this paper is consis- tent with their intended use. Therefore, we believe that our research work meets the ethics of the con- ference. References Raviteja Anantha, Bortik Bandyopadhyay, Anirudh Kashi, Sayantan Mahinder, Andrew W Hill, and Protip: Progressive Srinivas Chappidi. 2023. arXiv preprint tool retrieval improves planning. arXiv:2312.10332. Akari Asai, Zeqiu Wu, Yizhong Wang, Avirup Sil, and Hannaneh Hajishirzi. 2023. Self-rag: Learning to retrieve, generate, and critique through self-reflection. arXiv preprint arXiv:2310.11511. Zehui Chen, Weihua Du, Wenwei Zhang, Kuikun Liu, Jiangning Liu, Miao Zheng, Jingming Zhuo, Songyang Zhang, Dahua Lin, Kai Chen, et al. 2023. T-eval: Evaluating the tool utilization capability step by step. arXiv preprint arXiv:2312.14033. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. 
In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186. Association for Computational Linguistics. Yu Du, Fangyun Wei, and Hongyang Zhang. 2024. Anytool: Self-reflective, hierarchical agents for large-scale API calls. In Forty-first International Conference on Machine Learning. Shen Gao, Zhengliang Shi, Minghang Zhu, Bowen Fang, Xin Xin, Pengjie Ren, Zhumin Chen, Jun Ma, and Zhaochun Ren. 2024. Confucius: Iterative tool learn- ing from introspection feedback by easy-to-difficult curriculum. In Proceedings of the AAAI Conference on Artificial Intelligence, pages 18030–18038. Shibo Hao, Tianyang Liu, Zhen Wang, and Zhit- ing Hu. 2023. Toolkengpt: Augmenting frozen language models with massive tools via tool em- beddings. In Advances in Neural Information Processing Systems, volume 36, pages 45870–45894. Curran Associates, Inc. Shijue Huang, Wanjun Zhong, Jianqiao Lu, Qi Zhu, Ji- ahui Gao, Weiwen Liu, Yutai Hou, Xingshan Zeng, Yasheng Wang, Lifeng Shang, et al. 2024a. Planning, creation, usage: Benchmarking llms for comprehen- sive tool utilization in real-world complex scenarios. arXiv preprint arXiv:2401.17167. Tenghao Huang, Dongwon Jung, Vaibhav Kumar, Mo- hammad Kachuee, Xiang Li, Puyang Xu, and Muhao Chen. 2024b. Planning and editing what you re- trieve for enhanced tool learning. In Findings of the Association for Computational Linguistics: NAACL 2024, pages 975–988. Association for Computational Linguistics. Yue Huang, Jiawen Shi, Yuan Li, Chenrui Fan, Siyuan Wu, Qihui Zhang, Yixin Liu, Pan Zhou, Yao Wan, Neil Zhenqiang Gong, and Lichao Sun. 2024c. Meta- tool benchmark for large language models: Decid- ing whether to use tools and which to use. In The Twelfth International Conference on Learning Representations. Kalervo Järvelin and Jaana Kekäläinen. 2002. Cu- mulated gain-based evaluation of ir techniques. ACM Transactions Information Systems, 20(4):422–446. on Vladimir Karpukhin, Barlas Oguz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. 2020. Dense passage retrieval for open- domain question answering. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6769–6781. Association for Computational Linguistics. Minghao Li, Yingxiu Zhao, Bowen Yu, Feifan Song, Hangyu Li, Haiyang Yu, Zhoujun Li, Fei Huang, and Yongbin Li. 2023. API-bank: A compre- hensive benchmark for tool-augmented LLMs. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 3102–3116. Association for Computational Linguis- tics. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Man- dar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining ap- proach. arXiv preprint arXiv:1907.11692. Aman Madaan, Niket Tandon, Prakhar Gupta, Skyler Hallinan, Luyu Gao, Sarah Wiegreffe, Uri Alon, Nouha Dziri, Shrimai Prabhumoye, Yiming Yang, Shashank Gupta, Bodhisattwa Prasad Majumder, Katherine Hermann, Sean Welleck, Amir Yazdan- bakhsh, and Peter Clark. 2023. Self-refine: Itera- tive refinement with self-feedback. In Advances in Neural Information Processing Systems, volume 36, pages 46534–46594. Curran Associates, Inc. Feiteng Mu, Yong Jiang, Liwen Zhang, Chu Liu, Wenjie Li, Pengjun Xie, and Fei Huang. 2024. Adaptive selection for homogeneous tools: An instantiation in the rag scenario. 
arXiv preprint arXiv:2406.12429. Shishir G Patil, Tianjun Zhang, Xin Wang, and Joseph E Gonzalez. 2023. Gorilla: Large language model connected with massive apis. arXiv preprint arXiv:2305.15334. Yujia Qin, Shengding Hu, Yankai Lin, Weize Chen, Ning Ding, Ganqu Cui, Zheni Zeng, Yufei Huang, Chaojun Xiao, Chi Han, et al. 2023a. Tool learning with foundation models. arXiv preprint arXiv:2304.08354. Yujia Qin, Shihao Liang, Yining Ye, Kunlun Zhu, Lan Yan, Yaxi Lu, Yankai Lin, Xin Cong, Xiangru Tang, Bill Qian, et al. 2023b. Toolllm: Facilitating large language models to master 16000+ real-world apis. arXiv preprint arXiv:2307.16789. Changle Qu, Sunhao Dai, Xiaochi Wei, Hengyi Cai, Shuaiqiang Wang, Dawei Yin, Jun Xu, and Ji-Rong Wen. 2024a. Tool learning with large language mod- els: A survey. arXiv preprint arXiv:2405.17935. Changle Qu, Sunhao Dai, Xiaochi Wei, Hengyi Cai, Shuaiqiang Wang, Dawei Yin, Jun Xu, and Ji-Rong Wen. 2024b. Towards completeness-oriented tool re- trieval for large language models. In Proceedings of the 33rd ACM International Conference on Information and Knowledge Management. Nils Reimers and Iryna Gurevych. 2019. Sentence- BERT: Sentence embeddings using Siamese BERT- networks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3982–3992. Association for Computational Linguistics. Stephen Robertson and Hugo Zaragoza. 2009. The prob- abilistic relevance framework: Bm25 and beyond. Foundations and Trends in Information Retrieval, 3(4):333–389. Jingqing Ruan, Yihong Chen, Bin Zhang, Zhiwei Xu, Tianpeng Bao, Hangyu Mao, Ziyue Li, Xingyu Zeng, Rui Zhao, et al. 2023. Tptu: Task planning and tool usage of large language model-based ai agents. In NeurIPS 2023 Foundation Models for Decision Making Workshop. Timo Schick, Jane Dwivedi-Yu, Roberto Dessi, Roberta Raileanu, Maria Lomeli, Eric Hambro, Luke Zettle- moyer, Nicola Cancedda, and Thomas Scialom. 2023. Toolformer: Language models can teach themselves to use tools. In Advances in Neural Information Processing Systems, volume 36, pages 68539–68551. Curran Associates, Inc. Bhaskar Mitra and Nick Craswell. 2017. Neural arXiv preprint models for information retrieval. arXiv:1705.01509. Yongliang Shen, Kaitao Song, Xu Tan, Dongsheng Li, Weiming Lu, and Yueting Zhuang. 2023. Hugging- gpt: Solving ai tasks with chatgpt and its friends in hugging face. In Advances in Neural Information Processing Systems, volume 36, pages 38154–38180. Curran Associates, Inc. Qiaoyu Tang, Ziliang Deng, Hongyu Lin, Xianpei Han, Qiao Liang, and Le Sun. 2023. Toolalpaca: Gener- alized tool learning for language models with 3000 simulated cases. arXiv preprint arXiv:2306.05301. Boshi Wang, Hao Fang, Jason Eisner, Benjamin Van Durme, and Yu Su. 2024a. LLMs in the imag- inarium: Tool learning through simulated trial and error. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 10583–10604. As- sociation for Computational Linguistics. Xingyao Wang, Zihan Wang, Jiateng Liu, Yangyi Chen, Lifan Yuan, Hao Peng, and Heng Ji. 2024b. MINT: Evaluating LLMs in multi-turn interaction with tools and language feedback. In The Twelfth International Conference on Learning Representations. Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak Shafran, Karthik Narasimhan, and Yuan Cao. 2023. ReAct: Synergizing reasoning and acting in language models. 
In International Conference on Learning Representations (ICLR).

Siyu Yuan, Kaitao Song, Jiangjie Chen, Xu Tan, Yongliang Shen, Kan Ren, Dongsheng Li, and Deqing Yang. 2024. EASYTOOL: Enhancing LLM-based agents with concise tool instruction. In ICLR 2024 Workshop on Large Language Model (LLM) Agents.

Wayne Xin Zhao, Jing Liu, Ruiyang Ren, and Ji-Rong Wen. 2024. Dense text retrieval based on pretrained language models: A survey. ACM Transactions on Information Systems, 42(4):1–60.

Wayne Xin Zhao, Kun Zhou, Junyi Li, Tianyi Tang, Xiaolei Wang, Yupeng Hou, Yingqian Min, Beichen Zhang, Junjie Zhang, Zican Dong, et al. 2023. A survey of large language models. arXiv preprint arXiv:2303.18223.

Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric Xing, Hao Zhang, Joseph E Gonzalez, and Ion Stoica. 2023. Judging LLM-as-a-judge with MT-bench and Chatbot Arena. In Advances in Neural Information Processing Systems, volume 36, pages 46595–46623. Curran Associates, Inc.

Yuanhang Zheng, Peng Li, Wei Liu, Yang Liu, Jian Luan, and Bin Wang. 2024. ToolRerank: Adaptive and hierarchy-aware reranking for tool retrieval. In Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024), pages 16263–16273. ELRA and ICCL.
synthetic_cpt
4
LiDA_Language-Independent_Data_Augmentation_for_Text_Classification.pdf
LIDA: A Tool for Automatic Generation of Grammar-Agnostic Visualizations and Infographics using Large Language Models

Victor Dibia
Microsoft Research
[email protected]

Abstract

Systems that support users in the automatic creation of visualizations must address several subtasks - understand the semantics of data, enumerate relevant visualization goals and generate visualization specifications. In this work, we pose visualization generation as a multi-stage generation problem and argue that well-orchestrated pipelines based on large language models (LLMs) and image generation models (IGMs) are suitable to addressing these tasks. We present LIDA, a novel tool for generating grammar-agnostic visualizations and infographics. LIDA comprises 4 modules - a SUMMARIZER that converts data into a rich but compact natural language summary, a GOAL EXPLORER that enumerates visualization goals given the data, a VISGENERATOR that generates, refines, executes and filters visualization code and an INFOGRAPHER module that yields data-faithful stylized graphics using IGMs. LIDA provides a python api, and a hybrid USER INTERFACE (direct manipulation and multilingual natural language) for interactive chart, infographics and data story generation. Code and demo are available at this url - https://microsoft.github.io/lida/

Keywords: transformers, few-shot learning, prompt programming, language models, GPT-3, metaprompts, serial reasoning, semiotics

1 Introduction

Visualizations make data accessible by reducing the cognitive burden associated with extracting insights from large tabular datasets. However, visualization authoring is a complex creative task, involving multiple steps. First the user must build familiarity with the dataset (content and semantics) and enumerate a set of relevant goals or hypotheses that can be addressed using the data. Next, users must select the right visualization representation (marks, transformations and layout) for each goal. Finally, the user must implement the visualization either as code or using available direct manipulation interfaces. Each of these steps requires expertise, and can be tedious as well as error prone for users with limited visualization experience (novices). Existing research has sought to address these challenges by automating the visualization (AUTOVIZ) creation process, given a dataset (Podo et al., 2023). Automation may occur in two modes: i.) fully automated - the system automatically generates visualizations relevant to the data ii.) semi-automated - the user specifies their goals and the system generates visualizations that address these goals. The former mode is valuable for users unfamiliar with the data and the latter is valuable for users with some familiarity with the data and the visualization task. Consequently, a successful AUTOVIZ tool must excel at each of several subtasks - understand the semantics of the data, enumerate relevant visualization goals and generate visualization specifications that meet syntax, design, task and perceptual requirements of these goals (Podo et al., 2023). Furthermore, given the target demographic (novice users), such a tool must support the user by offering natural language (NL) interaction modalities (Mitra et al., 2022; Narechania et al., 2020; Chen et al., 2022), affordances to control system behavior and sense making tools to understand and debug/verify system behavior.
While related work has addressed aspects of the AUTOVIZ task, there are several known limi- tations (Podo et al., 2023) such as they: (i) rely on heuristics that are limited in coverage, challenging to craft and tedious to maintain (Wongsuphasawat et al., 2017). (ii) require significant user interac- tion to generate visualizations (Wongsuphasawat et al., 2017; Moritz et al., 2018). (iii) implement automated approaches that offer limited control over system input and output (Dibia and Demiralp, 2019) (iv) require grammar (or chart type) specific training data and model architectures (Dibia and Demiralp, 2019; Luo et al., 2018) for each sub task, (v) do not consider alternative chart representation formats such as infographics. Concurrently, advances in large foundation mod- Figure 1: LIDA generates visualizations and infographics across 4 modules - data summarization, goal exploration, visualization generation and infographics generations. Example output from each module is shown. Figure 2: Example data-faithful infographics and associated style prompts generated with LIDA. els (Bommasani et al., 2021) have shown state of the art performance on a variety of creative tasks such as multilingual text generation, code genera- tion, image captioning, image generation, and im- age editing. In this work, we argue that the vast capabilities of these models can be assembled to ad- dress the AUTOVIZ task, whilst addressing the lim- itations of existing approaches. This work makes the following contributions: • We present a novel multi-stage, modular ap- proach (Fig 1) for the automatic generation of data visualization and infographics using LLMs1. Specifically, we (i) Efficiently represent datasets as NL summaries, suitable as ground- ing context for an LLM to address visualization tasks. (ii) Generate a set of visualization goals using LLMs. Importantly, we leverage prompt engineering to steer the model towards generat- 1This work primarily utilizes the OpenAI gpt-3.5-turbo-x line of models for text and code generation. ing correct visualization that follow best prac- tices (see Appendix C). (iii) Apply LLMs to generate grammar-agnostic visualization speci- fication based on generated (or human provided) goals. (iv) Provide a hybrid interface that sup- ports traditional direct manipulation controls (e.g., manually select which fields to explore) and a rich multilingual NL interface to sup- port user’s with varied skill/experience. (v) Ap- ply text-conditioned image generation models (IGM) models in generating stylized infograph- ics that are both informative (generally faithful to data), aesthetically pleasing, memorable and engaging (see section 2.3). • We introduce metrics for evaluating LLM- enabled visualization tools, including a metric for pipeline reliability (visualization error rate - VER), and visualization quality (self-evaluated visualization quality - SEVQ) (see section 4). 
• We implement our approach in an Open Source library - LIDA2. LIDA provides a python api, a web api and a rich web interface useful for research and practical applications. Compared to existing AUTOVIZ approaches, LIDA proposes an implementation that is simplified (eliminates the need for subtask-specific models), general (can be adapted to generate visualizations in any programming language or grammar), flexible (individual modules can be optimized) and scalable (the system performance will improve with advances in the underlying LLM).

(Figure 1 depicts the pipeline on a cars.csv example: the SUMMARIZER converts the dataset into a rich but compact natural language representation; the GOAL EXPLORER generates a set of potential goals given the dataset context, where goals may also be provided directly by the user with multilingual input supported; the VISGENERATOR generates, evaluates, repairs, filters and executes visualization code to yield specifications in any programming language or grammar; and the INFOGRAPHER generates stylized infographics from the reference visualization and style prompts such as "line sketch art", "underwater art, shells", "pastel art" and "oil on canvas, impasto".)

Taken together, these contributions provide building blocks towards complex workflows such as visualization translation, chart question answering (with applications in accessibility of charts), automated data exploration and automated data stories. To the best of our knowledge, LIDA is the first tool to formulate visualization/infographic generation as a multi-step generation task and demonstrate an end-to-end pipeline that addresses a variety of subtasks.

2 Related Work

LIDA is informed by research on large foundation models applied to creative tasks across modalities such as text and images, and advances in automated generation of visualizations and infographics.

2.1 Foundation Models for Creative Tasks

Advances in large transformer-based (Vaswani et al., 2017) models trained on massive amounts of data (terabytes of text and images) have led to a paradigm shift where a single model demonstrates state of the art task performance across multiple data modalities such as text, images, audio and video. These models, also known as foundation models (Bommasani et al., 2021), have been shown to be effective for a variety of human creativity tasks. LLMs like the GPT3 series (Brown et al., 2020), OPT (Zhang et al., 2022), PALM (Chowdhery et al., 2022), LAMBDA (Cohen et al., 2022) learn complex semantics of language, allowing them to be effective in tasks such as text summarization and question answering. Code LLMs such as Codex (Chen et al., 2021), AlphaCode (Li et al., 2022), InCoder (Fried et al., 2022) show state of the art performance on a suite of code intelligence tasks. Finally, models such as CLIP (Radford et al.,
In this work, we adopt insights from Program- Aided Language models (Gao et al., 2022) - a setup where LLMs generate programs as the intermedi- ate reasoning steps, but offload the solution step to a runtime such as a python interpreter. We lever- age the language modeling capabilities of LLMs in generating semantically meaningful visualization goals, and their code writing capabilities in gener- ating visualization code which is compiled to yield visualizations. These visualizations (images) are then used as input to image generation models in generating stylized infographics. 2.2 Automated Visualization (AUTOVIZ) Extant AUTOVIZ research have explored multiple approaches such as heuristics, task decomposition or learning based approaches. Heuristics-based ap- proaches explore properties of data in generating a search space of potential visualizations (Wong- suphasawat et al., 2017), ranking these visualiza- tions based on quality attributes (Luo et al., 2018; Moritz et al., 2018) and presenting them to the user. For example, DeepEye (Luo et al., 2018) enumerates all possible visualizations and classi- fies/ranks them as “good” or “bad” using a binary decision tree classifier while Voyager (Wongsupha- sawat et al., 2017) uses heuristics to enumerate the space of visualizations. However, heuristics can be tedious to maintain, may have poor coverage of the visualization space and does not leverage information encoded in existing datasets. More recent work has explored a task decomposition approach where the AUTOVIZ process is decom- posed into multiple tasks that are solved individu- ally via specialized tools and aggregated to yield visualizations (Narechania et al., 2020; Chen et al., 2022; Wang et al., 2022b). For example NL4DV (Narechania et al., 2020) implements a custom query engine that parses natural language queries, identifies attributes/tasks and generates Vega-Lite specifications. A limitation of task decomposition approaches is that they are bottlenecked by the implementation performance for each step (e.g., limitations with models for disambiguating natural language queries as seen in NL4DV (Narechania et al., 2020)). Finally, end-to-end learning-based approaches seek to automatically learn mappings from data directly to generated visualizations. For example, Data2Vis (Dibia and Demiralp, 2019) (the most relevant work to this study) uses a se- quence to sequence model that implicitly addresses AUTOVIZ subtasks by learning a mapping from raw JSON data sampled from datasets to Vega-Lite (Satyanarayan et al., 2017) specifications. Some limitations of current learning approaches is that they are limited to a single grammar, require cus- tom models, custom paired training data and train- ing objectives (Dibia and Demiralp, 2019; Luo et al., 2018; Chen et al., 2022) for each supported grammar, and do not provide a path to generating infographics. Furthermore, they do not provide mechanisms for fine-grained control of visualiza- tion output or provide robust error detection and recovery strategies. LIDA addresses these limitations in several ways: (i) Leverages patterns learned by LLMs from mas- sive language and code dataset, applying this knowledge to subtasks. (ii) Provides a single gram- mar-agnostic pipeline that generates visualization in multiple programming languages and visualiza- tion grammars. (iii) Supports natural language based control of generated visualizations. 
(iv) lever- age emergent capabilities of large language models such chain of thought reasoning to improve reliabil- ity of generated text/code (Kojima et al., 2022; Wei et al., 2022; Shi et al., 2022a), model calibration (Kadavath et al., 2022) (predictions on correctness probabilities of visualizations) as well as self-con- sistency (Wang et al., 2022a) in ranking/filtering results. (v) provides a mechanism for generating infographics that are data-faithful and aesthetically pleasing. (vi) supports a fully automatic mode where an LLM is used to discover meaningful goal- s/hypotheses (fields to visualize, questions to ask) or a semi automatic mode where the user provides a hypothesis and it generates a visualization. By choosing to cast visualization/infographic gen- eration as generation tasks that offloads core prob- lem solving to LLMs and IGMs, LIDA simplifies the design and maintenance of such systems. 2.3 Infographics Generation Infographics (information graphics) are visual arti- facts that seek to convey complex data-driven nar- ratives using visual imagery and embellishments (Harrison et al., 2015). Existing research has shown that infographics are aesthetically pleasing, engag- ing and more memorable (Tyagi et al., 2021; Harri- son et al., 2015; Haroz et al., 2015), at no additional cost to the user (Haroz et al., 2015). These prop- erties have driven their applications in domains like fashion, advertisemnt, business and general communications. However, the creation of info- graphics that convey data insights can be a tedious process for content creators, often requiring skills across multiple tools and domains. Research on infographic generation have mainly explored the creation of pictographs (Haroz et al., 2015) - replac- ing the marks on traditional charts with generated images and learning to extract/transfer styles from existing pictographs (Shi et al., 2022b). In this work, we extend this domain to exploring the gener- ation of both visual marks as well as generating the entire infographic based on natural language style descriptions using large image generation models such as DALLE (Ramesh et al., 2022, 2021) and Latent Diffusion (Rombach et al., 2022). This ap- proach also enables user-generated visual styles and personalization of visualizations to fit user pref- erences such as color palettes, visual styles, fonts etc. 3 The LIDA System LIDA comprises of 4 core modules - a SUMMA- RIZER, a GOAL EXPLORER, a VISGENERATOR and an INFOGRAPHER (see Fig 1). Each module is implemented in the LIDA github repo as a python li- brary with an optional user interface (see Appendix A). 3.1 SUMMARIZER Figure 3: The SUMMARIZER module constructs a NL summary from extracted data properties (atomic types, field statistics) and an optional LLM enrichment (pre- dicted field descriptions, semantic types). LLMs are capable zero shot predictors, able to solve multiple tasks with little or no guiding examples. However, they can suffer from hallucination e.g., generating text that is not grounded in training data Atomic type, field statistics, samples ..LLM / User Enrichment (description, semantic type){"":"cars.json","":"cars.json","dataset_description":"A dataset containing information about cars.","":[{"":"Name","properties":{"":"string","":["amc concord dl","amc ambassador dpl","plymouth cricket"], "" : 311, "": "", "":"The make and model of the car."}} ...namefile_namefieldscolumndtypesamplesnum_unique_valuessemantic_typecar_modeldescriptionStage 1Stage 2Cars.csv or the current task. 
One way to address this is to augment (Mialon et al., 2023) the LLM with ground- ing context. Thus, the goal of the summarizer is to produce an information dense but compact 3 sum- mary for a given dataset that is useful as grounding context for visualization tasks. A useful context is defined as one that contains information an ana- lyst would need to understand the dataset and the tasks that can be performed on it. The summary is implemented in two stages (see Fig 3) Stage 1 - Base summary generation: We ap- ply rules in extracting dataset properties includ- ing atomic types (e.g., integer, string, boolean) us- ing the pandas library (McKinney, 2010), general statistics (min, max, # unique values) and a random non-null list of n samples for each column. Stage 2 - Summary enrichment: The base sum- mary is optionally enriched by an LLM or a user via the LIDA ui to include semantic description of the dataset (e.g., a dataset on the technical specifi- cation of cars), and fields (e.g., miles per gallon for each car) as well as field semantic type prediction (Zhang et al., 2019). 3.2 GOAL EXPLORER This module generates data exploration goals, given a summary generated by the SUMMARIZER. We express goal generation as a multitask genera- tion problem where the LLM must generate a ques- tion (hypothesis), a visualization that addresses the question and rationale (see Fig 4). We find that requiring the LLM to produce a rationale leads to more semantically meaningful goals. Figure 4: A goal generated by LIDA is a JSON data structure that contains a question, a visualization and a rationale. 3.2.1 VISGENERATOR The VISGENERATOR generates visualization speci- fications and is comprised of 3 submodules - a code scaffold constructor, a code generator and a code executor. Code scaffold constructor: Implements a library of code scaffolds that correspond to programming 3Note: the summary must be compact in order to maximize the limited context token budget of LLMs. Figure 5: The VISGENERATOR module constructs vi- sualization code scaffolds, fills a constrained section (< stub >) and executes the scaffold. languages and visualization grammars e.g., python scaffolds support grammars such as Matplotlib, GGPlot, Plotly, Altair, Seaborn, and Bokeh. Each scaffold is an executable program that i.) imports relevant dependencies ii.) defines an empty func- tion stub which returns a visualization specification (see Fig 5a). Code generator: Takes a scaffold, a dataset sum- mary, a visualization goal, and builds a prompt. An LLM (applied in fill-in-the-middle mode (Bavarian et al., 2022)) is then used to generate n candidate visualization code specifications. Code executor: Post-processes and executes4 the code specifications as well as filters the results. LIDA implements several filtering mechanisms to detect errors, each with latency tradeoffs: (i) gener- ates a large sample for n with high temperature, dis- card candidates that do not compile. (ii) apply self consistency (Wang et al., 2022a) in LLMs where multiple candidates are generated and the solution with the highest consensus is selected. (iii) gener- ate correctness probabilities (Kadavath et al., 2022) for all candidates and selects the one with the high- est probability. Note that the last two approaches are computationally expensive (require multiple forward passes through an LLM) and are not suit- able for real time applications. The final output is a list of visualization specifications (code) and associated raster images. 
3.2.2 VIZOPS - Operations on Generated Visualizations
Given that LIDA represents visualizations as code, the VISGENERATOR also implements submodules to perform operations on this representation.
Natural language based visualization refinement: Provides a conversational API to iteratively refine generated code (e.g., translate chart to Hindi ... zoom in by 50%, etc.), which can then be executed to generate new visualizations.
Visualization explanations and accessibility: Generates natural language explanations (valuable for debugging and sensemaking) as well as accessibility descriptions (valuable for supporting users with visual impairments).
Visualization code self-evaluation and repair: Applies an LLM to self-evaluate generated code on multiple dimensions (see Section 4.1.2).
Visualization recommendation: Given some context (goals, or an existing visualization), recommend additional visualizations to the user (e.g., for comparison, or to provide additional perspectives).

3.3 INFOGRAPHER
This module is tasked with generating stylized graphics based on output from the VISGENERATOR module (see Fig 2). It implements a library of visual styles, described in NL, that are applied directly to visualization images. Note that the style library is editable by the user. These styles are applied in generating infographics using the text-conditioned image-to-image generation capabilities of diffusion models (Rombach et al., 2022), implemented using the Peacasso library API (Dibia, 2022). An optional post-processing step is then applied to improve the resulting image (e.g., replacing axes with correct values from the visualization, removing grid lines, and sharpening edges).

3.4 USER INTERFACE
LIDA implements a user interface that communicates with the core modules over a REST and Websocket API. The user interface implements several views.
Data upload and summarization: This view allows the user to upload a dataset and explore a sample of rows in the dataset via a table view. A data upload event triggers a call to the SUMMARIZER and GOAL EXPLORER modules and displays a summary of the dataset and a list of potential goals. This view also allows the user to optionally annotate and refine the generated summary or curate the fields used in the dataset.
Visualization view: This view allows the user to optionally provide a visualization goal in NL (e.g., "what is the fuel efficiency per country?") or select a generated goal, and then displays a generated visualization. For each visualization, intermediate output from the models (underlying data summary, visualization specification, code scaffold) is shown as explanations to aid in sensemaking and debugging (see Fig 9). This view also implements the VIZOPS capabilities described in Section 3.2.2 (e.g., see the interface for visualization evaluation in Fig 10). Note that the NL interface inherits the multilingual language capabilities of the underlying LLM, enabling multilingual NL interaction.
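As an illustration of the natural language refinement operation exposed through VIZOPS and this view, a minimal sketch is shown below; the llm callable and prompt wording are assumptions rather than the LIDA API.

```python
def refine_visualization(llm, code: str, instruction: str) -> str:
    """Ask an LLM to edit existing visualization code according to an NL instruction.

    `llm` is assumed to be any callable mapping a prompt string to generated text,
    e.g. a thin wrapper around a chat-completion endpoint.
    """
    prompt = (
        "You are given a Python visualization function. Modify it according to the\n"
        "instruction and return ONLY the complete, runnable, modified code.\n\n"
        f"CODE:\n{code}\n\nINSTRUCTION:\n{instruction}\n"
    )
    return llm(prompt)

# e.g. new_code = refine_visualization(llm, plot_code, "translate the labels to Hindi and zoom in by 50%")
# The returned code is then re-executed by the code executor to produce the updated chart.
```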
Overall, the combination of these modules results in a system that is able to implicitly address an array of data visualization operations such as data transformation, encoding, mark selection, styling, layout, and annotation (Wang et al., 2022b).

4 Evaluation
4.1 Evaluation Metrics
Our initial evaluation of LIDA focuses on two high-level metrics - visualization error rate (VER), to provide signals on the reliability of the LIDA pipeline, and self-evaluated visualization quality (SEVQ), to assess the quality of generated visualizations.

4.1.1 Visualization Error Rate (VER)
Visualization error rate is computed as the percentage of generated visualizations that result in code compilation errors. This metric provides critical insights into the reliability of the LIDA pipeline and the impact of changes to the system (e.g., prompt engineering or scaffold updates).

VER = (E / T) * 100

where E is the number of generated visualizations with code compilation errors and T is the total number of generated visualizations.

4.1.2 Self-Evaluated Visualization Quality (SEVQ)
Recent work shows that LLMs like GPT-4 encode broad world knowledge (OpenAI, 2023), can assess the quality of their output (Kadavath et al., 2022; Lin et al., 2022), and can approximate human judgements for tasks such as summarization (Liu et al., 2023). Our observations applying GPT-3.5/GPT-4 to visualization tasks suggest similar results. Specifically, GPT-4 has learned to encode some visualization best practices and can apply these in generating critiques of visualization code across multiple dimensions. Thus, to evaluate visualization quality, we compute an SEVQ metric by applying GPT-4 to assess the quality of generated visualizations. Specifically, we task GPT-4 with scoring generated visualization code (a numeric value from 1-10 and a rationale) across 6 dimensions - code accuracy, data transformation, goal compliance, visualization type, data encoding, and aesthetics. These dimensions are informed by existing literature on visualization generation/recommendation, e.g., Wang et al. (2022b) outline 6 visualization tasks including data transformation, encoding, marks, styling, layout and annotation, while Moritz et al. (2018) codify constraints for visualization quality across expressivity (does it convey the facts of the data) and effectiveness (is the information more readily perceived compared to other visualizations) criteria. Additional details on the prompts used for each dimension are provided in Appendix B.

4.2 Evaluation Benchmark Settings
Our initial benchmark is based on 57 datasets sourced from the vega-datasets repository (https://github.com/vega/vega-datasets). For each dataset, LIDA is tasked with generating 5 goals and 1 visualization per goal across multiple grammars. LIDA is given a single try for each step; in theory, the error rates can be driven to zero by recursively applying the visualization self-evaluation and self-repair modules. For reproducibility, we set temperature = 0 and number of samples n = 1 for the LLM. A gallery of the generated evaluation visualizations can be viewed on the LIDA project page.

4.3 Evaluation and Ablation Study Results
Figure 6: Results from an ablation study on the impact of data summarization strategies on the visualization error rate (VER) metric (GPT-3.5, n = 2280, Matplotlib and Seaborn).
Overall, we find that LIDA is able to generate visualizations with a low error rate (VER = 3.5%).
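Both metrics reduce to a few lines of code, sketched below; the counts and dimension scores in the example are hypothetical, with the six SEVQ dimensions taken from the list above.

```python
def visualization_error_rate(num_errors: int, num_total: int) -> float:
    """VER = E / T * 100: percentage of generated visualizations whose code fails to compile."""
    return 100.0 * num_errors / num_total

def sevq(dimension_scores: dict) -> float:
    """SEVQ: average of the six 1-10 judge scores across the dimensions listed above."""
    return sum(dimension_scores.values()) / len(dimension_scores)

print(visualization_error_rate(7, 200))   # hypothetical counts -> 3.5
print(sevq({"code_accuracy": 9, "data_transformation": 8, "goal_compliance": 9,
            "visualization_type": 7, "data_encoding": 8, "aesthetics": 6}))  # -> ~7.83
```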
We also conduct an ablation study to assess the impact of the SUMMARIZER across the following conditions - (i) no_enrich: a base summary with no enrichment (see Section 3.1), (ii) enrich: summary with LLM enrichment, (iii) schema: only field names, i.e., the schema as summary, and (iv) no_summary: no summary. Results show that including a summary leads to a reduced error rate compared to simply using field names (schema) as the summary. We also find that enriching the base summary with an LLM has less of an effect on VER (with variations across visualization grammars), and that an expressive, well-represented grammar like Seaborn has a lower VER. These results are summarized in Figure 6. We also find that the SEVQ metric is valuable in identifying semantic quality issues with generated visualizations. For example, Fig 10 shows an example where the user has requested a pie chart, and the LIDA self-evaluation module critiques this visualization using the SEVQ metric, providing a rationale for why a bar chart is more effective (see Fig 10), with the option to automatically repair the visualization.

5 Conclusion
In this work, we formulate visualization generation as a multi-stage text (and code) generation problem that can be addressed using large language models. We present LIDA - a tool for the automatic generation of grammar-agnostic visualizations and infographics. LIDA addresses limitations of current automatic visualization systems - automatic generation of hypotheses/goals given datasets, a conversational interface for controllable visualization generation and refinement, support for multiple visualization grammars using the same pipeline, and the ability to generate infographics. LIDA is effective compared to state-of-the-art systems (see the example gallery of generated visualizations); it offers a simplified system implementation and leverages the immense language modeling and code generation capabilities of LLMs in implicitly solving complex visualization subtasks. Finally, we introduce metrics for assessing reliability (visualization error rate - VER) and visualization quality (self-evaluated visualization quality - SEVQ) for LLM-enabled visualization tools. We hope the modules implemented in LIDA will serve as useful building blocks in enabling complex creative workflows such as visualization translation, chart question answering (with applications in the accessibility of charts), automated data exploration and automated storytelling.

6 Limitations
While LIDA demonstrates clear advances in how we can support users in authoring visualizations and infographics, there are several limitations that offer a natural avenue for future research.
Low Resource Grammars: The problem formulation introduced in LIDA depends on the underlying LLMs having some knowledge of visualization grammars as represented in text and code in their training dataset (e.g., examples of Altair, Vega, Vega-Lite, GGPlot, Matplotlib represented in GitHub, Stack Overflow, etc.).
For visualization grammars not well represented in these datasets (e.g., tools like Tableau, PowerBI, etc., that have graphical user interfaces as opposed to code representations), the performance of LIDA may be limited without additional model fine-tuning or translation. Furthermore, performance may be limited for complex tasks (e.g., tasks requiring complex data transformations) beyond the expressive capabilities of specific grammars. Further research is needed to i.) study the effects of strategies like task disambiguation and ii.) assess the impact of task complexity and choice of programming language/grammar on performance.
Deployment and Latency: Large language models (e.g., GPT-3.5 used in this work) are computationally expensive and require significant compute resources to deploy at low latency. These costs can prove impractical for real-world applications. In addition, the current setup includes a code execution step which is valuable for verification but increases deployment complexity (it requires a sandbox). Thus, there is opportunity to i.) train smaller capable LLMs (Touvron et al., 2023) finetuned on a curated dataset of programming languages and visualization grammars, and ii.) design vulnerability mitigation approaches such as limiting program scope or generating only input parameters for visualization grammar compilers.
Explaining System Behavior: The approach discussed in this paper simplifies the design of visualization authoring systems, but also inherits interpretability challenges associated with large language models. While LIDA offers intermediate outputs of the model (e.g., generated code and specifications) as explanations, as well as post-hoc explanations of generated code (see Section 3.2.2), there is a need for further research in explaining system behavior (and when such explanations are needed) and providing actionable feedback to the user.
System Evaluation: Benchmarking LLMs on creativity tasks can be challenging. While the current study introduces metrics for evaluating reliability (VER) and visualization quality (SEVQ) (see Section 4), there is a need for more comprehensive benchmarks on a variety of datasets and visualization grammars. Furthermore, there are research opportunities to i.) study and quantify the capabilities of LLMs in encoding and applying visualization best practices, ii.) conduct empirical studies that evaluate model behavior, mapping out failure cases and proposing mitigations, and iii.) qualitatively study the impact of tools like LIDA on user creativity while authoring visualizations.

Acknowledgements
This manuscript has benefited from comments and discussions with members of the HAX group (Saleema Amershi, Adam Fourney, Gagan Bansal), the VIDA group (Steven Drucker, Dan Marshall), Bongshin Lee, Rick Barraza and others at Microsoft Research.

References
Mohammad Bavarian, Heewoo Jun, Nikolas Tezak, John Schulman, Christine McLeavey, Jerry Tworek, and Mark Chen. 2022. Efficient training of language models to fill in the middle. arXiv preprint arXiv:2207.14255.
Rishi Bommasani, Drew A Hudson, Ehsan Adeli, Russ Altman, Simran Arora, Sydney von Arx, Michael S Bernstein, Jeannette Bohg, Antoine Bosselut, Emma Brunskill, et al. 2021. On the opportunities and risks of foundation models. arXiv preprint arXiv:2108.07258.
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners.
Advances in neural information processing systems, 33:1877–1901. Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared Ka- plan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, et al. 2021. Evaluating large language models trained on code. arXiv preprint arXiv:2107.03374. Qiaochu Chen, Shankara Pailoor, Celeste Barnaby, Abby Criswell, Chenglong Wang, Greg Durrett, and I¸sil Dillig. 2022. Type-directed synthesis of vi- sualizations from natural language queries. Pro- ceedings of the ACM on Programming Languages, 6(OOPSLA2):532–559. Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. 2022. Palm: Scaling language modeling with pathways. arXiv preprint arXiv:2204.02311. Aaron Daniel Cohen, Adam Roberts, Alejandra Molina, Alena Butryna, Alicia Jin, Apoorv Kulshreshtha, Ben Hutchinson, Ben Zevenbergen, Blaise Hilary Aguera-Arcas, Chung ching Chang, Claire Cui, Cosmo Du, Daniel De Freitas Adiwardana, De- hao Chen, Dmitry (Dima) Lepikhin, Ed H. Chi, Erin Hoffman-John, Heng-Tze Cheng, Hongrae Lee, Igor Krivokon, James Qin, Jamie Hall, Joe Fen- ton, Johnny Soraker, Kathy Meier-Hellstern, Kris- ten Olson, Lora Mois Aroyo, Maarten Paul Bosma, Marc Joseph Pickett, Marcelo Amorim Menegali, Marian Croak, Mark Díaz, Matthew Lamm, Maxim Krikun, Meredith Ringel Morris, Noam Shazeer, Quoc V. Le, Rachel Bernstein, Ravi Rajakumar, Ray Kurzweil, Romal Thoppilan, Steven Zheng, Taylor Bos, Toju Duke, Tulsee Doshi, Vincent Y. Zhao, Vinodkumar Prabhakaran, Will Rusch, YaGuang Li, Yanping Huang, Yanqi Zhou, Yuanzhong Xu, and Zhifeng Chen. 2022. Lamda: Language models for dialog applications. In arXiv. Victor Dibia. 2022. Interaction design for systems that integrate image generation models: A case study with peacasso. Victor Dibia and Ça˘gatay Demiralp. 2019. Data2vis: Automatic generation of data visualizations us- ing sequence-to-sequence recurrent neural networks. IEEE computer graphics and applications, 39(5):33– 46. Daniel Fried, Armen Aghajanyan, Jessy Lin, Sida Wang, Eric Wallace, Freda Shi, Ruiqi Zhong, Wen-tau Yih, Luke Zettlemoyer, and Mike Lewis. 2022. Incoder: A generative model for code infilling and synthesis. arXiv preprint arXiv:2204.05999. Luyu Gao, Aman Madaan, Shuyan Zhou, Uri Alon, Pengfei Liu, Yiming Yang, Jamie Callan, and Gra- ham Neubig. 2022. Pal: Program-aided language models. arXiv preprint arXiv:2211.10435. Steve Haroz, Robert Kosara, and Steven L Franconeri. 2015. Isotype visualization: Working memory, per- formance, and engagement with pictographs. In Pro- ceedings of the 33rd annual ACM conference on hu- man factors in computing systems, pages 1191–1200. Lane Harrison, Katharina Reinecke, and Remco Chang. 2015. Infographic aesthetics: Designing for the first impression. In Proceedings of the 33rd Annual ACM conference on human factors in computing systems, pages 1187–1190. (mostly) know what they know. arXiv:2207.05221. arXiv preprint Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yu- taka Matsuo, and Yusuke Iwasawa. 2022. Large lan- guage models are zero-shot reasoners. arXiv preprint arXiv:2205.11916. Yujia Li, David Choi, Junyoung Chung, Nate Kushman, Julian Schrittwieser, Rémi Leblond, Tom Eccles, James Keeling, Felix Gimeno, Agustin Dal Lago, et al. 2022. Competition-level code generation with alphacode. arXiv preprint arXiv:2203.07814. Stephanie Lin, Jacob Hilton, and Owain Evans. 2022. 
Teaching models to express their uncertainty in words. arXiv preprint arXiv:2205.14334. Yang Liu, Dan Iter, Yichong Xu, Shuohang Wang, Ruochen Xu, and Chenguang Zhu. 2023. Gpteval: Nlg evaluation using gpt-4 with better human align- ment. arXiv preprint arXiv:2303.16634. Yuyu Luo, Xuedi Qin, Nan Tang, Guoliang Li, and Xinran Wang. 2018. Deepeye: Creating good data visualizations by keyword search. In Proceedings of the 2018 International Conference on Management of Data, SIGMOD, pages 1733–1736. Wes McKinney. 2010. Data structures for statistical In Proceedings of the 9th computing in python. Python in Science Conference, pages 51 – 56. Grégoire Mialon, Roberto Dessì, Maria Lomeli, Christo- foros Nalmpantis, Ram Pasunuru, Roberta Raileanu, Baptiste Rozière, Timo Schick, Jane Dwivedi-Yu, Asli Celikyilmaz, et al. 2023. Augmented language models: a survey. arXiv preprint arXiv:2302.07842. Rishab Mitra, Arpit Narechania, Alex Endert, and John Stasko. 2022. Facilitating conversational interaction in natural language interfaces for visualization. In 2022 IEEE Visualization and Visual Analytics (VIS), pages 6–10. IEEE. Dominik Moritz, Chenglong Wang, Greg L Nelson, Halden Lin, Adam M Smith, Bill Howe, and Jef- frey Heer. 2018. Formalizing visualization design knowledge as constraints: Actionable and extensible models in draco. IEEE transactions on visualization and computer graphics, 25(1):438–448. Arpit Narechania, Arjun Srinivasan, and John Stasko. 2020. Nl4dv: A toolkit for generating analytic speci- fications for data visualization from natural language IEEE Transactions on Visualization and queries. Computer Graphics, 27(2):369–379. OpenAI. 2023. Gpt-4 technical report. Saurav Kadavath, Tom Conerly, Amanda Askell, Tom Henighan, Dawn Drain, Ethan Perez, Nicholas Schiefer, Zac Hatfield Dodds, Nova DasSarma, Eli Tran-Johnson, et al. 2022. Language models Luca Podo, Bardh Prenkaj, and Paola Velardi. 2023. Ma- chine learning for visualization recommendation sys- tems: Open challenges and future directions. arXiv preprint arXiv:2302.00569. Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sas- try, Amanda Askell, Pamela Mishkin, Jack Clark, et al. 2021. Learning transferable visual models from natural language supervision. In International confer- ence on machine learning, pages 8748–8763. PMLR. Aditya Ramesh, Prafulla Dhariwal, Alex Nichol, Casey Chu, and Mark Chen. 2022. Hierarchical text- conditional image generation with clip latents. arXiv preprint arXiv:2204.06125. Aditya Ramesh, Mikhail Pavlov, Gabriel Goh, Scott Gray, Chelsea Voss, Alec Radford, Mark Chen, and Ilya Sutskever. 2021. Zero-shot text-to-image gen- In International Conference on Machine eration. Learning, pages 8821–8831. PMLR. Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. 2022. High- resolution image synthesis with latent diffusion mod- In CVF Conference on Computer els. 2022 ieee. Vision and Pattern Recognition (CVPR), pages 10674– 10685. Arvind Satyanarayan, Dominik Moritz, Kanit Wong- suphasawat, and Jeffrey Heer. 2017. Vega-lite: A grammar of interactive graphics. IEEE TVCG (Proc. InfoVis). Freda Shi, Mirac Suzgun, Markus Freitag, Xuezhi Wang, Suraj Srivats, Soroush Vosoughi, Hyung Won Chung, Yi Tay, Sebastian Ruder, Denny Zhou, et al. 2022a. Language models are multilingual chain-of-thought reasoners. arXiv preprint arXiv:2210.03057. Yang Shi, Pei Liu, Siji Chen, Mengdi Sun, and Nan Cao. 2022b. 
Supporting expressive and faithful pic- torial visualization design with visual style transfer. IEEE Transactions on Visualization and Computer Graphics, 29(1):236–246. Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, Aurelien Rodriguez, Armand Joulin, Edouard Grave, and Guillaume Lample. 2023. Llama: Open and efficient foundation language models. Anjul Tyagi, Jian Zhao, Pushkar Patel, Swasti Khu- rana, and Klaus Mueller. 2021. User-centric semi- automated infographics authoring and recommenda- tion. arXiv preprint arXiv:2108.11914. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. Advances in neural information processing systems, 30. Yun Wang, Zhitao Hou, Leixian Shen, Tongshuang Wu, Jiaqi Wang, He Huang, Haidong Zhang, and Dong- mei Zhang. 2022b. Towards natural language-based visualization authoring. IEEE Transactions on Visu- alization and Computer Graphics, 29(1):1222–1232. Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Ed Chi, Quoc Le, and Denny Zhou. 2022. Chain of thought prompting elicits reasoning in large language models. arXiv preprint arXiv:2201.11903. Kanit Wongsuphasawat, Zening Qu, Dominik Moritz, Riley Chang, Felix Ouk, Anushka Anand, Jock Mackinlay, Bill Howe, and Jeffrey Heer. 2017. Voy- ager 2: Augmenting visual analysis with partial view specifications. In ACM CHI. Dan Zhang, Yoshihiko Suhara, Jinfeng Li, Madelon Hulsebos, Ça˘gatay Demiralp, and Wang-Chiew Tan. 2019. Sato: Contextual semantic type detection in tables. arXiv preprint arXiv:1911.06311. Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen, Christopher De- wan, Mona Diab, Xian Li, Xi Victoria Lin, et al. 2022. Opt: Open pre-trained transformer language models. arXiv preprint arXiv:2205.01068. A The LIDA Library LIDA is implemented as a python library with mod- ules for each of the components described in Sec- tion 3. The library is available on github7 and can be installed using pip - pip install lida. The library provides a python api, web api for integration into other applications, and a command line interface. It also provides a web-based user interface for users to interact with LIDA (Fig 10, 9). Figure 7: Example usage of LIDA shows how to generate a summary, visualization goals, code specifications and execute the code to generate visualizations. B Self-Evaluated Visualization Quality (SEVQ) Prompts Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le, Ed Chi, and Denny Zhou. 2022a. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:2203.11171. For the SEVQ metric, we use GPT-4 to assess visu- alization quality by scoring generated visualization 7https://github.com/microsoft/lida 1 2 3 4 5 6 7 8 9 10 11# pip install lida from lida.modules Manager lida = () summary = lida.() goals = lida.(summary, n=) vis_specs = manager.( summary=summary, goal=goals[i]) charts = manager.( code_specs=vis_specs, data=manager.data, summary=summary) (charts)importManagersummarizegenerate_goalsgenerate_vizexecute_vizprint"data/cars.csv"1 Figure 8: In the data upload section of the LIDA UI, users can select a grammar of choice and upload a dataset. A dataset upload event triggers a goal generation as well as visualization generation tasks. Figure 9: The visualization generation section of the LIDA UI enables the user to i.) 
specify their overall goal in natural language and generate visualizations ii.) inspect, edit and execute generated code iii.) view the generated visualization. iv.) perform operations on generated code e.g., refine, explain, evaluate and recommend visualizations. Figure 10: The self-evaluation module in LIDA is used to evaluate/critique a generated visualization, providing scores across 6 dimensions with rationale. In this case, the visualization contains a pie chart, and a bar chart is recommended as an alternative. code across the 6 task dimensions - code accuracy, data transformation, goal compliance, visualization type, data encoding, and aesthetics. These dimen- sions are implemented as prompts to an LLM 8, which then generates a score between 1-10 for each dimension. The final SEVQ score is the average of the 6 scores. A sketch of the prompts used for each dimension are enumerated in table 1. C Design Reflections Building a system that leverages foundation models (text and images) involves engineering decisions across a wide design space. In this section, we briefly reflect on some of the design choices we made for LIDA components and the tradeoffs we considered. C.1 Prompt Engineering We explored multiple approaches to building prompts that maximized the probability of the LLM solving each subtask. • SUMMARIZER: We found that improving the richness of the summary (qualitative NL de- scription, including semantic types) was criti- cal to improved quality of generated goals and Dimension Prompt Code accu- racy Does the code contain bugs, logic errors, syntax error or typos? How serious are the bugs? How should it be fixed? Data trans- formation Is the data transformed appropriately for the visualization type? com- Goal pliance How well the code meets the specified visu- alization goals? Visualization type Considering best practices, is the visualiza- tion type appropriate for the data and intent? Is there a visualization type that would be more effective in conveying insights? Data encod- ing Is the data encoded appropriately for the visualization type? Aesthetics Are the aesthetics of the visualization ap- propriate and effective for the visualization type and the data? Table 1: Summary of the evaluation dimensions and the corresponding prompt sketches. visualization code. Implementation wise, we began with a manually crafted summary of the data (see Section 3.1), and then enriched it via calls to an LLM and optional user refinement of the summary. 8Exact prompts can be found at the project repository https://github.com/microsoft/lida. • GOAL EXPLORER: Providing few shot exam- ples in the prompts where fields and rationale C.3 Natural Language Interaction (i) HYBRID INTERFACE: Providing a hybrid in- terface that allows traditional direct manipulation steps in creating visualizations (e.g., selecting which fields to use), paired with a NL interface allows users to leverage existing mental models with traditional visualization tools as well as the NL affordances of LIDA. (ii) NL INTERACTION MODES: Beyond generating a base visualization, we also enable operations on generated visualiza- tion code (e.g., refinement, explanation, evaluation, recommendation). This builds on insights from Mitra et al. (2022) who propose multi-turn dialog interfaces for visualization authoring towards re- solving ambiguities. 
are linked via symbols (e.g., plot a histogram of field X vs Y to show relationship between X and Y) nudges the model to use exact dataset field names, and minimizes the occurrence of hallucinated fields. Prompt engineering also provides mechanisms to bake in visualization best practices e.g. avoid pie charts, apply vi- sualization best practices, Imagine you are a highly experienced visualization specialist and data analyst. • VISGENERATOR: Casting visualization code generation as a fill-in-the-middle problem (as opposed to free-from completion) ensures the model to generates executable code focused on the task. For example, in Fig 5, the model is instructed to generate only the < stub > portion of the code scaffold. We also note that the degrees of freedom alloted to the model (e.g., specifying how much of the scaffold to complete) can influence its ability to add tasks with varied complexity. For example, a scaffold that allows the model generate data preprocessing code (and includes libraries like statsmodels etc) allows the model to address tasks that require steps such as data transfor- mation, sampling and statistical analysis be- fore generating visualizations etc. • Overall, we found that setting a low temper- ature (t = 0; generating the most likely visu- alization) coupled with a per-grammar code scaffold provided the best results in terms of yielding code that correctly compiles into visualization specifications and faithfully ad- dresses the subtask. We also explored prompt formulations that addressed multiple tasks to minimize costs (latency and compute). For example, summary enrichment is a single call where the LLM must generate dataset descrip- tions, field descriptions and semantic types. C.2 Infographic Generation We found that setting a low strength parameter (0.25 < strength < 0.45) for the latent diffusion model (image-to-image mode) and using parsimo- nious style prompts resulted in stylized images that were faithful to the general structure of the origi- nal visualization, minimizing distorted or irrelevant imagery. This sort of controlled generation is nec- essary to avoid the distraction (Haroz et al., 2015) that can arise from superfluous imagery in info- graphics. Figure 11: The LIDA infographer module supports the generation of data-faithful infographics. Each infographic is conditioned on a generated visualization as well as natural language style tags which can be used to customize the appearance of the chart.
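As an illustration of the low-strength image-to-image styling described in C.2, a roughly equivalent pipeline can be assembled with the Hugging Face diffusers library; LIDA itself routes this through the Peacasso API, so the model id, file names, and call below are assumptions.

```python
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

# Load a latent-diffusion image-to-image pipeline (model id is an assumption).
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

chart = Image.open("barchart.png").convert("RGB")  # raster output of the VISGENERATOR

# A low strength (roughly 0.25-0.45) keeps the chart structure while applying the style tag.
styled = pipe(
    prompt="hand-drawn pastel infographic, minimal embellishment",
    image=chart,
    strength=0.35,
    guidance_scale=7.5,
).images[0]

styled.save("barchart_styled.png")
```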
BENCHMARKING RETRIEVAL-AUGMENTED LARGE LANGUAGE MODELS IN BIOMEDICAL NLP: APPLICATION, ROBUSTNESS, AND SELF-AWARENESS

Mingchen Li, Zaifu Zhan, Han Yang, Yongkang Xiao, Jiatan Huang, Rui Zhang
University of Minnesota Twin Cities
{li003378,zhan8023,yang8597,xiao0290,huan2460,zhan1386}@umn.edu

ABSTRACT
Large language models (LLMs) have demonstrated remarkable capabilities in various biomedical natural language processing (NLP) tasks, leveraging demonstrations within the input context to adapt to new tasks. However, LLMs are sensitive to the selection of demonstrations. To address the hallucination issue inherent in LLMs, retrieval-augmented LLMs (RALs) offer a solution by retrieving pertinent information from an established database. Nonetheless, existing research lacks a rigorous evaluation of the impact of retrieval-augmented large language models on different biomedical NLP tasks. This deficiency makes it challenging to ascertain the capabilities of RALs within the biomedical domain. Moreover, the outputs of RALs are affected by retrieved knowledge that is unlabeled, counterfactual, or diverse, an issue that is not well studied in the biomedical domain, yet such knowledge is common in the real world. Finally, exploring self-awareness is also crucial for RAL systems. So, in this paper, we systematically investigate the impact of RALs on 5 different biomedical tasks (triple extraction, link prediction, classification, question answering, and natural language inference). We analyze the performance of RALs along four fundamental abilities, including unlabeled robustness, counterfactual robustness, diverse robustness, and negative awareness. To this end, we propose an evaluation framework to assess the RALs' performance on different biomedical NLP tasks and establish four different testbeds based on the aforementioned fundamental abilities. Then, we evaluate 3 representative LLMs with 3 different retrievers on 5 tasks over 9 datasets. The evaluation indicates that while RALs improve performance on most of the biomedical datasets used and demonstrate a degree of counterfactual robustness, they still encounter significant challenges with unlabeled and counterfactual retrieval information, as well as negative awareness.

Lately, significant progress has been made in large language models (LLMs) such as ChatGPT¹. To adapt LLMs to the biomedical domain, several LLMs have been developed, such as MedLLaMA-13B Wu et al. (2023) and Med-PaLM 2 Singhal et al. (2023). Despite demonstrating impressive general capabilities Li & Zhang (2023), these models still face significant challenges, including factual hallucination Ji et al. (2023); Zhang et al. (2023) and the absence of newly uploaded knowledge Ovadia et al. (2023). Retrieval-augmented language models Li & Zhang (2023); Lewis et al. (2020); Li & Huang (2023), in contrast, can retrieve knowledge from an external datastore when needed, potentially reducing hallucination and improving adaptation to new knowledge. The most common method is to use a designed retriever to retrieve the knowledge that is relevant to the input sentence; subsequently, the retrieved knowledge, along with the input sentence, is fed into the LLM to assist in generating the expected output. In the question answering (QA) task, a retrieval-augmented language model can access knowledge from an unlabeled corpus² such as PubMed. The QA format allows the unlabeled corpus to potentially furnish answers to questions. However, for tasks like triple extraction, incorporating an unlabeled corpus may yield adverse effects. Counterfactual information, such as erroneous annotations, is prevalent in labeled corpora, presenting challenges for retrievers in obtaining useful information. Additionally, LLMs may still generate unreliable outputs when the retrieved information is incorrect. Incorporating diverse knowledge holds promise for improving model performance; for example, question answering relies on extracting information from extensive contexts, which could in turn impact information extraction performance. Moreover, the influence of retrieval information from various tasks or datasets on RAL performance remains underexplored. Self-awareness is also crucial for RALs: if RALs can distinguish between positive and negative retrieved knowledge, they have the opportunity to rectify their actions.

¹https://chat.openai.com/
²In this work, corpus refers to the knowledge base that needs to be retrieved.

Figure 1: BIORAB queries different types of corpora to test the awareness ability and generation ability of RALs.

These challenges hinder RALs from consistently producing reliable and accurate responses. Unfortunately, in the biomedical domain, only a few studies, such as Almanac Zakka et al. (2024), have explored RAL performance in QA, leaving a gap in understanding how these factors affect RALs across various biomedical NLP tasks. Consequently, there is a pressing need for a comprehensive evaluation of RALs with different LLMs across biomedical NLP tasks. To this end, this paper conducts a comprehensive evaluation of RALs for different LLMs on 5 biomedical NLP tasks over 9 datasets. Specifically, we create a new RAL benchmark for biomedical NLP, namely BioRAB, as shown in Figure 1, and create 4 testbeds to evaluate the mentioned fundamental abilities.

• Unlabeled Robustness denotes the ability of RALs to extract valuable information from an unlabeled retrieval corpus, especially on label-intensive tasks such as triple extraction and classification. For instance, in tasks like relation extraction, the corpus could be labeled (such as the training dataset) or unlabeled (the training dataset without labels). If the RAL achieves comparable or superior performance by retrieving from the unlabeled dataset compared to retrieving from the labeled dataset, it indicates that labeled databases may not be necessary for RALs. In the testbed of Unlabeled Robustness, the corpus contains instances without labels.
• Counterfactual Robustness denotes whether the RAL can retrieve the right information from a counterfactual corpus; in our work, a counterfactual instance refers to a mislabeled annotation. In the testbed of Counterfactual Robustness, the corpus consists of instances with a certain proportion of incorrect labels.
• Diverse Robustness evaluates whether RALs can achieve better performance by integrating information from multiple tasks. For instance, the corpus for the classification task is sourced from relation extraction and question-answering tasks. In the testbed of Diverse Robustness, the corpus comprises instances from various tasks.
• Negative Awareness refers to the RAL's ability to discern whether retrieved knowledge positively or negatively impacts the final output. In the testbed of Negative Awareness, the corpus comprises instances that are 100% counterfactual.

Utilizing BioRAB, we evaluate its performance across 5 tasks (triple extraction, link prediction, text classification, question answering, and natural language inference) using 9 biomedical NLP datasets. Furthermore, BioRAB undergoes evaluation with three widely used LLMs, LLaMA2-13B, MedLLaMA-13B, and LLaMA3-8B, utilizing three commonly employed retrievers (BM25, Contriever, and MedCPT). We observed that although RALs can enhance response accuracy in the majority of the biomedical NLP tasks we evaluated, they encounter notable challenges. Particularly in the question-answering task, we noted that RALs did not yield significant improvements on the datasets we used. We speculate that this could be attributed to the limitations of the corpus used for retrieval, as the training dataset (the corpus we used for retrieval) may not provide sufficient information compared to using Wikipedia or PubMed. Moreover, RALs struggle to generate the desired output when the corpus is unlabeled, compared to retrieving from a labeled corpus. An interesting finding is that on datasets like ChemProt and Hetionet, RALs exhibit improved performance with unlabeled corpora compared to the source LLM. Besides, RALs lack the capability to extract useful information from counterfactual corpora and struggle to discern the most relevant information. We also find that this is not always the case: on some datasets, such as ADE and Hetionet, the RAL could handle counterfactual instances. Additionally, when presented with a diverse labeled corpus, RALs do not achieve optimal performance across tasks, except for the natural language inference task. Finally, we found that despite being provided counterfactual examples, the LLM was still able to generate correct outputs in some instances. However, RALs struggle with self-awareness, as they lack the ability to determine which examples could help improve model performance. These experimental results underscore the need to further address these issues for RALs. Our contributions are the following:

• We propose four abilities essential for evaluating retrieval-augmented large language models in the biomedical domain and introduce a benchmark called BIORAB to assess these capabilities. To our knowledge, this is the first benchmark tailored specifically to evaluate these four abilities for RALs in the biomedical domain.
• We evaluate LLMs with the retrieval-augmented method and identify limitations in four key abilities.
• We evaluate the RAL on 5 different biomedical tasks over 9 datasets by using 3 LLMs with 3 retrievers.

1 RELATED WORK
1.1 RETRIEVAL-AUGMENTED LANGUAGE MODELS (RALMS)
Many studies Li & Zhang (2023); Lewis et al. (2020); Guu et al. (2020); Ram et al. (2023); Li et al. (2024a;b) have proposed using retrieved information from various knowledge stores to better understand the text or generate the expected output. For example, KIEST Li & Huang (2023) dynamically injects retrieved entity and attribute knowledge from the knowledge graph when generating the entity or attribute in the task of entity stage changes. Lewis et al. (2020) use maximum inner product search (MIPS) to find the top-K documents, which are combined with a query to predict the answers. To enhance retrieval capability, BiomedRAG Li et al. (2024a) proposes a learned retriever to retrieve chunk information from the constructed database and improve model performance, while CTLP Li et al. (2024b) aims to create a condensed transition graph to improve link prediction performance; the sample paths between two entities are retrieved to construct the condensed transition graph. RT Li & Zhang (2023) employs a chain of thought and retrieves pertinent labeled sentences to enhance few-shot biomedical named entity recognition (NER) tasks. However, no prior work systematically evaluates the effectiveness of retrieval-augmented LLMs on different biomedical NLP tasks.

1.2 EVALUATION OF RAL
Evaluating RALs has received significant attention due to their remarkable general capability. It enables researchers to gain a deeper understanding of the limitations and abilities of LLMs. However, there are few studies Zakka et al. (2024); Xiong et al. (2024) focusing on the evaluation of RAGs in the biomedical domain, and they are primarily centered around question answering tasks. For example, Xiong et al. (2024) evaluated RAG models on five biomedical QA datasets using the zero-shot setting. Almanac Zakka et al. (2024) evaluates ChatGPT with one retriever and one QA dataset. In contrast to the current work, we offer a broader evaluation across four testbeds and 5 tasks spanning 9 datasets.

Figure 2: Overview of the four testbeds in BIORAB. n refers to the specific dataset for each task, such as ade-corpus-v2 (text classification) and PHarmKG (link prediction). In (d), the corpus of n refers to the set that includes the task datasets but excludes the training set of n. In (e), to distinguish between "Output" and "True/False", the "Output" is defined as the expected output for different tasks; for example, in the triple extraction task, the output is the triple. "True/False" refers to whether "the retrieved example is a negative example or the retrieved example is not a negative example." In our work, the corpus of n refers to the training set of n.

2 BIORAB: BIOMEDICAL RETRIEVAL-AUGMENTED GENERATION BENCHMARK
In this section, we begin by outlining the operational flow of RALs. Following this, we introduce the proposed four abilities and the construction of the four relevant testbeds. Finally, we introduce the evaluation metrics employed to assess performance.

2.1 RAL WORKING FLOW
To address the hallucination problem, a RAL retrieves external knowledge from a corpus to improve LLM performance. Generally, as shown in Figure 2(a), the retrieval corpus needs to be constructed first. In numerous question-answering RAL models, the corpus primarily originates from unlabeled open sources such as PubMed or textbooks. However, for some label-sensitive tasks, such as triple extraction, an unlabeled open source may be invalid. In our work, the corpus is defined as the training set of the relevant task. For instance, as illustrated in Figure 2(a), if "n" denotes PHarmKG, each key corresponds to a sentence what is the relationship between the head entity and tail entity? in its training set, while the corresponding value denotes the relevant label (the relationship) for that key. In the second step, the retriever is used to obtain the relevant (key, value) pairs from the corpus based on the input sentence. Finally, the retrieved (key, value) pairs, together with the input sentence, are fed into the LLM to generate the expected output.
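A minimal sketch of this three-step flow is shown below; the llm and retriever callables and helper names are hypothetical stand-ins for the LLMs (e.g., LLaMA2-13B) and retrievers (e.g., BM25, MedCPT) evaluated in this work.

```python
def build_prompt(instruction, retrieved_pairs, query):
    """Assemble the RAL input: instruction + retrieved (key, value) examples + query."""
    examples = "\n".join(f"context: {k}\nresponse: {v}" for k, v in retrieved_pairs)
    return f"{instruction}\nExamples:\n{examples}\ncontext: {query}\nresponse:"

def rag_generate(llm, retriever, corpus, instruction, query, k=1):
    """Step 2: retrieve the top-k (key, value) pairs; Step 3: condition the LLM on them."""
    retrieved_pairs = retriever(query, corpus, k)
    return llm(build_prompt(instruction, retrieved_pairs, query))

# Step 1 (corpus construction): corpus = [(training sentence, label), ...] for task "n".
```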
For each instance X of each "n", there are three components: Instruction I, context C, and response R. For example, in the training dataset of ade-corpus-v2 (classification task), if the label of a sentence S: She had been administered tacrolimus for prophylaxis of graft-versus-host reaction is False in the X, I =You are an excellent linguist. The task is to predict whether this sentence is True or False. Examples: context: The hemangioma regressed markedly 6 weeks after the procedure and serous retinal detachment showed marked resolution.response: False, C = S, R = F alse. 2.2 DEFINED FOUR ABILITY OF BIORAL Despite the RAL has achieved considerable success in solving the hallucination problem, in the biomedical domain, the ability of RAL is underexplored. Firstly, not all tasks have vast labeled corpora. While many research endeavors employ the training set as the corpus, they still encounter 4 Input sentenceLLMRetrieverKeyValueOuput...Task data n Corpus of n:Input sentenceLLMRetrieverKeyOuput...Task data n Corpus of n:Input sentenceLLMRetrieverOuputTask data n KeyValue...Corpus of n:Input sentenceLLMRetrieverOuputTask data n KeyValueCorpus of 1...KeyValueCorpus of (n-1)......Input sentenceLLMRetrieverTask data n KeyValue...Corpus of n:OuputTrue/Falseb) Unlabeled Robustnessc) Counterfactual Robustnessd) Diversity Robustnesse)Negative Awarenessa) RAL Corpus of n: limitations when contrasted with larger corpora. If a RAL can achieve similar performance to the RAL that utilizes labeled corpus, it would demonstrate the former’s ability to operate effectively without relying on labeled data. For another, the RAL may easily be misled by incorrectly labeled information (as shown in Figure 2(c)). Furthermore, RALs may possess the capability to obtain useful information from labeled corpora of other tasks (as shown in Figure 2(d)). However, retrieving knowledge from labeled corpora of other tasks may introduce noise and potentially mislead the generation process. Finally, when the retriever retrieves mislabeled (or counterfactual) information, the RAL may possess the ability to discern that the retrieved knowledge is not conducive to output generation (as shown in Figure 2(e)). To this end, we built the Biomedical Retrieval-Augmented Generation Benchmark (BIoRAB) to evaluate the ability of RAL in the biomedical domain, and we proposed 4 testbeds to test these abilities. In the next, we will detail these four abilities and how to construct the testbeds. 2.2.1 UNLABELED ROBUSTNESS (UR) Not all tasks have vast labeled retrieval corpus, therefore, for each task, the retriever must gather information from unlabeled corpora, while the RAL may still have the ability to generate the expected results. To evaluate the efficacy of RAL in this regard, we introduce our proposed UR testbed. Specifically, as shown in Figure 2(b), the corpus of "n" is defined as the training set without value(label) for the "n". The retriever retrieves the relevant information from this unlabeled corpus. After that, the retrieved Key with the input sentence is fed into the LLM. For example, in the training dataset of ade-corpus-v2 (classification task), if the label of a sentence S: She had been administered tacrolimus for prophylaxis of graft-versus-host reaction is False In the X, I =You are an excellent linguist. 
The task is to predict whether this sentence is True or False, retrieved sentence: A macrophage activation syndrome, possibly related to methotrexate toxicity, developed in a boy with systemic juvenile rheumatoid arthritis, C = S, R = F alse. 2.2.2 COUNTERFACTUAL ROBUSTNESS (CR) Constructing a high-quality annotation corpus is challenging work, as it often involves dealing with incorrect data labeling. In our work, these mislabeled instances are called counterfactual instances. In the condition of the mislabeled corpus, the RAL may have the ability to avoid negative information. To validate the counterfactual robustness, we introduced our CR testbed. Specifically, as shown in Figure 2(c), when constructing the corpus of n, we set the negative rate to be 20% or 80% or 100%, corresponding to 20% or 80% or 100% of instances being wrongly labeled. An example of incorrect annotation in a classification dataset would be if there are two labels, "True" and "False." If the true class of one instance is "True," then its incorrect annotation would be "False". Subsequently, the retriever is tasked with retrieving relevant information from this corpus. The retrieved information, along with the input sentence, is fed into the LLM to generate the output. 2.2.3 DIVERSE ROBUSTNESS (DR) Diverse Robustness refers to the ability to incorporate diverse information from various task corpora. On one hand, in numerous scenarios, the corpus from other tasks may contain valuable information to aid in generation. For instance, in the task of triple extraction, if a suitable triple extraction corpus is unavailable, the question-answering corpus may assist in extracting the necessary information. On the other hand, different tasks may introduce noise that could potentially impede the performance of the RAL. To generate better output, it is necessary for RAL to have the ability to retrieve diverse information. So, we introduce our DR testbed, as shown in Figure 2(d), when constructing the corpus of "n", it incorporates corpora from other tasks. For instance, if "n" refers to the Chemprot (triple extraction task), the corpus of "n" includes corpora from tasks such as GIT (triple extraction task), PHarmKG (link prediction task), and so on. Next, the retriever is required to extract the pertinent information from the diverse corpus. Subsequently, the retrieved information, along with the input sentence, is fed into the LLM to generate the output. 2.2.4 NEGATIVE AWARENESS (NA) Negative Awareness evaluates the ability of LLMs to discern whether the retrieved information is negative (it is not conducive to the expected output). In real-world scenarios, if the retriever obtains 5 negative information and the LLM can identify it as such, the LLM can then seek out more useful information to aid in generation based on this feedback. So, we introduce our NA testbed, as shown in Figure 2(e), we designate all values in the corpus of "n" as incorrect labels. After obtaining the retrieved documents from the corpus, the model is expected to produce two types of output. Firstly, task-based output, such as in the task of triple extraction, the output should be triple. Secondly, the model should also provide a judgment on whether the retrieved knowledge is negative or not. 2.3 EVALUATION METRICS 2.3.1 TASK-BASED METRICS In the triple Extraction task, same as BiomedRAG Li et al. (2024a), triple is regarded as correct when its relation type, the head entity, and the tail entity are all correct. 
For example, in the sentence: Infusion of prostacyclin (PGI2) reportedly attenuates renal ischemic injury in the dog and the rat., triple <Infusion, treats, rat> is regarded as correct while <injury, treats, rat> is not. We evaluated all the models and reported the evaluation metric, including Micro Precision, Recall, and F1-score. For the text classification, link prediction, and question answering task, we follow the same evaluation metrics as triple extraction. For the natural language inference task, we use the same evaluation metric (Macro F1) as the BioNLI. 2.3.2 NEGATIVE AWARENESS METRICS To assess negative awareness in our study, we define a negative instance as a mislabeled instance. In the first, we need to evaluate the model performance using mislabeled examples. For instance, in the ade-corpus-v2 classification data, with two labels "True" and "False", this evaluation gauges the performance of "True" or "False" predictions. Typically, in the RAL framework, if the retrieved example contains the input sentence and its expected output, the LLM should achieve 100% performance when tested with the input sentence. Despite all instances in the retrieval corpus being mislabeled, the LLM may still generate the correct output when utilizing these examples. In our experiments, we also investigate this aspect. Building on this discovery, we delineate two types of negative instances: • True negatives: When the negative instance is provided to the LLM along with the input sentence, resulting in the incorrect output. In this scenario, the number of input sentences is denoted as lt. • False negatives: When the negative instance is presented to the LLM alongside the input sentence, leading to the correct output. In this case, the number of input sentences is represented as lf . At the same time, we also expected the LLM could output True - The retrieved example is negative example or False- The retrieved example is not a negative example by providing a specific instruction Please determine whether the retrieved example constitutes negative information. If it is negative, please output False; if it is not negative, please output True for each input sentence. For an input sentence that has false negative examples, if the LLM could output False - The retrieved example is not a negative example, it demonstrates that the LLM recognizes the example as a false negative. After the judgment of LLM, The count of input sentences with "false negative examples" is denoted as f . For an input sentence that has true negative examples, if the LLM could output True - The retrieved example is a negative example, it demonstrates that the LLM recognizes the example as a true negative. After the judgment of LLM, the count of input sentences with "true negative examples" is denoted as t. So the true negative awareness rate is calculated by t/lt, and the false negative awareness rate is calculated by f /lf . 3 EXPERIMENTS In this section, we assess RAL’s performance across various biomedical NLP tasks, analyze its efficacy on four proposed testbeds, and discuss its abilities. 6 3.1 SETTINGS AND DATASET We evaluated three state-of-the-art LLMs: LLamA2-13B (Touvron et al., 2023) , MedLLamA- 13B (Wu et al., 2023), and LLaMA3 8B3, along with three retrievers: BM25 (Luo et al., 2023), Contriver (Izacard et al., 2021), and MedCPT (Jin et al., 2023). 
We considered five biomedical NLP tasks: triple extraction, link prediction, text classification, question answering, and natural language inference, across nine datasets: ADE, ChemProt, GIT, PHarmKG, Hetionet, Ade-corpus-v2, SemedCLass, MedMCQA, and BioNLI. The data statistics are shown in Table 1. The experiments were conducted using A100 GPUs. Triple extraction Link Prediction Text classification Question answering Natual language inference Dataset ADE (Gurulingappa et al., 2012b) ChemProt (Taboureau et al., 2010) GIT (Li et al., 2023) PHarmKG (Zheng et al., 2021) Hetionet (Himmelstein et al., 2017) Ade-corpus-v2 (Gurulingappa et al., 2012a) SemdClass (Vasilakes Jake A, 2018) MedMCQA (Pal et al., 2022) BioNLI (Bastan et al., 2022) train 4,970 4,001 3,734 4,000 4,000 4,000 2,400 34,994 5,544 test 2,130 3,355 465 500 500 500 600 4,183 6,308 dev – 2,366 492 500 500 500 600 4,183 12,807 Table 1: Data Statistics for the datasets we used in this work 3.1.1 TRIPLE EXTRACTION DATASET In this paper, we utilized ADE, Chemprot, and GIT as the foundational datasets. • ADE (Gurulingappa et al., 2012a) is extended from relation extraction task to triplet extrac- tion task in this paper. All sentences either describe the effect of the drug or the dose of the drug. Thus, the triplets consist of (head entity: drug, relation type: effect, tail entity: ef- fect_description) and (head entity: drug, relation type: dosage, tail entity: dose_description). Among all triplets, there are only two relation types: effect and dosage. • ChemProt (Taboureau et al., 2010): The Chemical Protein Interaction Corpus comprises 2432 PubMed abstracts annotated with chemical-protein interactions, encompassing 23 distinct interaction relations. Building upon prior research (Sun et al., 2022), the corpus exclusively considers sentence-level instances, with a particular focus on five prominent interaction types for classification: CPR3, CPR4, CPR5, CPR6, CPR9. • GIT (Li et al., 2023) is a high-quality biomedical triple extraction dataset for non-drug therapies, characterized by its high-quality annotations and comprehensive coverage of relation types. It includes 22 relation types from SemMedDB. 3.1.2 LINK PREDICTION In this paper, we utilized PHarmKG and Hetionet as the foundational datasets in the link prediction task. • PHarmKG (Zheng et al., 2021) is a knowledge graph to describe the relationship among genes, drugs, and diseases. In this work, we aim to predict the four mentioned relation types (Interactions, Disease-Gene, Disease-Chemical, Chemical-Gene) between two entities. During the huge quantity of triples in the PHarmKG, we randomly select 4,000 samples from the source training set for training, 500 samples from the source testing set for testing, and 500 samples from the source validation set for validation. • Hetionet (Himmelstein et al., 2017) is an integrative network of disease, which includes 46 relation types. In our paper, we randomly select 4,000 samples from the source training set for training, 500 samples from the source testing set for testing, and 500 samples from the source validation set for validation. 3https://github.com/meta-llama/llama3 7 3.1.3 TEXT CLASSIFICATION In this paper, we utilized Ade-corpus-v2 and SemdClass as the foundational dataset in the text classification task. • Ade-corpus-v2 (Gurulingappa et al., 2012a) dataset is designed for classifying whether a sentence is ADE( Adverse Drug Reaction)-related (True) or not (False). 
In our paper, we randomly select 4,000 instances for training, 500 for testing, and 500 for validation.
• SemdClass (Vasilakes Jake A, 2018) aims to determine whether the provided triple belongs to the given sentence or not. It includes two classes, False and True.
3.1.4 QUESTION ANSWERING AND NATURAL LANGUAGE INFERENCE
In this paper, we utilized MedMCQA as the foundational dataset for the question-answering task and BioNLI as the dataset for natural language inference.
• MedMCQA (Pal et al., 2022) is a multi-choice question-answering dataset designed to address medical entrance exam questions. In this work, we opt for the five-choice version (A, B, C, D, E).
• BioNLI (Bastan et al., 2022) aims to determine whether the provided hypothesis is consistent with or adversarial to the premise.
3.2 COMPARISON BETWEEN RALS AND BACKBONE LLMS
We first benchmark various LLMs and RALs on the nine datasets; the results are shown in Table 2 and Table 3. In the triple extraction task, we observed that RALs outperformed the backbone LLMs (i.e., RALs without a retriever). For example, the RAL built on MedLLaMA-13B with Contriever improved the original MedLLaMA-13B by 22.37% in terms of F1 score on the ADE dataset. However, RALs still face challenges in entity recognition; for example, on ADE, plain LLaMA2-13B achieves the best performance compared to LLaMA2-13B with retrievers. Another interesting finding is that models with more parameters do not necessarily yield the best performance. For instance, on ChemProt, LLaMA3-8B with Contriever outperforms RALs built on larger models.
RALs also proved effective in improving LLM performance on link prediction, text classification, and natural language inference:
• LLaMA2-13B with Contriever improved the original LLaMA2-13B by 0.40% F1 on the PHarmKG dataset;
• MedLLaMA-13B with BM25 improved the original MedLLaMA-13B by 11.86% F1 on the Hetionet dataset;
• LLaMA2-13B with MedCPT improved the original LLaMA2-13B by 0.40% F1 on the Ade-corpus-v2 dataset;
• LLaMA2-13B with Contriever improved the original LLaMA2-13B by 1.67% F1 on the SemClass dataset;
• LLaMA2-13B with MedCPT improved the original LLaMA2-13B by 6.59% Macro-avg F1 on the BioNLI dataset.
On MedMCQA, our findings differ from other work (Xiong et al., 2024): we observed that the backbone LLMs outperform RALs. We speculate that this discrepancy lies in the nature of label-not-sensitive tasks, for which RALs can retrieve from large corpora such as PubMed (White, 2020) or other relevant datasets. In our study, however, the retrieval corpus is derived solely from the training set, which may limit the breadth of knowledge accessible to the RALs.
3.3 RESULTS AND DISCUSSION ON TESTBED1: UNLABELED ROBUSTNESS
We evaluate model performance with the unlabeled corpus; the results are shown in Table 4 and Table 5. We have the following observations: (1) RAL utilizing the unlabeled corpus exhibits lower performance compared to RAL utilizing the labeled corpus. RALs demonstrate a strong dependence on the labeled corpus, especially on label-intensive tasks. For instance, with the labeled corpus, the performance of RAL surpasses that of RAL without the labeled corpus by 26.41% on ADE.
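As a rough sketch of the retrieve-then-prompt setup benchmarked throughout these comparisons (a single retrieved training example prepended to the input sentence before it is passed to the backbone LLM), the snippet below uses the rank_bm25 package for the BM25 retriever. The corpus contents, prompt template, and function names are illustrative assumptions, not the exact ones used in the paper.

```python
from rank_bm25 import BM25Okapi  # pip install rank-bm25

# Retrieval corpus: labeled training instances (sentence, expected output).
corpus = [
    ("The patient developed a rash after taking penicillin.", "True"),
    ("The committee met on Tuesday to discuss funding.", "False"),
]
tokenized = [sent.lower().split() for sent, _ in corpus]
bm25 = BM25Okapi(tokenized)

def build_prompt(input_sentence: str) -> str:
    """Retrieve the most relevant labeled example and prepend it to the input."""
    scores = bm25.get_scores(input_sentence.lower().split())
    best_idx = max(range(len(corpus)), key=lambda i: scores[i])
    best_sent, best_label = corpus[best_idx]
    return (
        "Example:\n"
        f"Sentence: {best_sent}\nOutput: {best_label}\n\n"
        f"Sentence: {input_sentence}\nOutput:"
    )

# The resulting prompt is then fed to the backbone LLM (e.g., LLaMA2-13B).
print(build_prompt("He reported severe nausea after the new drug was administered."))
```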
8 Dataset LLM LLaMA2-13B triple Approach BM25 (Luo et al., 2023) Contriever (Izacard et al., 2021) MedCPT (Jin et al., 2023) No Retriever Precision Recall 30.88 36.06 30.80 34.79 30.99 36.07 30.81 34.94 ADE MedLLaMA-13B LLaMA3-8B LLaMA2-13B ChemProt MedLLaMA-13B LLaMA3-8B LLaMA2-13B GIT MedLLaMA-13B LLaMA3-8B BM25 (Luo et al., 2023) Contriever (Izacard et al., 2021) MedCPT (Jin et al., 2023) No Retriever BM25 (Luo et al., 2023) Contriever (Izacard et al., 2021) MedCPT (Jin et al., 2023) No Retriever BM25 (Luo et al., 2023) Contriever (Izacard et al., 2021) MedCPT (Jin et al., 2023) No Retriever BM25 (Luo et al., 2023) Contriever (Izacard et al., 2021) MedCPT (Jin et al., 2023) No Retriever BM25 (Luo et al., 2023) Contriever (Izacard et al., 2021) MedCPT (Jin et al., 2023) No Retriever BM25 (Luo et al., 2023) Contriever (Izacard et al., 2021) MedCPT (Jin et al., 2023) No Retriever BM25 (Luo et al., 2023) Contriever (Izacard et al., 2021) MedCPT (Jin et al., 2023) No Retriever BM25 (Luo et al., 2023) Contriever (Izacard et al., 2021) MedCPT (Jin et al., 2023) No Retriever 33.77 35.66 33.30 12.26 27.88 34.44 31.70 9.85 49.44 85.75 86.25 78.58 54.78 86.15 81.33 52.10 70.23 87.13 86.62 23.21 60.81 74.78 75.64 61.76 58.59 65.95 75.65 42.60 62.72 74.76 66.20 75.81 33.76 33.57 29.72 12.16 27.70 31.13 27.04 5.49 48.78 85.05 85.40 76.41 49.53 85.22 80.08 49.04 69.98 86.68 83.07 19.72 54.73 72.37 73.44 56.45 57.20 65.81 74.84 41.51 62.58 67.53 50.97 72.80 head entity Precision Recall 73.71 79.72 76.71 83.29 73.96 79.76 76.75 83.64 77.06 79.15 79.01 81.87 72.87 78.91 77.99 83.84 65.77 98.42 98.44 98.12 73.30 97.97 98.14 97.65 83.86 97.92 98.06 98.63 76.94 89.44 92.03 84.12 79.30 85.34 90.22 89.18 79.31 89.76 87.15 87.79 77.02 74.51 70.52 81.22 72.39 71.31 66.53 46.76 64.89 97.61 97.47 95.40 66.29 96.92 96.63 91.91 83.57 97.41 94.04 83.80 69.25 86.56 89.35 76.88 77.42 85.16 89.25 86.88 79.14 81.08 67.10 84.30 F1 73.84 79.74 76.73 83.46 77.04 76.76 74.52 81.55 72.63 74.92 71.80 60.04 65.33 98.01 97.95 96.74 69.62 97.44 97.38 94.69 83.71 97.67 96.01 90.61 72.89 87.98 90.67 80.34 78.35 85.25 89.73 88.02 79.22 85.20 75.82 86.01 F1 30.93 36.06 30.81 34.86 33.77 34.58 31.41 12.21 27.79 32.70 29.19 7.05 49.11 85.40 85.82 77.48 52.02 85.69 80.70 50.52 70.10 86.91 84.81 21.32 57.61 73.55 74.52 58.99 57.89 65.88 75.24 42.05 62.65 70.96 57.59 74.27 relation Precision Recall 94.54 93.76 94.37 94.88 94.85 93.80 94.41 95.29 94.82 94.36 95.48 95.69 94.61 93.14 94.00 96.30 75.56 91.58 91.19 90.70 78.10 91.38 89.60 90.82 84.30 91.26 92.54 91.75 76.70 83.22 83.06 73.53 72.91 75.86 82.39 75.28 79.74 82.14 82.96 85.55 94.77 88.83 85.21 94.93 93.99 84.18 80.19 53.71 74.55 90.84 90.29 88.19 70.62 90.40 88.22 85.48 84.00 90.78 88.74 77.95 69.03 80.54 80.65 67.20 71.18 75.70 81.51 73.33 79.57 74.19 63.87 82.15 F1 94.70 93.78 94.39 95.08 94.80 91.51 90.05 95.31 94.30 88.43 86.55 68.96 75.05 91.21 90.73 89.43 74.17 90.89 88.91 88.07 84.15 91.02 90.60 84.29 72.67 81.86 81.83 70.22 72.03 75.78 81.95 74.29 79.66 77.97 72.17 83.82 tail entity Precision Recall 48.94 48.97 43.85 41.92 49.10 48.99 43.87 42.10 51.20 49.18 45.77 15.66 45.79 48.57 45.35 13.05 65.51 94.02 95.53 88.21 74.40 95.18 91.96 57.91 91.54 95.56 94.06 26.09 77.78 87.11 89.04 82.94 78.63 83.84 90.00 52.76 77.80 88.33 78.77 86.67 51.18 46.29 40.85 15.54 45.49 43.90 38.69 7.28 64.63 93.25 94.59 85.78 67.28 94.15 90.55 54.50 91.22 95.06 90.20 22.16 70.00 84.30 86.45 75.81 76.77 83.66 89.03 51.40 77.63 79.78 60.65 83.23 F1 49.02 48.98 43.86 42.01 51.19 47.69 43.17 15.60 
45.64 46.12 41.75 9.34 65.07 93.63 95.06 86.98 70.66 94.66 91.25 56.15 91.38 95.30 92.09 23.97 73.68 85.68 87.73 79.21 77.69 83.75 89.51 52.07 77.72 83.84 68.53 84.91 Table 2: Results of various approaches for triple extraction on ADE, ChemProt, and GIT. Underline with shade (green, pink, and blue) indicates the best performance on ADE, ChemProt, and GIT separately. Link Prediction Text Classification Question Answering Natural Language Inference LLM Approach PHarmKG Precision Recall F1 Hetionet Precision Recall Ade-corpus-v2 F1 Precision Recall F1 SemClass Precision Recall MedMCQA F1 Precision Recall F1 BioNLI Macro-avg F1 LLaMA2-13B MedLLaMA-13B LLaMA3-8B BM25 (Luo et al., 2023) Contriever (Izacard et al., 2021) MedCPT (Jin et al., 2023) No Retriever BM25 (Luo et al., 2023) Contriever (Izacard et al., 2021) MedCPT (Jin et al., 2023) No Retriever BM25 (Luo et al., 2023) Contriever (Izacard et al., 2021) MedCPT (Jin et al., 2023) No Retriever 97.60 98.00 97.40 97.60 95.00 97.00 97.40 97.20 96.80 96.60 97.00 97.20 97.60 97.60 98.00 98.00 97.40 97.40 97.60 97.60 95.00 95.00 97.00 97.00 97.40 97.40 97.20 97.20 96.80 96.80 96.60 96.60 97.00 97.00 97.20 97.20 82.37 77.00 81.60 80.80 90.40 77.20 84.40 78.54 81.80 73.40 83.00 81.80 82.37 77.00 81.60 80.80 90.40 77.20 84.40 78.54 81.80 73.40 83.00 81.80 82.37 77.00 81.60 80.80 90.40 77.20 84.40 78.54 81.80 73.40 83.00 81.80 95.40 96.60 96.80 96.40 95.60 95.60 95.40 95.40 94.80 94.60 95.40 93.80 95.40 95.40 96.60 96.60 96.80 96.80 96.40 96.40 95.60 95.60 95.60 95.60 95.40 95.40 95.40 95.40 94.80 94.80 94.60 94.60 95.40 95.40 93.80 93.80 75.50 79.33 78.50 77.66 72.67 77.66 76.16 64.00 75.50 75.83 74.67 73.16 75.50 75.50 79.33 79.33 78.50 78.50 77.66 77.66 72.67 72.67 77.66 77.66 76.16 76.16 64.00 64.00 75.50 75.50 75.83 75.83 74.67 74.67 73.16 73.16 40.38 35.53 36.78 41.63 37.81 29.82 33.86 46.79 37.73 28.11 31.57 56.93 40.49 40.42 35.52 35.52 36.93 36.80 41.52 41.52 37.96 37.86 29.75 29.77 34.04 33.88 46.41 46.47 38.90 37.79 28.12 28.11 31.82 31.56 55.43 55.91 45.10 35.12 69.21 62.62 48.81 53.07 53.68 61.07 19.17 63.85 56.89 6.71 Table 3: Results of various approaches for link prediction, text classification, question answering, and natural language inference. Underline with green shade indicates the best performance on each dataset. (2) Even without an unlabeled corpus, RAL still contributes to improving LLM performance in certain tasks As shown in Table 4, On Chemprot and Hetionet, RAL utilizing an unlabeled corpus could enhance the original LLM’s performance by 30.16% and 0.06%, respectively. We speculate that LLMs may possess sufficient knowledge to contribute to enhancing model performance on specific datasets. 3.3.1 ERROR ANALYSIS To better understand the impact of the unlabeled corpus on model generation, this section primarily analyzes the RAL performance on ADE, GIT, and BioNLI, which exhibited the poorest performance 9 ADE Corpus Unlabeled corpus Labeled corpus None F1 Precision Recall 9.53 9.65 36.06 36.06 34.79 34.86 9.76 36.07 34.94 ChemProt Precision Recall F1 75.56 87.13 42.60 69.14 72.21 86.68 86.91 41.51 42.05 GIT F1 Precision Recall 0.65 1.01 74.84 75.24 41.51 42.05 2.29 75.65 42.60 PHarmKG Precision Recall F1 Hetionet Precision Recall F1 97.20 98.00 97.60 97.20 97.20 98.00 98.00 97.60 97.60 78.60 90.40 78.54 78.60 78.60 90.40 90.40 78.54 78.54 Table 4: RAL Performance of ADE, ChemProt, GIT, PHarmKG and Hetionet on Testbed1: Unla- beled Robustness. 
On Testbed1, the used RAL is the one demonstrating the best performance on each dataset. Such as, on ADE, RAL is LLaMA2-13B with Contriever. Labeled corpus is the training set for each dataset. The unlabeled corpus refers to a training set devoid of labels for each dataset. Green shade refers to the best performance. Corpus Unlabeled corpus Labeled corpus None Ade-corpus-v2 Precision Recall F1 94.80 96.80 96.40 94.80 94.80 96.80 96.80 96.40 96.40 SemClass F1 Precision Recall 6.83 6.83 79.33 79.33 77.66 77.66 6.83 79.33 77.66 MedMCQA Precision Recall F1 BioNLI F1 35.19 40.38 41.63 35.27 35.19 10.91 40.49 40.42 69.21 41.52 41.52 62.62 Table 5: RAL Performance of Ade-corpus-v2, SemClass, MedMCQA, and BioNLI on Testbed1: Unlabeled Robustness. among the nine datasets used. We primarily summarize two error types as shown in Table 6. We observed that with the unlabeled corpus, RAL tends to generate redundant information and struggles to accurately predict the output, such as the head entity or relation type in the triple extraction task. Error type Dataset Redundant information ADE GIT Input sentence the fourth patient showed rls symptoms that were initially caused by a 20-mg daily olanzapine dosage and were later mitigated when olanzapine was reduced and ropinirole was administered. inactivation kinetics of vacterial glycerol dehydratase (ec 4.2.1.30) in the course of its reaction with adenosylcobalamin (adocbl) and its analogs were investigated.. Expected output Error output {olanzapine, dosage, 20-mg daily} ground tail entity: glycerol dehydratase {olanzapine, dosage, rls symptoms that were initially caused by a 20-mg dail} generated tail entity: adenosylcobalamin.. retrieved sentence: glycerol dehydratase BIONLI – negative negative retrieved sentence.. Error generation ADE GIT four patients receiving high-dose tamoxifen for greater than 1 year have demonstrated similar retinal changes. (tamoxifen, dosage, high-dose) (tamoxifen, effect, retinal changes.) inactivation of serum alkaline phosphatase by adrenaline and related substances . (adrenaline, inhibits, alkaline phosphatase) (alkaline phosphatase, interacts with, adrenaline) BIONLI — positive negative Table 6: Error cases of Unlabeled Robustness. In BioNLI, we have not included the input sentence in this table due to the excessive length of the sentences. 3.4 RESULTS AND DISCUSSION ON TESTBED2: COUNTERFACTUAL ROBUSTNESS We evaluate the model performance based on different counterfactual rates, and the results are shown in Table 7 and Table 8. We have the following observations: (1) Counterfactual corpus posses a challenge for RALs. On ChemProt, counterfactual instances significantly influence the model performance. For instance, when the counterfactual rate is set to 80%, the triple F1 drops to 47.79%, showcasing a considerable disparity compared to the triple F1 performance on the labeled corpus. Similar observations are noted in GIT, PharmKG, ADE-corpus-v2, SemClass, and BioNLI. This suggests that RALs can be easily misled by counterfactual corpus. 
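The counterfactual corpora used in this testbed are built by mislabeling a fixed fraction of the retrieval instances. Since the exact corruption procedure is not spelled out here, the sketch below simply assumes that mislabeling an instance means replacing its gold label with a different label drawn at random; the function and variable names are illustrative.

```python
import random

def make_counterfactual(corpus, label_set, rate, seed=0):
    """Return a copy of `corpus` in which `rate` of the instances are mislabeled.

    corpus:    list of (sentence, gold_label) pairs (the labeled retrieval corpus)
    label_set: all possible labels for the task
    rate:      fraction of instances to corrupt, e.g. 0.2, 0.8, 1.0
    """
    rng = random.Random(seed)
    corrupted = list(corpus)
    n_corrupt = int(len(corpus) * rate)
    for idx in rng.sample(range(len(corpus)), n_corrupt):
        sentence, gold = corpus[idx]
        wrong = rng.choice([label for label in label_set if label != gold])
        corrupted[idx] = (sentence, wrong)
    return corrupted

# e.g. an 80% counterfactual version of a small binary classification corpus
corpus_80 = make_counterfactual(
    [("sentence 1", "True"), ("sentence 2", "False"), ("sentence 3", "True"),
     ("sentence 4", "False"), ("sentence 5", "True")],
    label_set=["True", "False"], rate=0.8,
)
```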
10 Corpus Counterfactual corpus (20%) Counterfactual corpus (80%) Counterfactual corpus (100%) Labeled corpus None ADE ChemProt Precision Recall F1 Precision Recall GIT F1 Precision Recall F1 PHarmKG Precision Recall F1 Hetionet Precision Recall F1 41.41 41.94 25.60 36.07 34.94 41.31 41.36 36.90 39.26 25.59 25.59 36.06 36.06 34.79 34.86 74.22 82.96 84.12 87.13 42.60 66.90 70.37 33.57 47.79 83.07 83.59 86.68 86.91 41.51 42.05 74.50 74.84 75.22 75.65 42.60 71.61 73.03 74.84 74.84 74.41 74.81 74.84 75.24 41.51 42.05 97.40 97.80 97.60 98.00 97.60 97.40 97.40 97.80 97.80 97.60 97.60 98.00 98.00 97.60 97.60 94.80 85.26 76.60 90.40 78.60 94.80 94.80 85.26 85.26 76.60 76.60 90.40 90.40 78.60 78.60 Table 7: RAL Performance of ADE, ChemProt, GIT, PHarmKG and Hetionet on Testbed2: counter- factual robustness. On Testbed2, the used RAL is the one demonstrating the best performance on each dataset. Such as, on ADE, RAL is LLaMA2-13B with Contriever. Labeled corpus is the training set for each dataset. The 20/80/100% denotes a labeled corpus where 20/80/100% of instances are mislabeled for each dataset. Green shade refers to the best model performance. Corpus Counterfactual corpus (20%) Counterfactual corpus (80%) Counterfactual corpus (100%) Labeled corpus None Ade-corpus-v2 Precision Recall F1 SemClass Precision Recall F1 MedMCQA Precision Recall F1 BioNLI F1 95.80 95.00 96.80 96.80 96.40 95.80 95.80 95.00 95.00 96.80 96.80 96.80 96.80 96.40 96.40 73.33 75.66 77.66 79.33 77.66 73.33 73.33 75.66 75.66 77.66 77.66 79.33 79.33 77.66 77.66 34.93 36.67 37.27 40.38 35.19 35.03 34.94 47.76 36.44 36.47 64.63 37.40 37.28 46.46 69.21 40.49 40.42 35.27 35.19 62.62 Table 8: RAL Performance of Ade-corupus-v2, semCLass, MedMCQA, and BIoNL on Testbed2 counterfactual robustness. (2) A lower counterfactual rate may have a reduced impact on RALs. On ADE and Hetionet, we observed that when the counterfactual corpus is set to 20%, the model performance is better than the factual corpus. We speculate that retrievers have a greater chance of obtaining useful information when the counterfactual rate is lower. (3) The counterfactual corpus can still contribute to improving LLM performance. On ADE, ChemProt, GIT, PHarmKG, Hetionet, Ade-corpus-v2, SemClass, MEdMCQA, BIoNLI. The interest- ing finding is that even with a counterfactual corpus, the RAL performance often surpasses the original LLM. We speculate that the counterfactual corpus may have a beneficial effect on LLMs. Despite the content of the instances being counterfactual, the provided templates still aid in generation. (4) Counterfactual rates and model performance are not inversely proportional. This finding contradicts human intuition. In some datasets, such as SemClass, when the counterfactual rate is higher, the model performance also improves. This suggests that RALs possess a certain ability to handle counterfactual facts. 3.5 RESULTS AND DISCUSSION ON TESTBED3: DIVERSE ROBUSTNESS ADE ChemProt GIT PHarmKG Hetionet Corpus Diverse corpus Labeled corpus None F1 Precision Recall 9.15 10.08 36.06 36.06 34.79 34.86 11.21 36.07 34.94 Precision Recall F1 Precision Recall F1 Precision Recall F1 Precision Recall F1 78.56 87.13 42.60 76.41 77.47 86.68 86.91 41.51 42.05 74.88 75.65 42.60 65.38 69.80 74.84 75.24 41.51 42.05 97.20 98.00 97.60 97.20 97.20 98.00 98.00 97.60 97.60 75.41 90.40 78.60 75.41 75.41 90.40 90.40 78.60 78.60 Table 9: RAL Performance of ADE, ChemProt, GIT, PHarmKG and Hetionet on Testbed3: diverse robusteness. 
On Testbed3, the used RAL is the one demonstrating the best performance on each dataset. Such as, on ADE, RAG is LLaMA2-13B with Contriever. Labeled corpus is the training set for each dataset. The diversity labeled corpus of n refers to a collection of (0,..,n-1) corpus. We evaluate the model performance of diversity robustness, and the results are shown in Table 9 and Table 10. We have the following observations: The diversity labeled corpus poses a challenge to improve RALs. We found that RALs consider the knowledge in the diverse corpus as noise, which could potentially impact RAL performance, 11 Corpus Diversity labeled corpus Labeled corpus None Ade-corpus-v2 SemClass Precision Recall F1 Precision Recall F1 Precision Recall F1 MedMCQA BioNLI F1 96.20 96.80 96.40 96.20 96.20 96.80 96.80 96.40 96.40 75.33 79.33 77.66 75.33 75.33 79.33 79.33 77.66 77.66 25.01 40.38 41.63 25.02 24.87 80.32 40.49 40.42 69.21 41.52 41.52 62.62 Table 10: RAL Performance of Ade-corupus-v2, SemClass, MedMCQA, and BioNLI on Testbed 3: diverse robusteness. particularly evident in ADE and MedMCQA datasets. However, on BioNLI, the diversity labeled corpus could contribute to enhancing the model performance. We speculate that one reason is the retriever we used couldn’t retrieve useful information, while another reason could be that the corpus lacks the necessary information. 3.5.1 ERROR ANALYSIS On ADE, we discovered that the Diversity-labeled corpus also leads to redundancy in RAL generation, for instance, in sentence easily reversible hypoxemia and hypotension induced by nimodipine., the expected tail entity is hypotension, while RAL regarded the hypoxemia and hypotension induced by nimodipine. as the entity. It also struggles with extracting complex entities. For example, in the sentence clinical, spectroscopic, and imaging abnormalities resolved with discontinuation of metronidazole, clinical, spectroscopic, and imaging abnormalities is considered the ground truth, while RAL regards the entire sentence clinical, spectroscopic, and imaging abnormalities resolved with discontinuation of metronidazole as a single entity. In summary, we find that the primary challenge lies in entity recognition, especially in the recognition of tail entities. On MedMCQA, we observed that error generation primarily stemmed from misjudgment. For instance, in sentence Question: All of the following muscles are elevators of the mandible EXCEPT: Options: (A) Digastric; (B) Masseter; (C) Medial pterygoid; (D) Temporalis, the ground truth is A, while RAL generates the D. 3.6 RESULTS AND DISCUSSION ON TESTBED4: NEGATIVE AWARENESS We evaluate the model performance of negative awareness, and the results are shown in Table 11. We have the following observations: RAL poses a challenge to the Negative Awareness. The true negative awareness rate on PharmKG and BioNLI was zero, and it was only 1.07% on ADE. Interestingly, the overall performance of fake negative awareness is better than that of true negative awareness. This suggests that RALs still struggle with self-awareness regarding which examples could provide useful information for generations. 
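The rates reported in Table 11 follow the definitions of Section 2.3.2: among inputs whose mislabeled retrieved example leads to a wrong output (true negatives, lt) or still yields a correct output (false negatives, lf), we count how many the LLM explicitly judges as negative (t) or not negative (f). A minimal bookkeeping sketch, with hypothetical record field names, is given below.

```python
def negative_awareness_rates(records):
    """Compute true/false negative awareness rates from per-sentence records.

    Each record describes one input sentence paired with a mislabeled
    (negative) retrieved example and is assumed to contain:
      output_correct:  did the LLM still produce the correct task output?
      judged_negative: did the LLM answer that the retrieved example is negative?
    """
    lt = sum(1 for r in records if not r["output_correct"])  # true negative inputs
    lf = sum(1 for r in records if r["output_correct"])      # false negative inputs
    t = sum(1 for r in records
            if not r["output_correct"] and r["judged_negative"])
    f = sum(1 for r in records
            if r["output_correct"] and not r["judged_negative"])
    true_rate = t / lt if lt else 0.0    # t / lt
    false_rate = f / lf if lf else 0.0   # f / lf
    return true_rate, false_rate
```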
Task triple extraction link prediction text classification Dataset ADE ChemProt GIT PHarmKG Hetionet Ade-corpus-v2 SemClass MedMCQA True negative awareness rate 1.07 19.24 27.73 0.00 1.71 68.75 1.49 0.26 0.00 Fake negative awareness rate 9.15 77.49 69.75 63.11 31.33 70.45 99.35 3.92 0.38 question answering natural language inference BioNLI Table 11: RAL Performance of ADE, ChemProt, GIT, PHarmKG, Hetionet, Ade-corpus-V2, Sem- Class, MedMCQA and BIoNLI on Testbed4: negative awareness. 12 4 CONCLUSION In this paper, we assess the performance of RALs on five distinct biomedical NLP tasks, while also evaluating their robustness and self-awareness abilities. To conduct the evaluation, we build a biomedical retrieval-augmented generation benchmark (BIoRAB), which mainly includes four testbeds. 5 LIMITATIONS In this study, we utilized the training set as the retriever corpus for the question-answering task. However, several studies utilize larger corpora with richer knowledge in the question answering task, such as PubMed and Wikidata. In other tasks such as link prediction, augmenting the size of the labeled corpus remains a formidable challenge. Additionally, three retrievers select the most relevant instance of the input sentence as an example. We strive to ensure the validity of our comparisons, but it’s important to note that our findings and results are confined to the dataset, RALs we utilized. 6 ACKNOWLEDGEMENTS This work was supported by the National Institutes of Health’s National Center for Complementary and Integrative Health grant number R01AT009457 and National Institute on Aging grant number R01AG078154. The content is solely the responsibility of the authors and does not represent the official views of the National Institutes of Health. REFERENCES Mohaddeseh Bastan, Mihai Surdeanu, and Niranjan Balasubramanian. Bionli: Generating a biomedical nli dataset using lexico-semantic constraints for adversarial examples. arXiv preprint arXiv:2210.14814, 2022. Harsha Gurulingappa, Abdul Mateen Rajput, Angus Roberts, Juliane Fluck, Martin Hofmann-Apitius, and Luca Toldo. Development of a benchmark corpus to support the automatic extraction of drug-related adverse effects from medical case reports. Journal of Biomedical Informatics, 45 (5):885 – 892, 2012a. ISSN 1532-0464. doi: https://doi.org/10.1016/j.jbi.2012.04.008. URL http://www.sciencedirect.com/science/article/pii/S1532046412000615. Text Mining and Natural Language Processing in Pharmacogenomics. Harsha Gurulingappa, Abdul Mateen Rajput, Angus Roberts, Juliane Fluck, Martin Hofmann-Apitius, and Luca Toldo. Development of a benchmark corpus to support the automatic extraction of drug-related adverse effects from medical case reports. Journal of biomedical informatics, 45(5): 885–892, 2012b. Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat, and Mingwei Chang. Retrieval augmented language model pre-training. In International conference on machine learning, pp. 3929–3938. PMLR, 2020. Daniel Scott Himmelstein, Antoine Lizee, Christine Hessler, Leo Brueggeman, Sabrina L Chen, Dexter Hadley, Ari Green, Pouya Khankhanian, and Sergio E Baranzini. Systematic integration of biomedical knowledge prioritizes drugs for repurposing. Elife, 6:e26726, 2017. Gautier Izacard, Mathilde Caron, Lucas Hosseini, Sebastian Riedel, Piotr Bojanowski, Armand Joulin, and Edouard Grave. Unsupervised dense information retrieval with contrastive learning. arXiv preprint arXiv:2112.09118, 2021. 
Ziwei Ji, Nayeon Lee, Rita Frieske, Tiezheng Yu, Dan Su, Yan Xu, Etsuko Ishii, Ye Jin Bang, Andrea Madotto, and Pascale Fung. Survey of hallucination in natural language generation. ACM Computing Surveys, 55(12):1–38, 2023. Qiao Jin, Won Kim, Qingyu Chen, Donald C Comeau, Lana Yeganova, W John Wilbur, and Zhiyong Lu. Medcpt: Contrastive pre-trained transformers with large-scale pubmed search logs for zero-shot biomedical information retrieval. Bioinformatics, 39(11):btad651, 2023. 13 Patrick Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, et al. Retrieval-augmented genera- tion for knowledge-intensive nlp tasks. Advances in Neural Information Processing Systems, 33: 9459–9474, 2020. Mingchen Li and Lifu Huang. Understand the dynamic world: An end-to-end knowledge informed framework for open domain entity state tracking. arXiv preprint arXiv:2304.13854, 2023. Mingchen Li and Rui Zhang. How far is language model from 100% few-shot named entity recognition in medical domain. arXiv preprint arXiv:2307.00186, 2023. Mingchen Li, M Chen, Huixue Zhou, and Rui Zhang. Petailor: Improving large language model by tailored chunk scorer in biomedical triple extraction. arXiv preprint arXiv:2310.18463, 2023. Mingchen Li, Halil Kilicoglu, Hua Xu, and Rui Zhang. Biomedrag: A retrieval augmented large language model for biomedicine. arXiv preprint arXiv:2405.00465, 2024a. Mingchen Li, Chen Ling, Rui Zhang, and Liang Zhao. A condensed transition graph framework for zero-shot link prediction with large language models. arXiv preprint arXiv:2402.10779, 2024b. Man Luo, Xin Xu, Zhuyun Dai, Panupong Pasupat, Mehran Kazemi, Chitta Baral, Vaiva Imbra- saite, and Vincent Y Zhao. Dr. icl: Demonstration-retrieved in-context learning. arXiv preprint arXiv:2305.14128, 2023. Oded Ovadia, Menachem Brief, Moshik Mishaeli, and Oren Elisha. Fine-tuning or retrieval? comparing knowledge injection in llms. arXiv preprint arXiv:2312.05934, 2023. Ankit Pal, Logesh Kumar Umapathi, and Malaikannan Sankarasubbu. Medmcqa: A large-scale multi-subject multi-choice dataset for medical domain question answering. In Conference on health, inference, and learning, pp. 248–260. PMLR, 2022. Ori Ram, Yoav Levine, Itay Dalmedigos, Dor Muhlgay, Amnon Shashua, Kevin Leyton-Brown, and Yoav Shoham. In-context retrieval-augmented language models. arXiv preprint arXiv:2302.00083, 2023. Karan Singhal, Tao Tu, Juraj Gottweis, Rory Sayres, Ellery Wulczyn, Le Hou, Kevin Clark, Stephen Pfohl, Heather Cole-Lewis, Darlene Neal, et al. Towards expert-level medical question answering with large language models. arXiv preprint arXiv:2305.09617, 2023. Cong Sun, Zhihao Yang, Lei Wang, Yin Zhang, Hongfei Lin, and Jian Wang. Mrc4bioer: joint extraction of biomedical entities and relations in the machine reading comprehension framework. Journal of Biomedical Informatics, 125:103956, 2022. Olivier Taboureau, Sonny Kim Nielsen, Karine Audouze, Nils Weinhold, Daniel Edsgärd, Francisco S Roque, Irene Kouskoumvekaki, Alina Bora, Ramona Curpan, Thomas Skøt Jensen, et al. Chemprot: a disease chemical biology database. Nucleic acids research, 39(suppl_1):D367–D372, 2010. Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971, 2023. Rizvi Rubina Zhang Rui Vasilakes Jake A. 
Bionli: Generating a biomedical nli dataset using lexico- semantic constraints for adversarial examples. https://conservancy.umn.edu/handle/11299/194965, 2018. Jacob White. Pubmed 2.0. Medical reference services quarterly, 39(4):382–387, 2020. Chaoyi Wu, Xiaoman Zhang, Ya Zhang, Yanfeng Wang, and Weidi Xie. Pmc-llama: Further finetuning llama on medical papers. arXiv preprint arXiv:2304.14454, 2023. Guangzhi Xiong, Qiao Jin, Zhiyong Lu, and Aidong Zhang. Benchmarking retrieval-augmented generation for medicine. arXiv preprint arXiv:2402.13178, 2024. 14 Cyril Zakka, Rohan Shad, Akash Chaurasia, Alex R Dalal, Jennifer L Kim, Michael Moor, Robyn Fong, Curran Phillips, Kevin Alexander, Euan Ashley, et al. Almanac—retrieval-augmented language models for clinical medicine. NEJM AI, 1(2):AIoa2300068, 2024. Yue Zhang, Yafu Li, Leyang Cui, Deng Cai, Lemao Liu, Tingchen Fu, Xinting Huang, Enbo Zhao, Yu Zhang, Yulong Chen, et al. Siren’s song in the ai ocean: a survey on hallucination in large language models. arXiv preprint arXiv:2309.01219, 2023. Shuangjia Zheng, Jiahua Rao, Ying Song, Jixian Zhang, Xianglu Xiao, Evandro Fei Fang, Yuedong Yang, and Zhangming Niu. Pharmkg: a dedicated knowledge graph benchmark for bomedical data mining. Briefings in bioinformatics, 22(4):bbaa344, 2021. 15
synthetic_cpt
8
BERTtime_Stories_Investigating_the_Role_of_Synthetic_Story_Data_in_Language_pre-training.pdf
4 2 0 2 c e D 8 ] L C . s c [ 3 v 5 6 3 5 1 . 0 1 4 2 : v i X r a BERTtime Stories: Investigating the Role of Synthetic Story Data in Language Pre-training Nikitas Theodoropoulos, Giorgos Filandrianos, Vassilis Lyberatos, Maria Lymperaiou and Giorgos Stamou Artificial Intelligence and Learning Systems Laboratory School of Electrical and Computer Engineering National Technical University of Athens [email protected], {geofila, vaslyb, marialymp}@ails.ece.ntua.gr, [email protected] Abstract We describe our contribution to the Strict and Strict-Small tracks of the 2nd iteration of the BabyLM Challenge. The shared task is cen- tered around efficient pre-training given data constraints motivated by human development. In response, we study the effect of synthetic story data in language pre-training using TinyS- tories: a recently introduced dataset of short stories. Initially, we train GPT-Neo models on subsets of TinyStories, while varying the amount of available data. We find that, even with access to less than 100M words, the mod- els are able to generate high-quality, original completions to a given story, and acquire sub- stantial linguistic knowledge. To measure the effect of synthetic story data, we train LTG- BERT encoder models on a combined dataset of: a subset of TinyStories, story completions generated by GPT-Neo, and a subset of the BabyLM dataset. Our experimentation reveals that synthetic data can occasionally offer mod- est gains, but overall have a negative influence on linguistic understanding. Our work offers an initial study on synthesizing story data in low resource settings and underscores their poten- tial for augmentation in data-constrained lan- guage modeling. We publicly release our mod- els and implementation on our GitHub 1. 1 Introduction As the performance of modern Language Models (LMs) increases, enabling remarkable feats of lan- guage understanding and reasoning, so do their demands in computational resources and training data (Hoffmann et al., 2022). For example, the recently released Llama 3 (Dubey et al., 2024) has 405B parameters and was pre-trained on 15.6T to- kens, on 6K H100 GPUs. In contrast, children are 1https://github.com/nikitas-theo/BERTtimeStories only exposed to no more than 100 million words by age 13 (Gilkerson et al., 2017), demonstrating exceptional learning efficiency compared to state- of-the-art LMs. This need for ever-increasing data and compute casts doubts on the cognitive plausi- bility of the current LM training regimes, and raises ecological and ethical concerns, such as democratic access to research for industry and research groups with modest resources. To address these issues, the BabyLM challenge (Warstadt et al., 2023a; Choshen et al., 2024) in- vites participants to work on cognitive modeling and efficient LM pre-training, given data limita- tions inspired by human development. This year’s iteration of the challenge features three experimen- tal tracks: a Strict track with a budget of 100M words, a Strict-Small track with a budget of 10M words, and a Multimodal track with a word budget of 100M words and unlimited visual input. A major change compared to last year’s challenge is allowing participants to construct their own training data. In the following sections, we present our con- tributions to the Strict and Strict-Small tracks. Our research draws inspiration from recent ad- vancements in Small Language Models (SLMs) for text generation, as explored in TinyStories (Eldan and Li, 2023). 
In this influential work, the authors demonstrate that training on a synthetic dataset of simple stories can enable SLMs to produce cre- ative, high-quality generations, which are novel with respect to the original training dataset. We hypothesize that for the small data regimes of the BabyLM challenge, augmenting the initial training corpus with synthetic data of high quality can pro- vide models with unseen linguistic contexts, and as a result improve language understanding. To test our hypothesis, we first extend previous work by Figure 1: Illustration of our proposed methodology for BERTtime Stories. We use a subset of the TinyStories dataset (Dtiny) (Eldan and Li, 2023), to train a decoder transformer for data augmentation. We prompt the decoder with the short stories from Dtiny and create a dataset of model generations (Dgen): each story (green) is truncated and used as a prompt (yellow), with the model generating an alternate completion (blue). We supplement the two datasets with a subset of the BabyLM dataset (Dbaby), released by Choshen et al. (2024), and train an encoder model on the combined data. Finally, we evaluate the linguistic proficiency of the encoder using the challenge benchmarks. Eldan and Li (2023), investigating generative per- formance with limited training data. We then train encoder transformer models on a diverse dataset, and measure the effect of synthetic data on linguis- tic proficiency. In technical terms, following Eldan and Li (2023), we propose to train a GPT-Neo decoder (Black et al., 2021) on TinyStories, and then use it to generate data for the training of a final encoder model. This poses some initial challenges, as the size of the original TinyStories dataset exceeds the challenge limits, with around 373M words. As a result, we experiment with training GPT-Neo mod- els while varying the amount of available data and evaluating their generative performance, keeping the model architecture fixed in the process. Our investigation of generative performance is comple- mentary to the original work, which measures the effects of varying the depth and size of the model architectures. Our findings demonstrate that even in low data regimes of ≤ 100M words, GPT-Neo models can acquire impressive grammatical under- standing, and generate stories of high quality and diversity, comparable to models trained on the full dataset. For our evaluation see Section 4.1. Next, we investigate the effect of the synthetic story data on language pre-training. Figure 1 illus- trates our methodology. We select a small subset of TinyStories, train a GPT-Neo model, and use it to generate alternate completions to the stories in the training set. We then create a combined dataset consisting of: 1 the subset of TinyStories used for GPT-Neo training, 2 the generated data, 3 a sample of the BabyLM dataset (Choshen et al., 2024). With the combined dataset, we train an LTG-BERT (Samuel et al., 2023) model, choosing this architecture as it demonstrated superior per- formance in the text-only tracks of last year. We compare the performance of our models against a variety of baselines, trained with and without the use of synthetic data. Our results show that a sim- ple application of synthetic story data for language pre-training results in marginal or even negative gains. Nevertheless, given the high generative per- DecoderEncoderOne day, a little boy named Tim went to the park. He saw a big tiger. The tigerwas not mean, but very easy to play with. Tim and the tiger played all day. 
They had lots of fun. Then, something unexpected happened. The tiger started to shake. Tim was scared. He did not know what was going on. But then, the tiger turned into a nice ... *CHI: Eve hand pocket.*MOT: oh your hand's in your pocket.*CHI: Eve puzzle.*MOT: that's your puzzle.*CHI: Mommy find Eve puzzle.*MOT: want me to find the rest of it.*CHI: yeah.*MOT: alright.*MOT: is it in the toy box?was very nice. Tim wanted to play with the tiger. He said, "Hi, tiger! Let's play!" The tiger did not say anything. Tim was sad. He wanted to find a way to make the tiger happy. Tim had an idea. He foundaballandthrewit.Thetigersawthe ball and started to play ... One day, a little boy named Tim went to the park. He saw a big tiger. The tiger+... formance of the GPT-Neo models, we believe that more research is needed to fully explore and exploit their potential. Contribution We list our contributions below: • We investigate the generative and linguistic abilities of GPT-Neo models trained on TinyS- tories while varying the amount of available data. We show that even with limited data, these models can produce generations of high quality, offering new insights into the capabil- ities of SLMs in low data regimes. • We investigate the effect of generated data on the pre-training of encoder LMs in a con- strained data setting. We conduct an extensive evaluation with different training schemes and baselines. Our experiments demonstrate the potential of data augmentation to enhance the linguistic capabilities of low resource LMs. 2 Related work Previous BabyLM Iteration Data Augmenta- tion techniques were shown to be beneficial in the previous year’s challenge (Warstadt et al., 2023b). Specifically, ChapGPT (Jumelet et al., 2023) uses regex patterns to extract common phrases from GLUE tasks, and then harnesses these patterns to generate follow-up questions that serve as addi- tional training data. In the Contextualizer paper (Xiao et al., 2023), extra training samples are cre- ated by dynamically combining chunks of texts from different contexts during training. Another approach named Baby’s CoThought (Zhang et al., 2023) utilizes a Large Language Model (LLM) to reformat unrelated sentences from the corpus into coherent paragraphs, thereby improving per- formance, albeit in defiance of data constraints. Language Models for Data Augmentation In recent years, LLMs have been increasingly lever- aged for data augmentation in various domains (Ding et al., 2024). Notably, Dai et al. (2023) introduced ChatGPT as a tool for generating re- alistic text samples from a combination of real and artificial data, enhancing training datasets. Simi- larly, transformer architectures, including decoder (GPT-2, Radford et al., 2019), encoder (BERT, Devlin et al., 2019), and seq2seq (BART, Lewis et al., 2020) models have been explored for aug- mentation (Kumar et al., 2020). In the work of Yoo et al. (2021), GPT-3 (Brown et al., 2020) was used to mix real and synthetic text samples for ro- bust data augmentation. Moreover, decoder models have been successfully employed to generate train- ing data for encoders, yielding significant improve- ments in zero-shot learning (Meng et al., 2022). Small Language Models The recent study by Eldan and Li (2023) highlighted that Small Lan- guage Models (SLMs), can outperform larger ones by leveraging high-quality synthetic training data, demonstrating fluency, coherence, and creativity despite having fewer parameters. 
This trend is fur- ther supported by work in sequential recommenda- tion, where small models are effectively employed for task-specific purposes (Xu et al., 2024). Addi- tionally, Bergner et al. (2024) utilize a pre-trained LLM to encode prompt tokens, using these repre- sentations to guide a smaller LM for more efficient response generation. 3 Methods We describe our data augmentation method using synthetic story data, as illustrated in Figure 1. 3.1 Datasets Our work is built on two datasets: 1 TinyStories – denoted as Dtiny, a collection of synthetic short sto- ries with simple language, 2 the BabyLM dataset – denoted as Dbaby, created to be a developmentally plausible pre-training corpus. For any dataset Ddata, we also denote a version of the data with m million words as Ddata-m. We describe the datasets below: BabyLM dataset The BabyLM dataset (Dbaby), released by Warstadt et al. (2023a); Choshen et al. (2024), consists of a diverse set of texts and is con- structed with the goal of simulating the linguistic in- put that a child receives throughout its development. It contains a high proportion of spoken language and includes, among others, excerpts from chil- dren’s books, dialogue, child-directed speech, and Wikipedia articles. Both 100M and 10M versions of the dataset were released, for the Strict and Strict-Small tracks respectively. Details about the dataset structure are provided in Appendix A. TinyStories dataset Introduced by Eldan and Li (2023), TinyStories (Dtiny) is a synthetic dataset, featuring a collection of short stories constructed by prompting GPT-3.5 and GPT-4 (OpenAI et al., 2024). The dataset was created to preserve all the core elements of natural language, such as grammar and reasoning, while exhibiting limited diversity and size. More specifically, the stories are 2-3 para- graphs long and follow simple plots and themes. In addition, the dataset contains a restricted vocabu- lary and in general is intended to be on the level of understanding of 3-4 year old children. The initial version of the dataset (V1), generated by both GPT-3.5 and GPT-4, contains approximately 373M words. A second version (V2) was later re- leased, with stories generated only by GPT-4 and around 440M words. We use this version in all our experiments. of the encoder transformer. Regarding the generation process, we experi- ment with two methods: greedy decoding and nucleus sampling (Holtzman et al., 2020). Dur- ing sampling, we generate k completions from our models for each prompt. To limit repetition between the k generations (and avoid wasting FLOPs), we calculate Self-BLEU (Section 3.4) for a set of values of k, and select the ones that best balance diversity and the total amount of additional training data. 3.2 Data Generation 3.3 Final Corpus Creation We describe the creation of the synthetic story dataset Dgen. To generate the data, we first train a decoder model (GPT-Neo) on a subset of TinySto- ries denoted as Dtiny-m. We truncate the stories in Dtiny-m to construct prompts and generate alternate completions using our model. We start by restricting the size m of the subset, taking into account two factors: the need for ade- quate diversity in the final corpus, and the need to ensure high-quality generations. Given the assump- tion that generation quality scales with dataset size, we want to select a big enough size m for Dtiny-m to enable high-quality generations from our trained models. 
At the same time, we want to leave the necessary room in our word budget for including a sufficiently large portion of the BabyLM dataset in the final training. This will ensure that our models are exposed to both a large vocabulary and a variety of word contexts. Intuitively, we aim to ensure that our pre-training data is diverse, as children learn from multiple sources of input. To address this trade-off, we sample from TinyS- tories, creating a collection of subsets of vary- ing sizes, Dtiny-m : m ∈ {5, 10, 25, 50, 75, 100}M (millions of words). For each subset, we train a GPT-Neo model and evaluate its generative and linguistic abilities. In our evaluation, we lever- age metrics for grammatical understanding, diver- sity, and generation quality; our metrics are intro- duced in Section 3.4. For each of the Strict and Strict-Small tracks, we select a subset Dtiny-m and a corresponding GPT-Neo model trained on it, based on our evaluation metrics and the above crite- ria. To construct Dgen, for each story in Dtiny-m, we truncate the story to 15%-30% of its size and use it to prompt the model for generation. We opt for using a smaller proportion of the original story to avoid duplication, given that stories in Dtiny-m will already be in the combined corpus for the training For each of the Strict and Strict-Small tracks, we have created Dtiny-m, and Dgen as previously described. We now create the combined dataset Dcomb, used to train the encoder transformer. We allocate our remaining word budget to a subset of the BabyLM dataset (Dbaby-b), created by sam- pling randomly from BabyLM on the document level. We leave sampling methods that account for the content of the documents for future work. For the Strict / Strict-Small tracks, the size b of Dbaby-b is chosen such that: b + m ≤ 100M / 10M. We now construct Dcomb by combining all the datasets Dcomb = (Dtiny-m, Dbaby-b, Dgen). We employ a masked language modeling objective to train an encoder transformer on Dcomb, with the LTG-BERT architecture (Samuel et al., 2023). 3.4 Evaluation For evaluating the encoder transformers we use the evaluation suite of the challenge, consisting of three evaluation benchmarks: BLiMP, (Super)GLUE, and EWoK, each broadly evaluating language profi- ciency, general language understanding, and world knowledge. We note that the challenge benchmarks constitute filtered versions (Warstadt et al., 2023b), rendering our results incomparable with full data evaluations. For the decoder models, we use EWoK and BLiMP, and also introduce some additional evaluation procedures: specifically, Self-BLEU evaluates diversity, and an LLM-assisted evalua- tion measures generation quality. We explain each of the evaluation benchmarks below. BLiMP The Benchmark of Linguistic Minimal Pairs (BLiMP), introduced by Warstadt et al. (2019), is a set of tasks designed to evaluate the linguistic knowledge of LMs. It consists of pairs of minimally different sentences covering various grammatical phenomena in syntax, morphology, and semantics. The model under evaluation has to assign a higher probability to the correct sentence in each pair. We also evaluate on BLiMP Supple- ment (Supp.), released by Warstadt et al. (2023a), which includes additional grammatical phenom- ena. For both BLiMP and BLiMP Supplement, we measure performance by calculating the average accuracy across all of their evaluation tasks. 
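For the decoder models, the BLiMP-style comparison described above amounts to checking which sentence of a minimal pair receives higher probability under the model. The snippet below is a minimal sketch of that comparison for a causal LM, not the official evaluation pipeline; it loads the publicly released TinyStories-33M checkpoint (referenced later in the paper) purely as a stand-in for the GPT-Neo models trained here.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("roneneldan/TinyStories-33M")  # stand-in checkpoint
model = AutoModelForCausalLM.from_pretrained("roneneldan/TinyStories-33M").eval()

@torch.no_grad()
def sentence_logprob(sentence: str) -> float:
    """Total token log-probability of `sentence` under the causal LM."""
    ids = tok(sentence, return_tensors="pt").input_ids
    # With labels=input_ids the model returns the mean cross-entropy over
    # the predicted positions, so we rescale to a summed log-probability.
    loss = model(ids, labels=ids).loss
    return -loss.item() * (ids.shape[1] - 1)

def minimal_pair_correct(good: str, bad: str) -> bool:
    """The model 'passes' the pair if the grammatical sentence is more probable."""
    return sentence_logprob(good) > sentence_logprob(bad)

print(minimal_pair_correct("The cats were sleeping.", "The cats was sleeping."))
```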
(Super)GLUE The General Language Under- standing Evaluation (GLUE) benchmark (Wang, 2018), assesses model performance across a wide range of natural language understanding (NLU) tasks. SuperGLUE (Wang et al., 2019), was later introduced to offer a more challenging set of tasks. We employ a total of 10 text classification tasks from both benchmarks, which include: question answering (BoolQ, MultiRC), sentiment classi- fication (SST-2), paraphrase detection (MRPC, QQP), linguistic acceptability (CoLA), common- sense reasoning (WSC), and natural language in- ference (MNLI, QNLI, RTE). Performance on (Su- per)GLUE is calculated by averaging accuracies across all tasks except for QQP and MRPC, where we use the F1-score, and CoLA, where we use the Matthews Correlation Coefficient – MCC. EWoK Elements of World Knowledge (EWoK) (Ivanova et al., 2024) assesses an LM’s ability to understand and model world knowledge. It evalu- ates how well a model can connect a target text to either an appropriate or mismatched context, em- phasizing key concepts such as social dynamics and spatial relationships. Both the contexts and targets are framed as minimally contrasting pairs, with customizable elements like objects, agents, and locations. During evaluation, the model needs to assign a higher probability to the correct context and target text pair. We report average accuracy across all the benchmark’s tasks. Self-BLEU To measure the diversity of gener- ated stories, we utilize the Self-BLEU score (Zhu et al., 2018). Given a generated collection, we cal- culate the BLEU score with one generation as the hypothesis and the others as reference, evaluating how similar it is to the rest. We define Self-BLEU as the average of all the BLEU scores in the corpus. The metric is defined on a continuous scale within [0, 1], where higher scores indicate less diversity. LLM Evaluation To provide a comprehensive evaluation of our decoder models’ generative abili- ties, we follow the approach of Eldan and Li (2023) and employ a LLM, prompting it with the story completions, and asking it to assess them in terms of Grammar, Creativity, and Consistency with the story’s beginning, on a scale from 1 to 10. The orig- inal evaluation by Eldan and Li (2023) used GPT-4, we instead leverage Claude-3.5 Sonnet (Anthropic, 2024)2, which better aligned with our available re- sources. Evaluation details are presented in Section 4.1, while the prompt is included in Appendix E. 4 Experiments Experimental Setup We conduct our experi- ments on a shared GPU cluster of 8 Nvidia V100 16 GB GPUs, and additionally evaluate our models on an Nvidia RTX-3090 24 GB GPU. All our mod- els are trained using the PyTorch (Paszke et al., 2019) and HuggingFace (Wolf et al., 2019) li- braries. For our evaluations of BLiMP, EWoK, and (Super)GLUE we build upon the official evaluation pipeline released by the challenge organizers (Gao et al., 2023; Choshen et al., 2024). 4.1 TinyStories & GPT-Neo Evaluation Regarding the decoder used for the generation, we select one of the best-performing GPT-Neo archi- tectures from Eldan and Li (2023) 3. All our trained GPT-Neo models share the same hyperparameters, except for weight decay, dropout, and vocabulary size, which are tuned to the specific data size. We built upon a similar training scheme as the authors, with added regularization for our low data regime. Hyperparameters and details about the architecture are included in Appendix C. 
We opt to train on the latest version of the TinyStories data (V2), gen- erated by prompting GPT-4; the full unsampled dataset contains ∼ 440M words. Throughout our evaluation, we also report results for the original model released by the authors, trained on the first version of the dataset (V1) with ∼ 373M words. In the following paragraphs, we conduct a thor- ough analysis of the relationship between the lin- guistic competency of GPT-Neo models trained on subsets of TinyStories, and the size of their training dataset |Dtiny-m|. We experiment with var- ious sizes for the TinyStories subsets Dtiny-m : m ∈ {5, 10, 25, 50, 75, 100}M (millions of words). From our experiments we draw insights about the abilities of generative LMs on low data regimes. This evaluation will also motivate our selection of 2Model version: claude-3-5-sonnet-20240620. 3https://huggingface.co/roneneldan/TinyStories-33M the TinyStories subset Dtiny used for generating the dataset Dgen and for training the final encoder. As an initial proxy of the language competency of the GPT-Neo decoders, we measure perfor- mance on BLiMP, its supplement (Supp.), and EWoK. Results are presented in Table 1. We notice that 50M words appear to be a cutoff point, with notable drops in performance for data sizes less than that. Based on this, we select Dtiny-50M for the Strict track, and Dtiny-5M for the Strict-Small track. Importantly, we do not in- clude the LLM evaluation (presented below) in this decision process, as it would invalidate our imposed data constraints. We leave further ex- perimentation on the subset data sizes for the Strict-Small track for future work. A second ob- servation concerns the 100M words model, which achieves the top score on BLiMP, shared by the 373M model by Eldan and Li (2023). This result agrees with the findings of Zhang et al. (2021), demonstrating that 100M words are enough to at- tain substantial grammatical knowledge. Train Data BLiMP ↑ Supp. ↑ EWoK ↑ 5M 10M 25M 50M 75M 100M 440M (V2) 373M (V1) 4 55.5 58.4 59.9 62.8 64.0 64.8 64.6 64.8 53.8 51.6 55.1 52.8 54.8 50.8 55.0 60.9 51.1 51.9 52.4 53.0 53.4 53.1 53.9 54.0 Table 1: Evaluation results for GPT-Neo models trained on TinyStories with various amounts of data. We re- port accuracy for all benchmarks. As the amount of data decreases, the BLiMP and EWoK scores generally decrease as well. In contrast, the BLiMP supplement score demonstrates more variance. The aforementioned scores give us evidence about the grammatical understanding (BLiMP) and world knowledge (EWoK) of our models, but leave out two important areas of generative performance, mainly: 1 the diversity and 2 the quality of gen- erations. We focus on these two metrics in the following paragraphs. Apart from the quantitative scores, in Appendix B we also include the genera- tions of all the GPT-Neo models for the TinyStories example illustrated in Figure 1. Evaluating Generation Quality Evaluating the quality of generations for open-ended generation tasks is challenging, as most common evaluation paradigms expect structured output, and measure fidelity towards a set of reference texts. To address this, we adopt the evaluation method proposed by Eldan and Li (2023), and prompt an LLM to eval- uate the stories generated by our models. In our experiments, we use Claude-3.5 Sonnet. We harness a set of 44 manually constructed prompts 5 containing the beginning of a story, and generate 10 completions for each of our models, sampling with a temperature of 1. 
We then provide the LLM with the beginning of the story and the model’s completion, and ask it in turn to evaluate the model’s response along three axes: (a) Gram- mar, (b) Creativity, and (c) Consistency with the beginning of the story. Additionally, we ask it to classify the story in different age groups, ranging from 3 (or under) to 16 years old. Scores are given on a scale of 1 to 10, and are averaged across stories and completions. The final results are presented in Table 2: we notice that limiting the training data, up to even 25M words, results in a minor decrease of performance across all three metrics. This indi- cates that the quality of the model generations is retained in the small data regime. Additionally, the 100M words decoder achieves impressive scores in all categories, and outperforms all other models in the Consistency metric – demonstrating that 100M words is enough for robust generative performance. Evaluating Generation Diversity To measure diversity, we utilize Self-BLEU (Zhu et al., 2018), which has been used before as a measure of the diversity of generated data (Holtzman et al., 2020). For each model, we sample 100 stories from the training set and truncate them to 15%-30%, prompt- ing the model to generate an alternate completion to the story’s beginning. When sampling from the model, a greedy decoding strategy is employed. We report Self-BLEU scores, scaled to [0, 100], for the set of 100 completions in Table 2 (higher scores correspond to less diverse generations). Our results indicate that models with limited training data can achieve high diversity, while at the same time main- taining generation quality, as demonstrated by the scores of models trained on 25M and 50M words. 4.2 Data Generation the com- We now describe the creation of bined dataset Dcomb = (Dtiny-m, Dbaby-b, Dgen), training an encoder LM. For leveraged for 4Model released by Eldan and Li (2023). 5https://huggingface.co/datasets/roneneldan/TinyStories Train Data Gr. ↑ Cr. ↑ Cons. ↑ SB ↓ 5M 10M 25M 50M 75M 100M 440M (V2) 373M (V1) 4.56 5.31 6.00 6.01 6.08 6.17 5.88 6.24 4.99 5.34 5.65 5.53 5.50 5.57 5.53 5.73 3.37 3.98 4.55 4.54 4.49 4.78 4.49 4.70 38.6 38.3 34.6 33.0 37.1 39.8 37.3 29.6 Table 2: Results on the evaluation of our models by Claude-3.5 Sonnet. We instruct the LLM to access gen- erative performance along three categories: Grammar (Gr.), Creativity (Cr.), Consistency (Cons.). We also in- clude Self-BLEU (SB), measuring generation diversity. brevity, details are given below only for the Strict-Small track; the same process is followed for the Strict track. As discussed in Section 4.1, we choose a subset of 5M words from Tinys- tories (Dtiny-5M), and use it to train a GPT-Neo model. This model is then employed to generate the dataset Dgen. We adapt the beginning of each story (15%-30%) in the training set Dtiny-5M as a prompt, and task the decoder to generate alterna- tive completions. We experiment with different generation techniques, including greedy generation – Dgen-greedy, and nucleus sampling – Dgen-nucleus-k, where k is the number of generations per prompt. Finally, the two datasets are combined with a sub- set of the BabyLM dataset (Dbaby-5M), ensuring a total size within the 10M word limit, to form D10M comb = (Dtiny-5M, Dbaby-5M, Dgen). In order to select k for nucleus sampling, we leverage the Self-BLEU score. We sample 100 sto- ries from Dtiny-5M and use their beginning (15%- 30%) to generate 50 completions for each prompt with p = 0.95. 
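A rough sketch of the generation step just described: each training story is truncated to 15-30% of its tokens and the decoder samples k alternate completions with nucleus sampling (p = 0.95). The checkpoint is the publicly released TinyStories-33M model used only as a placeholder for our own GPT-Neo decoders, and the maximum generation length is an assumption, not a reported setting.

```python
import random
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("roneneldan/TinyStories-33M")  # placeholder checkpoint
model = AutoModelForCausalLM.from_pretrained("roneneldan/TinyStories-33M").eval()

def complete_story(story: str, k: int = 5, seed: int = 0) -> list:
    """Truncate `story` to 15-30% of its tokens and sample k completions (p = 0.95)."""
    rng = random.Random(seed)
    ids = tok(story, return_tensors="pt").input_ids
    cut = max(1, int(ids.shape[1] * rng.uniform(0.15, 0.30)))
    prompt_ids = ids[:, :cut]
    out = model.generate(
        prompt_ids,
        do_sample=True,
        top_p=0.95,                 # nucleus sampling
        num_return_sequences=k,
        max_new_tokens=300,         # assumed generation budget
        pad_token_id=tok.eos_token_id,
    )
    # Strip the prompt tokens so only the alternate completions remain.
    return [tok.decode(seq[cut:], skip_special_tokens=True) for seq in out]

completions = complete_story(
    "One day, a little boy named Tim went to the park. He saw a big tiger.", k=5
)
```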
For each value of k ∈ {2, 3, ..., 50} we calculate Self-BLEU among the group of gener- ations Sk. Our goal is to examine how diverse the different generations are for the same prompt, as the number of generations (k) increases. Figure 2 depicts the average Self-BLEU across all prompts. Based on the presented results, we choose to ex- periment with k = 5 and k = 10, as a satisfactory balance between diversity and added dataset size. 4.3 Training LTG-BERT Following the creation of the combined corpus Dcomb, we employ it to train an LTG-BERT (Samuel et al., 2023) encoder module. Our training procedure is based on the source code released by Figure 2: We generate 50 completions for 100 prompts with the GPT-Neo models trained on Dtiny-5M, Dtiny-50M. We plot the average self-BLEU score across prompts, as the number of generations per prompt (k) increases. the authors6, prompting our selection of similar hy- perparameters (Appendix C), adapted for our spe- cific infrastructure and available compute. More- over, our experiments are conducted with minimal hyperparameter optimization. In order to assess the effect of data augmentation on final performance, we train a variety of baselines, ablating over the pre-training dataset of our models and keeping all other training conditions constant. Specifically, for a given track, all the models share the same hyper- parameters and amount of FLOPs, ensuring a fair comparison. Our baselines are described below. Baselines For the Strict-Small track, we es- tablish baselines by training LTG-BERT models using 10M words from the BabyLM – Dbaby-10M and Tinystories – Dtiny-10M datasets respectively. Additionally, we train an encoder using a combina- tion of 5M words from each one of the two datasets – Dbaby-5M+Dtiny-5M. These models serve as bench- marks against which we assess the performance of models trained with various configurations of gen- erated data, aiming to evaluate the effectiveness of data augmentation. The same methodology is applied consistently to the Strict track as well. Here, we train encoders with 100M words from each dataset separately, as well as in a combined setting, utilizing 50M words from each dataset. We also include results for the challenge baselines – LTG-BERT (Samuel et al., 2023) and BabyLlama (Timiryasov and Tastet, 2023). We emphasize that these models are trained with different hyperparam- eters than those in our controlled setting. Notably, the LTG-BERT model released by the organizers was trained for ∼ 20 epochs on the Strict track, 6https://github.com/ltgoslo/ltg-bert 2345678910152025304050Number of generations (k)0.00.10.20.30.40.5Average Self-BLEU score50M5M Model Training Data Total BLiMP Supp. EWoK GLUE Avg. LTG-BERT Dbaby-10M BabyLlama Dbaby-10M LTG-BERT (ours) Dbaby-10M Dtiny-10M Dtiny-10M + Dgen-greedy Dbaby-5M + Dtiny-5M Dbaby-5M + Dtiny-5M + Dgen-greedy Dbaby-5M + Dtiny-5M + Dgen-nucleus-1 Dbaby-5M + Dtiny-5M + Dgen-nucleus-1 † ⋆ Dbaby-5M + Dtiny-5M + Dgen-nucleus-5 Dbaby-5M + Dtiny-5M + Dgen-nucleus-10 10M 10M 10M 10M 20M 10M 15M 15M 15M 33M 56M 60.6 69.8 62.8 59.8 58.7 62.6 62.1 62.5 63.2 62.4 61.0 60.8 59.5 63.7 54.2 57.8 60.7 60.2 62.3 59.3 60.1 58.4 47.6 50.7 51.2 52.2 48.9 51.5 50.4 48.8 50.4 50.7 50.1 60.3 63.3 71.0 67.0 67.1 71.2 70.6 69.5 71.1 69.4 69.5 57.3 60.8 62.2 58.3 58.1 61.5 60.8 60.8 61.0 60.6 59.8 Table 3: Model performance for the 10M word Strict-Small track. compared to our setting of ∼ 27 epochs (20K steps for both tracks). 
Balanced Training. While increasing the number of generated texts in the LTG-BERT training set (Dcomb), we also modify the distribution of TinyStories and BabyLM samples that the model encounters during training. This could affect the model's performance, as it becomes more finely tuned to TinyStories. To counter this effect, we experiment with a training variation where we balance the number of samples from both datasets. Specifically, samples in each batch are drawn with equal probability from both TinyStories – which includes both original and generated texts – and BabyLM. This method ensures that the model is exposed to an equal number of samples from each dataset throughout training. The dagger symbol † in the results denotes use of this strategy.

5 Results

We present the final evaluation results for the Strict-Small and Strict tracks at Table 3 and Table 4, respectively. The ⋆ symbol denotes the submitted model for this track.

Strict-Small Track. In the Strict-Small track, comparing the results of Dbaby-10M with Dtiny-10M reveals, as expected, that the BabyLM dataset is more beneficial for language pre-training compared to TinyStories. The performance metrics for TinyStories are consistently lower, except in the case of EWoK. Interestingly, replacing half of the BabyLM dataset with data from TinyStories only slightly affects the model's performance. However, as we add more instances of the synthetic story data, the positive impact of the BabyLM dataset begins to wane, leading performance to approach that of Dtiny-10M where BabyLM was not used at all. This suggests that training is over-influenced by the increased amount of TinyStories data. To mitigate this effect, we experimented with equally distributing the samples from the two datasets in a batch. This approach positively impacts the model's performance. Notably for BLiMP, this setup slightly surpasses the performance of the model trained solely on Dbaby-10M, resulting in the best score overall. Further, when compared to other data augmentation scenarios, the performance on GLUE is increased. Moreover, an interesting observation concerns the sampling technique used for augmenting the data. Changing the sampling strategy from greedy decoding to nucleus sampling positively affects the model's performance on the BLiMP and BLiMP Supp. benchmarks, while negatively impacting performance on EWoK and GLUE. This discrepancy is likely due to the nature of the datasets themselves. BLiMP focuses on evaluating grammatical understanding, while the increased diversity from nucleus sampling exposes the model to a wider range of linguistic structures and syntactic variations, thereby improving performance. Conversely, EWoK and GLUE require semantic coherence and factual consistency, where the increased diversity from nucleus sampling may introduce noise and less coherent narratives, potentially confusing the model, and degrading performance. Therefore, while more diverse stories benefit syntactic evaluation tasks such as those in BLiMP, they may not be as useful for semantic or knowledge-based tasks such as those included in EWoK and GLUE.
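The balancing strategy marked with † can be implemented in several ways; one simple sketch is a mixture dataset that picks the source of every example with probability 0.5, so that batches contain TinyStories (original plus generated) and BabyLM samples in roughly equal numbers. Class and variable names are illustrative and not taken from the released code.

import random
from torch.utils.data import Dataset

class BalancedMixture(Dataset):
    # Draw each example from TinyStories or BabyLM with equal probability,
    # regardless of the raw sizes of the two corpora.
    def __init__(self, tinystories, babylm, length):
        self.sources = [tinystories, babylm]
        self.length = length  # nominal number of examples per epoch

    def __len__(self):
        return self.length

    def __getitem__(self, idx):
        source = random.choice(self.sources)           # p = 0.5 per source
        return source[random.randrange(len(source))]   # uniform within the source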
Strict Track. Interestingly, for the Strict track we notice that data augmentation has a positive effect on the BLiMP and EWoK benchmarks. Specifically, adding the Dgen-greedy dataset results in increased performance compared to the baselines trained on Dtiny-100M and Dbaby-100M, as well as a mixture of the two (Dtiny-50M + Dbaby-50M). Additionally, the Dtiny-50M + Dbaby-50M combination is outperformed by both the Dgen-greedy and Dgen-nucleus-1 models, suggesting that synthetic data can offer modest gains in the Strict scenario. As with the Strict-Small track, increasing the size of the TinyStories dataset negatively affects the performance of the models, approaching that of the model trained solely on Dtiny-100M. However, in this case, balancing the datasets does not improve the model's performance. In the larger 100M word dataset, even with balancing, the sheer volume of TinyStories data may overwhelm the influence of the BabyLM data. The model is exposed to a much larger quantity of TinyStories content, which could dominate learning and reduce the effectiveness of balancing. Additionally, while the nucleus sampling strategy once again improves performance on the BLiMP Supp. dataset, it does not assist with BLiMP as it did in the Strict-Small track.

Table 4: Model performance for the 100M word Strict track.

Model              Training Data                                  Total   BLiMP   Supp.   EWoK   GLUE   Avg.
LTG-BERT           Dbaby-100M                                     100M    69.2    66.5    50.2   68.4   63.6
BabyLlama          Dbaby-100M                                     100M    73.1    60.6    52.1   69.0   63.7
LTG-BERT (ours)    Dbaby-100M                                     100M    64.0    67.6    47.3   74.0   63.2
                   Dtiny-100M                                     100M    61.2    63.2    48.0   70.6   60.8
                   Dtiny-100M + Dgen-greedy                       200M    61.1    59.6    48.7   69.1   59.6
                   Dtiny-50M + Dbaby-50M                          100M    65.5    65.6    47.2   71.0   62.3
                   Dtiny-50M + Dbaby-50M + Dgen-greedy            150M    66.6    63.3    49.7   71.8   62.8
                   Dtiny-50M + Dbaby-50M + Dgen-nucleus-1 ⋆       150M    65.6    65.0    49.3   72.7   63.1
                   Dtiny-50M + Dbaby-50M + Dgen-nucleus-1 †       150M    65.2    63.5    49.0   72.6   62.6
                   Dtiny-50M + Dbaby-50M + Dgen-nucleus-5         350M    65.4    64.4    45.9   69.8   61.4
                   Dtiny-50M + Dbaby-50M + Dgen-nucleus-10        600M    63.7    63.3    49.2   69.5   61.4

6 Conclusion

In this work, we explore data augmentation for language pre-training in a limited data setting. Using the TinyStories dataset we train GPT-Neo models and probe the relationship between generative ability and dataset size. To measure the effect of augmentation with synthetic data, we train LTG-BERT models on a diverse set of data configurations. Our experiments indicate that while synthesizing high quality data is possible in small data regimes, effectively utilizing it for pre-training can be challenging. Some modest gains are observed in the Strict track, while careful balancing shows promise for the Strict-Small track. Overall, our evaluation highlights the intricate balance required between data quantity, quality, and integration for effective training. Future work includes the investigation of different data domains, mixtures, and proportions, while precise calibration of hyperparameters may prove critical in exploiting the full benefit of synthetic data in low data pre-training.

7 Limitations

A limitation of our study is the exclusive use of a single LM architecture for both the encoder and decoder components. Our experiments are also limited to specific datasets, employing only TinyStories for synthetic data generation and a combination of TinyStories and BabyLM for encoder training. While these choices are made to ensure experimental control and draw solid conclusions, they limit the generalizability of our results. Another limitation concerns the creation of the combined dataset. We investigated only a single configuration of the two datasets – including them in equal proportion – and the documents within a dataset were sampled randomly.
We posit that more fine control over the mixture of datasets could fur- ther enhance the benefits of our data augmentation technique. Additionally, with regard to generation, the prompting strategy and truncation ratio could be more finely calibrated, in order to improve the balance between data quality and redundancy. By acknowledging these limitations, we aim to encourage further research in this area, focusing on the impact of data augmentation in size constrained and cognitively plausible language pre-training. Acknowledgments The research work was supported by the Hellenic Foundation for Research and Innovation (HFRI) under the 3rd Call for HFRI PhD Fellowships (Fel- lowship Number 5537). References Anthropic. 2024. Claude. https://www.anthropic. com/claude. Artificial Intelligence Model. Benjamin Bergner, Andrii Skliar, Amelie Royer, Tij- men Blankevoort, Yuki Asano, and Babak Ehteshami Bejnordi. 2024. Think big, generate quick: Llm-to- slm for fast autoregressive decoding. arXiv preprint arXiv:2402.16844. Sid Black, Leo Gao, Phil Wang, Connor Leahy, and Stella Biderman. 2021. Gpt-neo: Large scale autore- gressive language modeling with mesh-tensorflow. Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Ma- teusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. In Ad- Language models are few-shot learners. vances in Neural Information Processing Systems, volume 33, pages 1877–1901. Curran Associates, Inc. Leshem Choshen, Ryan Cotterell, Michael Y. Hu, Tal Linzen, Aaron Mueller, Candace Ross, Alex Warstadt, Ethan Wilcox, Adina Williams, and [call for papers] the Chengxu Zhuang. 2024. 2nd babylm challenge: Sample-efficient pretraining on a developmentally plausible corpus. Preprint, arXiv:2404.06214. Haixing Dai, Zhengliang Liu, Wenxiong Liao, Xiaoke Huang, Yihan Cao, Zihao Wu, Lin Zhao, Shaochen Xu, Wei Liu, Ninghao Liu, et al. 2023. Auggpt: Leveraging chatgpt for text data augmentation. arXiv preprint arXiv:2302.13007. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. Bosheng Ding, Chengwei Qin, Ruochen Zhao, Tianze Luo, Xinze Li, Guizhen Chen, Wenhan Xia, Junjie Hu, Anh Tuan Luu, and Shafiq Joty. 2024. Data aug- mentation using LLMs: Data perspectives, learning paradigms and challenges. In Findings of the Associ- ation for Computational Linguistics ACL 2024, pages 1679–1705, Bangkok, Thailand and virtual meeting. Association for Computational Linguistics. Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, Aiesha Let- man, Akhil Mathur, Alan Schelten, Amy Yang, An- gela Fan, et al. 2024. The llama 3 herd of models. Preprint, arXiv:2407.21783. Ronen Eldan and Yuanzhi Li. 2023. Tinystories: How small can language models be and still speak coherent english? Preprint, arXiv:2305.07759. 
Leo Gao, Jonathan Tow, Baber Abbasi, Stella Biderman, Sid Black, Anthony DiPofi, Charles Foster, Laurence Golding, Jeffrey Hsu, Alain Le Noac’h, Haonan Li, Kyle McDonell, Niklas Muennighoff, Chris Ociepa, Jason Phang, Laria Reynolds, Hailey Schoelkopf, Aviya Skowron, Lintang Sutawika, Eric Tang, An- ish Thite, Ben Wang, Kevin Wang, and Andy Zou. 2023. A framework for few-shot language model evaluation. Martin Gerlach and Francesc Font-Clos. 2018. A stan- dardized project gutenberg corpus for statistical anal- ysis of natural language and quantitative linguistics. Preprint, arXiv:1812.08092. Jill Gilkerson, Jeffrey A. Richards, Steven F. Warren, Ju- dith K. Montgomery, Charles R. Greenwood, D. Kim- brough Oller, John H. L. Hansen, and Terrance D. Paul. 2017. Mapping the early language environ- ment using all-day recordings and automated analy- sis. American Journal of Speech-Language Pathol- ogy, 26(2):248–265. Jordan Hoffmann, Sebastian Borgeaud, Arthur Mensch, Elena Buchatskaya, Trevor Cai, Eliza Rutherford, Diego de Las Casas, Lisa Anne Hendricks, Johannes Welbl, Aidan Clark, Tom Hennigan, Eric Noland, Katie Millican, George van den Driessche, Bogdan Damoc, Aurelia Guy, Simon Osindero, Karen Si- monyan, Erich Elsen, Jack W. Rae, Oriol Vinyals, and Laurent Sifre. 2022. Training compute-optimal large language models. Preprint, arXiv:2203.15556. Ari Holtzman, Jan Buys, Li Du, Maxwell Forbes, and Yejin Choi. 2020. The curious case of neural text degeneration. Preprint, arXiv:1904.09751. Anna A Ivanova, Aalok Sathe, Benjamin Lipkin, Un- nathi Kumar, Setayesh Radkani, Thomas H Clark, Carina Kauf, Jennifer Hu, RT Pramod, Gabriel Grand, et al. 2024. Elements of world knowledge (ewok): A cognition-inspired framework for evaluating basic world knowledge in language models. arXiv preprint arXiv:2405.09605. Jaap Jumelet, Michael Hanna, Marianne De Heer Kloots, Anna Langedijk, Charlotte Pouw, and Oskar Van Der Wal. 2023. Chapgtp, illc’s attempt at raising a babylm: Improving data efficiency by automatic task formation. arXiv preprint arXiv:2310.11282. Varun Kumar, Ashutosh Choudhary, and Eunah Cho. 2020. Data augmentation using pre-trained trans- former models. arXiv preprint arXiv:2003.02245. Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and com- prehension. In Proceedings of the 58th Annual Meet- ing of the Association for Computational Linguistics, pages 7871–7880, Online. Association for Computa- tional Linguistics. teachers trained on a small dataset with no perfor- mance penalty. In Proceedings of the BabyLM Chal- lenge at the 27th Conference on Computational Nat- ural Language Learning, pages 279–289, Singapore. Association for Computational Linguistics. Pierre Lison and Jörg Tiedemann. 2016. OpenSub- titles2016: Extracting large parallel corpora from movie and TV subtitles. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC’16), pages 923–929, Portorož, Slovenia. European Language Resources Association (ELRA). Brian MacWhinney. 2014. The Childes Project. Psy- chology Press. Yu Meng, Jiaxin Huang, Yu Zhang, and Jiawei Han. 2022. Generating training data with language models: Towards zero-shot language understanding. Preprint, arXiv:2202.04538. 
OpenAI, Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al. 2024. Gpt-4 technical report. Preprint, arXiv:2303.08774. Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Köpf, Edward Z. Yang, Zach DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Jun- jie Bai, and Soumith Chintala. 2019. Pytorch: An imperative style, high-performance deep learning li- brary. CoRR, abs/1912.01703. Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. David Samuel. 2023. Mean BERTs make erratic lan- guage teachers: the effectiveness of latent bootstrap- In Proceedings of ping in low-resource settings. the BabyLM Challenge at the 27th Conference on Computational Natural Language Learning, pages 221–237, Singapore. Association for Computational Linguistics. David Samuel, Andrey Kutuzov, Lilja Øvrelid, and Erik Velldal. 2023. Trained on 100 million words and still in shape: BERT meets British National Corpus. In Findings of the Association for Computational Lin- guistics: EACL 2023, pages 1954–1974, Dubrovnik, Croatia. Association for Computational Linguistics. Andreas Stolcke, Klaus Ries, Noah Coccaro, Eliza- beth Shriberg, Rebecca Bates, Daniel Jurafsky, Paul Taylor, Rachel Martin, Carol Van Ess-Dykema, and Marie Meteer. 2000. Dialogue act modeling for au- tomatic tagging and recognition of conversational speech. Computational Linguistics, 26(3):339–374. Inar Timiryasov and Jean-Loup Tastet. 2023. Baby llama: knowledge distillation from an ensemble of Alex Wang. 2018. Glue: A multi-task benchmark and analysis platform for natural language understanding. arXiv preprint arXiv:1804.07461. Alex Wang, Yada Pruksachatkun, Nikita Nangia, Aman- preet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. 2019. Superglue: A stickier benchmark for general-purpose language understand- In Advances in Neural Information ing systems. Processing Systems, volume 32. Curran Associates, Inc. Alex Warstadt, Leshem Choshen, Aaron Mueller, Ad- ina Williams, Ethan Wilcox, and Chengxu Zhuang. 2023a. Call for papers – the babylm challenge: Sample-efficient pretraining on a developmentally plausible corpus. Preprint, arXiv:2301.11796. Alex Warstadt, Aaron Mueller, Leshem Choshen, Ethan Wilcox, Chengxu Zhuang, Juan Ciro, Rafael Mos- quera, Bhargavi Paranjabe, Adina Williams, Tal Linzen, and Ryan Cotterell, editors. 2023b. Proceed- ings of the BabyLM Challenge at the 27th Conference on Computational Natural Language Learning. As- sociation for Computational Linguistics, Singapore. Alex Warstadt, Alicia Parrish, Haokun Liu, Anhad Mo- hananey, Wei Peng, Sheng-Fu Wang, and Samuel R. Bowman. 2019. Blimp: A benchmark of linguistic minimal pairs for english. CoRR, abs/1912.00582. Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pier- ric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, and Jamie Brew. 2019. Huggingface’s transformers: State-of-the-art natural language processing. CoRR, abs/1910.03771. Chenghao Xiao, G Thomas Hudson, and Noura Al Moubayed. 2023. Towards more human-like lan- guage models based on contextualizer pretraining strategy. 
In Proceedings of the BabyLM Challenge at the 27th Conference on Computational Natural Language Learning, pages 317–326, Singapore. As- sociation for Computational Linguistics. Wujiang Xu, Zujie Liang, Jiaojiao Han, Xuying Ning, Wenfang Lin, Linxun Chen, Feng Wei, and Yongfeng Zhang. 2024. Slmrec: Empowering small lan- guage models for sequential recommendation. arXiv preprint arXiv:2405.17890. Kang Min Yoo, Dongju Park, Jaewook Kang, Sang- Woo Lee, and Woomyeong Park. 2021. Gpt3mix: Leveraging large-scale language models for text aug- mentation. arXiv preprint arXiv:2104.08826. Yian Zhang, Alex Warstadt, Xiaocheng Li, and Samuel R. Bowman. 2021. When do you need bil- lions of words of pretraining data? In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1112–1125, Online. Association for Computational Linguistics. Zheyu Zhang, Han Yang, Bolei Ma, David Rügamer, and Ercong Nie. 2023. Baby’s CoThought: Lever- aging large language models for enhanced reasoning in compact models. In Proceedings of the BabyLM Challenge at the 27th Conference on Computational Natural Language Learning, pages 158–170, Singa- pore. Association for Computational Linguistics. Yaoming Zhu, Sidi Lu, Lei Zheng, Jiaxian Guo, Weinan Zhang, Jun Wang, and Yong Yu. 2018. Texygen: A benchmarking platform for text generation models. In The 41st international ACM SIGIR conference on research & development in information retrieval, pages 1097–1100. A BabyLM dataset Table 5 contains a detailed overview of the BabyLM dataset. For our experiments, we preprocess the data using the methodology from Samuel (2023). The text is normalized and cleaned up in order to ensure a unified format. We cast direct speech in double quotes, remove arbitrary and semantically irrelevant tokens and conserve formatting, where necessary, with a special [PAR] symbol. Dataset Domain # Words Strict-Small Strict CHILDES (MacWhinney, 2014) British National Corpus (BNC), dialogue portion 1 Project Gutenberg (children’s stories) (Gerlach and Font-Clos, 2018) Written English Movie subtitles OpenSubtitles (Lison and Tiedemann, 2016) Simple Wikipedia 2 Written Simple English Dialogue Switchboard Dialog Act Corpus (Stolcke et al., 2000) Child-directed speech Dialogue Total 2.84M 28.90M 0.93M 7.76M 2.54M 26.37M 2.04M 19.96M 1.45M 14.67M 1.34M 0.15M 9.95M 99.01M Table 5: Contents of the BabyLM datasets for the Strict and Strict-Small tracks, including the domain and word counts. 1http://www.natcorp.ox.ac.uk/, 2https://dumps.wikimedia.org/simplewiki/20241001/. B TinyStories - Detailed Evaluation In order to demonstrate a tangible example of the augmentation process, and provide the opportunity to directly judge the quality of the generations, we include sample generations for all our GPT-Neo models: {5M, 10M, 25M, 50M, 75M, 100M, 440M (V2)}, as well as the model released by Eldan and Li (2023) – 373M (V1). We sample a story from the training set, truncate it to around 15% to 30% of its length, and ask the models to generate a completion with greedy decoding. The results are shown in Table 6. The 50M words model generation is also illustrated in Figure 1. We notice that even for the smaller models, the quality of the generation remains good in terms of grammar, coherence to the story’s beginning, and creativity. 
Additionally, there is little similarity to the original completion, even though the prompt is taken from the training set of the models. C Training Hyperparameters We list all the hyperparameters used to train the GPT-Neo models on subsets of TinyStories in Table 7. Where appropriate, parameters that vary per group of models are listed separately. We also list the hyperparameters used to train the LTG-BERT transformer encoder models in Table 8. We follow the hyperparameter and training regime of Samuel et al. (2023), with some modifications, partly to adjust to the scale of our computational resources and infrastructure. D GLUE Evaluation Hyperparameters We list the hyperparameters utilized for fine-tuning the LTG-BERT models on GLUE in Table 9, largely also based on Samuel et al. (2023). We find that it’s critical for the stability of training to initialize the added classification head according to the original implementation released by the authors. Specifically, we use a truncated normal distribution N (µ, σ2) in the range [a, b] where µ = 0, σ = , 5.0·dmodel a = −2σ, b = 2σ and dmodel is the hidden size. The bias of the layer is also initialized to zero. Lastly, one notable difference affecting the downstream performance of our models is that we train with mixed precision. We use the fp16 implementation provided by the HuggingFace Trainer module, which reduces the numerical precision of certain variables from fp32 to fp16, increasing computational efficiency. (cid:113) 2.0 E LLM Evaluation For the LLM evaluation, we employ Claude-3.5 Sonnet. Our prompt is a modified version of the one introduced in Eldan and Li (2023). We present our prompt in Table 10. Model Prompt Original 5M 10M 25M 50M 75M 100M 440M (V2) 373M (V1) Generated Text One day, a little boy named Tim went to the park. He saw a big tiger. The tiger was not mean, but very easy to play with. Tim and the tiger played all day. They had lots of fun. Then, something unexpected happened. The tiger started to shake. Tim was scared. He did not know what was going on. But then, the tiger turned into a nice dog. Tim was very surprised. Tim and the dog played together now. They were very happy. The dog was easy to play with too. At the end of the day, Tim went home with his new friend. was very big and had a long tail. Tim wanted to be friends with the tiger. Tim said, "Hi tiger, I am Tim. Do you want to play with me?" The tiger looked at Tim and said, "Yes, I want to play with you." Tim and the tiger played all day. They ran, jumped, and laughed. The tiger was very happy. But then, the tiger got tired. It wanted to rest. Tim said, "I am tired. Let’s rest." The tiger yawned and said, "Okay, Tim. Let’s rest." Tim and the tiger sat under a tree. They talked and laughed. The tiger was happy to have a new friend. They both had a fun day at the park. was very big and had a long tail. Tim was scared, but he wanted to be brave. He walked up to the tiger and said, "Hi, tiger! I am Tim. Do you want to play with me?" The tiger looked at Tim and said, "Yes, I want to play. But I am scared of the big tiger." Tim was not scared. He thought the tiger was nice. Tim and the tiger played together. They ran and jumped. They had a lot of fun. Then, something unexpected happened. The tiger started to change. It became a tiger! The tiger was not a tiger anymore. It was a tiger! Tim was very surprised. The tiger was not a tiger anymore. It was a big tiger! Tim and the tiger became best friends. They played together every day. 
was very big and had a long tail. Tim was scared, but he wanted to be brave. Tim said, "Hi, tiger! I am Tim. Do you want to play with me?" The tiger looked at Tim and said, "Yes, I want to play. But I am very hungry. Can you help me find some food?" Tim thought for a moment and said, "Yes, I can help you. Let’s look for food together." They looked and looked, but they could not find any food. Then, something unexpected happened. The tiger started to talk! "I am not a tiger, I am a magic tiger. I can make your wishes come true," said the tiger. Tim was very surprised. He wished for a big ice cream cone. The magic tiger made his wish come true. Tim and the magic tiger became good friends and played in the park every day. was very nice. Tim wanted to play with the tiger. He said, "Hi, tiger! Let’s play!" The tiger did not say anything. Tim was sad. He wanted to find a way to make the tiger happy. Tim had an idea. He found a ball and threw it. The tiger saw the ball and started to play with it. Tim was happy. He said, "Now, the tiger is happy too!" They played together all day. The tiger was not sad anymore. Tim and the tiger became best friends. was very big and had a long tail. Tim was not scared. He wanted to be friends with the tiger. Tim said, "Hi, tiger! My name is Tim. Do you want to play with me?" The tiger looked at Tim and said, "Yes, I want to play. My name is Sam." Tim and Sam played all day. They ran, jumped, and laughed. They had so much fun. Tim was happy to have a new friend. At the end of the day, Tim said, "Bye, Sam! I will come back to play with you again." Sam said, "Bye, Tim! I will wait for you here." Tim went home and told his mom about his new friend. was sad. Tim asked, "Why are you sad, Mr. Tiger?" The tiger said, "I lost my toy. Can you help me find it?" Tim wanted to help the tiger. They looked for the toy together. They looked under the trees and behind the bushes. Then, they saw a big box. The tiger said, "I think my toy is in the box." Tim opened the box. Inside, there was a big cake! The tiger was happy. But, the cake was not a real tiger. It was a toy tiger! Tim laughed and said, "Silly tiger, you are not a real tiger!" They played with the toy tiger and had a fun day at the park. was very hungry. Tim wanted to help the tiger. He thought, "I will feed the tiger some food." Tim went home and got some food. He came back to the park and gave the food to the tiger. The tiger ate the food and was very happy. Tim felt good because he helped the tiger. The tiger said, "Thank you, Tim. You are a good friend." Tim and the tiger played in the park all day. They had lots of fun. Tim learned that it is good to help others. The tiger learned that it is good to be kind and help others. And they both lived happily ever after. was sad. Tim wanted to help the tiger. He thought the tiger needed a friend. Tim went to the tiger and said, "Hi, I am Tim. I want to help you." The tiger looked at Tim and said, "Thank you, Tim. I am sad because I am alone. I need a friend." Tim thought for a moment and said, "I will be your friend, tiger. We can play together." The tiger was happy. Tim and the tiger played all day. They had lots of fun. The moral of the story is: Helping others can make you happy too. Table 6: Story completions for models trained on different amounts of TinyStories data. The original prompt and the training set completion are also included. 
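The classification-head initialization described in Appendix D (a truncated normal with mean 0, standard deviation sqrt(2.0 / (5.0 · d_model)) once the typeset square root is restored, cutoffs at -2σ and 2σ, and a zero bias) corresponds to code along these lines. This is an illustrative sketch; the authors' released implementation remains the reference.

import math
import torch.nn as nn

def init_classifier_head(linear, d_model):
    # Truncated-normal initialization for the added GLUE classification head.
    std = math.sqrt(2.0 / (5.0 * d_model))
    nn.init.trunc_normal_(linear.weight, mean=0.0, std=std, a=-2 * std, b=2 * std)
    nn.init.zeros_(linear.bias)

# Example: a two-class head on top of a 768-dimensional encoder (the Strict track hidden size).
head = nn.Linear(768, 2)
init_classifier_head(head, d_model=768)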
Hyperparameter GPT-Neo – 5 / 10 / 25 / 50, 75, 100, 440 (M) Number of Parameters Number of Layers Attention Heads Hidden size Layer norm ϵ Sequence Length Max position embeddings Attention Dropout Classifier Dropout Embed. Dropout Resid. Dropout Summary first Dropout Weight decay Vocab Size Context length batch size gradient accumulation steps gradient clipping Training steps optimizer Adam β1 Adam β2 Adam ϵ Initial learning rate Final learning rate Learning rate scheduler schedule Warmup ratio 41M 4 16 768 1.0e-5 512 512 0.50 / 0.40 / 0.25 / 0.20 0.50 / 0.40 / 0.25 / 0.20 0.50 / 0.40 / 0.25 / 0.20 0.50 / 0.40 / 0.25 / 0.20 0.40 / 0.30 / 0.15 / 0.10 0.20 / 0.20 / 0.20 / 0.10 6411 / 6411 / 16384 / 16384 512 24 32 2.0 15 000 AdamW 0.9 0.95 1.0e-8 5.0e-4 5.0e-5 cosine 1.6% Table 7: Hyperparameters used for training GPT-Neo models on TinyStories. Hyperparameter Strict Strict-Small Number of parameters Number of layers Attention heads Hidden size FF intermediate size Position Bucket size Layer norm ϵ Vocabulary size Sequence length Max position embeddings Hidden dropout Attention dropout Training steps Batch size Gradient Accumulation Steps Warmup ratio Initial learning rate Final learning rate Learning rate scheduler Weight decay Optimizer Adam ϵ Adam β1 Adam β2 Gradient clipping 24M 12 6 384 1024 32 1e-7 6 144 128 512 0.1 0.1 20 000 80 32 1.6% 6e-4 6e-5 cosine 0.1 98M 12 12 768 2048 32 1e-7 16 384 128 512 0.1 0.1 20 000 80 32 1.6% 6e-4 6e-5 cosine 0.1 AdamW AdamW 1e-6 0.9 0.98 2.0 1e-6 0.9 0.98 2.0 Table 8: Hyperparameters used to train all LTG-BERT models for the different tracks. With max position embedding we refer to the architectural capacity of the model – the model was trained with sequence length = 128. Hyperparameter BoolQ, MNLI, MNLI-mm, MRPC CoLA, RTE, WSC MultiRC, QNLI, QQP, SST-2 Batch size Number of epochs Dropot Warmup proportion Learning Rate Decay type Weight decay Optimizer Adam ϵ Adam β1 Adam β2 Max length Gradient clip Mixed precision 32 10 0.1 0.1 3.0e-5 linear 0.01 AdamW 1.0e-6 0.9 0.999 128 2.0 True 16 10 0.1 0.1 3.0e-5 linear 0.01 AdamW 1.0e-6 0.9 0.999 128 2.0 True Table 9: Hyperparameters used for finetuning the LTG-BERT models on GLUE. In the following exercise, the student is given a beginning of a story. The student needs to complete it into a full story. The exercise tests the student’s language abilities and creativity. The symbol *** marks the separator between the prescribed beginning and the student’s completion: Once upon a time, there was tiger who liked to play the guitar. One day, a bunny heard the guitar from a distance and *** asked, "Hello tiger, can you play the guitar for me?" The tiger said, "Yes! I can!" So the bunny plucked up the guitar and started to play. But then, the tiger roared and kicked the guitar too hard. The guitar made a big mess. The panda was so sad that he hopped away and never came back. The end. Please provide your general assessment about the part written by the student (the one after the *** symbol). Is it gramatically correct? Is it consistent with the beginning of the story? Pay special attention to whether the student manages to complete the sentence which is split in the middle by the separator ***. I’ll provide an assessment of the student’s completion of the story: Grammar: The student’s portion is generally grammatically correct, with a few minor issues: ... ... In summary, while the student completes the split sentence correctly, the overall story lacks coherence and consistency with the given beginning. 
Now, grade the student’s completion in terms of 1. Grammar, 2. Creativity, 3. Consistency with the story’s beginning and whether the plot makes sense. Please provide grades from a scale of 1-10 for each of the requested categories, namely: 1. Grammar, 2. Creativity, 3. Consistency. Moreover, please provide your best guess of what the age of the student might be, as reflected from the completion. Choose from possible age groups: A: 3 or under. B: 4-5. C: 6-7. D: 8-9. E: 10-12. F: 13-16. Here are the grades for the student’s completion on a scale of 1-10: 1. Grammar: 7/10 ... 2. Creativity: 5/10 ... Consistency: 3/10 ... Age group estimate: B: 4-5 years old ... Table 10: A sample interaction with Claude-3.5 Sonnet. The story is generated by the GPT-Neo model trained on 50M words of TinyStories. We include the prompt which also contains the model’s generation. The LLM’s responses is stylized in italic. For brevity, part of the LLM response was omitted and replaced with ‘...’.
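An interaction like the one in Table 10 can be scripted against the Anthropic API roughly as follows. The client call is shown schematically: the model identifier and token limit are assumptions, the exercise prompt is abbreviated relative to Table 10, and the regular expression is just one way to pull the three 1-10 grades out of the free-text reply.

import re
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
GRADES = re.compile(r"(Grammar|Creativity|Consistency):\s*(\d+)\s*/\s*10", re.I)

def grade_completion(story_beginning, completion):
    # Ask the judge model for 1-10 grades and parse them from its reply.
    exercise = (
        "In the following exercise, the student is given a beginning of a story. "
        "The student needs to complete it into a full story. The symbol *** marks "
        "the separator between the prescribed beginning and the student's completion:\n\n"
        f"{story_beginning} ***{completion}\n\n"
        "Grade the student's completion in terms of 1. Grammar, 2. Creativity, "
        "3. Consistency with the story's beginning, each on a scale of 1-10."
    )
    reply = client.messages.create(
        model="claude-3-5-sonnet-20240620",
        max_tokens=512,
        messages=[{"role": "user", "content": exercise}],
    )
    text = reply.content[0].text
    return {name.capitalize(): int(score) for name, score in GRADES.findall(text)}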
synthetic_cpt
2
LLM-Adapters_An_Adapter_Family_for_Parameter-Efficient_Fine-Tuning_of_Large_Language_Models.pdf
arXiv:2406.10300v1 [cs.SE] 13 Jun 2024

Large Language Models as Software Components: A Taxonomy for LLM-Integrated Applications

Irene Weber
Kempten University of Applied Sciences, Germany
[email protected]

Abstract

Large Language Models (LLMs) have become widely adopted recently. Research explores their use both as autonomous agents and as tools for software engineering. LLM-integrated applications, on the other hand, are software systems that leverage an LLM to perform tasks that would otherwise be impossible or require significant coding effort. While LLM-integrated application engineering is emerging as a new discipline, its terminology, concepts and methods need to be established. This study provides a taxonomy for LLM-integrated applications, offering a framework for analyzing and describing these systems. It also demonstrates various ways to utilize LLMs in applications, as well as options for implementing such integrations. Following established methods, we analyze a sample of recent LLM-integrated applications to identify relevant dimensions. We evaluate the taxonomy by applying it to additional cases. This review shows that applications integrate LLMs in numerous ways for various purposes. Frequently, they comprise multiple LLM integrations, which we term “LLM components”. To gain a clear understanding of an application’s architecture, we examine each LLM component separately. We identify thirteen dimensions along which to characterize an LLM component, including the LLM skills leveraged, the format of the output, and more. LLM-integrated applications are described as combinations of their LLM components. We suggest a concise representation using feature vectors for visualization. The taxonomy is effective for describing LLM-integrated applications. It can contribute to theory building in the nascent field of LLM-integrated application engineering and aid in developing such systems. Researchers and practitioners explore numerous creative ways to leverage LLMs in applications. Though challenges persist, integrating LLMs may revolutionize the way software systems are built.

Keywords: component, large language model, LLM-integrated, taxonomy, copilot, architecture, AI agent, LLM

1. Introduction

Large Language Models (LLMs) have significantly impacted various sectors of economy and society [47]. Due to their proficiency in text understanding, creative work, communication, knowledge work, and code writing, they have been adopted in numerous fields, such as medicine, law, marketing, education, human resources, etc. Public discussions often focus on the ethical aspects and societal consequences of these systems [36, 39]. Meanwhile, research investigates Artificial General Intelligences and autonomous AI agents that can use services, data sources, and other tools, and collaborate to solve complex tasks [11, 62, 57, 21]. In addition, LLMs offer many opportunities to enhance software systems. They enable natural language interaction [59], automate complex tasks [19], and provide supportive collaboration, as seen with recent LLM-based assistant products often branded as “copilots”1. This paper addresses the potential of LLMs for software development by integrating their capabilities as components into software systems.
This contrasts with current software engineering research, which views LLMs as tools for software development rather than as software components [14, 22], and with the considerable body of research examining LLMs as au- tonomous agents within multiagent systems [21]. Software systems that invoke an LLM and process its output are referred to as “LLM-integrated appli- cations”, “LLM-integrated systems”, “LLM-based ap- plications”, etc. [32, 13, 57]. LLMs are versatile, mul- tipurpose tools capable of providing functionalities that would otherwise be unfeasible or require sub- stantial development efforts [15, 24]. By significantly expediting system development, they have the poten- tial to revolutionize not only the way users interact with technology, but also the fundamental processes of software development. LLM-integrated applications engineering is emerging as a research field. E.g., [10] proposes LLM Sys- tems Engineering (LLM-SE) as a novel discipline, and [44, 8, 7] discuss experiences and challenges that de- velopers of such systems encounter in practice. This study develops a taxonomy that provides a structured framework for categorizing and analyzing LLM-integrated applications across various domains. To develop and evaluate the taxonomy, we collected a sample of LLM-integrated applications, concentrat- ing on technical and industrial domains. These ap- plications showcase a broad range of opportunities to leverage LLMs, often integrating LLMs in mul- tiple ways for distinct purposes. In developing the taxonomy, we found that examining each of these in- tegrations, termed “LLM components”, separately is crucial for a clear understanding of an application’s architecture. The taxonomy adopts an original architectural per- spective, focusing on how the application interacts with the LLM while abstracting from the specifics of application domains. For researchers, the taxon- omy contributes to shape a common understanding and terminology, thus aiding theory building in this emerging domain [29, 50, 18]. For practitioners, the taxonomy provides inspiration for potential uses of LLMs in applications, presents design options, and helps identify challenges and approaches to address them. Objectives. In this study, a taxonomy is understood as a set of dimensions divided into characteristics. The objective is to identify dimensions that are useful for categorizing the integration of LLMs in applica- tions from an architectural perspective. To be most effective, the taxonomy should be easy to understand and apply, yet distinctive enough to uncover the es- sential aspects. Additionally, we aim to develop a visual representation tailored to the taxonomy’s in- tended purposes. Overview. The following section 2 provides back- ground on LLMs and introduces relevant concepts. Section 3 presents an overview of related work. The study design adheres to a Design Science Research approach [46]. We apply established methods for tax- onomy design [42, 48] as described in Section 4. This section also presents the sample of LLM-integrated applications used for this study. The developed tax- onomy is presented, demonstrated and formally eval- uated in section 5. In section 6, we discuss its usabil- ity and usefulness. Section 7 summarizes the contri- butions, addresses limitations, and concludes. 2. Large Language Models 2.1. 
Background 1E.g., https://docs.github.com/en/copilot, https://copilot.cloud.microsoft/en-us/copilot-excel, https://www.salesforce.com/einsteincopilot State-of-the-art LLMs such as GPT-3.5, GPT-4, Llama, PALM2, etc., are artificial neural networks i.e., very simple processing consisting of neurons, 2 units, that are organized in layers and connected by weighted links. Training a neural network means adapting these weights such that the neural network shows a certain desired behavior. Specifically, an LLM is trained to predict the likelihoods of pieces of text termed, tokens, to occur as continuations of a given text presented as input to the LLM. This in- put is referred to as prompt. The prompt combined with the produced output constitutes the context of an LLM. It may comprise more than 100k tokens in state-of-the-art LLMs2. Still, its length is limited and determines the maximum size of prompts and outputs that an LLM is capable of processing and generating at a time. Training of an LLM optimizes its parameters such that its computed likelihoods align with real text ex- amples. The training data is a vast body of text snip- pets extracted, processed, and curated from sources such as Wikipedia, Github code repositories, common websites, books, or news archives. An LLM trained on massive examples is termed a foundation model or pre-trained model. During training, an LLM not only learns to produce correct language but also ab- sorbs and stores information and factual knowledge. However, it is well known that LLMs frequently pick up biases, leading to ethical problems. They may also produce factually incorrect outputs that sound plausible and convincing, termed hallucinations. Recent findings show that LLMs can be applied to a wide range of tasks by appropriately formulating prompts. Different prompt patterns succeed in dif- ferent tasks. Basic approaches rely on instructing the LLM to solve a task described or explained in the prompt. In few-shot prompting (also known as few-shot learning), the prompt is augmented with ex- ample input-output pairs illustrating how to solve the task, e.g., the requested output format. The number of examples can vary. Prompting with one example is called one-shot prompting, while prompting without any examples is called zero-shot prompting. One-shot and few-shot prompting fall under the broader cat- egory of in-context learning. Prompt patterns such 2https://platform.openai.com/docs/models as chain-of-thought and thinking-aloud aim to elicit advanced reasoning capabilities from LLMs. As effective prompts are crucial for unlocking the di- verse capabilities of an LLM, the discipline of prompt engineering is evolving, focusing on the systematic design and management of prompts [66, 9, 53, 31]. 2.2. Definitions Invoking an LLM results in an input-processing- output sequence: Upon receiving a prompt, the LLM processes it and generates an output. We refer to an individual sequence of input-processing-output per- formed by the LLM as LLM invocation, and define an LLM-integrated application as a system in which the software generates the prompt for the LLM and processes its output. The concept of an application is broad, encompassing service-oriented architectures and systems with components loosely coupled via API calls. Given an LLM’s versatility, an application can uti- lize it for different tasks, each demanding a specific approach to create the prompt and handle the re- sult. 
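A minimal sketch of such an input-processing-output round trip, in which the application builds the prompt, invokes the LLM, and post-processes the output, is shown below using the OpenAI chat-completions client. The model name, the task, and the line-based parsing are illustrative assumptions; any LLM API could take the client's place.

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def invoke_llm(user_request):
    # One LLM invocation: the application generates the prompt and processes the output.
    prompt = (
        "Break the following request into a numbered list of short steps.\n"
        f"Request: {user_request}\n"
        "Answer with one step per line and nothing else."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    text = response.choices[0].message.content
    # Post-processing: strip numbering and blank lines before handing the steps on.
    return [line.lstrip("0123456789. ").strip() for line in text.splitlines() if line.strip()]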
This paper defines a particular software compo- nent that accomplishes this as an LLM-based software component or, simply, LLM component. An LLM- integrated application can comprise several LLM components. The study develops a taxonomy for LLM components. LLM-integrated applications are described as combinations of their LLM components. 3. Related Work With the recent progress in generative AI and LLMs, the interest in these techniques has increased, and numerous surveys have been published, providing an extensive overview of technical aspects of LLMs [72], reviewing LLMs as tools for software engineering [22], and discussing the technical challenges of applying LLMs across various fields [25]. Further studies ad- dress the regulatory and ethical aspects of Genera- tive AI and ChatGPT, with a particular focus on AI-human collaboration [41], and Augmented Lan- guage Models (ALMs), which are LLMs that enhance 3 their capabilities by querying tools such as APIs, databases, and web search engines [38]. Taxomonies related to LLMs include a taxonomy for prompts designed to solve complex tasks [49] and a taxonomy of methods for cost-effectively invoking a remote LLM [60]. A comparative analysis of stud- ies on applications of ChatGPT is provided by [27], whereas LLMs are compared based on their applica- tion domains and the tasks they solve in [20]. Most closely related to the taxonomy developed here is a taxonomy for LLM-powered multiagent architectures [21] which focuses on autonomous agents with less technical detail. Taxonomies of applications of AI in enterprises [48] and applications of generative AI, in- cluding but not limited to LLMs [52], are developed using methods similar to those in our study. Several taxonomies in the field of conversational agents and task-oriented dialog (TOD) systems ad- dress system architecture [1, 40, 12, 3]. However, they omit detailed coverage of the integration of generative language models. 4. Methods We constructed the taxonomy following established guidelines [42, 48, 29], drawing from a sample of LLM-integrated applications. These applications are detailed in section 4.1. 4.1. Development Taxonomy. We derived an initial taxonomy from the standard architecture of conversational assistants de- scribed in [3], guided by the idea that conversational assistants are essentially “chatbots with tools”, i.e., language-operated user interfaces that interact with external systems. This approach proved unsuccessful. The second version was based on the classical three- tier software architecture, and then extended over several development cycles. By repeatedly apply- ing the evolving taxonomy to the example instances, we identified dimensions and characteristics using an “empirical-to-conceptual” approach. When new di- mensions emerged, additional characteristics were de- rived in a “conceptual-to-empirical” manner. After five major refinement cycles, the set of dimensions and characteristics solidified. In the subsequent eval- uation phase, we applied the taxonomy to a new set of example instances that were not considered while constructing the taxonomy. As the dimensions and characteristics remained stable, the taxonomy was considered complete. In the final phase, we refined the wording and visual format of the taxonomy. Visualization. Developing a taxonomy involves cre- ating a representation that effectively supports its intended purpose [29]. 
Taxonomies can be repre- sented in various formats, with morphological boxes [54, 55] or radar charts [21] being well-established approaches. We evaluated morphological boxes, be- cause they effectively position categorized instances within the design space. However, we found that they make it difficult to perceive a group of categorized in- stances as a whole since they occupy a large display area. This drawback is significant for our purposes, as LLM-integrated applications often comprise mul- tiple LLM components. Therefore, we developed a more condensed visualization of the taxonomy based on feature vectors. Example instances. We searched for instances of LLM-integrated applications for taxonomy develop- ment that should meet the following criteria: • The application aims for real-world use rather than focusing on research only (such as testbeds for experiments or proofs-of-concept). It demon- strates efforts towards practical usability and ad- dresses challenges encountered in real-world sce- narios. • The application’s architecture, particularly its LLM components, is described in sufficient de- tail for analysis. • The sample of instances covers a diverse range of architectures. • The example instances are situated within indus- trial or technical domains, as we aim to focus on LLM-integrated applications beyond well-known fields like law, medicine, marketing, human re- sources, and education. 4 The search revealed a predominance of theoretical re- search on LLM-integrated applications while papers focusing on practically applied systems were scarce. Searching non-scientific websites uncovered commer- cially advertised AI-powered applications, but their internal workings were typically undisclosed, and reli- able evaluations were lacking. Furthermore, the het- erogeneous terminology and concepts in this emerg- literature ing field make a comprehensive formal search unfeasible. Instead, by repeatedly search- ing Google Scholar and non-scientific websites using terms “LLM-integrated applications”, “LLM-powered applications”, “LLM-enhanced system”, “LLM” and “tools”, along similar variants, we selected six suitable instances. Some of them integrate LLMs in multiple ways, totaling eleven distinct LLM components. For a thorough evaluation, we selected new instances using relaxed criteria, including those intended for research. Additionally, we included a real-world ex- ample lacking explicit documentation to broaden the diversity of our sample and assess the taxonomy’s coverage. Within the five selected instances, we iden- tified ten LLM components. 4.2. Sample of LLM-integrated applications Table 1 gives an overview of the sample. Names of ap- plications and LLM components are uniformly writ- ten as one CamelCase word and typeset in small caps, deviating from the format chosen by the respective authors. LowCode. LowCode is a web-based application consisting of a prompt-definition section and a di- alogue section. The prompt-definition section sup- ports the design of prompts for complex tasks, such as composing extensive essays, writing resumes for job applications or acting as a hotel service chatbot [5]. In the dialogue section, users converse with an LLM to complete the complex task based on the de- fined prompt. LowCode comprises two LLM components termed Planning and Executing. Planning operates in the prompt-definition section, where a user roughly describes a complex task, and Planning designs a workflow for solving it. 
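How a Planning-style component can act as a prompt generator is sketched below: one prompt asks the LLM to draft the workflow, and the approved (possibly user-edited) steps are later folded into the prompt that configures Executing. The wording and function names are invented for illustration and are not LowCode's actual prompts.

def build_planning_prompt(task_description):
    # Ask the LLM to draft a workflow for the user's rough task description.
    return (
        "You design workflows. For the task below, list the steps needed to "
        "complete it, one per line, in the order they should be performed.\n"
        f"Task: {task_description}"
    )

def build_executing_prompt(task_description, approved_steps):
    # Prompt for the assistant that will carry out the approved workflow in dialogue.
    steps = "\n".join(f"{i + 1}. {step}" for i, step in enumerate(approved_steps))
    return (
        f"You are an assistant helping the user with: {task_description}\n"
        "Follow this workflow, one step per dialogue turn:\n"
        f"{steps}\n"
        "Ask the user for any missing information before moving to the next step."
    )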
The prompt-definition section offers a low-code development environment where the LLM-generated workflow is visualized as a graphical flowchart, allowing a user to edit and adjust the logic of the flow and the contents of its steps. For instance, in essay-writing scenarios, this involves inserting additional sections, rearranging sections, and refining the contents of sections. Once approved by the user, LowCode translates the modified workflow back into natural language and incorporates it into a prompt for Executing. In the dialogue section, users converse in interactive, multi-turn dialogues with Executing. As defined in the prompt, it acts as an assistant for tasks such as writing an essay or resume, or as a hotel service chatbot. While the idea of the LLM planning a workflow might suggest using the LLM for application control, LowCode Planning actually serves as a prompt generator that supports developing prompts for complex tasks.

Honeycomb. Honeycomb is an observability platform collecting data from software applications in distributed environments for monitoring. Users define queries to retrieve information about the observed software systems through Honeycomb’s Query Builder UI. The recently added LLM-based QueryAssistant allows users to articulate inquiries in plain English, such as “slow endpoints by status code” or “which service has the highest latency?” The QueryAssistant converts these into queries in Honeycomb’s format, which users can execute and manually refine [7, 8].

MyCrunchGpt. MyCrunchGpt acts as an expert system within the engineering domain, specifically for airfoil design and calculations in fluid mechanics. These tasks require complex workflows comprising several steps such as preparing data, parameterizing tools, and evaluating results, using various software systems and tools. The aim of MyCrunchGpt is to facilitate the definition of these workflows and automate their execution [28].

MyCrunchGpt offers a web interface featuring a dialogue window for inputting commands in plain English, along with separate windows displaying the output and results of software tools invoked by MyCrunchGpt in the backend.

Table 1: Example instances selected for development (top 6) and evaluation (bottom 5)

Application          References    LLM components
Honeycomb            [7, 8]        QueryAssistant
LowCode              [5], [35]     Planning, Executing
MyCrunchGpt          [28]          DesignAssistant, SettingsEditor, DomainExpert
MatrixProduction     [69]          Manager, Operator
WorkplaceRobot       [37]          TaskPlanning
AutoDroid            [64]          TaskExecutor, MemoryGenerator
ProgPrompt           [51]          ActionPlanning, ScenarioFeedback
FactoryAssistants    [26]          QuestionAnswering
SgpTod               [71]          DstPrompter, PolicyPrompter
TruckPlatoon         [70]          Reporting
ExcelCopilot         [16, 44]      ActionExecutor, Advisor, IntentDetector, Explainer

MyCrunchGpt relies on predefined workflows, not supporting deviations or cycles. By appending a specific instruction to the dialogue history in the prompt for each step of the workflow, it uses the LLM as a smart parser to extract parameters for APIs and backend tools from user input. APIs and tools are called in the predefined order [28, p. 56].

MyCrunchGpt is still in development. The paper [28] explains the domain as well as the integration of the LLM, but does not fully detail the implementation of the latter. Still, MyCrunchGpt illustrates innovative applications of an LLM in a technical domain.
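Honeycomb's QueryAssistant and MyCrunchGpt's use of the LLM as a smart parser follow the same underlying pattern: wrap the user's plain-English input in an instruction that fixes the output format, then validate the reply before any backend call is made. A hedged sketch of that pattern follows; the JSON schema, field names, and the commented example are invented for illustration.

import json

def build_parser_prompt(user_input, expected_fields):
    # Instruct the LLM to return only a JSON object with the required parameters.
    return (
        "Extract the parameters for the backend call from the user's request.\n"
        f"User request: {user_input}\n"
        f"Reply with a JSON object containing exactly these keys: {', '.join(expected_fields)}. "
        "Reply with JSON only, no explanations."
    )

def parse_llm_reply(reply, expected_fields):
    # Validate the LLM output before any API or tool is invoked.
    data = json.loads(reply)  # raises ValueError if the reply is not valid JSON
    missing = [field for field in expected_fields if field not in data]
    if missing:
        raise ValueError(f"LLM reply is missing fields: {missing}")
    return data

# Hypothetical use for one step of an airfoil-design workflow:
# prompt = build_parser_prompt("Use NACA 2412 at Reynolds number 1e6",
#                              ["airfoil", "reynolds_number"])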
We categorize three LLM components solving tasks within MyCrunchGpt: a DesignAssistant guiding users through workflows and requesting pa- rameters for function and API calls; a SettingsEd- itor updating a JSON file with settings for a back- end software tool; and a DomainExpert which helps evaluating results by comparing them to related re- sults, e.g., existing airfoil designs, which it derives from its trained knowledge. MatrixProduction. MatrixProduction em- ploys an LLM for controlling a matrix production system [69]. While in a classical line production setup, workstations are arranged linearly and the manufacturing steps follow a fixed sequence, matrix production is oriented towards greater flexibility. transport vehicles Autonomous carry materials and intermediate products to workstations, termed automation modules, each offering a spectrum of manufacturing skills that it can contribute to the production process. Compared to line production, matrix production is highly adaptable and can manufacture a variety of personalized products with full automation. This requires intelligent production management to (a) create workplans that orchestrate and schedule the automation modules’ skills, and (b) program the involved automation modules such that they execute the required processing steps. MatrixProduction incorporates two LLM compo- nents: Manager creates workplans as sequences of skills (a), while Operator generates programs for the involved automation modules (b). MatrixProduction prompts Manager and Op- erator to provide textual explanations in addition to the required sequences of skills or automation module programs. The LLM output is processed by a parser before being used to control the physi- cal systems. Manager relies on built-in production- specific knowledge of the LLM such as “a hole is pro- duced by drilling”. Noteworthy in this approach is its tight integra- tion into the system landscape of Industry 4.0. The few-shot Manager and Operator prompts are generated automatically using Asset Adminis- tration Shells, which are standardized, technology- 6 independent data repositories storing digital twins of manufacturing assets for use in Industry 4.0 [2]. WorkplaceRobot. An experimental robot system is enhanced with LLM-based task planning in [37]. The robot operates in a workplace environment fea- turing a desk and several objects. It has previously been trained to execute basic operations expressed in natural language such as “open the drawer” or “take the pink object and place it in the drawer”. LLM-based task planning enables the robot to per- form more complex orders like “tidy up the work area and turn off all the lights”. To this end, an LLM is prompted to generate a sequence of basic operations that accomplish the complex order. Although the robot expects operations phrased in language, the LLM is prompted with a natural Python coding task. For instance, the basic opera- tion “turn on the green light” corresponds to a Python command push_button(’green’). The prompt for the LLM includes several examples each consisting of a description of an environment state, a complex order formatted as a comment, and a sequence of Python robot commands that accomplish the com- plex order. When invoking the LLM to generate the Python program for a new order, the prompt is aug- mented with a description of the environment’s cur- rent state and the new order as a comment. The Python code produced by the LLM is trans- lated back to a sequence of basic operations in nat- ural language. 
When the robot executes these oper- ations, there is no feedback about successful comple- tion. Rather, the system assumes that all basic op- erations require a fixed number of timesteps to com- plete. AutoDroid. The goal of mobile task automation is hands-free user interaction for smartphones through voice commands. AutoDroid is a voice control sys- tem for smartphones that can automatically execute complex orders such as “remind me to do laundry on May 11th” or “delete the last photo I took” [64, 65]. as “scroll down, then press button x” in the calen- dar app. AutoDroid employs an LLM component TaskExecutor to plan these sequences of opera- tions. The challenge is that the next operation to ex- ecute depends on the current state of the Android app which continuously changes as the app is operated. AutoDroid solves this by invoking the TaskEx- ecutor repeatedly after each app operation with the prompt comprising the updated state of the Graph- ical User Interface (GUI) along with the user’s com- plex order. Before executing irrevocable operations, such as per- manently deleting data or calling a contact, Auto- Droid prompts the user to confirm or adjust the op- eration. TaskExecutor is instructed to include a “confirmation needed” hint in its output for such op- erations. The prompt for TaskExecutor comprises an ex- tract from a knowledge base which is built automati- cally in an offline learning phase as follows: In a first step, a “UI Automator” (which is not an LLM com- ponent) automatically and randomly operates the GUI elements of an Android app to generate a UI Transition Graph (UTG). The UTG has GUI states as nodes and the possible transitions between GUI states as edges. As next steps, AutoDroid invokes two LLM components referred to as MemoryGen- erators to analyze the UTG. The first MemoryGenerator is prompted repeat- edly for each GUI state in the UTG. Its task is to explain the functionality of the GUI elements. Be- sides instructions and examples of the table format desired as output, its prompt includes an HTML rep- resentation of the GUI state, the GUI actions preced- ing this state, and the GUI element operated next. Its output consists of tuples explaining the function- ality of a GUI element by naming the derived func- tionality (e.g., “delete all the events in the calendar app”) and the GUI states and GUI element actions in- volved. Similarly, the second MemoryGenerator is prompted to output a table listing GUI states and explanations of their functions. These tables consti- tute AutoDroid’s knowledge base. Such complex orders are fulfilled by performing se- quences of basic operations in an Android app, such ProgPrompt. ProgPrompt [51] is an approach to to LLM-based robot task planning similar 7 Its robot is controlled by WorkplaceRobot. Python code and works in a real and a simulated household environment. ProgPrompt comprises two LLM components. Ac- tionPlanning generates Python scripts for tasks such as “microwave salmon” using basic opera- tions like grab(’salmon’), open(’microwave’), and putin(’salmon’, ’microwave’), notably with- out considering the current state of the environment. To establish a feedback loop with the environment, ActionPlanning adds assert statements. These statements verify the preconditions of basic opera- tions and trigger remedial actions when preconditions are not met. 
For instance, a script for "microwave salmon" comprises the following code fragment:

    assert('microwave' is 'opened')
    else: open('microwave')
    putin('salmon', 'microwave')

When operating in the simulated environment, ProgPrompt can verify an assert statement through its second LLM component, ScenarioFeedback. Prompted with the current state of the environment and the assert statement, ScenarioFeedback evaluates it and outputs True or False.

FactoryAssistants. FactoryAssistants advise workers on troubleshooting production line issues in two manufacturing domains: detergent production and textile production [26]. The assistants leverage domain knowledge from FAQs and documented problem cases to answer user queries. The required domain knowledge is provided as a part of the prompt.

SgpTod. SgpTod employs an LLM to implement a chatbot, specifically, a task-oriented dialogue (TOD) system [71]. TOD systems are also known as conversational assistants. In contrast to open-domain dialogue (ODD) systems, which engage users in goalless conversations, they are designed for assisting users in specific tasks.

In general, TOD systems require the following components [3]: Natural Language Understanding (NLU), analyzing the user's input to classify intents and extract entities; Dialogue Management (DM) for deciding on a system action that is appropriate in a given dialogue state (e.g., ask for more information or invoke a hotel booking service); and Natural Language Generation (NLG) for producing a response that the TOD system can present to the user.

Intent classification, also known as intent detection, matches free-text user input to one of several tasks a TOD system can perform (e.g., book a hotel). Entity extraction isolates situational values, called entities, from the user input (e.g., the town and the date of the hotel booking). The TOD system may require several dialogue turns to elicit all necessary entities from the user. In TOD research, the system's internal representation of the user's intentions and the entity values is commonly referred to as its "belief state". For example, in the restaurant search domain, the belief state may include attribute-value pairs like cuisine:Indian and pricerange:medium.

SgpTod is a multi-domain TOD system, concurrently handling multiple task domains found in standard TOD evaluation datasets, such as recommending restaurants or finding taxis. Similar to other experimental TOD systems [23], SgpTod accesses a database that stores information from the task domains, such as available hotels and restaurants.

SgpTod comprises two LLM components, called DstPrompter and PolicyPrompter, that are both invoked in every dialogue turn between SgpTod and the user. The DstPrompter handles the NLU aspect, analyzing the user's input and populating the system's belief state. It outputs an SQL query suited to extract the database entries that match the current belief state. Upon retrieving the database entries, SgpTod invokes its PolicyPrompter, which covers both DM and NLG. Prompted with the dialogue history and the database entries retrieved, it produces a two-part output: a natural language response for NLG and a system action for DM.

TruckPlatoon. The concept of truck platooning means that trucks travel closely together for better fuel efficiency and traffic flow. TruckPlatoon comprises an algorithmic control loop which autonomously maintains a consistent distance between trucks.
It invokes an LLM to generate natural-language reports on the platoon's performance and stability from measurements tracked by the control algorithm, providing easily understandable information for engineers involved in monitoring and optimizing the truck platooning system.

ExcelCopilot. ExcelCopilot is an example of a recent trend where software companies integrate LLM-based assistants, often termed "copilots", into their products [44]. These copilots not only provide textual guidance but also perform actions within the software environment, constituting a distinctive type of LLM-integrated application. We chose ExcelCopilot as an example for evaluating our taxonomy. Since its implementation is undisclosed, we infer its architecture from indirect sources, including a screencast and a report on insights and experiences from copilot developers [16, 44]. This inferred architecture may deviate from the actual implementation.

ExcelCopilot is accessible in a task bar alongside the Excel worksheet. It features buttons with context-dependent suggestions of actions and a text box for users to type in commands in natural language. ExcelCopilot only works with data tables, so its initial suggestion is to convert the active worksheet's data into a data table. Copilot functions activate when a data table or part of it is selected. It then presents buttons for four top-level tasks: "add formula columns", "highlight", "sort and filter", and "analyze". The "analyze" button triggers the copilot to display more buttons, e.g., one that generates a pivot chart from the selected data. ExcelCopilot can also add a formula column to the data table and explain the formula in plain language.

When a user inputs a free-text command, ExcelCopilot may communicate its inability to fulfill it. This constantly occurs with commands requiring multiple steps, indicating that ExcelCopilot lacks a planning LLM component as seen in, for example, MatrixProduction. This observation, along with its mention in [44], suggests that ExcelCopilot employs an intent detection-skill routing architecture, sketched below. This architecture includes an LLM component that maps free-text user commands to potential intents and then delegates to other LLM components tasked with generating actions to fulfill those intents. Accordingly, ExcelCopilot comprises several types of LLM components:

• Several distinct ActionExecutors generate code for specific application actions, such as creating a pivot table, designing a worksheet formula, inserting a diagram, and so on.

• An Advisor suggests meaningful next actions. Its outputs serve to derive button captions and prompts for ActionExecutors.

• When a user inputs a free-text command, the IntentDetector is invoked to determine and trigger a suitable ActionExecutor. The IntentDetector communicates its actions to users and informs them when it cannot devise a suitable action.

• The Explainer generates natural language explanations of formulae designed by ExcelCopilot. It is unclear whether, under the hood, the ActionExecutor is generating both the formula and the explanation, or if two separate LLM components are being invoked. We assume the latter, i.e., that a separate Explainer LLM component exists.

While users interact repeatedly with ExcelCopilot, each interaction adheres to a single-turn pattern, with the user providing a command and ExcelCopilot executing it [44].
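The following is a minimal sketch of such an intent detection-skill routing setup. The intent names, the prompt wording, and the call_llm placeholder are our own illustrative assumptions and are not taken from ExcelCopilot's undisclosed implementation.

    # Sketch of intent detection-skill routing: one LLM component classifies
    # a free-text command into an intent, then a dedicated ActionExecutor
    # prompt generates the code for the corresponding application action.

    def call_llm(prompt: str) -> str:
        """Placeholder for invoking an LLM, e.g., via an LMaaS API."""
        raise NotImplementedError

    INTENTS = ["add formula column", "highlight", "sort and filter", "analyze"]

    def detect_intent(command: str) -> str:
        prompt = ("Classify the command into one of these intents: "
                  + ", ".join(INTENTS)
                  + ". Answer with the intent only.\nCommand: " + command)
        return call_llm(prompt).strip()

    def handle_command(command: str) -> str:
        intent = detect_intent(command)
        if intent not in INTENTS:
            return "Sorry, I cannot devise a suitable action for this command."
        # Each intent has its own ActionExecutor prompt; its output is code
        # that the application executes against the selected data table.
        return call_llm("Generate the spreadsheet action code for: " + command)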
5. A Taxonomy for LLM Components and LLM-Integrated Applications

When developing the taxonomy, it emerged that analyzing an LLM-integrated application should begin with identifying and describing its distinct LLM components. Analyzing each LLM component separately helps capture details and provides a clear understanding of how the application utilizes LLM capabilities. The LLM-integrated application can then be described as a combination of the LLM components it employs.

Table 2: Dimensions and characteristics of the taxonomy. Codes of characteristics are printed in uppercase. "Meta" means "metadimension". "MuEx" means "mutual exclusiveness".

    Meta        Dimension     Characteristics                                   MuEx
    Invocation  Interaction   App, Command, Dialog                              enforced
    Invocation  Frequency     Single, Iterative                                 yes
    Function    Logic         cAlculate, Control                                yes
    Function    UI            none, Input, Output, Both                         yes
    Function    Data          none, Read, Write, Both                           yes
    Prompt      Instruction   none, User, LLM, Program                          enforced
    Prompt      State         none, User, LLM, Program                          enforced
    Prompt      Task          none, User, LLM, Program                          yes
    Prompt      Check         none, User, LLM, Program                          enforced
    Skills      Skills        reWrite, Create, conVerse, Inform, Reason, Plan   no
    Output      Format        FreeText, Item, Code, Structure                   no
    Output      Revision      none, User, LLM, Program                          enforced
    Output      Consumer      User, LLM, Program, Engine                        enforced

5.1. Overview and demonstration

The taxonomy identifies 13 dimensions for LLM components, grouped into five metadimensions as shown in table 2. It comprises both dimensions with genuinely mutually exclusive characteristics and those with non-exclusive characteristics. For dimensions related to the technical integration of LLMs within applications, mutual exclusiveness is enforced. Given the open nature of software architecture, the integration of LLMs allows for significant diversity. In practice, LLM components may show multiple characteristics within these dimensions. Nonetheless, the taxonomy requires categorizing each component with a predominant characteristic, enforcing a necessary level of abstraction to effectively organize and structure the domain.

We applied the taxonomy to categorize each of the example instances described in section 4.2. The results are depicted in figure 1. The dimensions and their characteristics are detailed and illustrated with examples in section 5.2.

Figure 1: Categorized example instances. See table 2 for a legend. ∗, 2: multiple LLM components.

The taxonomy visualizes an LLM component by a feature vector comprising binary as well as multi-valued features. Non-mutually exclusive dimensions are represented by a set of binary features. The remaining dimensions are encoded as n-valued features, where n denotes the number of characteristics. For compactness, we use one-letter codes of the characteristics as feature values in the visualizations. In table 2, these codes are printed in upper case in the respective characteristic's name.

A feature vector representing an LLM component is visualized in one line. For dimensions with non-mutually exclusive characteristics, all possible codes are listed, with the applicable ones marked. The remaining dimensions are represented by the code of the applicable characteristic, with the characteristic none shown as an empty cell. We shade feature values with different tones to support visual perception. LLM components within the same application are grouped together, visualizing an LLM-integrating application in a tabular format.
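As an illustration of this encoding, a categorized LLM component can be written down as a small record. The following sketch is our own illustration: the data structure is assumed, only a subset of the dimensions is shown, and the field values use the one-letter codes of table 2.

    from dataclasses import dataclass, field

    # Sketch of a feature-vector record for one LLM component (illustrative
    # data structure; values are one-letter characteristic codes of table 2).
    @dataclass
    class LLMComponent:
        name: str
        interaction: str                       # "A", "C", or "D"
        frequency: str                         # "S" or "I"
        logic: str = "A"                       # "A" (cAlculate) or "C" (Control)
        prompt_state: str = ""                 # "", "U", "L", or "P"
        prompt_task: str = ""                  # "", "U", "L", or "P"
        skills: set = field(default_factory=set)         # e.g., {"P"} for Plan
        output_format: set = field(default_factory=set)  # e.g., {"C"} for Code

    # AutoDroid TaskExecutor, restricted to characteristics mentioned for it
    # in the running text: Command interaction, Iterative invocation, Control
    # logic, State prompt generated by an LLM, Task phrased by the user,
    # Plan skill, and Code output.
    task_executor = LLMComponent(
        name="AutoDroid TaskExecutor", interaction="C", frequency="I",
        logic="C", prompt_state="L", prompt_task="U",
        skills={"P"}, output_format={"C"},
    )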
5.2. Dimensions and characteristics

5.2.1. Invocation dimensions

Two Invocation dimensions address the way the LLM is invoked within the application.

Interaction describes how the user interacts with the LLM with three characteristics:

App: Users never converse with the LLM directly in natural language, rather the application invokes the LLM automatically. E.g., users do not interact directly with ExcelCopilot ActionExecutor or with MatrixProduction Operator.

Command: Users input single natural language commands. E.g., users interact with AutoDroid TaskExecutor through single natural language commands.

Dialog: Users engage in multi-turn dialogues with the LLM component to achieve a use goal. E.g., users repeatedly prompt LowCode Executing or MyCrunchGpt DesignAssistant in multi-turn dialogues to obtain an essay or an airfoil design, respectively.

Frequency addresses how often the application invokes a specific LLM component to fulfill a goal:

Single: A single invocation of an LLM component is sufficient to produce the result. E.g., in MyCrunchGpt, the application internally invokes distinct LLM components once for each user input by injecting varying prompt instructions.

Iterative: The LLM component is invoked repeatedly to produce the result. E.g., AutoDroid TaskExecutor is invoked multiple times to fulfill a command with an updated environment description in the State prompt; LowCode Executing is repeatedly prompted by the user to achieve the use goal while the application updates the dialogue history.

5.2.2. Function dimensions

The Function dimensions are derived from the classical three-tier software architecture model which segregates an application into three distinct layers: presentation, logic and data [17]. The presentation layer implements the UI. On the input side, it allows users to enter data and commands that control the application. On the output side, it presents information and provides feedback on the execution of commands.
The logic layer holds the code that directly realizes the core objectives and processes of an application, such as processing data, performing calculations, and making decisions. The data layer of an application manages the reading and writing of data from and to persistent data storage. Due to its versatility, an LLM component can simultaneously implement functionality for all three layers. The taxonomy addresses this with three Function dimensions.

UI indicates whether an LLM component contributes significantly to the user interface of an application, avoiding the need to implement graphical UI controls or display elements:

none: No UI functionality is realized by the LLM. E.g., in ExcelCopilot, the LLM does not replace any UI elements.

Input: Input UI is (partially) implemented by the LLM. E.g., in MatrixProduction Manager, users input their order in natural language, obviating a product configuration GUI.

Output: Output UI is (partially) implemented by the LLM. E.g., in TruckPlatoon, the output generated by the LLM component can replace a data cockpit with gauges and other visuals displaying numerical data.

Both: Input and output UI are (partially) implemented by the LLM. E.g., in MyCrunchGpt, the DesignAssistant provides a convenient conversational interface for parameterization of APIs and tools and feedback on missing values, which otherwise might require a complex GUI.

Logic indicates whether the LLM component determines the control flow of the application. It discerns two characteristics:

cAlculate: The output does not significantly impact the control flow of the application, i.e., the output is processed like data. E.g., MyCrunchGpt SettingsEditor modifies a JSON file, replacing a programmed function; MyCrunchGpt DesignAssistant asks the user for parameters, but the sequence of calling APIs and tools follows a predefined workflow; the workflow computed by LowCode Planning is displayed without influencing the application's control flow.

Control: The output of the LLM is used for controlling the application. E.g., the plans generated by MatrixProduction Manager serve to schedule and activate production modules; the actions proposed by AutoDroid TaskExecutor are actually executed and determine how the control flow of the app proceeds.

Since an LLM invocation always computes a result, cAlculate is interpreted as "calculate only", making cAlculate and Control mutually exclusive.

Data addresses whether the LLM contributes to reading or writing persistent data:

none: The LLM does not contribute to reading or writing persistent data. This characteristic applies to most sample instances.

Read: The LLM is applied for reading from a persistent data store. E.g., SgpTod DstPrompter generates SQL queries which the application executes; Honeycomb QueryAssistant devises analytical database queries.

Write and Both: No LLM component among the samples generates database queries for creating or updating persistent data.

5.2.3. Prompt-related dimensions

Integrating an LLM into an application poses specific requirements for prompts, such as the need for prompts to reliably elicit output in the requested form [68]. While a broad range of prompt patterns have been identified and investigated [66], there is still a lack of research on successful prompt patterns specifically for LLM-integrated applications, on which this taxonomy could build. Developing prompt taxonomies is a challenging research endeavor in itself [49] and is beyond the scope of this research.
There- fore, the taxonomy does not define a dimension with specific prompt patterns as characteristics, but rather focuses on how the application generates the prompt for an LLM component from a technical perspective. Prompts generally consist of several parts with dis- tinct purposes, generated by different mechanisms. Although many authors explore the concepts, a com- mon terminology has yet to be established. This is illustrated in table 3, showing terms from an ad-hoc selection of recent papers addressing prompt gener- In the table, italics indicate ation in applications. that the authors refrain from introducing an abstract term and instead use a domain-specific description. The term “examples” indicates a one-shot or few-shot prompt pattern. The terms that are adopted for the taxonomy are underlined. The taxonomy distinguishes three prompt parts re- ferred to as Prompt Instruction, Prompt State, and Prompt Task. These parts can occur in any order, potentially interleaved, and some parts may be ab- sent. • Instruction is the part of a prompt that outlines how to solve the task. Defined during LLM com- ponent development, it remains static through- out an application’s lifespan. • State is the situation-dependent part of the prompt that is created dynamically every time the LLM is invoked. The taxonomy opts for the term State instead of “context” in order to avoid confusion with the “LLM context” as explained in section 2. The State may include the current dialogue history, an extract of a knowledge base needed specifically for the current LLM invoca- tion, or a state or scene description, etc. • Task is the part of the prompt conveying the task to solve in a specific invocation. Prompt Instruction, State and Task describe the ori- gins of the prompt parts by uniform characteristics: none: The prompt part is not present. E.g., Prog- Prompt ActionPlanning has no State prompt, nor does LowCode Planning (except the dialogue history when planning a subprocess). Instruction and Task prompt parts are present in all sample in- stances. User : The user phrases the prompt part. E.g., the Task for ExcelCopilot IntentDetector or for LowCode Planning is phrased by the user. There are no sample instances where the user provides the Instruction or State prompt parts. LLM : The prompt part is generated by an LLM. E.g., LowCode Planning generates the State for Low- Code Executing and ExcelCopilot IntentDe- tector generates the Task for ExcelCopilot Ac- tionExecutors. Program: Application code generates the prompt part. E.g., AutoDroid programmatically generates the State and the Task parts for its MemoryGen- erators in the knowledge base building phase. The Prompt Instruction dimension is always gener- ated by Program. While a user and possibly an LLM have defined this prompt part during application de- velopment, this falls outside the scope of this taxon- omy. Therefore, the Prompt Instruction dimension is not discriminating and categorizes all cases as Pro- gram. It is retained in the taxonomy for completeness and better understandability. Prompt Check describes whether the application em- ploys a review mechanism to control and modify the prompt before invoking the LLM. The same charac- teristics as for the prompt parts are applicable: none: The prompt is used without check. User : The user checks and revises the prompt. LLM : Another LLM component checks or revises the prompt. Program: The application comprises code to check or revise the prompt. 
E.g., AutoDroid removes personal data, such as names, to ensure privacy before invoking the TaskExecutor; Honeycomb QueryAssistant incorporates a coded mechanism against prompt injection attacks. 13 Table 3: Terms used for prompt parts. Expressions specific to a domain are printed in italics, “examples” indicates a one-shot or few-shot prompt pattern. Terms adopted for the taxonomy are underlined. Source [72] [34] [32] [45] [45] [37] Instruction task description + examples instruction prompt predefined prompt prompt template + examples examples prompt context, i.e., examples [5] [5] [69] [26] education prompt education prompt role and goal + instruction + examples predefined system instruction + domain-specific information State DB schema environment state, scene description dialogue history dialogue history + provided workflow context query results from knowledge graph Task test instance data prompt user prompt user input question SQL query result input task commands user input task prompt (circumscribed) current task the user’s request Most example instances omit prompt checks. There are no examples where a Check is performed by a User or an LLM. 5.2.4. Skills dimensions The Skills dimension captures the types of LLM ca- pabilities that an application utilizes. It is designed as a dimension with six non-mutually exclusive char- acteristics. Skills is decomposed into six specific capabilities: reWrite: The LLM edits or transforms data or text, such as rephrasing, summarizing, reformat- ting, correcting, or replacing values. E.g., My- CrunchGpt SettingsEditor replaces values in JSON files; TruckPlatoon converts measurements into textual explanations. Create: The LLM generates novel output. E.g., LowCode Executing generates substantial bodies of text for tasks like essay writing. conVerse: The application relies on the LLM’s capa- bility to engage in purposeful dialogues with humans. E.g., MyCrunchGpt DesignAssistant asks users for missing parameters; SgpTod PolicyPrompter decides how to react to user inputs and formulates chatbot responses. Inform: The application depends on knowledge that the LLM has acquired during its training, unlike applications that provide all necessary information within the prompt. E.g., MyCrunchGpt Domain- Expert provides expert knowledge on airfoil designs; MatrixProduction relies on built-in knowledge of production processes, such as “a hole is produced by drilling”; LowCode Executing uses its learned knowledge for tasks like essay writing. Reason: The LLM draws conclusions or makes log- ical inferences. E.g., FormulaExplainer in Ex- celCopilot explains the effects of Excel functions in formulas; AutoDroid MemoryGenerators ex- plain the effects of GUI elements in Android apps. Plan: The LLM designs a detailed method or course E.g., Au- of action to achieve a specific goal. toDroid TaskExecutor and WorkplaceRobot TaskPlanning devise action plans to achieve goals. The Plan and Reason characteristics are interrelated, as planning also requires reasoning. The intended handling of these characteristics is to categorize an LLM component as Plan only and understand Plan as implicitly subsuming Reason. The effectiveness of LLMs as components of software applications relies on their commonsense knowledge and their ability to correctly interpret and handle a broad variety of text inputs, including instructions, 14 examples, and code. It is reasonable to assume that a fundamental capability, which might be termed Un- terstand, is leveraged by every LLM component. 
As it is not distinctive, the taxonomy does not list it explicitly in the Skills dimension. Applying this taxonomy dimension requires users to determine which skills are most relevant and worth highlighting in an LLM component. Given the versa- tility of LLMs, reducing the focus to few predominant skills is necessary to make categorizations distinctive and expressive. 5.2.5. Output-related dimensions Output Format characterizes the format of the LLM’s output. As an output may consist of several parts in diverse formats, this dimension is designed as non- mutually exclusive, same as the Skills dimension. It distinguishes four characteristics that are distinctive and well discernible: FreeText: unstructured natural language text out- put. E.g., TruckPlatoon and MyCrunchGpt DomainExpert generate text output in natural lan- guage; MatrixProduction Manager and Ma- trixProduction Operator produce FreeText ex- planations complementing output in custom formats to be parsed by the application. Item: a single text item from a predefined set of items, such as a class in a classification task. E.g., ProgPrompt ScenarioFeedback outputs either True or False. Code: source code or other highly formalized output that the LLM has learned during its training, such as a programming language, XML, or JSON. E.g., AutoDroid TaskExecutor produces code to steer an Android app; MyCrunchGpt SettingsEditor outputs JSON. Structure: structured, formalized output adhering to a custom format. E.g., LowCode Planning out- puts text in a format that can be displayed as a flow chart; MatrixProduction Manager and Oper- ator produce output in custom formats combined with FreeText explanations. Output Revision indicates whether the application checks or revises the LLM-generated output before utilization. These characteristics and their interpre- tations mirror those in the Prompt Check dimension: none: There is no revision of the LLM output. User : The user revises the LLM output. E.g., the user improves the plan generated by LowCode Planning. LLM : A further LLM component checks or revises the output of the LLM component under considera- tion. Program: Programmed code checks or revises the LLM output. E.g., Honeycomb QueryAssistant corrects the query produced by the LLM before exe- cuting it [7]. There are no instances in the sample set where an- other LLM revises or checks the output of the LLM. Most sample applications do not check or revise the LLM’s output, though several of them parse and transform it. The purpose of the Output Revision dimension is to indicate whether the application in- cludes control or correction mechanisms, rather than just parsing it. Output Consumer addresses the way of utilizing the LLM output: User signifies that the LLM output is presented to a human user. E.g., the text output of TruckPla- toon is intended for humans, as well as the output of MyCrunchGPT DomainExpert. LLM indicates that the output serves as a prompt part in a further LLM invocation. E.g., the knowl- edge base entries generated by an AutoDroid Mem- oryGenerator become part of the prompt for AutoDroid TaskExecutor; the plan output by LowCode Planning serves as a part of the prompt for LowCode Executing. Program describes instances where the LLM output is consumed and processed further by a software com- ponent of the application. E.g., the output of Ma- trixProduction Manager is handled by software systems (including a Manufacturing Execution Sys- tem) which use it to compute prompts for other LLM components. 
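When the Output Format is Code or Structure, the application typically parses and validates the LLM's text before acting on it. The following is a minimal, generic sketch of this step; it is our own illustration with hypothetical setting names, not code from any of the sample systems.

    import json

    # Generic sketch of consuming structured (JSON) LLM output: parse it and
    # check that required fields are present before the application uses it.
    def parse_settings_output(llm_output: str) -> dict:
        settings = json.loads(llm_output)   # raises ValueError on malformed JSON
        required = {"solver", "mesh_size"}  # hypothetical required settings
        missing = required - settings.keys()
        if missing:
            raise ValueError(f"LLM output lacks required settings: {missing}")
        return settings

    print(parse_settings_output('{"solver": "rans", "mesh_size": 0.01}'))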
Engine covers scenarios where the LLM output is in- tended for execution on a runtime engine. E.g., the SQL query generated by SgpTod DstPrompter is 15 processed by a SQL interpreter; a part of the output of MatrixProduction Operator is executed by automation modules. Although applications may parse and transform the LLM output before use, the Output Consumer di- mension is meant to identify the ultimate consumer, such as an execution engine, rather than an interme- diary parser or transformation code. When applica- tions divide the LLM output into parts for different consumers, users applying the taxonomy need to de- termine which consumer is most relevant, since this dimension is designed to be mutually exclusive. 5.3. Evaluation Figure 2 displays the number of occurrences of char- It must acteristics within the example instances. be noted, however, that these do not reflect actual frequencies, as similar LLM components within the same application are aggregated together, indicated by symbols ∗ and 2 in figure 1. Furthermore, Ex- celCopilot likely includes occurrences of Prompt Check and Output Revision which are not counted due to insufficient system documentation. We evaluate the taxonomy against commonly ac- cepted quality criteria: comprehensiveness, robust- ness, conciseness, mutual exclusiveness, explanatory power, and extensibility [58, 42]. The taxonomy encompasses all example instances including those that were not considered during its development. This demonstrates comprehensiveness. As figure 1 shows, all example instances have unique categoriza- tions, supporting the taxonomy’s robustness. This not only indicates that the dimensions and charac- teristics are distinctive for the domain, but also high- lights the wide variety possible in this field. Concise- ness demands that the taxonomy uses the minimum number of dimensions and characteristics. The tax- onomy gains conciseness by identifying relatively few and abstract characteristics within each dimension. However, it does not adhere to the related subcri- terion that each characteristic must be present in at least one investigated instance [54]. Unoccupied char- acteristics are retained for dimensions whose char- acteristics were derived conceptually, specifically, for the Prompt dimensions, the Output Revision dimen- sion, and the Data Function dimension, enhancing the taxonomy’s ability to illustrate design options and inspire novel uses for LLM integrations in ap- plications. Some dimensions are constructed in par- allel, sharing common sets of characteristics. While this affects conciseness, it makes the taxonomy easier to understand and apply. As is often seen in tax- onomy development [54], we deliberately waived the requirement for mutual exclusiveness for some di- mensions, specifically the Output Format and Skills dimensions. In the context of this taxonomy, these can equivalently be understood as a set of of six and four binary dimensions respectively, each divided into characteristics “yes” and “no”. However, framing them as a single dimension with non-mutually exclu- sive characteristics seems more intuitive. Metadimensions structure the taxonomy, and most of the characteristics are illustrated through exam- ples. These measures are recognized for enhancing the explanatory power of a taxonomy [58]. The taxonomy’s flat structure allows for the easy addition of dimensions and characteristics, indicating that its extensibility is good. 
Potential extensions and further aspects of the taxonomy, including its usefulness and ease of use, are discussed in section 6.

We visualize the taxonomy (or, strictly speaking, categorized instances) in a compact form using feature vectors with characteristics abbreviated to single-letter codes. This approach has a drawback, as it requires referencing a legend. Additionally, non-applicable characteristics in mutually exclusive dimensions are not visible, which means the design space is not completely shown. However, the compactness of the representation allows LLM components within a common application to be grouped closely, so that an LLM-integrated application can be perceived as a unit without appearing convoluted. This is a significant advantage for our purposes.

6. Discussion

The discussion first focuses on the taxonomy's applicability and ease of use before considering its overall usefulness.

Figure 2: Occurrences of characteristics in the sample set of LLM-integrated applications.

6.1. Applicability and ease of use

The taxonomy was effectively applied to LLM-integrated applications based on research papers, source code, blog posts, recorded software demonstrations, and developer experiences. The analysis of LowCode revealed it to be a prompt definition tool combined with an LLM-based chatbot, which deviates from the strict definition of an LLM-integrated application. Still, the taxonomy provided an effective categorization and led to a clear understanding of the system's architecture.

Obviously, the ease of categorization depends on the clarity and comprehensiveness of the available information, which varies across analyzed systems. Analyzing applications of LLMs in novel and uncommon domains can be challenging. While these papers present inspiring and innovative ideas for LLM integration, such as MyCrunchGpt and TruckPlatoon, they may prioritize explaining the application area and struggle to detail the technical aspects of the LLM integration. A taxonomy for LLM-integrated applications can guide and facilitate the writing process and lead to more standardized and comparable descriptions.

Applying the taxonomy is often more straightforward for research-focused systems. Omitting the complexities required for real-world applications, such as prompt checks and output revisions, their architectures are simpler and easier to describe. A taxonomy can point out such omissions.

A fundamental challenge in applying the taxonomy arises from the inherent versatility of LLMs, which allows defining LLM components that serve multiple purposes. This is exemplified by SgpTod PolicyPrompter, where the prompt is designed to produce a structure with two distinct outcomes (a class label and a chatbot response), and similarly by MatrixProduction, as detailed in section 4.2. Drawing an analogy to "function overloading" in classical programming, such LLM components can be termed "overloaded LLM components".
A taxonomy can handle overloaded LLM components in several ways: (1) define more dimensions as non- mutually exclusive, (2) label overloaded LLM compo- nents as “overloaded” without a more detailed catego- rization, or (3) categorize them by their predominant purpose or output. While the first approach allows for the most precise categorization, it complicates the taxonomy. Moreover, it will likely result in nearly all characteristics being marked for some LLM compo- nents, which is ultimately not helpful. The second approach simplifies categorization but sacrifices much detail. Our taxonomy adopts the third approach, en- forcing simplification and abstraction in descriptions of overloaded LLM components while retaining es- sential detail. The taxonomy can easily be extended to include approach (2) as an additional binary di- mension. 6.2. Usefulness The search for instances of LLM-integrated appli- cations uncovered activities across various domains. Substantial research involving LLM integrations, of- ten driven by theoretical interests, is notable in robot task planning [37, 51, 61, 33, 63] and in the TOD field [23, 71, 4, 6, 56]. Research exploring LLM po- tentials from a more practical perspective can be found in novel domains, such as industrial produc- tion [69, 26] and other technical areas [28, 70]. Fur- 17 thermore, developers of commercial LLM-based ap- plications are beginning to communicate their efforts and challenges [44, 7]. The taxonomy has been ap- plied to example instances from these and additional areas. This demonstrates its potential as a common, unified framework for describing LLM-integrated ap- plications, facilitating the comparison and sharing of development knowledge between researchers and practitioners across various domains. When applying the taxonomy to the example in- stances, it proved to be effective and useful as an analytical lens. Descriptions of LLM-integrated ap- plications commonly explain background information and details of the application domain in addition to its LLM integration. When used as an analytical lens, the taxonomy quickly directs the analysis to- wards the aspects of LLM integration, abstracting from the specificities of the domain. The taxonomy describes how LLM capabilities can be leveraged in software systems, offers inspiration for LLM-based functions, and outlines options for their implementation as follows. The Skills dimension out- lines the range of capabilities an LLM can contribute to an application through a concise set of characteris- tics, while the Function dimension suggests potential uses, further supported by the Interaction dimension. The Output Type dimension indicates options for en- coding the output of an LLM in formats beyond plain text, making it processable by software. The Output Consumer dimension illustrates the diverse ways to utilize or act upon LLM output. Thus, the taxonomy, as intended, spans a design space for LLM integra- tions. The sampled LLM-integrated applications showcase the creativity of researchers and developers in ap- plying and exploiting the potentials of LLMs, rang- ing from straightforward solutions (e.g., TruckPla- toon) to highly sophisticated and technically com- plex ones (e.g., AutoDroid). When using the tax- onomy to inspire innovative uses of LLMs, we recom- mend supplementing it with descriptions of example applications to enhance its illustrativeness. The char- acteristics of the Skills dimension are derived prag- matically from the investigated example instances. 
While they do not claim to be exhaustive or deeply rooted in LLM theory or cognitive science, they add relevant details to the categorizations and illustrate design options and potentials for using LLMs as software components.

It emerged as a key insight of this research that, rather than analyzing an LLM-integrated application as a whole, analysis should start with the identification and description of its distinct LLM components. This is essential for gaining a clear understanding of how the application utilizes the capabilities of LLMs. The LLM-integrated application then manifests as a combination of its LLM components. As shown in figure 1, the visualization effectively displays both the quantity and the variety of LLM components in an LLM-integrated application.

LLM components interact through prompt chaining, where one LLM component's output feeds into another's input [67]. When an LLM-integrated application involves such an interaction, the taxonomy represents it as an LLM characteristic within a Prompt dimension. The taxonomy can capture the variance in these interactions. For instance, in AutoDroid TaskExecutor and LowCode Executing, the LLM characteristic appears in the Prompt State dimension, because their prompt components (knowledge base excerpts and prompt definition, respectively) are generated by other LLM components in a preparatory stage. In contrast, the LLM characteristic appears in the Prompt Task dimension for MatrixProduction Operator, because its prompt part is generated individually by the MatrixProduction Manager almost immediately before use.

Taxonomy dimensions that cover entire LLM-integrated applications may be useful. Given their complexity, these dimensions should be designed based on a broader range of examples, which will only become available as more LLM-integrated applications are developed and their architectures disclosed in the future. Extensions to the taxonomy could also include dimensions for describing the structure of prompts in more detail, as well as dimensions addressing characteristics of the language models used.

Table 4: LLM usage in the sample instances. "Evals" indicates evaluations of various LLMs.

    Application        Used or best LLM   Evals  Comments
    Honeycomb          GPT-3.5            yes    GPT-4 far too slow
    LowCode            GPT-3.5-turbo
    MyCrunchGpt        GPT-3.5                   then awaiting the publication of GPT-4
    MatrixProduction   text-davinci-003
    WorkplaceRobot     GPT-3
    AutoDroid          GPT-4              yes    GPT-4 best for tasks requiring many steps
    ProgPrompt         GPT-3                     CODEX better, but access limits prohibitive
    FactoryAssistants  GPT-3.5
    SgpTod             GPT-3.5            yes    GPT-3.5 best more often than others combined
    TruckPlatoon       GPT-3.5-turbo
    ExcelCopilot       N/A                       combined LLMs in Copilot for Microsoft 365 [43]

7. Conclusion

This paper investigates the use of LLMs as software components. Its perspective differs from current software engineering research, which investigates LLMs as tools for software development [14, 22], and from research examining LLMs as autonomous agents [11, 62, 57, 21]. This paper defines the concept of an LLM component as a software component that realizes its functionality by invoking an LLM. While LLM components implicitly appear in various works, termed, for example, "prompters", "prompted LLM", "prompt module", or "module" [30, 71, 6, 7], to our knowledge, this concept has not yet been formalized or systematically investigated.
The main contribution of this study is a taxonomy for the analysis and description of LLM components, extending to LLM-integrated applications by charac- terizing them as combinations of LLM components. In addition to the dimensions and characteristics of the taxonomy, the study contributes a taxonomy vi- sualization based on feature vectors, which is more compact than the established visualizations such as morphological boxes [55] or radar charts. It repre- sents an LLM-integrated application as one visual en- tity in a tabular format, with its LLM components displayed as rows. The taxonomy was constructed using established methods, based on a set of example instances, and evaluated with a new set of example instances. The combined samples exhibit broad variation along the identified dimensions. For some instances, informa- tion was not available, necessitating speculative in- terpretation. However, since the sample is used for identifying options rather than quantitative analysis, this issue and the representativeness of the sample are not primary concerns. The evaluation was con- ducted by the developer of the taxonomy, consistent with recent related work [21, 52, 48]. Using a new sample for evaluation strengthens the validity of the results. A further significant contribution of the paper is a systematic overview of a sample of LLM-integrated applications across various industrial and technical domains, illustrating a spectrum of conceptual ideas and implementation options. As the examples show, LLM components can re- place traditionally coded functions in software sys- tems and enable novel use cases. However, practi- cal challenges persist. Developers report that new software engineering methods are required, e.g., for managing prompts as software assets and for test- ing and monitoring applications. For instance, the costs of LLM invocations prohibit the extensive au- tomated testing that is standard in software devel- opment practice [44, 7]. Challenges also arise from the inherent indeterminism and uncontrollability of LLMs. Small variations in prompts can lead to differ- ences in outputs, while automated output processing 19 in LLM-integrated applications requires the output to adhere to a specified format. Furthermore, the deployment mode of LLMs, whether local (on the same hardware as the ap- plication) or remote, managed privately or offered as Language-Models-as-a-Service (LMaaS), has im- pact on performance and usability. Table 4 gives an overview of the LLMs used in our sample of appli- cations. Where papers report evaluations of mul- tiple LLMs, the table displays the chosen or best- performing LLM. Although not representative, the table provides some insights. LMaaS dominates, likely due to its convenience, but more importantly, due to the superior performance of the provided LLMs. Concerns regarding LMaaS include privacy, as sensi- tive data might be transmitted to the LLM through the prompt [64], and service quality, i.e., reliability, availability, and costs. Costs typically depend on the quantity of processed tokens. This quantity also af- fects latency, which denotes the processing time of an LLM invocation. A further important factor for latency is the size of the LLM, with larger models being slower [7]. When building LLM-based applications for real- world use, the reliability and availability of an LMaaS are crucial. 
Availability depends not only on the technical stability of the service, but also on factors such as increased latency during high usage periods or usage restrictions imposed by the provider of an LMaaS, as reported for ProgPrompt [51]. Beyond technical aspects, the reliability of an LMaaS also en- compasses its behavior. For instance, providers might modify a model to enhance its security, potentially impacting applications that rely on it. Despite practical challenges, integrating LLMs into systems has the potential to alter the way software is constructed and the types of systems that can be realized. Prompts are central to the functioning of LLM components which pose specific requirements such as strict format adherence. Therefore, an im- portant direction for future research will be prompt engineering specifically tailored for LLM-integrated applications. In future work, the taxonomy will be extended to distinguish finer-grained parts of prompts, allowing a more detailed description and comparison of prompts and related experimental results. Initial studies share results on the format-following behavior of LLMs [68] as a subtopic of instruction-following [73], derived with synthetic benchmark data. It is necessary to complement their results with experiments using data and tasks from real application development projects because, in the early stages of this field, synthetic benchmarks may fail to cover relevant aspects within the wide range of possible options. Another crucial research direction involves exploring how LLM char- acteristics correspond to specific tasks, such as de- termining the optimal LLM size for intent detection tasks. The taxonomy developed in this study can sys- tematize such experiments and their outcomes. Ad- ditionally, it provides a structured framework for de- lineating design choices in LLM components, making it a valuable addition to future training materials. Acknowledgements Special thanks to Antonia Weber and Constantin We- ber for proofreading and providing insightful and con- structive comments. References [1] Eleni Adamopoulou and Lefteris Moussiades. An Overview of Chatbot Technology. In Ilias Ma- glogiannis, Lazaros Iliadis, and Elias Pimeni- dis, editors, Artificial Intelligence Applications and Innovations, IFIP Advances in Information and Communication Technology, pages 373–383, Cham, 2020. Springer International Publishing. doi:10.1007/978-3-030-49186-4_31. [2] Sebastian Bader, Erich Barnstedt, Heinz Be- denbender, Bernd Berres, Meik Billmann, and Marko Ristin. Details of the asset adminis- tration shell-part 1: The exchange of informa- tion between partners in the value chain of in- dustrie 4.0 (version 3.0 rc02). Working Paper, Berlin: Federal Ministry for Economic Affairs 20 and Climate Action (BMWK), 2022. doi.org/ 10.21256/zhaw-27075. Soft Computing, 151:111165, January 2024. doi:10.1016/j.asoc.2023.111165. [3] Marcos Baez, Florian Daniel, Fabio Casati, and Boualem Benatallah. Chatbot integration in few patterns. IEEE Internet Computing, pages 1–1, 2020. doi:10.1109/MIC.2020.3024605. [4] Tom Bocklisch, Thomas Werkmeister, Task- Daksh Varshneya, and Alan Nichol. Oriented Dialogue with In-Context Learn- ing. (arXiv:2402.12234), February 2024. doi:10.48550/arXiv.2402.12234. [5] Yuzhe Cai, Shaoguang Mao, Wenshan Wu, Ze- hua Wang, Yaobo Liang, Tao Ge, Chenfei Wu, Wang You, Ting Song, Yan Xia, Jonathan Tien, and Nan Duan. Low-code LLM: Visual Pro- gramming over LLMs. (arXiv:2304.08103), April 2023. doi:10.48550/arXiv.2304.08103. [6] Lang Cao. 
DiagGPT: An LLM-based Chatbot with Automatic Topic Management for Task- Oriented Dialogue. (arXiv:2308.08043), August 2023. doi:10.48550/arXiv.2308.08043. [7] Phillip Carter. All the Hard Stuff No- body Talks About When Building Prod- ucts with LLMs. Honeycomb, May 2023. https://www.honeycomb.io/blog/ hard-stuff-nobody-talks-about-llm. [8] Phillip Carter. So We Shipped an AI Prod- Honeycomb, Octo- uct. Did It Work? ber 2023. https://www.honeycomb.io/blog/ we-shipped-ai-product. [9] Banghao Chen, Zhaofeng Zhang, Nicolas Langrené, Unleash- and Shengxin Zhu. ing the potential of prompt engineering in Large Language Models: A comprehensive review. (arXiv:2310.14735), October 2023. doi:10.48550/arXiv.2310.14735. [10] Wang Chen, Yan-yi Liu, Tie-zheng Guo, Da- peng Li, Tao He, Li Zhi, Qing-wen Yang, Hui-han Wang, and Ying-you Wen. Sys- industry appli- tems engineering issues cations of Applied large language model. for 21 [11] Yuheng Cheng, Ceyao Zhang, Zhengwen Zhang, Xiangrui Meng, Sirui Hong, Wenhao Li, Zihao Wang, Zekai Wang, Feng Yin, Junhua Zhao, and Xiuqiang He. Exploring Large Language Model based Intelligent Agents: Definitions, Methods, and Prospects. (arXiv:2401.03428), January 2024. doi:10.48550/arXiv.2401.03428. [12] Silvia Colabianchi, Andrea Tedeschi, and Francesco Costantino. Human-technology in- tegration with industrial conversational agents: A conceptual architecture and a taxonomy for manufacturing. Journal of Industrial Infor- mation Integration, 35:100510, October 2023. doi:10.1016/j.jii.2023.100510. [13] Jonathan Evertz, Merlin Chlosta, Lea Schön- herr, and Thorsten Eisenhofer. Whispers in the Machine: Confidentiality in LLM-integrated Systems. (arXiv:2402.06922), February 2024. doi:10.48550/arXiv.2402.06922. [14] Angela Fan, Beliz Gokkaya, Mark Harman, Mitya Lyubarskiy, Shubho Sengupta, Shin Yoo, and Jie M. Zhang. Large Language Models for Software Engineering: Survey and Open Problems. (arXiv:2310.03533), November 2023. doi:10.48550/arXiv.2310.03533. [15] Wenqi Fan, Zihuai Zhao, Jiatong Li, Yunqing Liu, Xiaowei Mei, Yiqi Wang, Zhen Wen, Fei Wang, Xiangyu Zhao, Jiliang Tang, and Qing Li. Recommender Systems in the Era of Large Language Models (LLMs). (arXiv:2307.02046), August 2023. doi:10.48550/arXiv.2307.02046. [16] David Fortin. Microsoft Copilot in Excel: What It Can and Can’t Do. YouTube, Jan- uary 2024. https://www.youtube.com/watch? v=-fsu9IXMZvo. [17] Martin Fowler. Patterns of Enterprise Applica- tion Architecture. 2002. ISBN 978-0-321-12742- 6. [18] Shirley Gregor. The nature of theory in infor- mation systems. MIS quarterly, pages 611–642, 2006. doi:10.2307/25148742. [19] Yanchu Guan, Dong Wang, Zhixuan Chu, Shiyu Wang, Feiyue Ni, Ruihua Song, Longfei Li, Jin- jie Gu, and Chenyi Zhuang. Intelligent Vir- tual Assistants with LLM-based Process Au- tomation. (arXiv:2312.06677), December 2023. doi:10.48550/arXiv.2312.06677. [20] Muhammad Usman Hadi, Qasem Al Tashi, Rizwan Qureshi, Abbas Shah, Amgad Muneer, Muhammad Irfan, Anas Zafar, Muhammad Bi- lal Shaikh, Naveed Akhtar, Jia Wu, and Seyedali Mirjalili. Large Language Models: A Compre- hensive Survey of its Applications, Challenges, Limitations, and Future Prospects, September 2023. doi:10.36227/techrxiv.23589741.v3. [21] Thorsten Händler. A Taxonomy for Au- tonomous LLM-Powered Multi-Agent Architec- tures:. In Proceedings of the 15th Interna- tional Joint Conference on Knowledge Discov- ery, Knowledge Engineering and Knowledge Management, pages 85–98, Rome, Italy, 2023. 
synthetic_cpt
4
Advancing_Large_Language_Model_Attribution_through_Self-Improving.pdf
Advancing Large Language Model Attribution through Self-Improving Lei Huang1, Xiaocheng Feng1,2*, Weitao Ma1, Liang Zhao1, Yuchun Fan3, Weihong Zhong1, Dongliang Xu4, Qing Yang4, Hongtao Liu4, Bing Qin1,2 1 Harbin Institute of Technology, Harbin, China 2 Peng Cheng Laboratory, Shenzhen, China 3 Northeastern University, Shenyang, China 4 Du Xiaoman Science Technology Co., Ltd., Beijing, China {lhuang, xcfeng, wtma, lzhao, whzhong, qinb}@ir.hit.edu.cn [email protected] {xudongliang, yangqing, liuhongtao01}@duxiaoman.com Abstract Teaching large language models (LLMs) to gen- erate text with citations to evidence sources can mitigate hallucinations and enhance verifi- ability in information-seeking systems. How- ever, improving this capability requires high- quality attribution data, which is costly and labor-intensive. Inspired by recent advances in self-improvement that enhance LLMs with- out manual annotation, we present START, a Self-Taught AttRibuTion framework for iter- atively improving the attribution capability of LLMs. First, to prevent models from stagnating due to initially insufficient supervision signals, START leverages the model to self-construct synthetic training data for warming up. To further improve the model’s attribution abil- ity, START iteratively utilizes fine-grained pref- erence supervision signals constructed from its sampled responses to encourage robust, comprehensive, and attributable generation. Experiments on three open-domain question- answering datasets, covering long-form QA and multi-step reasoning, demonstrate signif- icant performance gains of 25.13% on aver- age without relying on human annotations and more advanced models. Further analysis re- veals that START excels in aggregating infor- mation across multiple sources. 1 Introduction The rapid development of large language models (LLMs) (OpenAI, 2023; Zhao et al., 2023) has led to their prosperity as indispensable tools for infor- mation seeking. Despite their remarkable capabil- ity to generate fluent and informative responses to user queries, LLMs also struggle with hallucina- tions (Huang et al., 2023). To facilitate factuality verification, recent research (Bohnet et al., 2022) has explored attributed text generation, a paradigm that enables LLMs to generate responses with cita- tions. By attributing models’ output to verifiable sources, it can improve the explainability and cred- ibility of LLM-generated content (Li et al., 2023). While beneficial, the ability to attribute con- textual sources is not inherent in LLMs. Most work induces LLMs to generate text with citations via in-context learning (Gao et al., 2023), which is far from satisfactory (Liu et al., 2023). The current winning recipe for accurate attribution in- volves fine-tuning on high-quality attribution re- sponses1 (Li et al., 2024). However, acquiring such data typically requires either manual cura- tion (Malaviya et al., 2023), or distilled from the most advanced LLMs (Huang et al., 2024a,b), both of which are costly and not scalable, thus limit- ing the growth of models’ attribution capability. One promising solution is self-improvement (Yuan et al., 2023), which has demonstrated the poten- tial to boost model performance by learning from self-generated high-quality samples. Inspired by this, we aim to explore the poten- tial of self-improvement in bootstrapping the at- tribution ability of LLMs. However, achieving this goal presents several challenges. 
One significant challenge lies in the risk of model stagnation during the self-improvement process, primarily due to the insufficient supervision signals obtained in the early stage. Concretely, considering the inferior performance of LLMs in handling the attribution task (Gao et al., 2023), generating sufficient high-quality attribution responses solely through sampling proves difficult. This scarcity of high-quality samples limits the opportunities for LLMs to self-improve effectively. Another challenge stems from the limitation of weak supervision signals. Current self-improvement approaches (Yuan et al., 2023) primarily involve supervised fine-tuning on high-quality samples while discarding low-quality ones. When applied to LLM attribution, these high-quality samples provide only weak supervision signals, mainly teaching LLMs the surface form of attribution (e.g., proper citation format) (Li et al., 2024). Such practice may neglect the potential of exploring fine-grained signals from low-quality samples to learn what constitutes a desirable attribution response.

To address these challenges, we present START, a Self-Taught AttRibuTion framework designed to bootstrap the attribution capabilities of LLMs. To prevent models from stagnating early due to insufficient supervision signals, we first leverage the model to self-construct high-quality synthetic attribution data (§3.1). The data synthesis process follows reverse attribution thinking: the model initially generates a response to a given query, then breaks it into atomic claims, and finally randomly combines them to create synthetic documents. This process not only simulates multi-source information-seeking scenarios but also ensures precise attribution, as each document can be directly traced back to the specific claim it originated from. These high-quality synthetic data are then utilized for warming up, providing a good starting point for LLMs to self-improve. Furthermore, to better explore fine-grained supervision signals for LLM attribution, we introduce an iterative self-improving recipe (§3.2). Specifically, the framework meticulously designs fine-grained rewards tailored for LLM attribution, covering robustness, comprehensiveness, and attributability. After scoring multiple sampled candidates and selecting those with the highest holistic rewards for supervised fine-tuning, the framework subsequently utilizes low-quality samples to construct fine-grained preference pairs with diverse optimization rewards for preference optimization. This iterative process further fosters the self-improvement of attribution capabilities.

We conduct extensive experiments across three open-domain question-answering datasets, covering long-form QA and multi-step reasoning. Results indicate that START achieves significant performance gains of 25.13% on average in citation quality. Moreover, START successfully achieves self-improvement in LLM attribution, showing progressive improvements across iterations. Ablation studies confirm that each component significantly contributes to the improvement. Further analysis shows that START excels not only in generating superior attributable responses but also in effectively aggregating information across multiple sources.

*Corresponding Author
1 Attribution responses refer to "responses with in-line citations, e.g., [1][2]".
2 Related Work 2.1 Large Language Model Attribution Attribution has gained significant attention for en- hancing the interpretability and verifiability of LLMs (Gao et al., 2023; Li et al., 2023). Recent studies have focused on improving LLM attribu- tion in a supervised way. Asai et al. (2023) first distill GPT-4 to collect high-quality attribution data, aiming to teach the model to generate grounded an- swers with citations through self-reflecting. Simi- larly, Huang et al. (2024a) develop a training frame- work starting with distilling ChatGPT, followed by designing reward models to teach the LLM to generate highly supportive and relevant citations. Additionally, Li et al. (2024) model the attribution task from a preference learning perspective, where they first fine-tune the model on human-labeled at- tribution datasets and then perform preference op- timization using synthesized preference data. Fur- thermore, Huang et al. (2024b) take this further by extending the attribution format to a fine-grained citation level, primarily distilled from ChatGPT. It enables the model to first ground the fine-grained quotes within the context and then condition the generation process on them. In contrast to these methods, START aims to bootstrap attribution ca- pability without relying on human-labeled data or distilling from more capable LLMs. 2.2 Self-Improvement for LLMs High-quality data either human-crafted or distilled from advanced LLMs has proven effective in en- hancing the performance of LLMs. However, ac- quiring such high-quality data can be prohibitively expensive. Recently, self-improvement approaches (Gülçehre et al., 2023; Yuan et al., 2024), where LLMs learn from self-generated samples have emerged as a viable solution to compensate for the scarcity of high-quality data. These methods typically involve employing heuristic rules (Zelik- man et al., 2022), self-critique (Tian et al., 2024), or training additional verifiers (Hosseini et al., 2024) to assess the quality of model-generated samples. Such practices are particularly effective in rea- soning tasks, e.g., mathematical reasoning, where LLMs already demonstrate capable abilities and can receive precise feedback on correctness. How- ever, these advantages are absent in the attribution task, due to its challenging nature. To bridge the gap, we take an initial step towards exploring the potential of self-improvement in LLM attribution. Figure 1: The data synthesis pipeline consists of five steps: given a user query, the LLM first generates an informative response without citations in a closed-book setting. Subsequently, the LLM decomposes this response into atomic claims. These claims are then randomly grouped into specific sets, which serve as the basis for generating documents that cover all included claims. Finally, we trace back to the initial response to relabel the citations. 3 Problem Formulation and Methodology We follow a formulation of attributed text gener- ation as described in Gao et al. (2023). This task involves processing a user query q for information- seeking, given a corpus of retrieved documents D, to generate a response S with in-line cita- tions. We assume the response S as consisting of n statements, such that S = {s1, s2, . . . , sn}. Each statement si ∈ S cites a list of passage Ci = {ci1, ci2, . . .}, where cij ∈ D. Citations are presented in the form of [1][2], which represent the attribution to specific documents in D. 
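To make this formulation concrete, the following minimal sketch (our own illustration, not code from the paper; all class and field names are assumptions) shows how a query, its retrieved corpus D, and a response S with per-statement citation lists C_i can be represented and rendered with in-line citations.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Statement:
    text: str
    citations: List[int] = field(default_factory=list)  # indices into the document list

@dataclass
class AttributedResponse:
    query: str
    documents: List[str]          # retrieved corpus D
    statements: List[Statement]   # response S = {s_1, ..., s_n}

    def render(self) -> str:
        # Render each statement with in-line citations such as [1][2].
        return " ".join(
            s.text + "".join(f"[{i + 1}]" for i in s.citations)
            for s in self.statements
        )
```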
Next, we present an overview of START, a training framework designed to teach LLMs to self-improve their attribution ability, as illustrated in Figure 2. START consists of two essential stages: synthetic data warm-up (§3.1) and self-improving for LLM attribution (§3.2).

3.1 Synthetic Data Warm-Up

The core of self-improvement lies in generating high-quality samples and iteratively learning from them. Intuitively, a high-quality attribution response should not be distracted by irrelevant documents (robustness) and should capture high coverage of viewpoints across multiple documents (comprehensiveness) while maintaining high citation quality (attributability). However, existing LLMs typically show inferior performance in the attribution task, significantly hindering their ability to generate such high-quality samples. This limitation poses substantial challenges to enhancing their attribution capabilities through self-improvement.

In this stage, we propose utilizing the model to self-construct high-quality synthetic data for warming up, enabling the model to acquire the basic ability to generate robust, comprehensive, and attributable responses across multiple sources. The pipeline consists of the following steps, shown in Figure 1. More details can be found in Appendix A.

Step 1: Response Generation Given an arbitrary model, we first sample a query q from seed questions Q and then generate a long-form answer S utilizing the parametric knowledge of the model itself. The model is required to produce informative answers that cover multiple perspectives.

Step 2: Claim Decomposition Prior work (Min et al., 2023) has explored using atomic claims as a fundamental unit in long-form text generation. Thus, for the response S, we ask the model to decompose it into atomic claims. Each atomic claim represents a distinct piece of information.

Step 3: Claim Combination To ensure that the response behaves as an aggregation of information from multiple documents, we randomly combine different claims into one claim set. This process helps simulate the natural diversity of viewpoints and sources, thus enhancing the comprehensiveness and realism of the synthesized responses.

Step 4: Document Generation For each claim set, we prompt the model to generate a synthetic document that provides a comprehensive discussion of the grouped claims. Additionally, to enhance the robustness of the response, we introduce irrelevant documents by uniformly sampling documents generated from other queries.

Step 5: Attribution Relabel The final step involves labeling the response with citations from
the generated documents. This process ensures that each claim within the response is explicitly attributed to its source. In this way, for each query q and document set D, we can obtain an informative and attributable response while maintaining robustness against irrelevant documents.

Next, the model is fine-tuned for warming up with the MLE objective on the synthesized dataset, which consists of N data entries, each containing a query qi, a document set Di, and a high-quality attributable response yi:

\mathcal{L} = -\sum_{i=1}^{N} \log P(y_i \mid q_i, D_i; \theta) \quad (1)

Figure 2: Overview of our self-improving framework, which consists of two stages. The model is first warmed up using synthetic data (§3.1). This provides a good starting point to enable the model to generate high-quality samples in the subsequent iterative training. Next, the model is further trained via rejection sampling fine-tuning and fine-grained preference optimization iteratively (§3.2). This iterative process bootstraps the model's attribution capability by fully utilizing the supervision signals from its sampled generations.

3.2 Self-Improving for LLM Attribution

In this stage, we propose to iteratively boost the model's attribution capability by exploring more fine-grained supervision signals, rather than solely relying on golden responses in synthetic data. This involves leveraging rejection sampling for data growing and fine-grained preference optimization for capability evolution.

3.2.1 Rejection Sampling Fine-tuning

After warming up, we first sample N candidates for each query in the synthetic dataset and then score each candidate with fine-grained rewards that cover three key dimensions: robustness, comprehensiveness, and attributability.

Attributability serves as the indispensable condition for high-quality attributable generation. It quantifies the extent to which a response is fully supported by the cited documents. To accurately measure attributability, we employ an off-the-shelf Natural Language Inference (NLI) model2 by checking whether each statement in the response is entailed by the corresponding cited documents:

\text{AttrScore} = \frac{1}{S} \sum_{i=1}^{S} \text{Entail}(\text{Docs}, \text{statement}_i) \quad (2)

where S is the total number of statements in the response and Entail returns 1 if statement i is entailed by the cited documents, and 0 otherwise.
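As a rough illustration of Eq. (2), the sketch below computes the attributability score of a sampled response. The `entails` callable is assumed to wrap an off-the-shelf NLI judge (the paper points to google/t5_xxl_true_nli_mixture); how documents are concatenated or truncated here is our assumption, not the authors' implementation.

```python
from typing import Callable, List

def attr_score(statements: List[str],
               citations: List[List[int]],
               documents: List[str],
               entails: Callable[[str, str], bool]) -> float:
    """Fraction of statements entailed by the concatenation of their cited documents (Eq. 2)."""
    if not statements:
        return 0.0
    supported = 0
    for stmt, cited in zip(statements, citations):
        premise = " ".join(documents[i] for i in cited)
        # A statement with no citations cannot be attributed, so it counts as unsupported.
        supported += int(bool(cited) and entails(premise, stmt))
    return supported / len(statements)
```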
Robustness measures the degree to which a model-generated response is influenced by irrelevant contexts. Considering that we can identify the relevant documents dr within the document set D for each query q, we quantify robustness by calculating the probability difference of the model M generating the response y under different contexts. The robustness score is defined as follows:

\text{RobustScore} = \frac{P_M(y \mid q \oplus d_r)}{P_M(y \mid q \oplus D)} \quad (3)

Empirically, the closer the score is to 1, the less the response is disturbed by irrelevant documents.

Comprehensiveness measures the extent to which a response captures all relevant information from the source documents. As the golden responses in the synthetic data are designed to aggregate and reflect information across multiple documents, we quantify comprehensiveness by decomposing them into sub-claims and verifying whether these claims are covered by the sampled generation y. We compute the score as below:

\text{CompreScore} = \frac{1}{C} \sum_{i=1}^{C} \text{Entail}(\text{claim}_i, y) \quad (4)

where claim_i represents a sub-claim and C is the number of golden sub-claims.

Subsequently, we formulate a holistic reward function (Eq. 5) considering the above dimensions. This function is employed to rank generated candidates, with the top-ranked candidate being selected for further supervised fine-tuning:

\text{Reward} = \frac{\mathbb{I}(\text{AttrScore}) \times \text{CompreScore}}{\text{RobustScore}} \quad (5)

Here, \mathbb{I} is an indicator function that returns 1 if AttrScore = 1, and 0 otherwise.

2 huggingface.co/google/t5_xxl_true_nli_mixture

3.2.2 Fine-grained Preference Optimization

The common way of self-improvement focuses on updating the model with high-quality samples while discarding low-quality ones. For LLM attribution, simply supervised fine-tuning with highly attributable responses only teaches the LLM surface characteristics of attribution, e.g., the correct form of citation. Inspired by human cognition, learning from mistakes provides more fine-grained signals for understanding the mechanisms that drive successful attribution than simply imitating correct examples. Thus, we aim to fully unlock the potential of low-quality samples by constructing fine-grained preference pairs with different optimization rewards for preference optimization.

Given the multi-objective nature of LLM attribution, our focus is specifically on attributability and comprehensiveness, utilizing the corresponding reward functions to construct preference data respectively3. Specifically, we pair samples that exhibit high attributability but low comprehensiveness with the top-ranked sample selected using the holistic reward, and vice versa. These preference pairs, each addressing a different optimization objective, are then aggregated to further train the LLM via DPO (Rafailov et al., 2023):

\mathcal{L}_{\text{DPO}} = -\mathbb{E}\big[\log \sigma\big(\hat{r}_\theta(x, y^{+}) - \hat{r}_\theta(x, y^{-})\big)\big]

\hat{r}_\theta(x, y) = \beta \log \frac{\pi_\theta(y \mid x)}{\pi_{\text{ref}}(y \mid x)} \quad (6)

Here, the reference model \pi_{\text{ref}} is initialized with the model after rejection sampling to minimize the distribution shift from the reference distribution.

3 We do not optimize separately for robustness as the model already shows sufficient robustness after rejection sampling fine-tuning.

4 Experiments

4.1 Datasets

Following previous work (Ye et al., 2023; Li et al., 2024), we conduct our experiments using two long-form question-answering datasets: ASQA (Stelmakh et al., 2022) and ELI5 (Fan et al., 2019), as well as a multi-step reasoning dataset, StrategyQA (Geva et al., 2021). Both ASQA and ELI5 feature factoid long-form answers that require synthesizing highly relevant documents in response to a user query. In StrategyQA, answers demand a combination of information-seeking and implicit reasoning. Further details on the data statistics, knowledge corpus used for retrieval, and examples for each dataset are provided in Appendix B.

4.2 Evaluation

Following previous research (Gao et al., 2023), we evaluate model-generated responses mainly on two dimensions: Citation Quality and Correctness. Our evaluation methodology combines both automated metrics and human evaluation.

Automatic Evaluation. To assess citation quality, we calculate the citation precision, citation recall, and their harmonic mean, citation F1, based on the definitions in Gao et al. (2023). We use TRUE (Honovich et al., 2022), a T5-11B model fine-tuned on a collection of natural language inference (NLI) datasets, to examine whether the cited documents entail the generated statement. Correctness is measured differently for each dataset. For ASQA, we report the exact match recall (EM Rec.) of correct short answers. For ELI5, we report the claim recall (Claim) by checking whether the model output entails the sub-claims generated by text-davinci-003. For StrategyQA, where answers begin with yes/no, we evaluate correctness by reporting the accuracy (Acc.). See Appendix C for more details.

Human Evaluation. We collected a total of 150 instances from the test sets of ASQA, ELI5, and StrategyQA for human evaluation, with each dataset providing 10 instances from five different systems. The evaluation is divided into two parts: citation quality and overall quality (comprehensiveness and correctness). More details in Appendix D.
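To make the automatic citation-quality metrics above concrete, here is a simplified sketch in the spirit of Gao et al. (2023). It is not the ALCE code: the `entails` judge is again an assumed NLI wrapper, and edge cases of the full definition are handled only approximately.

```python
from typing import Callable, Dict, List

def citation_scores(statements: List[str],
                    citations: List[List[int]],
                    documents: List[str],
                    entails: Callable[[str, str], bool]) -> Dict[str, float]:
    """Simplified citation recall / precision / F1 over a response (assumed interfaces)."""
    def concat(idxs: List[int]) -> str:
        return " ".join(documents[i] for i in idxs)

    recall_hits, total_citations, precise_citations = 0, 0, 0
    for stmt, cited in zip(statements, citations):
        fully_supported = bool(cited) and entails(concat(cited), stmt)
        recall_hits += int(fully_supported)
        for c in cited:
            total_citations += 1
            if not fully_supported:
                continue  # citations of unsupported statements are counted as imprecise
            supports_alone = entails(documents[c], stmt)
            rest = [x for x in cited if x != c]
            needed = not (bool(rest) and entails(concat(rest), stmt))
            # A citation is "irrelevant" only if it neither supports the statement alone
            # nor is needed alongside the remaining citations.
            if supports_alone or needed:
                precise_citations += 1
    recall = recall_hits / max(len(statements), 1)
    precision = precise_citations / max(total_citations, 1)
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) > 0 else 0.0
    return {"citation_recall": recall, "citation_precision": precision, "citation_f1": f1}
```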
Model | ASQA: EM Rec. / Cit. Rec. / Cit. Prec. / Cit. F1 | ELI5: Claim / Cit. Rec. / Cit. Prec. / Cit. F1 | StrategyQA: Acc. / Cit. Rec. / Cit. Prec. / Cit. F1
In-context Learning & Post-hoc
Llama-2-13B (ICL) | 35.2 / 38.4 / 39.4 / 38.9 | 13.4 / 17.3 / 15.8 / 16.5 | 65.6 / 20.6 / 33.1 / 25.4
Llama-2-13B (PostAttr) | 25.0 / 23.6 / 23.6 / 23.6 | 7.1 / 5.7 / 5.8 / 5.8 | 64.3 / 8.7 / 8.7 / 8.7
Training-based
Distill-Llama-3-70B-Instruct | 41.1 / 60.4 / 53.8 / 56.9 | 12.9 / 28.7 / 25.2 / 26.8 | 70.8 / 28.4 / 30.7 / 29.5
Distill-Mixtral-8x7B-Instruct | 40.3 / 64.9 / 63.5 / 64.2 | 13.8 / 34.3 / 35.0 / 34.6 | 63.9 / 38.4 / 49.2 / 43.1
Self-RAG (Asai et al., 2023) | 31.7 / 70.3 / 71.3 / 70.8 | 10.7 / 20.8 / 22.5 / 21.6 | 62.1 / 31.4 / 36.5 / 33.8
AGREE (Ye et al., 2023) | 39.4 / 64.0 / 66.8 / 65.4 | 9.4 / 21.6 / 16.0 / 18.4 | 64.6 / 30.2 / 37.2 / 33.3
APO (Li et al., 2024) | 40.5 / 72.8 / 69.6 / 71.2 | 13.5 / 26.0 / 24.5 / 25.2 | 61.8 / 40.0 / 39.1 / 39.6
FGR (Huang et al., 2024a) | 38.7 / 73.5 / 74.7 / 74.1 | 9.8 / 53.1 / 55.9 / 54.5 | 64.9 / 29.5 / 42.4 / 34.8
START (Warming-up) | 39.2 / 23.2 / 23.9 / 23.5 | 11.9 / 9.9 / 10.2 / 10.0 | 61.2 / 9.4 / 9.6 / 9.5
START (Iteration 1) | 42.2 / 68.8 / 75.6 / 72.0 | 11.3 / 47.4 / 50.5 / 48.9 | 73.4 / 44.4 / 48.6 / 46.4
START (Iteration 2) | 42.9 / 76.1 / 81.0 / 78.5 | 10.0 / 65.6 / 65.1 / 65.3 | 72.7 / 51.9 / 54.1 / 53.0
START (Iteration 3) | 44.2 / 76.2 / 84.2 / 80.0 | 9.6 / 62.4 / 69.1 / 65.6 | 69.6 / 60.0 / 56.6 / 58.2

Table 1: Main results of our method and baselines on the ASQA, ELI5, and StrategyQA datasets. Each cell lists correctness (EM Rec. / Claim / Acc.) followed by citation recall, precision, and F1. For most baselines, we use the results reported in previous works (Asai et al., 2023; Ye et al., 2023; Li et al., 2024).

4.3 Baselines

We compare START with the following baselines. For more details, please refer to Appendix E.

In-context Learning (ICL). Following Gao et al. (2023), we enable the LLM to generate citations via in-context learning. For each query, we first retrieve five relevant documents and then prompt the LLM with two-shot demonstrations.

Post-hoc Attribution (PostAttr). Following Ye et al. (2023), given a query, we first instruct the LLM to generate an initial response leveraging its parametric knowledge. For each statement in the response, we use the NLI model4 to find the maximally supported document and cite accordingly.

4 We use the same NLI model during citation evaluation.

Training-based Methods. Training on high-quality data serves as a strong baseline to unlock the attribution ability of LLMs. We consider the following training-based methods.

Knowledge Distillation employs the most capable LLMs, e.g., Llama-3-70B-Instruct and Mixtral-8x7B-Instruct, as teacher models to train a student model on distilled attribution data.

Self-RAG (Asai et al., 2023) first collects data distilled from GPT-4, then teaches the LLM to retrieve on demand while reflecting on its generation to improve both generation quality and attributions.

AGREE (Ye et al., 2023) trains the LLM to self-ground its response in retrieved documents using automatically collected data and then leverages test-time adaptation to reinforce unverified statements.

APO (Li et al., 2024) models LLM attribution as a preference learning task, where the model is first supervised fine-tuned on human-labeled high-quality data and preference data is then automatically collected for preference optimization.

FGR (Huang et al., 2024a) first collects attribution data distilled from ChatGPT and then designs rewards tailored for LLM attribution to teach the LLM to generate supportive and relevant citations.

4.4 Implementation Details

For a fair comparison, all training-based baselines and START employ Llama-2-13b-base (Touvron et al., 2023). Further details on the implementation of START are presented in Appendix F.
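As a summary of the training recipe of §3.2 under these settings, the following is a rough sketch of a single self-improvement iteration. It is not the authors' released code: the sampling and reward callables are assumptions, and the pairing rule only approximates the paper's description of attributability- and comprehensiveness-oriented preference pairs.

```python
from typing import Callable, Dict, List, Tuple

def one_self_improvement_iteration(
    examples: List[dict],                        # each: {"query": ..., "documents": [...], "gold_claims": [...]}
    sample: Callable[[dict, int], List[str]],    # draws candidate responses from the current model
    attr_score: Callable[[dict, str], float],    # Eq. (2)
    robust_score: Callable[[dict, str], float],  # Eq. (3)
    compre_score: Callable[[dict, str], float],  # Eq. (4)
    n_candidates: int = 16,
) -> Tuple[List[Tuple[dict, str]], List[Dict[str, str]]]:
    """One iteration: collect rejection-sampling SFT data plus DPO preference pairs."""
    sft_examples: List[Tuple[dict, str]] = []
    preference_pairs: List[Dict[str, str]] = []
    for ex in examples:
        candidates = sample(ex, n_candidates)
        if not candidates:
            continue

        def holistic(y: str) -> float:
            # Eq. (5): indicator on full attributability times comprehensiveness over robustness.
            if attr_score(ex, y) < 1.0:
                return 0.0
            return compre_score(ex, y) / max(robust_score(ex, y), 1e-6)

        best = max(candidates, key=holistic)
        if holistic(best) == 0.0:
            continue  # no fully attributable candidate for this query
        sft_examples.append((ex, best))
        for y in candidates:
            if y is best:
                continue
            if attr_score(ex, y) == 1.0 and compre_score(ex, y) < compre_score(ex, best):
                # attributable but less comprehensive -> comprehensiveness-oriented pair
                preference_pairs.append({"prompt": ex["query"], "chosen": best, "rejected": y})
            elif attr_score(ex, y) < 1.0:
                # not fully attributable -> attributability-oriented pair
                preference_pairs.append({"prompt": ex["query"], "chosen": best, "rejected": y})
    return sft_examples, preference_pairs
```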
5 Results

5.1 Main Results

We provide the main results and the performance of START across different iterations in Table 1.

START effectively improves performance. As shown in Table 1, START shows superior performance across the three datasets and achieves state-of-the-art results in citation quality. Specifically, START shows significant improvements over both ICL and post-hoc approaches, highlighting the benefits of supervised signals in unlocking the attribution ability of LLMs. Notably, compared with methods that rely on distilling from more advanced LLMs or training on human-annotated data, START achieves performance improvements of at least 8.0%, 20.4%, and 47.0% in citation quality for ASQA, ELI5, and StrategyQA respectively. Regarding correctness, START also achieves gains of at least 9.1% and 7.2% on ASQA and StrategyQA, despite a slight decrease on ELI5.

Model | ASQA: EM Rec. / Cit. Rec. / Cit. Prec. / Cit. F1 | ELI5: Claim / Cit. Rec. / Cit. Prec. / Cit. F1 | StrategyQA: Acc. / Cit. Rec. / Cit. Prec. / Cit. F1
START (Iteration 1) | 42.2 / 68.8 / 75.6 / 72.0 | 11.3 / 47.4 / 50.5 / 48.9 | 73.4 / 44.4 / 48.6 / 46.4
w/o. warm-up | 35.7 / 36.3 / 32.7 / 34.4 | 12.1 / 15.2 / 13.7 / 14.4 | 65.9 / 18.0 / 17.2 / 17.6
w/o. preference | 40.6 / 42.2 / 47.2 / 44.6 | 12.9 / 16.5 / 17.4 / 16.9 | 63.7 / 21.5 / 24.6 / 22.9
START (Iteration 2) | 42.9 / 76.1 / 81.0 / 78.5 | 10.0 / 65.6 / 65.1 / 65.3 | 72.7 / 51.9 / 54.1 / 53.0
w/o. warm-up | 33.5 / 57.4 / 52.1 / 54.6 | 10.0 / 26.7 / 23.0 / 24.7 | 69.0 / 32.4 / 33.2 / 32.8
w/o. preference | 39.8 / 50.8 / 53.6 / 52.2 | 12.5 / 22.5 / 23.3 / 22.9 | 65.7 / 27.2 / 30.4 / 28.7
START (Iteration 3) | 44.2 / 76.2 / 84.2 / 80.0 | 9.6 / 62.4 / 69.1 / 65.6 | 69.6 / 60.0 / 56.6 / 58.2
w/o. warm-up | 28.6 / 67.3 / 58.2 / 62.4 | 6.4 / 46.8 / 38.4 / 42.2 | 70.4 / 44.9 / 39.2 / 41.9
w/o. preference | 40.7 / 55.7 / 58.3 / 57.0 | 11.9 / 25.3 / 26.2 / 25.7 | 67.8 / 31.3 / 33.5 / 32.4

Table 2: Ablation study results across three datasets over three iterations. We compare START with two variants: one that does not utilize synthetic data for initial warming-up (w/o warm-up) and another lacking fine-grained preference optimization for self-improvement (w/o preference).

Model | Iteration 1 | Iteration 2 | Iteration 3
START | 42.5% | 90.2% | 95.9%
w/o. warm-up | 3.24% | 41.2% | 83.8%

Table 3: The pass rate comparison between START and START (w/o. warm-up) across different iterations during the rejection sampling stage.

START successfully achieves self-improvement. We compare the performance of START from iteration 0 to 3 in Table 1, and the results demonstrate consistent improvements across iterations. Initially, at iteration 0, thanks to the synthetic training data, the model shows decent performance after warm-up. By iteration 1, START exhibits remarkable effectiveness in improving its performance by leveraging its own generated samples (e.g., 23.5 → 72.0 on ASQA, 10.0 → 48.9 on ELI5, 9.5 → 46.4 on StrategyQA in citation F1). Subsequent iterations continue this trend of incremental improvement, reaching a convergence point at iteration 3.

5.2 Ablation Study and Analysis

We conduct comprehensive ablation studies and analyses to understand how each component in START contributes to the significant improvement.

Effect of synthetic data warming-up. To demonstrate the importance of utilizing synthetic data for initial warm-up in START, we conduct a comparative ablation study employing Llama-2-13b for self-improvement, omitting the initial warm-up stage. Table 2 shows the ablation results (w/o. warm-up) across three iterations. We observe that omitting the initial warm-up stage can lead to a significant performance drop in the first it- eration.
Additionally, as the iteration increases, the performance of the model without warm-up shows only modest improvements and remains substan- tially inferior to the model that underwent warm- up. Moreover, we also calculate the pass rate of sampled response in each iteration as shown in Ta- ble 3. The findings indicate that the model with warm-up exhibits a higher pass rate in the first it- eration, which allows the model to utilize more supervised signals for self-improvement. These re- sults suggest that warming up effectively facilitates the bootstrapping of supervised data, thus prevent- ing early model stagnation. It’s worth noting that while the warm-up strategy effectively enriches the model with supervision signals at an early stage, it does not lead to noticeable improvements in cita- tion quality, as shown in Table 1. We hypothesize that this limitation stems from the inherent diffi- culty LLMs face in synthesizing information from multiple sources to generate comprehensive and attributable responses solely through direct super- vised fine-tuning. Effect of fine-grained preference optimization. To further understand the significance of fine- grained preference optimization, we compare an ablation of START that solely relies on high-quality samples for iteratively supervised fine-tuning, dis- carding low-quality samples for fine-grained pref- erence optimization. As shown in Table 2, there is a significant decline in performance when fine- grained preference optimization is removed. This highlights the effectiveness of START in fully un- locking the potential of low-quality samples to en- ASQA Dataset ELI5 Dataset StrategyQA Dataset 80.0 70.0 60.0 50.0 40.0 30.0 1 F - n o i t a t i C ITER-1 ITER-0 2 4 6 Epochs 8 10 1 F - n o i t a t i C 60.0 50.0 40.0 30.0 20.0 10.0 0.0 2 1 F - n o i t a t i C 60.0 50.0 40.0 30.0 20.0 10.0 0.0 2 ITER-1 ITER-0. 4 6 Epochs 8 10 ITER-1 ITER-0 4 6 Epochs 8 10 Figure 3: The impact of supervision signals from different stages (synthetic data v.s. self-improvement) on attribution performance across ASQA, ELI5, and StrategyQA. The blue line represents the model that undergoes only supervised fine-tuning use synthetic data at iteration 0. The red line represents the model that first trains for two epochs with synthetic data at iteration 0, followed by one iteration of self-improvement. ASQA ELI5 StrategyQA ) % ( 1 F n o i t a t i C 85 75 65 55 45 1k 3k Synthetic data sizes 5k ) % ( s s e n t c e r r o C 75 60 45 30 15 0 1k 3k Synthetic data sizes 5k Figure 4: Ablation study on the effect of synthetic data size on attribution and correctness performance. We sample 1k, 3k, and 5k user queries for data synthesis. hance attribution performance. Effect of synthetic data size. We investigate the effect of varying synthetic data sizes on the per- formance of START. Figure 4 demonstrates their effect on citation quality and correctness after three iterations of self-improving. Specifically, we sam- ple 1k, 3k, and 5k unlabeled queries to generate synthetic training data accordingly, which provides different levels of supervision signals. As shown in Figure 4, even with 1k synthetic data points, START demonstrates comparable performance. Moreover, as the training size increases, START achieves no- table improvement in citation quality and exhibits stability in correctness. Supervision signals from synthetic data v.s. iter- ative self-improvement. 
We further investigate the differential impact of supervision signals de- rived from data synthesis versus those from the iterative self-improvement stage. We utilize syn- thetic training data to train the model for multiple epochs, extending up to 10 epochs, and compare its performance to that of a model that undergoes only the first iteration of self-improvement. As de- picted in Figure 3, training with synthetic data dur- ing the initial iteration yields minimal performance gains. The attribution performance climbs slowly Attribution Overall Quality Full Partial No Corr. Comp. ChatGPT (ICL) Distill-Llama-3-70B-Instruct Self-RAG (Asai et al., 2023) FGR (Huang et al., 2024a) START (Ours) 3.6 9.4% 68.5% 22.1% 2.9 54.6% 32.4% 13.0% 2.4 45.7% 27.5% 26.8% 58.4% 28.7% 12.9% 2.5 76.2% 18.3% 5.5% 3.5 4.4 3.2 2.1 2.8 4.6 Table 4: Human evaluation results on attribution, correctness (Corr.), and comprehensiveness (Comp.). Bold numbers indicate the best performance, while “_” indicates the second-best performance. as training epochs increase and fails to surpass the performance of the model after just one iteration of self-improvement. This observation reveals the importance of the supervision signals provided by the model itself during self-improvement. 6 Human Evaluation Human evaluation results, detailed in Table 4, in- dicate that START generates significantly more at- tributable responses compared to all baselines, even surpassing ChatGPT5. Specifically, 76.2% of the statements generated by START are fully supported by the cited documents, which outperforms Chat- GPT by 11.24%. Additionally, 18.3% of the state- ments are partially supported, with only 5.5% un- supported. In terms of factuality, START outper- forms all training-based baselines, slightly inferior to ChatGPT. Moreover, START achieves the high- est score in comprehensiveness, demonstrating its exceptional ability to generate responses that ex- tensively cover information from multiple sources. Overall, these findings are in line with the auto- matic evaluation results in Table 1. 7 Conclusion We propose START, a self-improvement framework to push the frontier of LLM attribution. We iden- 5We utilize gpt-3.5-turbo-0125 version. tify two key limitations for LLM attribution self- improvement. To address these, START first lever- ages self-constructed synthetic data for warming up, aiming to prevent models from early stagna- tion due to insufficient supervision signals. To ex- plore more fine-grained supervision signals, START constructs fine-grained preference supervision sig- nals from low-quality samples for preference opti- mization. Both automatic and human evaluations demonstrate significant improvement in attribution without relying on human annotations and more advanced LLMs. Limitations Despite significant performance improvements, our work presents several limitations worth noting. Firstly, while our data synthesis process provides a good starting point for the model to self-improve and demonstrate some generalization on existing benchmarks, it may not cover all scenarios en- countered in user information-seeking. This limita- tion raises concerns regarding the generalizability of synthetic data in a more complex information- seeking environment. Secondly, the iterative train- ing pipeline of our self-improving framework is time-consuming, presenting a significant trade- off between performance and training duration. 
Thirdly, although our self-improving framework does not rely on human annotations and more ad- vanced LLMs, it still necessitates the integration of off-the-shelf NLI models to guarantee the quality of attribution in the generated samples. The perfor- mance of the NLI model significantly impacts the quality of our outputs to a certain extent. To move towards a fully self-improving framework that does not rely on external judgment, future research could investigate the use of intrinsic attribution signals derived directly from the LLM itself. Acknowledgements Xiaocheng Feng is the corresponding author of this work. We thank the anonymous review- ers for their insightful comments. This work was supported by the National Natural Science Foundation of China (NSFC) (grant 62276078, U22B2059), the Key R&D Program of Hei- longjiang via grant 2022ZX01A32, the Interna- tional Cooperation Project of PCL, PCL2022D01 and the Fundamental Research Funds for the Cen- tral Universities (Grant No.HIT.OCEF.2023018). References Akari Asai, Zeqiu Wu, Yizhong Wang, Avirup Sil, and Hannaneh Hajishirzi. 2023. Self-rag: Learning to retrieve, generate, and critique through self-reflection. CoRR, abs/2310.11511. Bernd Bohnet, Vinh Q. Tran, Pat Verga, Roee Aharoni, Daniel Andor, Livio Baldini Soares, Jacob Eisen- stein, Kuzman Ganchev, Jonathan Herzig, Kai Hui, Tom Kwiatkowski, Ji Ma, Jianmo Ni, Tal Schuster, William W. Cohen, Michael Collins, Dipanjan Das, Donald Metzler, Slav Petrov, and Kellie Webster. 2022. Attributed question answering: Evaluation and modeling for attributed large language models. CoRR, abs/2212.08037. Angela Fan, Yacine Jernite, Ethan Perez, David Grang- ier, Jason Weston, and Michael Auli. 2019. ELI5: long form question answering. In Proceedings of the 57th Conference of the Association for Compu- tational Linguistics, ACL 2019, Florence, Italy, July 28- August 2, 2019, Volume 1: Long Papers, pages 3558–3567. Association for Computational Linguis- tics. Tianyu Gao, Howard Yen, Jiatong Yu, and Danqi Chen. 2023. Enabling large language models to generate text with citations. In Proceedings of the 2023 Con- ference on Empirical Methods in Natural Language Processing, EMNLP 2023, Singapore, December 6- 10, 2023, pages 6465–6488. Association for Compu- tational Linguistics. Mor Geva, Daniel Khashabi, Elad Segal, Tushar Khot, Dan Roth, and Jonathan Berant. 2021. Did aristotle use a laptop? A question answering benchmark with implicit reasoning strategies. Trans. Assoc. Comput. Linguistics, 9:346–361. Çaglar Gülçehre, Tom Le Paine, Srivatsan Srini- vasan, Ksenia Konyushkova, Lotte Weerts, Abhishek Sharma, Aditya Siddhant, Alex Ahern, Miaosen Wang, Chenjie Gu, Wolfgang Macherey, Arnaud Doucet, Orhan Firat, and Nando de Freitas. 2023. Reinforced self-training (rest) for language modeling. CoRR, abs/2308.08998. Or Honovich, Roee Aharoni, Jonathan Herzig, Hagai Taitelbaum, Doron Kukliansy, Vered Cohen, Thomas Scialom, Idan Szpektor, Avinatan Hassidim, and Yossi Matias. 2022. TRUE: re-evaluating factual consistency evaluation. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL 2022, Seattle, WA, United States, July 10-15, 2022, pages 3905–3920. Association for Computational Linguistics. Arian Hosseini, Xingdi Yuan, Nikolay Malkin, Aaron C. Courville, Alessandro Sordoni, and Rishabh Agar- wal. 2024. V-star: Training verifiers for self-taught reasoners. CoRR, abs/2402.06457. 
Chengyu Huang, Zeqiu Wu, Yushi Hu, and Wenya Wang. 2024a. Training language models to generate text with citations via fine-grained rewards. CoRR, abs/2402.04315. Lei Huang, Xiaocheng Feng, Weitao Ma, Yuxuan Gu, Weihong Zhong, Xiachong Feng, Weijiang Yu, Wei- hua Peng, Duyu Tang, Dandan Tu, and Bing Qin. 2024b. Learning fine-grained grounded citations for In Findings of attributed large language models. the Association for Computational Linguistics, ACL 2024, Bangkok, Thailand and virtual meeting, Au- gust 11-16, 2024, pages 14095–14113. Association for Computational Linguistics. Jianmo Ni, Chen Qu, Jing Lu, Zhuyun Dai, Gus- tavo Hernández Ábrego, Ji Ma, Vincent Y. Zhao, Yi Luan, Keith B. Hall, Ming-Wei Chang, and Yinfei Yang. 2022. Large dual encoders are generalizable retrievers. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, EMNLP 2022, Abu Dhabi, United Arab Emirates, De- cember 7-11, 2022, pages 9844–9855. Association for Computational Linguistics. OpenAI. 2023. GPT-4 technical report. CoRR, abs/2303.08774. Lei Huang, Weijiang Yu, Weitao Ma, Weihong Zhong, Zhangyin Feng, Haotian Wang, Qianglong Chen, Weihua Peng, Xiaocheng Feng, Bing Qin, and Ting Liu. 2023. A survey on hallucination in large lan- guage models: Principles, taxonomy, challenges, and open questions. CoRR, abs/2311.05232. Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Dmytro Okhonko, Samuel Broscheit, Gautier Izacard, Patrick S. H. Lewis, Barlas Oguz, Edouard Grave, Wen-tau Yih, and Sebastian Riedel. 2021. The web is your oyster - knowledge-intensive NLP against a very large web corpus. CoRR, abs/2112.09924. Woosuk Kwon, Zhuohan Li, Siyuan Zhuang, Ying Sheng, Lianmin Zheng, Cody Hao Yu, Joseph E. Gonzalez, Hao Zhang, and Ion Stoica. 2023. Effi- cient memory management for large language model serving with pagedattention. In Proceedings of the ACM SIGOPS 29th Symposium on Operating Systems Principles. Dongfang Li, Zetian Sun, Baotian Hu, Zhenyu Liu, Xin- shuo Hu, Xuebo Liu, and Min Zhang. 2024. Improv- ing attributed text generation of large language mod- els via preference learning. CoRR, abs/2403.18381. Dongfang Li, Zetian Sun, Xinshuo Hu, Zhenyu Liu, Ziyang Chen, Baotian Hu, Aiguo Wu, and Min Zhang. 2023. A survey of large language models attribution. CoRR, abs/2311.03731. Nelson F. Liu, Tianyi Zhang, and Percy Liang. 2023. Evaluating verifiability in generative search engines. In Findings of the Association for Computational Lin- guistics: EMNLP 2023, Singapore, December 6-10, 2023, pages 7001–7025. Association for Computa- tional Linguistics. Ilya Loshchilov and Frank Hutter. 2019. Decoupled In 7th International weight decay regularization. Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019. OpenRe- view.net. Chaitanya Malaviya, Subin Lee, Sihao Chen, Elizabeth Sieber, Mark Yatskar, and Dan Roth. 2023. Ex- pertqa: Expert-curated questions and attributed an- swers. CoRR, abs/2309.07852. Sewon Min, Kalpesh Krishna, Xinxi Lyu, Mike Lewis, Wen-tau Yih, Pang Wei Koh, Mohit Iyyer, Luke Zettlemoyer, and Hannaneh Hajishirzi. 2023. Factscore: Fine-grained atomic evaluation of factual precision in long form text generation. In Proceed- ings of the 2023 Conference on Empirical Methods in Natural Language Processing, EMNLP 2023, Sin- gapore, December 6-10, 2023, pages 12076–12100. Association for Computational Linguistics. Rafael Rafailov, Archit Sharma, Eric Mitchell, Christo- pher D. Manning, Stefano Ermon, and Chelsea Finn. 2023. 
Direct preference optimization: Your language model is secretly a reward model. In Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Sys- tems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023. Jeff Rasley, Samyam Rajbhandari, Olatunji Ruwase, and Yuxiong He. 2020. Deepspeed: System opti- mizations enable training deep learning models with over 100 billion parameters. In KDD ’20: The 26th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, Virtual Event, CA, USA, August 23-27, 2020, pages 3505–3506. ACM. Ivan Stelmakh, Yi Luan, Bhuwan Dhingra, and Ming- Wei Chang. 2022. ASQA: factoid questions meet long-form answers. In Proceedings of the 2022 Con- ference on Empirical Methods in Natural Language Processing, EMNLP 2022, Abu Dhabi, United Arab Emirates, December 7-11, 2022, pages 8273–8288. Association for Computational Linguistics. Ye Tian, Baolin Peng, Linfeng Song, Lifeng Jin, Dian Yu, Haitao Mi, and Dong Yu. 2024. Toward self- improvement of llms via imagination, searching, and criticizing. CoRR, abs/2404.12253. Hugo Touvron, Louis Martin, Kevin Stone, Peter Al- bert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton- Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, An- thony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel Kloumann, Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Di- ana Liskovich, Yinghai Lu, Yuning Mao, Xavier Mar- tinet, Todor Mihaylov, Pushkar Mishra, Igor Moly- bog, Yixin Nie, Andrew Poulton, Jeremy Reizen- stein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subrama- nian, Xiaoqing Ellen Tan, Binh Tang, Ross Tay- lor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurélien Ro- driguez, Robert Stojnic, Sergey Edunov, and Thomas Scialom. 2023. Llama 2: Open foundation and fine- tuned chat models. CoRR, abs/2307.09288. Xi Ye, Ruoxi Sun, Sercan Ö. Arik, and Tomas Pfister. 2023. Effective large language model adaptation for improved grounding. CoRR, abs/2311.09533. Asaf Yehudai, Boaz Carmeli, Yosi Mass, Ofir Arviv, Nathaniel Mills, Assaf Toledo, Eyal Shnarch, and Leshem Choshen. 2024. Genie: Achieving hu- man parity in content-grounded datasets generation. CoRR, abs/2401.14367. Weizhe Yuan, Richard Yuanzhe Pang, Kyunghyun Cho, Sainbayar Sukhbaatar, Jing Xu, and Jason Weston. 2024. Self-rewarding language models. CoRR, abs/2401.10020. Zheng Yuan, Hongyi Yuan, Chengpeng Li, Guanting Dong, Chuanqi Tan, and Chang Zhou. 2023. Scaling relationship on learning mathematical reasoning with large language models. CoRR, abs/2308.01825. Eric Zelikman, Yuhuai Wu, Jesse Mu, and Noah D. Goodman. 2022. Star: Bootstrapping reasoning with reasoning. In Advances in Neural Information Pro- cessing Systems 35: Annual Conference on Neural Information Processing Systems 2022, NeurIPS 2022, New Orleans, LA, USA, November 28 - December 9, 2022. Wayne Xin Zhao, Kun Zhou, Junyi Li, Tianyi Tang, Xiaolei Wang, Yupeng Hou, Yingqian Min, Be- ichen Zhang, Junjie Zhang, Zican Dong, Yifan Du, Chen Yang, Yushuo Chen, Zhipeng Chen, Jinhao Jiang, Ruiyang Ren, Yifan Li, Xinyu Tang, Zikang Liu, Peiyu Liu, Jian-Yun Nie, and Ji-Rong Wen. 2023. 
A survey of large language models. CoRR, abs/2303.18223. A Data Synthesis A.1 Data Sources The queries employed for data synthesis are sourced from the Wish-QA (Yehudai et al., 2024), which provides high-quality grounded data suit- able for content-grounded generation tasks such as long-form question-answering and summariza- tion. Specifically, we utilize the ELI5 subset of the WishQA, noted for its high lexical diversity, comprising a total of 8,413 queries. Notably, we randomly sample 5,000 user queries for our data synthesis, resulting in the creation of 5,000 syn- thetic data points. A.2 Prompts for Data Synthesis We detail the prompts employed in the synthetic data generation stage, covering response genera- tion, claim decomposition, and document genera- tion, shown in Figure 5. A.3 Implementation Details In our work, we use Llama-2-13b-base for data synthesis, as our goal is to realize self-improving for the attribution ability of LLMs, the models used in the data synthesis stage and the subsequent main experiment need to be consistent without introduc- ing additional more powerful models. To enhance the LLM’s ability to accurately follow instructions at each step, we utilize in-context learning, incorpo- rating two demonstrations for response generation, claim decomposition, and document generation. A.4 Quality of Synthetic Data We focus on evaluating the attributability of the final response. Specifically, we employ an off- the-shelf Natural Language Inference (NLI) model, TRUE (Honovich et al., 2022), to verify whether each statement in the response is fully supported by the cited documents and to check for the presence of any irrelevant citations. The results indicate that the synthetic data are of significantly high qual- ity: 92.3% of the statements are fully supported by the cited documents, and 94.1% are free from irrelevant citations. B Details of evaluation datasets Our evaluation utilizes the ASQA, ELI5, and Strat- egyQA datasets. For both ASQA and StrategyQA, Wikipedia serves as the external knowledge base, specifically employing the Wikipedia snapshot from 2018-12-20. For the ELI5 dataset, the ex- ternal knowledge source is Sphere (Piktus et al., 2021). Regarding the retrievers, we use the dense retriever GTR (Ni et al., 2022) for Wikipedia and the sparse retriever BM25 for Sphere. Detailed statistics for these datasets are presented in Table 5. In line with previous research by Gao et al. (2023), we use the same evaluation datasets for ASQA and ELI5. Regarding StrategyQA, we adopt the settings of Ye et al. (2023), utilizing a randomly split sub- set of 490 test instances for evaluation. To further clarify, we provide an example from each dataset in Table 6. (a) Prompt template for response generation Instruction: Given a question, generate a detailed and informative response that covers multiple perspectives and synthesizes information from various sources. Limit the response to a maximum of five statements. Question: [Question] Response: (b) Prompt template for claim decomposition Instruction: Given a detailed and informative response, break it into its constituent claims. Identify and list each distinct claim, ensuring to capture all essential elements and nuances presented in the original response. Response: [Response] Claims: Instruction: Given a claim, generate a 100-word document with a title. The main content of the document should elaborate on the claims and contain the main content of the claim. 
(a) Prompt template for response generation
Instruction: Given a question, generate a detailed and informative response that covers multiple perspectives and synthesizes information from various sources. Limit the response to a maximum of five statements.
Question: [Question]
Response:

(b) Prompt template for claim decomposition
Instruction: Given a detailed and informative response, break it into its constituent claims. Identify and list each distinct claim, ensuring to capture all essential elements and nuances presented in the original response.
Response: [Response]
Claims:

(c) Prompt template for document generation
Instruction: Given a claim, generate a 100-word document with a title. The main content of the document should elaborate on the claims and contain the main content of the claim.
Claim: [Claim]
Documents:

Figure 5: Illustration of the prompting design for the data synthesis pipeline.

(a) An example of ASQA
Question: Who is the original artist of sound of silence?
Documents: [The retrieved documents are omitted here]
Answer: There are several songs with the title "Sound of Silence". Sounds of Silence is the second studio album by Simon & Garfunkel, released on January 17, 1966. The album's title is a slight modification of the title of the duo's first major hit, "The Sound of Silence", which was recorded in March 1964 and originally was released as "The Sounds of Silence". Another "Sound of Silence" is a song performed by Australian recording artist Dami Im, and is best known as Australia's entry at the Eurovision Song Contest 2016.

(b) An example of ELI5
Question: How does so much of our trash end up in the ocean?
Documents: [The retrieved documents are omitted here]
Answer: Because water flows downhill and very often ends up in rivers which very often end up in oceans. So when it rains, trash is washed downhill and into streams and rivers and ultimately the ocean.

(c) An example of StrategyQA
Question: Did Curiosity outlive its expected lifespan?
Documents: [The retrieved documents are omitted here]
Answer: No. "Curiosity" rover has outlasted its expected lifespan. The rover was designed to last for months, but is still operating after years on Mars. In August 2017, "Curiosity" celebrated its fifth anniversary on Mars and is expected to continue its mission for years to come. The longevity of "Curiosity" can be attributed to the advanced technology used in the rover's design and the meticulous planning and preparation done by the engineers and scientists. With the advancement of technology and the continued refinement of the mission, "Curiosity" is likely to continue operating for many more years to come.

Figure 6: Examples of the ASQA, ELI5, and StrategyQA datasets.

(a) Prompt template of ASQA and ELI5
Instruction: Write an accurate, engaging, and concise answer for the given question using only the provided search results (some of which might be irrelevant) and cite them properly. Use an unbiased and journalistic tone. Always cite for any factual claim. When citing several search results, use [1][2][3]. Cite at least one document and at most three documents in each sentence. If multiple documents support the sentence, only cite a minimum sufficient subset of the documents.
Question: [Question]
Documents: [Documents]

(c) Prompt template of StrategyQA
Instruction: Answer "yes" or "no" first. Then, write a clear and concise answer that combines reasoning with relevant search results and cite the sources properly, even if some might be irrelevant.
Question: [Question]
Documents: [Documents]

Figure 7: Illustration of the prompting design of evaluation datasets.

Dataset                          Source   # Examples
ASQA (Stelmakh et al., 2022)     Wiki     948
ELI5 (Fan et al., 2019)          Sphere   1000
StrategyQA (Geva et al., 2021)   Wiki     490

Table 5: Statistics of datasets used for evaluation.
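For illustration, the retrieval setup summarized in Table 5 (dense GTR over Wikipedia, sparse BM25 over Sphere) can be sketched as follows. This is only a schematic example: the two passage lists are tiny placeholders for the preprocessed corpora, and the smaller gtr-t5-base checkpoint is used here instead of a larger GTR variant.

```python
# Schematic retrieval setup for Table 5; corpora below are placeholder passages.
from rank_bm25 import BM25Okapi                                   # sparse retriever (Sphere)
from sentence_transformers import SentenceTransformer, util      # dense GTR retriever (Wikipedia)

wiki_passages = ["Example Wikipedia passage 1.", "Example Wikipedia passage 2."]
sphere_passages = ["Example Sphere passage 1.", "Example Sphere passage 2."]

gtr = SentenceTransformer("sentence-transformers/gtr-t5-base")    # smaller GTR for illustration
wiki_emb = gtr.encode(wiki_passages, convert_to_tensor=True)

def retrieve_wiki(question, k=5):
    q = gtr.encode(question, convert_to_tensor=True)
    hits = util.semantic_search(q, wiki_emb, top_k=k)[0]
    return [wiki_passages[h["corpus_id"]] for h in hits]

bm25 = BM25Okapi([p.split() for p in sphere_passages])

def retrieve_sphere(question, k=5):
    return bm25.get_top_n(question.split(), sphere_passages, n=k)
```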
C Automatic Evaluation Details

We provide a detailed description of the evaluation metrics employed to assess the quality of the model-generated responses.

Citation Quality. Citation quality is a critical evaluation dimension in attributed text generation, assessing whether the answer is fully supported by the cited documents and whether any irrelevant documents are cited. Following Liu et al. (2023) and Gao et al. (2023), the evaluation of citation quality is typically divided into two parts: Citation Recall and Citation Precision.

Citation Recall evaluates whether all generated statements are fully supported by the cited documents. Specifically, for each statement $s_i \in \mathcal{S}$, its citation recall is scored as 1 if there is at least one valid citation ($C_i \neq \emptyset$) and the concatenation of the cited documents fully supports the statement, i.e., $\phi(\mathrm{concat}(C_i), s_i) = 1$, where $\phi(\text{premise}, \text{hypothesis})$ is an NLI model that outputs 1 if the premise entails the hypothesis. The final citation recall is calculated by averaging over all statements in $\mathcal{S}$.

Citation Precision assesses whether any citations in the response are irrelevant. A citation $c_{i,j}$ is determined to be "irrelevant" if (a) $c_{i,j}$ alone cannot support statement $s_i$ and (b) removing $c_{i,j}$ does not affect the ability of the remaining citations to support $s_i$.

Citation F1 combines citation precision and citation recall by taking their harmonic mean. We use this metric to evaluate the overall citation quality of the response; a higher Citation F1 score indicates a more accurately and comprehensively attributed response:

$$\mathrm{F1} = \frac{2 \cdot \text{citation precision} \cdot \text{citation recall}}{\text{citation precision} + \text{citation recall}}. \tag{7}$$

Correctness. Correctness is crucial in long-form QA tasks. Given the ambiguous nature of the ASQA dataset, where each question requires multiple short answers to cover different aspects, we follow Stelmakh et al. (2022) and calculate the recall of correct short answers using exact match. Evaluating the correctness of long-form answers in the ELI5 dataset is more challenging. The ALCE benchmark therefore employs InstructGPT (text-davinci-003) to generate three "sub-claims" based on the human-annotated answers. To assess correctness, we use a T5-11B model that has been fine-tuned on a collection of NLI datasets (https://huggingface.co/google/t5_xxl_true_nli_mixture) to check whether the model-generated outputs entail these sub-claims.
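For illustration, the citation-quality metrics defined above can be computed as in the following minimal sketch. Here nli_entails(premise, hypothesis) stands in for the TRUE NLI model and docs maps document ids to text; both are assumptions, and edge cases (e.g., a statement with a single unsupportive citation) are handled only approximately.

```python
# Sketch of Citation Recall / Precision / F1 as defined above; nli_entails() is assumed.

def citation_recall(statements, citations, docs, nli_entails):
    scores = []
    for s, cited in zip(statements, citations):
        premise = " ".join(docs[c] for c in cited)
        scores.append(1.0 if cited and nli_entails(premise, s) else 0.0)
    return sum(scores) / len(scores) if statements else 0.0

def citation_precision(statements, citations, docs, nli_entails):
    relevant, total = 0, 0
    for s, cited in zip(statements, citations):
        for c in cited:
            total += 1
            alone = nli_entails(docs[c], s)                          # (a) supports s by itself?
            rest = [docs[d] for d in cited if d != c]
            rest_ok = bool(rest) and nli_entails(" ".join(rest), s)  # (b) rest still supports s?
            if not alone and rest_ok:
                continue                                             # c counted as irrelevant
            relevant += 1
    return relevant / total if total else 0.0

def citation_f1(precision, recall):
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0
```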
D Human Evaluation Details

Considering the open-ended nature of long-form QA tasks, automatic evaluation of correctness may not cover all possible answers. Furthermore, the evaluation of citation quality is constrained by the capabilities of the off-the-shelf NLI model, which may not adequately detect cases of partial support. Therefore, we conduct a human evaluation to assess the attribution quality and correctness of START. We recruited two annotators, each holding at least a bachelor's degree, to participate in our study. To evaluate citation quality, annotators are asked to verify whether each statement in the responses is fully supported, partially supported, or not supported by the cited documents, and to identify error types if the statement is not fully supported. Next, we evaluate the overall quality of the responses, focusing on comprehensiveness and correctness. Annotators are asked to rate both comprehensiveness and correctness using a 5-point Likert scale, capturing different levels of content coverage and factuality.

E Baselines

Knowledge Distillation: We employ supervised fine-tuning to teach Llama-2-13B to generate responses with citations, utilizing training data distilled from the most advanced LLMs. Specifically, the queries and documents are sourced from our synthetic dataset, and the attributed responses are generated by Llama-3-70B-Instruct / Mixtral-8x7B-Instruct.

Self-RAG (Asai et al., 2023): This method trains the LLM to generate text with reflection tokens, which are categorized into retrieval and critique tokens to indicate, respectively, the need for retrieval and the attributability of its generation. Specifically, it first collects over 145,619 supervised examples by prompting GPT-4 with specific instructions to generate responses with reflection tokens for knowledge-intensive queries. These data are then used to train the LLM to generate responses with self-reflection via supervised fine-tuning.

AGREE (Ye et al., 2023): This method trains the LLM to generate grounded claims with citations and to identify unverified claims. Specifically, it first collects 4,500 attribution examples via post-hoc attribution with the help of an NLI model. These data are then used to train the model to generate grounded responses with citations and to clearly state the unsupported statements. An iterative retrieval process searches for additional information for the unsupported statements via a test-time adaptation (TTA) strategy.

APO (Li et al., 2024): This method models attributed text generation as a preference learning task. Specifically, the model is first trained on 6,330 human-labeled, high-quality attribution examples via supervised fine-tuning to learn the basic ability of attribution. It then leverages automatically constructed preference data for preference learning, where a positive response is generated from relevant documents accompanied by a positive prompt, while a negative response is generated using irrelevant documents or a negative prompt.

FGR (Huang et al., 2024a): This method first collects 3,000 in-domain user queries along with retrieved documents and then leverages ChatGPT to generate high-quality attributed responses. These data serve as training data to teach the model the basic ability of citation generation via supervised fine-tuning. Subsequently, the method designs reward models to teach the model to generate well-supported and accurate responses via fine-grained reinforcement learning.

To ensure a fair comparison, we employ the same base model (Llama-2-13b-base) for all baselines. For Self-RAG, AGREE, and APO, we directly use their published experimental results. In the case of FGR, which does not provide Llama-2-13b-base results, we reproduce the experiments using the official code and the same settings provided by the authors.

F Implementation Details

In all experiments, training is conducted on eight A100-80GB GPUs, leveraging DeepSpeed stage 3 (Rasley et al., 2020) for multi-GPU distributed training with bfloat16 precision. During the initial warm-up stage, we employ the AdamW (Loshchilov and Hutter, 2019) optimizer with a warm-up ratio of 0.03. The total batch size is set to 64, and the learning rate is 2e-5. The maximum input sequence length is 2048 tokens. The model is trained on only 20% of the synthetic dataset for two epochs in this stage. This strategy is designed to prevent the model from overfitting to the synthetic data during the warm-up stage, enabling it to generate more diverse samples in the subsequent rejection-sampling fine-tuning stage. In the self-improving stage, we conduct rejection-sampling fine-tuning for three epochs at each iteration, maintaining the same training settings as those used during the warm-up stage.
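For reference, the warm-up hyperparameters above map onto a standard Hugging Face training configuration roughly as follows. This is a sketch rather than the released configuration; in particular, the split of the global batch of 64 into per-device batch size and gradient accumulation steps, and the file names, are our assumptions.

```python
# Sketch of the warm-up SFT configuration described above (assumptions noted in comments).
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="warmup-sft",              # placeholder output path
    num_train_epochs=2,                   # warm-up: 2 epochs on 20% of the synthetic data
    learning_rate=2e-5,
    warmup_ratio=0.03,
    optim="adamw_torch",                  # AdamW optimizer
    bf16=True,                            # bfloat16 training precision
    per_device_train_batch_size=2,        # 8 GPUs x 2 x 4 accumulation steps = global batch 64 (assumed split)
    gradient_accumulation_steps=4,
    deepspeed="ds_zero3_config.json",     # DeepSpeed ZeRO stage 3 config (placeholder filename)
)
```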
To obtain the highest-quality responses during rejection sampling, we set the threshold for the attributability reward at 1.0, ensuring that every statement in the response is fully supported by the cited documents. For comprehensiveness, we set the threshold to 0.8, which means that at least 80% of the statements need to be cited. Subsequently, during the fine-grained preference optimization, the model is further trained for one additional epoch using a learning rate of 1e-5.

During evaluation, we use the vLLM framework (Kwon et al., 2023) for efficient inference. Unless otherwise specified, the sampling parameters are configured with a temperature of 1.0 and a top-p of 0.95. We present the detailed prompts used during the evaluation process in Figure 7.
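For illustration, the inference setup above can be reproduced with a few lines of vLLM; the checkpoint path and the maximum generation length below are placeholders, and the prompts are assumed to be instantiated from the templates in Figure 7.

```python
# Inference sketch matching the stated sampling parameters (temperature 1.0, top-p 0.95).
from vllm import LLM, SamplingParams

llm = LLM(model="PATH_TO_TRAINED_MODEL")                                 # placeholder checkpoint path
sampling = SamplingParams(temperature=1.0, top_p=0.95, max_tokens=512)   # max_tokens is an assumption

prompts = ["<prompt instantiated from the Figure 7 template>"]           # question + retrieved documents
outputs = llm.generate(prompts, sampling)
answers = [out.outputs[0].text for out in outputs]
```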
synthetic_cpt
7
Hybrid_Training_Approaches_for_LLMs_Leveraging_Real_and_Synthetic_Data_to_Enhance_Model_Performance_in_Domain-Specific_Applications.pdf
4 2 0 2 p e S 5 2 ] L C . s c [ 1 v 3 3 4 7 1 . 9 0 4 2 : v i X r a Preprint HDFLOW: ENHANCING LLM COMPLEX PROBLEM- SOLVING WITH HYBRID THINKING AND DYNAMIC WORKFLOWS Wenlin Yao, Haitao Mi, Dong Yu Tencent AI Lab Bellevue, WA 98004, USA {wenlinyao,haitaomi,dyu}@global.tencent.com ABSTRACT Despite recent advancements in large language models (LLMs), their performance on complex reasoning problems requiring multi-step thinking and combining var- ious skills is still limited. To address this, we propose a novel framework HDFlow for complex reasoning with LLMs that combines fast and slow thinking modes in an adaptive manner. Our approach consists of two key components: 1) a new approach for slow, deliberate reasoning called Dynamic Workflow, which auto- matically decomposes complex problems into more manageable sub-tasks and dynamically designs a workflow to assemble specialized LLM or symbolic rea- soning tools to solve sub-tasks; 2) Hybrid Thinking, a general framework that dy- namically combines fast and slow thinking based on problem complexity. Finally, we propose an easy-to-scale method for automatically synthesizing a large-scale dataset of 27K challenging reasoning problems for complex reasoning and a hy- brid thinking tuning method that trains smaller LLMs on this dataset to internalize the fast/slow hybrid reasoning strategies. Experiments on four reasoning bench- mark datasets demonstrate that our slow thinking with dynamic workflows signif- icantly outperforms Chain-of-Thought, and hybrid thinking achieves the highest accuracy while providing an effective balance between computational efficiency and performance. Fine-tuning using our hybrid thinking approach also signifi- cantly boosts the complex reasoning capabilities of open-source language models. The results showcase the promise of slow thinking, dynamic workflows, and hy- brid thinking in expanding the frontier of complex problem-solving with LLMs1. 1 INTRODUCTION Large language models (LLMs) have demonstrated remarkable capabilities across a wide range of tasks, from code generation and mathematical reasoning to natural language understanding and gen- eration. However, their performance on complex reasoning problems that require multi-step think- ing and various skills is still limited. Recent advancements in symbolic reasoning and tool usage, such as AlphaCode (Li et al., 2022; AlphaCode Team), AlphaGeometry (Trinh et al., 2024), and AlphaProof (AlphaProof/AlphaGeometry teams), have shown significant improvements in specific domains by integrating LLMs with specialized procedures and symbolic reasoning engines. Var- ious prompting strategies, such as Chain-of-Thought (CoT) (Wei et al., 2022), Tree of Thoughts (ToT) (Yao et al., 2024), and Graph of Thoughts (GoT) (Besta et al., 2024a), have been developed to enable different reasoning topologies to enhance LLM problem-solving capabilities. Despite these advancements, enhancing the reasoning abilities of LLMs to solve challenging problems across di- verse domains in a unified framework remains crucial for expanding their real-world applicability. Existing methods for complex reasoning with LLMs have several limitations. First, complex problem-solving often requires combining various knowledge domains, skills, and tool usage. 
While previous approaches such as AlphaCodium (Ridnik et al., 2024) and Alphageometry (Trinh et al., 2024) have demonstrated the potential of combining language models and symbolic reasoning to 1Code and data will be released at https://github.com/wenlinyao/HDFlow. 1 Preprint solve complex problems, they rely on manually designed workflows tailored to specific domains (i.e., competitive programming or geometry theorem proving). The language model and symbolic engine take predefined turns in a rigid problem-solving process. This limits the applicability and adaptability of these systems to broader domains. Thus, we aim to enhance the generic problem- solving capabilities of LLMs by dynamically alternating between natural language reasoning in the “text space” and symbolic reasoning in the “symbolic space” based on the problem at hand. This dynamic integration of the two reasoning modes enables the system to address a much broader range of problems and adapt the problem-solving process to the unique requirements of each task. Second, traditional approaches to complex problem-solving with LLMs often rely on a single mode of think- ing, which may struggle with more intricate tasks that demand a deliberate, analytical approach. For example, many approaches employ a fixed reasoning strategy, such as CoT prompting, regardless of the problem’s complexity. For instance, OpenAI’s most recent o1 model2 only engages in a sin- gular deep thinking mode despite the complexity of the user’s query. This can lead to suboptimal performance on tasks that require a more deliberate, multi-step approach. While multi-agent frame- works such as AutoGPT (Significant Gravitas), ReAct Yao et al. (2022), and AutoGen (Wu et al., 2023) have addressed some aspects of this challenge by enabling recursive goal decomposition, in- terleaving reasoning and acting, and state-driven workflows, they do not fully exploit the potential of thinking approaches that can switch between intuitive thinking and more analytical thinking modes based on problem complexity. Finally, as problem complexity increases, the performance of exist- ing approaches tends to degrade significantly, highlighting the need for frameworks that can scale to handle even the most challenging reasoning problems. Recently, OpenAI o1 model (OpenAI) demonstrates the potential to consistently improve LLM performance of complex reasoning with compute scaling in inference-time through deep thinking. To address these limitations, we propose a novel framework for complex reasoning with LLMs that combines fast (system I) and more analytical slow thinking (system II) adaptively, inspired by the dual process theory of human cognition (Daniel, 2017). Our approach consists of two key components. First, we introduce a new approach for slow, deliberate reasoning called Dynamic Workflow, which automatically decomposes complex problems into more manageable sub-tasks. It then dynamically designs a workflow to assemble specialized LLM or symbolic tools to solve each sub-task. To achieve this, the dynamic workflow orchestrates a team of specialized LLM experts, each contributing unique domain knowledge or tool usage, to solve the sub-tasks in a structured manner. Second, we propose Hybrid Thinking, a general framework that dynamically combines fast and slow thinking based on problem complexity. For simpler tasks, the model defaults to a fast-thinking mode using CoT strategy. 
When the model’s confidence in the fast thinking output is low, it automatically switches to slow thinking with dynamic workflow, allowing for more efficient and more accurate problem-solving. Finally, to train local LLMs for complex reasoning, we present an easy-to-scale method for automatically synthesizing a large-scale dataset of 27K challenging reasoning problems and propose a hybrid thinking tuning approach that finetunes open-source LLMs on this dataset, enabling them to internalize the fast/slow hybrid reasoning strategies. We conduct experiments on four reasoning benchmark datasets (i.e., BBH (Suzgun et al., 2022), MATH (Hendrycks et al., 2021), Game of 24 Yao et al. (2024), DeepMind Math (Saxton et al., 2019). Experiments using GPT-4-Turbo reveal that slow thinking with dynamic workflows sig- nificantly outperformed CoT, with an average accuracy improvement of 22.4%. Hybrid thinking, which combines fast and slow thinking, achieved the highest accuracy on three of the four datasets. While slow thinking required the most inference tokens, hybrid thinking struck an effective balance between computational efficiency and performance. Furthermore, fine-tuning Llama-3-8B-Instruct using hybrid thinking significantly boosted performance across all datasets compared to the original model. Hybrid thinking after fine-tuning yielded accuracy gains of 10-23% over CoT prompting, with broad improvements across different subject areas in MATH. Overall, the results demonstrate the promise of slow thinking with dynamic workflows and hybrid thinking in enhancing the complex problem-solving abilities of LLMs. 2o1-preview model tested on Sept.24, 2024. o1-preview model thinks for a few seconds to users’ casual conversational queries such as How are you? 2 Preprint 2 RELATED WORK Symbolic Reasoning and Tool Usage. Bridging LLMs with symbolic reasoning and tool usage has demonstrated significant improvements across various domains. AlphaCode (Li et al., 2022; AlphaCode Team) combines LLMs with a specialized search and reranking mechanism, achieving top-tier performance in competitive programming. Similarly, AlphaCodium (Ridnik et al., 2024) improves AlphaCode’s performance by applying a predefined multi-stage process of problem anal- ysis, solution generation, and iterative testing and bug fixing. By using an evolutionary search procedure guided by an LLM, FunSearch (Romera-Paredes et al., 2024) can discover new mathe- matical constructions and algorithmic heuristics. AlphaGeometry (Trinh et al., 2024) leverages a neuro-symbolic system trained on synthetic data to guide a symbolic deduction engine, achieving near-expert performance in geometry theorem proving. Chain of Code (Li et al., 2024) encourages LLMs to write pseudocode for challenging sub-problems, which is then executed by the LM itself when it cannot be handled by a standard interpreter. These approaches rely on carefully designing when and how to integrate symbolic reasoning for each task domain. Prompting Strategies. Various prompting strategies have been developed to enable different rea- soning topologies (Besta et al., 2024b) for enhancing LLM problem-solving capabilities. Chain- of-Thought (CoT) prompting (Wei et al., 2022) first introduced the concept of generating inter- mediate reasoning steps to improve performance on complex tasks. 
Building upon this, the Tree of Thoughts (ToT) (Yao et al., 2024) enables the exploration of multiple potential reasoning paths and incorporates deliberate decision-making through self-evaluation and backtracking. Graph of Thoughts (GoT) (Besta et al., 2024a), models LLM-generated information as an arbitrary graph where thoughts are vertices and dependencies are edges, allowing for more complex reasoning structures and outcomes. In a different direction, Program of Thoughts (PoT) approach (Chen et al., 2022) disentangles computation from reasoning by expressing the reasoning process as a program, with external computation handling numerical operations. SELF-DISCOVER (Zhou et al., 2024) introduces a self-discovery process where LLMs autonomously select and compose multiple atomic reasoning modules into explicit reasoning structures. Our hybrid thinking approach allows for the efficient resolution of tasks within the LLM’s core capabilities through direct reasoning, while adap- tively engaging in deeper, multi-step workflows for more complex problems. Multi-Agent Frameworks for Task-Solving. Recent advancements also led to the development of various frameworks for complex task-solving and multi-agent collaboration. AutoGPT (Significant Gravitas) pioneered the idea of using LLMs for recursive goal decomposition and task completion, where sub-tasks are then performed sequentially to yield a larger result. ReAct (Yao et al., 2022) in- troduced a method for interleaving reasoning and acting, allowing LLMs to generate both reasoning traces and task-specific actions. Reflexion (Shinn et al., 2024) further enhanced language agents’ capabilities by incorporating verbal reinforcement learning, enabling them to reflect on feedback and improve decision-making. MetaGPT (Hong et al., 2024) addressed the challenge of LLM hal- lucination in multi-agent systems by incorporating human workflows and standardized operating procedures into the framework. AutoGen (Wu et al., 2023) presented a flexible multi-agent con- versation framework that allows for customizable, conversable agents with human participation. CAMEL (Li et al., 2023) introduced a role-playing approach to facilitate autonomous cooperation among communicative agents. Finally, StateFlow (Wu et al., 2024) proposed a state-driven work- flow that conceptualizes complex task-solving processes as state machines, enhancing control and interpretability. In contrast to these existing works, our approach uniquely integrates hybrid think- ing, combining fast and slow thinking modes with automatic workflows, to enhance LLMs’ ability to tackle complex reasoning problems more effectively and with greater adaptability. 3 OVERVIEW OF THE HYBRID THINKING APPROACH Our hybrid thinking approach (Figure 1) combines the strengths of fast and slow thinking modes to enable LLMs to more effectively solve complex reasoning problems. It consists of the following three key components. 1) Fast Thinking with Direct CoT. In the fast thinking mode, the LLM uses a direct chain of thought (CoT) approach to quickly solve the task query if possible. This leverages the LLM’s core abilities to perform certain types of reasoning efficiently by directly generating the rationale and the final answer. 2) Adaptive Combination of Fast and Slow Thinking. Next, we employ a self-verification mechanism where the LLM examines each step of the fast-thinking CoT 3 Preprint Figure 1: Overview of our HDFlow approach for complex problem-solving. 
Overall, it is a dual- path hybrid thinking approach, beginning with a CoT solver for initial fast reasoning followed by verification of each reasoning step. If verification fails, the process transitions to a slower, more deliberate ”Dynamic Workflow Solver.” This solver iterates until a verified answer is obtained, in- corporating a final verification step before concluding with a solution. Figure 2: Three-Stage Framework of Dynamic Workflow. The dynamic workflow design begins with Problem Reflection, where key elements are analyzed and sub-tasks identified. Stage 2 focuses on Expert Design, utilizing a variety of specialists and tools to architect an optimal workflow. Stage 3 involves constructing and executing the workflow graph to get the final result. reasoning to assess its confidence in the generated answer. This is achieved by applying the LLM to analyze the coherence, logical consistency, and correctness of each reasoning step in the context of the given query. If the LLM detects any inconsistencies, errors, or low-confidence steps during this self-verification process, it triggers a switch to the slow-thinking mode. 3) Slow Thinking with Dynamic Workflow. To tackle highly complex tasks, we propose a novel slow-thinking mechanism called Dynamic Workflow (Figure 2), which automatically decomposes the original task into sub- tasks and dynamically switches between verbal reasoning and symbolic reasoning to solve each sub- task. Our approach starts with multi-level problem reflection and decomposition. It then designs a workflow to assemble specialized LLM skills or symbolic tools for sub-tasks. Next, we dynamically chain together the sub-task reasoning steps into a multi-step workflow and execute the workflow. Finally, all sub-task results are aggregated into the final answer to the original query. We will present details in Section 4. By first attempting fast thinking, our hybrid thinking approach can efficiently handle queries that are within the LLM’s core capabilities. When the query exceeds what fast thinking alone can confidently handle, the hybrid thinking will smoothly transition to a slow thinking workflow to enable the LLM to tackle a broader range of challenges accurately. 4 SLOW THINKING WITH DYNAMIC WORKFLOW In contrast to the rapid responses of fast thinking (e.g., CoT), our new slow-thinking mechanism applies dynamic workflow to enable a more deliberate, analytical approach to complex problem- solving (see Figure 2). It allows an LLM to dynamically transition between reasoning in the text space (natural language reasoning) and the symbolic space (symbolic reasoning). The high-level idea is we first let the LLM decompose the original reasoning problem into several more manageable sub-tasks and solve each sub-task to form the final solution. When necessary, the LLM Engine will translate the sub-problem from the text space into the symbolic space, enabling the symbolic engine3 3In this paper, we mainly use program to achieve symbolic reasoning. 4 TaskQueryYesFastThinkingCoTSolverVerifyEachReasoningStepDynamicWorkflowSolverVerifyAnswerNo(retry)FinalAnswerNoSlowThinkingYes2SlowThinkingwithDynamicWorkflowAnalyzeKeyElementsIdentifySub-tasksStage1:ProblemReflectionDesignExpertsWorkflowArrangementStage2:WorkflowDesignExpertswithspecialties•Linguist•Mathematician•Data Scientist•…Expertswithtoolusage•Python•SymbolicEngine•…Workflow(pseudocode)Stage3:GraphConstructionandExecution Preprint to perform precise symbolic reasoning. 
The results are then mapped back into natural language using the LLM Engine. By decomposing the problem, combining the strengths of both natural language and symbolic reasoning in a tailored workflow, and executing it from start to finish, LLMs can tackle very hard problems that require multiple steps of accurate reasoning. Appendix B presents a complete example solution using our dynamic workflow approach and compares with the solution using OpenAI o1-preview. Prompts used are listed in Appendix C. 4.1 BREAKING DOWN COMPLEXITY: PROBLEM ANALYSIS AND DECOMPOSITION (STAGE 1) The first step in our slow thinking is problem analysis and planning. We aim to break down the original problem statement into more manageable sub-tasks. Specifically, the LLM is asked to analyze the key elements of the query, such as available information, constraints, and the desired output. It then identifies logical sub-goals needed to progress from the initial state to the solution. This decomposition allows the LLM to approach the problem in a structured manner, focusing on one part at a time. Therefore, the LLM can catch gaps in reasoning and handle complex problems that the fast thinking of CoT alone would struggle with. Problem Reflection. The first step in tackling complex problems is conducting a thorough problem reflection. This involves the LLM analyzing the original problem and restating it in its own words to demonstrate understanding. Our problem reflection includes two parts: 1) Identifying the core objective or question posed by the problem. 2) Recognizing any constraints, assumptions, or special conditions mentioned. By internalizing the problem through reflection, the LLM can gain a solid understanding of what needs to be accomplished before proceeding to decomposition. Subtask Decomposition. Once the problem is well understood, the LLM is instructed to perform a multi-level decomposition to break it down into some tractable sub-problems. The LLM is asked to follow four principles to achieve an optimal decomposition. Sequential dependency. The sub- problems are organized in a logical sequence, such that the outputs of earlier steps feed into subse- quent ones, creating a structured workflow from start to finish. Non-overlapping. Each sub-problem represents a distinct portion of the original problem, with no duplication of work between sub- problems. This keeps the overall solution efficient. Proper Decomposition. The sub-problems are decomposed to the optimal level of granularity - not so small that there are too many to track and coordinate, but not so large that they are still struggling to solve. Modular. Where appropriate, sub-problems are defined in a generalizable, modular way, such that the logic and code used to solve them can potentially be reused to solve similar problems in other contexts. Integrating Symbolic Reasoning. Another key aspect of our approach is leveraging the symbolic engines to modularize the solution and handle well-defined sub-tasks more accurately. For example, some sub-tasks in the decomposition can often be addressed by writing code functions. Therefore, we explicitly instruct the LLM to consider sub-tasks that can be well handled by writing and exe- cuting modular code in subtask decomposition. 4.2 ORCHESTRATING EXPERTISE: WORKFLOW DESIGN (STAGE 2) With the problem decomposed into sub-tasks, our approach next proposes a team of specialized experts, each contributing unique skills and tools, arranged in a dynamic workflow. 
The central component is a Meta-Expert, initialized from the foundation LLM, designs the expert team, and coordinates their efforts. The orchestration process consists of four steps. 1. Design of Experts. Based on the identified sub-tasks, the Meta-Expert designs a team of specialized experts with one expert solving one sub-task. Each expert is assigned a unique name and a clear description of their specific skills, knowledge, and responsibilities4. The dynamic workflow leverages two types of experts to handle each sub-task, enabling a seam- less integration of verbal and symbolic reasoning. The first type are specialized experts initiated from LLMs, such as linguists, mathematicians, and data scientists. These experts bring domain-specific knowledge and skills to the workflow, allowing for sophisticated verbal reasoning and analysis within their fields. The second type of expert focuses on 4Our implementation leverages JSON for efficient data management and extraction across the system. 5 Preprint symbolic reasoning, particularly using programming or other symbolic engines5. For ex- ample, some sub-tasks can often be addressed by writing compact, targeted code functions. This allows the LLM to handle common operations such as mathematical calculations, data parsing and manipulation, and so on without bringing errors. 2. Workflow Arrangement. The Meta-Expert arranges the experts into an efficient workflow sequence. Each expert’s output serves as the input for the next, progressively moving towards the final solution. The Meta-Expert ensures there is no redundancy of functions across experts. 3. Collaboration and Iteration. As the experts work through the problem, the Meta-Expert facilitates collaboration and puts together their inputs and outputs. For sub-tasks involving logical reasoning, mathematical operations, data structures, or programming, the Meta- Expert provides strategic guidance and sends the implementation details to the correspond- ing symbolic reasoning experts. These experts utilize LLMs to generate code, which is then executed to perform symbolic reasoning in Stage 3. 4. Final Review and Conclusion. The last expert in the workflow, often an LLM specialist, is tasked with holistically reviewing the findings of the previous experts and generating the final answer to the original problem. By combining the power of specialized LLMs and the usage of tools into a thoughtfully designed, adaptable workflow, our approach can tackle complex problems that are beyond the capabilities of the original model. The Meta-Expert serves as the intelligent connector, analyzing the unique needs of each problem and dynamically assembling the optimal workflow. Our approach creates a bridge between natural language reasoning and rule-governed symbolic reasoning. 4.3 FLOW EXECUTION: CONSTRUCTING AND RUNNING WORKFLOWS (STAGE 3) With the workflow graph generated, our approach finally proceeds to execute the graph to get the final result. The execution follows the dependency order, ensuring the correct flow of data between experts. To ensure robust execution, if any of the generated code encounters errors, the correspond- ing symbolic reasoning experts will trace the issue, use the error message to repair the code, and rerun it. As the workflow progresses, the downstream experts continually update their memory with the intermediate results and insights generated by previous experts. 
Upon completion of the work- flow execution, the last LLM expert analyzes the results, identifies key findings, and summarizes them into a final answer to the original problem. The workflow execution is not a one-time process. The LLM continually assesses the quality and correctness of the final generated solutions and iden- tifies potential errors. It engages in iterative rerun by applying a different problem decomposition, expert assignments, or adjusting the workflow structure. 5 MODEL TUNING OF HYBRID THINKING In our experiments, we observed that open-source language models (typically those with around 7B parameters) often struggle with advanced meta-planning and problem-solving skills required for solving difficult reasoning tasks. To address this limitation and develop local smaller models with hybrid thinking abilities comparable to the large models, we construct a comprehensive training dataset and propose hybrid thinking tuning to improve the complex reasoning abilities of local mod- els. We define “local” models as models that can be trained and deployed on local hardware with limited computational resources, such as the Llama-3 model (Meta, 2024). Our goal is to improve the complex reasoning abilities of these local models through our proposed approach. The primary challenge lies in constructing a large-scale dataset of reasoning problems that are suffi- ciently diverse, high-quality, and difficult. Such a dataset is crucial for teaching smaller local models to perform complex reasoning tasks. However, manually curating such a dataset presents significant difficulties in ensuring a wide range of problem domains and maintaining high standards in problem formulation. As a result, it is extremely time-consuming and expensive to ask human experts to 5We mainly use Python code interpreter as the symbolic engine in our experiments, but our approach can be extended to other symbolic engines, such as the symbolic deduction engines used in AlphaGeometry (Trinh et al., 2024) to solve Euclidean geometry problems. 6 Preprint Figure 3: Data Synthesis of Complex Reasoning Problems. The creation and refinement of reasoning problems contain three steps. Step 1 involves brainstorming and generating high-level descriptions of new reasoning tasks, either inspired by human-written tasks or directly writing puzzle tasks. Step 1 produces 45K descriptions of reasoning tasks. Step 2 performs semantic matching and deduplica- tion and results in 18K reasoning task descriptions. The final Step 3 writes concrete questions based on task descriptions and applies a CoT validation process to filter or refine the tasks down to 27k valid reasoning problems. **Interpret a Morse Code Message**: Given a string of Morse code, translate it into English text, adhering to standard Morse code conventions. The task involves recognizing each sequence of dots (.) and dashes (-) as letters and spaces as separators for words. A Morse code sequence has been found etched into an old artifact. It is believed to be a significant mathematical formula. The Morse code is: ‘-. .. -. . - -.– / - .... .-. . . / - .. – . ... / ... . ...- . -. - -.– / ..-. .. ...- . / . –.- ..- .- .-.. ... / — -. . / .... ..- -. -.. .-. . -.. / .- -. -.. / - .– . -. - -.– / - .... .-. . .‘. Decode this Morse code into English text, adhering to the standard Morse code conventions where sequences of dots (.) and dashes (-) represent letters, and spaces are used to separate words. 
**Cryptarithm Task: Solve the Equation**: In this cryptarithm, each letter represents a unique digit from 0-9: **CROSS + ROADS = DANGER** No number may begin with zero. Determine the digit each letter represents to satisfy the equation. In a game of spies, two teams use different substitution ciphers to communicate. Team A uses a cipher where each letter is replaced by the letter three positions to the right in the alphabet (with wrapping), while Team B uses a cipher where each letter is replaced by the letter four positions to the left (with wrapping). During the game, a message encrypted using Team B’s cipher was intercepted: “XLMW MW XLI GIRXVI.” Decode this message assuming it was meant for Team A but encrypted by Team B. Figure 4: Three example reasoning problems generated by our data synthesis approach. consistently generate problems meeting all criteria. Therefore, we propose a novel approach for au- tomatically generate a variety of reasoning problems and collect solutions of hybrid thinking, which can then be used to train our local LLMs. 5.1 REASONING PROBLEMS SYNTHESIS To enhance reasoning task diversity and coverage, our data synthesis pipeline consists of three steps In the first step, we strategically leverage human-authored seed tasks to inspire the (Figure 3). creation of new reasoning problems (similar to Self-Instruct (Wang et al., 2023)) or let the LLM brainstorm reasoning puzzles that cover a variety of task formats, difficulty levels, and problem domains. This step only focuses on generating high-level task descriptions to encourage diversity. In the second step, we apply deduplication to remove near-identical tasks. Finally, we apply LLMs again to write three specific problems based on the task descriptions and validate those problems. Task Generation Inspired by Seed Tasks. The first step of our reasoning data synthesis pipeline is generating an expanded set of reasoning tasks. We augment the few-shot prompts with 10 high-level task descriptions randomly sampled from the 214 BigBench tasks (Srivastava et al., 2022). Next, 7 Generate10newtasksinspiredby10taskssampledfrom214human-writtentasksBrainstorm10puzzletasksStep1Step2Output:ReasoningTaskDescriptionsDeduplicationWrite3reasoningproblemsbasedonthetaskdescriptionStep3ApplyCoTtovalidateproblemsOutput:ReasoningProblems4 Preprint we employ the 10 seed tasks as in-context examples to prompt LLMs6 to generate 10 new tasks inspired by seed tasks. To encourage additional diversity in the generated tasks, we also let the LLM to brainstorm different genres of puzzles, such as crossword puzzles, math puzzles, number puzzles, relational puzzles, logic puzzles, etc. By repeating two strategies, we produce an expanded pool of 45K candidate reasoning tasks that creatively cover diverse reasoning types and scenarios. Data Filtering and Deduplication. The previous task generation step produces a sizable pool of candidate reasoning tasks. However, the generated data is likely to contain duplicate or highly similar entries. To address this, we employ a comprehensive data filtering and deduplication process. First, we apply n-gram to identify nearly identical tasks. Next, we filter out any tasks or problems that fail to meet our quality criteria, such as insufficient complexity (e.g., trivial one-step questions), or ambiguity in the description by prompting GPT-4-Turbo. This helps ensure that only high-quality, unambiguous reasoning tasks are retained in the final dataset. 
Through this rigorous deduplication and filtering process, we condense the pool of 45K generated tasks down to 18K deduplicated tasks. Reasoning Problem Synthesis. In the last step, we aim to synthesize multiple concrete reasoning problems for each of the 18K tasks produced by the previous task generation and deduplication steps. Taking each task’s description as input, we prompt an LLM to generate 3 distinct questions or problems that test the specified reasoning skill. This enables us to turn each high-level task into a set of actual solvable questions, resulting in a pool of 54k reasoning problems. To ensure the generated problems are well-posed and solvable, we employ a chain-of-thought (CoT) based validation step. We prompt GPT-4-Turbo to apply CoT to each synthesized problem and analyze if the resulting reasoning steps coherently lead to a definite answer. Problems for which the model fails to converge to a clear solution or exhibits inconsistent reasoning are filtered out. This results in the final 27K reasoning problems. Figure 4 provides three examples of reasoning problems generated. 5.2 FINETUNING OPEN-SOURCE MODELS ON SYNTHESIZED DATA To prepare the training data for enhancing the open-source models’ complex problem-solving abili- ties, we utilize the GPT-4-turbo model to collect reasoning trajectories on the dataset of synthesized and mathematical problems. For each problem, GPT-4-turbo generates one or several fast/slow reasoning trajectories using the hybrid thinking approach. Each reasoning trajectory consists of a sequence of (query, answer) pairs representing the model’s step-wise hybrid thinking process. Therefore, we use all (query, answer) pairs from the reasoning trajectories to construct the train- ing data, capturing the complete problem-solving process. When multiple reasoning trajectories are produced (iterative retry), only the solution trajectory that passes the verification process is retained in the training set to optimize the model’s problem-solving capabilities, while the verification results for all trajectories are kept to enhance the model’s self-verification abilities. The Llama-3 models have demonstrated superior performance compared to other models of similar size due to significant enhancements in both pretraining and post-training (Meta, 2024). Therefore, we choose the Llama-3-8B-Instruct model as the foundation model for our hybrid thinking tuning experiments. Specifically, The Llama-3-8B-Instruct model was fine-tuned using 8 A100 GPUs with bf16 precision7. The training utilized a global batch size of 128, spanning 4 epochs. The model employed the AdamW optimizer of a learning rate of 2.0e-5, with a maximum sequence length of 4096 tokens and a maximum of 2048 new tokens generated. 6 EXPERIMENT 6.1 REASONING BENCHMARK DATASETS BIG-Bench Hard (BBH) (Suzgun et al., 2022): A subset of 27 challenging tasks from the BIG- Bench benchmark (Srivastava et al., 2022), which aims to measure the capabilities and limitations of language models across diverse text-based tasks. MATH (Hendrycks et al., 2021): A dataset consisting of 5,000 test problems from mathematics competitions. These problems assess the math- ematical problem-solving ability and often require the application of problem-solving techniques 6We use both GPT-4-0125 and Claude-3-Opus to encourage diversity. We find Claude-3-Opus does generate very different reasoning tasks compared with GPT-4-0125. 7We adopt LitGPT (AI, 2023) in our model training. 8 Preprint Methods CoT (Fast Think.) Slow Think. 
and heuristics beyond standard K-12 mathematics tools. Game of 24 (Yao et al., 2024): A mathematical reasoning challenge dataset containing 1,362 games sorted by human solving time. The goal is to use four given numbers and basic arithmetic operations (+ - * /) to obtain 24. DeepMind Math (Saxton et al., 2019): A dataset consisting of various types of mathematics questions, released with both generation code and pre-generated questions. This dataset provides an additional measure of algebraic generalization abilities.

Methods             BBH            MATH           DeepMind Math   GameOf24       Avg.
CoT (Fast Think.)   77.8           62.6           53.4            9.3            50.8
Slow Think.         87.1 (+9.3)    67.6 (+4.6)    67.7 (+14.3)    70.3 (+61.0)   73.2 (+22.4)
Hybrid Think.       87.8 (+10.0)   70.0 (+7.9)    59.6 (+6.2)     72.0 (+62.7)   72.4 (+21.6)

Table 1: Accuracy (%) of GPT-4-Turbo-0125 across different reasoning modes on various datasets. We show the accuracy of the model using Chain of Thought (CoT) vs. slow thinking (with dynamic workflow) and Hybrid Thinking approaches proposed by us. The Fast/Slow indicates the ratio of Fast and Slow Thinking contributions in the Hybrid approach. Results are derived from the top 100 instances for each sub-category in BBH (27 sub-tasks), MATH (7 sub-domains), and GameOf24 (3 difficulty levels) to reduce API cost and ensure replicability. For the DeepMind Math dataset, the top 10 instances from each of the 56 sub-domains were used.

Methods             BBH    MATH   DeepMind Math   GameOf24   Avg. Tokens
CoT (Fast Think.)   351    992    581             387        577.8
Slow Think.         3227   5694   3562            5246       4432.0
Hybrid Think.       1299   4398   1742            4983       3105.5

Table 2: Average number of inference tokens of GPT-4-Turbo-0125 using different reasoning modes on various datasets. Performance is reported in Table 1.

6.2 RESULTS BASED ON PROMPTING

We first conduct experiments by prompting GPT-4-Turbo-0125 (https://platform.openai.com/docs/models; a full list of prompts can be found in Appendix C) to achieve three reasoning modes: Chain of Thought (CoT), Slow Thinking with Dynamic Workflow, and Hybrid Thinking across four benchmark datasets. Table 1 shows that slow thinking with dynamic workflow significantly outperforms CoT by 22.4% on average across the four benchmarks. It also reveals that Hybrid Thinking achieves the best accuracy on three datasets: BBH, MATH, and GameOf24. Notably, both Slow Thinking and Hybrid Thinking consistently outperform CoT across all datasets, with the most dramatic improvements seen in GameOf24, where the gains are 61.0% and 62.7%, respectively. Table 2 illustrates the average number of inference tokens used by each method. CoT consistently used the fewest tokens (average 577.8), while Slow Thinking required the most (4432.0 on average). Hybrid Thinking struck a balance with an average of 3105.5 tokens. A clear trade-off emerged between computational efficiency and performance, with CoT using the fewest tokens but achieving the lowest accuracy. Hybrid Thinking demonstrated a good balance, achieving high accuracy with moderate token usage. These findings suggest that incorporating dynamic workflows and combining fast and slow thinking processes can enhance the reasoning capabilities of LLMs, with Hybrid Thinking emerging as a particularly promising approach.

6.3 RESULTS OF HYBRID THINKING TUNING

We next compare the performance of the original Llama-3-8B-Instruct model and the model after our hybrid thinking tuning. As shown in Table 3, the Llama-3-8B-Instruct model after hybrid thinking tuning significantly outperforms the baseline model on all datasets. Examining the different thinking modes, hybrid thinking consistently provided the best tradeoff between performance and efficiency.
Compared to the CoT baseline, hybrid thinking improved accuracy by 10.6%, 10.2%, 23.1%, and 13.3% on the BBH, MATH, DeepMind Math, and GameOf24 datasets, respectively. Interestingly, we also observe that hybrid thinking tuning enhances Llama-3's fast thinking (CoT) performance across all reasoning tasks, at the cost of increased inference tokens. Table 5 breaks down performance on the MATH dataset into specific subject areas. Again, the Llama-3-8B-Instruct model after hybrid thinking tuning outperforms the original model on all subsets, with gains ranging from 8% on Intermediate Algebra to 23% on Number Theory. Hybrid thinking yielded the highest accuracy in each domain, demonstrating its broad applicability.

Methods                                              BBH            MATH           DeepMind Math   GameOf24       Avg.
Llama-3-8B-Instruct (Original)
CoT                                                  51.7           30.0           18.6            2.7            25.8
Llama-3-8B-Instruct (After Hybrid Thinking Tuning)
CoT (Fast Think.)                                    58.5 (+6.8)    37.0 (+7.0)    34.2 (+15.6)    5.1 (+2.4)     33.7 (+7.9)
Slow Think.                                          61.2 (+9.5)    37.8 (+7.8)    48.8 (+30.2)    15.4 (+12.7)   40.8 (+15.0)
Hybrid Think.                                        62.3 (+10.6)   40.2 (+10.2)   41.7 (+23.1)    16.0 (+13.3)   40.5 (+14.7)

Table 3: Performance comparison of the original Llama-3-8B-Instruct model and the Llama-3-8B-Instruct after our hybrid thinking tuning. We show the accuracy (%) of the model using Chain of Thought (CoT) vs. slow thinking (with dynamic workflow) and Hybrid Thinking approaches proposed by us. The Fast/Slow indicates the ratio of Fast and Slow Thinking contributions in the Hybrid approach. Results are derived from all test instances in BBH, MATH, DeepMind Math and GameOf24.

Methods                                              BBH    MATH   DeepMind Math   GameOf24   Avg. Tokens
Llama-3-8B-Instruct (Original)
CoT                                                  356    496    359             510        430.2
Llama-3-8B-Instruct (After Hybrid Thinking Tuning)
CoT (Fast Think.)                                    720    985    770             1384       964.7
Slow Think.                                          3901   5743   4395            6714       5188.2
Hybrid Think.                                        2521   4414   2577            6371       3970.7

Table 4: Average number of inference tokens of the original Llama-3-8B-Instruct model and the Llama-3-8B-Instruct after our hybrid thinking tuning on various datasets. Performance is reported in Table 3.

Figure 5: Proportion of fast thinking (CoT) and slow thinking (dynamic workflow) applied in hybrid thinking across four datasets. The left is GPT-4-Turbo (performance is shown in Table 1), while the right is Llama-3-8B-Instruct after our hybrid thinking tuning (Table 3).

MATH Subsets          Prealgebra   Algebra     Number Theory   Count. and Prob.   Geometry    Precalculus   Inter. Algebra
Llama-3-8B-Ins. CoT   43.2%        30.2%       15.0%           21.1%              13.4%       12.5%         9.1%
Llama-3-8B-Ins. (After Hybrid Thinking Tuning)
CoT (Fast Think.)     58.9%        53.6%       31.1%           32.5%              24.8%       22.0%         15.6%
Slow Think.           59.7%        52.7%       37.6%           34.2%              23.6%       21.8%         16.3%
Hybrid Think.         63.3%        56.1%       38.0%           35.9%              26.3%       24.5%         17.3%
Fast/Slow             0.69/0.31    0.68/0.32   0.52/0.48       0.48/0.52          0.33/0.67   0.35/0.65     0.30/0.70

Table 5: Accuracy comparison of the original Llama-3-8B-Instruct model and the Llama-3-8B-Instruct after our hybrid thinking tuning on different domains of the MATH dataset. "Count. and Prob." and "Inter. Algebra" stand for "Counting and Probability" and "Intermediate Algebra".
6.4 FAST/SLOW ROUTING ANALYSIS Figure 5 illustrates the proportion of fast thinking and slow thinking (orange) approaches applied by both models when solving complex problems across the datasets. The GPT-4-Turbo model demon- strates a higher reliance on fast thinking for BBH, DeepMind MATH, and Game of 24 tasks com- pared with Llama-3-8B-Instruct model. This observation can be attributed to the fact that GPT-4- Turbo’s fast thinking (in the form of CoT) is more reliable and effective compared to Llama-3-8B- Instruct. As a result, hybrid thinking in GPT-4-Turbo tends to apply more fast thinking since it is sufficient to achieve a correct solution in many cases. In contrast, Llama-3-8B-Instruct after tun- ing exhibits a greater reliance on slow thinking strategies, particularly in complex tasks, where fast thinking alone may not yield the desired results. This highlights the importance of hybrid thinking to improve problem-solving efficiency, suggesting that our method can dynamically adjust the optimal balance between fast and slow thinking based on the model’s downstream reasoning capabilities. In summary, the dynamic combination of fast and slow thinking modes greatly enhanced the model’s problem-solving capabilities. Our results showcase the potential of hybrid thinking approaches to expand the frontier of what LLMs can achieve on challenging tasks. 7 DISCUSSION AND FUTURE WORK Limitations and Potential Improvements. One promising direction is to incorporate a value net- work that scores the successfulness or quality of completing each sub-task within the dynamic work- flow. By integrating such a value network, we can formulate the problem-solving process as a rein- forcement learning task, enabling the optimization and search for the best solution trajectory. This enhancement could lead to more efficient and effective problem-solving strategies, as the model learns to prioritize and select the most promising decompositions and workflows based on predicted values. Generalization to Other Reasoning Tasks. Constructing high-quality and sufficiently challeng- ing reasoning problems for training still remains a significant challenge. While our data synthesis approach offers a scalable solution, ensuring the validity and difficulty of each generated reasoning problem is crucial for effective model development. One potential improvement is to involve hu- man experts in the data synthesis process, allowing them to verify, modify, and curate the generated problems. Integration with Symbolic Reasoning Systems. Our dynamic workflow approach seamlessly inte- grates specialized language models and symbolic reasoning tools, enabling LLMs to tackle complex problems more effectively. However, there is significant potential to extend this integration to more advanced symbolic reasoning systems, such as Lean9 for mathematical theorem proving or other domain-specific tools. Moreover, integrating our approach with tools such as search engines and web browsers could enable LLMs to access and utilize external resources, further amplifying their problem-solving abilities to broader applications. By incorporating more powerful tools into the dynamic workflow, we can expand the range of problems that LLMs can solve. 9https://lean-lang.org/ 11 Preprint 8 CONCLUSION This paper introduces a novel framework HDFlow for enhancing the complex problem-solving capabilities of LLMs through hybrid thinking and dynamic workflows. 
The dynamic workflow mechanism enables LLMs to decompose complex problems into manageable sub-tasks and inte- grate specialized language models and symbolic reasoning tools, while hybrid thinking strategically engages deeper, multi-step reasoning for challenging problems that exceed the capabilities of fast thinking alone. Extensive experiments demonstrate the significant advantages of our approach, with slow thinking with dynamic workflow greatly outperforming CoT and hybrid thinking achieving the highest overall accuracy by balancing efficiency and performance. REFERENCES Lightning AI. Litgpt. https://github.com/Lightning-AI/litgpt, 2023. Google DeepMind AlphaCode Team. Alphacode 2 technical report. URL https: //storage.googleapis.com/deepmind-media/AlphaCode2/AlphaCode2_ Tech_Report.pdf. Google DeepMind AlphaProof/AlphaGeometry teams. Ai achieves silver-medal standard solv- ing international mathematical olympiad problems. URL https://deepmind.google/ discover/blog/ai-solves-imo-problems-at-silver-medal-level/. Maciej Besta, Nils Blach, Ales Kubicek, Robert Gerstenberger, Michal Podstawski, Lukas Gian- inazzi, Joanna Gajda, Tomasz Lehmann, Hubert Niewiadomski, Piotr Nyczyk, et al. Graph of thoughts: Solving elaborate problems with large language models. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 38, pp. 17682–17690, 2024a. Maciej Besta, Florim Memedi, Zhenyu Zhang, Robert Gerstenberger, Nils Blach, Piotr Nyczyk, Marcin Copik, Grzegorz Kwa´sniewski, J¨urgen M¨uller, Lukas Gianinazzi, et al. Topologies of reasoning: Demystifying chains, trees, and graphs of thoughts. arXiv preprint arXiv:2401.14295, 2024b. Wenhu Chen, Xueguang Ma, Xinyi Wang, and William W Cohen. Program of thoughts prompt- ing: Disentangling computation from reasoning for numerical reasoning tasks. arXiv preprint arXiv:2211.12588, 2022. Kahneman Daniel. Thinking, fast and slow. 2017. Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, Dawn Song, and Jacob Steinhardt. Measuring mathematical problem solving with the math dataset. NeurIPS, 2021. Sirui Hong, Mingchen Zhuge, Jonathan Chen, Xiawu Zheng, Yuheng Cheng, Jinlin Wang, Ceyao Zhang, Zili Wang, Steven Ka Shing Yau, Zijuan Lin, Liyang Zhou, Chenyu Ran, Lingfeng Xiao, Chenglin Wu, and J¨urgen Schmidhuber. MetaGPT: Meta programming for a multi-agent collab- orative framework. In The Twelfth International Conference on Learning Representations, 2024. URL https://openreview.net/forum?id=VtmBAGCN7o. Chengshu Li, Jacky Liang, Andy Zeng, Xinyun Chen, Karol Hausman, Dorsa Sadigh, Sergey Levine, Li Fei-Fei, Fei Xia, and Brian Ichter. Chain of code: Reasoning with a language model- augmented code emulator. In Ruslan Salakhutdinov, Zico Kolter, Katherine Heller, Adrian Weller, Nuria Oliver, Jonathan Scarlett, and Felix Berkenkamp (eds.), Proceedings of the 41st Interna- tional Conference on Machine Learning, volume 235 of Proceedings of Machine Learning Re- search, pp. 28259–28277. PMLR, 21–27 Jul 2024. URL https://proceedings.mlr. press/v235/li24ar.html. Guohao Li, Hasan Hammoud, Hani Itani, Dmitrii Khizbullin, and Bernard Ghanem. Camel: Com- municative agents for ”mind” exploration of large language model society. Advances in Neural Information Processing Systems, 36:51991–52008, 2023. 12 Preprint Yujia Li, David Choi, Junyoung Chung, Nate Kushman, Julian Schrittwieser, R´emi Leblond, Tom Eccles, James Keeling, Felix Gimeno, Agustin Dal Lago, et al. Competition-level code generation with alphacode. 
Science, 378(6624):1092–1097, 2022. AI Meta. Introducing meta llama 3: The most capable openly available llm to date. Meta AI, 2024. OpenAI. Learning to reason with llms. learning-to-reason-with-llms/. URL https://openai.com/index/ Tal Ridnik, Dedy Kredo, and Itamar Friedman. Code generation with alphacodium: From prompt engineering to flow engineering. arXiv preprint arXiv:2401.08500, 2024. Bernardino Romera-Paredes, Mohammadamin Barekatain, Alexander Novikov, Matej Balog, M Pawan Kumar, Emilien Dupont, Francisco JR Ruiz, Jordan S Ellenberg, Pengming Wang, Omar Fawzi, et al. Mathematical discoveries from program search with large language models. Nature, 625(7995):468–475, 2024. David Saxton, Edward Grefenstette, Felix Hill, and Pushmeet Kohli. Analysing mathematical rea- soning abilities of neural models. arXiv preprint arXiv:1904.01557, 2019. Noah Shinn, Federico Cassano, Ashwin Gopinath, Karthik Narasimhan, and Shunyu Yao. Reflexion: Language agents with verbal reinforcement learning. Advances in Neural Information Processing Systems, 36, 2024. Significant Gravitas. AutoGPT. URL https://github.com/Significant-Gravitas/ AutoGPT. Aarohi Srivastava, Abhinav Rastogi, Abhishek Rao, Abu Awal Md Shoeb, Abubakar Abid, Adam Fisch, Adam R Brown, Adam Santoro, Aditya Gupta, Adri`a Garriga-Alonso, et al. Beyond the imitation game: Quantifying and extrapolating the capabilities of language models. arXiv preprint arXiv:2206.04615, 2022. Mirac Suzgun, Nathan Scales, Nathanael Sch¨arli, Sebastian Gehrmann, Yi Tay, Hyung Won Chung, Aakanksha Chowdhery, Quoc V Le, Ed H Chi, Denny Zhou, , and Jason Wei. Challenging big- bench tasks and whether chain-of-thought can solve them. arXiv preprint arXiv:2210.09261, 2022. Trieu H Trinh, Yuhuai Wu, Quoc V Le, He He, and Thang Luong. Solving olympiad geometry without human demonstrations. Nature, 625(7995):476–482, 2024. Yizhong Wang, Yeganeh Kordi, Swaroop Mishra, Alisa Liu, Noah A Smith, Daniel Khashabi, and Hannaneh Hajishirzi. Self-instruct: Aligning language models with self-generated instructions. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 13484–13508, 2023. Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in neural information processing systems, 35:24824–24837, 2022. Qingyun Wu, Gagan Bansal, Jieyu Zhang, Yiran Wu, Beibin Li, Erkang Zhu, Li Jiang, Xiaoyun Zhang, Shaokun Zhang, Jiale Liu, Ahmed Hassan Awadallah, Ryen W White, Doug Burger, and Chi Wang. Autogen: Enabling next-gen llm applications via multi-agent conversation framework. 2023. Yiran Wu, Tianwei Yue, Shaokun Zhang, Chi Wang, and Qingyun Wu. Stateflow: Enhancing llm task-solving through state-driven workflows. arXiv preprint arXiv:2403.11322, 2024. Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak Shafran, Karthik Narasimhan, and Yuan Cao. React: Synergizing reasoning and acting in language models. arXiv preprint arXiv:2210.03629, 2022. Shunyu Yao, Dian Yu, Jeffrey Zhao, Izhak Shafran, Tom Griffiths, Yuan Cao, and Karthik Narasimhan. Tree of thoughts: Deliberate problem solving with large language models. Ad- vances in Neural Information Processing Systems, 36, 2024. 13 Preprint Pei Zhou, Jay Pujara, Xiang Ren, Xinyun Chen, Heng-Tze Cheng, Quoc V Le, Ed H Chi, Denny Zhou, Swaroop Mishra, and Huaixiu Steven Zheng. Self-discover: Large language models self- compose reasoning structures. 
arXiv preprint arXiv:2402.03620, 2024. 14 Preprint A APPENDIX B DYNAMIC WORKFLOW EXAMPLE SOLUTION In this section, we demonstrate how our dynamic workflow approach works by presenting an ex- ample solution to a reasoning problem. We also present the solution provided by the OpenAI o1- preview model to compare with our approach. We can see the solution produced by our approach contains many sub-goals (sub-tasks), which is very similar to the o1-preview solution. Our approach also leverages the symbolic tool - Python to enhance the accuracy of the final answer. The reasoning problem is generated using our data synthesis method. B.1 REASONING PROBLEM: An encrypted Morse code message contains a list of three items that were crucial to a historical event. The Morse code is: ‘- . .- / -.-. --- ..-. ..-. . . / ... ..- --. .- .-.‘. Decode this Morse code message into English text, keeping in mind the standard Morse code conventions where each sequence of dots (.) and dashes (-) corresponds to a letter, and spaces separate words. B.2 SOLUTION USING DYNAMIC WORKFLOW: Stage 1. Problem Reflection ===== Meta-Expert@REFLECTION ===== ### Problem Reflection start ### **Task Description**: Decode a given Morse code message into English text . **Elements of the Task**: 1. The message is encoded in Morse code, which uses a series of dots (.) and dashes (-) to represent letters and spaces to separate words. 2. Decoding involves translating each Morse code sequence into its corresponding English letter or symbol. 3. The Morse code provided is a list of three items related to a historical event, which implies the output should be meaningful and correctly formatted. **Sub-tasks Identification**: 1. **Morse Code Dictionary Creation**: Establish a dictionary in Python where keys are Morse code sequences (dots and dashes) and values are corresponding English letters. This will facilitate the translation process. 2. **Parsing the Morse Code**: Split the input Morse code string into individual Morse code sequences for letters and words. This involves handling spaces correctly to differentiate between letters and word boundaries. 3. **Translation Process**: For each Morse code sequence obtained from the parsing step, use the Morse code dictionary to translate it into the corresponding English letter. 4. **Reconstruction of Message**: Combine the translated letters into words and then into the complete message. This step ensures that the output is a readable English sentence. 5. **Output Formatting**: Format the final output to ensure it is clear and presentable, adhering to proper English syntax and spacing. ### Problem Reflection end ### Stage 2. Workflow Design 15 Preprint ===== Meta-Expert@EXPERTS_DESIGN ===== ### Specialized Experts Design start ### **Morse Code Dictionary Creation Expert**: Specializes in creating a comprehensive dictionary that maps Morse code symbols to English letters. This expert will ensure that all Morse code sequences are accurately represented with their corresponding English characters, which is crucial for the decoding process. Expert card (in JSON format): {"Name": "Morse Code Dictionary Creation Expert", " Expert_Type": "LLM", "Input_Type": "None", "Output_Type": "Dict[str, str]"} **Morse Code Parsing Expert**: Skilled in breaking down the Morse code string into individual Morse sequences for letters and words. This expert will handle the input Morse code string, correctly identifying the separation between letters and words, which is essential for accurate translation. 
Expert card (in JSON format): {"Name": "Morse Code Parsing Expert", "Expert_Type": "LLM", "Input_Type": "str", " Output_Type": "List[List[str]]"} **Python Expert of Translation**: Utilizes Python to map Morse code sequences to English letters using the dictionary provided by the Morse Code Dictionary Creation Expert. This expert will translate each Morse code sequence into the corresponding English letter, ensuring that the translation is accurate and adheres to the Morse code standards. Expert card (in JSON format): {"Name": "Python Expert of Translation", "Expert_Type": "Tool", "Input_Type": "List[List[str ]], Dict[str, str]", "Output_Type": "List[str]"} **Message Reconstruction Expert**: Specializes in reconstructing the translated English letters into coherent words and sentences. This expert ensures that the translated letters are correctly combined to form meaningful words and that these words are arranged into a readable and grammatically correct sentence. Expert card (in JSON format): {"Name": "Message Reconstruction Expert", "Expert_Type": " LLM", "Input_Type": "List[str]", "Output_Type": "str"} **Final Review and Presentation Expert**: Responsible for reviewing the reconstructed message to ensure it meets the clarity and accuracy standards. This expert will also format the final message for presentation, ensuring that it is clear, concise, and ready for delivery as the solution to the problem. Expert card (in JSON format) : {"Name": "Final Review and Presentation Expert", "Expert_Type": " LLM", "Input_Type": "str", "Output_Type": "str"} ### Specialized Experts Design end ### Stage 3. Graph Construction and Execution ===== Morse Code Dictionary Creation Expert ===== ### Approach Explanation Start ### To solve the assigned sub-task of creating a comprehensive Morse code dictionary, I will map each Morse code symbol (comprising dots ‘.‘ and dashes ‘-‘) to its corresponding English letter. This dictionary is essential for the subsequent decoding process, as it allows for the translation of Morse code sequences into readable English text. **Steps to Create the Morse Code Dictionary**: 1. **Identify Morse Code Symbols**: List all Morse code symbols for English alphabet letters (A-Z) and numerals (0-9) if necessary. 2. **Map to English Letters**: Each Morse code symbol will be associated with its corresponding English letter. This mapping is based on the standard Morse code conventions. 16 Preprint 3. **Dictionary Format**: The dictionary will be structured in Python dictionary format, where keys are Morse code symbols and values are the corresponding English letters. This dictionary will be used by the next expert in the workflow, the Morse Code Parsing Expert, who will parse the Morse code message into individual symbols using this dictionary for translation. 
### Approach Explanation End ### ### My Final Output Start ### { "A": ".-", "B": "-...", "C": "-.-.", "D": "-..", "E": ".", "F": "..-.", "G": "--.", "H": "....", "I": "..", "J": ".---", "K": "-.-", "L": ".-..", "M": "--", "N": "-.", "O": "---", "P": ".--.", "Q": "--.-", "R": ".-.", "S": "...", "T": "-", "U": "..-", "V": "...-", "W": ".--", "X": "-..-", "Y": "-.--", "Z": "--..", "1": ".----", "2": "..---", "3": "...--", "4": "....-", "5": ".....", "6": "-....", "7": "--...", "8": "---..", "9": "----.", "0": "-----" } ### My Final Output End ### ===== Morse Code Parsing Expert ===== ### Approach Explanation Start ### To solve the task of parsing the Morse code string into individual Morse sequences for letters and words, I will follow these steps: 1. **Understanding Morse Code Structure**: Morse code uses dots (.) and dashes (-) to represent letters, and spaces are used to separate different Morse sequences. A single space indicates a separation between letters within the same word, and a sequence of three spaces (represented as " / ") indicates a separation between words. 2. **Splitting the Input String**: The input Morse code string will be split based on the " / " sequence to differentiate between words. 17 Preprint This results in a list where each element is a Morse code sequence representing a word. 3. **Further Splitting for Letters**: Each word obtained from the previous step will be further split using a single space as the delimiter. This will break down the word into individual Morse code sequences, each representing a letter. 4. **Constructing the Output**: The final output will be a list of lists, where each sublist represents a word and contains Morse code sequences for each letter in that word. This structured approach ensures that the Morse code is accurately parsed into its constituent letters and words, preparing it for translation in the subsequent steps of the workflow. ### Approach Explanation End ### ### My Final Output Start ### [[’- . .-’, ’-.-. --- ..-. ..-. . .’, ’... ..- --. .- .-.’]] ### My Final Output End ### ===== Python Expert of Translation ===== ### Python Code Start ### def translate_morse_code(morse_words, morse_dict): translated_words = [] for word in morse_words: translated_word = [] for letter in word.split(): # Using next() with a default value to avoid StopIteration translated_letter = next((key for key, value in morse_dict. items() if value == letter), ’?’) translated_word.append(translated_letter) translated_words.append(’’.join(translated_word)) return translated_words morse_words = [[’- . .-’, ’-.-. --- ..-. ..-. . .’, ’... ..- --. 
.- .-.’]] morse_dict = { "A": ".-", "B": "-...", "C": "-.-.", "D": "-..", "E": ".", "F": "..-.", "G": "--.", "H": "....", "I": "..", "J": ".---", "K": "-.-", "L": ".-..", "M": "--", "N": "-.", "O": "---", "P": ".--.", "Q": "--.-", "R": ".-.", "S": "...", "T": "-", "U": "..-", "V": "...-", "W": ".--", "X": "-..-", "Y": "-.--", 18 Preprint "Z": "--..", "1": ".----", "2": "..---", "3": "...--", "4": "....-", "5": ".....", "6": "-....", "7": "--...", "8": "---..", "9": "----.", "0": "-----" } for words in morse_words: translated = translate_morse_code(words, morse_dict) print(translated) ### Python Code End ### ### Python Execution Result Start ### [’TEA’, ’COFFEE’, ’SUGAR’] ### Python Execution Result End ### ===== Message Reconstruction Expert ===== ### My Approach Explanation Start ### As the Message Reconstruction Expert, my primary role is to take the translated English words provided by the Python Expert of Translation and construct them into a coherent and grammatically correct sentence. The translated words are ’TEA’, ’COFFEE’, and ’SUGAR’. These words represent items, and the context suggests they are a list of crucial items related to a historical event. Given this context, my task is to format these words into a readable English sentence that clearly communicates the items as a list. The appropriate format for presenting such a list in English would typically involve separating the items with commas and using the conjunction "and" before the last item. This ensures the sentence is both grammatically correct and easy to understand. ### My Approach Explanation End ### ### My Final Output Start ### Tea, coffee, and sugar ### My Final Output End ### ===== Final Review and Presentation Expert ===== ### Explanation of Approach Start ### As the Final Review and Presentation Expert, my primary role is to ensure that the reconstructed message from the Message Reconstruction Expert is clear, accurate, and well-presented. The message provided is "Tea, coffee, and sugar," which lists three items crucial to a historical event. **Steps in my approach:** 1. **Review Accuracy**: Check if the message accurately reflects the translated Morse code. The previous experts have translated and reconstructed the message correctly as "Tea, coffee, and sugar" from the Morse code ‘- . .- / -.-. --- ..-. ..-. . . / ... ..- --. .- .-.‘. 2. **Review Clarity**: Ensure the message is clear and understandable. The use of commas and the conjunction "and" appropriately lists the items, making the sentence coherent and easy to read. 19 Preprint 3. **Presentation**: Format the message to ensure it is ready for presentation. This involves checking for any grammatical errors, proper capitalization, and punctuation. Given the outputs from the previous experts, the message "Tea, coffee, and sugar" is already well-constructed. My task focuses on confirming these aspects and presenting the final output. ### Explanation of Approach End ### ### My Final Output Start ### Tea, coffee, and sugar ### My Final Output End ### ===== Meta-Expert@FINAL_JUDGEMENT ===== ### Analysis Start ### **Problem Recap**: The task was to decode a Morse code message that lists three items crucial to a historical event. The Morse code provided was ‘- . .- / -.-. --- ..-. ..-. . . / ... ..- --. .- .-.‘. **Workflow Review**: 1. **Morse Code Dictionary Creation**: The dictionary was correctly established, mapping each Morse code sequence to its corresponding English letter. This is foundational for accurate translation. 2. 
**Morse Code Parsing**: The parsing expert successfully identified and separated the Morse code into three distinct word sequences, which is crucial for correct translation. 3. **Translation Process**: Using the dictionary, the translation expert accurately converted the Morse code sequences into the English words ’TEA’, ’COFFEE’, and ’SUGAR’. This step was executed correctly, adhering to Morse code standards. 4. **Message Reconstruction**: The reconstruction expert transformed the list of words into a coherent and grammatically correct English phrase: "Tea, coffee, and sugar". This step was crucial in making the output readable and presentable. 5. **Final Review and Presentation**: The final output "Tea, coffee, and sugar" was reviewed and presented clearly. The expert ensured that the output was concise and correctly formatted, suitable for answering the problem statement. **Evaluation of Accuracy and Completeness**: - The translation of Morse code to English was accurate based on the provided dictionary. - The final message is meaningful and directly corresponds to the Morse code given. - The output is formatted in a way that is easy to understand and relevant to the context of a list of items. **Conclusion**: The workflow from dictionary creation to final presentation was executed seamlessly, with each expert fulfilling their role effectively. The final output accurately reflects the decoded message from the Morse code, presented in a clear and concise manner. ### Analysis End ### FINAL EVALAUTION: YES 20 Preprint B.3 SOLUTION USING OPENAI O1-PREVIEW Hidden Thought Process (thought for 12 seconds) ===== Parsing Morse code ===== I’m separating the Morse code message into three words. Each word is then further divided into its constituent letters for decoding. ===== Parsing Morse code ===== In this chunk, the task involves parsing Morse code by handling letter concatenation within words and word separation through slashes. Morse code typically separates letters with spaces and words with slashes. ===== Breaking down Morse code ===== I’m looking at Morse code to decode - . .-, mapping it to letters. This makes me think about interpreting symbols as letters. ===== Considering Morse code decoding ===== I’m thinking through the method of reversing Morse code, focusing on identifying letter partitions and parsing two-letter words to determine valid Morse code sequences, examining all possible partitions of the code. ===== Identifying letter patterns ===== I’m working through different Morse code partitions to isolate letter patterns like T E A and T U. This involves analyzing various combinations of Morse code symbols. ===== Decoding Morse code ===== It’s interesting to see that the Morse code was parsed as -.-. --- ..-. ..-. . . for ’COFFIE’ and ’T E A’ as the most plausible word. This suggests a detailed analysis of each symbol is crucial for accurate translation. ===== Decoding Morse code ===== I’m working through the Morse code sequence, breaking down each symbol to reveal letters like C, O, F, F, E, E, S, U, G, and ?. Final solution To decode the Morse code message ‘- . .- / -.-. --- ..-. ..-. . . / ... ..- --. .- .-.‘, we’ll break it down word by word and letter by letter. ### Morse Code Reference Table: | Letter | Morse Code | | --- | --- | | A | .- | | B | -... | | C | -.-. | | D | -.. | | E | . | | F | ..-. | | G | --. | | H | .... | | I | .. | | J | .--- | | K | -.- | | L | .-.. | | M | -- | | N | -. | | O | --- | | P | .--. | | Q | --.- | 21 Preprint | R | .-. | | S | ... 
| | T | - | | U | ..- | | V | ...- | | W | .-- | | X | -..- | | Y | -.-- | | Z | --.. | ### Decoding Process: **First Word: ‘- . .-‘** * ‘-‘ : **T** * ‘.‘ : **E** * ‘.-‘ : **A** Combined: **TEA** **Second Word: ‘-.-. --- ..-. ..-. . .‘** * ‘-.-.‘ : **C** * ‘---‘ : **O** * ‘..-.‘ : **F** * ‘..-.‘ : **F** * ‘.‘ : **E** * ‘.‘ : **E** Combined: **COFFEE** **Third Word: ‘... ..- --. .- .-.‘** * ‘...‘ : **S** * ‘..-‘ : **U** * ‘--.‘ : **G** * ‘.-‘ : **A** * ‘.-.‘ : **R** Combined: **SUGAR** ### Final Decoded Message: **TEA COFFEE SUGAR** * * * These three items-**tea**, **coffee**, and **sugar** - were crucial commodities in historical trade events, such as the Boston Tea Party and the global trade networks during the colonial era. **Answer:** TEA COFFEE SUGAR C PROMPTS USED IN OUR DYNAMIC WORKFLOW APPROACH Note that placeholders enclosed in {} are used within the prompts to represent variables that will be substituted with actual values at LLM query time. 22 Preprint Problem Reflection Prompt ### Problem Statement Start ### {task problem} ### Problem Statement End ### You are an exceptionally capable Meta-Expert, possessing a unique capability for conducting problem reflection. Your primary function involves receiving the above problem query, which you must me- thodically decompose into smaller, more manageable sub-tasks (including sub-tasks that can solved by implementing Python functions). When designing the solution, you should think about its general- izability. A robust solution can tackle a similar range of problems effectively with minor adaptations. This decomposition will later facilitate the creation of a team of specialized experts, enabling efficient collaboration of experts to address and solve the above problem. When breaking down into sub-tasks, it is crucial to: 1. Ensure Sequential Logic: Arrange the sub-tasks in a logical, sequential order that facilitates a smooth workflow from start to finish. 2. Avoid Overlap: Each sub-task must be distinct, with no duplication of efforts across the tasks, en- suring efficient allocation of expertise. 3. Pursue Optimal Decomposition: Ensure sub-tasks are sufficiently defined to be tackled effectively. Maintain a manageable number of specific sub-tasks, facilitating easier coordination and management. In particular, please conduct the ”Problem Reflection” for the given problem: Reflect on the problem, and describe it in your own words, in bullet points. Analyze how you can decompose the problem into smaller, more manageable sub-tasks. Note that you can integrate Python-driven sub-tasks by imple- menting and running modular Python code if necessary. Pay attention to small details, nuances, notes and examples in the problem description. Experts Design Prompt ### Problem Statement Start ### {task problem} ### Problem Statement End ### ### Problem Reflection Start ### {problem reflection} ### Problem Reflection End ### You are an extremely powerful Meta-Expert with the unique ability to design a team of specialized experts and arrange those experts through a workflow to tackle and solve the above problem. Based on the above problem statement and its reflection analysis, please design a team of experts and orchestrate those experts to effectively address and solve the above problem. In particular, you are to do ”Specialized Experts Design”: - Design a list of subject-matter experts (SMEs) including, but not limited to, Essayist Expert, Python Expert, Linguistic Analyst, Mathematician, Data Scientist, and various other Analysts. 
Each expert is only to perform one specific sub-task, such as processing data, making decisions, or utilizing Python tools. - Arrange the experts to operate in a sequential workflow, meaning each expert’s output becomes the input for the next, progressively moving towards the final answer. Avoid redundancy of functions across experts. - Assign unique names to each expert and provide an clear description of their specific skills, knowl- edge, and the sub-tasks they are going to perform. Ensure the expert description is comprehensive and self-contained that encapsulates all important information and details from **Sub-tasks Identifi- cation**. - For sub-tasks involving logical reasoning, mathematical operations, data structure manipulation, or programming-related challenges, you can outline strategic approaches and delegate the specifics of im- plementation to the Python expert (Tool). The Python expert will translate the instructions into code, execute it, and return the results. You can include multiple Python experts if needed. Please provide explicit implementation instructions to the Python expert(s). - Conclude each expert’s description with a name card in JSON format, summarizing key attributes. Specify the type of each expert as either ’LLM’ for those based on Large Language Model or ’Tool’ for those utilizing Python tools. - The final expert should be responsible for reviewing the findings of previous experts and then gener- ating the final answer to the problem. 23 Preprint Execution Prompt of Experts Initiated from LLM ### Problem Statement Start ### {original problem} ### Problem Statement End ### ### Problem Reflection Start ### {problem reflection} ### Problem Reflection End ### Please act as {name}. Your role: {role} You are part of a specialized expert team. You are designed to accomplish a sub-task and collaborate with other experts through a workflow graph to solve the above problem. The expert team operates based on the following design: ### Experts Design Start ### {experts design} ### Experts Design End ### Each expert, including you, is responsible for a specific sub-task. The workflow is structured so that each expert’s output becomes the input for the next, progressively moving towards the final answer. The process should be thought of as sequential steps, where you contribute towards the solution based on the outputs from the previous experts.{data type instruction} You can think step by step if neces- sary. The results from the preceding experts are as follows: ### Experts’ Results Start ### input data ### Experts’ Results End ### Please provide a brief explanation of your approach to solving the assigned sub-task. After your explanation, clearly indicate your final output as follows: ### My Final Output Start ### [Your final answer here] ### My Final Output End ### 24 Preprint Execution Prompt of Experts initiated from Symbolic Engine ### Problem Statement Start ### {original problem} ### Problem Statement End ### ### Problem Reflection Start ### {problem reflection} ### Problem Reflection End ### Please act as {name}. Your role: {role} You are a specialized Python expert among a team of experts. You are designed to write Python code to accomplish a sub-task and collaborate with other experts through a workflow graph to solve the above problem. The expert team operates based on the following design: ### Experts Design Start ### {experts design} ### Experts Design End ### Each expert, including you, is responsible for a specific sub-task. 
The workflow is structured so that each expert’s output becomes the input for the next, progressively moving towards the final answer. You should take the previous expert’s output as input, write the Python code, execute the code, and send the output to the next expert. The results from the preceding experts are as follows: ### Experts’ Results Start ### input data ### Experts’ Results End ### Please write the Python code that takes input in {input type} and return output in {output type}. Guidelines: - Make sure the code includes all the necessary module imports, properly initialize the variables, and address the problem requirements. - The code needs to be self-contained, and executable as-is. Output only code, without any explanations or comments. The code output must follow this structure: ‘‘‘python def f1(...): ... return ... def f2(...): ... return ... ... if __name__ == "__main__": ... ‘‘‘ how to read input The output should be printed without additional words using the ’print()’ method. Answer: ‘‘‘python 25 Preprint Verification Prompt ### Problem Statement Start ### {task problem} ### Problem Statement End ### ### Problem Reflection Start ### {problem reflection} ### Problem Reflection End ### **Experts Design:** - Based on the problem reflection, a team of experts has been designed and organized through a workflow to tackle and solve the problem described above. - Experts are designed to operate in a sequential workflow, meaning each expert’s output becomes the input for the next, progressively moving towards the final answer. - The final expert is responsible for reviewing the findings of previous experts and then generating the final answer to the problem. Here is a description of the experts’ roles and the workflow structure: ### Experts Design Start ### {experts design} ### Experts Design End ### Based on the workflow design, the experts have provided the following results: ### Experts’ Results Start ### {experts results} ### Experts’ Results End ### Given the described workflow design and the results produced by the experts, your task is to eval- uate whether the final output of the ”{final expert}” successfully and correctly solves the problem presented. Please provide your analysis and then conclude your evaluation by stating ’FINAL EVALUATION: YES’ or ’FINAL EVALUATION: NO’. D DATA SYNTHESIS OF REASONING PROBLEMS Data Synthesis Prompt 1 Please develop 10 new and diverse reasoning tasks, one per line, inspired by but distinct from the following 10 example reasoning tasks: {example tasks} Guidelines for task creation: - Ensure each new task is distinctly different from the example tasks provided; avoid mere variations. - Clearly and accurately define each task, making its objective and scope explicit. - Design tasks that yield deterministic answers, facilitating the creation of single, definitive standard answers for subsequent problems derived from these tasks. This helps straightforward evaluation of correctness. - Target a moderate to hard difficulty level for each task, requiring thorough analysis and in-depth reasoning to solve. Data Synthesis Prompt 2 Please develop 10 new and diverse puzzle tasks, one per line, to test various reasoning abilities. Guidance: - Each new puzzle task should clearly and accurately describe what the task is. - Design puzzle tasks that yield deterministic answers, facilitating the creation of single, definitive standard answers for subsequent problems derived from these tasks. This helps straightforward evalu- ation of correctness. 
- Puzzle tasks should have a moderate to hard difficulty level - they should require thorough analysis and in-depth reasoning to work through. 26 Preprint Problem Validation Prompt ### Problem Start ### {problem} ### Problem End ### Your task is to verify whether the above problem is a valid reasoning problem or not. Valid Criteria: - It is clear and unambiguous (NO multiple interpretations). - It provides all necessary information required to solve the problem. - The problem is logically structured so that it can be approached through reasoning skills. It does not depend on subjective judgments or opinions. - The problem is solvable and has one single, definitive correct answer that can be derived through reasoning. - There are no internal contradictions or conflicts in the problem. Please provide a concise analysis and then output ’## VALID ##’ or ’## INVALID ##’. Next, if it is invalid, please rewrite it into a new valid reasoning problem following the format below. Make sure the new problem is challenging enough. ### New Valid Problem Start ### [new problem] ### New Valid Problem End ### 27
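To make the staged prompting in Appendices C and D easier to follow, here is a minimal, illustrative sketch of how the placeholders could be filled and the stages (problem reflection, experts design, sequential expert execution, final verification) chained around a generic LLM client. This is not the authors' implementation: the helper names (`call_llm`, `run_python`), the underscore-style placeholder keys, and the regex used to pull out the JSON expert cards are assumptions, and several placeholders of the real prompts (e.g., {data type instruction}, {final expert}) are omitted for brevity.

```python
import json
import re
import subprocess
import sys
import tempfile


def call_llm(prompt: str) -> str:
    """Placeholder for any chat-completion client; the model and API are not specified here."""
    raise NotImplementedError("plug in an LLM client")


def run_python(code: str) -> str:
    """Run code produced by a 'Tool' expert in a subprocess and capture stdout (no sandbox shown)."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as handle:
        handle.write(code)
        path = handle.name
    completed = subprocess.run([sys.executable, path], capture_output=True, text=True)
    return completed.stdout.strip()


def solve_with_dynamic_workflow(problem: str, prompts: dict) -> tuple[str, bool]:
    # Stage 1: problem reflection (decompose the problem into sub-tasks).
    reflection = call_llm(prompts["reflection"].format(task_problem=problem))

    # Stage 2: experts design; the reply embeds one JSON "expert card" per expert.
    # Naive .format substitution is used here; literal braces in real templates
    # would need escaping or a different substitution scheme.
    design = call_llm(prompts["design"].format(task_problem=problem,
                                               problem_reflection=reflection))
    cards = [json.loads(card) for card in re.findall(r'\{"Name".*?\}', design)]

    # Stage 3: execute the experts sequentially; each output feeds the next expert.
    result = ""
    for card in cards:
        expert_prompt = prompts["expert"].format(task_problem=problem,
                                                 problem_reflection=reflection,
                                                 experts_design=design,
                                                 name=card["Name"],
                                                 input_data=result)
        reply = call_llm(expert_prompt)
        result = run_python(reply) if card["Expert_Type"] == "Tool" else reply

    # Final judgement by the meta-expert.
    verdict = call_llm(prompts["verify"].format(task_problem=problem,
                                                problem_reflection=reflection,
                                                experts_design=design,
                                                experts_results=result))
    return result, "FINAL EVALUATION: YES" in verdict
```

The two branches in Stage 3 mirror the prompts above: an LLM expert is simply a further LLM call, whereas a Tool expert routes its generated Python through an interpreter and passes the printed output to the next expert.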
synthetic_cpt
3
Synthetic_Query_Generation_using_Large_Language_Models_for_Virtual_Assistants.pdf
4 2 0 2 n u J 0 1 ] R I . s c [ 1 v 9 2 7 6 0 . 6 0 4 2 : v i X r a Synthetic Query Generation using Large Language Models for Virtual Assistants Sonal Sannigrahi∗† [email protected] Instituto Superior Técnico Lisbon, Portugal Youssef Oualil [email protected] Apple Aachen, Germany Thiago Fraga-Silva [email protected] Apple Aachen, Germany Christophe Van Gysel† [email protected] Apple Cambridge, MA, USA ABSTRACT Virtual Assistants (VAs) are important Information Retrieval plat- forms that help users accomplish various tasks through spoken com- mands. The speech recognition system (speech-to-text) uses query priors, trained solely on text, to distinguish between phonetically confusing alternatives. Hence, the generation of synthetic queries that are similar to existing VA usage can greatly improve upon the VA’s abilities—especially for use-cases that do not (yet) occur in paired audio/text data. In this paper, we provide a preliminary explo- ration of the use of Large Language Models (LLMs) to generate syn- thetic queries that are complementary to template-based methods. We investigate whether the methods (a) generate queries that are similar to randomly sampled, representative, and anonymized user queries from a popular VA, and (b) whether the generated queries are specific. We find that LLMs generate more verbose queries, com- pared to template-based methods, and reference aspects specific to the entity. The generated queries are similar to VA user queries, and are specific enough to retrieve the relevant entity. We conclude that queries generated by LLMs and templates are complementary. CCS CONCEPTS • Information systems → Search interfaces; Query log analy- sis; • Computing methodologies → Speech recognition. KEYWORDS virtual assistants, synthetic query log generation ACM Reference Format: Sonal Sannigrahi, Thiago Fraga-Silva, Youssef Oualil, and Christophe Van Gysel. 2024. Synthetic Query Generation using Large Language Models for Virtual Assistants. In Proceedings of the 47th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR ’24), July 14–18, 2024, Washington, DC, USA. ACM, New York, NY, USA, 5 pages. https://doi.org/10.1145/3626772.3661355 ∗Work performed while an intern at Apple. †Equal contribution. SIGIR ’24, July 14–18, 2024, Washington, DC, USA © 2024 Copyright held by the owner/author(s). This is the author’s version of the work. It is posted here for your personal use. Not for redistribution. The definitive Version of Record was published in Proceedings of the 47th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR ’24), July 14–18, 2024, Washington, DC, USA, https://doi.org/10.1145/ 3626772.3661355. 1 INTRODUCTION Virtual Assistants (VAs) are important [9] Information Retrieval (IR) platforms that help users accomplish various tasks. Users primarily interact with VAs through voice commands, where users initiate a retrieval request by uttering a query. The Automated Speech Recognition (ASR) component of the VA system transcribes the spoken user query, which is then sub- sequently processed by the retrieval engine. However, the ASR system is trained on audio/text pairs that are expensive and time- consuming to obtain. During the recognition process, the ASR sys- tem employs a query prior trained solely on text to disambiguate between phonetically-similar recognition candidates. 
Hence, the query prior is a powerful mechanism to modify the ASR system’s behavior, and has been shown to be an effective manner to improve the recognition of tail named entities [3, 8, 10, 14, 17]. In order to correctly recognize emerging entities [4], the ASR system’s query prior is estimated using a mixture of usage-based and synthetic text data. Synthetic queries are typically generated using a template-based approach [2, 13]. A query template, such as “play music by $ARTIST”, representing the generic intent of a user wanting to play music by a specific artist, is instantiated using a popularity-weighted list of entities. However, template-based approaches are stringent, may only represent a limited set of use- cases, and are not well-suited to generate synthetic queries for use- cases that are specific to particular entities. For example, the query “play Taylor Swift’s debut performance at the Grammy’s” represents the user’s intent to play the song “Fifteen” by Taylor Swift which was Swift’s debut performance at the Grammy’s in 2009. While creating a template based on this query would be possible, it does not generalize across entities: some entities may not have performed at the Grammy’s and finding the relevant venue would require manual curation. Hence, synthetic query generation methods that can generate queries tailored to specific entities are necessary. Recent advances in Large Language Models (LLM) have shown impressive improvements in language understanding tasks [5] through their emergent capabilities [16]. In IR, there have been various works focusing on the generation of queries using LLMs [1, 11, 15]. In this paper, we perform a preliminary analysis of the use of LLMs to produce query priors in VA ASR. We generate synthetic queries by prompting LLMs using a description of the artist gath- ered from Wikipedia. Then, we evaluate the generated queries in SIGIR ’24, July 14–18, 2024, Washington, DC, USA Sonal Sannigrahi, Thiago Fraga-Silva, Youssef Oualil, and Christophe Van Gysel Figure 1: Proposed pipeline to generate queries for a VA via an LLM. terms of their similarity to randomly sampled, representative, and anonymized user queries from a popular VA, in addition to the queries’ ability to retrieve the entity for which they were gener- ated. More specifically, the research questions addressed are as follows: (RQ1) Can LLMs generate VA queries that are similar to user queries extracted from VA query logs (i.e., domain match)? (RQ2) Are the LLM-generated queries good at retrieving the entity for which they were generated (i.e., specificity)? Our contributions are as follows: (1) We propose a prompt for LLMs to produce natural queries for the music domain for VAs, and perform extensive experiments comparing the LLM-generated queries to queries generated using template-based methods, (2) We provide insights through analysis into the differences between queries generated using the various methods. 2 METHODOLOGY Fig. 1 shows an overview of our approach, which consists of the following three main components: (1) entity descriptions extracted from Wikipedia to provide context for synthetic query generation, (2) the prompt which incorporates the entity description and for- mulates a request to the LLM to generate synthetic data, where we also specify the intent the queries need to represent, and (3) the LLM, which takes the prompt as input and subsequently generates a list of synthetic queries as output. 
2.1 Knowledge Base We build our music artist knowledge base by linking Wikipedia data with artist profiles on a popular streaming service. The paragraphs in the Wikipedia articles are used as contexts to generate synthetic queries using LLMs (§2.2). The combination of the Wikipedia arti- cle, and the artist profile retrieved from the streaming service, are used to build a search engine to evaluate the end-to-end perfor- mance of the generated queries (§4.2). We obtained a catalog of mu- sic artist entities by downloading the list of most popular artists on a streaming service in July 2023 and linking them to their respective Wikipedia profile using property P2850 (i.e., “Artist ID Number”) through Wikidata’s SPARQL query service1. We also use the Music- Group2 metadata object, embedded in the source of each artist page on music.apple.com, with entries that include artist name, biogra- phy, an unique artist ID, as well as discography information. Be- tween both the Wikipedia dumps and the artist database, we main- tain a direct linking to produce a knowledge base of 14 161 artists. 2.2 Prompt & LLMs Our prompt is depicted in Fig. 2. For each entity in the knowl- edge base (§2.1), we create prompts by populating the artist name 1https://www.wikidata.org/wiki/Wikidata:SPARQL_query_service 2https://schema.org/MusicGroup [ ARTIST DESCRIPTION ] Generate [K] queries based on the information above about [ ARTIST NAME ] to play music or learn more about [ ARTIST NAME ]. Here are some examples : [ EXAMPLES ] Figure 2: LLM prompt used during our experiments. [ARTIST DESCRIPTION] and [ARTIST NAME] are replaced with an entity descrip- tion, and the entity name, resp. [EXAMPLES] are a list of example VA queries for the specific intent. We fix [EXAMPLES] to the following "play, queue, turn on, etc". [K] is the number of queries. and use the lead section (i.e., the introduction) as their descrip- tion. For music artists, the lead section typically references no- table audiography, collaborations and life events. The number of queries, 𝐾, is set to 40 (§3.1) in this paper. We use the OpenAI API to generate queries with four model variants3. More specifically, we experiment with babbage-002, gpt-3.5-turbo, gpt-3.5-turbo- instruct, and gpt-4 (see §3.1 for details). 3 EXPERIMENTAL SETUP 3.1 Query generation methods under comparison We generate queries for all 14 161 entities from our knowledge base (§2.1) using (a) the entity name by itself, (b) a template-based approach using the top-𝐾 (according to prior probability) music query templates released as part of [13] (excluding the templates that consist of only the entity name by itself in order to differen- tiate from approach (a)), and (c) four different LLMs available via OpenAI’s API using the prompt in Fig. 2 (§2.2) where we ask the LLM to generate 𝐾 queries: babbage-002, gpt-3.5-turbo (v0613), gpt-3.5-turbo-instruct, and gpt-4; with 𝐾 = 40. During our experiments, we report evaluation measures at various values of 𝐾 ≤ 40, in which case we extract the first 𝐾 queries from the list of 40 queries (rather than issuing multiple requests to the LLM with varying 𝐾). Generated queries that start with a VA wakeword (e.g., “hey VA” where VA refers to the name of the assistant), have the pre- fix corresponding to the wakeword removed. For example, a query “hey VA play Moderat” is normalized to “play Moderat”. This step aims at avoiding biases towards methods that frequently generate the wakeword during domain match evaluation (§4.1). 
3.2 Evaluation measures To answer RQ1, we measure the likelihood of the generated queries under a 4-gram back-off language model [6] estimated on randomly sampled and anonymized user queries over a span of 2 years from a popular VA. We apply Good Turing smoothing, and N-grams 3https://platform.openai.com/docs/models/overview Synthetic Query Generation using Large Language Models for Virtual Assistants SIGIR ’24, July 14–18, 2024, Washington, DC, USA entity name templates babbage-002 gpt-3.5-turbo gpt-3.5-turbo-instruct gpt-4 # entities # unique queries per entity query length per entity % of queries with > 15 terms 14161 1.00 ± 0.00 1.58 ± 0.80 0.01% 14161 39.51 ± 0.50 3.78 ± 1.24 0.01% 13848 9.64 ± 14.74 52.30 ± 122.05 40.85% 14161 41.60 ± 3.61 8.11 ± 2.66 1.17% 14156 40.09 ± 2.27 8.42 ± 4.03 1.44% 14161 39.99 ± 0.13 8.31 ± 2.47 1.16% Table 1: Statistics of generated queries across the approaches under consideration (§3.1). (cid:16)(cid:205)|𝑞 | that occur infrequently (less than 3 times) in the data are filtered out. The negative log-likelihood (NLL) of a single query 𝑞 with (cid:17) 𝑖=1 log 𝑃 (𝑞𝑖 | 𝑞1 . . . 𝑞𝑖 −1) |𝑞| terms is defined as NLL (𝑞) = − , where 𝑃 (𝑞𝑖 | 𝑞1 . . . 𝑞𝑖 −1) represents the probability of the term 𝑞𝑖 under the 4-gram LM. Using a 4-gram LM, rather than looking for exact matches in query logs, provides us with a flexible approach to score the likelihood of a query, while also having the ability to assign a score to queries not present in the query logs. The lower the NLL, the more a query is likely under VA user query behavior. We report median NLL across a query set of the first 𝐾 queries for each entity. For RQ2, we measure to what capacity the generated queries can retrieve the entity for which they were generated in order to measure query specificity. We build an index of our knowledge base 14 161 entities where each entity is represented by its Wikipedia page and its profile on a music streaming service (§2.1), including biography and most popular discography. Both indexed documents and queries are pre-processed by lower-casing, removing punctua- tion and non-alphanumeric characters, removing stopwords, and applying a Porter stemmer. We use the BM25-L retrieval model [12, §3.2] with 𝑘1 = 1.5, 𝑏 = 0.75 and 𝛿 = 0.5. Since for each query, there is only one relevant entity, we report reciprocal rank (RR), aver- aged over the top-𝐾 queries 𝑞 and entities 𝑒, with RR defined as RR (𝑞, 𝑒) = 1 rank (𝑞, 𝑒) where rank (𝑞, 𝑒) equals the rank of the entity 𝑒 for which query 𝑞 was generated under the BM25-L retrieval model. The higher the RR, the better a query is able to retrieve the entity it was generated for (and, hence, the more specific an entity is). We report mean RR across a query set for the first 𝐾 queries generated for each entity. 4 RESULTS Table 1 shows statistics on the generated queries using the vari- ous methods (§3.1). In Fig. 3a, we see that while the standalone entity-name and template-based methods generate relatively short queries (1–4 terms), the LLM-based methods tend to be more ver- bose (∼8 terms). A sample of generated queries for entity Post Mal- one (Q21621919) is depicted in Table 2. The babbage-002 LLM, a GPT base model not trained with instruction following [7], per- forms poorly and fails to generate reasonable queries. As expected, the template-based approach generates queries that are stylistically simple, since the template is independent from the entity for which the queries are being generated. 
On the other hand, queries gener- ated by LLM-based methods are able to refer to information present in the artist description that was fed as context to the LLM. We will now answer the research questions raised in §1 and further defined in §3.2. Method Sample of generated queries entity name “Post Malone” templates “play Post Malone”, “play the song Post Malone”, “play Post Malone music” babbage-002 “Start with CTRL + M”, . . . gpt-3.5 “play White Iverson by Post Malone”, “queue Congratulations by Post Malone”, “turn on Post Malone’s album Beerbongs & Bentleys” gpt-3.5 (instruct) “play Post Malone’s debut single White Iverson”, “play Post Malone’s hit song Rockstar”, “play Post Malone’s song Sunflower from the Spider-Man Into the Spider-Verse soundtrack” gpt-4 “play White Iverson by Post Malone”, “add Rockstar by Post Malone to my playlist”, “turn up the volume for Psycho by Post Malone” Table 2: Example of queries generated by the various methods (§3.1). 4.1 Similarity to VA usage queries For RQ1, Fig. 3b shows the negative log likelihood (§3.1) for the methods under consideration. The entity name by itself aligns closest with user behavior, while the template-based approach is a close second. This is not surprising, since the templates we used were created by domain experts by analyzing high-frequency use- cases in a representative sample of VA usage [13, §3.1]. Hence, the entity name and template method represent frequent use-cases at the head of the query distribution. Next up, at approximately half the log-likelihood, queries gener- ated by the LLMs seem to represent infrequent, tail use-cases. While not entirely absent from VA usage, they are not as common as the straight-forward templates. This is explained by the fact that the LLM-generated queries often reference specific songs or albums by the artist—extracted from the artist’s description—resulting in less generic queries. However, this lack of generality yields queries that reference multiple entities and, hence, tend to be at most as—and often, significantly less—likely as queries referencing only a single entity. Note that in our prompt (Fig. 2), we did not instruct the LLMs to exhibit this behavior. We answer RQ1 as follows: queries gener- ated by LLMs trained with instruction following correlate with VA user behavior, although they tend to be more specific than queries generated using template-based approaches. This raises the ques- tion whether template- and LLM-based approaches are complemen- tary when it comes to synthetic query generation. In fact, compar- ing the query sets generated by the template-based method and gpt-3.5-turbo-instruct, the mean/std. dev of the Jaccard coef- ficient across entities equals 0.0038 ± 0.0084, indicating very low overlap, and hence, complementarity. SIGIR ’24, July 14–18, 2024, Washington, DC, USA Sonal Sannigrahi, Thiago Fraga-Silva, Youssef Oualil, and Christophe Van Gysel (a) Distribution of generated query lengths across the approaches under consideration (§3.1). Lengths that exceed 15 tokens are not depicted, but are documented in Table 1. (b) Median NLL (§3.2; lower is better) for the various query generation methods (except babbage-002 since it leads to very high NLL) for various query cut-offs (𝐾 = 10, 20, 30, 40). See Fig. 3a for the legend. (c) Reciprocal rank (§3.2; higher is better) for the various query generation methods (except babbage-002 since it generates non-sensical queries) for various query cut-offs (𝐾 = 10, 20, 30, 40). See Fig. 3a for the legend. 
Figure 3 4.2 Query specificity For RQ2, Fig. 3c depicts the reciprocal rank for the various meth- ods (§3.1) at various cut-offs of the list of generated queries. The entity name method performs best, since it does not contain any superfluous terms and matches directly the specific entity mention contained within the entity’s textual representation. The template- based method performs nearly as well as the entity name method, since it generates queries that contain the entity name padded with carrier terms that are non-specific to the entity (e.g., “play”, “song”). The LLM-based methods score slightly worse than the entiy name and template methods, since the queries generated using LLMs are more verbose and can include terms that match non-relevant enti- ties. For example, song titles often consist of generic, non-specific terms and multiple songs can have the same title. Between the LLM- based generated query collections, gpt-4 performs worst. When examining the effect of query cut-off (𝐾), we see that as 𝐾 increases, RR generally decreases. This is due to the fact that, as 𝐾 increases, queries become more complex and can contain terms that con- fuse the retrieval model. We answer RQ2 as follows: entity-centric queries generated by LLMs achieve an average reciprocal rank of 0.70; indicating that the correct entity is often ranked highly. How- ever, since LLMs generate more verbose queries, there are more terms in the queries that can throw off the retrieval model. 4.3 Complementarity Finally, following the conclusions to RQ1 and RQ2 above, and the qualitative examples in Table 2, we find that template- and LLM- based methods are complementary as follows: (1) template-based methods allow to generate synthetic queries for frequent use-cases (e.g., for tail entities) that apply across all entities (e.g., “play mu- sic by $ARTIST”) and are computationally inexpensive, whereas (2) LLM-based methods can generate specialized/infrequent use– cases (e.g., for popular/controversial entities) specific to the entity in question (e.g., “play Taylor Swift’s duet with Ed Sheeran”)—while having a higher computational cost. Hence, template- and LLM- based methods can be combined to build a richer synthetic query col- lection with coverage for both (a) tail entities, and (b) tail use-cases. 5 CONCLUSIONS In this paper, we performed a preliminary analysis of the use of LLM- based approaches for the generation of synthetic queries for training a query prior used within a VA speech recognition system. We find that template- and LLM-based approaches are complementary since (a) template-based methods can generate queries for frequent use- cases and infrequent entities, and (b) LLM-based methods are better suited to target infrequent use-cases tailored to a specific entity. One limitation of this work is that we relied on OpenAI’s API for LLMs. However, we did not observe any significant differences in behavior between the LLMs we experimented with, and we believe that the overall conclusion that template- and LLM-based query generation methods are complementary will remain valid. Another limitation is that the LLM training data can bias the generated query priors, however addressing this is out of the scope of the current work. 
Future work includes approaches to mix together the results of the multiple query generation methods, such that the final collection aligns with user behavior; in addition to exploration of the prompt used to query the LLM, use of more advanced prompting techniques (e.g., chain of thought), and LLM fine-tuning. ACKNOWLEDGMENTS The authors would like to thank Manos Tsagkias, Lyan Verwimp, Russ Web, Sameer Badaskar, and the anonymous reviewers for their comments and feedback. 24681012140100000200000entitynametemplatesbabbage-002gpt-3.5-turbogpt-3.5-turbo-instructgpt-410203040#generatedqueries2030405060NLLunderquerylogs10203040#generatedqueries0.7000.7250.7500.7750.800ReciprocalRank Synthetic Query Generation using Large Language Models for Virtual Assistants SIGIR ’24, July 14–18, 2024, Washington, DC, USA SPEAKER BIOGRAPHY Sonal Sannigrahi is a PhD student at Instituto Superior Técnico in Lisbon, Portugal working on multi-modal Natural Language Processing. She previously worked on multilingual representation learning and has published papers at EACL, ACL, amongst others. Christophe Van Gysel is a Staff Research Scientist working on the Siri Speech language modeling team at Apple where he works on the boundary between ASR and Search. Christophe obtained his PhD in Computer Science from the University of Amsterdam in 2017. During his PhD, Christophe worked on neural ranking using representation learning models with a focus on entities and published at WWW, SIGIR, CIKM, WSDM, TOIS, amongst others. COMPANY PROFILE Apple revolutionised personal technology with the introduction of the Macintosh in 1984. Today, Apple leads the world in innovation with iPhone, iPad, Mac, Apple Watch, and Apple TV. Apple’s five software platforms — iOS, iPadOS, macOS, watchOS, and tvOS — provide seamless experiences across all Apple devices and empower people with breakthrough services including the App Store, Apple Music, Apple Pay, and iCloud. Apple’s more than 100,000 employees are dedicated to making the best products on earth, and to leaving the world better than we found it. REFERENCES [1] Marwah Alaofi, Luke Gallagher, Mark Sanderson, Falk Scholer, and Paul Thomas. 2023. Can Generative LLMs Create Query Variants for Test Collections? An Exploratory Study. In SIGIR. 1869–1873. [2] Ankur Gandhe, Ariya Rastrow, and Bjorn Hoffmeister. 2018. Scalable Language Model Adaptation for Spoken Dialogue Systems. In SLT. IEEE. [3] Sashank Gondala, Lyan Verwimp, Ernest Pusateri, Manos Tsagkias, and Christophe Van Gysel. 2021. Error-driven pruning of language models for virtual assistants. In ICASSP. [4] David Graus, Daan Odijk, and Maarten de Rijke. 2018. The birth of collective memories: Analyzing emerging entities in text streams. JAIST 69, 6 (2018). [5] Jared Kaplan, Sam McCandlish, Tom Henighan, Tom B Brown, Benjamin Chess, Rewon Child, Scott Gray, Alec Radford, Jeffrey Wu, and Dario Amodei. 2020. Scaling laws for neural language models. arXiv preprint arXiv:2001.08361 (2020). [6] Slava M. Katz. 1987. Estimation of Probabilities from Sparse Data for the Language Model Component of a Speech Recognizer. ASSP 35 (1987). [7] Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. 2022. Training language models to follow instructions with human feedback. In NeurIPS, Vol. 35. [8] Ernest Pusateri, Christophe Van Gysel, Rami Botros, Sameer Badaskar, Mirko Hannemann, Youssef Oualil, and Ilya Oparin. 2019. 
Connecting and comparing language model interpolation techniques. In Interspeech. [9] Juniper Research. 2019. Digital Voice Assistants in Use to Triple to 8 Billion by 2023, Driven by Smart Home Devices. Press Release. [10] Mandana Saebi, Ernest Pusateri, Aaksha Meghawat, and Christophe Van Gysel. 2021. A discriminative entity-aware language model for virtual assistants. In ICASSP. [11] Hsuan Su, Ting-Yao Hu, Hema Swetha Koppula, Raviteja Vemulapalli, Jen- Hao Rick Chang, Karren Yang, Gautam Varma Mantena, and Oncel Tuzel. 2024. Corpus Synthesis for Zero-shot ASR Domain Adaptation using Large Language Models. (2024). [12] Andrew Trotman, Antti Puurula, and Blake Burgess. 2014. Improvements to BM25 and language models examined. In Australasian Document Computing Symposium. [13] Christophe Van Gysel, Mirko Hannemann, Ernie Pusateri, Youssef Oualil, and Ilya Oparin. 2022. Space-Efficient Representation of Entity-centric Query Language Models. In Interspeech. [14] Christophe Van Gysel, Manos Tsagkias, Ernest Pusateri, and Ilya Oparin. 2020. Predicting entity popularity to improve spoken entity recognition by virtual assistants. In SIGIR. [15] Shuai Wang, Harrisen Scells, Bevan Koopman, and Guido Zuccon. 2023. Can ChatGPT write a good boolean query for systematic review literature search? (2023). [16] Jason Wei, Yi Tay, Rishi Bommasani, Colin Raffel, Barret Zoph, Sebastian Borgeaud, Dani Yogatama, Maarten Bosma, Denny Zhou, Donald Metzler, et al. 2022. Emergent abilities of large language models. Transactions on Machine Learning Research (2022). [17] Youyuan Zhang, Sashank Gondala, Thiago Fraga-Silva, and Christophe Van Gysel. 2023. Server-side Rescoring of Spoken Entity-centric Knowledge Queries for Virtual Assistants. arXiv preprint arXiv:2311.01398 (2023).
synthetic_cpt
2
Neural_Codec_Language_Models_are_Zero-Shot_Text_to_Speech_Synthesizers.pdf
Towards audio language modeling - an overview

Haibin Wu1, Xuanjun Chen1∗, Yi-Cheng Lin1∗, Kai-wei Chang1, Ho-Lam Chung1, Alexander H. Liu2, Hung-yi Lee1
∗Equal second contribution. 1 National Taiwan University. 2 Massachusetts Institute of Technology.
arXiv:2402.13236v1 [eess.AS] 20 Feb 2024

Abstract—Neural audio codecs were initially introduced to compress audio data into compact codes to reduce transmission latency. Researchers recently discovered the potential of codecs as suitable tokenizers for converting continuous audio into discrete codes, which can be employed to develop audio language models (LMs). Numerous high-performance neural audio codecs and codec-based LMs have been developed. The paper aims to provide a thorough and systematic overview of the neural audio codec models and codec-based LMs.

Index Terms—Neural codec, codec-based language model

I. INTRODUCTION
Neural audio codec models were first introduced to compress audio for efficient data transmission. The encoder converts the audio into codec codes, which are then transmitted. The receiver then uses the codec decoder to reconstruct the audio using the received codes.
Language modeling has proven to be highly successful in the field of Natural Language Processing (NLP). Audio data encompasses not only textual content but also rich information about speaker timbre, emotion, and general audio, offering deeper possibilities for language model applications. Researchers, especially those in large companies with significant computational resources, have recently leveraged the potential of neural codecs [1]–[8] as suitable tokenizers for converting continuous audio into discrete codes, which can be employed to develop audio language models (LMs) [9]–[20]. The current codec-based language models and codec models are summarized in Figure 1. These findings promptly garnered the community’s attention, sparking a fervor for developing codecs tailored to audio language modeling. Numerous high-performance neural audio codec models and audio LMs have been developed. An ideal codec should maintain content while preserving paralinguistic and speaker-related information. Similarly, a universal audio language model should be able to generalize across various audio types, such as speech, music, and general audio, covering a wide range of applications. The arms race in developing codecs and audio LMs is still ongoing.
Given the significant advancements in codecs and audio language models over the past three years as shown in Figure 1, there has yet to be a comprehensive review comparing them and providing inspiration to the community. In this study, we aim to fill this research gap by thoroughly reviewing and comparing various existing neural codec models and audio codec-based language models. Firstly, we specifically conduct an in-depth analysis of six representative open-source neural codec models to cover their training methodologies, implementation settings, and training data. Secondly, we expand our analysis to include eleven diverse codec-based language models, examining how they utilize the codecs and the tasks to which they can be applied. Through this comprehensive review, we aim to offer the community insights into the diverse methodologies and potential directions in the field of neural codecs and codec-based language modeling.

II. COMPREHENSIVE COMPARISON FOR NEURAL AUDIO CODEC MODELS
Codec models aim to compress and decompress speech signals efficiently.
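As a schematic of the encoder/quantizer/decoder pipeline sketched in the Introduction, the following Python fragment illustrates a neural codec round trip; the encoder, quantizer, and decoder objects are placeholder interfaces used only for illustration, not the API of any particular codec.

import torch

# Minimal sketch of a neural codec round trip. `encoder`, `quantizer`, and
# `decoder` are assumed interfaces; concrete codecs (e.g., SoundStream or
# EnCodec) implement them with convolutional encoders, residual vector
# quantization, and a waveform decoder.
def codec_round_trip(wav: torch.Tensor, encoder, quantizer, decoder) -> torch.Tensor:
    # wav: (batch, 1, samples) raw audio
    latents = encoder(wav)                 # (batch, frames, dim) continuous features
    codes = quantizer.encode(latents)      # (batch, frames, n_codebooks) discrete codec codes
    # `codes` are what get transmitted, or fed as tokens to an audio language
    # model in the codec-based LMs surveyed later in this paper.
    dequantized = quantizer.decode(codes)  # back to continuous features
    return decoder(dequantized)            # reconstructed waveform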
Traditional codecs are developed based on psycho-acoustics and speech synthesis [21], [22]. Recently, the neural codec models demonstrated highly effective for compression and signal reconstruction, outperforming tradi- tional codecs. Considering the broad spectrum of codec models within the research community, each trained with its distinct configurations and training techniques, there is a clear need for a thorough examination that covers the training methodologies, implementation settings, and training data employed across these codec models. The six codec models have distinct training details, resulting in a collection of fifteen different codec models, as summarized in Table I. A. Brief method overview for codecs SoundStream [2] stands as one of the pioneering implemen- tations of neural codec models, embodying a classic neural codec architecture comprising encoder, quantizer, and decoder modules. It utilizes the streaming SEANets [23] as its encoder and decoder. The quantizer incorporates a speech enhancement system with a Residual Vector Quantization (RVQ) [2], [24] bottleneck to obtain parallel token streams. During training, the model parameters are optimized using a combination of reconstruction and adversarial loss. SoundStorm [3] is an improved version of SoundStream to achieve both efficiency and high-quality audio generation. It accomplishes this by em- ploying an architecture specifically tailored to the hierarchical structure of audio tokens. Moreover, it pioneers a parallel, non- autoregressive decoding scheme, which relies on confidence- based strategies for residual vector-quantized token sequences. Encodec [1] builds upon a framework similar to Sound- Stream. Nonetheless, it further augments its capabilities by integrating supplementary LSTM [25] layers and harnessing a Transformer-based language model [26] to model the RVQ codes, thereby amplifying its sequence modeling performance. there is a stream of work aimed at making codec Then, models more general and powerful. AudioDec [4] represents 2 Fig. 1. Timeline of current neural codec models and codec-based language models. an enhanced version of Encodec, implementing a group con- volution mechanism to facilitate the real-time operation of the streamable network while also harnessing the capabilities of HiFi-GAN [27] to effectively generate high-fidelity audio at a high sampling rate of 48 kHz. In the AcademiCodec model introduced by [5], a novel technique known as group-residual vector quantization is presented. It employs multiple parallel RVQ groups. This technique is specifically tailored for generation tasks. It aims to enhance the reconstruction performance while using a limited number of codebooks, consequently achieving an impressively low bit rate per second (BPS). This low BPS is of utmost significance as it effectively addresses the challenge of lengthy speech tokens in speech language modeling, resulting in reduced sequence lengths. It SpeechTokenizer [7] is a unified speech tokenizer designed implements an Encoder- for speech language models. Decoder architecture enhanced with RVQ. By integrating both semantic and acoustic tokens, SpeechTokenizer hierarchically separates various aspects of speech information across dif- ferent RVQ layers. Specifically, SpeechTokenizer is designed to regularize the first RVQ layer to highlight semantic in- formation by learning the Hubert tokens [28]. Using such techniques can enhance the disentanglement of information across different RVQ layers. 
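The residual vector quantization bottleneck shared by the codecs above is easy to state in code. The following minimal sketch, an illustration only that omits training-time details such as EMA codebook updates, commitment losses, and quantizer dropout, shows how each codebook quantizes the residual left by the previous one, yielding one parallel token stream per codebook.

import torch

def rvq_encode(x: torch.Tensor, codebooks: list):
    # x: (frames, dim) continuous features; codebooks: list of (codebook_size, dim) tensors.
    residual = x
    quantized = torch.zeros_like(x)
    codes = []
    for cb in codebooks:
        dists = torch.cdist(residual, cb)   # (frames, codebook_size) pairwise distances
        idx = dists.argmin(dim=-1)          # nearest code index per frame
        q = cb[idx]                         # quantized approximation of the residual
        codes.append(idx)
        quantized = quantized + q
        residual = residual - q             # the next codebook sees what is left
    # codes: (frames, n_codebooks) discrete tokens; quantized: (frames, dim) reconstruction.
    return torch.stack(codes, dim=-1), quantized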
Descript-audio-codec (DAC) [8], a universal neural codec model, distinguishes itself through its exceptional ability to maintain high-fidelity audio quality across a wide spectrum of data types, encompassing general audio, music, and speech. It accomplishes this feature by employing a number of train- ing techniques, such as periodic activation functions [29], enhanced residual vector quantization using factorized and L2-normalized codes, random quantizer dropout to preserve audio reconstruction quality, as well as refining adversarial and reconstruction loss during the training process. The authors highlight importance of the periodic activation function among the employed techniques. the crucial Unlike most models focusing on the time domain, Fun- Codec [6] proposes a frequency-domain codec. The authors claim they can achieve comparable performance with fewer parameters and lower computation complexity. Meanwhile, it also finds that incorporating semantic information in the codec tokens improves speech quality at low bit rates. B. Comparison from methodology angles We compare several techniques proposed by these codecs in Table II. The abbreviation “A-F” represents different codec models. Please refer to Table I for the corresponding model full name. The design of discriminators constitutes a pivotal element within codec models. Encodec initially introduces the Multi-scale-STFT Discriminator (MS-STFTD). In contrast to the multi-scale discriminator (MSD) proposed in MelGAN [24], which captures long-term dependencies, the multi-period discriminator (MPD) proposed in HiFi-GAN [30] exhibits a capacity to discern more nuanced periodic details. Con- sequently, AudioDec replaces the conventionally employed STFTD with a HiFi-GAN-based MPD, observing an enhance- ment in audio quality within their model. AcademiCodec integrates prior research efforts by incorporating the MS- STFTD from Encodec and both HiFi-GAN-based MPD and MSD. Both SpeechTokenizer and Funcodec adopt identical discriminators to AcademiCodec, with Funcodec offering a unified interface adaptable to any combination of these three discriminator types. DAC identifies that employing MSD and MPD alone generates audio displaying blurriness and artifacts. To address this, they propose the application of a multi-scale, multi-band STFT discriminator (MS-MB-STFTD) to improve phase modeling and mitigate aliasing artifacts. SpeechTokenizer utilizes semantic tokens from Hubert L9 as a teacher for the RVQ process. This guidance enables the disentanglement of content information into the first layer of the tokenizer, while paralinguistic information is retained in subsequent layers. FunCodec seeks to integrate semantic information by combining, adding, or residualizing the audio codec with semantic tokens. The study reveals that including semantic tokens enhances audio quality, particularly with the residual inclusion method. Additionally, SpeechTokenizer and FunCodec utilize K-means to cluster samples in the first mini- batch for initializing the VQ codebook, leading to improved code utilization. DAC follows the approach of BigVGAN [31], employing snake activation [29] for trainable control over the frequency of periodic signals. AcademiCodec employs mul- tiple RVQ codebooks (multiple residual groups) to represent intermediate features. They demonstrate that using multiple residual groups achieves good reconstruction performance while employing only a few codebooks. 
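For reference, the snake activation highlighted by DAC is defined in [29] as snake_a(x) = x + (1/a) sin^2(a x). The sketch below uses a per-channel trainable a, one common parameterization in BigVGAN-style vocoders; the exact parameterization used by a given codec may differ.

import torch
from torch import nn

class Snake(nn.Module):
    """Snake activation from [29]: x + (1/a) * sin^2(a * x), with trainable a."""

    def __init__(self, channels: int, init_alpha: float = 1.0):
        super().__init__()
        # One frequency parameter per channel, broadcast over batch and time.
        self.alpha = nn.Parameter(torch.full((1, channels, 1), init_alpha))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, time); the small epsilon guards against alpha = 0.
        a = self.alpha
        return x + (1.0 / (a + 1e-9)) * torch.sin(a * x) ** 2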
Encodec trains an 2021.06Neural Codec Model2023.062022.122023.12Codec-based Language ModelMusicLM [11]SoundStream [2]AudioLM [9]EnCodec [1]FuncCodec [6]AudioGen [20]SpeechTokenizer [7]VioLA [14]AudioDec [4]AcademicCodec [5]VALL-E X [13]SoundStorm [3]VALL-E [12]MusicGen [18]SpeechX [17]AudioPaLM [10]DAC [8]LauraGPT [16]UniAudio [15] TABLE I CODEC INFORMATION COMPARISON. ”A-F” REPRESENTS DIFFERENT NEURAL CODEC MODELS, WHERE ”A” IS SPEECHTOKENIZER [7], ”B∼” IS ACADEMICODEC [5], ”C” IS AUDIODEC [4], ”D∼” IS DAC [8], ”E∼” IS ENCODEC [1], AND ”F∼” IS FUNCODEC [6]. nc REPRESENTS THE CODEBOOK NUMBER, SR REPRESENTS THE SAMPLE RATE, AND BPS REPRESENTS THE BIT RATE IN UNIT BITS PER SECOND. Codec information Training data nc SR BPS 3 A B1 B2 B3 C D1 D2 D3 E1 E2 E3 E4 E5 16k hifi 16k 320d hifi 16k 320d large uni hifi 24k 320d 24k 320d 16k 24k 44k 24k 1.5bps 24k 3bps 24k 6bps 24k 12bps 24k 24bps F1 en libritts 16k gr1nq32ds320 F2 en libritts 16k gr8nq32ds320 F3 F4 F5 F6 en libritts 16k nq32ds320 en libritts 16k nq32ds640 zh en 16k nq32ds320 zh en 16k nq32ds640 Librispeech LibriTTS VCTK AISHELL Valentini 8 4 4 4 8 16 16 16 24 24 Common Voice, DAPS 12 16 32 24 9 44.1 VCTK, MUSDB Jamendo, AudioSet Common Voice DAPS, Jamendo AudioSet, FSD50K 24 2 24 4 8 24 16 24 32 24 Subset of LibriTTS 32 16 32 16 32 16 32 16 25k hours collected data 32 16 32 16 (en and zh-cn) 4 2 2 3 6.4 6 24 8 1.5 3 6 12 24 16 16 16 8 16 8 TABLE II COMPARISON BETWEEN CODEC IMPLEMENTATION STRATEGY. SEM REPRESENTS CODEC INCLUDING SEMANTIC TOKENS. SNAKE REPRESENTS THE CODEC MODEL THAT EMPLOYS SNAKE ACTIVATION. MRG REPRESENTS CODEC HAS MULTIPLE RESIDUAL GROUPS. NOISY REPRESENTS CODEC UTILIZES NOISY DATA IN TRAINING. LM REPRESENTS THE MODEL INCLUDING LANGUAGE MODEL TRAINING. KM REPRESENTS CODEC USES K-MEANS TO CLUSTER SAMPLES AS INITIALIZATION OF VQ CODEBOOK. Codec Discriminators SEM Snake MRG Noisy LM KM A B C D E F MSD + MPD + MS-STFTD ✓ MSD + MPD + MS-STFTD ✗ ✗ ✗ ✗ MSD + MPD + MS-STFTD ✓ MPD MPD + MS-MB-STFTD MS-STFTD ✗ ✗ ✗ ✓ ✗ ✗ ✗ ✓ ✗ ✗ ✗ ✗ ✗ ✗ ✗ ✗ ✗ ✓ ✗ ✓ ✗ ✗ ✗ ✗ ✗ ✗ ✓ ✗ ✗ ✓ additional small transformer model for entropy coding over the quantized units, which reduces bandwidth and accelerates encoding and decoding. C. Implementation details We compare the codebook number, training data, sampling rate, and bit rate per second in Table I. From the training data perspective, SpeechTokenizer [7], AudioDec [4], and FunCodec [6] utilize only English speech dataset. Academi- Codec [5] incorporates bilingual speech datasets, including AISHELL for Chinese and LibriTTS and VCTK for English. Both DAC [8], and Encodec [1] encompass diverse modality data, including speech, music, and audio, in the training data. Fig. 2. Codec-based Language Modeling III. CURRENT CODEC-BASED SPEECH LANGUAGE MODELS As shown in Figure 2, the process of neural codec-based audio language modeling begins by converting context information, such as text and MIDI, into context codes, while simulta- neously encoding the audio into codec codes. These context and codec codes are then employed in the language modeling phase to generate the desired target codec code sequence. Subsequently, the target codec code sequence is passed to the codec decoder to produce the audio output. The entire pipeline embodies an audio-to-audio modeling approach. A. Overview for codec-based LMs AudioLM [9] is the pioneering model in introducing codec codes for language modeling, utilizing a hierarchical approach that encompasses two distinct stages. 
The first stage generates semantic tokens using a self-supervised w2v-BERT model [32]. These tokens are then leveraged in the second stage as conditioning elements to create acoustic tokens using a SoundStream neural codec [2]. VALL-E [12], VALL-E X [13], and SpeechX [17], all orig- inate from Microsoft and are neural codec language models trained to generate discrete codes derived from EnCodec [1], based on textual or acoustic inputs. VALL-E can generate high-quality personalized speech with only a 3-second enroll- ment recording from an unseen speaker. Furthermore, VALL- E X can produce high-quality speech in the target language with just a single speech utterance in the source language as a prompt. Additionally, SpeechX introduces a unified framework to address not only zero-shot TTS but also various types of speech transformation tasks, including speech enhancement and speech editing. What sets ViaLA [14], AudioPaLM [10], and LauraGPT [16] apart is their dual capability to generate both text and audio. VioLA tries to tackle the question “Is one decoder- only generative model all you need for speech recognition, synthesis, and translation?” by employing language modeling that integrates both text tokens and audio tokens (extracted by EnCodec [1]), along with the use of task IDs and language IDs. AudioPaLM constructs a unified vocabulary comprising Neural Codec LMCodecTokenizerCodec DecoderContext codeCodec codeContextEncoder(Context) Text, MIDI, …Codec Language Modeling both text and audio tokens. It is a decoder-only, autoregres- sive model capable of processing and generating both text and speech. Additionally, AudioPaLM’s initialization stems from PaLM-2 [33], a text-only language model. AudioPaLM’s approach to audio tokenization resembles that of AudioLM. Moreover, AudioPaLM adopts and extends the SoundStream to SoundStorm [3]. LauraGPT [16] is a versatile model language model built on a decoder-only text-based language model, Qwen-2B [34]. LauraGPT has the capability to pro- cess both audio and text inputs, generating outputs in either modality. LauraGPT encodes input audio into continuous representations using a Conformer encoder and decodes output audio using FunCodec [6] discrete codes. The authors claim this specific audio features design for inputs and outputs will result in improved performance for speech generation using some preliminary experimental results. UniAudio [15] utilizes language modeling to generate a wide range of audio types, including speech, sounds, music, and singing, using textual or acoustic tokens as inputs. Uni- Audio stands out for its ability to enhance autoregressive pre- diction speed by introducing a multi-scale Transformer model [35], which employs a large global transformer to predict the first-layer codec codes and a small local transformer to predict the codec codes for the subsequent codec layers. The codec model in UniAudio is revised from EnCodec. Additionally, there are other codec-based language models designed for sound modeling. AudioGen [20] trained a Sound- Stream model to get audio tokens and subsequently trained a language model to utilize textual features as conditions for generating audio tokens. MusicLM [11] follows a training strategy similar to AudioLM but extends its scope to en- compass music features. It approaches the task of conditional music generation through a hierarchical sequence-to-sequence modeling approach. Initially, it utilizes music tokens from Mulan [36] to generate semantic tokens from the w2v-BERT model. 
Subsequently, it employs both music tokens and seman- tic tokens to generate acoustic features through Soundstream. MusicGen [18] is a music language model designed to work with EnCodec discrete tokens. It accepts textual descriptions or melodic features as input conditions to generate tokens, which can be reconstructed to high-fidelity music. Another branch of speech language modeling aims to utilize discrete units obtained by quantizing self-supervised speech representations. While these discrete units contain rich acous- tic and linguistic information [37], they lack speaker and paralinguistic information [38]. This research direction focuses on modeling the semantics of speech, with the optional use of encoders to learn about speaker characteristics and prosody. Pioneering work is speech-resynthesis [38], which utilizes these discrete units in conjunction with prosody and speaker encoders to encode speech into low-bitrate codes. These codes can then be resynthesized into a speech signal with a decoder to achieve low-bitrate transmission. Additionally, these dis- crete units can be regarded as “pseudo-text,” serving as a foun- dation for training textless speech language models. Notable examples include GSLM [39], pGSLM [40], dGSLM [41], and TWIST [42]. By engaging in the pre-trained task of next- token prediction, these speech LMs perform spoken language 4 TABLE III CODEC-BASED LANGUAGE MODELS COMPARISON. ”T” MEANS TEXT, ”AUD” MEANS AUDIO, ”P” MEANS PHONEME, AND ”M” MEANS MIDI. CLM Task Input Output Codec AudioLM [9] AudioGen [20] VALL-E [12] MusicLM [11] VALL-E X [13] VioLA [14] MusicGen [18] AudioPaLM [10] ASR, S2TT, TTS, MT SpeechX [17] SC, PC AC TTS MG TTS, S2ST ASR, S2TT, TTS, MT MG, SG AUD EnCodec [1] AUD SoundStream [2] AUD, T AUD SoundStream [2] AUD, T AUD AUD, T AUD SoundStream [2] AUD,T AUD AUD,T AUD,T AUD AUD AUD,T AUD,T SoundStorm [3] EnCodec [1] EnCodec [1] EnCodec [1] SE, SR, TSE, TTS, SPED AUD,T AUD EnCodec [1] LauraGPT [16] ASR, S2TT, TTS, MT, SE AAC, SER, SLU AUD,T AUD, T FunCodec [6] UniAudio [15] TTS, VC, SE, TSE, SVS TTSO, TTM, AUED SD, ITTS, SPED P, M, AUD,T AUD EnCodec [1] modeling and can conduct the task of speech continuation. In the field of speech translation, recent advancements have been made possible through these discrete units. [43] pre-trained a Unit mBART combined with a wav2vec 2.0 [44] encoder to directly predict the translated discrete units. UnitY [45] further incorporates text modality to enhance speech translation. The Seamless models [46], [47] integrate the UnitY framework to perform expressive and streaming speech-to-text and speech- to-speech translation. With the development of these powerful speech LMs, researchers have begun to explore the use of prompting on speech LMs for various speech processing tasks, including prompt tuning [48]–[50], in-context learning [51], and instruction tuning [52], [53]. B. Comparison for Codec-based audio language models In Table III, we compare the inputs, outputs, and down- stream tasks of different codec-based language models. 
We also summarize that the downstream tasks conducted by differ- ent codec-based language models: Speech Continuation (SC), Piano Continuation (PC), Audio Continuation (AC), Text-to- Speech (TTS), Music Generation (MG), Stereophonic Gener- ation (SG), Speech to Speech Translation (S2ST), Automatic Speech Recognition (ASR), Spoken Language Understand- ing (SLU), Automated Audio Captioning (AAC), Speech to Text Translation (S2TT), Machine Translation (MT), Speech Enhancement (SE), Speech Removal (SR), Target Speaker Extraction (TSE), Speech Editing (SPED), Voice Conversion (VC), Singing Voice Synthesis (SVS), Text-to-Sound (TTSO), Text-to-Music (TTM), Audio Editing (AUED), Speech Dere- verb (SD), Instructed TTS (ITTS). Finally, we show the codec models adopted by different LMs. IV. CONCLUSION The paper fills the research blank to review the neural codec models and LMs built upon them. We hope the comprehensive review and comparisons can inspire future research works to boost the development of neural codec models and codec- based LMs. 5 [30] Jungil Kong, Jaehyeon Kim, and Jaekyoung Bae, “Hifi-gan: Generative adversarial networks for efficient and high fidelity speech synthesis,” 2020. [31] Sang-gil Lee, Wei Ping, Boris Ginsburg, Bryan Catanzaro, and Sungroh Yoon, “Bigvgan: A universal neural vocoder with large-scale training,” arXiv preprint arXiv:2206.04658, 2022. [32] Yu-An Chung et al., “W2v-bert: Combining contrastive learning and masked language modeling for self-supervised speech pre-training,” in 2021 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU). IEEE, 2021, pp. 244–250. [33] Rohan Anil et al., “Palm 2 technical report,” arXiv preprint arXiv:2305.10403, 2023. [34] Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, et al., “Qwen technical report,” arXiv preprint arXiv:2309.16609, 2023. [35] Lili Yu, D´aniel Simig, Colin Flaherty, Armen Aghajanyan, Luke Zettle- moyer, and Mike Lewis, “Megabyte: Predicting million-byte sequences with multiscale transformers,” arXiv preprint arXiv:2305.07185, 2023. [36] Qingqing Huang, Aren Jansen, Joonseok Lee, Ravi Ganti, Judith Yue Li, and Daniel PW Ellis, “Mulan: A joint embedding of music audio and natural language,” arXiv preprint arXiv:2208.12415, 2022. [37] Dan Wells, Hao Tang, and Korin Richmond, “Phonetic Analysis of Self- supervised Representations of English Speech,” in Proc. Interspeech 2022, 2022, pp. 3583–3587. [38] Adam Polyak et al., “Speech resynthesis from discrete disentangled self-supervised representations,” in Interspeech, 2021, pp. 3615–3619. “On generative spoken language modeling from raw audio,” Transactions of the Association for Computational Linguistics, vol. 9, pp. 1336–1354, 2021. [39] Kushal Lakhotia et al., [40] Eugene Kharitonov et al., “Text-free prosody-aware generative spoken language modeling,” arXiv preprint arXiv:2109.03264, 2021. [41] Tu Anh Nguyen et al., “Generative spoken dialogue language modeling,” Transactions of the Association for Computational Linguistics, vol. 11, pp. 250–266, 2023. [42] Michael Hassid et al., “Textually pretrained speech language models,” arXiv preprint arXiv:2305.13009, 2023. [43] Sravya Popuri et al., “Enhanced Direct Speech-to-Speech Translation Using Self-supervised Pre-training and Data Augmentation,” in Proc. Interspeech 2022, 2022, pp. 5195–5199. 
[44] Alexei Baevski et al., “wav2vec 2.0: A framework for self-supervised learning of speech representations,” Advances in neural information processing systems, vol. 33, pp. 12449–12460, 2020. [45] Hirofumi Inaguma et al., “Unity: Two-pass direct speech-to-speech translation with discrete units,” arXiv preprint arXiv:2212.08055, 2022. [46] Lo¨ıc Barrault et al., “Seamlessm4t-massively multilingual & multimodal machine translation,” arXiv preprint arXiv:2308.11596, 2023. [47] Lo¨ıc Barrault et al., “Seamless: Multilingual expressive and streaming speech translation,” arXiv preprint arXiv:2312.05187, 2023. [48] Kai-Wei Chang et al., “An Exploration of Prompt Tuning on Generative in Proc. Spoken Language Model for Speech Processing Tasks,” Interspeech 2022, 2022, pp. 5005–5009. [49] Kai-Wei Chang et al., “Speechprompt v2: Prompt tuning for speech classification tasks,” arXiv preprint arXiv:2303.00733, 2023. [50] Haibin Wu, Kai-Wei Chang, Yuan-Kuei Wu, and Hung-yi Lee, “Speech- gen: Unlocking the generative power of speech language models with prompts,” arXiv preprint arXiv:2306.02207, 2023. [51] Ming-Hao Hsu et al., “An exploration of in-context learning for speech language model,” arXiv preprint arXiv:2310.12477, 2023. [52] Chun-Yi Kuan, Chen-An Li, et al., “Towards general-purpose text- instruction-guided voice conversion,” in 2023 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU). IEEE, 2023, pp. 1–8. [53] Chien-yu Huang, Ke-Han Lu, et al., “Dynamic-superb: Towards a dy- namic, collaborative, and comprehensive instruction-tuning benchmark for speech,” arXiv preprint arXiv:2309.09510, 2023. REFERENCES [1] Alexandre D´efossez et al., “High fidelity neural audio compression,” arXiv preprint arXiv:2210.13438, 2022. [2] Neil Zeghidour et al., “Soundstream: An end-to-end neural audio codec,” IEEE/ACM Transactions on Audio, Speech, and Language Processing, vol. 30, pp. 495–507, 2021. [3] Zal´an Borsos et al., “Soundstorm: Efficient parallel audio generation,” arXiv preprint arXiv:2305.09636, 2023. [4] Yi-Chiao Wu et al., “Audiodec: An open-source streaming high- fidelity neural audio codec,” in ICASSP 2023-2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2023, pp. 1–5. [5] Dongchao Yang et al., “Hifi-codec: Group-residual vector quantization for high fidelity audio codec,” arXiv preprint arXiv:2305.02765, 2023. [6] Zhihao Du, Shiliang Zhang, Kai Hu, and Siqi Zheng, “Funcodec: A fundamental, reproducible and integrable open-source toolkit for neural speech codec,” arXiv preprint arXiv:2309.07405, 2023. [7] Xin Zhang, Dong Zhang, Shimin Li, Yaqian Zhou, and Xipeng Qiu, “Speechtokenizer: Unified speech tokenizer for speech large language models,” arXiv preprint arXiv:2308.16692, 2023. [8] Rithesh Kumar, Prem Seetharaman, Alejandro Luebs, Ishaan Kumar, and Kundan Kumar, “High-fidelity audio compression with improved rvqgan,” arXiv preprint arXiv:2306.06546, 2023. [9] Zal´an Borsos et al., “Audiolm: a language modeling approach to audio generation,” IEEE/ACM Transactions on Audio, Speech, and Language Processing, 2023. [10] Paul K Rubenstein et al., “Audiopalm: A large language model that can speak and listen,” arXiv preprint arXiv:2306.12925, 2023. [11] Andrea Agostinelli et al., “Musiclm: Generating music from text,” arXiv preprint arXiv:2301.11325, 2023. [12] Chengyi Wang et al., “Neural codec language models are zero-shot text to speech synthesizers,” arXiv preprint arXiv:2301.02111, 2023. 
[13] Ziqiang Zhang et al., “Speak foreign languages with your own voice: Cross-lingual neural codec language modeling,” arXiv preprint arXiv:2303.03926, 2023. [14] Tianrui Wang et al., for speech recognition, synthesis, and translation,” arXiv:2305.16107, 2023. “Viola: Unified codec language models arXiv preprint [15] Dongchao Yang et al., “Uniaudio: An audio foundation model toward universal audio generation,” arXiv preprint arXiv:2310.00704, 2023. [16] Qian Chen et al., “Lauragpt: Listen, attend, understand, and regenerate audio with gpt,” arXiv preprint arXiv:2310.04673, 2023. [17] Xiaofei Wang et al., “Speechx: Neural codec language model as a versatile speech transformer,” arXiv preprint arXiv:2308.06873, 2023. [18] Jade Copet et al., “Simple and controllable music generation,” arXiv preprint arXiv:2306.05284, 2023. [19] Gael Le Lan et al., “Stack-and-delay: a new codebook pattern for music generation,” arXiv preprint arXiv:2309.08804, 2023. [20] Felix Kreuk et al., “Audiogen: Textually guided audio generation,” arXiv preprint arXiv:2209.15352, 2022. [21] Jean-Marc Valin et al., “Rfc 6716: Definition of the opus audio codec,” 2012. [22] Martin Dietz et al., in 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2015, pp. 5698–5702. “Overview of the evs codec architecture,” [23] Marco Tagliasacchi et al., “Seanet: A multi-modal speech enhancement network,” arXiv preprint arXiv:2009.02095, 2020. [24] Kundan Kumar et al., “Melgan: Generative adversarial networks for information Advances in neural conditional waveform synthesis,” processing systems, vol. 32, 2019. [25] Sepp Hochreiter and J¨urgen Schmidhuber, “Long short-term memory,” Neural computation, vol. 9, no. 8, pp. 1735–1780, 1997. [26] Ashish Vaswani et al., “Attention is all you need,” Advances in neural information processing systems, vol. 30, 2017. [27] Jungil Kong, Jaehyeon Kim, and Jaekyoung Bae, “Hifi-gan: Generative adversarial networks for efficient and high fidelity speech synthesis,” Advances in Neural Information Processing Systems, vol. 33, pp. 17022– 17033, 2020. [28] Wei-Ning Hsu et al., “Hubert: Self-supervised speech representation learning by masked prediction of hidden units,” IEEE/ACM Transactions on Audio, Speech, and Language Processing, vol. 29, pp. 3451–3460, 2021. [29] Liu Ziyin, Tilman Hartwig, and Masahito Ueda, “Neural networks fail to learn periodic functions and how to fix it,” Advances in Neural Information Processing Systems, vol. 33, pp. 1583–1594, 2020.
synthetic_cpt
1
I-BERT_Integer-only_BERT_Quantization.pdf
3 2 0 2 r p A 0 1 ] C A . h t a m [ 2 v 9 1 3 3 1 . 9 0 2 2 : v i X r a BOUNDS FOR THE REDUCTION NUMBER OF PRIMARY IDEAL IN DIMENSION THREE MOUSUMI MANDAL AND KUMARI SALONI Abstract. Let (R, m) be a Cohen-Macaulay local ring of dimension d ≥ 3 and I an m-primary ideal of R. Let rJ (I) be the reduction number of I with respect to a minimal reduction J of I. Suppose depth G(I) ≥ d − 3. We prove that rJ (I) ≤ e1(I) − e0(I) + λ(R/I) + 1 + (e2(I) − 1)e2(I) − e3(I), where ei(I) are Hilbert coefficients. Suppose d = 3 and depth G(I t) > 0 for some t ≥ 1. Then we prove that rJ (I) ≤ e1(I) − e0(I) + λ(R/I) + t. 1. Introduction I In /= be a Noetherian local ring of dimension d In}n∈Z is called an I-admissible filtration if I n−k for some k In} 1 and I an m-primary ideal. A sequence R, m Let ) ( Im+n and ImIn ⊆ ii of ideals I = { ( I n In+1 In ⊆ I1 such that JIn = iii ⊆ I = { ( ) for n 0 and it is called minimal reduction if it is minimal with respect to containment among m is infinite. all reductions. A Minimal reduction of Minimal reductions are important in the study of Hilbert functions and blow-up algebras. For a minimal reduction J of exists and is generated by d elements if R In+1 ⊆ i ( is an ideal J N. A reduction of , we define In, ≫ ≥ ⊆ I ) ) ∈ / rJ (I) = sup { n Z ∣ ∈ JIn−1} J is a minimal reduction of , I} { min (I) = and r rJ (I) ∣ with respect to J and reduction number of respectively for I n known as reduction number of respectively. We and r I write rJ ( }n∈Z. Reduction number is an important data associated to an ideal which contains information about the depth and structural properties of the associated graded ring G can in place of rJ (I) and r I = { (I) I ( I I ) ) In/ I In+1. The number rJ ( ) I . We look for bounding rJ ( ) be seen as a measure of how closely J and I are related. In general, it may be hard to compute I rJ ( in terms of other invariants of the ring or the ideal such as embedding dimension, Hilbert coefficients etc. The Hilbert coefficients of are the unique n d, such that the function HI( integers ei(I) coincides with the following , 0 polynomial for n R λ ( In) ) ∶= I ) / (I) = ⊕n≥0 i ≤ 0 ∶ ≤ ≫ x x PI ( d d n denotes the length function. The function HI ( e0(I)( e1(I)( ) − ) = + d + − − x d 1 2 − 1 1 ) + ⋯ + (− ded(I) . ) (∗) Here λ known as the Hilbert-Samuel function and the Hilbert-Samuel polynomial of are respectively. For . We refer to [19] for the related background material. x and the polynomial PI ( ) I ) I n I , we write ei( } ) I = { instead of ei(I) 2020 Mathematics Subject Classification. 13H10, 13D40, 13A30. Key words and phrases. Cohen-Macaulay local rings, reduction number, Ratliff-Rush filtration, Hilbert coefficients. 1 2 M MANDAL AND K SALONI d I It is well known that if depth G ( J. Further, if R is a one dimensional Cohen-Macaulay local ring then r Theorem 2.45], Vasconcelos proved that in a Cohen-Macaulay local ring of dimension d does not depend on the minimal reduction I 1. In [20, ( 1, I 1, then rJ ( I e0( ) ≤ ) − ) ≥ − ) ≥ (1) r I ( ) ≤ I d.e0( I o ) ( ) 2d − 1 + mn. ) is the largest positive integer n such that I where o I ( A non-Cohen-Macaulay version of the above result can be found in [4, Theorem 3.3]. Let R be a Cohen-Macaulay local ring of dimension at most two. In [16, Corollary 1.5], Rossi proved that ⊆ (2) I rJ ( I e1( I e0( R λ ( I 1 / ⊆ ) ≤ ) − ) + ) ≥ ) + for a minimal reduction J I. Since then many attempts have been made to achieve a bound of similar character in higher dimension. 
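For readability, inequalities (1) and (2) above can be restated as follows; this is only a transcription of the bounds already cited from Vasconcelos [20] and Rossi [16], in the notation introduced earlier.

\[
  (1)\qquad r(I) \;\le\; \frac{d\, e_0(I)}{o(I)} \;-\; 2d \;+\; 1,
  \qquad o(I) = \max\{\, n \ge 1 : I \subseteq \mathfrak{m}^n \,\},
\]
\[
  (2)\qquad r_J(I) \;\le\; e_1(I) - e_0(I) + \lambda(R/I) + 1
  \qquad \text{for a minimal reduction } J \subseteq I \text{ when } \dim R \le 2.
\]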
For instance, the bound in (2) holds in all dimensions if depth G d I R λ 1 [2, Theorem 3.1]. Another ( / ( case is when I is of codimension 3 generated by five quadrics [6, Theorem 2.1 and Proposition 2.4]. However, no example is known to counter the relation in (2) in higher dimension. I In this paper, our objective is to find bounds for rJ ( in dimension three involving higher Hilbert coefficients. We prove the following generalization of Rossi’s result [19, Theorem 4.3] in dimension three: I 2 [19, Theorem 4.3] or if e1( x, y, z k [ I e0( − ⊆ ) = ) + ) − ) I ] be a Cohen-Macaulay local ring of dimension three and I an m-primary 1. Let J I be a minimal reduction of I. Then R, m Theorem 1.1. Let ( I t ideal with depth G ( ) > ) 0 for some t ≥ (3) I Furthermore, if rJ ( ) ≡ I rJ ( k mod t, 1 ) ≤ k ≤ R λ ( / I ) + t. ) + ⊆ I e0( 1, then ) − I e1( t ≤ − I e1( I As a consequence, if rJ ( Furthermore, we prove the following bound in dimension d ) > ) ) ≤ I rJ ( I e0( I 2 is odd and depth G ( ) − ) + I k. R λ ( / ) + I 0 then rJ ( 3. ≥ I e1( ) − I e0( ) + R λ ( / I ) + 1. ) ≤ R, m Theorem 1.2. Let be a Cohen-Macaulay local ring of dimension d ) ( I primary ideal with depth G ) ≥ ( I e1( − I e0( I e2( ) 3. Then I rJ ( I e3( I e2( ) − ) + ) − + ( d 1 . ) ) − (4) R λ ) ≤ ( I Though the bound in (4) is quadratic in e2( than earlier known bounds. For small values of e2( I I in terms of ei( I for rJ ( , 0 ) ) + 3, in Corollary 4.3. ≤ ≤ 1 ) ) I / i , we illustrate various examples where it is tighter ) in dimension three, we obtain linear bounds 3 and I an m- ≥ I It is worth asking if we can find similar bounds for rJ ( in Noetherian local rings. In [5], Ghezzi et. al. proved a nonlinear bound in terms of Hilbert coefficients in two dimensional Buchsbaum local rings of positive depth. We prove the following results: ) Theorem 1.3. Let ideal. R, m ( ) be a Buchsbaum local ring of dimension d 2 and I an m-primary ≤ (1) Let d (2) Let d I R λ ( e1( I 1. Then rJ ( I ) ≤ = I t 2 and depth G ( = t ) + 1. + / ) > ) − I e0( J e1( ) − 0 for some t λ R I 2. ( / I 1. Then rJ ( ) + ) + ≥ ) ≤ I e1( ) − J e1( ) − I e0( ) + BOUNDS FOR REDUCTION NUMBER 3 I The main difficulty in generalizing the bound in (2) in higher dimension is that rJ ( does not behave well with respect to superficial elements. This fact is closely related to Ratliff-Rush {̃I n of powers of I, see Lemma 3.1. We recall the definition more generally for an I- closure )}n∈Z. admissible filtration . The Ratliff-Rush filtration of is the filtration In+t ∶ I t I I } ) { ̃In = ⋃t≥0( For a minimal reduction J of , we set I sup Z rJ (I) ∶= ∣ ̃In /= ∈ ̃ I . Note that if depth G ( n { . J ̃In−1} 0, then I n ) } if ) > ) ≤ I = { I e1( I rJ ( ̃ I rJ ( ̃ I rJ ( . In [17], Rossi and ) I rJ ( . It follows ) ) ≤ ̃ 1 in dimension two. We extend the result of Rossi and We write ) = Swanson proved that in a Cohen-Macaulay local ring of dimension two I R λ that ( Swanson for any I-admissible filtration in Proposition 2.1. It is natural to ask if rJ (I) ≤ largely unknown, even in smaller dimensions. In Theorem 2.2, we prove that e2(I) . and subsequently discuss the cases when 1 for an I-admissible filtration . This is I e2(I) + rJ (I) ≤ 1 ̃ rJ (I) = ̃ rJ (I) = ̃ e2(I) + e0(I) + e1(I) − I1) + I rJ ( ̃ I rJ ( I e0( R λ ( 1 and ) + ) − ) + / / rJ (I) ̃ In Section 2, we prove in two dimensional Cohen-Macaulay local rings. This paper is organised in four sections. bounds on I Theorem 1.1 and its consequences. 
Then we gather various cases when the bound rJ ( ) ≤ I e1( 1 holds in dimension three. We also prove Theorem 1.3 in this section. In Section 4, we prove Theorem 1.2. Examples are given in support of our bounds being better than the earlier known bounds. rJ (I) ≤ and discuss ̃ In Section 3, we establish rJ (I) I e0( R λ ( ) + ) + ) − I / 2. Bound for rJ (I) ̃ in dimension two For this section, let R be a Cohen-Macaulay local ring of dimension two. Inspired by Rossi’s 1 for an I-admissible filtration bound in (2), one can ask whether rJ (I) ≤ ? In general, it is not known. Suppose I }n∈Z, the above question I has affirmative answer which follows from a result of Rossi and Swanson [17, Proposition 4.5]. . We generalize [17, Proposition 4.5] for an I admissible filtration. I rJ ( They proved that ) Further in this section, we prove certain bounds for e0(I)+ e1(I)− ˜I. Then for the case = I = {̃I n I1)+ I rJ ( ̃ R λ ( ) ≤ / rJ (I) . ̃ R, m ( an I-admissible filtration. Then, for a minimal reduction J of be a Cohen-Macaulay local ring of dimension two, I an m-primary rJ (I) ≤ ̃ I ) , In} I = { Proposition 2.1. Let ideal and rJ (I) . Proof. Let r r, ̃In+1 = n ≥ ∈ ̃In+1 = ( a axk xkb + sequence we have a In+k ∶ a ( rJ (I) = J ̃In. For k In+1+k ∶ ( yc where b b − xk dy − = = y ∈ b ) x, y and J with x, y a regular sequence in R. We show that for all for m n 1. Let )) = − J k+1In ⊆ xkxIn + yIn+k. Let ∈ yc. Since x, y is a regular b ) = In+k, we get d and Im+k ∶ ( . Then axk In+k. This gives xk a ( R. As c xk, yk 1, n, n xk + ≫ xk, yk = ( 0, we may write ̃Im = ( J k+1In ∶ ( )) )) = ( xIn and c ∈ ∈ xkd for some d dy and c = = . Therefore, ) xk, yk In+k ∶ − ∈ ∈ ( ) ∈ By similar arguments, we can show that a xk and s2 ∈ ( where r1, r2 ∈ αy for some α s1 = r2 − In, s1 ∈ ( αx and r1 − In+k ∶ s2 = ) a ∈ xIn + xk y In+k ∶ . ( ) yk xr1 + . Now let a yIn + In+k ∶ x = ) ( ∈ yk s1) r2 − y s2) = r1 − . Then x In+k ∶ ( ( ) In+k and αyk+1 R. Then αxk+1 s1xk ∈ − = ∈ xs2 yr2 + ys1 = which implies r2xk = 4 M MANDAL AND K SALONI ∈ ( xk+1, yk+1 In+k ∶ ( J ̃In−1 ⊆ ̃In. This gives s1, s2 ∈ ̃In and a s2yk In+k. These give α ∈ s1 = αx J ̃In for all n r1yk − and r2 − ̃In+1 = In Example 2.5, we see a class of examples with upper bound for r. ≥ ∈ rJ (I) ̃ rJ (I) < ̃ in dimension two. We may assume from now on that I rJ (I) . In the next theorem, we give an Theorem 2.2. Let In} ideal and I = { R, m ( ) be a two dimensional Cohen-Macaulay local ring, I an m-primary an I-admissible filtration. Then, for a minimal reduction J J. /= I, ⊆ )) = ̃In−1. Therefore r1 − JIn + xr1 + ys1 ∈ s2 = J ̃In ⊆ = ∈ J ̃In−1 ⊆ ̃In αy J ̃In. Therefore (cid:3) rJ (I) ≤ ̃ Furthermore, consider the following statements: e2(I) + 1. rJ (I) = (i) ̃ (ii) ̃In+1 = (iii) λ ( ̃In+1/ (iv) e1(I) = 1; e2(I) + J ̃In for all n J ̃In) = e0(I) − (iii) Ô⇒ ≠ 1 for n R λ ( 0, e2(I) ; e2(I) ; = / ̃I1) + 1. (i) We have (iv) ≥ n 1. ⇐⇒ Proof. Since depth G Ô⇒ 1, we have e2(I) = ∑n≥0 0 for all n e2(I) = ∑ J̃Ie2(I)) = (iii) and (ii) (̃I) ≥ . This gives nvn(̃I) = e2(I) + ⇐⇒ J ̃In for n = e2(I) n=0 nvn(̃I) ≤ ≤ J ̃In) ( ̃In+1/ λ rJ (I) ≤ i.e., ̃ (ii). Suppose Now we show (i) e2(I) and ̃In+1 ≠ with ve2(I)(̃I) ≠ 0 < e2(I) − ̃In+1 = J ̃In for all 1 rJ (I) ≤ in (ii) holds. When e2(I) = 0, ̃ which is not true. Now suppose e2(I) ≠ 1. Therefore ˜rJ (I) ≥ λ ( ̃Ie2(I)+1/ (i) Ô⇒ Ô⇒ Suppose (iii) holds. Then (i). Suppose (iv) holds. Since e1(I) = ∑ λ ( ̃I1/ and only if ) + Therefore ̃In+1 = 1 except one, say n e2(I) Then e2(I) = ∑n=0 (iii). 
Finally assume e2(I) ≠ e0(I) − J̃Ie2(I)) = ( ̃Ie2(I)+1/ λ R, m Corollary 2.3. Let ( I e2( I rJ ( 1 ideal. If ) + ̃ ( ̃In+1/ J ̃In for all n ( ̃In+1/ nλ (iii) when e2(I) ≠ e2(I) + rJ (I) ≥ ̃ J ̃In) = ≥ J ̃In) = e2(I) + 0. e2(I) n=0 λ J (̃In0+1/ e2(I) n=0 λ / ̃I1) + 1. Since R λ ( n0λ ) ≠ ) = ∑ 1. (ii) and all four are equivalent if e2(I) ≠ 0. nvn(̃I) by [19, Theorem 2.5], where vn(̃I) = J ̃In for all n 1, e2(I) + 1. Hence ̃In+1 = ≥ e2(I) + rJ (I) = ̃ 1. Then ̃In+1 = J ̃In for all n 0. When e2(I) ≠ 1 for n e2(I) + . This gives (ii) when e2(I) = 0 which implies λ e2(I) + 1 ≥ 0, we have e2(I) ( ̃In+1/ and 1. This gives (ii). For the converse, suppose the assumption J, 0 gives ̃I1 = rJ (I) = 1. Otherwise, ̃ J̃Ie2(I)) ( ̃Ie2(I)+1/ λ e2(I) which implies 1. Note that the above arguments also prove rJ (I) = 1. In fact, ̃ 0. Then e2(I) = J ̃In) = = rJ (I) ≤ ̃ e2(I) + 1, we get the equality as in J ̃In) ( ̃In+1/ e0(I) − , we have e1(I) = e2(I) n=1 λ 1. This forces e2(I) ≠ 0 and ∑ and λ e2(I) n0 ≤ = n0. This proves (iv) J ̃In0) = / ̃I1) + J ̃In) = J ̃In0 ) = λ R ( ( ̃In+1/ (̃In0+1/ (ii) and (iv) n0, 1 Ô⇒ ≤ 1 if 1. 1. Ô⇒ 0 and (ii) holds. Then we get e1(I) = ∑ e2(I) n=0 λ ( ̃In+1/ J ̃In) = ( ̃I1/ λ J ) + (cid:3) be a two dimensional Cohen-Macaulay local ring and I an m-primary 1 then I e2( Moreover, if I is Ratliff-Rush closed then the following statements hold: (̃I λ / I rJ ( ) − ) + ) ≤ 1. ≤ ≤ 1 1 I I (i) rJ ( ) = 2 BOUNDS FOR REDUCTION NUMBER 5 I (ii) e2( 1 ) = I (iii) depth G ( 1 ) ≥ Proof. We have (5) By Theorem 2.2, e1( I ) = I (̃I λ I e2( / ≤ I rJ ( ) + ) ≤ 1 ) + I e2( ) ) 1 ≤ ) + = ̃ I rJ ( rJ ( I I e1( ≤ /̃I R λ I e0( ) + ( 2 which implies ) − by Proposition 2.1 ( ) 1 (by (2) ) ) − I ) + R λ ( I e0( 1. Substituting the value in equation (5), we get ) + / (̃I λ / I Moreover if I is Ratliff-Rush closed, then we obtain e2( I [19, Theorem 3.3] we have depth G ( I rJ ( I e2( ) − ) ≤ ) ≥ 1. ≤ ≤ 1 1 ) = I 1. ) + I 1 and rJ ( I e2( ) + 1 = ) = 2. Then by (cid:3) Corollary 2.4. Let In} ideal and then the following statements hold: I = { R, m ( an I-admissible filtration. For a minimal reduction J be a two dimensional Cohen-Macaulay local ring, I an m-primary e2(I) rJ (I) = ̃ I, if ⊆ ) ≠ 2 for n 0, 1, e2(I) − = 1 for n 1; 1 if e2(I) = 1, e2(I) − 2. 2, 1 if e2(I) ≠ 2, (i) ̃In+1 = (ii) λ ( ̃In+1/ J ̃In) = J ̃In for n ⎧⎪⎪⎪ ⎨ ⎪⎪⎪⎩ R λ e0(I) − ( rJ (I) = ̃ 1. Since depth G (iii) e1(I) = Proof. Note that n e2(I) − suppose e2(I) = = = / ̃I1) + e2(I) (̃I) ≥ ( ̃I2/ e2(I) − 2 ( 1 = 2. 1 for n 3, we get 2. Then λ e2(I)−2 n=0 1. This gives 1 J ̃In for 1. Now J ̃I1) = 1 ) > e2(I) − ( 1 and ̃In+1 = e2(I)−1 n=0 if and only if ̃In+1 = J ̃In for all n e2(I) and ̃In+1 ≠ ≥ e2(I)−1 nvn(̃I) . Therefore e2(I) ≠ 1, we have e2(I) = ∑ n=0 2 and ̃In+1 = J ̃In for n 2. For the case e2(I) ≥ ≥ e2(I) ≥ ( e2(I) − . ve2(I)−1(̃I) ) e2(I) − ve2(I)−1(̃I) = ∑ 1 ) J ̃In for all 2 n e2(I) − ≤ ≤ / ̃I1) + R λ e0(I) − J ̃In) = ( 0, d nvn(̃I) which implies 2. This proves (i) and (ii). To see (cid:3) 2 and k be an infinite field. Consider the power 1 indeterminates and the d 1 So, ve2(I)−1(̃I) = J ̃In) = ( ̃In+1/ λ = ( ̃In+1/ λ (iii) , we have e1(I) = ∑ Example 2.5. [12, Theorem 5.2] Let m ≥ Vj }1≤j≤d, Xj}1≤j≤m, Y, k series ring D 2d + { [[{ ideal a m j Xj ∣ . Y i 1 1 ) + ( ≤ ≤ ≠ ≤ ≤ [( )] = [( V 3 a and xi, y, vi, zi denote the images of Xi, Y, Vi, Zi in R j D . Define R d i 1 i − ) + ( / = ≤ ) ≤ respectively. Let m j y m j xj ∣ be the maximal ideal 1 ) + ( ) + ( ≤ = ( ≤ . 
Then d j zj ∣ in R and Q 1 ) ≤ ≤ (1) R is Cohen-Macaulay local ring with dim R d, = m (2) Q is a minimal reduction of m with rQ( 3, ) = m m 2; e2( (3) e0( + m m is Buchsbaum ring with depth G (4) G ( ( ) = m m λ Particularly when d 8 and e0( 2, we have e1( R ( = ) − m m m m rQ( R λ e0( e1( 2. Therefore by Corollary 2.4, /̃ ) ≠ ) + ( ) − ̃ m m . rQ( rQ( ) ̃ 7 which implies . Therefore ) Zj}1≤j≤d]] Y m ) + ( ≤ with m Vi ∣ m 1 and ei( m /̃ ) + m e2( = Xj ∣ ZiY + m rQ( m 2; e1( = 3 ) = ViVj ∣ ) /= ) < ) = ) + 1 ≤ 0 for 3 ) = 0. )] + ( vj ∣ zj ∣ ) + ( ) + ( d ) ) = d, i ) = ) = = ( i, j 2d 3d { j m m m m d, + + + + + ≤ ≤ ≤ ≤ ≤ ≤ ≤ ≤ = ≥ d d 2 1 1 j i ∣ 6 M MANDAL AND K SALONI We end this section with the following questions. Question 2.6. Is rJ (I) ≤ R λ ( dimensional Cohen-Macaulay local ring? Since investigate whether the same bound holds for e0(I) + e1(I) − I1) + / rJ (I) ≤ ̃ rJ (I) ? ̃ 1 for any I-admissible filtration in two by Proposition 2.1, one may rJ (I) Question 2.7. Is rJ (I) ≤ ̃ 3? for d rJ (I) 3. Rossi’s bound in dimension three ≥ ) x x x /( /( ) ≥ )) = I rJ ( I rJ ( )) = I n for all n I may not hold. When depth G ( I, In general, reduction number does not behave well with respect to a superficial element x ∈ I rJ ( I 1, then rJ/(x)( i.e., rJ/(x)( I , see ) )) = ) ≥ 0. I and depth G I [11, Lemma 2.14]. However, there are examples when rJ/(x)( /( ) = ( ) 1 is equivalent to the condition that ̃I n Note that depth G I 1. In the lemma = ( I below, we state a necessary condition for rJ/(x)( . ) Lemma 3.1. Let ) an m primary ideal and J element x be a Noetherian local ring of dimension d I I a minimal reduction of I. If rJ/(x)( x . ≠ )) ≤ ) . Then I n I n for some n with rJ/(x)( I rJ ( I n ) ) ⊆ ⊆ ( < /( I n which implies I n+1 xI n. On the other hand, x x ) = ∩ ( ∶ . Hence I n+1 x ) ∈ Proof. Suppose ̃I n ( ̃I n+1 I n. Thus ) = ̃I n x n+1 x J x I x I )) = ( /( )( /( JI n. So rJ ( JI n I ) ≤ = JI n n which is a contradiction. I n+1 ) = ( n which implies I n+1 1 and depth R I rJ ( x ⊆ I n for all rJ/(x)( I 0. Let I be for a superficial I rJ ( < x I, then ̃I n R, m ( I rJ ( I n+1 I n+1 = /( ≥ /( JI n )) ≤ )) < )) < = (cid:3) ) ∩ + ( + ( /( /( n x x x > ≥ ⊆ ⊆ = ) ∶ ∶ )) xI n + We define ∣̃I n As an interesting application of Lemma 3.1, we see that Rossi’s bound holds in dimension three for those m-primary ideals I for which ρ I rJ ( min I ( I ( ) − ) = ) ≤ 1. = ≥ ≥ ρ { } 1 i i . I n for all n R, m Proposition 3.2. Let ( primary ideal. For a minimal reduction J of I, if ρ R λ ( ) + 1. ) I / be a Cohen-Macaulay local ring of dimension d I ( ) ≤ I rJ ( ) − I 1, then rJ ( 3 and I an m = I e0( I e1( ) ≤ ) − ) + I be a superficial element. Suppose ρ I rJ ( ) I e0( ) + = ̃I rJ (I)−1 ) − by Lemma 3.1. Now, using the bound in (2), we get that (cid:3) 1. Then I rJ (I)−1 I rJ ( I ( ) ≤ 1. x ) + x ℓ ∈ I /( ) = /( Q x )) = /( I e1( ) − Proof. Let x which implies rJ/(x)( I I rJ/(x)( I rJ ( R )) ≤ / ( I The following examples show that rJ/(x)( Example 3.3. [19, Example 3.3] Let R = ⊆ ̃I but x2y2 I I which implies depth G I ) = ( ∉ x4 y4 is superficial for I as e0( I and p I = ) = + I rJ/(p)( I rJ ( 2 /( Example 3.4. [18, Example 3.8] Let R = ) ⊆ ̃I but x2 0 as x2 I depth G ) = ( ∉ 10 4 1 4 5 9 y2 2 y2 3 xy 3 yz 3 xz + − + + − I I e0( minimal reduction of I and e0( 8 = ) = 1 4 1 6 x2 2 y2 where p 3 yz 3 xz 2 xy − + + − p I rJ/(p)( I Further, rJ ( . 2 )) /( ) = I 2 I ∶ ∈ ( 3 z2, 23 63 x2 . )) ) = + + 23 = = = p 2 5 I rJ ( ) and I )) = x, y [[ 0. 
Note that J I e0( 16 x4, x3y, xy3, y4 I may hold even if depth G ( I 2 . Then x2y2 ∶ ) is a minimal reduction of p . Further, x4, y4 ) = ( I and e1( ) = I e1( = ( ) = )) )) /( /( 0. ]] = = p 6 ∈ − [[ ]] x2 x, y, z and I y2, y2 Q = ( I. Using Macaulay 2, we find that J 5 5y2 4 xz xz + I and e2( p − is a − p , /( )) )) 4 3 z2. This shows that p is a superficial element for I. z2, xy, yz, xz 6 x2 = ( 7 6 yz + 0 ) = . Then ) 1 2 xy + x2 − I e2( 7 z2, 6x2 + I e1( 4 9yz − I , e1( 5 6 xy p + )) ) /( ) = /( − 23 = = 1 BOUNDS FOR REDUCTION NUMBER 7 Lemma 3.5. Let I t with depth G ( minimal reduction of I. Then R, m ( 0 for some t ) > ) be a Noetherian local ring of dimension d ≥ I be a superficial element for I and J 2 and I an m-primary ideal I be a 1. Let x ∈ ≥ ⊆ I rJ ( x I rJ/(x)( t /( 1, then ) ≤ k )) + 1. t − k mod t, 1 I Furthermore, if rJ ( (6) ) ≡ ≤ ≤ I rJ ( − I rJ/(x)( 0, we have depth R ) ≤ x k 1. /( I t I Proof. Since depth G 0. We first consider the case when rJ ( ) ≡ ( > 0. We claim that rJ ( mod t for 1 I I 1 and prove (6). Suppose rJ ( ) < ) = ≤ ≤ ≥ x rJ/(x)( I rJ ( k I rJ ( I k. Suppose rJ/(x)( I , )) ≤ ) < )) + /( I t I mt as depth G but ̃I mt I 0. Then by Lemma 3.1, rJ ( , a contradiction. ( = Therefore, k for m + I mt, then rJ/(x)( I rJ/(x)( /( )) ≤ )) + /( x ) > ) − ) = ) > mt mt )) /( − − x x = k k t Next, let k t rJ ( I ) − I 3.1, rJ ( = ( ) = I 0, i.e., rJ ( = mt m t 1 < ) − x I rJ/(x)( /( = )) ) = /( ) ≤ I rJ ( rJ/(x)( I I 1. Then rJ ( mt, m ≥ and again ̃I (m−1)t I rJ ( , a contradiction. Therefore, ) < = ) 1. x k − )) + I rJ/(x)( /( I (m−1)t as depth G I t ( ))+ x I t. Otherwise, rJ/(x)( )) ≤ 0. Then by Lemma /( x ) > I rJ ( ) ≤ I rJ/(x)( x /( )) + 1. t − We now generalize Rossi’s result for d obtain the I-adic case of [19, Theorem 4.3] in dimension three. 3 case. Note that when t = (cid:3) 1 in the result below, we = be a Cohen-Macaulay local ring of dimension d 3 and I an m-primary = 1. Let J I be a minimal reduction of I. Then R, m Theorem 3.6. Let ( I t ideal with depth G ( ) > ) 0 for some t ≥ I Furthermore, if rJ ( ) ≡ I rJ ( k mod t, 1 ) ≤ k ≤ R λ ( / I ) + t. ) + ⊆ I e0( 1, then ) − I e1( t ≤ − I e1( I rJ ( ) ≤ I e0( ) + R λ ( / I ) + k. ) − Proof. Let x Cohen-Macaulay local ring. By Lemma 3.5 and the bound in (2), we have I be a superficial element for I and let R /( R = ∈ . Then R is a two dimensional x ) I rJ ( ) ≤ ≤ = I When rJ ( from (6). ) ≡ k mod t, 1 k t ≤ ≤ t x /( /( )) + I e0( rJ/(x)( I e1( x I )) − I e0( I e1( ) + I 1, we have rJ ( − ) − 1 − x /( R λ ( ) ≤ )) + I R λ ( t. / ) + I rJ/(x)( I /( x + ( ))) + t x /( ))+ k 1 − ≤ I e1( I e0( )− R λ ( )+ I / )+ k (cid:3) Corollary 3.7. Let primary ideal. Let J Then be a Cohen-Macaulay local ring of dimension d R, m ( I 2 I be a minimal reduction of I. Suppose depth G ( ⊆ ) > 3 and I an m- = I 0 and rJ ( is odd. ) ) I 2 Proof. Since depth G ( ) > I e1( ) − I e0( ) + R λ ( / I ) + 1. I rJ ( ) ≤ I 0 and rJ ( ) ≡ 1 mod 2, the conclusion follows from Theorem 3.6. (cid:3) denote the i-th local cohomology module of S with support in with H i S n 0 max { ) = ∣ S+( )n /= } be a Cohen-Macaulay local ring of dimension d 3 and I an m- = 8 M MANDAL AND K SALONI S 0. ) = S+( S+( For a graded ring S, let H i S ) the graded ideal S+ of elements of positive degree and set ai( S if H i S the convention ai( Corollary 3.8. Let primary ideal. Let J ) = −∞ R, m ( I be a minimal reduction of I. Then ⊆ ⎧⎪⎪⎪ e1( I ⎨ I e1( ⎪⎪⎪⎩ I ρ(I) Proof. Since depth G ( 1, 1 by [13, Theorem 4.3]. 
If a1(G(I)) ≤ 0, we can put ρ(I) = 1 in Theorem 3.6, and if a1(G(I)) > 0, we can put ρ(I) ≤ max{a1(G(I)) + 1, 1} in Theorem 3.6, so that in either case rJ(I) ≤ e1(I) − e0(I) + λ(R/I) + max{a1(G(I)) + 1, 1}. □

In [9] Itoh proved that e2(m) ≥ e1(m) − e0(m) + 1, where type(R) = dimk Ext^d_R(k, R), see [15]. In [15], the authors also proved that if e2(m) = e1(m) − e0(m) + 1 ≠ 0 and type(R) ≠ e0(m) − μ(m) + d − 1 = λ(m^2/Jm), then G(m) is Cohen-Macaulay. Therefore Rossi's bound as given by (2) holds for rJ(m) in this case. We consider the next boundary case, i.e., type(R) = e0(m) − μ(m) + d − 1. In the corollary below, we obtain a linear bound in this case as well.

Corollary 3.9. Let (R, m) be a Cohen-Macaulay local ring of dimension d = 3 with e2(m) = e1(m) − e0(m) + 1 ≠ 0 and type(R) = e0(m) − μ(m) + 2. Suppose J ⊆ m is a minimal reduction of m. Then rJ(m) ≤ e1(m) − e0(m) + λ(R/m) + 3.

Proof. If depth G(m) ≥ 1, then the conclusion follows from [19, Theorem 4.3]. Suppose depth G(m) = 0. By [15, Theorem 4.2], ̃m^j = m^j for j ≥ 3, which implies ρ(m) ≤ 3. Then by Theorem 3.6, rJ(m) ≤ e1(m) − e0(m) + λ(R/m) + 3. □

We now consider Example 2.5 with m = 3 to demonstrate that the bound in Theorem 3.6 is better than the one given by Vasconcelos in (1).

Example 3.10. Let R = k[[x, y, z, u, v, w, t]]/(t^2, tu, tv, tw, uv, uw, vw, u^3 − xt, v^3 − yt, w^3 − zt). Then R is a Cohen-Macaulay local ring of dimension 3 and depth G(m) = 0. We have e0(m) = 8, e1(m) = 11, e2(m) = 4 and e3(m) = 0, see [15, Example 5.2(1)]. By [15, Theorem 4.2], we have ̃m^2 ≠ m^2 and m^j = ̃m^j for j ≥ 3. Now J = (x, y, z) is a minimal reduction of m and rJ(m) ≤ e1(m) − e0(m) + λ(R/m) + 3 = 7. Note that the bound de0(m)/o(m) − 2d + 1 = 19 given by Vasconcelos in [20] is larger than our bound.

In the next proposition, we summarize the cases when Rossi's bound holds in dimension three. Some of these results are already known. Let vn(I) = λ(I_{n+1}/J I_n) for any I-admissible filtration I = {I_n}n∈Z and let F = { ̃I^n } denote the Ratliff-Rush filtration. By the proof of Rossi's result [16, Theorem 1.3] in a d dimensional Cohen-Macaulay local ring, we have

(7) rJ(I) ≤ ∑n≥0 vn(F) − e0(I) + λ(R/I) + 1.

The idea in the next few results is to approximate the term ∑n≥0 vn(F).

Proposition 3.11. Let (R, m) be a three dimensional Cohen-Macaulay local ring, I an m-primary ideal and J a minimal reduction of I. Then rJ(I) ≤ e1(I) − e0(I) + λ(R/I) + 1 if one of the following conditions hold:
(i) depth G(F) ≥ 2.
(ii) e2(I) = e3(I) = 0.
(iii) e2(I) = 0 and I is asymptotically normal.
(iv) e2(I) = 0 and G(I) is generalized Cohen-Macaulay.
(v) ρ(I) = 1.
(vi) a1(G(I)) ≤ 0.

Proof. (i) As depth G(F) ≥ 2, e1(I) = e1(F) = ∑n≥0 vn(F) by [8, Proposition 4.6]. Substituting this into (7), we get rJ(I) ≤ e1(I) − e0(I) + λ(R/I) + 1.
(ii) If e2(I) = e3(I) = 0, then G(F) is Cohen-Macaulay by [14, Theorem 6.2] and hence the conclusion follows from part (i).
(iii) By [18, Theorem 4.1], e3(I) ≥ 0 for an asymptotically normal ideal I and by [14, Proposition 6.4], e2(I) = 0 implies e3(I) ≤ 0. This gives e3(I) = 0. Now the conclusion follows from part (ii).
(iv) Suppose e2(I) = 0. Then e3(I) = 0 if and only if G(I) is generalized Cohen-Macaulay by [14, Proposition 6.4]. Now the conclusion follows from part (ii).
(v) It follows from Proposition 3.2.
(vi) It follows from Corollary 3.8. □

Remark 3.12. (1) Note that in Example 3.10, G(F) = ⊕n≥0 ̃m^n/ ̃m^{n+1} is Cohen-Macaulay by [15, Theorem 4.2]. Hence by Proposition 3.11(i), we have rJ(m) ≤ e1(m) − e0(m) + λ(R/m) + 1 = 5.
(2) In Example 3.4, we have e2(I) = e3(I) = 0. Hence by Proposition 3.11(ii), rJ(I) ≤ e1(I) − e0(I) + λ(R/I) + 1 = 2.

Next we give an upper bound for the reduction number of an ideal in a Buchsbaum local ring of dimension at most two.

Theorem 3.13. Let (R, m) be a one dimensional Buchsbaum local ring and I an m-primary ideal. Let J be a minimal reduction of I, then
rJ(I) ≤ e1(I) − e1(J) − e0(I) + λ(R/I) + 2.

Proof. Let S = R/H^0_m(R). Note that S is a 1 dimensional Cohen-Macaulay local ring. Let r = rJS(IS). Then
(8) I^{r+1} ⊆ JI^r + H^0_m(R),
which implies that I^{r+2} ⊆ JI^{r+1}. Hence rJ(I) ≤ rJS(IS) + 1. Therefore, by (2), we have
rJ(I) ≤ rJS(IS) + 1 ≤ e1(IS) − e0(IS) + λ(S/IS) + 2 ≤ e1(I) − e1(J) − e0(I) + λ(R/I) + 2 (by [19, Lemma 2.3, Proposition 2.3]). □

Theorem 3.14. Let (R, m) be a two dimensional Buchsbaum local ring and I an m-primary ideal. Let J be a minimal reduction of I and depth G(I^t) > 0 for some t ≥ 1, then
rJ(I) ≤ e1(I) − e1(J) − e0(I) + λ(R/I) + t + 1.

Proof. Note that depth R > 0 as depth G(I^t) > 0. Let x ∈ I be a superficial element for I. Then by Lemma 3.5, we have rJ(I) ≤ rJ/(x)(I/(x)) + t − 1. Since R/(x) is a one dimensional Buchsbaum local ring, by Theorem 3.13 we have
rJ/(x)(I/(x)) ≤ e1(I/(x)) − e1(J/(x)) − e0(I/(x)) + λ(R/(x)/(I/(x))) + 2 = e1(I) − e1(J) − e0(I) + λ(R/I) + 2.
Therefore rJ(I) ≤ e1(I) − e1(J) − e0(I) + λ(R/I) + t + 1. □

4. Bound for rJ(I) in dimension three

In this section we give a different upper bound for the reduction number of I in a Cohen-Macaulay local ring of dimension d ≥ 3. Our bound involves e2(I) and e3(I) when depth G(I) ≥ d − 3. For an I-admissible filtration I = {I_n}n∈Z, let R(I) = ⊕n≥0 I_n denote the Rees algebra of I. The second Hilbert function of I, denoted by H^2_I(n), is defined as H^2_I(n) = ∑_{i=0}^{n} H_I(i), and the second Hilbert polynomial of I, denoted by P^2_I(n), is the polynomial which coincides with H^2_I(n) for large values of n. It is well known that the Hilbert series of I, defined as ∑n≥0 λ(I_n/I_{n+1}) z^n, is rational, i.e., there exists a unique rational polynomial hI(z) ∈ Q[z] with hI(1) ≠ 0 such that the series equals hI(z)/(1 − z)^d. For every i ≥ 0, we define ei(I) = h_I^{(i)}(1)/i!, where h_I^{(i)}(z) denotes the i-th formal derivative of the polynomial hI(z) at z = 1. The integers ei(I), 0 ≤ i ≤ d, are called the Hilbert coefficients of I; these are the same as defined earlier in the Introduction, see [3] for more details.

Let us recall the modified Koszul complex in dimension two defined in [11] as follows:
C.(I, n) : 0 → R/I_{n−2} → (R/I_{n−1})^2 → R/I_n → 0,
with maps given by (−y, x) and (x, y), where (x, y) is a minimal reduction of I. Let Hi(C.(I, n)) denote the i-th homology module of the complex C.(I, n). The relation between the homology of this complex and Hilbert coefficients is used in the proof of the next theorem.

For a numerical function f : Z → Z, we put △f(n) = f(n + 1) − f(n) and recursively we define △^i f(n) = △(△^{i−1} f(n)) for all i ≥ 1.

Theorem 4.1. Let (R, m) be a Cohen-Macaulay local ring of dimension d ≥ 3 and I an m-primary ideal with depth G(I) ≥ d − 3. Let J be a minimal reduction of I. Then
(9) rJ(I) ≤ e1(I) − e0(I) + e2(I)(e2(I) − 1) − e3(I) + λ(R/I) + 1.

Proof. Suppose d = 3. Let x ∈ I be a superficial element for I and J a minimal reduction of I. Then x is also superficial for the filtration F = { ̃I^n }. Let R̄ = R/(x) and F̄ = { F̄_n = ( ̃I^n + (x))/(x) }, and let ̃F̄ denote the Ratliff-Rush filtration of F̄. By the proof of [7, Proposition 2.9], we have vn(F̄) = vn(F). Since depth G(F̄) ≥ 1, we have
(10) ∑n≥0 vn(F̄) = e1(F̄) − ∑n≥1 ∑_{i=0}^{2} (−1)^i λ(Hi(C.(F̄, n))) ≤ e1(F̄) + ∑n≥1 λ(H2(C.(F̄, n))),
where the equality follows from [11, Proposition 3.2] and the inequality from the proof of [11, Theorem 3.6]. Since x is a superficial element for F, we get e1(I) = e1(F) = e1(F̄). Therefore, by using (7) and (10),
(11) rJ(I) ≤ e1(I) + ∑n≥1 λ(H2(C.(F̄, n))) − e0(I) + λ(R/I) + 1.
From the modified Koszul complex C.(F̄, n), we have H2(C.(F̄, n)) = (F̄_{n−1} : (y, z))/F̄_{n−2}. Since F̄_{n−1} : (y, z) ⊆ ̃F̄_{n−2}, we get λ(H2(C.(F̄, n))) ≤ λ( ̃F̄_{n−2}/F̄_{n−2} ). Therefore, for large m we have
0 ≤ ∑_{n=0}^{m} λ(H2(C.(F̄, n))) ≤ ∑_{n=0}^{m} λ( ̃F̄_{n−2}/F̄_{n−2} ) = ∑_{n=0}^{m} λ(R̄/F̄_{n−2}) − ∑_{n=0}^{m} λ(R̄/ ̃F̄_{n−2}) = e3( ̃F̄ ) − e3( F̄ ) = e3( ̃F̄ ) − e3(I) (by [3, Proposition 1.5]).
This gives
(12) 0 ≤ ∑n≥0 λ(H2(C.(F̄, n))) ≤ e3( ̃F̄ ) − e3(I).
From (11) and (12), we get
(13) rJ(I) ≤ e1(I) − e0(I) + λ(R/I) + 1 + e3( ̃F̄ ) − e3(I).
By the difference formula in [1, Proposition 4.4], we have for all n ≥ −1,
(14) P_{ ̃F̄ }(n) − H_{ ̃F̄ }(n) = λ( ( H^2_{R+}( R( ̃F̄ ) ) )_{n+1} ).
Now taking the sum for large m on both sides of the above equation, we get
∑_{n=0}^{m} λ( ( H^2_{R+}( R( ̃F̄ ) ) )_{n+1} ) = ∑_{n=0}^{m} P_{ ̃F̄ }(n) − ∑_{n=0}^{m} H_{ ̃F̄ }(n) = ∑_{n=0}^{m} P_{ ̃F̄ }(n) − H^2_{ ̃F̄ }(m) = e0( ̃F̄ )(m+3 choose 3) − e1( ̃F̄ )(m+2 choose 2) + e2( ̃F̄ )(m+1 choose 1) − P^2_{ ̃F̄ }(m) = e3( ̃F̄ ).
As R̄ is a 2-dimensional Cohen-Macaulay local ring, we have λ( ( H^2_{R+}( R( ̃F̄ ) ) )_n ) ≤ λ( ( H^2_{R+}( R( ̃F̄ ) ) )_{n−1} ) for all n ∈ Z by [1, Lemma 4.7]. Now in equation (14), we substitute n = −1 to get
λ( ( H^2_{R+}( R( ̃F̄ ) ) )_0 ) = e2(F̄) = e2( ̃F̄ ) = e2(F) = e2(I).
Therefore,
(15) e3( ̃F̄ ) = ∑_{n=0}^{m} λ( ( H^2_{R+}( R( ̃F̄ ) ) )_{n+1} ) ≤ ∑_{n=0}^{a2(R( ̃F̄ ))−1} λ( ( H^2_{R+}( R( ̃F̄ ) ) )_0 ) = a2(R( ̃F̄ )) e2(I),
where a2(R( ̃F̄ )) ≤ a2(G( ̃F̄ )) = s (say). Now using (13) and (15), we have
(16) rJ(I) ≤ e1(I) − e0(I) + λ(R/I) + 1 + s e2(I) − e3(I).
By [11, Corollary 5.7(2)], we have a2(G( ̃F̄ )) = rJ( ̃F̄ ) − 2 and by Theorem 2.2, rJ( ̃F̄ ) ≤ e2(F̄) + 1. This gives
(17) s = rJ( ̃F̄ ) − 2 ≤ e2(F̄) − 1 = e2(I) − 1.
Now by (16) and (17), we get the conclusion.
Suppose d ≥ 4. Let x ∈ I be a superficial element for I. Then ei(I/(x)) = ei(I) for 0 ≤ i ≤ 3. Also, depth G(I) ≥ 1 implies ̃I^n = I^n for n ≥ 1. This gives rJ/(x)(I/(x)) = rJ(I) by Lemma 3.1. This completes the proof. □

Example 4.2. (1) We refer to Example 3.10 to note that our bound rJ(m) ≤ e1(m) − e0(m) + e2(m)(e2(m) − 1) − e3(m) + λ(R/m) + 1 = 17 is better than Vasconcelos' bound de0(m)/o(m) − 2d + 1 = 19.
(2) Example 2.5 provides a number of three dimensional Cohen-Macaulay local rings with e1(m) − e0(m) + e2(m)(e2(m) − 1) − e3(m) + λ(R/m) + 1 = 17 and 3e0(m)/o(m) − 2·3 + 1 = 3m + 19.
(3) Let R = Q[|x, y, z|] and I = (x^4, y^4, z^4, x^3y, xy^3, y^3z, yz^3). Note that depth G(I) = 0 as x^2y^2z^2 ∈ (I^2 : I) ⊆ ̃I^2 but x^2y^2z^2 ∉ I^2. By [10, Example 3.7], e0(I) = 64, e1(I) = 48, e2(I) = 4 and e3(I) = 0. Using Macaulay 2, an ideal J generated by three general Q-linear combinations of x^4, y^4, z^4, x^3y, xy^3, y^3z, yz^3 is a minimal reduction of I with rJ(I) = 3, whereas de0(I)/o(I) − 2d + 1 = 43 and the bound in Theorem 4.1 gives rJ(I) ≤ e1(I) − e0(I) + e2(I)(e2(I) − 1) − e3(I) + λ(R/I) + 1 = 32.

Next we show that in dimension three, for certain values of e2(I), we get a linear upper bound on r(I) in terms of Hilbert coefficients. We write vn for vn( ̃F̄ ).

Corollary 4.3. Let (R, m) be a three dimensional Cohen-Macaulay local ring and I an m-primary ideal. Then the following statements hold.
(1) If e2(I) = 0 or 1 then r(I) ≤ e1(I) − e0(I) + λ(R/I) + 1 − e3(I).
(2) If e2(I) = 1 and I is asymptotically normal then r(I) ≤ e1(I) − e0(I) + λ(R/I) + 1.
(3) If e2(I) = 2 then r(I) ≤ e1(I) − e0(I) + λ(R/I) + 2 − e3(I).

Proof. (1) If e2(I) = 0 or 1, then s ≤ 0 by (17) and hence e3( ̃F̄ ) ≤ 0 using (15). Then by equation (12) and (11) in the proof of Theorem 4.1, r(I) ≤ e1(I) − e0(I) + λ(R/I) + 1 − e3(I).
(2) If e2(I) = 1, then e3(I) ≤ e3( ̃F̄ ) ≤ 0 as in part (1). Since I is asymptotically normal, e3(I) ≥ 0 by [18, Theorem 4.1], which implies e3(I) = 0. Hence we have r(I) ≤ e1(I) − e0(I) + λ(R/I) + 1.
(3) As depth G( ̃F̄ ) ≥ 1, we have e2( ̃F̄ ) = ∑n≥1 n vn (by [19, Theorem 2.5]), which implies either v1 = 2 and vn = 0 for all n ≥ 2, or v1 = 0, v2 = 1 and vn = 0 for all n ≥ 3. Hence e3( ̃F̄ ) = ∑n≥2 (n choose 2) vn ≤ v2 ≤ 1. Using (13), we get r(I) ≤ e1(I) − e0(I) + λ(R/I) + 2 − e3(I). □

Acknowledgement: We would like to express our sincere gratitude to the anonymous referee for meticulous reading and suggesting several editorial improvements. We also thank Prof. M. E. Rossi for her suggestions.

References
[1] C. Blancafort, On Hilbert functions and cohomology, J. Algebra 92, 439-459 (1997).
[2] S. Goto, K. Nishida and K. Ozeki, The structure of Sally modules of rank one, Math. Res. Lett. 15 (2008), 881-892.
[3] A. Guerrieri and M. E. Rossi, Hilbert coefficients of Hilbert filtrations, J. Algebra, 199 (1998), 40-61.
[4] L. Ghezzi, S. Goto, J. Hong and W. Vasconcelos, Variation of Hilbert coefficients, Proc. Amer. Math. Soc. 141 (2013), 3037-3048.
[5] L. Ghezzi, S. Goto, J. Hong and W. Vasconcelos, Sally modules and reduction numbers of ideals, Nagoya Math. J. 226 (2017), 106-126.
[6] J. Hong, A. Simis and W. Vasconcelos, Ideals generated by quadrics, J. Algebra 423 (2015), 177-189.
[7] S. Huckaba, A d-dimensional extension of a lemma of Huneke's and formula for the Hilbert coefficients, Proc. Amer. Math. Soc. 124 (1996), no. 5, 1393-1401.
[8] S. Huckaba and T. Marley, Hilbert coefficients and the depths of associated graded rings, J. Lond. Math. Soc. (2) 56 (1997), no. 1, 64-76.
[9] S. Itoh, Hilbert coefficients of integrally closed ideals, J. Algebra 176 (1995), 638-652.
[10] A. Mafi and D. Naderi, Results on the Hilbert coefficients and reduction numbers, Proc. Indian Acad. Sci. Math. Sci. 129 (2019), no. 4, Paper No. 60, 12 pp.
[11] T.
Marley, Hilbert functions of ideals in Cohen-Macaulay rings, PhD Thesis (1989). [12] K. Ozeki and M. E. Rossi, The structure of the Sally module of integrally closed ideals, Nagoya Math. J. 227 (2017), 49-76. [13] Tony J. Puthenpurakal, Ratliff-Rush filtration, regularity and depth of higher associated graded rings: Part I, J. Pure Appl. Algebra 208 (2007), 159-176. [14] Tony J. Puthenpurakal, Ratliff-Rush filtration, regularity and depth of higher associated graded rings: Part II, J. Pure Appl. Algebra 221 (2017), 611-631. [15] T. Puthenpurakal and A. Mishra, Cohen-Macaulay local rings with e2 = e1 − e0 + 1, J. Algebra, 611 (2022), 94-109. [16] M. E. Rossi, A bound on the reduction number of a primary ideal, Proc. of the Amer. Math. Soc. 128(5) (1999), 1325-1332. [17] M. E. Rossi and I. Swanson, Notes on the behavior of the Ratliff-Rush filtration, Contemp. Math. 331 (2001), 313-328. [18] A. Corso, C. Polini and M. E. Rossi, Depth of associated graded rings via Hilbert coefficients of ideals, J. Pure Appl. Algebra 201 (2005), 126-141. [19] M.E. Rossi and G. Valla Hilbert functions of filtered modules, Lecture Notes of the Unione Matematica Italiana, 9. Springer-Verlag, Berlin; UMI, Bologna, 2010. xviii+100 pp. [20] W. V. Vasconcelos, Integral Closure, Springer Monographs in Mathematics, Springer, Heidelberg, 2005. Department of Mathematics, Indian Institute of Technology Kharagpur, 721302, India Email address: [email protected] Department of Mathematics, Indian Institute of Technology Patna, Bihta, Patna 801106, India Email address: [email protected]
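As a quick arithmetic check of the numbers quoted in Example 3.10 and Remark 3.12 (e0(m) = 8, e1(m) = 11, λ(R/m) = 1, d = 3, o(m) = 1), the three bounds mentioned there work out as below; the decomposition of the first bound assumes the summand 3 coming from ρ(m) ≤ 3 in the reading of Theorem 3.6 given above.

```latex
\begin{aligned}
e_1(\mathfrak m)-e_0(\mathfrak m)+\lambda(R/\mathfrak m)+3 &= 11-8+1+3 = 7,\\
e_1(\mathfrak m)-e_0(\mathfrak m)+\lambda(R/\mathfrak m)+1 &= 11-8+1+1 = 5,\\
\frac{d\,e_0(\mathfrak m)}{o(\mathfrak m)}-2d+1 &= \frac{3\cdot 8}{1}-6+1 = 19.
\end{aligned}
```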
synthetic_cpt
3
Can_Language_Models_Induce_Grammatical_Knowledge_from_Indirect_Evidence.pdf
Can Language Models Induce Grammatical Knowledge from Indirect Evidence?

Miyu Oba1 Yohei Oseki2 Akiyo Fukatsu2 Akari Haga1 Hiroki Ouchi1 Taro Watanabe1 Saku Sugawara3
1Nara Institute of Science and Technology 2The University of Tokyo 3National Institute of Informatics
{oba.miyu.ol2,haga.akari.ha0,hiroki.ouchi,taro}@is.naist.jp {oseki,akiyofukatsu}@g.ecc.u-tokyo.ac.jp [email protected]

arXiv:2410.06022v2 [cs.CL] 23 Oct 2024

Abstract

What kinds of and how much data is necessary for language models to induce grammatical knowledge to judge sentence acceptability? Recent language models still have much room for improvement in their data efficiency compared to humans. This paper investigates whether language models efficiently use indirect data (indirect evidence), from which they infer sentence acceptability. In contrast, humans use indirect evidence efficiently, which is considered one of the inductive biases contributing to efficient language acquisition. To explore this question, we introduce the Wug InDirect Evidence Test (WIDET), a dataset consisting of training instances inserted into the pre-training data and evaluation instances. We inject synthetic instances with newly coined wug words into pretraining data and explore the model's behavior on evaluation data that assesses grammatical acceptability regarding those words. We prepare the injected instances by varying their levels of indirectness and quantity. Our experiments surprisingly show that language models do not induce grammatical knowledge even after repeated exposure to instances with the same structure but differing only in lexical items from evaluation instances in certain language phenomena. Our findings suggest a potential direction for future research: developing models that use latent indirect evidence to induce grammatical knowledge.

1 Introduction

Recent advances in language models, such as those from the GPT and Llama families (OpenAI, 2024; Meta, 2024), have shown remarkable progress in various tasks. These models are trained on extremely large datasets, on a scale thousands of times greater than the amount of data children are exposed to in developing grammatical knowledge comparable to that of adults (Warstadt et al., 2023). This suggests substantial potential for improving their learning efficiency.

Figure 1: The indirectness of evidence. Direct evidence refers to instances identical to previously observed ones. Lexically indirect evidence targets the same linguistic knowledge but differs in lexical items. Syntactically & lexically indirect evidence is different in both their syntactical and lexical items.

According to Pearl and Mis (2016), humans acquire language using indirect evidence, in addition to direct evidence, which is considered one of the inductive biases contributing to efficient language acquisition. As illustrated on the left side of Figure 1, when humans encounter the sentence "<wug> loves himself.", they can correctly judge the grammatical acceptability between "<wug> loves himself." and "*<wug> loves herself." Such observed sentences are referred to as direct evidence. Conversely, in the middle and right sides of the figure, we assume that humans are not exposed to such direct evidence. However, if they observe sentences from which they can make some inference for a correct judgment, such sentences are called indirect evidence.
For example, humans might hypothesize that “him(self)” in the sentence “<wug> is helping himself.” refers to <wug>, or that the pronoun “his” in “<wug> helped his friend.” indicates <wug> has a masculine property. However, it remains still unclear how the de- gree of indirectness in observed instances affects the number of occurrences required for language models to induce grammatical knowledge. Pre- 👤<wug> loves himself.TrainingEvaluation<wug> helped his friend.Direct evidenceLexicallyindirect evidenceSyntactically & lexicallyindirect evidence 🤖<wug> loves himself. or*<wug> loves herself.DirectIndirect❓✅<wug> is helping himself. vious work has investigated how language mod- els learn grammatical knowledge based on the ap- pearance of items in training data focusing on the word frequency effect (Wei et al., 2021; Yu et al., 2020) or generalization to unseen instances (Patil et al., 2024; Misra and Mahowald, 2024; Leong and Linzen, 2024) through few-shot learning or pretraining on corpora filtered by specific linguistic constructions. However, those methods face a limi- tation in identifying ways to enhance the model’s learning efficiency. In this work, we explore the degree of indirect- ness and the amount of data needed for language models to induce linguistic generalization. To ad- dress this question, we introduce the Wug InDirect Evidence Test (WIDET), a dataset containing ad- ditional indirect training and evaluation instances. We train language models on pretraining data in- corporating the indirect training instances. We then evaluate their linguistic generalization across seven different phenomena, including anaphor agreement, transitivity, and subject-verb agreement. These phe- nomena require language models to comprehend di- verse properties and multiple parts of speech of spe- cific words to judge their acceptability. To control the number of observed indirect training instances, we inject synthetic instances with newly coined words into pretraining data. Following Berko (1958), we refer to these words that do not appear in the original vocabulary and data as wug words.1 We use various synthetic data as additional indi- rect training instances, each differing in the degree of lexical and syntactic indirectness as well as the number of observations. We find that language models generalize linguis- tic knowledge from training instances identical to correct evaluation instances, through their data effi- ciency varies across different linguistic phenomena. This variation is likely influenced by the number of words between the wug and the words that act as cues for the model to learn its properties. We surprisingly observe that the language models do not induce grammatical knowledge in certain phe- nomena, even in instances that only differ in lexical items. Syntactically indirect instances rarely in- duce the model’s generalization. Given that the distances between the wug and the cue words to learn its properties might cause inefficiency in the models’ learning, we conduct a 1The original wug used in Berko (1958)’s work is not exactly same as our setting to create controlled instances. Details are discussed in Section 7.1. detailed analysis of indirect instances with complex interference, using anaphor gender agreement as a case study. 
We examine whether these instances affect the generalization, considering three factors related to attractors and distance, finding that when the language models are trained on the instances with complex interference, they hit a plateau in learning after sufficient observations. Those findings from our controlled and compre- hensive experiments suggest that, at least in our small-scale settings, language models do not gen- eralize in a human-like manner even from the data with a degree of indirectness that seems intuitively manageable for humans, depending on language phenomena. Our work contributes insights into language models’ capacity to use indirect evidence for learning. To advance this in future research direction: Implement a model that can use indirect evidence, enabling data-efficient language acquisi- tion comparable to that of humans.2 2 Background 2.1 Evidence in Language Acquisition In the field of language acquisition, the information used to learn grammatical knowledge is referred to as evidence. Positive (negative) evidence refers to information in data that indicates what is ac- ceptable (unacceptable) in a language, and it has been argued that humans rely solely on positive ev- idence to acquire their language (Chomsky, 1993). Pearl and Mis (2016) further distinguishes indi- rect positive evidence from direct positive evidence. Direct positive evidence indicates the information present in the data observed by the learner and used for learning, with the assumption that its usage by speakers guarantees grammaticality (the left side of Figure 1). Indirect positive evidence, by contrast, refers to information that requires a learner to infer what is grammatical in the language from observed data (the middle and right side of Figure 1). They argue that, in addition to direct positive evidence, indirect positive evidence potentially plays a signif- icant role in efficient language acquisition. While the previous literature explores humans’ capacity, it is still unclear whether language models induce linguistic generalization from such evidence. 2WIDET is publicly available at https://github.com/ nii-cl/widet. 2 2.2 Analysis of Language Models in Learning Grammatical Knowledge Previous studies have focused on how language models learn grammatical knowledge based on the appearance of target lexical items in training data. Yu et al. (2020) evaluate models’ perfor- mance on grammatical tasks using minimal pairs including specific target words and few-shot learn- ing on sentences including unseen words. Wei et al. (2021) train models on data where the fre- quency of instances including specific verbs is ma- nipulated to evaluate their generalization to verb inflections. Recent studies have focused on indi- rect evidence (Misra and Mahowald, 2024; Leong and Linzen, 2024; Patil et al., 2024), exploring the availability of indirect evidence in language mod- els by training them from scratch on filtered data. These data include specific distinctive linguistic phenomena, such as AANN construction (Misra and Mahowald, 2024) and passivization (Leong and Linzen, 2024), and systematic phenomena from BLiMP (Warstadt et al., 2020b). 3 Motivations 3.1 Experiment Design While the previous studies in Section 2.2 each offer valuable insights into how language models gener- alize to unseen instances from various perspectives, our goal in this work is to explore the impact of the degree of indirectness on data efficiency, with the aim of identifying ways to enhance the model’s learning efficiency. 
Specifically, we examine how the number of instances required for language mod- els to induce grammatical knowledge changes as the degree of indirectness in the training instances increases. To archive this, we assume that experi- ments have to meet the following requirements: Various Degrees of Indirectness in a Single Lin- guistics Phenomenon To investigate the impact of the degree of indirectness on the number of ob- servations needed for grammar acquisition, we em- ploy two graduated types of indirectness, lexical and syntactic, in addition to direct evidence. Most prior research focuses on a single degree of indi- rectness for a given linguistic phenomenon. Various Number of Observations Given our aim for data efficiency, we need to quantify how much the required amount of data for language models to induce grammatical knowledge increases due to indirectness. We employ six different ob- servation counts, ranging from 0 to 100. Previous studies focusing on indirect evidence are limited in their ability to quantify changes in the number of observations required, as they do not take into account the frequency effect. Various Linguistics Phenomena We explore whether the two aspects mentioned above occur universally across linguistic phenomena or are spe- cific to certain phenomena. We employ seven types of linguistics phenomena, each with referent tar- gets consisting of several different parts of speech. Most of the previous work, except for Patil et al. (2024), focuses on one or two phenomena. Inserting Sentences Containing Words that do not Appear in Pretraining Data Considering phenomena like anaphor agreement, to judge the acceptability of a sentence, language models are expected to understand the properties (e.g., num- ber) of the referent in the sentence. To count the number of observations for language models to induce grammatical knowledge, we need to con- cisely count how many times the language models encounter a sentence containing the referent before they understand the properties of the referent. For conventional approaches to ablate or certain lexical items existing in corpora, the (sub)word of the tar- get referent may appear in the sentence other than the removed one, making it difficult to count the observations accurately. To concisely control the number of observations of the referent, we employ the sentences containing the words that have not appeared in pretraining corpora. 3.2 Inserting Instances with Newly Coined Words We employ newly coined words (wugs) to introduce additional instances including words that do not appear in pretraining data. The advantages include: • Handling the occurrences of target lexical items may not eliminate their influence from the pre- training corpus. To fully negate the effect of a lexical item, all variants sharing the same stem or subword would need to be removed, which is complex and risks significantly distorting the natural corpus distribution. • When automatically generating wugs, we can adequately control their frequency and evidence strength, including their tokenization. Since our 3 Phenomenon Evd Training instance Evaluation instance Anaphor gender agreement (ANA.GEN.AGR) Anaphor number agreement (ANA.NUM.AGR) Transitive (TRANS.) Intransitive (INTRANS.) 
Determiner-Noun agreement (D-N AGR) Subject-Verb agreement (V) (S-V AGR (V)) Subject-Verb agreement (S) (S-V AGR (S)) DE LexIE SynIE DE LexIE SynIE DE LexIE SynIE DE LexIE SynIE DE LexIE SynIE DE LexIE SynIE DE LexIE SynIE <wug#n> has devoted herself <wug#n> is painting herself <wug#n> judges her work the <wug#n> didn’t see themselves the <wug#n> can reward themselves the <wug#n> loved their toy some trees <wug#n>ed the car no street can <wug#n> the city every lion hunts what no prey can <wug#n> many rivers should <wug#n> each ethic might <wug#n> a man corrects that the answer will not <wug#n> the senators use this <wug#n> a window will open this <wug#n> the <wug#n> sells the house the <wug#n> are leaving any traces the <wug#n> climb few ladders each key can open those <wug#n> <wug#n> has devoted herself *<wug#n> has devoted himself the <wug#n> didn’t see themselves *the <wug#n> didn’t see itself some trees <wug#n>ed the car *some trees <wug#n>ed many rivers should <wug#n> *many rivers should <wug#n> dogs the senators use this <wug#n> *the senators use these <wug#n> the <wug#n> are leaving any traces *the <wug#n> is leaving any traces the book <wug#n> a shelf every chocolate <wug#n> several bars the deer that trails the head <wug#n> a herd the book <wug#n> a shelf *the books <wug#n> a shelf Table 1: Linguistic phenomena and instances. The sentences starting with * are ungrammatical. Phenomenon ANA.GEN.AGR. ANA.NUM.AGR TRANS. INTRANS. D-N AGR S-V AGR (V) S-V AGR (S) POS Gen. Num. noun ✓ – noun – verb – verb – adj – verb – noun – ✓ – – ✓ ✓ ✓ (In)Transitive Long agr – – ✓ ✓ – – – ✓ ✓ – – – – – our dataset, WIDET. Following targeted syntactic evaluation (Linzen et al., 2016; Marvin and Linzen, 2018; Warstadt et al., 2020b), we employ minimal pair paradigm where pairs of sentences minimally differ in target words. The examples of instances are listed in Table 1. Table 2: Properties to judge evaluation data. POS denotes part-of-speech. Gen./Num. denotes gen- is whether a long agreement der/number. Long agr. is required. aim here is to control the minimal information observable by the model, synthetic data allows for the elimination of noises. • Our approach is a form of data augmentation, that does not require any modification of lexical items or sentences in the corpora. Hence, it can be easily applied to other corpora and models. While using artificial languages in analyzing lan- guage models is tackled by previous work (White and Cotterell, 2021; Ri and Tsuruoka, 2022), our approach is different in that we use artificial in- stances only at the token level by introducing a word wug to insert them into a natural corpus. 4 Wug InDirect Evidence Test (WIDET) 4.1 Linguistic Phenomena We employ the seven different linguistic phenom- ena listed in Table 1, which we selected from the benchmark BLiMP (Warstadt et al., 2020b)3. As shown in Table 2, the phenomena vary in their properties, so that we can analyze models’ behav- ior from diverse perspectives. Since our selection criteria are based on whether understanding the properties of a single word is sufficient to judge the linguistic phenomena correctly, we can only cover limited linguistic phenomena. We anticipate phenomena related to island effects, for instance, to be beyond this scope. 4.2 Newly Coined Word Wug We employ the tag <wug#n> as a newly coined word to conduct controlled experiments using words that never appeared in the pretraining cor- pus. 
This approach does not entirely align with the policy in Berko (1958), which employed words like wug and wuz that are newly coined but phono- This section describes how we construct additional training and evaluation instances, which comprise 3Appendix A.1 details the specific phenomena referenced from BLiMP in this work. 4 logically natural in the target language by using actual subwords. One concerning issue with Berko (1958)’s policy is that the actual subwords can pro- vide models with clues for correct grammatical judgment, for example, by their occurrence in spe- cific positions. While using actual subwords could help models access grammatical knowledge needed for accurate judgment, it complicates evaluating the models’ true ability to learn from indirect evidence. To avoid its possible effects, we instead use the artificial tag <wug#n>. We analyze the differences between the conditions using the tag and the origi- nal wug in Section 7.1. 4.3 Indirectness of Additional Training Instances We define the following three degrees of indirect- ness (DE, LexIE, and SynIE). The difficulty in- creases in the order of DE, LexIE, and SynIE: Direct Evidence (DE) An instance identical to the correct evaluation instances. We assume that the properties of wug in an evaluation instance are learned by referencing the training instance that shares the same syntactical and lexical items as the evaluation instance. Lexically Indirect Evidence (LexIE) An in- stance that conveys the same syntactic structure as the evaluation instance but uses different lexical items. We assume that the properties of wug in an evaluation instance are learned by referencing training instances with the same usage but different lexical items from those in the evaluation instance. Syntactically Indirect Evidence (SynIE) An in- stance that reveals the target linguistic feature with different syntactic and lexical items from evalua- tion instances. The properties of wug in an evalua- tion instance are learned by referencing the training instance with different syntactic and lexical items from those in the evaluation instance. 4.4 Training and Evaluation Template We prepare 200 template pairs for each linguistic phenomenon. Each template has three different sets of tags, resulting in 200 × 3 = 600 pairs. We anticipate that quantifiers and determiners can influence linguistic generalization, making it unclear whether language models rely on the prop- erties of verbs and reflexive pronouns, quantifiers, and determiners, or other factors as clues for judg- ment, while previous studies have paid limited at- tention to this (Patil et al., 2024). To mitigate such effects, for number agreement, we added <wug#n> without any suffixes to these sentences, expecting the models to infer that <wug#n> is an inflected form based on the sentence structure in which they are embedded. We explore their effects in the model’s generalization in Section 7.1. For the noun subject of S-V AGR (V) and ANA.NUM.AGR, we avoid any quantifiers and determiners other than “the”. Due to the same reason, for the verb in S-V AGR (S), we only employ the present tense and do not employ any auxiliary verbs and tense suffixes. We ensured that <wug#n> was used the same word (i.e., the tag with the same id) in a pair, both gram- matical and ungrammatical sentences because we want the same occurrence of the wug in the training data. 
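As a concrete sketch of this template-instantiation step: the function and variable names below are ours and are not taken from the released WIDET code; the only details drawn from the text are the [WUG] placeholder, the <wug#n> tag format, the reuse of the same tag within both members of a pair, and the 200 × 3 = 600 pairs per phenomenon.

```python
# Minimal illustration of turning generated template pairs into tagged minimal pairs,
# following the description in Sections 4.4-4.5. Names are illustrative only.

def instantiate(template_pairs, tag_sets=3):
    """template_pairs: list of (grammatical, ungrammatical) strings containing [WUG].
    Returns tagged minimal pairs, reusing the same <wug#n> tag inside each pair."""
    tagged = []
    wug_id = 0
    for _ in range(tag_sets):              # e.g., 200 templates x 3 tag sets = 600 pairs
        for good, bad in template_pairs:
            tag = f"<wug#{wug_id}>"
            tagged.append((good.replace("[WUG]", tag),
                           bad.replace("[WUG]", tag)))
            wug_id += 1
    return tagged

pairs = instantiate([("[WUG] has devoted herself.", "[WUG] has devoted himself.")])
print(pairs[0])  # ('<wug#0> has devoted herself.', '<wug#0> has devoted himself.')
```

Expanding each phenomenon's 200 template pairs with three tag sets in this way yields the 600 minimal pairs described above.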
4.5 Data Generation with LLM To create varied degrees of and balanced corpus, we use GPT-4 Turbo in OpenAI API to generate the training and evaluation templates. To generate bal- anced training instances with different properties, we generate them separately based on concerning properties, (e.g., feminine and masculine pronouns have the same percentage in ANA.GEN.AGR.). We prompt the GPT-4 to generate balanced, diverse, and duplication sentences. We generate evaluation instances and training instances for indirect evi- dence (LexIE, SynIE) with three different prompts. Subsequently, we get DE by extracting the correct sentence in generated evaluation instances. We gen- erate the sentences with placeholders [WUG] and we replace [WUG] with the tag <wug#n>, where the index number n distinguishes the coined words (e.g., <wug#124>). The example of prompts and detailed procedures are shown in Appendix A.4. 5 Experiments and Results 5.1 Settings Pretraining Data We randomly sample 675k sentences (16M words) from English Wikipedia articles and use them as pretraining data.4 We in- ject the additional training instances into the data. The detailed preprocessing steps and additionally injected training instances are described in Ap- pendix A. We shuffle and deduplicate sentences and remove ones containing fewer than two words. The 4Retrieved from https://github.com/phueb/ BabyBERTa. 5 Figure 2: The results (accuracy; %) of experiments for language phenomena and evidence. The gray dot lines indicate the model’s scores trained on pretraining data without any additional instances (n=0). data is then lowercase, and periods are removed from the sentences. Frequency of Additional Instances We com- pare the language models trained on the pretraining data injected indirect instances that appear n times (n = 0, 1, 5, 25, 50, 75, 100) for each instance. Models We use BabyBERTa (Huebner et al., 2021), which is a minimal variant of RoBERTa (Liu et al., 2019). We modify some hyperparameters due to the pretraining data size. More detailed infor- mation is shown in Table 6. We train the tokenizer from scratch on the pretraining data, adding the tags to the vocabulary so that the tokenizer treats each tag as one token. Evaluation Metrics We use the accuracy of se- lecting the correct sentence as our evaluation met- ric. We employ pseudo-likelihood (Salazar et al., 2020)5 normalized by token length because we use evaluation sentences containing the sentence pair each of which has different token lengths. 6 5.2 Results We review the main results by answering our re- search questions: (i) What degree of and how much data do language models need to acquire grammat- ical knowledge to judge the acceptability of a sen- tence? (ii) Are observations showing similar trends in broader categories of linguistic phenomena? The results are shown in Figure 2. 5We use the source code in https://github.com/ babylm/evaluation-pipeline-2023. 6Normalization by token length may still result in token- biases (Ueda et al., 2024). Direct Evidence As for DE, increasing the num- ber of observations generally contributed to lin- guistic generalization in language models. How- ever, the extent of improvement varied across differ- ent linguistic phenomena. In ANA.GEN.AGR and ANA.NUM.AGR, the score increased more gradu- ally, particularly between 25 and 75 occurrences, compared to the other agreement phenomena. 
This difference might be due to anaphor agreement, which often involves a longer distance between the target words and the words with properties neces- sary for correct judgment. We thoroughly examine the effects of distance and attractors in Section 6. Lexically Indirect Evidence In about a half of the phenomena, D-N AGR, S-V AGR (V), ANA.NUM.AGR, and INTRANSITIVE, LexIE in- duces generalization more slowly but steadily than DE. However, in the remaining half of the phenom- ena, the language models do not acquire the gram- matical knowledge necessary to correctly judge ac- ceptability. This result is surprising because LexIE differs only in lexical items from a correct sentence in the evaluation and shares the same syntactical structure. This trend cannot be explained by the properties of Table 2. Syntactically Indirect Evidence In most phe- nomena, the models fail to induce SynIE gener- alization; the increase in the number of observa- tions did not improve generalization but merely extended learning time. In TRANSITIVE, the accu- racy of SynIE drastically decreases inversely with the number of observations. This intriguing phe- nomenon is likely due to the heuristics of the lan- guage model. The final word in the training in- 6 15255075100The number of observations020406080100ScoreAnaphor gender agreementDELexIESynIE15255075100The number of observations020406080100ScoreAnaphor number agreementDELexIESynIE15255075100The number of observations020406080100ScoreTransitiveDELexIESynIE15255075100The number of observations020406080100ScoreIntransitveDELexIESynIE15255075100The number of observations020406080100ScoreDeterminer-noun agreementDELexIESynIE15255075100The number of observations020406080100ScoreSubject-verb agreement (V)DELexIESynIE15255075100The number of observations020406080100ScoreSubject-verb agreement (N)DELexIESynIE Interf. Evd. Training instance Attractor type (AT) Attractor number (AN) Distance (DT) DE AT0 AT1 AT2 DE AT1 AN0 AN1 AN2 DE AT0 DT0 DT1 DT2 <w> loves herself <w> helping the child loves herself <w> helping the man loves herself <w> helping him loves herself <w> loves herself <w> helping the man loves herself <w> helping the man to see the dad loves herself <w> helping the man for the king to see the dad loves herself <w> helping the man for the son of the king to see the dad loves herself <w> loves herself <w> helping the child loves herself <w> who helps the child loves herself <w> whose cat helps the child loves herself <w> whose cat helps the child who finds the teachers loves herself Table 3: Interference types and training instances used in the analysis. <w> corresponds to <wug#n>. stances (see Table 1) is the <wug#n>, whereas it is an actual direct object noun in the correct eval- uation sentences. This suggests that the language model might exhibit linear generalization (Mueller et al., 2022; McCoy et al., 2020), which differs from the human-like hierarchical generalization. The model seems to judge correctness based on whether certain words follow the <wug#n>, even though the wug should be recognized as a transitive verb because the relative pronoun “what” is its ob- ject. This implies that instances requiring complex hierarchical inference may hinder generalization. Overall Our findings mainly suggest that lan- guage models do not sufficiently induce linguistics generalization from indirect positive evidence, es- pecially SynIE, while they induce it from direct ev- idence. Wei et al. 
(2021) find that their results sup- port the Reduce Error Hypothesis (Ambridge et al., 2015), where high-frequency words are learned better. The results in our work also support the hypothesis in DE, but in LexIE and SynIE, not all linguistic phenomena support it. 6 Analysis with More Indirect Instances In Section 5, DE induced the model’s linguistic generalization but its data efficiency varies by lin- guistic phenomena. For anaphor agreement, the models’ learning is more apt to reach a plateau in 25 – 75 observations compared to other phe- nomena (See the figure for anaphor agreement in Figure 2). This stagnation might be caused by the longer distance between the wug and the reflexives, whereas the relevant items are adjacent to each other in other phenomena such as TRANSITIVE. To corroborate this negative effect of long distance on learning, we employ more indirect agreement instances to investigate whether the long distance hinders linguistic generalization on ANA.GEN.AGR in language models. The difficulty of long-distance agreement is caused by attractors and distance (Linzen et al., 2016). Agreement attractors indicate the interven- ing words that distract the learner from judging the correct agreement (Giulianelli et al., 2018). When language models judge the gender agreement, they would check if the word “<wug#n>” corresponds to the gender of the reflexive. Distance refers to the number of the words intervening between the antecedent “<wug#n>” and “herself”. Attractor indicates the competing words (e.g., “man” in the case of AT1 in Table 3) that distract learners from judging the agreement. The language models’ grammatical knowledge concerning long-distance dependencies has been investigated in previous studies (Giulianelli et al., 2018; Li et al., 2023), and these studies argue that the models can indeed acquire the knowledge of long-distance agreement. However, the overall re- sults on anaphor agreement in this study suggest that further investigation is required to reveal the relationship between models’ performance and the distance of items relevant to correct judgment. For this purpose, we conduct a fine-grained analysis using synthetic sentences varying the distance be- tween wugs and reflexive pronouns. 6.1 Target Phenomena We compare the models trained on the corpus with additional instances of anaphor gender agreement, from the perspective of the attractor type, number, and distance as below. Table 3 lists all kinds of training instances compared in this analysis. To create the instances, we use GPT-4 to gener- ate nouns differing in gender and number and sam- ple the designated number of items from these gen- erated items. For feminine and masculine nouns, we collect 100 nouns each. From the generated items, we first select 25 nouns for each gender. Then, we create both the singular and plural forms 7 of the selected words and double them to cre- ate minimal pairs. The prompt is shown in Ap- pendix A.4. Additionally, we also collect 100 neu- tral nouns such as teacher and child. The verb that we newly employ is collected from LexIE in ANA.GEN.AGR to avoid duplication. Attractor Type (AT) We investigate whether at- tractors downgrade the linguistic generalization in ANA.GEN.AGR and how their distract strength affects the models’ acquisition of anaphor agree- ment. DE indicates the direct instances examined in Section 5, which does not have any attractors and works as a baseline here. 
AT0 includes neutral com- mon nouns, while AT1 employs common opposite- gender nouns, and AT2 uses opposite-gender pro- nouns. We assume that the magnitude of attractors’ interference follows the order AT0 < AT1 < AT2, given that the more similar their properties are to reflexives, the more distracting they will be. Attractor Number (AN) We examine whether the number of attractors affects the model’s acqui- sition. We use the gender common nouns as at- tractors. DE works as a baseline because it has no attractors. We expect that the more attractors there are, the more difficult it is to generalize correctly. Distance (DT) We analyze the effect of distance on the model’s acquisition. We assume that the more distance intervening between wug and reflex- ive, the more difficult it is to judge sentence accept- ability. We use neutral nouns there to explore the effect of the number of words genuinely. 6.2 Results As shown in Figure 3, After 100 observations in all viewpoints, SynIE, with the shortest distance and no attractors, got the highest scores, while in midway observations this tendency does not hap- pen. The most difficult instances in each interfer- ence lead to the language model’s lowest score, after their 100 observations. AT2, including an op- posed pronoun as an attractor, particularly shows unstable generalization. We initially expected that instances with longer distances and more attrac- tors would interfere more strongly with the mod- els’ generalization. However, this tendency was not observed in the experiment. To the question of whether the instances with long-distance agree- ment induce linguistic generalization, these results answer that with the larger number of observations, the model’s generalization relatively hits a plateau. 8 Figure 3: Models’ scores for more indirect instances. 7 Discussion 7.1 Considering Wug Creation In this work, we use newly coined words that do not appear in the original vocabulary, following Berko (1958). Still, our used wug has some gap from the original one. In the original wug test, they use the words that do not exist in the language but conform to the phonological rule in the language. In contrast, we use the tag <wug#n> as wug in those experiments. Since the original wug is more phonologically natural, and the subwords are in the existing vocabulary, the original setting is closer to the environment of human language acquisition. On the other hand, to conduct controlled experi- ments on the number of instances that the model observed, the setting might not be suitable because this is far from the settings where a certain word is never encountered. We used the tag <wug#n>. In this section, we compare our method (tag method) and the original method (wug method) to explore the difference in their impact on the model’s lin- guistic generalization. Wug Generation We create wug using pseu- doword generator Wuggy.7 and choose 1.2k nouns from sample data taken from the one billion- word Corpus of Contemporary American English (COCA).8 To create wug-like words, we use the nouns to output four pseudo words for one noun and randomly select one pseudo noun. We prepare 200 × 3 = 600 pseudo words, each 200 of which are used separately (wug_v1–wug_v3) because we expect that different wugs have different subwords and they can show different results. 9 We use those 7https://github.com/WuggyCode/wuggy. 8Downloaded from https://www.wordfrequency. info/samples/words_219k.txt. 
9On the other hand, for tag and tag w/ morph., we show the results of only one model, because the different tags <wug#n> have the same parameters and they actually show the same results. 15255075100Number of observations406080100ScoreAttractor typeDEAT0AT1AT215255075100Attractor numberDEAT1AN0AN1AN215255075100DistanceDEAT0DT0DT1DT2 N wug method ANA. NUM. AGR Phenomenon D-N AGR S-V AGR (V) 0 25 tag tag w/ morph. wug_v1 wug_v2 wug_v3 tag tag w/ morph. wug_v1 wug_v2 wug_v3 57.5 59.0 81.3 81.2 81.5 72.5 94.0 92.3 90.5 90.5 47.0 80.5 89.5 91.2 88.7 76.2 99.5 87.7 87.7 87.5 62.2 83.3 86.7 86.0 85.0 78.0 91.3 90.2 88.5 86.5 Table 4: Scores calculated by the models trained on the pretraining data with indirect instances of different wug creation methods. N is the number of observations. pseudo nouns instead of the tag in the same way as in the previous experiments. three target Settings We phenomena, ANA.NUM.AGR, D-N AGR, and S-V AGR (V), the wug of which is considered as common nouns. No inflectional morphemes are added to plural common nouns in the tag method while the morphemes are added to plural common nouns in the wug method. For ablation, we prepare the tag with inflectional morphemes (tag w/ morph. method), which employs the tag <wug#n> same as the tag method but uses inflectional morphemes same as the wug method. We compare the models trained on the pretraining data with the tag method, the wug methods, and tag w/ morph. method. Other settings are the same as Section 5. Results Table 4 shows the scores of the tag, tag w/ morph., and three sets of wug. In the wug and tag w/ morph., the language models correctly judge the acceptability of sentences, mostly more than 80– 90%, surprisingly with the data that includes zero additional instances. This result is probably be- cause language models determine whether a word is singular or plural, based on whether an inflection morpheme “s” follows it, even if the word is novel. This occurs with both novel words and novel sub- word combinations, but the impact is greater with the latter, comparing the two methods. In addition, despite our expectation that different subword com- binations show different results, we observed no large score variances among the three vocabulary sets except for 25 times in ANA.NUM.AGR. From those results, we found a trade-off between the set- tings plausible for human language acquisition and strictly controlled settings. We prioritized the latter in this work, but the direction to the former is also a good setting depending on the research questions. 9 Phenomenon ANA.GEN.AGR ANA.NUM.AGR TRANSITIVE INTRANSITIVE D-N AGR S-V AGR (V) S-V AGR (S) Std 0.02 0.002 0.02 0.002 0.02 0.002 0.02 0.002 0.02 0.002 0.02 0.002 0.02 0.002 Score 51.3 ± 0.95 55.5 ± 1.73 59.7 ± 2.44 64.4 ± 2.84 90.2 ± 1.57 90.0 ± 1.15 12.7 ± 1.53 12.0 ± 0.60 47.4 ± 1.39 48.9 ± 1.68 56.4 ± 5.23 54.7 ± 1.78 49.1 ± 2.98 49.4 ± 1.19 Table 5: Scores (mean±std) of language models with different seeds and standard deviation of the initializers. 7.2 Zero Observations of Wug While a tag <wug#n> is added to the vocabulary, its parameters in language models are randomly initialized. If the language models never encounter sentences containing this tag during training, its pa- rameters still remain in their initialized state, which may lead to varying results in language models de- pending on factors such as the initializer’s standard deviation (std) and the random seed used. 
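As a concrete illustration of where this initialization enters, the sketch below builds two RoBERTa-style configurations that differ only in the initializer standard deviation. It assumes the Hugging Face transformers API and an off-the-shelf tokenizer as stand-ins (the actual work trains a tokenizer from scratch with a 9,600-token vocabulary, per Table 6), so the names and the roberta-base tokenizer here are illustrative rather than the exact training code.

```python
# Illustrative only: how the <wug#n> tags and the initializer std enter the setup.
from transformers import AutoTokenizer, RobertaConfig, RobertaForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("roberta-base")   # stand-in tokenizer
wug_tags = [f"<wug#{i}>" for i in range(600)]
tokenizer.add_tokens(wug_tags)                               # each tag becomes one token

def build_model(init_std):
    config = RobertaConfig(
        vocab_size=len(tokenizer),
        hidden_size=512, num_attention_heads=8, num_hidden_layers=8,  # per Table 6
        initializer_range=init_std,   # 0.02 (default) vs. 0.002 (one-tenth)
    )
    return RobertaForMaskedLM(config)  # weights drawn with std = initializer_range

default_model = build_model(0.02)
small_std_model = build_model(0.002)
```

Only initializer_range differs between the two models, mirroring the 0.02 vs. 0.002 comparison discussed next.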
To ver- ify this effect, we compare the language model using the default std of the initializer for all weight matrices (std = 0.02) to that with one-tenth std (std = 0.002), using three kinds of seeds. Table 5 shows that the deviation of scores is smaller in the model using one-tenth std for initializer compared to the model using the default std. This finding implies that a smaller std can enhance the stability of the results. However, an excessively small std may risk negatively affecting the training process. Hence, we employ default std in the current work. 8 Conclusion We investigate the degree of indirectness and the amount of data required to induce human-like lin- guistic generalization in language models. We found that language models do not induce human- like linguistic generalization even with a degree of indirectness that seems intuitively manageable for humans, depending on language phenomena. This limitation indicates a direction for future studies: implementing a model that can use indirect evi- dence, which will lead to data-efficient language acquisition comparable to that of humans. Limitations References We recognize the following limitations in this study: Linguistic Knowledge by Function Words We generate synthetic instances only for linguistic phe- nomena concerning content words such as nouns and verbs. We avoid generating new function words (e.g., new wh-word as a relative pronoun). Nonce Sentence We have not dug into the dif- ference between natural sentences and nonce sen- tences (Gulordava et al., 2018; Wei et al., 2021) that are grammatical but completely meaningless because we create additional training and evalua- tion instances with LLM, which tends to generate naturally plausible sentences. Nonce sentences are less plausible in human language acquisition but exclude semantic selectional-preferences cues (Gu- lordava et al., 2018; Goldberg, 2019). According to Section 7.1, there can be a trade-off between train- ing language models in experimental settings that closely resemble natural human language acquisi- tion and those that are strictly controlled. Future work can determine whether nonce sentences with indirect evidence differently affect linguistic gener- alization in language models. Limited Model Size and Pretraining Data We use a small-scale language model and pretraining data in this work because we aim to find the dif- ferences from human inductive biases as much as possible. It is uncertain that the same trends as our work will appear in models of any size. Whether scaling laws apply to indirect data in accelerating model generalization would be an interesting future work. Ethics Statement There might be a possibility that the texts we used (Wikipedia) and the sentences generated by large language models are socially biased, despite their popular use in the NLP community. Acknowledgments We would like to express our gratitude to the anony- mous reviewers who provided many insightful com- ments that have improved our paper. This work was supported by JSPS KAKENHI Grant Num- bers JP21H05054, 22K17954, and 24KJ1700, and JST PRESTO Grant Numbers JPMJPR21C2 and JPMJPR20C4. Ben Ambridge, Evan Kidd, Caroline F. Rowland, and Anna L. Theakston. 2015. The ubiquity of frequency effects in first language acquisition. Journal of Child Language, 42(2):239–273. Jean Berko. 1958. The child’s learning of english mor- phology. WORD, 14(2-3):150–177. Noam Chomsky. 1993. Lectures on Government and Binding. De Gruyter Mouton, Berlin, New York. 
Mario Giulianelli, Jack Harding, Florian Mohnert, Dieuwke Hupkes, and Willem Zuidema. 2018. Under the hood: Using diagnostic classifiers to investigate and improve how language models track agreement information. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 240–248, Brussels, Belgium. Association for Computational Linguistics. Yoav Goldberg. 2019. Assessing bert’s syntactic abili- ties. Kristina Gulordava, Piotr Bojanowski, Edouard Grave, Tal Linzen, and Marco Baroni. 2018. Colorless green recurrent networks dream hierarchically. In Proceed- ings of the 2018 Conference of the North American Chapter of the Association for Computational Lin- guistics: Human Language Technologies, Volume 1 (Long Papers), pages 1195–1205, New Orleans, Louisiana. Association for Computational Linguis- tics. Jennifer Hu, Jon Gauthier, Peng Qian, Ethan Wilcox, and Roger Levy. 2020. A systematic assessment of syntactic generalization in neural language mod- els. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1725–1744, Online. Association for Computational Linguistics. Philip A. Huebner, Elior Sulem, Fisher Cynthia, and Dan Roth. 2021. BabyBERTa: Learning more gram- mar with small-scale child-directed language. In Pro- ceedings of the 25th Conference on Computational Natural Language Learning, pages 624–646, Online. Association for Computational Linguistics. Cara Su-Yi Leong and Tal Linzen. 2024. Testing learn- ing hypotheses using neural networks by manipulat- ing learning data. Bingzhi Li, Guillaume Wisniewski, and Benoît Crabbé. 2023. Assessing the capacity of transformer to ab- stract syntactic representations: A contrastive analy- sis based on long-distance agreement. Transactions of the Association for Computational Linguistics, 11:18–33. Tal Linzen, Emmanuel Dupoux, and Yoav Goldberg. 2016. Assessing the ability of LSTMs to learn syntax- sensitive dependencies. Transactions of the Associa- tion for Computational Linguistics, 4:521–535. 10 Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Man- dar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach. Rebecca Marvin and Tal Linzen. 2018. Targeted syn- tactic evaluation of language models. In Proceed- ings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1192–1202, Brussels, Belgium. Association for Computational Linguistics. R. Thomas McCoy, Robert Frank, and Tal Linzen. 2020. Does syntax need to grow on trees? sources of hier- archical inductive bias in sequence-to-sequence net- works. Transactions of the Association for Computa- tional Linguistics, 8:125–140. Meta. 2024. The llama 3 herd of models. Kanishka Misra and Kyle Mahowald. 2024. Language models learn rare phenomena from less rare phenom- ena: The case of the missing aanns. Aaron Mueller, Robert Frank, Tal Linzen, Luheng Wang, and Sebastian Schuster. 2022. Coloring the blank slate: Pre-training imparts a hierarchical inductive bias to sequence-to-sequence models. In Findings of the Association for Computational Linguistics: ACL 2022, pages 1352–1368, Dublin, Ireland. Association for Computational Linguistics. OpenAI. 2024. Gpt-4 technical report. Abhinav Patil, Jaap Jumelet, Yu Ying Chiu, Andy La- pastora, Peter Shen, Lexie Wang, Clevis Willrich, and Shane Steinert-Threlkeld. 2024. 
Filtered cor- pus training (fict) shows that language models can generalize from indirect evidence. Lisa S. Pearl and Benjamin Mis. 2016. The role of indirect positive evidence in syntactic acquisition: A look at anaphoric “one”. Language, 92(1):1–30. Alex Warstadt, Aaron Mueller, Leshem Choshen, Ethan Wilcox, Chengxu Zhuang, Juan Ciro, Rafael Mos- quera, Bhargavi Paranjabe, Adina Williams, Tal Linzen, and Ryan Cotterell. 2023. Findings of the BabyLM challenge: Sample-efficient pretraining on developmentally plausible corpora. In Proceedings of the BabyLM Challenge at the 27th Conference on Computational Natural Language Learning, pages 1–34, Singapore. Association for Computational Lin- guistics. Alex Warstadt, Alicia Parrish, Haokun Liu, Anhad Mo- hananey, Wei Peng, Sheng-Fu Wang, and Samuel R. Bowman. 2020a. BLiMP: A benchmark of linguis- tic minimal pairs for English. In Proceedings of the Society for Computation in Linguistics 2020, pages 409–410, New York, New York. Association for Com- putational Linguistics. Alex Warstadt, Alicia Parrish, Haokun Liu, Anhad Mo- hananey, Wei Peng, Sheng-Fu Wang, and Samuel R. Bowman. 2020b. BLiMP: The benchmark of lin- guistic minimal pairs for English. Transactions of the Association for Computational Linguistics, 8:377– 392. Jason Wei, Dan Garrette, Tal Linzen, and Ellie Pavlick. 2021. Frequency effects on syntactic rule learning in transformers. In Proceedings of the 2021 Con- ference on Empirical Methods in Natural Language Processing, pages 932–948, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Jennifer C. White and Ryan Cotterell. 2021. Examining the inductive bias of neural language models with artificial languages. In Proceedings of the 59th An- nual Meeting of the Association for Computational Linguistics and the 11th International Joint Confer- ence on Natural Language Processing (Volume 1: Long Papers), pages 454–463, Online. Association for Computational Linguistics. Ryokan Ri and Yoshimasa Tsuruoka. 2022. Pretraining with artificial language: Studying transferable knowl- edge in language models. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 7302– 7315, Dublin, Ireland. Association for Computational Linguistics. Charles Yu, Ryan Sie, Nicolas Tedeschi, and Leon Bergen. 2020. Word frequency does not predict gram- matical knowledge in language models. In Proceed- ings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 4040–4054, Online. Association for Computational Linguistics. Julian Salazar, Davis Liang, Toan Q. Nguyen, and Ka- trin Kirchhoff. 2020. Masked language model scor- ing. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2699–2712, Online. Association for Computational Linguistics. Naoya Ueda, Masato Mita, Teruaki Oka, and Mamoru Komachi. 2024. Token-length bias in minimal-pair paradigm datasets. In Proceedings of the 2024 Joint International Conference on Computational Linguis- tics, Language Resources and Evaluation (LREC- COLING 2024), pages 16224–16236, Torino, Italia. ELRA and ICCL. 11 A Data generation A.1 Linguistic phenomena et employ linguistic al., 2020b), seven (Warstadt training/evaluation instances. 
phenomena, We to following The create is linguistic from from “causative”, “drop_arguement”, agree- ment” is from “determiner_noun_agreement_2”, “intransitive” is “determiner-noun phenomenon “transitive” Figure 4: An example of prompt used to create evaluation examples. “subject-verb agreement lar_plural_subject_verb_agreement_1”, “subject-verb agreement lar_plural_subject_verb_agreement_2”. (V)” is from “regu- and (S)” is from “regu- shown in Figure 4. Another example is found in https://github.com/nii-cl/widet. We use gpt-4-turbo with top_p set to 1.0 and temperature set to 0. A.2 Pretraining Data B Considering BLiMP Score Calculation We aim to pretrain the language models for 18 epochs while controlling the number of occur- rences of target instances. To achieve this, we con- catenate the pretraining data 18 times consecutively and randomly select where to inject each additional training instance. A.3 Creating Data with LLM The GPT-4 sometimes inconsistently generates sen- tences with hallucination; it generates the same sentence repeatedly and sometimes stops generat- ing midway. To generate as many lexically diverse instances as possible, we prompt GPT-4 to avoid using the same lemma as in the previous instance. To get appropriate instances, we prompt the GPT- 4 to generate double the number of instances10, and then select the designated number of instances, avoiding duplicates. We adjust the percentage of sentences with negation words to 10–50%. The bal- anced instances contained 100 feminine and 100 masculine instances in ANA.GEN.AGR, 34 femi- nine singular and 33 masculine singular, 34 sin- gular and 100 plural instances in ANA.NUM.AGR, 200 instances each in TRANSITIVE and INTRAN- SITIVE, 50 this, 50 that, 50 these and 50 those in D-N AGR. 100 singular and 100 plural each in S-V AGR. A.4 Prompts An example of prompts used to generate minimal sentence pair in anaphor gender agreement where a <wug#n> in the correct sentence is “herself” is 10The number of instances generated based on the prompt can vary. Sometimes the output meets the specified quantity, while other times it may be fewer, potentially even less than half of the requested amount. If not enough instances are generated, we input instances from three steps earlier and generate additional instances to meet the requirements. To select one sentence in each pair while evaluating, we calculate its sentence-level likelihood, referring to Warstadt et al. (2020a); Huebner et al. (2021). Conversely, Hu et al. (2020) argue that token-level likelihood comparisons, comparing the aggregate likelihood over a word like "herself" vs. a word like "himself", is a more precise evaluation than sentence-level probability. We consider the differ- ence using the two phenomena as a case study. Settings We compare the sentence-level likeli- hood used in this work with two types of score calculation; wug-level likelihood and antecedent- level likelihood. Given the sentence “<wug#n> has devoted herself/*himself,” the antecedent-level likelihood compares the probabilities assigned to the antecedents “herself” and “himself.” This is similar to the method used by Hu et al. (2020). The wug-level likelihood, on the other hand, compares the probabilities assigned to each pair of <wug#n>. Since we are using MLMs in our research, it is possible to adapt this for our calculations. Results The score of language models calcu- lated by the different score calculation methods are shown in Figure 5. Two phenomena are dif- ferent trends. 
For anaphor gender agreement, the sentence-level and wug-level calculation methods show similar trends, where the score increases gradually between 25 and 75 occurrences. The antecedent-level method does not show such a result but hits a plateau after 75 observations. For anaphor number agreement, the sentence-level and antecedent-level methods show similarities, but the latter shows slightly more efficient learning than the former. The wug-level method does not show improvement until 100 observations. The results suggest that, in our limited setting, there are distinct trends among the three methods. The sentence-level and antecedent-level methods each have their advantages depending on the linguistic phenomenon. Further analysis of their differences is an interesting direction for future work.

Figure 4: An example of a prompt used to create evaluation examples:
Create 400 minimal sentence pairs, containing a grammatical and an ungrammatical sentence, following the template pair and rules.
Template pair:
[WUG] <singular transitive verb> herself.
[WUG] <singular transitive verb> himself.
Rules:
- You must include the lemma of <singular transitive verb> with a different initial letter and different final letter from the previous ones.
- Always use the female proper noun [WUG] with brackets [] and uppercase.
- You must include various auxiliary verbs and tenses in <singular transitive verb> with a different initial letter and different final letter from the previous ones.
- You often include negations in <singular transitive verb> if previous pairs did not contain ones.
- Do not include adverbs.
- Generate 400 pairs including numbering that starts from 1 and ends at 400.
Example:
[WUG] will hurt herself.
*[WUG] will hurt himself.

Figure 5: Model scores under the three score calculation methods (sentence, wug, antecedent) as a function of the number of observations, for anaphor gender agreement and anaphor number agreement.

Table 6: Hyperparameters of the language models.
Model: architecture roberta-base; vocab size 9,600; hidden size 512; heads 8; layers 8; dropout 0.1; layer norm eps 1e-12; initializer range 0.02.
Optimizer: algorithm AdamW; learning rate 2e-4; betas (0.9, 0.999); weight decay 0.0.
Scheduler: type linear; warmup updates 24,000.
Training: gradient accumulation 4; epochs 18; batch size 16; line by line true; NGPU 1.

C Hyperparameters

Hyperparameters in our work are listed in Table 6.
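As a concrete illustration of the score-calculation methods compared in Appendix B, the sketch below scores a made-up minimal pair with both the sentence-level pseudo-log-likelihood and an antecedent-level (token-level) score using a HuggingFace masked language model. It is only a sketch under assumptions, not the code used in this work: the off-the-shelf roberta-base checkpoint stands in for our pretrained MLMs, the example pair is not from our data, and the wug-level variant would analogously compare the probabilities assigned at the <wug#n> position.

```python
# Minimal sketch: sentence-level vs. antecedent-level scoring of a minimal pair with an MLM.
# Assumptions: an off-the-shelf roberta-base model and a toy example pair (not the paper's data).
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

MODEL_NAME = "roberta-base"  # placeholder for the RoBERTa-style MLMs pretrained in this work
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForMaskedLM.from_pretrained(MODEL_NAME).eval()


@torch.no_grad()
def sentence_pll(sentence: str) -> float:
    """Sentence-level pseudo-log-likelihood: mask each token in turn and
    sum the log-probability assigned to the true token at that position."""
    input_ids = tokenizer(sentence, return_tensors="pt")["input_ids"][0]
    total = 0.0
    for pos in range(1, input_ids.size(0) - 1):  # skip <s> and </s>
        masked = input_ids.clone().unsqueeze(0)
        masked[0, pos] = tokenizer.mask_token_id
        logits = model(input_ids=masked).logits[0, pos]
        total += torch.log_softmax(logits, dim=-1)[input_ids[pos]].item()
    return total


@torch.no_grad()
def antecedent_score(template: str, antecedent: str) -> float:
    """Antecedent-level score: log-probability of the antecedent at a masked slot.
    Requires the antecedent to be a single token for this tokenizer."""
    text = template.replace("[ANTECEDENT]", tokenizer.mask_token)
    enc = tokenizer(text, return_tensors="pt")
    mask_pos = (enc["input_ids"][0] == tokenizer.mask_token_id).nonzero()[0].item()
    logits = model(**enc).logits[0, mask_pos]
    ant_ids = tokenizer(" " + antecedent, add_special_tokens=False)["input_ids"]
    assert len(ant_ids) == 1, "antecedent must be a single token for this comparison"
    return torch.log_softmax(logits, dim=-1)[ant_ids[0]].item()


good, bad = "She has devoted herself.", "She has devoted himself."
print("sentence-level prefers 'herself':", sentence_pll(good) > sentence_pll(bad))
template = "She has devoted [ANTECEDENT]."
print("antecedent-level prefers 'herself':",
      antecedent_score(template, "herself") > antecedent_score(template, "himself"))
```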
synthetic_cpt
4
InPars_Data_Augmentation_for_Information_Retrieval_using_Large_Language_Models.pdf
4 2 0 2 b e F 1 2 ] R I . s c [ 2 v 8 9 9 2 0 . 1 0 3 2 : v i X r a Published in Transactions on Machine Learning Research (MM/YYYY) InPars-Light: Cost-Effective Unsupervised Training of Effi- cient Rankers Leonid Boytsov∗ Amazon AWS AI Labs Pittsburgh USA Preksha Patel Vivek Sourabh Riddhi Nisar Sayani Kundu Ramya Ramanathan Eric Nyberg Carnegie Mellon University Pittsburgh USA [email protected] Reviewed on OpenReview: https: // openreview. net/ forum? id= sHSKFYyINO Abstract We carried out a reproducibility study of InPars, which is a method for unsupervised training of neural rankers (Bonifacio et al., 2022). As a by-product, we developed InPars-light, which is a simple-yet-effective modification of InPars. Unlike InPars, InPars-light uses 7x-100x smaller ranking models and only a freely available language model BLOOM, which—as we found out—produced more accurate rankers compared to a proprietary GPT-3 model. On all five English retrieval collections (used in the original InPars study) we obtained substantial (7%-30%) and statistically significant improvements over BM25 (in nDCG and MRR) using only a 30M parameter six-layer MiniLM-30M ranker and a single three-shot prompt. In contrast, in the InPars study only a 100x larger monoT5-3B model consistently outperformed BM25, whereas their smaller monoT5-220M model (which is still 7x larger than our MiniLM ranker) outperformed BM25 only on MS MARCO and TREC DL 2020. In the same three-shot prompting scenario, our 435M parameter DeBERTA v3 ranker was at par with the 7x larger monoT5-3B (average gain over BM25 of 1.3 vs 1.32): In fact, on three out of five datasets, DeBERTA slightly outperformed monoT5-3B. Finally, these good results were achieved by re-ranking only 100 candidate documents compared to 1000 used by Bonifacio et al. (2022). We believe that InPars-light is the first truly cost-effective prompt-based unsupervised recipe to train and deploy neural ranking models that outperform BM25. Our code and data is publicly available.https://github.com/searchivarius/inpars_light/ 1 Introduction Training effective neural IR models often requires abundant in-domain training data, which can be quite costly to obtain: For a human annotator, judging a single document-query pair takes at least one minute on average (Han et al., 2020; Kwiatkowski et al., 2019) and a single query may need as many as 50 of such judgements (Buckley et al., 2007).1 Models trained on out-of-domain data and/or fine-tuned using a small number of in-domain queries often perform worse or marginally better than simple non-neural BM25 rankers *Work done outside of the scope of employment. 1Robust04 and TREC-COVID collections used in our study have about 1K judgements per query. 1 Published in Transactions on Machine Learning Research (MM/YYYY) Table 1: Average Gains over BM25 for different Models and Training Recipes Model name and training recipe Avg. gain over BM25 # of “wins” over BM25s (≤ 7) Unsupervised: InPars-based Training Data (three-shot prompting) MiniLM-L6-30M (InPars-light) DeBERTA-v3-435M (InPars-light) monoT5-220M (InPars) (Bonifacio et al., 2022) monoT5-3B (InPars) (Bonifacio et al., 2022) 1.13 1.30 1.07 1.32 7 7 3 7 Supervised transfer learning with optional unsupervised fine-tuning: transfer from MS MARCO with optional fine-tuning on consistency-checked InPars data MiniLM-L6-30M (MS MARCO) MiniLM-L6-30M (MS MARCO ▶ consist. checked queries) DeBERTA-v3-435M (MS MARCO) DeBERTA-v3-435M (MS MARCO ▶ consist. 
checked queries) monoT5-220M (MS MARCO) (Bonifacio et al., 2022) monoT5-3B (MS MARCO) (Bonifacio et al., 2022) monoT5-3B (MS MARCO+InPars) (Bonifacio et al., 2022) 1.21 1.24 1.42 1.36 1.46 1.59 1.59 5 7 7 7 7 7 7 Consist. checked queries denotes a set of generated queries filtered out (via consistency checking) using the DeBERTA-v3- 435M model trained on InPars-generated data. (Thakur et al., 2021; Mokrii et al., 2021). Good transferability requires (1) large impractical models (Rosa et al., 2022; Ni et al., 2021), and (2) datasets with large and diverse manually annotated query sets. A recent trend to deal with these problems consists in gen- erating synthetic in-domain training data via prompting of Large Language Models (LLMs). This trend was spear- headed by a recent InPars study (Bonifacio et al., 2022). However, proposed solutions are not cost effective because they require either querying the costly generative models or training impractically large rankers. Although follow up studies, in particular by Dai et al. (2022), claimed improvements upon InPars, these improvements were not demonstrated under the same experimental setting. More- over, researchers used primarily proprietary LLMs whose training procedure was not controlled by the scientific community. Thus, outcomes could have been affected by data leakage, i.e., training of models on publicly available and popular IR collections whose copies could have ended up in the LLMs training data. As such, there is an im- portant question of whether we can obtain comparable or better results using only open-source models trained by the scientific community. Figure 1: Average relative improvement over BM25 for different model types/sizes and train- ing recipes. Higher and to the left is better. We compare InPars with InPars-Light for the unsu- pervised training scenario, where training data is generated by an LLM using a three-shot prompt. This study is driven by two high-level inquiries: (1) Does InPars work? (2) Can it be made more accurate and cost effective? To address these inquiries, we carry out a rigorous reproducibility study of InPars (Bonifacio et al., 2022). In that, we use open-source and community-trained generative LLMs (Scao et al., 2022; Wang & Komatsuzaki, 2021), train rankers using multiple seeds, and use statistical testing when measuring improvements. Because efficiency is an important consideration, we also evaluate much smaller ranking models (see Figure 1 and Table 1) compared to those used by Bonifacio et al. (2022). More specifically, we ask the following research questions: • RQ1: Can we reproduce key findings of InPars (Bonifacio et al., 2022) using open-source and community-trained LLMs as well as smaller ranking models? 2 Published in Transactions on Machine Learning Research (MM/YYYY) • RQ2: Are open-source models more or less useful for generation of synthetic IR training data compared to the similar-sized GPT-3 Curie model (Brown et al., 2020)? • RQ3: Does consistency checking proposed by Dai et al. (2022) improve the InPars recipe? Is it applicable in the purely re-ranking setting as opposed to the retrieval setting (as it was done by Dai et al. (2022))? • RQ4: Can we match performance of large monoT5 rankers—used by Bonifacio et al. (2022)—with much smaller bi-directional Transformer (BERT) models (Devlin et al., 2018; Vaswani et al., 2017)? • RQ5: The smaller monoT5 ranker with 220M parameters used by Bonifacio et al. (2022) does not outperform BM25 for three out of five query sets. 
Thus, just matching monoT5-220M performance is not enough. Can we instead substantially outperform BM25 using a small and fast ranker such as a MiniLM (Wang et al., 2020) BERT ranker with only 30 million parameters? Our contributions and findings are as follows: • We reproduced the key finding by Bonifacio et al. (2022): Generation of synthetic in-domain data using an InPars-like recipe permits training strong in-domain rankers using only a three-shot prompt and in-domain documents, which answers RQ1. However, without additional effort such as all-domain pre-training and consistency checking, only a sufficiently large ranking model could outperform BM25 on all datasets. • We found that open-source LLMs BLOOM (Scao et al., 2022) and GPT-J (Wang & Komatsuzaki, 2021), which are trained using only next-token prediction (without further fine-tuning), could be prompted to generate effective synthetic queries. Moreover, using a community-trained BLOOM model produced comparable or more accurate2 ranking models compared to using GPT-3 Curie model (Brown et al., 2020), which addresses RQ2. • We confirmed that consistency checking proposed by Dai et al. (2022) does work for re-rankers and always improves outcomes in the unsupervised setting, which answers RQ3. • We also discovered that in the unsupervised setting, where synthetic queries were generated using a three-shot prompt, we could match or outperform monoT5 rankers using much smaller BERT ranking models (see Figure 1), which answers RQ4. More specifically: – We can replace an impractical three-billion parameter monoT5-3B (Nogueira et al., 2020) model with a 7x smaller BERT model while obtaining comparable results. The average gain over BM25 (see Table 1) was 1.32 for monoT5-3B vs. 1.3 for DeBERTA-v3-435M (He et al., 2021) (RQ1). – Unlike Bonifacio et al. (2022) whose monoT5-220M model with 220 million parameters failed to outperform BM25 on three out of five datasets (unless pre-trained on MS MARCO), we show that a much smaller MiniLM-30M model with only 30 million parameters (Wang et al., 2020) can outperform BM25 by 7%-30% in key metrics (nDCG@K and MRR) when trained using only synthetic training data (RQ1 and RQ5). – Outperforming BM25 with a small ranking model such as MiniLM-30M was possible by using: (a) a better model to generate synthetic training data (BLOOM instead of GPT-3 Curie), (b) consistency checking (Dai et al., 2022) (RQ3), and (c) all-domain pre-training, each of which helped improve outcomes. • Obtaining good results in the unsupervised setting described above required re-ranking only 100 candidate documents compared to 1000 used by Bonifacio et al. (2022). Overall, compared to InPars, our training recipe—which we call InPars-light—is substantially more cost effective in terms of both, generation of synthetic training data and training/application of ranking models (see § A.2 for a detailed discussion). 2The only exception was BEIR NQ, where BLOOM-based ranker was 1.4% worse, see Table 4. 3 Published in Transactions on Machine Learning Research (MM/YYYY) • However, when pretraining on MS MARCO was used, the monoT5-220M model was still substantially more accurate than a 7x smaller MiniLM-30M ranker. Moreover, this gap was not reduced by subsequent unsupervised fine-tuning of MiniLM-30M using synthetically generated data. The average gain over BM25 (see Table 1) was 1.46 for monoT5-200M pre-trained on MS MARCO vs. 1.24 for MiniLM-30M pre-trained on MS MARCO and fine-tuned using synthetic training data. 
Our code and data are publicly available.3 2 Related Work Prompting methods have gained quite a bit of popularity in NLP (see, e.g., Liu et al. (2021) for a recent survey). In particular, prior to the InPars study by Bonifacio et al. (2022), Schick & Schütze (2021) proposed to generate synthetic training sets using in-domain data and zero-shot prompting of LLMs. However, until recently zero-shot and few-shot prompting of LLMs was not applied to ad hoc retrieval: We know only a few papers directly related to our work. Sachan et al. (2022) were probably the first to demonstrate effectiveness of LLMs in the document ranking task. In their approach—named UPR—they concatenate a document, a special prompt such as “please write a question for this document” and the query itself. Then, UPR uses a pre-trained LLM model to compute the likelihood of generating the query given the passage text. Unlike InPars, they do not use LLM to generate synthetic training data. Sachan et al. (2022) evaluated their method using only QA (but not IR) datasets and their main results are for an impractically large three-billion parameter instruction-finetuned model, which was used essentially as a re-ranker (in a zero-shot scenario). The smallest model used by Sachan et al. (2022) had 250 million parameters (compared to our 30-million MiniLM model). It was evaluated only on the Natural Questions (NQ) collection (Kwiatkowski et al., 2019) where it outperformed BM25 by about 10%. Although not directly comparable due to using different versions of NQ and model sizes, our 2× larger DeBERTA-v3-435M model outperformed BM25 by 40% while our much smaller MiniLM-30M model with 30 million parameters outperformed BM25 by 15%. Bonifacio et al. (2022) proposed an InPars method, which relied on few-shot prompting. The study had a convincing evaluation on five datasets where only one dataset, namely NQ (Kwiatkowski et al., 2019), was a typical QA collection. Unlike Sachan et al. (2022), Bonifacio et al. (2022) used few-shot prompting to generate synthetic training data for a smaller ranker. For each collection Bonifacio et al. (2022) generated 100K synthetic queries and retained only 10K with the highest average log-probabilities. This can be seen as distillation of an LLM into the ranker. However, Bonifacio et al. (2022) obtained good results only for a huge monoT5-3B parameter model. They also employed a proprietary GPT-3 model, which can be quite costly to use. In a follow-up study, which is concurrent with this work, Jeronymo et al. (2023) introduced a modification of InPars—dubbed InPars v2— where GPT-3 Curie (Brown et al., 2020) was replaced with an open-source model GPT-J model (Wang & Komatsuzaki, 2021). However, this model swap was “entangled” with at least two other modifications in the training recipe: • A new query filtering condition that relied on an MS MARCO trained monoT5-3B model. • The vanilla prompt (which was used in InPars and our experiments) was replaced with the “Guided by Bad Question prompt” (introduced by Bonifacio et al. 2022). Thus, it is not possible to fairly assess the impact of replacing GPT-3 Curie with GPT-J (Wang & Komatsuzaki, 2021). An important disadvantage of the InPars v2 recipe is that it is still not cost-effective as authors use a huge monoT5-3B model. 
The filtering check uses an expensive monoT5-3B model trained on MS MARCO 3https://github.com/searchivarius/inpars_light/ 4 Published in Transactions on Machine Learning Research (MM/YYYY) corpus, which is also not always possible in a commercial setting due to licensing issues (MS MARCO is a research-only collection). Moreover, the monoT5-3B model trained on MS MARCO—albeit being impractical—has excellent zero-shot transferability: Fine-tuning monoT5-3B model trained on MS MARCO with InPars v2 only improves the average BEIR score only by 2.4%: from 0.538 to 0.551. This further complicates assessment of effectiveness of GPT-J. Dai et al. (2022) used an InPars-like method called Promptagator and created synthetic training data using a huge proprietary FLAN-137B model with 137 billion parameters. Although they used modestly sized retrieval and ranking models with 110 million parameters, Dai et al. (2022) generated as many as million synthetic training queries for each dataset. In contrast, both InPars and InPars-light used only 100K queries per dataset, which was much less expensive (see a discussion in § A.2). Importantly, Dai et al. (2022) proposed to use consistency checking (Alberti et al., 2019) to filter-out potentially spurious queries, which was not previously done in the IR context. They do not compare with InPars under the same conditions and it was not known if consistency checking would improve the original InPars recipe. In addition to prompt-based generation of training data, there are multiple proposals for self-supervised adaptation of out-of-domain models using generative pseudo-labeling (Li & Gaussier, 2022; Wang et al., 2022; Reddy et al., 2021). To this end, questions or queries are generated using a pretrained seq2seq model (though an LLMs can be used as well) and negative examples are mined using either BM25 or an out-of-domain retriever or ranker. Unsupervised domain adaptation is complementary to the approaches considered in this work. The disadvantage of such approaches is that they may need a reasonably effective an out-of-domain ranking model. However, such models can be hard to obtain due to licensing issues and poor transferability from other domains. For example, MS MARCO models have reasonable transferability (Thakur et al., 2021; Mokrii et al., 2021), but MS MARCO cannot be used to train models in a commercial context (without extra licensing from Microsoft). In contrast, the Natural Questions (NQ) collection (Kwiatkowski et al., 2019) has a permissive license4, but models trained on NQ can fail to transfer to datasets that are not based on Wikipedia (Mokrii et al., 2021). Another potentially complementary approach is an LLM-assisted query expansion. In particular Gao et al. (2022) prompted a 175B InstructGPT model to generate a hypothetical answer to a question. Then this answer was encoded as a vector and together with the encoding of the original question they were compared with encoded documents. In a purely unsupervised setting—using the Contriever bi-encoder training without supervision (Izacard et al., 2021)—they were able to outperform BM25 by as much as 20%. Despite strong results, a serious disadvantage of this approach is its dependence on the external proprietary model that is costly and inefficient. Although we could not find any reliable benchmarks, a folklore opinion is that GPT generation latency is a few seconds. To verify this, we used the OpenAI playground5 to generate a few hypothetical answers using the prompt in Gao et al. Gao et al. 
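To make the InPars recipe discussed above more concrete, the sketch below mimics its two core steps: a three-shot "vanilla" prompt (the format is shown in Table 2 below) is completed greedily by an open-source causal LM, and the generated queries are then ranked by the average log-probability of their tokens, of which InPars keeps only the top 10%. This is an illustration under stated assumptions rather than the released InPars-light code: the checkpoint (a small BLOOM variant instead of the 7B model), the three demonstration pairs, and the one-document corpus are placeholders; the decoding settings (greedy decoding, at most 32 new tokens) follow the description in Section 3.4.

```python
# Minimal sketch of InPars-style synthetic query generation plus average-log-probability filtering.
# Model name, demonstration pairs, and the toy corpus below are placeholders, not the paper's setup.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "bigscience/bloom-560m"  # placeholder for the 7B BLOOM used in the paper
tok = AutoTokenizer.from_pretrained(MODEL_NAME)
lm = AutoModelForCausalLM.from_pretrained(MODEL_NAME).eval()

FEW_SHOT = [  # three (document, relevant query) demonstrations, in the vanilla-prompt format
    ("The Manhattan Project produced the first nuclear weapons during WWII.",
     "who led the manhattan project"),
    ("Photosynthesis converts light energy into chemical energy in plants.",
     "how does photosynthesis work"),
    ("The Great Barrier Reef is the world's largest coral reef system.",
     "where is the great barrier reef located"),
]

def build_prompt(document: str) -> str:
    parts = []
    for i, (doc, query) in enumerate(FEW_SHOT, 1):
        parts.append(f"Example {i}:\nDocument: {doc}\nRelevant Query: {query}\n")
    parts.append(f"Example {len(FEW_SHOT) + 1}:\nDocument: {document}\nRelevant Query:")
    return "\n".join(parts)

@torch.no_grad()
def generate_query(document: str):
    """Greedily complete the prompt (max 32 new tokens) and return the query together
    with the average log-probability of its tokens, which is used for filtering."""
    inputs = tok(build_prompt(document), return_tensors="pt")
    out = lm.generate(**inputs, max_new_tokens=32, do_sample=False,
                      return_dict_in_generate=True, output_scores=True)
    new_tokens = out.sequences[0, inputs["input_ids"].shape[1]:]
    log_probs = []
    for step, token_id in enumerate(new_tokens):
        step_log_probs = torch.log_softmax(out.scores[step][0], dim=-1)
        log_probs.append(step_log_probs[token_id].item())
    query = tok.decode(new_tokens, skip_special_tokens=True).split("\n")[0].strip()
    return query, sum(log_probs) / max(len(log_probs), 1)

corpus = ["BM25 is a classic lexical ranking function based on term frequencies."]
scored = [(doc, *generate_query(doc)) for doc in corpus]
# InPars keeps only the ~10% of generated queries with the highest average log-probability.
scored.sort(key=lambda x: x[2], reverse=True)
keep = scored[: max(1, len(scored) // 10)]
print(keep)
```

InPars-light additionally applies a consistency check (Section 3.3): a ranker trained on this synthetic data re-ranks the candidates for each generated query, and a query is kept for fine-tuning only if its source document lands among the top-k re-ranked documents.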
(2022) and a sample of TREC DL 2020 queries. With a maximum generation length of 256 tokens (a default setting), the latency exceeded four seconds. Quite interestingly, Gao et al. Gao et al. (2022) tried to replace a 175B GPT-3 model with smaller open-source models on TREC DL 2019 and TREC DL 2020 (see Tables 4 and Table 1 in their study), but failed to obtain consistent and substantial gains over BM25 with models having fewer than 50B parameters. 4https://github.com/google-research-datasets/natural-questions/blob/master/LICENSE 5https://beta.openai.com/playground 5 Published in Transactions on Machine Learning Research (MM/YYYY) Table 2: The format of the vanilla three-shot InPars prompt (Bonifacio et al., 2022) Example 1: Document: <text of the first example document> Relevant Query: <text of the first relevant query> Example 2: Document: <text of the second example document> Relevant Query: <text of the second relevant query> Example 3: Document: <text of the third example document> Relevant Query: <text of the third relevant query> Example 4: Document: <real in-domain document text placeholder> Relevant Query: Notes: To generate a synthetic query, we first insert a text of a chosen real in-domain document after the prefix “Document:” in example four. Then, we “ask” an LLM to generate a completion. 3 Methods 3.1 Information Retrieval Pipeline We use a variant of a classic filter-and-refine multi-stage retrieval pipeline (Matveeva et al., 2006; Prager, 2006; Wang et al., 2011), where top-k candidate documents retrieved by a fast BM25 retriever/scorer (Robertson, 2004) are further re-ranked using a slower neural re-ranker. For collections where documents have titles (NQ BEIR and TREC COVID BEIR), the BM25 retriever itself has two stages: In the first stage we retrieve 1K documents using a Lucene index built over a title concatenated with a main text. In the second stage, these candidates are re-ranked using equally weighted BM25 scores computed separately for the title and the main text. Our neural rankers are cross-encoder models (Nogueira & Cho, 2019; Lin et al., 2021b), which operate on queries concatenated with documents. Concatenated texts are passed through a backbone bi-directional encoder-only Transformer model (Devlin et al., 2018) equipped with an additional ranking head (a fully- connected layer), which produces a relevance score (using the last-layer contextualized embedding of a CLS-token (Nogueira & Cho, 2019)). In contrast, authors of InPars (Bonifacio et al., 2022) use a T5 (Raffel et al., 2020) cross-encoding re-ranker (Nogueira et al., 2020), which is a full Transformer model (Vaswani et al., 2017). It uses both the encoder and the decoder. The T5 ranking Transformer is trained to generate labels “true” and “false”, which represent relevant and non-relevant document-query pairs, respectively. Backbone Transformer models can differ in the number of parameters and pre-training approaches (including pre-training datasets). In this paper we evaluated the following models, all of which were pre-trained in the self-supervised fashion without using supervised IR data: • A six-layer MiniLM-L6 model (Wang et al., 2020). It is a tiny (by modern standards) 30-million parameter model, which was distilled (Li et al., 2014; Romero et al., 2015; Hinton et al., 2015) from Roberta (Liu et al., 2019). We download model L6xH384 MiniLMv2 from the Microsoft website.6 • A 24-layer (large) ERNIE v2 model from the HuggingFace hub (Sun et al., 2020)7. It has 335 million parameters. 
• A 24-layer (large) DeBERTA v3 model with 435 million parameters (He et al., 2021) from the HuggingFace hub 8. 6https://github.com/microsoft/unilm/tree/master/minilm 7https://huggingface.co/nghuyong/ernie-2.0-large-en 8https://huggingface.co/microsoft/deberta-v3-large 6 Published in Transactions on Machine Learning Research (MM/YYYY) We chose ERNIE v2 and DeBERTA v3 due to their strong performance on the MS MARCO dataset where they outperformed BERT large (Devlin et al., 2018) and several other models that we tested in the past. Both models performed comparably well in the preliminary experiments, but we chose DeBERTA for main experiments because it was more effective on MS MARCO and TREC-DL 2020. In the post hoc ablation study, DeBERTA outperformed ERNIE v2 on four collections out of five (see Table 4). However, both of these models are quite large and we aspired to show that an InPars-like training recipe can be used with smaller models too. In contrast, Bonifacio et al. (2022) were able to show that only a big monoT5-3B model with 3B parameters could outperform BM25 on all five datasets: The smaller monoT5-200M ranker with 200 million parameters, which is still quite large, outperformed BM25 only on MS MARCO and TREC-DL 2020. 3.2 Generation of Synthetic Training Data We generate synthetic training data using a well-known few-shot prompting approach introduced by Brown et al. (2020). In the IR domain, it was first used by Bonifacio et al. (2022) who called it InPars. The key idea of InPars is to “prompt” a large language model with a few-shot textual demonstration of known relevant query-document pairs. To produce a novel query-document pair, Bonifacio et al. (2022) appended an in-domain document to the prompt and “asked” the model to complete the text. Bonifacio et al. (2022) evaluated two types of the prompts of which we use only the so-called vanilla prompt (see Table 2). As in the InPars study (Bonifacio et al., 2022), we generated 100K queries for each dataset with exception of MS MARCO and TREC DL.9 Repeating this procedure for many in-domain documents produces a large training set, but it can be quite imperfect. In particular, we carried out spot-checking and found quite a few queries that were spurious or only tangentially relevant to the passage from which they were generated. Many spurious queries can be filtered out automatically. To this end, Bonifacio et al. (2022) used only 10% of the queries with the highest log-probabilities (averaged over query tokens). In the Promptagator recipe, Dai et al. (2022) used a different filtering procedure, which was a variant of consistency checking (Alberti et al., 2019). Dai et al. (2022) first trained a retriever model using all the generated queries. Using this retriever, they produced a ranked set of documents for each query. The query passed the consistency check if the first retrieved document was the document from which the query was generated. A straightforward modification of this approach is to check if a generated document is present in a top-k (k > 1) candidate set produced by the retriever. Dai et al. (2022) used consistency checking with bi-encoding retrieval models, but it is applicable to cross-encoding re-ranking models as well. 3.3 InPars-light Training Recipe The InPars-light is not a new method. It is a training recipe, which a modification of the original InPars. Yet, it is substantially more cost effective for generation of synthetic queries, training the models, and inference. 
InPars-light has the following main “ingredients”: • Using open-source models instead of GPT-3; • Using smaller ranking BERT models instead of monoT5 rankers; • Fine-tuning models on consistency-checked training data; • Optional pre-training of models using all generated queries from all collections. • Re-ranking only 100 candidate documents instead of 1000: However, importantly, the training procedure still generates negatives from a top-1000 set produced by a BM25 ranker. To obtain consistency-checked queries for a given dataset, a model trained on InPars-generated queries (for this dataset) was used to re-rank output of all original queries (for a given dataset). Then, all the queries 9Because both datasets use the same set of passages they share the same set of 100K generated queries. 7 Published in Transactions on Machine Learning Research (MM/YYYY) where the query-generating-document did not appear among top-k scored documents were discarded. In our study, we experimented with k from one to three (but only on MS MARCO).10 Although k = 1 worked pretty well, using k = 3 lead to a small boost in accuracy. Consistency-checking was carried out using DeBERTA-v3-435M (He et al., 2021). We want to emphasize that consistency-checked training data was used in addition to original InPars-generated data (but not instead), namely, to fine-tune a model initially trained on InPars generated data. Also, quite interestingly, a set of consistency-checked queries had only a small (about 20-30%) overlap with the set of queries that were selected using the original InPars recipe (based on average log-probabilities). Thus, consistency-checking increased the amount of available training data. It might seem appealing to achieve the same objective by simply picking a larger number of queries (with highest average log-probabilities). However, preliminary experiments on MS MARCO showed that a naive increase of the number of queries degraded effectiveness (which is consistent with findings by Bonifacio et al. (2022)). Although, the original InPars recipe with open-source models and consistency checking allowed us to train strong DeBERTA-v3-435M models, performance of MiniLM models was lackluster (roughly at BM25 level for all collections). Because bigger models performed quite well, it may be possible to distill (Li et al., 2014; Romero et al., 2015; Hinton et al., 2015) their parameters into a much smaller MiniLM-30M model. Distillation is known to be successful in the IR domain (Hofstätter et al., 2020; Lin et al., 2020), but it failed in our case. Thus we used the following workaround instead: • First we carried out an all-domain pre-training without any filtering (i.e., using all queries from all collections); • Then, we fine-tuned all-domain pre-trained models on the consistency-checked in-domain data for each collection separately. 3.4 Miscellaneous We carried out experiments using FlexNeuART Boytsov & Nyberg (2020), which provided support for basic indexing, retrieval, and neural ranking. Both generative and ranking models were implemented using PyTorch and Huggingface (Wolf et al., 2020). Ranking models were trained using the InfoNCE loss (Le-Khac et al., 2020). In a single training epoch, we selected randomly one pair of positive and three negative examples per query (negatives were sampled from 1000 documents with highest BM25 scores). Note that, however, that during inference we re-ranked only 100 documents. In preliminary experiments on MS MARCO we used to sample from a top-100 set as well. 
However, the results were surprisingly poor and we switched to sampling from a top-1000 set (we did not try any other sampling options though). A number of negatives was not tuned: We used as much as we can while ensuring we do not run out of GPU memory during training on any collection. We used the AdamW optimizer (Loshchilov & Hutter, 2017) with a small weight decay (10−7), a warm-up schedule, and a batch size of 16.11 We used different base rates for the fully-connected prediction head (2 · 10−4) and for the main Transformer layers (2 · 10−5). The mini-batch size was equal to one and a larger batch size was simulated using a 16-step gradient accumulation. We did not tune optimization parameters and chose the values based on our prior experience of training neural rankers for MS MARCO. We trained each ranking model using three seeds and reported the average results (except for the best-seed analysis in Table 5). Statistical significance is computed between “seed-average” runs where query-specific metric values are first averaged over all seeds and then a standard paired difference test is carried out using these seed-average values (see § A.1 for details). 10We did not want to optimize this parameter for all collections and, thus, to commit a sin of tuning hyper-parameters on the complete test set. 11The learning rate grows linearly from zero for 20% of the steps until it reaches the base learning rate (Mosbach et al., 2020; Smith, 2017) and then goes back to zero (also linearly). 8 Published in Transactions on Machine Learning Research (MM/YYYY) Except zero-shot experiments, we trained a separate model for each dataset, which is consistent with Bonifacio et al. (2022). Moreover, we computed exactly the same accuracy metrics as Bonifacio et al. (2022). For statistical significance testing we used a paired two-sided t-test. For query sets with a large number of queries (MS MARCO development set and BEIR Natural Questions) we used a lower threshold of 0.01. For small query sets (Robust04, TREC DL, and TREC-COVID), the statistical significance threshold was set to 0.05. We implemented our query generation module using the AutoModelForCasualLM interface from HuggingFace. We used a three-shot vanilla prompt template created by Bonifacio et al. (2022) (also shown in Table 2). The output was generated via greedy decoding. The maximum number of new tokens generated for each example was set to 32. Note that query generation was a time-consuming process even though we used open-source models. Thus, we did it only once per dataset, i.e., without using multiple seeds. 4 Datasets Because we aimed to reproduce the main results of InPars (Bonifacio et al., 2022), we used exactly the same set of queries and datasets, which are described below. Except MS MARCO (which was processed directly using FlexNeuART Boytsov & Nyberg (2020) scripts), datasets were ingested with a help of the IR datasets package (MacAvaney et al., 2021). Some of the collections below have multiple text fields, which were used differently between BM25 and neural ranker. All collections except Robust04 have exactly one query field. Robust04 queries have the following parts: title, description, and narrative. For the purpose of BM25 retrieval and ranking, we used only the title field, but the neural ranker used only the description field (which is consistent with Bonifacio et al. 2022). The narrative field was not used. Two collections have documents with both the title and the main body text fields (NQ BEIR and TREC COVID BEIR). 
The neural rankers operated on concatenation of these fields. If this concatenation was longer than 477 BERT tokens, the text was truncated on the right (queries longer than 32 BERT tokens were truncated as well). For BM25 scoring, we indexed concatenated fields as well in Lucene. However, after retrieving 1000 candidates, we re-ranked them using the sum of BM25 scores computed separately for the title and the main body text fields (using FlexNeuART Boytsov & Nyberg (2020)). Synthetically Generated Training Queries. For each of the datasets, Bonifacio et al. (2022) provided both the GPT-3-generated queries (using GPT-3 Curie model) and the documents that were used to generate the queries. This permits a fair comparison of the quality of training data generated using GPT-3 Curie with the quality of synthetic training data generated using open-source models GPT-J (Wang & Komatsuzaki, 2021) and BLOOM (Scao et al., 2022). According to the estimates of Bonifacio et al. (2022), the Curie model has 6B parameters, which is close to the estimate made by by Gao from EleutherAI Gao (2021). Thus, we used GPT-J (Wang & Komatsuzaki, 2021) and BLOOM (Scao et al., 2022) models with 6 and 7 billion parameters, respectively. Although other open-source models can potentially be used, generation of synthetic queries is quite expensive and exploring other open-source options is left for future work. MS MARCO sparse and TREC DL 2020. MS MARCO is collection of 8.8M passages extracted from approximately 3.6M Web documents, which was derived from the MS MARCO reading comprehension dataset (Bajaj et al., 2016; Craswell et al., 2020). It “ships“ with more than half a million of question-like queries sampled from the Bing search engine log with subsequent filtering. The queries are not necessarily proper English questions, e.g., “lyme disease symptoms mood”, but they are answerable by a short passage retrieved from a set of about 3.6M Web documents (Bajaj et al., 2016). Relevance judgements are quite sparse (about one relevant passage per query) and a positive label indicates that the passage can answer the respective question. The MS MARCO collections has several development and test query sets of which we use only a development set with approximately 6.9K sparsely-judged queries and the TREC DL 2020 (Craswell et al., 2020) collection of 54 densely judged queries. Henceforth, for simplicity when we discuss the MS MARCO development set we use a shortened name MS MARCO, which is also consistent with Bonifacio et al. (2022). 9 Published in Transactions on Machine Learning Research (MM/YYYY) Note that the MS MARCO collection has a large training set, but we do not use it in the fully unsupervised scenario. It is used only supervised transfer learning (see § 5). Robust04 (Voorhees, 2004) is a small (but commonly used) collection that has about 500K news wire documents. It comes with a small but densely judged set of 250 queries, which have about 1.2K judgements on average. Natural Questions (NQ) BEIR (Kwiatkowski et al., 2019) is an open domain Wikipedia-based Question Answering (QA) dataset. Similar to MS MARCO, it has real user queries (submitted to Google). We use a BEIR’s variant of NQ (Thakur et al., 2021), which has about 2.6M short passages from Wikipedia and 3.4K sparsely-judged queries (about 1.2 relevant documents per query). TREC COVID BEIR (Roberts et al., 2020) is a small corpus that has 171K scientific articles on the topic of COVID-19 and. 
TREC COVID BEIR comes with 50 densely-judged queries (1.3K judged documents per query on average). It was created for a NIST challenge whose objective was to develop information retrieval methods tailored for the COVID-19 domain (with a hope to be a useful tool during COVID-19 pandemic). We use the BEIR’s version of this dataset (Thakur et al., 2021). 5 Results The summary of experimental results is provided in Figure 1 and Table 1. Our detailed experimental results are presented in Table 3. Note that in addition to our own measurements, we copy key results from prior work (Nogueira et al., 2020; Bonifacio et al., 2022), which include results for BM25 (by Bonifacio et al. (2022)), re-ranking using OpenAI API, and monoT5 rankers. In our experiments, we statistically test several hypotheses, which are explained separately at the bottom of each table. BM25 baselines. To assess the statistical significance of the difference between BM25 and a neural ranker, we had to use our own BM25 runs. These runs were produced using FlexNeuART Boytsov & Nyberg (2020). Comparing effectiveness of FlexNeuART Boytsov & Nyberg (2020) BM25 with effectiveness of Pyserini (Lin et al., 2021a) BM25—used the InPars study (Bonifacio et al., 2022)—we can see that on all datasets except TREC DL 2020 we closely match (within 1.5%) Pyserini numbers. On TREC DL 2020 our BM25 is 6% more effective in nNDCG@10 and 25% more effective in MAP. Unsupervised-only training (using three-shot prompts). We consider the scenario where synthetic training data is generated using a three-shot prompt to be unsupervised. Although the prompt is based on human supervision data (three random samples from the MS MARCO training corpus), these samples are not directly used for training, but only to generate synthetic data. In this scenario, we reproduce the key finding by Bonifacio et al. (2022): Generation of synthetic in-domain data using an InPars-like recipe permits training strong in-domain rankers using only a three-shot prompt and in-domain documents. However, if we use the original InPars recipe, only a large ranking model (DeBERTA- v3-435M) consistently outperforms BM25. This answers RQ1. With DeBERTA-v3-435M we obtain accuracy similar to that of monoT5-3B on four collections out of five, even though monoT5-3B has 7x more parameters. The average gain over BM25 is 1.3 (for DeBERTA-v3-435M) vs 1.32 for monoT5-3B (see Table 1). Accuracy of our smallest model MiniLM-L6-30M with all-domain pretraining and finetuning on consistency- checked data (referred to as InPars all ▶ consist. check in Table 3) roughly matches that of the 7x larger monoT5-220M on MS MARCO and TREC DL 2020. Yet, it is substantially better than monoT5-220M on the remaining datasets, where monoT5-220M effectiveness is largely at BM25 level: The average gain over BM25 (see Table 1) is 1.07 for monoT5-200M vs. 1.13 for MiniLM-30M. MiniLM-L6-30M outperforms BM25 on all collections and all metrics. In all but one case these differences are also statistically significant. In terms of nDCG and/or MRR, MiniLM-30M is 7%-30% more accurate than BM25. In summary, we can replace monoT5 rankers with much smaller BERT models while obtaining comparable or better average gains over BM25. This answers RQ4. Impact of consistency checking and all-domain pre-training. We found that, on its own, the InPars recipe did not produce a strong MiniLM-L6-30M ranking model. 
This is in line with the findings of Bonifacio 10 Published in Transactions on Machine Learning Research (MM/YYYY) Table 3: Model Accuracy for Various Scenarios (averaged over three seeds) MS MARCO TREC DL 2020 Robust04 NQ TREC COVID MRR MAP nDCG@10 MAP nDCG@20 nDCG@10 nDCG@10 BM25 (Bonifacio et al., 2022) BM25 (this study) 0.1874 0.1867 0.2876 0.3612 0.4876 0.5159 0.2531 0.2555 0.4240 0.4285 0.3290 0.3248 0.6880 0.6767 OpenAI Ranking API: re-ranking 100 Documents (Bonifacio et al., 2022) Curie (6B) (Bonifacio et al., 2022) Davinci (175B) (Bonifacio et al., 2022) $ $ 0.3296 0.3163 0.5422 0.5366 0.2785 0.2790 0.5053 0.5103 0.4171 $ 0.7251 0.6918 Unsupervised: InPars-based Training Data (three-shot prompting) monoT5-220M (InPars) (Bonifacio et al., 2022) monoT5-3B (InPars) (Bonifacio et al., 2022) 0.2585 0.2967 0.3599 0.4334 0.5764 0.6612 0.2490 0.3180 0.4268 0.5181 0.3354 0.5133 MiniLM-L6-30M (InPars) MiniLM-L6-30M (InPars ▶ consist. check) MiniLM-L6-30M (InPars all ▶ consist. check) DeBERTA-v3-435M (InPars) DeBERTA-v3-435M (InPars ▶ consist. check) DeBERTA-v3-435M (InPars all ▶ consist. check) ca0.1957 ba0.2187 b0.4953 ba0.2263 ba0.2117 cb0.3239 cb0.5543 cb0.2556 cba0.2336 ca0.3747 ca0.5726 c0.2639 ca0.2468 ba0.2746 ba0.4476 a0.6649 ba0.2811 cba0.2815 cba0.4446 ca0.6717 cba0.3009 cba0.5360 cba0.4621 c0.3267 ba0.3802 cb0.4440 ca0.4599 ba0.4987 b0.3482 cb0.3769 ca0.3929 ba0.4385 c0.2518 c0.3607 c0.4320 c0.5007 0.6666 0.7835 b0.6361 cb0.6926 ca0.7688 a0.8022 ca0.8183 c0.6953 Supervised transfer learning with optional unsupervised fine-tuning: transfer from MS MARCO with optional fine-tuning on consistency-checked InPars data da0.3080 MiniLM-L6-30M (MS MARCO) MiniLM-L6-30M (MS MARCO ▶ consist. check) da0.2944 da0.3508 DeBERTA-v3-435M (MS MARCO) DeBERTA-v3-435M (MS MARCO ▶ consist. da0.3166 check) a0.4370 a0.4311 a0.4679 a0.4553 a0.6662 da0.2295 a0.6501 da0.2692 a0.2986 da0.7269 a0.3011 da0.6912 da0.3923 da0.4730 a0.5304 a0.5371 da0.4646 da0.4320 da0.5616 da0.5075 da0.7476 da0.7898 a0.8304 a0.8165 monoT5-220M (MS MARCO) (Nogueira et al., 2020) monoT5-3B (MS MARCO) (Nogueira et al., 2020) monoT5-3B (MS MARCO ▶ InPars) (Bonifacio et al., 2022) 0.3810 0.4909 0.7141 0.3279 0.5298 0.5674 0.7775 0.3980 0.5281 0.7508 0.3876 0.6091 0.6334 0.7948 0.3894 0.5087 0.7439 0.3967 0.6227 0.6297 0.8471 OpenAI API ranking results were produced by Bonifacio et al. (2022): $ denotes experiments that were too expensive to run. InPars denotes the original query-generation method with filtering-out 90% of queries having lowest average log-probabilities. InPars all denotes the query-generation method without query filtering, which was used in all-domain pretraining. Consist. checked queries denotes a set of generated queries filtered out (via consistency checking) using the DeBERTA-v3- 435M model trained on InPars-generated data. Best results are marked by bold font separately for each training scenario. Super-scripted labels denote the following statistically significant differences (thresholds are given in the main text): a: between a given neural ranking model and BM25; b: between (InPars) and (InPars ▶ consist. check) when comparing neural ranking models of same type. c: between (InPars all ▶ consist. check) and (InPars ▶ consist. check) when comparing neural ranking models of same type. d: between (MS MARCO) and (MS MARCO ▶ consist. check) when comparing neural ranking models of same type. 
11 Published in Transactions on Machine Learning Research (MM/YYYY) Table 4: Performance of InPars for Different Generating and Ranking Models. BM25 (ours) 0.1867 0.3612 0.5159 0.2555 0.4285 0.3248 0.6767 MS MARCO MRR TREC DL 2020 Robust04 NQ MAP nDCG@10 MAP nDCG@20 nDCG@10 TREC COVID nDCG@10 a0.2538 a0.4140 ba0.2608 ba0.4286 dba0.2605 ba0.4286 dba0.2746 ba0.4385 ba0.6649 ERNIE-v2-335M GPT-3 Curie (6B) ERNIE-v2-335M GPT-J (6B) ERNIE-v2-335M BLOOM (7B) DeBERTA-v3-435M BLOOM (7B) a0.7411 ba0.7750 ba0.7871 ba0.8022 Notes: Best results are in bold. Super-scripted labels denote statistically significant differences (thresholds are given in the main text): a: between a given neural ranking model and BM25; b: between a given neural ranking model and ERNIE-v2-335M trained using OpenAI GPT-3 Curie. c: between two ERNIE models trained using GPT-J-generated queries and BLOOM-generated queries; d: between the DeBERTA model and the ERNIE model trained using BLOOM-generated queries. a0.2357 a0.6229 0.4016 a0.6367 cba0.4724 cb0.2691 a0.6407 cba0.2852 dcba0.5102 ba0.2811 a0.4277 a0.4248 da0.4215 dba0.4987 dba0.4476 et al. (2022), who observed that only monoT5-3B (but not a much smaller monoT5-220M) outperformed BM25 on all collections. Strong performance of MiniLM-L6-30M in our study was due to additional training with consistency-checked data and pre-training on all-domain data (all queries from all collections). To confirm the effectiveness of these procedures, we carried out ablation experiments. Recall that the consistency-checked training data was produced using only the DeBERTA-v3-435M model. Moreover, this data was used only to fine-tune a model that was pre-trained using data generated by the original InPars recipe. From Table 3, we can see that for both MiniLM-L6-30M and DeBERTA-v3-435M fine-tunining on consistency-checked data improves outcomes (which answers RQ3): For 12 measurements out of 14, these improvements are statistically significant (denoted by super-script label “b”). Moreover, all-domain pretraining (instead of training on data generated by the original InPars recipe) further boosts accuracy of MiniLM-L6-30M in all cases: All these improvements are statistically significant (denoted by super-script label “c”). In contrast, all-domain pretraining substantially degrades performance of DeBERTA-v3-435M. An in-depth investigation showed that for one seed (out of three), the model has failed to converge properly. Therefore, we also analyze the best-seed outcomes which are presented in § A.3 Table 5. For MiniLM-L6-30M, the all-domain pre-training improves the best-seed accuracy in all cases. For DeBERTA-v3-435M, there is either a substantial degradation or a small decrease/increase that is not statistically significant (denoted by super-script label “c”). Thus, our biggest model—unlike a 15x smaller MiniLM-L6-30M—does not benefit from all-domain pretraining. However, there is no substantial degradation either. Supervised transfer learning with optional unsupervised fine-tuning. We found that our ranking models trained on MS MARCO (both MiniLM-L6-30M and DeBERTA-v3-435M) transferred well to other collections in almost all the cases. However, monoT5 models trained on MS MARCO are still substantially more accurate. According to Table 1, the average gains over BM25 are (1) 1.21 for MiniLM-30M vs. 1.46 for monoT5-200M and (2) 1.42 for DeBERTA-v3-435M vs. 1.59 for monoT5-3B. In that, this gap is not reduced by fine-tuning using synthetically generated data. 
This is different from the fully unsupervised scenario described above, where MiniLM-L6-30M often outperforms monoT5-220M while DeBERTA-v3-435M is at par with monoT5-3B. This is in line with prior findings that large ranking models have better zero-shot transferring effectiveness (Ni et al., 2021; Rosa et al., 2022). However, using multi-billion parameter models pre-trained on MS MARCO in a commercial setting is problematic from both efficiency and legal standpoints. In particular, MS MARCO has a research-only license.12. Model-type ablation. To assess the impact of replacing GPT-3 Curie with an open-source model, we carried out experiments using the following ranking models: ERNIE-v2 (Sun et al., 2020) and DeBERTA-v3-435M (He et al., 2021). According to Table 4, except for NQ—where all generative models were equally good—both 12See terms and conditions: https://microsoft.github.io/msmarco/ 12 Published in Transactions on Machine Learning Research (MM/YYYY) GPT-J (Wang & Komatsuzaki, 2021) and BLOOM (Scao et al., 2022) outperformed GPT-3 Curie. This answers RQ2. The difference in accuracy was particularly big for Robust04. The average relative gain over GPT-3 curie (not shown in the table) were 7.2% for BLOOM and 5.2% for GPT-J.13 Out of 14 comparisons, 10 were statistically significant (as denoted by super-script “b”). In addition to varying a generative model, we assessed the impact of using DeBERTA-v3 instead of ERNIE-v2. This time around, both models were trained using BLOOM-generated queries. We can see that DeBERTA-v3 was better than ERNIE-v2 except the case of Robust04. 6 Conclusion We carried out a reproducibility study of InPars (Bonifacio et al., 2022), which is a method for unsupervised training of neural rankers. As a by-product of this study, we developed a simple-yet-effective modification of InPars, which we called InPars-light. Unlike InPars, InPars-light uses only a community-trained open-source language model BLOOM (with 7B parameters), 7x-100x smaller ranking models, and re-ranks only top-100 candidate records instead of top-1000. Not only were we able to reproduce key findings from prior work (Bonifacio et al., 2022), but, combining the original InPars recipe (Bonifacio et al., 2022) with (1) fine-tuning on consistency-checked data (Dai et al., 2022) and (2) all-domain pretraining, we trained an efficient yet small model MiniLM-L6-30M consistently outperforming BM25 in the unsupervised setting. In the same scenario, using a larger DeBERTA-v3-435M model, we largely matched performance of a 7x larger monoT5-3B. In the supervised transfer learning setting—when pretraining on MS MARCO was used—the monoT5-220M model was still substantially more accurate than a 7x smaller MiniLM-30M ranker and this gap was not reduced by unsupervised fine-tuning using synthetically generated data. References Chris Alberti, Daniel Andor, Emily Pitler, Jacob Devlin, and Michael Collins. Synthetic QA corpora generation with roundtrip consistency. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pp. 6168–6173, Florence, Italy, July 2019. Association for Computational Linguistics. doi: 10.18653/v1/P19-1620. URL https://aclanthology.org/P19-1620. Payal Bajaj, Daniel Campos, Nick Craswell, Li Deng, Jianfeng Gao, Xiaodong Liu, Rangan Majumder, Andrew McNamara, Bhaskar Mitra, Tri Nguyen, et al. MS MARCO: A human generated machine reading comprehension dataset. arXiv preprint arXiv:1611.09268, 2016. 
Luiz Henrique Bonifacio, Hugo Abonizio, Marzieh Fadaee, and Rodrigo Nogueira. Inpars: Unsupervised dataset generation for information retrieval. In SIGIR, pp. 2387–2392. ACM, 2022. Leonid Boytsov and Eric Nyberg. Flexible retrieval with NMSLIB and FlexNeuART. In Proceedings of Second Workshop for NLP Open Source Software (NLP-OSS), pp. 32–43, 2020. Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. Language models are few-shot learners. In NeurIPS, 2020. Chris Buckley, Darrin Dimmick, Ian Soboroff, and Ellen M. Voorhees. Bias and the limits of pooling for large collections. Inf. Retr., 10(6):491–508, 2007. 13The average gain was obtained by (1) computing relative gain separately for each datasets and key metrics (nDCG or MRR) and (2) averaging these relative gains. 13 Published in Transactions on Machine Learning Research (MM/YYYY) Nick Craswell, Bhaskar Mitra, Emine Yilmaz, Daniel Campos, and Ellen M Voorhees. Overview of the TREC 2019 deep learning track. arXiv preprint arXiv:2003.07820, 2020. Zhuyun Dai, Vincent Y. Zhao, Ji Ma, Yi Luan, Jianmo Ni, Jing Lu, Anton Bakalov, Kelvin Guu, Keith B. Hall, and Ming-Wei Chang. Promptagator: Few-shot dense retrieval from 8 examples. CoRR, abs/2209.11755, 2022. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805, 2018. Leo Gao. https://blog.eleuther.ai/gpt3-model-sizes/, May 2021. Luyu Gao, Xueguang Ma, Jimmy Lin, and Jamie Callan. Precise zero-shot dense retrieval without relevance labels, 2022. URL https://arxiv.org/abs/2212.10496. Lei Han, Eddy Maddalena, Alessandro Checco, Cristina Sarasua, Ujwal Gadiraju, Kevin Roitero, and Gianluca Demartini. Crowd worker strategies in relevance judgment tasks. In WSDM, pp. 241–249. ACM, 2020. Pengcheng He, Xiaodong Liu, Jianfeng Gao, and Weizhu Chen. Deberta: decoding-enhanced bert with disen- tangled attention. In 9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021. OpenReview.net, 2021. URL https://openreview.net/forum?id=XPZIaotutsD. Geoffrey E. Hinton, Oriol Vinyals, and Jeffrey Dean. Distilling the knowledge in a neural network. CoRR, abs/1503.02531, 2015. Sebastian Hofstätter, Sophia Althammer, Michael Schröder, Mete Sertkan, and Allan Hanbury. Improving efficient neural ranking models with cross-architecture knowledge distillation, 2020. URL https://arxiv. org/abs/2010.02666. Gautier Izacard, Mathilde Caron, Lucas Hosseini, Sebastian Riedel, Piotr Bojanowski, Armand Joulin, and Edouard Grave. Towards unsupervised dense information retrieval with contrastive learning. CoRR, abs/2112.09118, 2021. Vitor Jeronymo, Luiz Bonifacio, Hugo Abonizio, Marzieh Fadaee, Roberto Lotufo, Jakub Zavrel, and Rodrigo Nogueira. Inpars-v2: Large language models as efficient dataset generators for information retrieval, 2023. URL https://arxiv.org/abs/2301.01820. 
Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Jacob Devlin, Kenton Lee, et al. Natural Questions: a benchmark for question answering research. Transactions of the Association for Computational Linguistics, 7:453–466, 2019. Phuc H. Le-Khac, Graham Healy, and Alan F. Smeaton. Contrastive representation learning: A framework and review. IEEE Access, 8:193907–193934, 2020. Jinyu Li, Rui Zhao, Jui-Ting Huang, and Yifan Gong. Learning small-size DNN with output-distribution-based criteria. In INTERSPEECH, pp. 1910–1914. ISCA, 2014. Minghan Li and Éric Gaussier. Domain adaptation for dense retrieval through self-supervision by pseudo- relevance labeling. CoRR, abs/2212.06552, 2022. Jimmy Lin, Xueguang Ma, Sheng-Chieh Lin, Jheng-Hong Yang, Ronak Pradeep, and Rodrigo Nogueira. Pyserini: A python toolkit for reproducible information retrieval research with sparse and dense represen- tations. In Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval, pp. 2356–2362, 2021a. Jimmy Lin, Rodrigo Nogueira, and Andrew Yates. Pretrained Transformers for Text Ranking: BERT and Beyond. Synthesis Lectures on Human Language Technologies. Morgan & Claypool Publishers, 2021b. 14 Published in Transactions on Machine Learning Research (MM/YYYY) Sheng-Chieh Lin, Jheng-Hong Yang, and Jimmy Lin. Distilling dense representations for ranking using tightly-coupled teachers. CoRR, abs/2010.11386, 2020. Pengfei Liu, Weizhe Yuan, Jinlan Fu, Zhengbao Jiang, Hiroaki Hayashi, and Graham Neubig. Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. CoRR, abs/2107.13586, 2021. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. Roberta: A robustly optimized BERT pretraining approach. CoRR, abs/1907.11692, 2019. Ilya Loshchilov and Frank Hutter. Decoupled weight decay regularization. arXiv preprint arXiv:1711.05101, 2017. Sean MacAvaney, Andrew Yates, Sergey Feldman, Doug Downey, Arman Cohan, and Nazli Goharian. Simplified data wrangling with ir_datasets. In SIGIR, 2021. Irina Matveeva, Chris Burges, Timo Burkard, Andy Laucius, and Leon Wong. High accuracy retrieval with multiple nested ranker. In SIGIR, pp. 437–444. ACM, 2006. Iurii Mokrii, Leonid Boytsov, and Pavel Braslavski. A systematic evaluation of transfer learning and pseudo-labeling with bert-based ranking models. In SIGIR, pp. 2081–2085. ACM, 2021. Marius Mosbach, Maksym Andriushchenko, and Dietrich Klakow. On the stability of fine-tuning BERT: misconceptions, explanations, and strong baselines. arXiv preprint arXiv:2006.04884, 2020. Jianmo Ni, Chen Qu, Jing Lu, Zhuyun Dai, Gustavo Hern’andez ’Abrego, Ji Ma, Vincent Zhao, Yi Luan, Keith B. Hall, Ming-Wei Chang, and Yinfei Yang. Large dual encoders are generalizable retrievers. ArXiv, abs/2112.07899, 2021. Rodrigo Nogueira and Kyunghyun Cho. Passage re-ranking with BERT. arXiv preprint arXiv:1901.04085, 2019. Rodrigo Nogueira, Zhiying Jiang, and Jimmy Lin. Document ranking with a pretrained sequence-to-sequence model. arXiv preprint arXiv:2003.06713, 2020. John M. Prager. Open-domain question-answering. Found. Trends Inf. Retr., 1(2):91–231, 2006. Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 
Exploring the limits of transfer learning with a unified text-to-text transformer. J. Mach. Learn. Res., 21:140:1–140:67, 2020. Revanth Gangi Reddy, Bhavani Iyer, Md. Arafat Sultan, Rong Zhang, Avirup Sil, Vittorio Castelli, Radu Florian, and Salim Roukos. Synthetic target domain supervision for open retrieval QA. In SIGIR, pp. 1793–1797. ACM, 2021. Kirk Roberts, Tasmeer Alam, Steven Bedrick, Dina Demner-Fushman, Kyle Lo, Ian Soboroff, Ellen M. Voorhees, Lucy Lu Wang, and William R. Hersh. TREC-COVID: rationale and structure of an information retrieval shared task for COVID-19. J. Am. Medical Informatics Assoc., 27(9):1431–1436, 2020. Stephen Robertson. Understanding inverse document frequency: on theoretical arguments for IDF. Journal of Documentation, 60(5):503–520, 2004. doi: 10.1108/00220410410560582. Adriana Romero, Nicolas Ballas, Samira Ebrahimi Kahou, Antoine Chassang, Carlo Gatta, and Yoshua Bengio. Fitnets: Hints for thin deep nets. In ICLR (Poster), 2015. Guilherme Rosa, Luiz Bonifacio, Vitor Jeronymo, Hugo Abonizio, Marzieh Fadaee, Roberto Lotufo, and Rodrigo Nogueira. In defense of cross-encoders for zero-shot retrieval, 2022. URL https://arxiv.org/ abs/2212.06121. 15 Published in Transactions on Machine Learning Research (MM/YYYY) Devendra Singh Sachan, Mike Lewis, Mandar Joshi, Armen Aghajanyan, Wen-tau Yih, Joelle Pineau, and Luke Zettlemoyer. Improving passage retrieval with zero-shot question generation. CoRR, abs/2204.07496, 2022. Teven Le Scao, Angela Fan, Christopher Akiki, Ellie Pavlick, Suzana Ilic, Daniel Hesslow, Roman Castagné, Alexandra Sasha Luccioni, François Yvon, Matthias Gallé, Jonathan Tow, Alexander M. Rush, Stella Bider- man, Albert Webson, Pawan Sasanka Ammanamanchi, Thomas Wang, Benoît Sagot, Niklas Muennighoff, Albert Villanova del Moral, Olatunji Ruwase, Rachel Bawden, Stas Bekman, Angelina McMillan-Major, Iz Beltagy, Huu Nguyen, Lucile Saulnier, Samson Tan, Pedro Ortiz Suarez, Victor Sanh, Hugo Laurençon, Yacine Jernite, Julien Launay, Margaret Mitchell, Colin Raffel, Aaron Gokaslan, Adi Simhi, Aitor Soroa, Alham Fikri Aji, Amit Alfassy, Anna Rogers, Ariel Kreisberg Nitzav, Canwen Xu, Chenghao Mou, Chris Emezue, Christopher Klamm, Colin Leong, Daniel van Strien, David Ifeoluwa Adelani, and et al. BLOOM: A 176b-parameter open-access multilingual language model. CoRR, abs/2211.05100, 2022. Timo Schick and Hinrich Schütze. Generating datasets with pretrained language models. CoRR, abs/2104.07540, 2021. Leslie N. Smith. Cyclical learning rates for training neural networks. In WACV, pp. 464–472, 2017. Yu Sun, Shuohuan Wang, Yu-Kun Li, Shikun Feng, Hao Tian, Hua Wu, and Haifeng Wang. ERNIE 2.0: A continual pre-training framework for language understanding. In AAAI, pp. 8968–8975. AAAI Press, 2020. Nandan Thakur, Nils Reimers, Andreas Rücklé, Abhishek Srivastava, and Iryna Gurevych. Beir: A heteroge- nous benchmark for zero-shot evaluation of information retrieval models. arXiv preprint arXiv:2104.08663, 2021. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. In NIPS, pp. 5998–6008, 2017. Ellen Voorhees. Overview of the trec 2004 robust retrieval track. In TREC, 2004. Ben Wang and Aran Komatsuzaki. GPT-J-6B: A 6 Billion Parameter Autoregressive Language Model. https://github.com/kingoflolz/mesh-transformer-jax, May 2021. Kexin Wang, Nandan Thakur, Nils Reimers, and Iryna Gurevych. 
GPL: generative pseudo labeling for unsupervised domain adaptation of dense retrieval. In NAACL-HLT, pp. 2345–2360. Association for Computational Linguistics, 2022.

Lidan Wang, Jimmy Lin, and Donald Metzler. A cascade ranking model for efficient ranked retrieval. In SIGIR, pp. 105–114. ACM, 2011.

Wenhui Wang, Furu Wei, Li Dong, Hangbo Bao, Nan Yang, and Ming Zhou. MiniLM: Deep self-attention distillation for task-agnostic compression of pre-trained transformers. In NeurIPS, 2020.

Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45, Online, October 2020. Association for Computational Linguistics. doi: 10.18653/v1/2020.emnlp-demos.6. URL https://aclanthology.org/2020.emnlp-demos.6.

A Appendix

A.1 Statistical Testing with Multiple-Seed Models

To compute statistical significance using a paired statistical test between results from models A and B, one first has to compute the values of an accuracy metric (e.g., MRR) for each query separately. Let m^A_i and m^B_i be sequences of query-specific metric values for models A and B, respectively. The paired statistical test is then carried out on the sequence of differences m^A_i − m^B_i. This procedure is not directly applicable when each model is represented by multiple outcomes/seeds. To overcome this issue, we (1) obtain a set of query- and seed-specific metric values, and (2) average them over seeds, thus reducing the problem to single-seed statistical testing. In more detail, let m^A_{is} and m^B_{is} be the sets of query- and seed-specific metric values for models A and B, respectively. Recall that we have three seeds, so s ∈ {1, 2, 3}. Then, we obtain the seed-averaged runs m̄^A_i = (1/3) Σ_{s=1}^{3} m^A_{is} and m̄^B_i = (1/3) Σ_{s=1}^{3} m^B_{is} and compute statistical significance using a paired difference test.

A.2 Cost and Efficiency

In the following sub-section, we discuss both the ranking efficiency and the query-generation cost. Although one may argue that the cost of generation using open-source models is negligibly small, in reality this is true only if one owns the hardware and generates enough queries to justify the initial investment. Thus, we make a more reasonable assessment assuming that the user employs a cheap cloud service.

Cost of Query Generation. For the original InPars (Bonifacio et al., 2022), the cost of generation for the GPT-3 Curie model is $0.002 per one thousand tokens. The token count includes the length of the prompt and the prompting document.14 We estimate that (depending on the collection) a single generation involves 300 to 500 tokens: the long-document collections Robust04 and TREC-COVID both have close to 500 tokens per generation. Taking an estimate of 500 tokens per generation, the cost of querying the OpenAI GPT-3 Curie API can be up to $100 for Robust04 and TREC-COVID.
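As a quick sanity check on this estimate, the arithmetic can be written out explicitly. The price and per-generation token count come from the paragraph above; the assumed number of generations per collection (roughly 100K queries) is an illustrative assumption.

```python
# Back-of-the-envelope estimate of GPT-3 Curie query-generation cost.
# Price and token count are quoted in the text above; the number of
# generations per collection is an illustrative assumption (~100K queries).
PRICE_PER_1K_TOKENS = 0.002    # USD per 1,000 tokens (Curie)
TOKENS_PER_GENERATION = 500    # upper estimate for Robust04 / TREC-COVID
NUM_GENERATIONS = 100_000      # assumed queries generated per collection

cost_usd = PRICE_PER_1K_TOKENS * (TOKENS_PER_GENERATION / 1000) * NUM_GENERATIONS
print(f"Estimated generation cost: ${cost_usd:,.0f}")  # -> $100
```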
Assuming that sampling from the 137-B FLAN model (used by (Dai et al., 2022)) to be as expensive as from the largest GPT-3 model Davinci (which has a similar number of parameters), each generation in the Promptagator study (Dai et al., 2022), was 10x more expensive compared to InPars study (Bonifacio et al., 2022). Moreover, because Dai et al. (2022) generated one million samples per collection, the Promptagator recipe was about two orders of magnitude more expensive compared to InPars. In contrast, it takes only about 15 hours to generate 100K queries using RTX 3090 GPU. Extrapolating this estimate to A100, which is about 2x faster than RTX 309015, and using the pricing of Lambda GPU cloud, we estimate the cost of generation in our InPars-light study to be under $10 per collection. 16 Efficiency of Re-ranking. A rather common opinion (in particular expressed by anonymous reviewers on multiple occasions) is that using cross-encoders is not a practical option. This might be true for extremely constrained latency environments or very large models, but we think it is totally practical to use small models such as MiniLM-L6-30M for applications such as enterprise search. In particular, on a reasonably modern GPU (such as RTX 3090) and MinLm-L6-30M re-ranking throughput exceeds 500 passages per second (assuming truncation to the first 477 characters). Thus re-ranking 100 documents has an acceptable sub-second latency. In fact, Cohere AI provides re-ranking with neural models as a cloud service.17 Cost of Model Training. Here, all training times are given with respect to a single RTX 3090 GPU. Training and evaluating MiniLM6-30M models had negligible costs dominated by all-domain pretraining, which took about two hours per seed. In contrast, the all-domain pretraining of DeBERTA-v3-435M took 28 hours. However, without all-domain pretraining, the training time itself was rather small, in particular, because we used only a fraction of all generated queries (10K queries in the original InPars training and about 20K queries in the follow-up fine-tuning using consistency checked data). Aside from all-domain pre-training, the two most time-consuming operations were: • Evaluation of model effectiveness on large query sets MS MARCO and NQ, which jointly have about 10K queries; 14https://chengh.medium.com/understand-the-pricing-of-gpt3-e646b2d63320 15https://lambdalabs.com/blog/nvidia-rtx-a6000-benchmarks 16https://lambdalabs.com/service/gpu-cloud#pricing 17https://docs.cohere.com/docs/reranking 17 Published in Transactions on Machine Learning Research (MM/YYYY) Table 5: Best-Seed Results for Unsupervised Training MS MARCO TREC DL 2020 Robust04 NQ MRR MAP nDCG@10 MAP nDCG@20 nDCG@10 TREC COVID nDCG@10 BM25 (ours) 0.1867 0.3612 0.5159 0.2555 0.4285 0.3248 0.6767 MiniLM (InPars) MiniLM (InPars ▶ consist. check) MiniLM (InPars all ▶ consist. check) ba0.2197 b0.3562 cba0.2422 b0.3844 ca0.2517 a0.3945 ba0.2415 b0.5151 ba0.2380 ba0.4029 ba0.5753 cb0.2615 cba0.4554 cb0.3297 a0.5769 c0.2671 ca0.4691 ca0.3800 b0.6732 ba0.7483 a0.7709 MiniLM-L6-30M results DeBERTA-v3-435M results ba0.2748 a0.4437 ba0.2847 a0.4479 a0.2804 a0.4414 DeBERTA (InPars) DeBERTA (InPars ▶ consist. check) DeBERTA (InPars all ▶ consist. check) Notes: Best results are marked by bold font (separately for each model). Consist. checked queries denotes a set of generated queries filtered out (via consistency checking) using the DeBERTA-v3- 435M model trained on InPars-generated data. 
Super-scripted labels denote the following statistically significant differences (thresholds are given in the main text): a: between a given neural ranking model and BM25; b: between (InPars) and (InPars ▶ consist. check) when comparing ranking models of the same type; c: between (InPars all ▶ consist. check) and (InPars ▶ consist. check) when comparing ranking models of the same type.

DeBERTA-v3-435M rows of Table 5 (remaining cells): ba0.5131 a0.4872 ba0.5417 ca0.4924 ca0.4746 a0.5505 a0.6779 ba0.2874 a0.6813 ba0.3043 a0.6575 a0.3076 a0.8118 a0.8305 a0.8259

• Consistency checking using the DeBERTA-v3-435M model.

The total effectiveness evaluation time for DeBERTA-v3-435M was about 6 hours (for all collections). The consistency checking, however, took about 48 hours. In the future, we may consider carrying out consistency checking using a much faster model, such as MiniLM-L6-30M.

A.3 Additional Experimental Results

Our rankers were trained using three seeds. However, in the case of all-domain pretraining, DeBERTA converged poorly for one seed. Therefore, in Table 5 we present best-seed results.
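To make the seed-averaged significance-testing procedure of Appendix A.1 concrete, a minimal sketch is given below. It assumes the per-query metric values have already been computed and stored as (num_queries, num_seeds) arrays; the paired t-test is shown as one possible choice of paired difference test.

```python
import numpy as np
from scipy import stats

def seed_averaged_paired_test(metric_a, metric_b):
    """Paired significance test for multi-seed runs (cf. Appendix A.1).

    metric_a, metric_b: arrays of shape (num_queries, num_seeds) holding a
    query-specific metric (e.g., MRR or nDCG) for models A and B.
    """
    # Step (2): average over seeds, reducing to single-seed testing.
    avg_a = metric_a.mean(axis=1)          # shape: (num_queries,)
    avg_b = metric_b.mean(axis=1)
    # Paired difference test on the seed-averaged, query-specific values.
    return stats.ttest_rel(avg_a, avg_b)

# Toy example: 1,000 queries, three seeds per model.
rng = np.random.default_rng(0)
result = seed_averaged_paired_test(rng.uniform(size=(1000, 3)),
                                   rng.uniform(size=(1000, 3)))
print(result)  # t-statistic and p-value
```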
synthetic_cpt
4
Data_Augmentation_for_Spoken_Language_Understanding_via_Pretrained_Language_Models.pdf
Data Augmentation for Spoken Language Understanding via Pretrained Language Models

Baolin Peng∗, Chenguang Zhu∗, Michael Zeng, Jianfeng Gao
Microsoft Research, Redmond
{bapeng,chezhu,nzeng,jfgao}@microsoft.com
∗Equal contribution

Abstract

The training of spoken language understanding (SLU) models often faces the problem of data scarcity. In this paper, we put forward a data augmentation method using pretrained language models to boost the variability and accuracy of generated utterances. Furthermore, we investigate and propose solutions to two previously overlooked semi-supervised learning scenarios of data scarcity in SLU: i) Rich-in-Ontology: ontology information with numerous valid dialogue acts is given; ii) Rich-in-Utterance: a large number of unlabelled utterances are available. Empirical results show that our method can produce synthetic training data that boosts the performance of language understanding models in various scenarios.

Index Terms: Spoken language understanding, pretraining, data augmentation, rich-in-ontology, rich-in-utterance

1. Introduction

Spoken Language Understanding (SLU) is widely applied in human-machine dialogue systems to convert natural utterances into predefined semantic frames, i.e., dialogue acts, for further processing. For example, an SLU component in a virtual assistant or robot outputs its prediction of the intents and slot labels detected within a user's utterance [1]. Nevertheless, as a supervised learning task, SLU suffers from the problem of data scarcity. The problem becomes more prevalent in the face of new LU domains with novel definitions of intents and slot labels. Even within an existing domain, the data correlated with a certain intent or slot is often not sufficient. These problems significantly limit the applicability of SLU systems.

Recently, various successful use cases of synthetic datasets have stimulated the growth of the area of Data Augmentation (DA) [2, 3]. The typical approach is to learn a model that mimics the language style of the training data, leveraging the relationship between semantic units and their natural representations. A non-generative model can then modify utterances and replace slot labels from existing data [4], while a generative model can produce synthetic utterances in the same distribution space as the training data [5]. However, these approaches usually train the DA model on domain-specific data, which is of a small scale by itself. It is thus questionable whether the augmented data contains rich language expressibility beyond the scope of the given data.

On the other hand, the rapid development of large-scale pretrained language models has significantly improved the capacity of language understanding and generation models [6, 7]. With a modest amount of domain-specific data, a pretrained model can quickly adapt to a new domain. For instance, SC-GPT [8] finetunes the GPT-2 language model [9] with dialogue data. It can efficiently adapt to new dialogue domains with only a couple of labelled data samples.

In this paper, we propose to frame data augmentation as a semantically controlled generation problem. Given a dialogue act, we leverage the pretrained SC-GPT model to generate corresponding utterances as synthetic training data. In the process, the general language syntax and semantics learned during the pretraining phase are fused into the generation of domain-specific utterances to increase the variability and accuracy of SLU.
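The generation step sketched above can be illustrated with the public GPT-2 checkpoint standing in for SC-GPT; the actual SC-GPT weights, its dialogue-act linearization, and the sampling settings shown here are assumptions for illustration only.

```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# Hypothetical linearization of a dialogue act; SC-GPT's own format may differ.
dialogue_act = "hotel-inform ( name = Hyatt ; area = center ; star = 5 ) ->"
inputs = tokenizer(dialogue_act, return_tensors="pt")

# Sample one candidate utterance conditioned on the dialogue act.
output_ids = model.generate(
    **inputs,
    do_sample=True,
    top_p=0.9,
    max_new_tokens=40,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```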
Furthermore, previous literature on SLU data augmentation focuses on the case where only a scant number of pairs of utterances and corresponding semantic labels are given, which we denote as Paired-Data-Only. However, there are two other overlooked semi-supervised learning scenarios that commonly arise in applications.

• Rich-in-Ontology: The full ontology for the dialogue domain is also given, including the definitions of intents, slot lists and valid combinations of slots and values. In other words, the model is given a variety of valid combinations of semantic labels. What is missing is the corresponding natural language utterances.

• Rich-in-Utterance: Apart from the labelled data, there are abundant unlabelled utterances without annotated intents, slots and values.

In this paper, we also delve into these two scenarios and propose corresponding data augmentation solutions. For Rich-in-Ontology, we first finetune the pretrained model SC-GPT on the paired training data, and then apply it to the valid combinations of intents and slots in the ontology information to generate additional training data. For Rich-in-Utterance, following the idea of the NLG model SC-GPT, we propose SC-GPT-NLU, which is pretrained on the same corpus as SC-GPT with flipped sources and targets. In detail, we feed the utterances into the model and let it generate the intent and slots as a text sequence. Therefore, SC-GPT-NLU can act as a language understanding module and produce semantic labels for the available unlabelled utterances.

In the experiments, we evaluate the slot tagging and intent classification accuracies of a Bi-LSTM seq2seq SLU model, using various data augmentation methods. Results show that on the ATIS and Snips datasets, our proposed method outperforms other baseline systems. For instance, compared with baseline methods, the data augmented by our system can help the underlying SLU model achieve 0.5 points higher slot F1 and 3.02 points higher intent accuracy on the ATIS-Small dataset. Furthermore, when ontology information or unlabelled utterances are available, i.e., Rich-in-Ontology and Rich-in-Utterance, our method can produce synthetic data that significantly boosts the performance of SLU models.

2. Related Work

2.1. SLU Data Augmentation

Many previous approaches to SLU data augmentation aim to increase the variability of generated utterances. [10] proposes to add noise that perturbs the decoder states to generate variants of an utterance. Variational autoencoders (VAE) and conditional variational autoencoders (CVAE) are used to generate utterances with diversified expressions [11]. [4] uses both non-generative models like word substitution and generative models like paraphrasing and back-translation to augment training data. [5] proposes a multi-stage framework to generate, filter, and rank augmented utterances. [12] uses reinforcement learning to learn a generator that facilitates dialogue state tracking. [13] employs atomic templates to guide the model to generate more utterances given combinations of dialogue acts. [14] proposes to select sentences from unlabeled utterances and apply pseudo-labels. The two additional scenarios we propose in this paper are also related to semi-supervised learning [15], but we focus on data augmentation, which is independent of the downstream learning models. Similar to our work, [16, 17] use pretrained language models to generate synthetic training data for data augmentation.
However, their approach blends multiple labels and input sentences together during training, so it is hard to control the amount of generated synthetic data per class.

2.2. Pretraining

Pretrained models leverage large amounts of unlabelled text to improve the capability of language understanding. ELMo [18] applies two unidirectional RNNs for language modeling. GPT-2 [9] utilizes the transformer architecture [19] for the same task. BERT [6] employs a masking technique and a next-sentence-prediction task to train a bidirectional language model. UniLM [20] uses different masking patterns to unify the model structure for NLU and NLG. These pretrained language models have been widely used with considerable success in various NLP applications such as question answering [21] and summarization [22]. Furthermore, pretrained language models have been leveraged in speech language processing to provide rich contextual embeddings [23]. Specifically, SC-GPT [8], i.e., the Semantically Conditioned Generative Pre-training Transformer, builds upon GPT-2 and is further pretrained on a large-scale dialogue corpus. The resulting model outperforms many baselines in few-shot language generation for task-oriented dialogue.

3. Data Augmentation

3.1. Traditional Augmentation Scenario

We describe the traditional augmentation scenario in SLU as Paired-Data-Only, as the training data consists of N instance pairs. Each pair contains the input tokenized utterance x = (x1, x2, ..., xT) and the corresponding output dialogue act A. A includes the intent label I and P slot-value pairs:

A = [ I, (s1 = v1, · · · , sP = vP) ],   (1)

where I is the intent and (s1 = v1, ..., sP = vP) are the slot-value pairs. Thus, the training data is D = {(x1, A1), ..., (xN, AN)}. However, due to high labeling costs, the size N of the labeled data is usually small. In such cases, data augmentation (DA) is needed. An augmenter S is a language generation model, which is trained on D to be able to produce a corresponding utterance ˜x given an input dialogue act ˜A. For example, given ˜A = [hotel-inform, (name = Hyatt, area = center, star = 5)], S can generate ˜x = "I have booked the 5-star Hyatt hotel in the center area for you."

Model | Input | Output
SC-GPT | Dialogue act | Utterance
SC-GPT-NLU | Utterance | Dialogue act

Table 1: The input and output of SC-GPT [8] and SC-GPT-NLU models. Both are initialized with GPT-2 [9] but further pretrained on different data with swapped inputs and outputs.

Then, during augmentation, we first augment the dialogue acts in the training data by replacing/inserting/deleting slot values to create more combinations. The augmenter S then generates candidate utterances for the dialogue acts. As the generated utterances may not always contain the required slot-value labels, we filter them to make sure that each utterance has all the required input slot-values.

However, the data augmenter itself requires a considerable amount of training data. As a result, augmenters directly trained on D may have limited model capacity and expressibility. Thus, we adopt the pretrained model SC-GPT [8], which is a language model that produces utterances given a dialogue act. SC-GPT is initialized with GPT-2 [9], further pretrained on a large corpus of 400K dialogue-act-utterance pairs, and then fine-tuned on the training data D. It has been shown that SC-GPT can quickly adapt to new domains with only a few domain-specific data samples [8].

3.2.
More Data Augmentation Scenarios We note that in many real applications, there is often additional available information beyond the paired training data. Here, we specify two semi-supervised scenarios that commonly arise in applications but have been overlooked by previous approaches. 3.2.1. Rich In Ontology In many dialogue domains, a detailed description of the ontology is given, which is a list of valid dialogue acts. Formally, the training data consists of both labelled pairs and many dialogue acts: D = {(x1, A1), ..., (xN , AN ), AN +1, ..., AM }. To work with this scenario, we finetune SC-GPT on the paired part of D, i.e. {(x1, A1), ..., (xN , AN )}, and then gen- erate utterances for the other dialogue acts {AN +1, ..., AM }. The utterances are then filtered to make sure that each utterance has all the corresponding slot-values. 3.2.2. Rich In Utterance It is common in practice that a large number of unla- belled dialogue utterances are available, usually collected from history data. Formally, the training data consists of both labelled pairs and many unlabeled utterances: D = {(x1, A1), ..., (xN , AN ), xN +1, ..., xM }. To utilize these utterances, we need to produce correspond- ing dialogue acts. We propose to finetune GPT-2 in the reverse way: feed an utterance as input and let the model generate the dialogue act as output. In other words, we leverage a language generation model to act as a language understanding module, denoted as SC-GPT-NLU (Table 1). Like SC-GPT, SC-GPT-NLU is initialized with GPT-2 and Figure 1: Data augmentation process for the three scenarios: three Paired-Data-Only, Rich-In-Ontology and Rich-In-Utterances. All models are initialized with GPT-2, further pretrained on 400K dialogue corpus [8] and finetuned on the paired data {(x1, A1), ..., (xN , AN )}. Dataset Split Model Slot F1 Intent Acc. ATIS Snips Small Medium Small Medium No-DA 68.91 84.99 Seq2Seq VAE Ours 73.71 74.92 75.42 - 83.65 86.67 Ours 82.42∗ 89.03∗ Intent Slot No Data Augmentation 87.30 90.15 Slot 61.30 Intent Slot Intent 93.43 79.83 97.29 Paired-Data-Only - 90.95 90.71 88.72 89.27 88.61 Rich-in-Ontology 89.81∗ Rich-in-Utterance 92.27∗ - - 64.96 - - 93.43 - - 80.62 - - 97.57 67.06∗ 94.14∗ 82.54∗ 97.86 Ours 78.45 87.46 88.23 91.94 63.46 93.43 80.54 98.14∗ Table 2: Slot F1 and intent accuracy scores on ATIS and Snips dataset. The overall highest score is in bold, and the best result in Paired-Data-Only category is underlined. *: Statistically significant with p-value less than 0.05. further pretrained on the 400K dialogue-act-utterance data and finetuned on the paired part of D. But SC-GPT-NLU treats the utterance as input and dialogue acts as output. So both SC- GPT and SC-GPT-NLU are language generation models with a softmax-based output layer that produces utterance/dialogue acts token by token. During augmentation, SC-GPT-NLU generates dialogue acts for the unlabeled utterances xN +1, ..., xM . Here, the generated names of intents, slots and values are mapped to the pre-defined ontology by string matching. The augmented data is filtered to make sure that each input slot-value appears in the utterance. Figure 1 illustrates our SLU data augmentation process for all three scenarios. 4. Experiments 4.1. Datasets and Metrics We employ the widely used SLU benchmark dataset ATIS [24] and Snips [25]. ATIS contains around 5.8K utterances from flight reservation dialogues. It includes 120 slot labels and 21 intent types. Snips contains 14K utterances from the Snips personal voice assistant. 
It includes 123 slot labels and 7 intent types. To simulate the few-shot data situations, we follow [26] to use two small portions of the ATIS training set as training data: Small (∼1/40 of the original training set) and Medium (∼1/10 of the original training set). A development set of 500 instances is used. Following the same split ratio, we sampled 327 and 1308 instances in Snips for Small and Medium respectively. We use F1 score to measure slot tagging quality and use accuracy score to evaluate intent classification, in accordance with [26]. 4.2. Model Details SLU Model. For fair comparison, we use the same SLU model that is trained on the training data and the data augmented by our model and baseline systems. We adopt the same setting for the SLU model as in [5]. It has two layers of bi-directional LSTM with a hidden dimension of 200 and a dropout probability of 0.5. We choose the Adam optimizer [27] with a learning rate of 0.001. Gradients with a 2-norm greater than 5 are clipped. The best model is selected based on performances on the validation set. The number of training epochs is 50 and the batch size is 20. Data augmentation. For the Paired-Data-Only case, we modify the dialogue acts in the training split to construct around 300 additional combinations of DAs via dropping/inserting/replacing GPT-2Domainadapted model400K CorpusUtteranceDA400K CorpusDAUtteranceSC-GPTSC-GPT-NLUPaired DataAugmented Paired DataDialogue acts from dataRich-In-UtterancePretrainPretrainFinetuneFilterDomainadapted modelPaired DataAugmented Paired DataUnlabeled utterances from dataFilterFinetuneModified dialogue acts from dataAugmented Paired DataFilterRich-In-OntologyPaired-Data-Only DA SC-GPT RateBook (best rating = 6; object select = current; object type = textbook; rating value = 3) Utterance 1 Give 3 out of 6 to current textbook Utterance 2 Utterance 3 DA Utterance 1 Utterance 2 Utterance 3 The current textbook gets a 3 out of 6 I think that the current textbook should be rated 3 out of 6 BookRestaurant ( country = Honduras; facility = indoor; restaurant type = restaurant ) Book me a reservation for an indoor restaurant in Honduras Book an indoor restaurant in Honduras I need to book an indoor restaurant in Honduras SC-GPT-NLU Utterance DA Utterance DA 2 of us want to eat at a restaurant that serves meatballs in VT BookRestaurant ( party size number = 2; restaurant type = restaurant; served dish = meatballs; state = VT ) Add the track to the Metal Talks Metallica playlist. AddToPlaylist ( music item = track; playlist = metal talks Metallica) Table 3: Example utterances generated by SC-GPT given dialogue acts (DA) and dialogue acts generated by SC-GPT-NLU given unlabelled utterances in Snips. slots and values. For each dialogue act, we sample three utter- ances produced by SC-GPT. After filtering out utterances which do not contain all the slot-values, we collect around 500 synthetic utterances and add them into the original training split. We simulate the Rich-in-Ontology scenario by making the dialogue acts in the whole training set available, from which 500 dialogue acts are sampled and added to the training split. For the Rich-in-Utterance scenario, we sample 1,000 utter- ances in the training corpus and use SC-GPT-NLU to produce the most probable dialogue act. After filtering, around 500 utterance-DA pairs are added to the original training split. Implementation details. Both SC-GPT and SC-GPT-NLU are finetuned for 5 epoches with a learning rate as 5e-5. 
Nucleus sampling [28] is used for decoding, where the sampling top-p is 0.9, and the temperature is 1. Details on SC-GPT including the number of parameters and pretraining procedure can be found at [8]. The finetuning takes about half an hour on a V100 GPU machine 64GB memory. Baselines. The baseline data augmentation systems include the seq2seq [5] and variational autoencoder (VAE) data augmenta- tion model [29]. We also report the results for the case without data augmentation, denoted by No-DA. 4.3. Results Table 2 shows the accuracy of slot tagging and intent classifi- cation for various models. Based on the results, we make the following observations. Firstly, our data augmentation method can considerably boost the model accuracy (comparing No-DA and Ours), es- pecially when the training data size is small. For instance, in ATIS, when only paired data is available, the slot F1 increases by 6.51 (Small) and 1.31 (Medium) points, while the intent accuracy increases by 1.68 (Small) and 0.56 (Medium) points. Secondly, under Rich-in-Ontology and Rich-in-Utterance scenarios, our method further boosts the slot F1 by up to 7 points and intent accuracy by up to 2.4 points. Overall, the accuracy scores are the highest when the ontology information is available. This shows that our method can take advantage of additional information and produce better synthetic training data for downstream models. We conduct statistical paired t-tests and find that the best model’s performance is all statistically significant with p-value less than 0.05. Thirdly, under the traditional Paired-Data-Only scenario, our data augmentation method outperforms all baselines in ATIS- Small, and achieves comparable results in ATIS-Medium. This shows that our method is better suited when training data is scarce. 4.4. Examples of Augmented Data In Table 3, we show examples of generated utterances and dia- logue acts by SC-GPT and SC-GPT-NLU in Snips. As shown, after pretraining and domain finetuning, SC-GPT can produce coherent utterances with a high variability, while covering all required intent, slots and values. SC-GPT-NLU can generate dialogue acts in the same format as input to SC-GPT, which captures the important semantic information in the input utter- ances. This demonstrates that pretrained models can quickly adapt to target domains with a small amount of labeled data. This facilitates the generation of high-quality synthetic data for SLU. 5. Conclusion In this paper, we approach the problem of data scarcity in SLU with pretrained language models. After finetuning on domain- specific dialogue data, our model can produce high-quality syn- thetic data which boosts the performance of the downstream SLU model. Moreover, we provide solutions to two semi-supervised scenarios in SLU overlooked by previous literature: Rich-in- Ontology and Rich-in-Utterance. In experiments on the bench- mark datasets ATIS and Snips, we demonstrate that our solution can effectively leverage auxiliary unlabeled data to produce high- quality synthetic training data for building SLU models with a higher accuracy. As future work, we aim to extend the idea of data augmenta- tion based on pretrain language models to other speech language processing tasks, such as information retrieval and summariza- tion. 6. References [1] J. R. Bellegarda, “Spoken language understanding for natural in- teraction: The siri experience,” in Natural interaction with robots, knowbots and smartphones. Springer, 2014, pp. 3–14. [2] X. Lu, B. Zheng, A. Velivelli, and C. 
Zhai, “Enhancing text cat- egorization with semantic-enriched representation and training data augmentation,” Journal of the American Medical Informatics Association, vol. 13, no. 5, pp. 526–535, 2006. [23] Y.-A. Chung, C. Zhu, and M. Zeng, “Semi-supervised speech- language joint pre-training for spoken language understanding,” arXiv preprint arXiv:2010.02295, 2020. [24] G. Tur, D. Hakkani-T¨ur, and L. Heck, “What is left to be un- derstood in atis?” in 2010 IEEE Spoken Language Technology Workshop. IEEE, 2010, pp. 19–24. [25] A. Coucke, A. Saade, A. Ball, T. Bluche, A. Caulier, D. Leroy, C. Doumouro, T. Gisselbrecht, F. Caltagirone, T. Lavril et al., “Snips voice platform: an embedded spoken language understand- ing system for private-by-design voice interfaces,” arXiv preprint arXiv:1805.10190, 2018. [26] Y.-N. Chen, D. Hakanni-T¨ur, G. Tur, A. Celikyilmaz, J. Guo, and L. Deng, “Syntax or semantics? Knowledge-guided joint seman- tic frame parsing,” in 2016 IEEE Spoken Language Technology Workshop (SLT). IEEE, 2016, pp. 348–355. [27] D. P. Kingma and J. Ba, “Adam: A method for stochastic optimiza- tion,” arXiv preprint arXiv:1412.6980, 2014. [28] A. Holtzman, J. Buys, L. Du, M. Forbes, and Y. Choi, “The curious case of neural text degeneration,” arXiv preprint arXiv:1904.09751, 2019. [29] K. M. Yoo, Y. Shin, and S.-g. Lee, “Data augmentation for spoken language understanding via joint variational generation,” in Pro- ceedings of the AAAI Conference on Artificial Intelligence, vol. 33, 2019, pp. 7402–7409. [3] A. Hannun, C. Case, J. Casper, B. Catanzaro, G. Diamos, E. Elsen, R. Prenger, S. Satheesh, S. Sengupta, A. Coates et al., “Deep speech: Scaling up end-to-end speech recognition,” arXiv preprint arXiv:1412.5567, 2014. [4] J. Quan and D. Xiong, “Effective data augmentation ap- proaches to end-to-end task-oriented dialogue,” arXiv preprint arXiv:1912.02478, 2019. [5] Y. Hou, Y. Liu, W. Che, and T. Liu, “Sequence-to-sequence data augmentation for dialogue language understanding,” arXiv preprint arXiv:1807.01554, 2018. [6] J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova, “Bert: Pre- training of deep bidirectional transformers for language under- standing,” arXiv preprint arXiv:1810.04805, 2018. [7] Y. Liu, M. Ott, N. Goyal, J. Du, M. Joshi, D. Chen, O. Levy, M. Lewis, L. Zettlemoyer, and V. Stoyanov, “Roberta: A robustly optimized bert pretraining approach,” arXiv preprint arXiv:1907.11692, 2019. [8] B. Peng, C. Zhu, C. Li, X. Li, J. Li, M. Zeng, and J. Gao, “Few- shot natural language generation for task-oriented dialog,” arXiv preprint arXiv:2002.12328, 2020. [9] A. Radford, K. Narasimhan, T. Salimans, and I. Sutskever, “Im- proving language understanding by generative pre-training,” 2018. [10] G. Kurata, B. Xiang, and B. Zhou, “Labeled data generation with encoder-decoder lstm for semantic slot filling.” in Interspeech, 2016. [11] J. Li, L. Qiu, B. Tang, D. Chen, D. Zhao, and R. Yan, “Insufficient data can also rock! Learning to converse using smaller data with augmentation,” in Proceedings of the AAAI Conference on Artificial Intelligence, vol. 33, 2019, pp. 6698–6705. [12] Y. Yin, L. Shang, X. Jiang, X. Chen, and Q. Liu, “Dialog state tracking with reinforced data augmentation,” arXiv preprint arXiv:1908.07795, 2019. [13] Z. Zhao, S. Zhu, and K. Yu, “Data augmentation with atomic templates for spoken language understanding,” arXiv preprint arXiv:1908.10770, 2019. [14] E. Cho, H. Xie, J. P. Lalor, V. Kumar, and W. M. 
Campbell, “Effi- cient semi-supervised learning for natural language understanding by optimizing diversity,” in 2019 IEEE Automatic Speech Recog- nition and Understanding Workshop (ASRU). IEEE, 2019, pp. 1077–1084. [15] D. P. Kingma, S. Mohamed, D. J. Rezende, and M. Welling, “Semi- supervised learning with deep generative models,” in Advances in neural information processing systems, 2014, pp. 3581–3589. [16] A. Anaby-Tavor, B. Carmeli, E. Goldbraich, A. Kantor, G. Kour, S. Shlomov, N. Tepper, and N. Zwerdling, “Do not have enough data? Deep learning to the rescue!” in Thirty-Fourth AAAI Confer- ence on Artificial Intelligence, 2020. [17] V. Kumar, A. Choudhary, and E. Cho, “Data augmentation using pre-trained transformer models,” arXiv preprint arXiv:2003.02245, 2020. [18] M. E. Peters, M. Neumann, M. Iyyer, M. Gardner, C. Clark, K. Lee, and L. Zettlemoyer, “Deep contextualized word representations,” arXiv preprint arXiv:1802.05365, 2018. [19] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and I. Polosukhin, “Attention is all you need,” Advances in neural information processing systems, pp. 5998– 6008, 2017. [20] L. Dong, N. Yang, W. Wang, F. Wei, X. Liu, Y. Wang, J. Gao, M. Zhou, and H.-W. Hon, “Unified language model pre-training for natural language understanding and generation,” arXiv preprint arXiv:1905.03197, 2019. [21] C. Zhu, M. Zeng, and X. Huang, “Sdnet: Contextualized attention- based deep network for conversational question answering,” arXiv preprint arXiv:1812.03593, 2018. [22] Y. Liu and M. Lapata, “Text summarization with pretrained en- coders,” EMNLP, 2019.
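The filtering rule used throughout Section 3 (keep a generated utterance only if every required slot value appears in it) can be sketched as a simple string check; the dictionary-based dialogue-act representation below is a hypothetical stand-in for whatever structure an implementation actually uses.

```python
def contains_all_slot_values(utterance: str, slot_values: dict) -> bool:
    """Return True if every required slot value occurs in the utterance."""
    text = utterance.lower()
    return all(str(value).lower() in text for value in slot_values.values())

# Example: filter candidate utterances for one dialogue act.
slot_values = {"name": "Hyatt", "area": "center", "star": "5"}
candidates = [
    "I have booked the 5-star Hyatt hotel in the center area for you.",
    "I have booked a hotel for you.",  # missing slot values -> rejected
]
kept = [u for u in candidates if contains_all_slot_values(u, slot_values)]
print(kept)
```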
synthetic_cpt
1
Exploring_the_Utility_of_Self-Supervised_Pretraining_Strategies_for_the_Detection_of_Absent_Lung_Sliding_in_M-Mode_Lung_Ultrasound.pdf
SERE: Exploring Feature Self-relation for Self-supervised Transformer

Zhong-Yu Li, Shanghua Gao, Ming-Ming Cheng

Abstract—Learning representations with self-supervision for convolutional networks (CNN) has been validated to be effective for vision tasks. As an alternative to CNN, vision transformers (ViT) have strong representation ability with spatial self-attention and channel-level feedforward networks. Recent works reveal that self-supervised learning helps unleash the great potential of ViT. Still, most works follow self-supervised strategies designed for CNN, e.g., instance-level discrimination of samples, but they ignore the properties of ViT. We observe that relational modeling on spatial and channel dimensions distinguishes ViT from other networks. To enforce this property, we explore the feature SElf-RElation (SERE) for training self-supervised ViT. Specifically, instead of conducting self-supervised learning solely on feature embeddings from multiple views, we utilize the feature self-relations, i.e., spatial/channel self-relations, for self-supervised learning. Self-relation based learning further enhances the relation modeling ability of ViT, resulting in stronger representations that stably improve performance on multiple downstream tasks. Our source code is publicly available at: https://github.com/MCG-NKU/SERE.

Index Terms—feature self-relation, self-supervised learning, vision transformer

1 INTRODUCTION

SUPERVISED training of neural networks thrives on many vision tasks at the cost of collecting expensive human annotations [1], [2], [3]. Learning visual representations from un-labeled images [4], [5], [6], [7], [8] has proven to be an effective alternative to supervised training, e.g., convolutional networks (CNN) trained with self-supervision have shown comparable or even better performance than their supervised counterparts [9], [10]. Recently, vision transformers (ViT) [11], [12] have emerged with stronger representation ability than CNN on many vision tasks. Pioneering works have shifted the methods designed for self-supervised CNN to ViT and revealed the great potential of self-supervised ViT [13], [14], [15]. Typical self-supervised learning methods designed for ViT, e.g., DINO [13] and MoCoV3 [15], send multiple views of an image into a ViT network to generate feature representations. Self-supervisions, e.g., contrastive learning [15], [16], [17] and clustering [13], [18], are then implemented on these representations based on the hypothesis that different views of an image share similar representations. However, the widely used feature representations are still limited to the feature embeddings used by CNN based methods, e.g., image-level embeddings [6], [7], [19] and patch-level embeddings [20], [21]. But the properties of ViT, e.g., the self-relation modeling ability, are less considered by existing self-supervised methods. We wonder if other forms of representations related to ViT can benefit the training of self-supervised ViT.

We seek to improve the training of self-supervised ViT by exploring the properties of ViT. ViT models the feature relations on spatial and channel dimensions with the multi-head self-attention (MHSA) and feedforward network (FFN) [11], [22], [23], respectively. The MHSA aggregates the spatial information with the extracted relations among patches, resulting in stronger spatial relations among patches with similar semantic contexts (see Fig. 1(c)).
The FFN combines features from different channels, implicitly modeling the feature self-relation in the channel dimension. For instance, Fig. 1(c) reveals that channels learn diverse patterns, and there are varying degrees of relations between different channels. Feature self-relation modeling enables ViT with strong representation ability, motivating us to use self-relation as a new representation form for self-supervision.

• The authors are with TMCC, CS, Nankai University, Tianjin 300350, China. S. Gao is the corresponding author ([email protected]).

Fig. 1. The illustration of self-supervised learning using feature embeddings and our proposed feature self-relation. (a) Typical self-supervised learning methods process the feature embeddings of the image views. (b) We propose to model the feature self-relation that measures the relation inside an image view from different dimensions. (c) Two specific forms of self-relation, i.e., the spatial and channel self-relations. For spatial self-relation, we select 6 patches indicated by differently colored boxes (top right) and visualize their self-relation (top left). For channel self-relation, we show visualized feature maps of 4 channels (bottom right) and the corresponding self-relation (bottom left).

In this work, we propose to utilize the feature SElf-RElation (SERE) for self-supervised training, enhancing the self-relation modeling properties in ViT. Following the spatial relation in MHSA and the channel relation in FFN, we form the spatial and channel self-relations as representations. The spatial self-relation extracts the relations among patches within an image. The channel self-relation models the connection of different channels, where each channel in the feature embeddings highlights unique semantic information. Feature self-relation is the representation
Recently, self-supervised learning has shown great break- throughs due to new forms of self-supervisions, e.g., contrastive learning [7], [35], [36], [37], [38], [39], [40], self-clustering [41], [42], [43], and representation alignment [5], [6], [44], [45], [46], [47]. These methods directly utilize the feature embeddings as representations to generate self-supervisions. For example, many of these methods utilize image-level feature embeddings [19], [41], [48] as representations. And some methods explore using embeddings in more fine-grained dimensions, e.g., pixel [20], [49], patch [50], [51], object [21], and region [21], [52] di- mensions. However, these representations are still embeddings corresponding to different regions of input images. Compared to these embedding based methods that only constrain individual embedding, we further transform the feature embedding to self- relation as a new representation dimension, which adds the constraint to the relation among embeddings. The self-relation provides rich information for self-supervised training and fits well with the relation modeling properties of ViT, thus further boosting the representation quality of ViT. Meanwhile, the self-relation is orthogonal to embedding based methods and consistently improves the performance of multiple methods. 2.2 Self-Supervised Vision Transformer Transformers have been generalized to computer vision [11], [53] and achieved state-of-the-art performance on many tasks, e.g., image classification [12], semantic segmentation [53], [54], and object detection [55]. Due to a lack of inductive bias, training ViT requires much more data and tricks [11], [56]. Recent works have been working on training ViT with self-supervised learning methods [16], [57], [58], [59] to meet the data requirement of ViT with low annotation costs. Many instance discrimination based methods use feature embeddings as the representation for self- supervised learning. For instance, Chen et al. [15] and Caron et 2 al. [13] implement contrastive learning and self-clustering with image-level embeddings, respectively. Zhou et al. [18] develop self-distillation with patch-level embeddings. However, these methods still follow the pretext task of instance discrimination initially designed for CNNs, where representations with invari- ance to transformation are learned by maximizing the similarity among positive samples. New properties in ViT may help the self- supervised training but are ignored by these methods. We explore spatial self-relation and channel self-relation, which are proven more suitable for the training of ViT. 2.3 Masked Image Modeling Concurrent with our work, self-supervised learning by masked image modeling (MIM) [14], [33], [60], [61] has become a popu- lar alternative to instance discrimination (ID) for self-supervised ViT. MIM reconstructs masked patches from unmasked parts, with different forms of reconstruction targets, e.g., discrete to- kenizer [60], [62], raw pixels [14], [59], [63], [64], [65], HOG features [66], patch representations [18], etc. Compared to ID, patch-level reconstruction in MIM enhances token-level represen- tations [18], [61]. Differently, the proposed SERE enhances the ability to model inter-token relations. Experiments also demon- strate that SERE can outperform and complement various MIM- based methods. Additionally, we strengthen the ability to model inter-channel relations, which MIM is missing. 
2.4 Property of Vision Transformer Recent works have shown that the remarkable success of ViT on many vision tasks [12], [54], [67] relies on their strong ability to model spatial relations. Dosovitskiy et al. [11] and Kim et al. [23] find that attention attends to semantically relevant regions of images. Raghu et al. [22] reveal the representations of ViT preserve strong spatial information even in the deep layer. They also observe that patches in ViT have strong connections to regions with similar semantics. Caron et al. [13] find that self-supervised ViT captures more explicit semantic regions than supervised ViT. These observations indicate that ViT has a strong ability to model relations, which is quite different from the pattern-matching mechanisms of CNNs. In this work, we propose to enhance such ability by explicitly using spatial and channel feature self-relations for self-supervised learning. 2.5 Relation Modeling Relation modeling, which has different forms such as pair- wise relation and attention, has facilitated various vision tasks, e.g., knowledge distillation [68], [69], [70], [71], [72], [73], met- ric learning [74], semantic segmentation [75], [76], [77], unsuper- vised semantic segmentation [78], object localization [79], [80], [81], contrastive learning [82], masked image modeling [83], feature aggregation [84] and texture descriptor [85], [86]. In self- supervised learning, early work [87] proposes to utilize relation modeling by calculating channel relations in the whole batch, i.e., batch-relation. In comparison, we explore self-relation, which is the spatial or channel relations for features within an image and fits well with the relation modeling property of ViT. 3 METHOD 3.1 Overview In this work, we focus on the instance discriminative self- supervised learning pipeline [4], [13]. First, we briefly revisit f1(τ1(x)) P Lc P f2(τ2(x)) 3 O Channel self-relation O P Lp P Spatial self-relation Fig. 2. Our method models self-relation from spatial and channel dimensions. Given an image x, two views are generated by two random data augmentations. Here the image patches represent the feature embeddings extracted by the encoder. The feature embeddings are transformed by representation transformation P to generate spatial or channel self-relations. Lp and Lc, i.e., the loss functions defined in Equ. (3) and Equ. (5), enforce consistency between self-relations of different views. For spatial self-relation, only the features in the overlapping region are considered. O means the operation of extracting features from the overlapping region between two views in Equ. (2), where the red dotted box indicates the overlapping region. the framework of common instance discriminative self-supervised learning methods. Given an un-labeled image x, multiple views are generated by different random data augmentations, e.g., gen- erating two views τ1(x) and τ2(x) with augmentations τ1 and τ2. Under the assumption that different views of an image contain similar information, the major idea of most instance discriminative methods is to maximize the shared information encoded from different views. Firstly, two views are sent to the encoder network to extract the feature embeddings r1 ∈ RC×HW and r2 ∈ RC×HW with H · W local patches and C channels. According to the training objective of self-supervised learning methods, the feature embeddings are then transformed with transformation P to obtain different representations, e.g., image- level and patch-level embeddings. 
Different self-supervised opti- mization objectives utilize the obtained representations to get the loss as follows: LI = R(P(r1), P(r2)), (1) where R means the function that maximizes the consistency across views and can be defined with multiple forms, e.g., con- trastive [7], non-contrastive [6], and clustering [4] losses. Our main focus in this work is exploring new forms of rep- resentation transformation P. Motivated by the relation modeling properties in ViT, instead of directly using feature embeddings, we utilize feature self-relation in multiple dimensions as the rep- resentations for self-supervised learning on ViT. In the following sections, we introduce two specific self-relation representations for self-supervised ViT, i.e., spatial and channel self-relations. 3.2 Spatial Self-relation Prior works [11], [13], [22], [23] have observed that ViT has the property of modeling relations among local patches by the MHSA module. Meanwhile, modeling more accurate spatial relations is crucial for many dense prediction tasks [20], [21], e.g., object detection and semantic segmentation. So we propose to enhance the relation modeling ability of ViT by cooperating spatial self- relation for self-supervised training. In the following part, we first give details of the transformation P that transforms the feature embeddings encoded by ViT to spatial self-relation. Then, we τ1(x) τ2(x) Fig. 3. The region-aligned sampling operation for spatial self-relation. τ1(x) and τ2(x) are the different views of an image, and the dotted boxes indicate their regions in the original image. The points in green mean the uniformly sampled points in the overlapped regions. And the points in purple mean the patch features in ViT. give the self-supervision loss utilizing spatial self-relation as the representation. Generating spatial self-relation representation. Given the feature embeddings r1 = f1(τ1(x)) ∈ RC×HW and r2 = f2(τ2(x)) ∈ RC×HW from the ViT backbone, a projection head hp, which consists of a batch normalization [88] layer and a ReLU [89] activation layer, processes these embeddings to obtain p1 = hp(r1) and p2 = hp(r2). Then, we separately calculate their spatial self-relation. In contrast to the image-level embedding, the supervision be- tween spatial self-relation of different views should be calculated between patches at the same spatial positions. However, p1 and p2 are not aligned in the spatial dimension due to the random crop and flip in data augmentations. To solve the misalignment issue, we apply a region-aligned sampling operation O [26] to uniformly sample Hs × Ws points from the overlapping region of p1 and p2.1 As shown in Fig. 3, we localize the overlapping region in the raw image and split the region into Hs × Ws grids, which are not essentially aligned with the patches in ViT. For the center of each grid, we calculate its spatial coordinates in feature maps of each view and then sample its features by bi-linear interpolation. The details of this operation O are shown in the supplementary. For one view, e.g., p1 ∈ RC×HW , we calculate the spatial self- relation Ap(p1) ∈ RHsWs×HsWs as follows: Ap(p1) = Softmax (cid:18) O(p1)T · O(p1) √ C (cid:30) (cid:19) , tp (2) where O(p1) ∈ RC×HsWs is the feature sampled in the over- lapping region, T is the matrix transpose operation, and tp is the temperature parameter that controls the sharpness of the Softmax function. 
In the spatial self-relation, each row represents the relation of one local patch to other patches and is normalized by the Softmax function to generate probability distributions. Self-supervision with spatial self-relation. Spatial self-relation can be used as the representation of many forms of self- supervisions. For simplicity, we give an example of using self- relation for asymmetric non-contrastive self-supervision loss [5], [6] as follows: Lp = Re((cid:26)(cid:26)G(Ap(p1)), Ap(gp(p2))), where Re is the cross-entropy loss, (cid:26)(cid:26)G is the stop-gradient operation to avoid training collapse following [5], and gp is the prediction head for asymmetric non-contrastive loss [5], [6] consisting of a fully connected layer, a batch normalization layer, and a ReLU layer. (3) Multi-head spatial self-relation. In ViT, the MHSA performs multiple parallel self-attention operations by dividing the feature into multiple groups. It is observed that different heads might focus on different semantic patterns [13]. Inspired by this, we divide the feature embeddings into M groups along the channel dimension and calculate the spatial self-relation within each group, obtaining M spatial self-relations for each view. By default, we choose M = 6, as shown in Tab. 12. 3.3 Channel Self-relation In neural networks, each channel represents some kind of pattern within images. Different channels encode diverse patterns [90], [91], providing neural networks with a strong representation capability. The FFN [11] in ViT combines patterns across chan- nels and implicitly models the relation among channels [90], i.e., the pattern encoded in one channel has different degrees of correlation with the patterns encoded by other channels, as shown in Fig. 2. This mechanism motivates us to form channel self-relation as the representation for self-supervised learning to enhance self-relation modeling ability in the channel dimension. Specifically, we transform the feature embedding of ViT to channel self-relation and then use the channel self-relation as the representation for self-supervision. Generating channel self-relation representation. Here, we give the details of the transformation P that transforms the feature 1. In this work, we combine the proposed spatial self-relation with existing methods due to the orthogonality of self-relation. Since existing methods do not restrict that different views must overlap, we only add spatial self-relation to the views with overlapping regions. 4 embeddings to channel self-relation. As in Equ. (2), given the feature embeddings of two views, i.e., r1 and r2, a projection head hc with the same structure as hp processes these embeddings and obtains c1 = hc(r1)T and c2 = hc(r2)T . Then we separately calculate the channel self-relation for each view. For one view, e.g., c1 ∈ RHW ×C , we calculate its channel self- relation Ac(c1) ∈ RC×C as follows: Ac(c1) = Softmax (cid:18) cT 1 · c1 H · W (cid:30) (cid:19) , tc (4) where the Softmax function normalizes each row of the self- relation to get probability distributions, and tc is the temperature parameter controlling the sharpness of probability distributions. Self-supervision with channel self-relation. The channel self- relation can also be utilized as a new form of representation for many self-supervised losses. Similar to the spatial self-relation based loss in Equ. 
Similar to the spatial self-relation based loss in Equ. (3), we give the non-contrastive loss using channel self-relation as follows:

Lc = Re( SG(Ac(c1)), Ac(gc(c2)) ),   (5)

where Re is the cross-entropy loss, and gc is a prediction head with the same structure as gp in Equ. (3). This loss function enforces the consistency of channel self-relations among views and thus enhances the channel self-relation modeling ability of the model. Unlike spatial self-relation, we do not need to consider the spatial misalignment between different views, because we enforce the consistency between channel self-relations, not the channel features, and the channel self-relation defined in Equ. (4) has no spatial dimension.

3.4 Implementation Details

Loss function. By default, we apply our proposed spatial/channel self-relations and image embeddings as representations for self-supervision losses, as these representations reveal different properties of features. The summarized loss function is as follows:

L = LI + αLp + βLc,   (6)

where the spatial and channel losses are weighted by α and β, and LI is the loss using image-level embeddings, e.g., the clustering-based loss in DINO [13]. We show in Tab. 8 that solely using our proposed self-relation could achieve competitive or better performance than using image-level embeddings. Combining these three representations results in better representation quality, showing that self-relation is a complementary representation form to image-level embeddings. To increase the training efficiency and make fair comparisons, we utilize the multi-crop [4], [13] augmentation to generate global and local views. For local views, we follow [4], [13] to calculate the loss between each global and local view but ignore the loss among local views.

Architecture. We use the Vision Transformer [11] as the encoder network. Following [7], [13], the representations r1 and r2 of two views τ1(x) and τ2(x) are extracted by a momentum-updated encoder network f1 and the encoder network f2. During training, the parameters θ2 of f2 are updated by gradient descent, and the parameters θ1 of f1 are updated as θ1 = λθ1 + (1 − λ)θ2, where λ ∈ [0, 1] is the momentum coefficient. Following DINO [13], λ is set to 0.996 and is increased to 1.0 during training with a cosine schedule. Accordingly, we denote the projection heads following f1 and f2 as h1_p/h1_c and h2_p/h2_c, respectively. The parameters of h1_p/h1_c are also momentum-updated by those of h2_p/h2_c, following the updating scheme of f1. Only the encoder network is used for transfer learning on downstream tasks after pre-training.
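A minimal sketch of the bookkeeping described above, i.e., the summed loss of Equ. (6) and the momentum (EMA) update of f1 with a cosine-ramped coefficient; the function names and the per-step schedule granularity are assumptions for illustration only.

import math
import torch

def total_loss(L_I, L_p, L_c, alpha=1.0, beta=1.0):
    # L = L_I + alpha * L_p + beta * L_c   (Equ. (6))
    return L_I + alpha * L_p + beta * L_c

@torch.no_grad()
def momentum_update(target_net, online_net, lam):
    # theta_1 <- lam * theta_1 + (1 - lam) * theta_2
    for t, o in zip(target_net.parameters(), online_net.parameters()):
        t.mul_(lam).add_(o, alpha=1.0 - lam)

def momentum_schedule(step, total_steps, lam_base=0.996, lam_final=1.0):
    # cosine ramp of the momentum coefficient from lam_base to lam_final
    return lam_final - (lam_final - lam_base) * 0.5 * (1.0 + math.cos(math.pi * step / total_steps))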
TABLE 1
Fully fine-tuning classification on ImageNet-1K and semi-supervised semantic segmentation on ImageNet-S. For ImageNet-S, we report the mIoU on the val and test set. The PT means loading self-supervised pre-trained weights for initialization and FT means loading fully fine-tuned weights on classification labels of ImageNet-1K for initialization, respectively.

            Backbone   Epochs   ImageNet-1K Top-1 / Top-5   ImageNet-S PT val / test   ImageNet-S FT val / test
DINO [13]   ViT-S/16   100      79.7 / 95.1                 35.1 / 34.4                54.6 / 54.4
+SERE       ViT-S/16   100      80.9 / 95.5                 36.9 / 36.0                57.3 / 56.2
iBOT [18]   ViT-S/16   100      80.9 / 95.4                 38.1 / 37.8                57.9 / 57.4
+SERE       ViT-S/16   100      81.5 / 95.8                 41.0 / 40.2                58.9 / 57.8
iBOT [18]   ViT-B/16   100      83.3 / 96.6                 48.3 / 47.8                62.6 / 63.0
+SERE       ViT-B/16   100      83.7 / 96.7                 48.6 / 48.2                63.0 / 63.3

TABLE 2
Transferring learning on semantic segmentation, object detection, and instance segmentation. The APb means the bounding box AP for object detection (DET), and APm means the segmentation mask AP for instance segmentation (SEG).

            VOC SEG mIoU / mAcc   ADE20K SEG mIoU / mAcc   COCO DET APb / APb50 / APb75   COCO SEG APm / APm50 / APm75
DINO [13]   77.1 / 87.5           42.6 / 53.4              46.0 / 64.9 / 49.7             40.0 / 62.0 / 42.8
+SERE       79.7 / 88.8           43.8 / 54.6              46.6 / 65.9 / 50.2             40.5 / 62.9 / 43.5

4 EXPERIMENTS

This section verifies the effect of using the proposed spatial and channel self-relations as representations for self-supervised learning. We give the pre-training settings in Section 4.1. In Section 4.2, we compare our method with existing methods on multiple evaluation protocols, showing stable improvement over multiple methods. In Section 4.3, we conduct ablations to clarify design choices.

4.1 Pre-training Settings

Unless otherwise stated, we adopt the ViT-S/16 as the backbone network. DINO [13] is selected as our major baseline method. The model is trained by an AdamW [92] optimizer with a learning rate of 0.001 and a batch size of 512. We pre-train models for 100 epochs on the ImageNet-1K [1] dataset for performance comparison. For ablation, the ImageNet-S300 dataset [26] is used to save training costs. Following [13], we apply the multi-crop training scheme where 2 global views with the resolution of 224×224 and 4 local views with the resolution of 96×96 are adopted. The global views are cropped with a ratio between 0.35 and 1.0. And the local views are cropped with a ratio between 0.05 and 0.35. For spatial self-relation, the Hs/Ws of the operation O in Equ. (2) are set to 13/13 for global views and 6/6 for local views. The number of heads M in spatial self-relation is set to 6 by default. The tp in Equ. (2) and tc in Equ. (4) are set to 0.5 and 0.1 for the encoder network. For the momentum encoder, we set the tp and tc to 1.0 and 1.0. The α and β in Equ. (6) are set to 1.0 and 1.0, respectively. For iBOT [18], 10 local views are used for a fair comparison. And we crop images with a ratio between 0.4 and 1.0 for global views and between 0.05 and 0.4 for local views. A gradient clip of 0.3 is used for optimization. The α and β in Equ. (6) are set to 0.2 and 0.5. Additionally, we provide experiments with ViT-B/16 as the backbone and show the pre-training and fine-tuning details in the supplementary.

TABLE 3
Comparison with longer pre-training epochs.
(a) Semantic segmentation on the ADE20K dataset.
            Backbone   Epochs   mIoU   mAcc
iBOT [18]   ViT-S/16   800      45.4   56.2
+SERE       ViT-S/16   100      45.8   56.8
iBOT [18]   ViT-B/16   400      50.0   60.3
+SERE       ViT-B/16   200      50.0   60.9
(b) Classification on the ImageNet-1K dataset.
            Backbone   Epochs   Top-1   Top-5
iBOT [18]   ViT-S/16   300      81.1    -
+SERE       ViT-S/16   100      81.5    95.8

TABLE 4
Semi-supervised classification on ImageNet-1K. We fine-tune the models with 1%/10% training labels and evaluate them with 100% val labels.
            1% Top-1 / Top-5   10% Top-1 / Top-5
DINO [13]   52.1 / 77.8        70.0 / 89.8
+SERE       55.9 / 81.0        71.5 / 90.6

4.2 Performance and Analysis

We verify the effectiveness of self-relation for self-supervised learning by transferring the pre-trained models to image-level classification tasks and dense prediction downstream tasks. Models are pre-trained with 100 epochs on ImageNet-1K unless otherwise stated. For easy understanding, models pre-trained with self-relation representations are marked as SERE.

Fully fine-tuning classification on ImageNet-1K. We compare the fully fine-tuning classification performance on the ImageNet-1K dataset.
When utilizing ViT-S/16, the pre-trained model is fine-tuned for 100 epochs with the AdamW [92] optimizer and a batch size of 512. The initial learning rate is set to 1e-3 with a layer-wise decay of 0.65. After a warmup of 5 epochs, the learning rate gradually decays to 1e-6 with the cosine decay schedule. We report the Top-1 and Top-5 accuracy for evaluation on the ImageNet-1K val set. As shown in Tab. 1, SERE advances DINO and iBOT by 1.2% and 0.6% on Top-1 accuracy. Even compared to iBOT of 300 epochs, SERE can improve 0.4% Top-1 accuracy with a third of the pre-training time (100 epochs), as shown in Tab. 3 (b). Moreover, using ViT-B/16, SERE surpasses iBOT by 0.4% in Top-1 accuracy, as shown in Tab. 1. These results demonstrate that SERE enhances the category-related representation ability of ViT.

Semi-supervised classification on ImageNet-1K. We also evaluate the classification performance in a semi-supervised fashion. Following the setting of [18], we fully fine-tune the pre-trained models with 1% and 10% training labels on the ImageNet-1K dataset for 1000 epochs. We use the AdamW optimizer to train the model with a batch size of 1024 and a learning rate of 1e-5. Tab. 4 reports the Top-1 and Top-5 accuracy on the ImageNet-1K val set. SERE consistently achieves better accuracy with 1% and 10% labels. With only 1% labels, there is a significant improvement of 3.8% in Top-1 accuracy, showing the advantage of our method in the semi-supervised fashion.

TABLE 5
Transfer learning on the classification task. We fine-tune the pre-trained models on multiple datasets and report the Top-1 accuracy.
            Cifar10   Cifar100   INat19   Flwrs   Cars
DINO [13]   98.8      89.6       76.9     97.8    93.5
+SERE       98.9      90.0       77.5     98.0    93.5

TABLE 6
Compared with masked image modeling on the ImageNet-1K dataset. † means effective pre-training epochs [18] that account for actually used images during pre-training. ‡ means the models are fine-tuned for 200 epochs on ImageNet-1K, while others are fine-tuned for 100 epochs.
                  Architecture   Pre-training Epochs†   Top-1
DINO [13]         ViT-S/16       300                    79.7
MAE‡ [14]         ViT-S/16       800                    80.9
iBOT [18]         ViT-S/16       400                    80.9
DINO [13]+SERE    ViT-S/16       300                    80.9
iBOT [18]+SERE    ViT-S/16       400                    81.5
BEiT [60]         ViT-B/16       800                    83.2
MAE [14]          ViT-B/16       800                    83.3
iBOT [18]         ViT-B/16       400                    83.3
iBOT [18]+SERE    ViT-B/16       400                    83.7

TABLE 7
Cooperating SERE with multiple self-supervised learning methods. Models are pre-trained on the ImageNet-S300 dataset with 100 epochs.

Semi-supervised semantic segmentation for ImageNet-S. The ImageNet-S dataset [26] extends ImageNet-1K with pixel-level semantic segmentation annotations on almost all val images and parts of training images. Evaluating semantic segmentation on the ImageNet-S dataset avoids the potential influence of domain shift between pre-training and fine-tuning datasets. We fine-tune the models with the semantic segmentation annotations in the ImageNet-S training set and evaluate the performance on the val and test sets of ImageNet-S. The ViT-S/16 model is initialized with self-supervised pre-trained weights (ImageNet-SPT) or fully fine-tuned weights on classification labels (ImageNet-SFT) of the ImageNet-1K dataset. A randomly initialized 1 × 1 conv is attached to the model as the segmentation head. We fine-tune models for 100 epochs with an AdamW optimizer, using a batch size of 256 and a weight decay of 0.05. The learning rate is initially set to 5e-4 with a layer-wise decay of 0.5. After a warmup of 5 epochs, the learning rate decays to 1e-6 by the cosine decay schedule.
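The fine-tuning recipes above combine linear warmup, cosine decay, and layer-wise learning-rate decay; the short sketch below illustrates those three ingredients. The exact parameter grouping per ViT block and the default values here are assumptions for illustration, not the paper's released configuration.

import math

def lr_at_epoch(epoch, total_epochs=100, warmup=5, base_lr=5e-4, min_lr=1e-6):
    # linear warmup followed by cosine decay to min_lr
    if epoch < warmup:
        return base_lr * (epoch + 1) / warmup
    t = (epoch - warmup) / max(1, total_epochs - warmup)
    return min_lr + 0.5 * (base_lr - min_lr) * (1.0 + math.cos(math.pi * t))

def layerwise_lrs(num_blocks=12, base_lr=5e-4, decay=0.5):
    # blocks closer to the output keep a larger fraction of the base learning rate
    return {f"blocks.{i}": base_lr * decay ** (num_blocks - i) for i in range(num_blocks)}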
The images are resized and cropped to 224×224 for training and are resized to 256 along the smaller side for evaluation. As shown in Tab. 1, compared to DINO and iBOT, SERE improves the val mIoU by 1.8% and 2.9% when initializing the model with self-supervised pre-trained weights. When loading weights of the fully fine-tuned classification model for initializa- tion, SERE brings a 2.7%/1.0% gain on mIoU over DINO/iBOT. We conclude that SERE enhances the relation modeling ability, enabling ViT with much stronger shape-related representations. Transferring learning on the classification task. To evaluate VOC SEG mIoU mAcc ImageNet-SPT 300 test val MoCov3 [15] +SERE DINO [13] +SERE iBOT [18] +SERE 65.7 67.5 68.1 73.5 74.5 75.9 78.7 80.6 81.1 84.7 85.5 86.3 24.0 29.1 28.8 41.2 41.5 45.3 24.8 29.9 29.6 42.0 42.0 45.6 the transferring ability on classification tasks, we fine-tune pre- trained models on multiple datasets, including CIFAR [93], Flow- ers [94], Cars [95], and iNaturalist19 [96]. The training details are summarized in the supplementary. Tab. 5 shows that SERE performs better on Top-1 accuracy over DINO, demonstrating that SERE benefits the transferring learning on classification tasks. Transfer learning on semantic segmentation. We also evaluate the transfer learning performance on the semantic segmentation task using PASCAL VOC2012 [25] and ADE20K [3] datasets. The UperNet [97] with the ViT-S/16 backbone is used as the seg- mentation model. Following the training setting in [18], we fine- tune models for 20k and 160k iterations on PASCAL VOC2012 and ADE20K datasets, with a batch size of 16. Tab. 2 reports the mIoU and mAcc on the validation set. The self-relation improves the DINO by 2.6% on mIoU and 1.3% on mAcc for the PASCAL VOC2012 dataset. On the ADE20K dataset, there is also an improvement of 1.2% on mIoU and 1.2% on mAcc compared to DINO. Tab. 3 (a) shows that SERE even outperforms iBOT with much fewer pre-training epochs. Therefore, semantic segmenta- tion tasks benefit from the stronger self-relation representation ability of SERE. Transfer learning on object detection and instance segmen- tation. We use the Cascade Mask R-CNN [24] with ViT-S/16 to evaluate the transfer learning performance on object detection and instance segmentation tasks. Following [18], the models are trained on the COCO train2017 set [2] with the 1× schedule and a batch size of 16. Tab. 2 reports the bounding box AP (APb) and the segmentation mask AP (APm) on the COCO val2017 set. Compared to DINO, SERE improves by 0.6% on APb and 0.5% on APm, showing that SERE facilitates the model to locate and segment objects accurately. Comparison with masked image modeling (MIM). We also demonstrate that our proposed method, SERE, outperforms and complements various masked image modeling (MIM) based methods. As shown in Tab. 6, SERE can significantly enhance contrastive learning based approach (e.g., DINO). DINO+SERE achieves comparable performance compared to MIM based meth- ods (iBOT and MAE), requiring less pre-training/fine-tuning epochs. Meanwhile, SERE and MIM can be complementary. For instance, cooperating with SERE further improves iBOT by 0.4% Top-1 accuracy. Moreover, qualitative results in Fig. 4 show that SERE produces more precise and less noisy attention maps than iBOT. These results strongly confirm the effectiveness of SERE compared to MIM-based methods. TABLE 8 Ablation of using different representations for self-supervised training. 
The LI , Lp, and Lc denote the loss functions using image-level embedding [13], spatial self-relation, and channel self-relation, respectively. The model without these three losses is randomly initialized when fine-tuned on downstream tasks. VOC SEG mIoU mAcc ImageNet-SPT 300 test val ✓ LI ✗ ✓ ✓ ✓ ✓ Lp ✗ ✓ ✓ ✓ ✓ Lc ✗ ✓ ✓ ✓ ✓ 25.6 68.1 71.5 61.4 70.7 69.8 71.5 73.5 35.7 81.1 83.0 75.6 82.6 82.9 83.3 84.7 0.2 28.8 23.7 22.5 33.3 36.5 30.6 41.2 0.2 29.6 23.7 22.3 34.5 38.3 30.3 42.0 TABLE 9 Segmentation F-measure [98] on the PASCAL VOC dataset. The F-measure ignores semantic categories. Lp 87.1 IoU Lp + LI Lp + LI + Lc 86.7 87.7 Cooperating with more self-supervised learning methods. The self-relation representation is orthogonal to the existing feature representations. Therefore, it can be integrated into various self- supervised learning methods. To demonstrate this, we combine the SERE with MoCo v3 [15], DINO, and iBOT, i.e., utilizing the self-supervision of these methods as the LI in Equ. (6). We pre-train models on the ImageNet-S300 dataset with 100 epochs to save computation costs, and other training settings are constant with baseline methods. As shown in Tab. 7, using SERE consis- tently improves baseline methods, verifying its generalization to different methods. For example, SERE improves the MoCo v3 by 1.8% on mIoU and 2.0% on mAcc for semantic segmentation on the Pascal VOC dataset. For the semi-supervised semantic segmentation on the ImageNet-S300 dataset, SERE gains 5.1% on mIoU over MoCo v3. 4.3 Ablation Studies To save computational costs for the ablation study, we pre-train all models on the ImageNet-S300 [26] dataset with two global views for 100 epochs. We evaluate models with semantic segmentation on the PASCAL VOC dataset and semi-supervised semantic segmentation on the ImageNet-S300 dataset. Effect of spatial and channel self-relation. We compare the effectiveness of different representation forms for self-supervised i.e., our proposed spatial/channel self-relations and learning, image-level feature embeddings used by DINO. As shown in Tab. 8, the spatial self-relation improves the mIoU by 3.4% and mAcc by 1.9% on the PASCAL VOC dataset compared to the feature embedding. These results show that training self- supervised ViT with spatial self-relation further enhances the spa- tial relation modeling ability of ViT, benefiting dense prediction tasks. Although inferior to the other two representation forms, channel self-relation still improves the representation quality of ViT. The model pre-trained with channel self-relation performs much better than the randomly initialized model on segmentation and classification tasks. TABLE 10 Cooperating self-relations with patch-level embeddings. DINO+ indicates adding the clustering loss using patch-level embeddings to DINO [13]. 7 DINO DINO+ SERE VOC SEG mIoU mAcc ImageNet-SPT 300 test val 68.1 72.6 73.5 75.0 81.1 84.3 84.7 86.1 28.8 40.0 41.2 44.8 29.6 40.4 42.0 46.0 ✓ ✓ ✓ ✓ TABLE 11 Comparison with Barlow [87] that utilizes the batch-relation based loss. VOC SEG mIoU 69.5 69.8 mAcc 82.2 82.9 ImageNet-SPT 300 test val 33.2 36.5 32.9 38.3 Barlow [87] SERE Cooperating with image-level embeddings. We verify the or- thogonality between self-relations and image-level embeddings, as shown in Tab. 8. When combined with the image-level feature embedding, the spatial and channel self-relations improve the mIoU by 2.6% and 1.7% on the PASCAL VOC dataset. 
On the ImageNet-S300 dataset, there is also an improvement of 4.5% and 7.7% on mIoU over feature embedding. And cooperating three representations further boosts the performance on all tasks, indicating that self-relations are orthogonal and complementary to image-level feature embeddings for self-supervised learning. Cooperation between LI and Lc. Tab. 8 shows that Lp alone performs better than Lp + LI or Lp + Lc on the PASCAL VOC dataset. However, using Lp + LI + Lc performs better than Lp. This phenomenon is because utilizing image-level embed- ding (LI ) and channel self-relation (Lc) have their limits, while their cooperation can mitigate them. The details are as follows: 1) Regarding Lc, modeling channel self-relations requires mean- ingful and diverse channel features as the foundation. However, solely relying on Lc cannot adequately optimize the channel features and may lead to model collapse, where an example is that each channel encodes the same features. In comparison, LI facilitates learning diverse and meaningful channel features, thus addressing the limitation mentioned above of Lc. 2) The LI harms spatial features. We validate this by examining the F- measure [98] that ignores the semantic categories. Tab. 9 shows a decrease in IoU when comparing Lp+LI with LI , indicating that LI impairs spatial features. We assume LI makes representations less discriminable in the spatial dimension than Lp. However, by using Lc simultaneously, we promote learning more accurate spatial features, mitigating the drawback caused by using LI . Cooperating with patch-level embeddings. We also verify the orthogonality of self-relation representation to patch-level em- beddings in Tab. 10. As a baseline, we add a clustering loss using patch-level embeddings to DINO, denoted by DINO+. DINO+ consistently advances DINO, showing the effectiveness of patch- level embedding. Compared to DINO+, the self-relation improves the mIoU by 0.9% and 1.2% on PASCAL VOC and ImageNet-S datasets. Cooperating two representations further brings constant improvements over DINO+, e.g., achieving 2.4% and 4.8% gains on mIoU for PASCAL VOC and ImageNet-S datasets. These TABLE 12 The effect of different numbers of heads M for spatial self-relation. M 1 3 6 12 16 VOC SEG mIoU mAcc ImageNet-SPT 300 val test 72.4 72.7 73.5 73.4 72.5 84.0 84.8 84.7 85.1 84.3 38.7 38.9 41.2 40.8 39.3 39.3 39.4 42.0 41.7 39.8 TABLE 13 The effect of different tp and tc in Equ. (2) and Equ. (4). tp 0.50 0.50 0.50 1.00 0.50 0.10 tc 0.50 0.10 0.01 0.10 0.10 0.10 VOC SEG mIoU mAcc ImageNet-SPT 300 test val 72.0 73.5 70.4 70.2 73.5 73.7 84.2 84.7 82.7 83.1 84.7 85.0 36.7 41.2 33.6 36.7 41.2 39.9 36.7 42.0 34.6 38.2 42.0 40.8 TABLE 14 The effect of different α and β in Equ. (6) when cooperating the SERE with iBOT [18]. All models are pre-trained for 100 epochs on ImageNet-1K. 8 Segmentation α β 0.20 0.20 0.20 0.10 0.20 0.80 0.20 0.50 1.00 0.50 0.50 0.50 VOC Classification ImageNet-1K Top-1 81.3 81.5 81.3 81.3 81.5 81.3 Top-5 mIoU mAcc 89.9 80.7 95.7 90.0 81.2 95.8 89.8 80.9 95.8 89.5 80.9 95.8 90.0 81.2 95.8 89.7 80.8 95.8 ImageNet-SPT val 39.9 41.0 41.7 40.7 41.0 40.3 test 39.3 40.3 41.8 40.5 40.3 40.1 TABLE 15 The effect of the asymmetric losses in Equ. (3) and Equ. (5). VOC SEG mIoU mAcc ImageNet-SPT 300 test val DNIO baseline +SERE symmetry +SERE asymmetric 68.1 72.1 73.5 81.1 84.4 84.7 28.8 37.1 41.2 29.6 37.9 42.0 results indicate that the self-relation is complementary to patch- level embedding for self-supervised ViT. 
Comparison between self-relation and batch-relation. A re- lated work, Barlow [87], models channel relation in the whole batch, i.e., batch-relation. In comparison, the proposed SERE computes self-relation within a single image. To verify the advan- tage of self-relation over batch-relation, we pre-train the ViT-S/16 with the two forms of relation, respectively. As shown in Tab. 11, compared to the batch-relation, the self-relation improves mIoU by 0.3% and 3.3% on the PASCAL VOC and ImageNet-S300 datasets. These results show that self-relation is more suitable for the training of ViT over batch-relation. Effect of multi-head. We utilize the multi-head spatial self- relation following the MHSA module in ViT. Tab. 12 shows the effect of different numbers of heads M in spatial self-relation. Compared to the single-head version, increasing M to 6 brings the largest performance gain of 1.1% on mIoU for the PASCAL VOC dataset. M = 12 achieves limited extra gains, while M = 16 suffers a rapid performance drop. More heads enable diverse spatial self-relation, but the number of channels used for calculating each self-relation is reduced. Too many heads result in inaccurate estimation of self-relation, hurting the representation quality. So we default set the number of heads to 6 to balance the diversity and quality of spatial self-relation. Effect of sharpness. The temperature terms in Equ. (2) and Equ. (4) control the sharpness of the self-relation distributions. A small temperature sharpens the distributions, while a large temperature softens the distributions. In Tab. 13, we verify the effectiveness of temperatures for both spatial and channel self- relations. For the channel self-relation, decreasing temperature from 0.1 to 0.01 results in a rapid performance drop from 73.5% to 70.4% on mIoU for the PASCAL VOC dataset. And increasing it from 0.1 to 0.5 also degrades the mIoU from 73.5% to 72.0%. Therefore, we choose 0.1 as the default temperature for the chan- nel self-relation. For the spatial self-relation, the temperature 0.5 performs better than 1.0, and changing the temperature from 0.5 to 0.1 has a limited difference. We set the default temperature of spatial self-relation to 0.5 because a temperature of 0.5 achieves slightly better performance on the large-scale ImageNet-S dataset. Effect of loss weights. The α and β in Equ. (6) determine the relative importance of spatial and channel self-relations, respectively. Tab. 14 shows that the SERE is robust to different α and β. Among different weights, the combination of α = 0.2 and β = 0.5 achieves the best performance on the classification task and competitive performances on the segmentation task. Therefore, we use this combination as the default setting. Effect of asymmetric loss. The asymmetric structure has been proven effective for non-contrastive loss [5], [6] when using image-level embedding as the representation. To verify if self- relation representations also benefit from the asymmetric struc- ture, we compare the asymmetric and symmetry structures for the self-relation based loss in Tab. 15. Self-relation improves the DINO baseline with both asymmetric and symmetry structures. The symmetrical structure outperforms the DINO on PASCAL VOC and ImageNet-S300 datasets with 4.0% and 8.3% on mIoU. The asymmetric structure further advances symmetric structure by 1.4% and 4.1% on mIoU for the PASCAL VOC and ImageNet- S300 datasets. 
Therefore, though the asymmetric structure is not indispensable for self-relation, it still benefits the pre-training with self-relation.

Adaptability to convolutional neural networks. Using self-relation for self-supervised learning is inspired by the properties of ViT. Still, we wonder if the self-relation representation could benefit self-supervised learning on convolutional neural networks (CNN). To verify this, we pre-train the ResNet-50 [9] with DINO and SERE, respectively. The training details are shown in the supplementary. As shown in Tab. 16, SERE improves DINO by 0.7% and 0.8% on mIoU for the semantic segmentation task on the PASCAL VOC and ImageNet-S300 datasets. Though designed for ViT, the self-relation still improves the representation quality of the CNN. Meanwhile, the improvement on CNN is relatively small compared to that on ViT, showing that the self-relation is more suitable for ViT.

Fig. 4. Visualization for attention maps from the last block of the pre-trained ViT-S/16. We extract the attention maps of the CLS token on other patch-level tokens. Different colors indicate the regions focused by different heads. The compared rows are: IMG (input image), MAE, DINO, DINO+SERE, iBOT, and iBOT+SERE.

TABLE 16
The effect of self-relation representation on CNN. DINO and SERE are trained with the ResNet-50 network.
VOC SEG mIoU mAcc DINO (ResNet-50) +SERE (ResNet-50) 61.6 62.5 74.6 75.0 ImageNet-SPT 300 test val 20.2 20.9 19.9 20.7

4.4 Analysis and Visualization

Invariance on self-relations. The importance of learning representations invariant to image augmentations, e.g., scaling, shifting, and color jitter, has been validated in self-supervised learning [99], [100], [101], [102], [103], [104]. However, existing methods focus on the invariance of feature embeddings but do not consider the invariance of spatial/channel relations, which are also important properties of ViT. In contrast, our proposed SERE can enhance the invariance of spatial/channel relations. To verify this, we measure the averaged differences between self-relations of different views. As shown in Fig. 6, we observe that SERE significantly narrows the self-relation differences in both the spatial and channel dimensions. The visualizations in Fig. 5 also show that the SERE pre-trained model produces smaller spatial self-relation differences on the overlapping regions of two views. A smaller difference means a higher invariance. Thus, these results indicate that SERE makes the ViT capture self-relations with stronger invariance to image augmentations.

Fig. 5. The differences between spatial self-relations of two views. (a) Two views from each image. (b) The spatial self-relation generated by DINO. (c) The spatial self-relation generated by SERE. View1 and view2 mean the self-relations of two views generated from an image. The ∆ is the difference between self-relations in the overlapping region, which is indicated by red boxes. We give the details of the visualization method in the supplementary.

Visualization of attention maps. In Fig. 4, we visualize the attention maps from the last block of ViT. These visualizations demonstrate that SERE produces more precise and less noisy attention maps than various methods, including MIM-based methods, i.e., MAE [14] and iBOT [18]. MAE produces noisy attention maps that highlight almost all tokens in an image. In comparison, the attention maps of SERE mainly focus on semantic objects.
For instance, the third column of Fig. 4 shows that SERE can locate the frog, but MAE primarily focuses on the background. Moreover, compared to iBOT and DINO, SERE generates attention maps that locate objects more accurately. For instance, in the seventh and eighth columns of Fig. 4, SERE discovers the persons missed by iBOT.

Fig. 6. The average differences of spatial (left) and channel (right) self-relations between two views on the val set of ImageNet-S. We show the calculation details in the supplementary.

Comparison between spatial self-relation and MIM. Both spatial self-relation and MIM act on the spatial dimension, but their effects significantly differ. MIM enhances the token-level representations, while spatial self-relation focuses on improving the ability to model inter-token relations. We support this argument with the following points: 1) As depicted in Fig. 4, SERE generates more precise and less noisy attention maps than MAE [14] and iBOT [18]. The attention maps of ViT can reflect the ability to model inter-token relations because attentions are calculated as token-level relations between query and key. Thus, this observation indicates that SERE provides models with a stronger ability to capture inter-token relations. 2) In Fig. 6, we show that SERE enhances the invariance of spatial self-relation to different image augmentations. 3) As shown in Tab. 6, SERE achieves consistent improvements compared to different MIM-based methods, strongly confirming the effectiveness of SERE compared to MIM. For example, cooperating with SERE improves iBOT by 0.4% Top-1 accuracy, as shown in Tab. 1.

5 CONCLUSIONS

In this paper, we propose a feature self-relation based self-supervised learning scheme to enhance the relation modeling ability of self-supervised ViT. Specifically, instead of directly using feature embedding as the representation, we propose to use spatial and channel self-relations of features as representations for self-supervised learning. Self-relation is orthogonal to feature embedding and further boosts existing self-supervised methods. We show that feature self-relation improves the self-supervised ViT at a fine-grained level, benefiting multiple downstream tasks, including image classification, semantic segmentation, object detection, and instance segmentation.

Acknowledgements. This work is funded by NSFC (NO. 62225604, 62176130), and the Fundamental Research Funds for the Central Universities (Nankai University, 070-63233089). Computation is supported by the Supercomputing Center of Nankai University.

REFERENCES
[1] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein et al., “Imagenet large scale visual recognition challenge,” Int. J. Comput. Vis., vol. 115, no. 3, pp. 211–252, 2015.
[2] T.-Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick, “Microsoft coco: Common objects in context,” in Eur. Conf. Comput. Vis., 2014, pp. 740–755.
[3] B. Zhou, H. Zhao, X. Puig, S. Fidler, A. Barriuso, and A. Torralba, “Scene parsing through ade20k dataset,” in IEEE Conf. Comput. Vis. Pattern Recog., July 2017.
[4] M. Caron, I. Misra, J. Mairal, P. Goyal, P. Bojanowski, and A. Joulin, “Unsupervised learning of visual features by contrasting cluster assignments,” in Adv. Neural Inform. Process. Syst., 2020.
[5] X. Chen and K. He, “Exploring simple siamese representation learning,” in IEEE Conf. Comput.
Vis. Pattern Recog., June 2021. J.-B. Grill, F. Strub, F. Altch´e, C. Tallec, P. H. Richemond, E. Buchatskaya, C. Doersch, B. ´Avila Pires, Z. Guo, M. G. Azar, B. Piot, K. Kavukcuoglu, R. Munos, and M. Valko, “Bootstrap your own latent - a new approach to self-supervised learning,” in Adv. Neural Inform. Process. Syst., 2020. K. He, H. Fan, Y. Wu, S. Xie, and R. Girshick, “Momentum contrast for unsupervised visual representation learning,” in IEEE Conf. Comput. Vis. Pattern Recog., June 2020. H. SUN and M. LI, “Enhancing unsupervised domain adaptation by exploiting the conceptual consistency of multiple self-supervised tasks,” SCIENCE CHINA Information Sciences, vol. 66, no. 4, pp. 142 101–, 2023. K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in IEEE Conf. Comput. Vis. Pattern Recog., 2016, pp. 770–778. [9] [8] [7] [10] S. Gao, Z.-Y. Li, Q. Han, M.-M. Cheng, and L. Wang, “Rf-next: Efficient receptive field search for convolutional neural networks,” IEEE Trans. Pattern Anal. Mach. Intell., 2022. [11] A. Dosovitskiy, L. Beyer, A. Kolesnikov, D. Weissenborn, X. Zhai, T. Unterthiner, M. Dehghani, M. Minderer, G. Heigold, S. Gelly, J. Uszkoreit, and N. Houlsby, “An image is worth 16x16 words: Trans- formers for image recognition at scale,” Int. Conf. Learn. Represent., 2021. [12] Z. Liu, Y. Lin, Y. Cao, H. Hu, Y. Wei, Z. Zhang, S. Lin, and B. Guo, “Swin transformer: Hierarchical vision transformer using shifted windows,” in Int. Conf. Comput. Vis., 2021. [13] M. Caron, H. Touvron, I. Misra, H. J´egou, J. Mairal, P. Bojanowski, and A. Joulin, “Emerging properties in self-supervised vision trans- formers,” in Int. Conf. Comput. Vis., 2021. [14] K. He, X. Chen, S. Xie, Y. Li, P. Doll´ar, and R. Girshick, “Masked autoencoders are scalable vision learners,” in IEEE Conf. Comput. Vis. Pattern Recog., June 2022, pp. 16 000–16 009. [15] X. Chen, S. Xie, and K. He, “An empirical study of training self- supervised vision transformers,” in Int. Conf. Comput. Vis., October 2021. [16] Z. Xie, Y. Lin, Z. Yao, Z. Zhang, Q. Dai, Y. Cao, and H. Hu, “Self-supervised learning with swin transformers,” arXiv preprint arXiv:2105.04553, 2021. [17] H. Lu, Y. Huo, M. Ding, N. Fei, and Z. Lu, “Cross-modal contrastive learning for generalizable and efficient image-text retrieval,” Machine Intelligence Research, pp. 1–14, 2023. J. Zhou, C. Wei, H. Wang, W. Shen, C. Xie, A. Yuille, and T. Kong, “ibot: Image bert pre-training with online tokenizer,” Int. Conf. Learn. Represent., 2022. [18] [19] T. Chen, S. Kornblith, M. Norouzi, and G. Hinton, “A simple frame- work for contrastive learning of visual representations,” in Interna- tional Conference on Machine Learning (ICML), 2020. [20] X. Wang, R. Zhang, C. Shen, T. Kong, and L. Li, “Dense contrastive learning for self-supervised visual pre-training,” in IEEE Conf. Com- put. Vis. Pattern Recog., 2021. [21] O. J. H´enaff, S. Koppula, J.-B. Alayrac, A. van den Oord, O. Vinyals, and J. a. Carreira, “Efficient visual pretraining with contrastive detec- tion,” in Int. Conf. Comput. Vis., October 2021, pp. 10 086–10 096. [22] M. Raghu, T. Unterthiner, S. Kornblith, C. Zhang, and A. Dosovitskiy, “Do vision transformers see like convolutional neural networks?” in Adv. Neural Inform. Process. Syst., A. Beygelzimer, Y. Dauphin, P. Liang, and J. W. Vaughan, Eds., 2021. [23] K. Kim, B. Wu, X. Dai, P. Zhang, Z. Yan, P. Vajda, and S. J. Kim, “Rethinking the self-attention in vision transformers,” in IEEE Conf. Comput. Vis. Pattern Recog. 
Worksh., June 2021, pp. 3071–3075. [24] Z. Cai and N. Vasconcelos, “Cascade r-cnn: Delving into high quality object detection,” in IEEE Conf. Comput. Vis. Pattern Recog., June 2018. [25] M. Everingham, L. Gool, C. K. I. Williams, J. Winn, and A. Zisserman, “The pascal visual object classes (voc) challenge,” Int. J. Comput. Vis., vol. 88, pp. 303–338, 2009. 0.10.2DINODINO+SERE0.61.2 [26] S. Gao, Z.-Y. Li, M.-H. Yang, M.-M. Cheng, J. Han, and P. Torr, “Large-scale unsupervised semantic segmentation,” arXiv preprint arXiv:2106.03149, 2021. [27] R. Zhang, P. Isola, and A. A. Efros, “Colorful image colorization,” in Eur. Conf. Comput. Vis. Springer, 2016, pp. 649–666. [28] G. Larsson, M. Maire, and G. Shakhnarovich, “Colorization as a proxy task for visual understanding,” in IEEE Conf. Comput. Vis. Pattern Recog., July 2017. [29] M. Noroozi and P. Favaro, “Unsupervised learning of visual represen- tions by solving jigsaw puzzles,” in Eur. Conf. Comput. Vis., 2016. [30] S. Gidaris, P. Singh, and N. Komodakis, “Unsupervised representation learning by predicting image rotations,” in Int. Conf. Learn. Represent., 2018. [31] C. Doersch, A. Gupta, and A. A. Efros, “Unsupervised visual repre- sentation learning by context prediction,” in Int. Conf. Comput. Vis., December 2015. [32] P. Vincent, H. Larochelle, Y. Bengio, and P.-A. Manzagol, “Extract- ing and composing robust features with denoising autoencoders,” in International Conference on Machine Learning (ICML), 2008. [33] D. Pathak, P. Krahenbuhl, J. Donahue, T. Darrell, and A. A. Efros, “Context encoders: Feature learning by inpainting,” in IEEE Conf. Comput. Vis. Pattern Recog., June 2016. [34] M. Noroozi, H. Pirsiavash, and P. Favaro, “Representation learning by learning to count,” in Int. Conf. Comput. Vis., Oct 2017. [35] Z. Wu, Y. Xiong, S. X. Yu, and D. Lin, “Unsupervised feature learning via non-parametric instance discrimination,” in IEEE Conf. Comput. Vis. Pattern Recog., June 2018. [36] Y. Zhao, G. Wang, C. Luo, W. Zeng, and Z.-J. Zha, “Self-supervised visual representations learning by contrastive mask prediction,” in Int. Conf. Comput. Vis., October 2021, pp. 10 160–10 169. [37] D. Dwibedi, Y. Aytar, J. Tompson, P. Sermanet, and A. Zisserman, “With a little help from my friends: Nearest-neighbor contrastive learning of visual representations,” in Int. Conf. Comput. Vis., October 2021, pp. 9588–9597. [38] C.-H. Yeh, C.-Y. Hong, Y.-C. Hsu, T.-L. Liu, Y. Chen, and Y. LeCun, “Decoupled contrastive learning,” arXiv preprint arXiv:2110.06848, 2021. [39] W.-C. Wang, E. Ahn, D. Feng, and J. Kim, “A review of predictive and contrastive self-supervised learning for medical images,” Machine Intelligence Research, pp. 483–513, 2023. [40] L. Wang, H. Xu, and W. Kang, “Mvcontrast: Unsupervised pretraining for multi-view 3d object recognition,” Machine Intelligence Research, pp. 1–12, 2023. [41] M. Caron, P. Bojanowski, A. Joulin, and M. Douze, “Deep clustering for unsupervised learning of visual features,” in Eur. Conf. Comput. Vis., 2018. [42] X. Zhan, J. Xie, Z. Liu, Y.-S. Ong, and C. C. Loy, “Online deep clustering for unsupervised representation learning,” in IEEE Conf. Comput. Vis. Pattern Recog., June 2020. [43] A. YM., R. C., and V. A., “Self-labelling via simultaneous clustering and representation learning,” in Int. Conf. Learn. Represent., 2020. [44] S. A. Koohpayegani, A. Tejankar, and H. Pirsiavash, “Mean shift for self-supervised learning,” in Int. Conf. Comput. Vis., October 2021, pp. 10 326–10 335. [45] A. Ermolov, A. Siarohin, E. 
Sangineto, and N. Sebe, “Whitening for self-supervised representation learning,” in International Conference on Machine Learning (ICML), 2021, pp. 3015–3024. [46] Y. Tian, X. Chen, and S. Ganguli, “Understanding self-supervised learning dynamics without contrastive pairs,” in International Con- ference on Machine Learning (ICML), 2020. [47] C. Ge, Y. Liang, Y. Song, J. Jiao, J. Wang, and P. Luo, “Revitalizing cnn attentions via transformers in self-supervised visual representation learning,” in Adv. Neural Inform. Process. Syst., 2021. [48] Q. Hu, X. Wang, W. Hu, and G.-J. Qi, “Adco: Adversarial contrast for efficient learning of unsupervised representations from self-trained negative adversaries,” in IEEE Conf. Comput. Vis. Pattern Recog., June 2021, pp. 1074–1083. [49] Z. Xie, Y. Lin, Z. Zhang, Y. Cao, S. Lin, and H. Hu, “Propagate yourself: Exploring pixel-level consistency for unsupervised visual representation learning,” in IEEE Conf. Comput. Vis. Pattern Recog., June 2021, pp. 16 684–16 693. [50] E. Xie, J. Ding, W. Wang, X. Zhan, H. Xu, P. Sun, Z. Li, and P. Luo, “Detco: Unsupervised contrastive learning for object detection,” in Int. Conf. Comput. Vis., October 2021, pp. 8392–8401. [51] Z. Dai, B. Cai, Y. Lin, and J. Chen, “Up-detr: Unsupervised pre- training for object detection with transformers,” in IEEE Conf. Comput. Vis. Pattern Recog., June 2021, pp. 1601–1610. [52] B. Roh, W. Shin, I. Kim, and S. Kim, “Spatilly consistent representa- tion learning,” in IEEE Conf. Comput. Vis. Pattern Recog., 2021. 11 [53] W. Wang, E. Xie, X. Li, D.-P. Fan, K. Song, D. Liang, T. Lu, P. Luo, and L. Shao, “Pyramid vision transformer: A versatile backbone for dense prediction without convolutions,” in Int. Conf. Comput. Vis., 2021. [54] B. Cheng, A. G. Schwing, and A. Kirillov, “Per-pixel classification is not all you need for semantic segmentation,” in Adv. Neural Inform. Process. Syst., 2021. [55] Y.-H. Wu, Y. Liu, X. Zhan, and M.-M. Cheng, “P2T: Pyramid pooling transformer for scene understanding,” IEEE Trans. Pattern Anal. Mach. Intell., 2022. [56] H. Touvron, M. Cord, M. Douze, F. Massa, A. Sablayrolles, and H. J´egou, “Training data-efficient image transformers & distillation through attention,” in International Conference on Machine Learning (ICML). PMLR, 2021, pp. 10 347–10 357. [57] C. Li, J. Yang, P. Zhang, M. Gao, B. Xiao, X. Dai, L. Yuan, and J. Gao, “Efficient self-supervised vision transformers for representation learning,” in Int. Conf. Learn. Represent., 2022. [58] P. Zhou, Y. Zhou, C. Si, W. Yu, T. K. Ng, and S. Yan, “Mugs: A multi-granular self-supervised learning framework,” in arXiv preprint arXiv:2203.14415, 2022. [59] Z. Li, Z. Chen, F. Yang, W. Li, Y. Zhu, C. Zhao, R. Deng, L. Wu, R. Zhao, M. Tang, and J. Wang, “MST: Masked self-supervised transformer for visual representation,” in Adv. Neural Inform. Process. Syst., 2021. [60] H. Bao, L. Dong, S. Piao, and F. Wei, “BEit: BERT pre-training of image transformers,” in Int. Conf. Learn. Represent., 2022. [61] S. Gao, P. Zhou, M.-M. Cheng, and S. Yan, “Towards sustainable self- supervised learning,” arXiv preprint arXiv:2210.11016, 2022. [62] X. Chen, M. Ding, X. Wang, Y. Xin, S. Mo, Y. Wang, S. Han, P. Luo, G. Zeng, and J. Wang, “Context autoencoder for self-supervised representation learning,” arXiv preprint arXiv:2202.03026, 2022. [63] Z. Xie, Z. Zhang, Y. Cao, Y. Lin, J. Bao, Z. Yao, Q. Dai, and H. Hu, “Simmim: A simple framework for masked image modeling,” in IEEE Conf. Comput. Vis. Pattern Recog., June 2022. [64] L. 
Wang, F. Liang, Y. Li, H. Zhang, W. Ouyang, and J. Shao, “Repre: Improving self-supervised vision transformer with reconstructive pre- training,” arXiv preprint arXiv:2201.06857, 2022. [65] S. Atito, M. Awais, and J. Kittler, “Sit: Self-supervised vision trans- former,” arXiv preprint arXiv:2104.03602, 2021. [66] C. Wei, H. Fan, S. Xie, C.-Y. Wu, A. Yuille, and C. Feichtenhofer, “Masked feature prediction for self-supervised visual pre-training,” arXiv preprint arXiv:2112.09133, 2021. [67] N. Carion, F. Massa, G. Synnaeve, N. Usunier, A. Kirillov, and S. Zagoruyko, “End-to-end object detection with transformers,” in Eur. Conf. Comput. Vis. Springer, 2020, pp. 213–229. [68] F. Tung and G. Mori, “Similarity-preserving knowledge distillation,” in Int. Conf. Comput. Vis., October 2019. [69] W. Park, D. Kim, Y. Lu, and M. Cho, “Relational knowledge distilla- tion,” in IEEE Conf. Comput. Vis. Pattern Recog., June 2019. [70] N. Passalis and A. Tefas, “Learning deep representations with proba- bilistic knowledge transfer,” in Eur. Conf. Comput. Vis., 2018. [71] B. Peng, X. Jin, J. Liu, D. Li, Y. Wu, Y. Liu, S. Zhou, and Z. Zhang, “Correlation congruence for knowledge distillation,” in Int. Conf. Comput. Vis., October 2019. [72] X. Li, J. Wu, H. Fang, Y. Liao, F. Wang, and C. Qian, “Local correlation consistency for knowledge distillation,” in Eur. Conf. Comput. Vis., 2020. [73] S. Zagoruyko and N. Komodakis, “Paying more attention to attention: Improving the performance of convolutional neural networks via atten- tion transfer,” in Int. Conf. Learn. Represent., 2017. [74] Y. Chen, N. Wang, and Z. Zhang, “Darkrank: Accelerating deep metric learning via cross sample similarities transfer,” AAAI Conference on Artificial Intelligence (AAAI), vol. 32, no. 1, Apr. 2018. [75] Y. Liu, K. Chen, C. Liu, Z. Qin, Z. Luo, and J. Wang, “Structured knowledge distillation for semantic segmentation,” in IEEE Conf. Comput. Vis. Pattern Recog., June 2019. [76] T. He, C. Shen, Z. Tian, D. Gong, C. Sun, and Y. Yan, “Knowledge adaptation for efficient semantic segmentation,” in IEEE Conf. Comput. Vis. Pattern Recog., June 2019. [77] C. Yang, H. Zhou, Z. An, X. Jiang, Y. Xu, and Q. Zhang, “Cross-image relational knowledge distillation for semantic segmentation,” in IEEE Conf. Comput. Vis. Pattern Recog., June 2022, pp. 12 319–12 328. [78] M. Hamilton, Z. Zhang, B. Hariharan, N. Snavely, and W. T. Freeman, “Unsupervised semantic segmentation by distilling feature correspon- dences,” in Int. Conf. Learn. Represent., 2022. [79] O. Sim´eoni, A. Iscen, G. Tolias, Y. Avrithis, and O. Chum, “Unsuper- vised object discovery for instance recognition,” in Winter Conference on Applications of Computer Vision, 2018. 12 Zhong-Yu Li is a Ph.D. student from the college of computer science, Nankai university. He is supervised via Prof. Ming-Ming cheng. His re- search interests include deep learning, machine learning and computer vision. Shanghua Gao is a Ph.D. candidate in Me- dia Computing Lab at Nankai University. He is supervised via Prof. Ming-Ming Cheng. His research interests include computer vision and representation learning. Ming-Ming Cheng received his PhD degree from Tsinghua University in 2012, and then worked with Prof. Philip Torr in Oxford for 2 years. Since 2016, he is a full professor at Nankai University, leading the Media Comput- ing Lab. His research interests include com- puter vision and computer graphics. He re- ceived awards, including ACM China Rising Star Award, IBM Global SUR Award, etc. 
He is a senior member of the IEEE and on the editorial boards of IEEE TPAMI and IEEE TIP. [80] O. Sim´eoni, G. Puy, H. V. Vo, S. Roburin, S. Gidaris, A. Bursuc, P. P´erez, R. Marlet, and J. Ponce, “Localizing objects with self- supervised transformers and no labels,” in Brit. Mach. Vis. Conf., November 2021. [81] Y. Wang, X. Shen, S. X. Hu, Y. Yuan, J. L. Crowley, and D. Vaufreydaz, “Self-supervised transformers for unsupervised object discovery using normalized cut,” in IEEE Conf. Comput. Vis. Pattern Recog., June 2022. [82] M. Ki, Y. Uh, J. Choe, and H. Byun, “Contrastive attention maps for self-supervised co-localization,” in Int. Conf. Comput. Vis., October 2021, pp. 2803–2812. I. Kakogeorgiou, S. Gidaris, B. Psomas, Y. Avrithis, A. Bursuc, K. Karantzalos, and N. Komodakis, “What to hide from your stu- dents: Attention-guided masked image modeling,” arXiv preprint arXiv:2203.12719, 2022. [83] [84] Y. Kalantidis, C. Mellina, and S. Osindero, “Cross-dimensional weight- ing for aggregated deep convolutional features,” in Eur. Conf. Comput. Vis. Worksh., 2016, pp. 685–701. [85] L. Gatys, A. S. Ecker, and M. Bethge, “Texture synthesis using convolutional neural networks,” in Adv. Neural Inform. Process. Syst., vol. 28, 2015. [86] T.-Y. Lin, A. RoyChowdhury, and S. Maji, “Bilinear cnn models for fine-grained visual recognition,” in Int. Conf. Comput. Vis., December 2015. J. Zbontar, L. Jing, I. Misra, Y. LeCun, and S. Deny, “Barlow twins: Self-supervised learning via redundancy reduction,” arXiv preprint arXiv:2103.03230, 2021. [87] [88] S. Ioffe and C. Szegedy, “Batch normalization: Accelerating deep network training by reducing internal covariate shift,” in International Conference on Machine Learning (ICML), 2015, pp. 448–456. [89] A. F. Agarap, “Deep learning using rectified linear units (relu),” arXiv preprint arXiv:1803.08375, 2018. [90] L. Liu, Q. Huang, S. Lin, H. Xie, B. Wang, X. Chang, and X. Liang, “Exploring inter-channel correlation for diversity-preserved knowledge distillation,” in Int. Conf. Comput. Vis., October 2021, pp. 8271–8280. [91] A. Krizhevsky, I. Sutskever, and G. E. Hinton, “Imagenet classification with deep convolutional neural networks,” in Adv. Neural Inform. Process. Syst., 2012. I. Loshchilov and F. Hutter, “Decoupled weight decay regularization,” in Int. Conf. Learn. Represent., 2019. [92] [93] A. Krizhevsky and G. Hinton, “Learning multiple layers of features from tiny images,” University of Toronto, Tech. Rep. 0, 2009. [94] M.-E. Nilsback and A. Zisserman, “Automated flower classification over a large number of classes,” Indian Conference on Computer Vision, Graphics and Image Processing (ICVGIP), pp. 722–729, 2008. J. Krause, M. Stark, J. Deng, and L. Fei-Fei, “3d object representations for fine-grained categorization,” in Int. Conf. Comput. Vis. Worksh., 2013. [95] [96] G. Van Horn, O. Mac Aodha, Y. Song, Y. Cui, C. Sun, A. Shepard, H. Adam, P. Perona, and S. Belongie, “The inaturalist species clas- sification and detection dataset,” in IEEE Conf. Comput. Vis. Pattern Recog., June 2018. [97] T. Xiao, Y. Liu, B. Zhou, Y. Jiang, and J. Sun, “Unified perceptual parsing for scene understanding,” in Eur. Conf. Comput. Vis., Septem- ber 2018. [98] M.-M. Cheng, N. J. Mitra, X. Huang, P. H. S. Torr, and S.-M. Hu, “Global contrast based salient region detection,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 37, no. 3, pp. 569–582, 2015. [99] S. Purushwalkam Shiva Prakash and A. 
Gupta, “Demystifying contrastive self-supervised learning: Invariances, augmentations and dataset biases,” Adv. Neural Inform. Process. Syst., vol. 33, 2020. [100] M. Patrick, Y. M. Asano, P. Kuznetsova, R. Fong, J. a. F. Henriques, G. Zweig, and A. Vedaldi, “On compositions of transformations in contrastive self-supervised learning,” in Int. Conf. Comput. Vis., 2021. [101] I. Misra and L. van der Maaten, “Self-supervised learning of pretext- invariant representations,” in IEEE Conf. Comput. Vis. Pattern Recog., June 2020. [102] A. Bardes, J. Ponce, and Y. LeCun, “VICReg: Variance-invariance- covariance regularization for self-supervised learning,” in Int. Conf. Learn. Represent., 2022. [103] L. Ericsson, H. Gouk, and T. M. Hospedales, “Why do self-supervised models transfer? investigating the impact of invariance on downstream tasks,” 2022. [104] X. Wang, K. He, and A. Gupta, “Transitive invariance for self- supervised visual representation learning,” in Int. Conf. Comput. Vis., 2017.
arXiv:1005.0964v1 [astro-ph.SR] 6 May 2010

Chemical composition of the old globular clusters NGC 1786, NGC 2210 and NGC 2257 in the Large Magellanic Cloud 1

Alessio Mucciarelli
Dipartimento di Astronomia, Università degli Studi di Bologna, Via Ranzani, 1 - 40127 Bologna, ITALY
[email protected]

Livia Origlia
INAF - Osservatorio Astronomico di Bologna, Via Ranzani, 1 - 40127 Bologna, ITALY
[email protected]

Francesco R. Ferraro
Dipartimento di Astronomia, Università degli Studi di Bologna, Via Ranzani, 1 - 40127 Bologna, ITALY
[email protected]

1 Based on observations obtained at Paranal ESO Observatory under proposal 080.D-0368(A)

ABSTRACT

This paper presents the chemical abundance analysis of a sample of 18 giant stars in 3 old globular clusters in the Large Magellanic Cloud, namely NGC 1786, NGC 2210 and NGC 2257. The derived iron content is [Fe/H]= –1.75 ± 0.01 dex (σ= 0.02 dex), –1.65 ± 0.02 dex (σ= 0.04 dex) and –1.95 ± 0.02 dex (σ= 0.04 dex) for NGC 1786, NGC 2210 and NGC 2257, respectively. All the clusters exhibit similar abundance ratios, with enhanced values (∼ +0.30 dex) of [α/Fe], consistent with the Galactic Halo stars, thus indicating that these clusters have formed from a gas enriched by Type II SNe. We also found evidence that r-process are the main channel of production of the measured neutron capture elements (Y, Ba, La, Nd, Ce and Eu). In particular the quite large enhancement of [Eu/Fe] (∼ +0.70 dex) found in these old clusters clearly indicates a relevant efficiency of the r-process mechanism in the LMC environment.

1. Introduction

In the last decade, the advent of the high resolution spectrographs mounted on the 8-10 m telescopes has allowed us to extend the study of the chemical composition of individual Red Giant Branch (RGB) stars outside our Galaxy up to dwarf and irregular galaxies of the Local Group. Chemical analyses of RGB stars are now available for several isolated dwarf spheroidal (dSph) galaxies as Sculptor, Fornax, Carina, Leo I, Draco, Sextans and Ursa Minor (Shetrone, Côté & Sargent 2001; Shetrone et al. 2003; Letarte et al. 2006) and the Sagittarius (Sgr) remnant (Bonifacio et al. 2000; Monaco et al. 2005, 2007; Sbordone et al. 2007). As a general trend, these studies reveal that the chemical abundance patterns in the extragalactic systems do not resemble those observed in the Galaxy, with relevant differences in the [α/Fe] 2, [Ba/Fe] and [Ba/Y] ratios, thus suggesting different star formation history and chemical evolution (see e.g. Venn et al. 2004; Geisler et al. 2007; Tolstoy, Hill & Tosi 2009).

Unlike the dSphs, the irregular galaxies as the Large Magellanic Cloud (LMC) contain large amount of gas and dust, showing an efficient ongoing star-formation activity. The LMC globular clusters (GCs) span a wide age/metallicity range, with both old, metal-poor and young, metal-rich objects, due to its quite complex star formation history. Several events of star formation occurred: the first one 13 Gyr ago and 4 main bursts at later epochs, 2 Gyr, 500 Myr, 100 Myr and 12 Myr ago (Harris & Zaritsky 2009). Until the advent of the new generation of spectrographs, the study of the chemical composition of the LMC stars was restricted to red and blue supergiants (Hill et al. 1995; Korn et al. 2000, 2002), providing information only about the present-day chemical composition. The first studies based on high resolution spectra of RGB stars (Hill et al. 2000; Johnson et al. 2006; Pompeia et al.
2008) provided first and crucial information about the early chemical enrichment and nucleosynthesis. ∼ ∼ Of the 300 compact stellar clusters listed by Kontizas et al. (1990), metallicity determinations from Ca II triplet are available for some tens of objects (Olszewski et al. 1991; Grocholski et al. 2006) and only for 7 clusters high-resolution spectroscopic analysis have been carried out (Hill et al. 2000; Johnson et al. 2006). With the final aim of reconstructing the formation history of star clusters in the LMC, a few years ago we started a systematic spectroscopic screening of giants in a sample of LMC GCs with different ages. In the first two papers of the series (Ferraro et al. 2006; Mucciarelli et al. 2008) we presented the chemical analysis of 20 elements for 4 intermediate-age LMC clusters (namely, NGC 1651, 1783, 1978, 2173). Moreover, Mucciarelli et al. (2009) discussed the iron content and the abundances of O, Na, Mg and Al for 3 old LMC clusters (namely NGC 1786, 2210 and 2257), discovering anti- correlation patterns similar to those observed in Galactic clusters. Here we extend the abundance analysis to additional 13 chemical elements in these 3 LMC clusters, also performing a detailed com- parison with stellar populations in our Galaxy (both in the field and in globulars) and in nearby dSphs. The paper is organized as follows: Section 2 presents the dataset and summarized the adopted 2We adopt the usual spectroscopic notations that [X1/X2]= lg(NX1 /NX2 )∗ - lg(NX1 /NX2 )⊙ and that lg(NX1 )= lg(NX1 /NH )+12. – 3 – procedure to derive radial velocities; Section 3 describes the methodology used to infer the chemical abundances; Section 4 discusses the uncertainties associated to the chemical abundances. Finally, Section 5 and 6 present and discuss the results of the chemical analysis. 2. Observational data The observations were carried out with the multi-object spectrograph FLAMES (Pasquini et al. 2002) at the UT2/Kuyeen ESO-VLT (25-27 December 2007). We used FLAMES in the UVES+GIRAFFE combined mode, feeding 8 fibers to the UVES high-resolution spectrograph and 132 to the GI- RAFFE mid-resolution spectrograph. The UVES spectra have a wavelength coverage between 4800 and 6800 ˚A with a spectral resolution of λ/∆λ 45000. We used the following GIRAFFE gratings: HR 11 (with a wavelength range between 5597 and 5840 ˚A and a resolution of 24000) ∼ and HR 13 (with a wavelength range between 6120 and 6405 ˚A and a resolution of 22000). These 2 setups have been chosen in order to measure several tens of iron lines, α-elements and to sample Na and O absorption lines. Target stars have been selected on the basis of (K, J-K) Color-Magnitude Diagrams (CMDs), as shown in Fig. 1, from near infrared observations performed with SOFI@NTT (A. Mucciarelli et al. 2010, in preparation). For each exposure 2 UVES and a ten of GIRAFFE fibers have been used to sample the sky and allow an accurate subtraction of the sky level. ∼ ∼ ∼ The spectra have been acquired in series of 8-9 exposures of 45 min each and pre-reduced independently by using the UVES and GIRAFFE ESO pipeline 3, including bias subtraction, flat- fielding, wavelength calibration, pixel re-sampling and spectrum extraction. For each exposure, the sky spectra have been combined together; each individual sky spectrum has been checked to exclude possible contaminations from close stars. Individual stellar spectra have been sky subtracted by using the corresponding median sky spectra, then coadded and normalized. 
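As a schematic illustration of this reduction step (not the actual UVES/GIRAFFE pipeline), the following NumPy sketch median-combines the sky fibres of each exposure, subtracts the result from the corresponding stellar spectrum, and coadds the exposures; array shapes, names, and the crude normalization are assumptions made for this example.

import numpy as np

def sky_subtract_and_coadd(star_exposures, sky_exposures):
    # star_exposures: (n_exp, n_pix) spectra of one target, one row per exposure.
    # sky_exposures:  (n_exp, n_sky, n_pix) sky-fibre spectra of the same exposures.
    cleaned = []
    for star, skies in zip(star_exposures, sky_exposures):
        median_sky = np.median(skies, axis=0)   # combine the sky fibres of this exposure
        cleaned.append(star - median_sky)       # subtract the sky level
    coadd = np.sum(cleaned, axis=0)             # coadd the sky-subtracted exposures
    return coadd / np.median(coadd)             # crude normalization (a placeholder for continuum fitting)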
We note that the sky level is only a few percents of the stars level, due to brightness of our targets, introducing only a negligible amount of noise in the stellar spectra. Note that the fibre to fibre relative transmission has been taken into account during the pre-reduction procedure. The accuracy of the wavelength calibration has been checked by measuring the position of some telluric OH and O2 emission lines selected from the catalog of Osterbrock et al. (1996). 3http://www.eso.org/sci/data-processing/software/pipelines/ – 4 – 2.1. Radial velocities Radial velocities have been measured by using the IRAF 4 task FXCOR, performing a cross- correlation between the observed spectra and high S/N - high resolution spectrum of a template star of similar spectral type. For our sample we selected a K giant (namely HD-202320) whose spectrum is available in the ESO UVES Paranal Observatory Project database 5 (Bagnulo et al. 2003). Then, heliocentric corrections have been computed with the IRAF task RVCORRECT. Despite the large number of availables fibers, only a few observed stars turned out to be cluster-member, due to the small size of the clusters within the FLAMES field of view. We selected the cluster-member stars according to their radial velocity, distance from the cluster center and position on the CMD. Finally, we identified a total of 7 stars in NGC 1786, 5 stars in NGC 2210 and 6 stars in NGC 2257. We derived average radial velocities of Vr= 264.3 km s−1 (σ= 5.7 km s−1), 337.5 km s−1 (σ= 1.9 km s−1) and 299.4 km s−1 (σ= 1.5 km s−1) for NGC 1786, 2210 and 2257, respectively. The formal error 0.5–1.0 km s−1. The derived radial velocities associated to the cross-correlation procedure is of are consistent with the previous measures, both from integrated spectra (Dubath, Meylan & Mayor 1997) and from low/high-resolution individual stellar spectra (Olszewski et al. 1991; Hill et al. 2000; In fact, for NGC 1786 Olszewski et al. (1991) estimated 264.4 km s−1 Grocholski et al. 2006). (σ=4.1 km s−1) from 2 giant stars, while Dubath, Meylan & Mayor (1997) provide a value of 262.6 km s−1. For NGC 2210 the radial velocity provided by Olszewski et al. (1991) is of 342.6 km s−1 (σ=7.8 km s−1), while Dubath, Meylan & Mayor (1997) and Hill et al. (2000) obtained radial velocities of 338.6 and 341.7 km s−1 (σ=2.7 km s−1), respectively. For NGC 2257, Grocholski et al. (2006) provided a mean value of 301.6 km s−1 (σ=3.3 km s−1) and Olszewski et al. (1991) of 313.7 km s−1 (σ=2.1 km s−1). For all the targets Table 1 lists the S/N computed at 6000 ˚A for the UVES spectra and at 5720 and 6260 ˚A for the GIRAFFE-HR11 and -HR13 spectra, respectively. Also, we report Vr, dereddened K0 magnitudes and (J K)0 colors and the RA and Dec coordinates (onto 2MASS astrometric system) of each targets. ∼ − 3. Chemical analysis Similarly to what we did in previous works (Ferraro et al. 2006; Mucciarelli et al. 2008, 2009), the chemical analysis has been carried out using the ROSA package (developed by R. G. Gratton, private communication). We derived chemical abundances from measured equivalent widths (EW) of single, unblended lines, or by performing a χ2 minimization between observed and synthetic line profiles for those elements (O, Ba, Eu) for which this approach is mandatory (in particular, to take into account the close blending between O and Ni at 6300.3 ˚A and the hyperfine splitting 4Image Reduction and Analysis facility. 
IRAF is distributed by the National Optical Astronomy Observatories, which is operated by the association of Universities for Research in Astronomy, Inc., under contract with the National Science Foundation. 5http://www.sc.eso.org/santiago/uvespop/ – 5 – for Ba and Eu lines). We used the solar-scaled Kurucz model atmospheres with overshooting and assumed that local thermodynamical equilibrium (LTE) holds for all species. Despite the majority of the available abundance analysis codes works under the assumption of LTE, transitions of some elements are known to suffer from large NLTE effects (Asplund 2005). Our Na abundances were corrected for these effects by interpolating the correction grid computed by Gratton et al. (1999). The line list employed here is described in details in Gratton et al. (2003) and Gratton et al. (2007) including transitions for which accurate laboratory and theoretical oscillator strengths are available and has been updated for some elements affected by hyperfine structure and isotopic splitting. Eu abundance has been derived by the spectral synthesis of the Eu II line at 6645 ˚A, in order to take into account its quite complex hyperfine structure, with a splitting in 30 sublevels. Its hyperfine components have been computed using the code LINESTRUC, described by Wahlgren (2005) and adopting the hyperfine constants A and B by Lawler et al. (2001) and the meteoritic isotopic ratio, being Eu in the Sun built predominantly through r-process. For sake of homogeneity we adopted the log gf by Biemont et al. (1982) already used in Mucciarelli et al. (2008) instead of the oscillator strength by Lawler et al. (2001). Ba II lines have relevant hyperfine structure components concerning the odd-number isotopes 135Ba and 137Ba, while the even-number isotopes have no hyperfine splitting; moreover, there are isotopic wavelength shifts between all the 5 Ba isotopes. In order to include these effects, we employed the linelist for the Ba II lines computed by Prochaska (2000) that adopted a r-process isotopic mixture. We note that the assumption of the r-process isotopic mixture instead of the solar-like isotopic mixture is not critical for the 3 Ba II lines analyzed here (namely, 5853, 6141 and 6496 ˚A), because such an effect is relevant for the Ba II resonance lines (see Table 4 by Sneden et al. 1996). For the La abundances we have not taken into account the hyperfine structure because the observed lines are too weak (typically 15-30 m ˚A) and located in the linear part of the curve of growth where the hyperfine splitting is negligible, changing the line profile but preserving the EW. Abundances of V and Sc include corrections for hyperfine structure obtained adopting the linelist by Whaling et al. (1985) and Prochaska & McWilliam (2000). In a few stars only upper limits for certain species (i.e. O, Al, La and Ce) can be measured. For O, upper limits have been obtained by using synthetic spectra (as described in Mucciarelli et al. 2009), while for Al, La and Ce computing the abundance corresponding to the minimum measurable EW (this latter has been obtained as 3 times the uncertainty derived by the classical Cayrel formula, see Section 3.1). As reference solar abundances, we adopted the ones computed by Gratton et al. (2003) for light Z-odd, α and iron-peak elements, using the same linelist employed here. For neutron-capture elements (not included in the solar analysis by Gratton et al. 2003) we used the photospheric solar values by Grevesse & Sauval (1998). 
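For the elements derived via spectral synthesis (O, Ba and Eu), the fit described above amounts to a χ² comparison between the observed line profile and synthetic profiles computed for a grid of trial abundances. The sketch below shows only that comparison step; the synthetic grid itself (model atmospheres, blends, hyperfine components) is assumed to be produced elsewhere, and the fitting window and parabolic refinement are illustrative choices, not the authors' actual code.

```python
import numpy as np

def best_abundance(wave, obs_flux, obs_err, abund_grid, synth_grid, wmin, wmax):
    """Pick the trial abundance whose synthetic profile best matches the data.

    abund_grid : trial abundances (1-D, sorted)
    synth_grid : synthetic spectra on the same wavelength grid, one per trial
    """
    m = (wave >= wmin) & (wave <= wmax)          # fitting window around the line
    chi2 = np.array([np.sum(((obs_flux[m] - s[m]) / obs_err[m]) ** 2)
                     for s in synth_grid])
    i = int(np.argmin(chi2))
    if 0 < i < len(abund_grid) - 1:
        # parabolic refinement of the minimum over the three closest grid points
        x = np.asarray(abund_grid[i - 1:i + 2], dtype=float)
        a, b, _ = np.polyfit(x, chi2[i - 1:i + 2], 2)
        return -b / (2.0 * a)
    return float(abund_grid[i])
```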
All the adopted solar values are reported in Tables 3, 4 and 5. – 6 – 3.1. Equivalent Widths All EWs have been measured by using an interactive procedure developed at our institute. Such a routine allows to extract a spectral region of about 15-25 ˚A around any line of interest. Over this portion of spectrum we apply a σ-rejection algorithm to remove spectral lines and cosmic rays. The local continuum level for any line has been estimated by the peak of the flux distribution obtained over the surviving points after the σ-rejection. Finally the lines have been fitted with a gaussian profile (rejecting those lines with a FWHM strongly discrepant with respect to the nominal spectral resolution or with flux residuals asymmetric or too large) and the best fits are then integrated over the selected region to give the EW. We excluded from the analysis lines with lg (EW/λ) <–4.5, because such strong features can be dominated by the contribution of the wings and too sensitive to the velocity fields. We have also rejected lines weaker than lg (EW/λ)=–5.8 because they are too noisy. In order to estimate the reliability and uncertainties of the EW measurements, we performed some sanity checks by using the EWs of all the measured lines, excluding only O, Na, Mg, and Al lines, due to their intrinsic star-to-star scatter (see Mucciarelli et al. (2009) and Sect.5): • • • The classical formula by Cayrel (1988) provides an approximate method to estimate the uncertainty of EW measurements, as a function of spectral parameters (pixel scale, FWHM and S/N). For the UVES spectra, we estimated an uncertainty of 1.7 m ˚A at S/N= 50 , while for the GIRAFFE spectra an uncertainty of 2 m ˚A at S/N= 100. As pointed out by Cayrel (1988) this estimate should be considered as a lower limit for the actual EW uncertainty, since the effect of the continuum determination is not included. In each cluster we selected a pair of stars with similar atmospheric parameters and compared the EW measured for a number of absorption lines in the UVES spectra. The final scatter (obtained diving the dispersion by √2) turns out to be 5.6, 8.3 and 7.6 m ˚A for NGC 1786, 2210 and 2257, respectively. We compared the EWs of two target stars with similar atmospherical parameters observed with UVES (NGC 1786-1248) and GIRAFFE (NGC 1786-978), in order to check possible systematic errors in the EW measurements due to the use of different spectrograph configu- rations. We found a scatter of 6.5 m ˚A. Within the uncertainties arising from the different S/N conditions and the small numbers statistic, we do not found relevant systematic discrepancies between the EWs derived from the two different spectral configurations. 3.2. Atmospherical parameters Table 2 lists the adopted atmospherical parameters for each target stars and the corresponding [Fe/H] abundance ratio. 
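The equivalent-width procedure of Sect. 3.1 above (local continuum from the peak of a σ-rejected flux distribution, Gaussian fit, integration of the best fit, and the lg(EW/λ) quality cuts) can be summarised by the toy Python routine below. It is an illustration only, not the interactive routine developed at the authors' institute: the window size, clipping threshold, histogram binning and initial guesses are arbitrary, and the FWHM/asymmetry rejection step is only noted in a comment.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian_line(x, depth, center, sigma):
    # absorption line modelled as a Gaussian dip on a normalised continuum
    return 1.0 - depth * np.exp(-0.5 * ((x - center) / sigma) ** 2)

def measure_ew(wave, flux, line_center, window=10.0, nsigma=2.0):
    sel = np.abs(wave - line_center) < window    # ~15-25 A region around the line
    w, f = wave[sel], flux[sel]

    # sigma-rejection to strip lines and cosmic rays, then take the peak of the
    # flux distribution of the surviving points as the local continuum
    surviving = f.copy()
    for _ in range(5):
        med, std = np.median(surviving), np.std(surviving)
        surviving = surviving[np.abs(surviving - med) < nsigma * std]
    hist, edges = np.histogram(surviving, bins=20)
    continuum = 0.5 * (edges[np.argmax(hist)] + edges[np.argmax(hist) + 1])

    # Gaussian fit of the normalised profile (a real run would also reject fits
    # with a FWHM far from the spectral resolution or with asymmetric residuals)
    popt, _ = curve_fit(gaussian_line, w, f / continuum,
                        p0=[0.2, line_center, 0.1])
    depth, center, sigma = popt
    ew_mA = depth * abs(sigma) * np.sqrt(2.0 * np.pi) * 1.0e3   # EW in mA

    # line-strength cuts adopted in the paper: keep -5.8 < lg(EW/lambda) < -4.5
    lg_red_ew = np.log10(ew_mA * 1.0e-3 / center)
    return ew_mA if -5.8 < lg_red_ew < -4.5 else None
```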
The best-model atmosphere for each target star has been chosen in order – 7 – to satisfy simultaneously the following constraints: (1) Tef f must be able to well-reproduce the excitation equilibrium, without any significant trend between abundances derived from neutral iron lines and the excitation potential; (2) log g is chosen by forcing the difference between log N(Fe I) and log N(Fe II) to be equal to the solar value, within the quoted uncertainties; (3) the microturbulent velocity (vt) has been obtained by erasing any trend of Fe I lines abundances with their expected line strengths, according with the prescription of Magain (1984); (4) the global metallicity of the model must reproduce the iron content [Fe/H]; (5) the abundance from the Fe I lines should be constant with wavelength. Initial guess values Tef f and log g have been computed from infrared photometry, obtained with SOFI@NTT (A. Mucciarelli et al. 2010, in preparation). Effective temperatures were derived from dereddened (J K)0-Tef f calibration by Alonso et al. (1999, 2001). The transformations between photometric systems have been obtained from Carpenter (2001) and Alonso et al. (1999). For all the target clusters we adopted the reddening values reported by Persson et al. (1983). Photometric gravities have been calculated from the classical equation: K)0 colors by means of the (J − − log (g/g⊙) = 4 · log(Tef f /Tef f,⊙) + log (M/M⊙) 0.4 · − (Mbol − Mbol,⊙) by adopting the solar reference values according to IAU recommendations (Andersen 1999), the photometric Tef f , a distance modulus of 18.5 and a mass value of M=0.80 M⊙, obtained with the isochrones of the Pisa Evolutionary Library (Cariulo, Degl’Innocenti & Castellani 2004) for an age of 13 Gyr and a metal fraction of Z= 0.0006. The photometric estimates of the atmospherical parameters have been optimized spectroscop- ically following the procedure described above. Generally, we find a good agreement between the photometric and spectroscopic Tef f scales, with an average difference T spec ef f = -14 K (σ= 59 K) and only small adjustments were needed (for sake of completeness we report in Table 2 both the spectroscopic and photometric Tef f ). Changes in gravities are of 0.2-0.3 dex, consistent within the uncertainty of the adopted stellar mass, distance modulus and bolometric corrections. ef f -T phot ± An example of the lack of spurious trends between the Fe I number density and the expected line strength, the wavelength and the excitational potential is reported in Fig. 2 (linear best-fits and the corresponding slopes with associated uncertainties are labeled). 4. Error budget In the computation of errors, we have taken into account the random component related mainly to the EW measurement uncertainty and the systematic component due to the atmospheric parameters. The total uncertainty has been derived as the sum in quadrature of random and systematic uncertainties. – 8 – (i) Random errors. Under the assumption that each line provides an independent indication of the abundance of a species, the line-to-line scatter normalized to the root mean square of the observed lines number (σ/√Nlines) is a good estimate of the random error, arising mainly from the uncertainties in the EWs (but including also secondary sources of uncertainty, as the line-to-line errors in the employed log gf). 
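As a worked illustration of the photometric gravity estimate described in Sect. 3.2 above (classical relation between log g, Teff, mass and bolometric magnitude, with M = 0.80 M⊙ and a distance modulus of 18.5), a small Python helper is given below with the standard sign convention. The solar reference values and the K-band bolometric correction are not listed in the paper, so the numbers used here are commonly adopted values and an illustrative BC_K, not the authors' exact inputs.

```python
import math

# Commonly adopted solar reference values; the paper cites the IAU
# recommendations (Andersen 1999) but does not list the numbers, so these
# are assumptions made for illustration.
TEFF_SUN = 5777.0   # K
LOGG_SUN = 4.44     # dex (cgs)
MBOL_SUN = 4.75     # mag

def photometric_logg(teff, k0, bc_k, mass=0.80, dist_mod=18.5):
    """Classical photometric gravity:
    log(g/g_sun) = log(M/M_sun) + 4 log(Teff/Teff_sun) + 0.4 (Mbol - Mbol_sun).

    teff     : effective temperature of the star [K]
    k0       : dereddened K-band magnitude
    bc_k     : K-band bolometric correction (e.g. from the Alonso et al.
               calibration; treated here as a given input)
    mass     : stellar mass in solar units (0.80 Msun for a ~13 Gyr giant)
    dist_mod : adopted true distance modulus of the LMC
    """
    mbol = k0 + bc_k - dist_mod                  # absolute bolometric magnitude
    return (LOGG_SUN
            + math.log10(mass)
            + 4.0 * math.log10(teff / TEFF_SUN)
            + 0.4 * (mbol - MBOL_SUN))

# Illustrative call for a Teff ~ 4250 K giant with K0 = 13.5 and an assumed
# BC_K ~ 2.3 mag; gives log g ~ 0.8, of the same order as the values in Table 2.
# print(photometric_logg(4250.0, 13.5, 2.3))
```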
Only for elements with less than 5 available lines, we adopted as random error the line-to-line scatter obtained from the iron lines normalized for the root mean square of the number of lines. These internal errors are reported in Tables 2 - 5 for each abundance ratio and they are of the order of 0.01–0.03 dex for [Fe/H] (based on the highest number of lines) and range from 0.10 dex for the other elements. 0.02 dex to ∼ ∼ (ii) Systematic errors. The classical approach to derive the uncertainty due to the choice of the atmospherical parameters is to re-compute the abundances by altering each parameter of the corresponding error and fixing the other quantity each time. Then, the resulting abundance differences are summed in quadrature, providing the total uncertainty. In the case of our analysis, where the spectroscopic method to infer the parameters has been adopted, Tef f , log g and vt turn out to be not independent each other. Variations of Tef f affect in different ways Fe I and Fe II abundances, and imply related changes in log g to compensate. Moreover, strongest lines have typically lower excitation potential, and any change in Tef f requires a change in vt. The method to sum in quadrature the abundance uncertainties under the assumption that Tef f , log g and vt are uncorrelated is unable to take into account the covariance terms due to the dependencies among the atmospherical parameters. The risk to use this technique, when the spectroscopical optimization is adopted, is to overestimate this source of error, providing only a conservative upper limit, especially in cases of abundances with relevant covariance terms. A more realistic estimate of the effective error due to the atmospherical parameters, can be obtained with the procedure described by Cayrel et al. (2004). We repeated the analysis of a target star 100 (namely, NGC 1786-2310, chosen as representative of the entire sample) varying Tef f by K with respect to the best model Tef f and repeating the entire procedure to optimize the other parameters, deriving new best values for log g and vt: we obtained log g= 0.9 and vt= 2 km s−1 when we increase Tef f of 100 K, and log g= 0.3 and vt= 1.85 km s−1 when we decrease Tef f of 100 K. The two variations are basically symmetric and we chose as final error the absolute value of the largest one. Table 6 lists the differences between the new analysis and the original one for each abundance ratio. This method naturally includes both the errors due to the parameters and the covariance terms due to the interdependence between the parameters (see also McWilliam et al. 1995, for a complete discussion about the covariance terms). ± 5. Chemical abundance results Tables 3 - 5 list the derived abundance ratios for all the studied stars. Table 7 summarizes the cluster average abundance ratios, together with the dispersion around the mean. Figures 3 - 7 show the plot of some abundance ratios as a function of the iron content obtained in this work – 9 – (as grey triangles) and in Mucciarelli et al. (2008) (as white triangles). In these figures abundances obtained for Galactic field stars (small grey circles), GGCs (squares), dSph’s stars (asterisks) and for the sample of old LMC clusters by Johnson et al. (2006) (black points) are also plotted for comparison. All the reference sources are listed in Table 8. 
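For reference, the error budget of Sect. 4 above can be written compactly as follows (notation mine, summarising the procedure rather than quoting the paper):

```latex
\sigma_{\rm random} \simeq \frac{\sigma_{\rm lines}}{\sqrt{N_{\rm lines}}},
\qquad
\sigma_{\rm total} = \sqrt{\sigma_{\rm random}^{2} + \sigma_{\rm param}^{2}}
```

where σ_lines is the line-to-line scatter, N_lines the number of measured lines, and σ_param the average abundance variation obtained from the ±100 K re-analysis listed in Table 6.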
For sake of homogeneity and in order to avoid possible systematic effects in the comparison, we perform a study of the oscillator strengths and adopted solar values of the comparison samples, aimed at bringing all abundances in a common system. Since our analysis is differential, we decide not to correct abundances derived with the same methodology (Edvardsson et al. 1993; Gratton et al. 2003; Reddy et al. 2003, 2006). All the other dataset have been re-scaled to our adopted oscillator strengths and solar values. We compared oscillator strengths of lines in common with our list, finding, if any, negligible offsets (within 0.03 dex). Log gf of the Ti I lines adopted by Fulbright (2000), Shetrone, Cot´e & Sargent (2001) and Shetrone et al. (2003) are 0.07 dex higher than ours, while log gf of the Y II lines by Stephens & Boesgaard (2002) results lower than ours by -0.09 dex. The differences in the individual element solar values are small, typically less than 0.05 dex and generally the offsets of log gf and solar values cancel out, with the only exception of the Ca abundances based on the solar value by Anders & Grevesse (1989), which turns out to be 0.09 dex higher than ours. ± The main abundance patterns are summarized as follows: • • ± ± ± 0.01 dex (σ= 0.02 dex), –1.65 0.02 dex (σ= 0.04 dex) and –1.95 Fe, O, Na, Mg and Al— Results about Fe, O, Na, Mg and Al of the target stars have been presented and discussed in Mucciarelli et al. (2009). We derived an iron content of [Fe/H]= – 0.02 dex (σ= 0.04 1.75 dex) for NGC 1786, NGC 2210 and NGC 2257, respectively. At variance with the other elements, Mg and Al exhibit large star-to-star variations in each cluster, while similar dishomogeneities have been found in the O content of NGC 1786 and 2257, and in the Na content of NGC 1786. Such scatters are not compatible with the obser- vational errors and indicate the presence of intrinsic variations. The same Na-O and Mg-Al anticorrelations observed in the GGCs have been found in these LMC clusters (see Fig. 2 of Mucciarelli et al. 2009). Similar patterns have been already detected in the GGCs studied so far and they are generally interpreted in terms of a self-enrichment process, where the ejecta of the primordial Asymptotic Giant Branch (AGB) stars (in which O and Mg have been destroyed producing large amount of Na and Al) are able to trigger the formation of a second stellar generation (Ventura et al. 2001; Ventura & D’Antona 2008). A complete dis- cussion about the Na-O and Mg-Al anticorrelations in these 3 LMC clusters is also presented in Mucciarelli et al. (2009). α-elements— Fig. 3 shows the behavior of [Si/Fe], [Ca/Fe] and [Ti/Fe] as a function of [Fe/H] for the observed clusters and the comparison samples. The first 2 abundance ratios +0.30 dex, in good agreement with the are enhanced, with [Si/Fe] +0.2 dex). Fig. 4 shows Halo and GGCs stars, while [Ti/Fe] is only moderately enhanced ( 0.08, the average of [Si/Fe], [Ca/Fe] and [Ti/Fe] abundance ratios. We find < α/F e > of 0.30 +0.40 dex and [Ca/Fe] ∼ ∼ ∼ ± – 10 – ± ± 0.02 and +0.38 +0.33 0.08 for NGC 1786, 2210 and 2257, respectively. Such a level of α- enhancement is consistent with that observed in the Galactic Halo ( both in field and cluster stars of similar metallicity), while dSphs display < α/F e > ratios only 0.1-0.15 dex lower. It is worth noticing that recent studies indicate that the α-enhancement of the Sculptor stars well agrees with the Halo stars for lower metallicities (see e.g. 
Tolstoy, Hill & Tosi 2009), while the Fornax GCs show only a mild enhancement (Letarte et al. 2006), see Fig. 4. ∼ The only previous chemical analysis of α-elements in old LMC GCs has been performed by Johnson et al. (2006), analyzing 4 GCs (namely, NGC 2005, 2019, 1898 and Hodge 11) in the metallicity range [Fe/H]= –2.2 / -1.2 dex (none of these objects is in common with our sample). At variance with us, they find solar or sub-solar [Ti/Fe] ratios and moderately enhanced [Ca/Fe] ratios, while their [Si/Fe] abundance ratios turn out to be enhanced in good agreement with our abundances. However, we point out that the solar zero-point for their [Ca/Fe] (including both the solar reference for Ca and Fe) is +0.11 dex higher than ours. Taking into account this offset, their Ca abundances are only 0.1 dex lower and still barely consistent within the quoted uncertainties. Conversely, for Ti, the offset in the log gf scale of -0.06 dex is not sufficient to erase the somewhat larger discrepancy ( 0.2-0.3 dex) between the two abundance estimates. ∼ • Iron-peak elements— The abundance ratios for [Sc/Fe], [V/Fe], [Cr/Fe] and [Ni/Fe] are plotted in Fig. 5. Such ratios turn out to be solar or (in a few case) moderately depleted, and consistent with the patterns observed in the Galactic Halo. The old LMC clusters analyzed by Johnson et al. (2006) exhibit similar abundance ratios, with the exception of [V/Fe] that appears to be depleted with respect to the solar value ([V/Fe]<–0.25 dex). V is very sensitive to the adopted Tef f , as far as Ti, and we checked possible systematic offset between our Tef f scale and that by Johnson et al. (2006). Both scales are based on the excitational equilibrium, thus, the derived Tef f are formally derived in a homogenous way. We checked possible offset in the adopted Fe log gf, finding an average difference log gfJ06-log gfthis work= -0.004 (σ= 0.11). Moreover, there are no trends between the difference of the log gf and χ. We repeated our analysis for some stars by using the Fe log gf by Johnson et al. (2006), finding very similar Tef f (within 50 K) with respect to our ones. Thus, we can consider that the two Tef f scales are compatible each other. We cannot exclude that the different treatment of the hyperfine structure for the V I lines between the two works be the origin of this discrepancy. Unfortunately, we have no GCs in common with their sample and a complete comparison cannot be performed. ± • Neutron-capture elements— Elements heavier than the iron-peak (Z>31) are built up through rapid and slow neutron capture processes (r- and s-process, respectively). Eu is considered a pure r-process element, while the first-peak s-process element Y and the second- peak s-process elements Ba, La, Ce and Nd (see Fig. 6 and 7), have an r contribution less than 20-25% in the Sun. Nd is equally produced through s and r-process (see e.g. Arlandini et al. ∼ 1999; Burris et al. 2000). Since the s-process mainly occurs in AGB stars during the thermal – 11 – pulse instability phase, s-process enriched gas should occur at later ( ∼ 100-200 Myr) epochs. –0.30 dex) of [Y/Fe], still In the measured old LMC clusters we find a general depletion ( consistent (within the quoted uncertainties) with the lower envelope of the [Y/Fe] distribution of the Galactic stars, which show a solar-scaled pattern. Also the metal-rich LMC clusters by Mucciarelli et al. (2008) are characterized by such a depletion, with [Y/Fe] between –0.32 and –0.54 dex (see Fig. 7). 
Depleted [Y/Fe] ratios have been already observed in dSphs field stars (Shetrone, Cot´e & Sargent 2001; Shetrone et al. 2003) and in the Fornax GCs (Letarte et al. 2006). ∼ The stars of NGC 2210 and NGC 2257 exhibit roughly solar [Ba/Fe] ratios (+0.10 and –0.04 dex, respectively), while in NGC 1786 this abundance ratio is depleted ([Ba/Fe]= –0.18 dex). Also [La/Fe] and [Ce/Fe] show solar or slightly enhanced values, while [Nd/Fe] is always enhanced ( +0.50 dex). The [Ba/Fe] ratio (as far as the abundances of other heavy s-process elements) appears to be indistinguishable from the metal-poor stars in our Galaxy. ∼ Fig. 7 (lower panel) shows the behavior of [Eu/Fe] as a function of the [Fe/H]. The 3 old LMC clusters exhibit enhanced ( +0.7 dex) [Eu/Fe] ratios. These values are consistent with the more Eu-rich field stars in the Galactic Halo (that display a relevant star-to-star disper- sion probably due to an inhomogeneous mixing), while the GGCs are concentrated around +0.40 dex (James et al. 2004). The only other estimates of the [Eu/Fe] abundance [Eu/Fe] ratio in LMC clusters have been provided by Johnson et al. (2006) who find enhanced values between +0.5 and +1.3 dex, fully consistent with our finding. ∼ ∼ ∼ 6. Discussion The α-elements are produced mainly in the massive stars (and ejected via type II Supernovae (SNe) explosions) during both hydrostatic and explosive nucleosynthesis. As showed in Fig. 3 and 4, the LMC clusters of our sample display a behavior of [α/Fe] as a function of [Fe/H] similar to the one observed in the Milky Way stars. The enhanced [α/Fe] ratios in the old LMC clusters suggest that the gas from which these objects have been formed has been enriched by type II SNe ejecta on a relative short time-scale. Such an observed pattern in the metal-poor regime agrees with the α-enhancement of the Halo and GGCs stars, pointing out that the chemical contribution played by massive stars (concerning the nucleosynthesis of the α-elements) in the early epochs of the LMC and Milky Way has been similar. [Ba/Y] is a convenient abundance ratio to estimate the relative contribution between heavy and light s-elements, [Ba/Eu] the relative contribution between heavy s and r-elements and [Y/Fe] the contribution between light s and r-elements. As shown in Fig. 8 (upper panel), [Ba/Y] is solar or moderate enhanced in old LMC as in the Milky Way, but lower than the dSphs. At higher metallicities the ratio increases appreciably due to the combined increase of Ba and decrease of Y. Such an increase of [Ba/Y] with iron content can be ascribed to the rise of the AGB contribution, with a significant metallicity dependence of the AGB yields (as pointed out by Venn et al. 2004). – 12 – ∼ ∼ –0.70 dex and [Y/Eu] In the old LMC clusters, both the [Ba/Eu] and [Y/Eu] are depleted with respect to the solar value, with [Ba/Eu] –1 dex. Such a depletion is consistent with the theoretical prediction by Burris et al. (2000) and Arlandini et al. (1999) in the case of pure r-process. Moreover, [Y/Eu] remains constant at all metallicities, at variance with [Ba/Eu] ratio. It is worth noticing that the precise nucleosynthesis site for Y is still unclear. Despite of the fact that most of the s-process elements are produced mainly in the He burning shell of intermediate- mass AGB stars, the lighter s-process elements, such as Y, are suspected to be synthesized also during the central He burning phase of massive stars (see e.g. the theoretical models proposed by Prantzos, Hashimoto & Nomoto 1990). 
Our results suggest that in the early ages of the LMC the nucleosynthesis of the heavy elements has been dominated by the r-process, both because this type of process seems to be very efficient in the LMC and because the AGB stars have had no time to evolve and leave their chemical signatures in the interstellar medium. The contribution by the AGB stars arises at higher metallicity (and younger age) when the AGB ejecta are mixed and their contribution becomes dominant. This hypothesis has been suggested also by Shetrone et al. (2003) in order to explain the lower [Y/Fe] abundance ratios observed in dSph’s, pointing out a different Y nucleosynthesis for the Galaxy and the dSph’s, with a dominant contribution by type II SNe in the Galactic satellites. ∼ Fig. 9 show the behaviour of [Y/α], [Ba/α] and [Eu/α]. [Y/α] and [Ba/α] abundance ratios turns out to be depleted (<–0.30 dex) at low metallicity, with a weak increase at higher metal- +0.50 dex. This finding seems to confirm as Y is mainly licity for [Y/α], while [Ba/α] reaches produced by type II SNe, with a secondary contribution by low-metallicity AGB stars, at variance with Ba. In fact, in the low-metallicity AGB stars, the production of light s-process elements (as Y) is by-passed in favor to the heavy s-process elements (as Ba), because the number of seed nuclei (i.e. Fe) decrease decreasing the metallicity, while the neutron flux per nuclei seed increases. In light of the spectroscopic evidences arising from our database of LMC GCs and from the previous studies about Galactic and dSphs stars, both irregular and spheroidal environments seem to share a similar contribution from AGB stars and type II SNe (concerning the neutron capture elements) with respect to our Galaxy. Our LMC clusters sample shows a remarkably constant [Eu/α] ratio of about +0.4 dex over the entire metallicity range, pointing toward a highly efficient r-process mechanism 6. First hints of ∼ 6 As a sanity check of our abundances in order to exclude systematic offset in the Eu abundances due to the adopted hyperfine treatment, we performed an analysis of [Eu/Fe] and [α/Fe] ratios on Arcturus, by using an UVES spectrum taken from the UVES Paranal Observatory Project database (Bagnulo et al. 2003). By adopting the atmospherical parameters by Lecureur et al. (2007) and the same procedure described above, we derived < α/F e >= +0.23±0.09 dex, [Eu/Fe]= +0.15±0.05 dex and [Eu/α]= –0.08 dex (according to the previous analysis by Peterson et al. (1993) and Gopka & Yushchenko (1984)). For this reason, we exclude that the enhancement of [Eu/α] in our stars can be due to an incorrect hyperfine treatment of the used Eu line. – 13 – such an enhanced [Eu/α] pattern have been found in some supergiant stars in the Magellanic Clouds (Hill et al. 1995, 1999), in Fornax GCs (Letarte et al. 2006) and field stars (Bruno Letarte, Ph.D. Thesis) and in a bunch of Sgr stars (Bonifacio et al. 2000; McWilliam & Smecker-Hane 2005). 7. Conclusion We have analyzed high-resolution spectra of 18 giants of 3 old LMC GCs, deriving abundance ratios for 13 elements, in addition to those already discussed in Mucciarelli et al. (2009) and sam- pling the different elemental groups, i.e. iron-peak, α and neutron-capture elements. The main results of our chemical analysis are summarized as follows: • • • • the three target clusters are metal-poor, with an iron content of [Fe/H]= –1.75 (σ= 0.02 dex), NGC 1786, NGC 2210 and NGC 2257, respectively (see Mucciarelli et al. 
2009); 0.02 dex (σ= 0.04 dex) and –1.95 0.01 dex 0.02 dex (σ= 0.04 dex) for –1.65 ± ± ± all the three clusters show the same level of enhancement of the < α/F e > ratio ( +0.30 dex), consistent with a gas enriched by type II SNe, while metal-rich, younger LMC clusters exhibit solar-scaled < α/F e > ratio, due to the contribution of type Ia SNe at later epochs; ∼ the iron-peak elements (Sc, V, Cr, Ni) follow a solar pattern (or slightly sub-solar, in some cases), according with the observed trend in our Galaxy and consistent with the canonical nucleosynthesis scenario; the studied clusters show a relevant ( –0.30 dex) depletion of [Y/Fe], while the other s-process elements (with the exception of Nd) display abundance ratios consistent with the Galactic distributions. [Ba/Fe] and [Ba/Y] in the old LMC GCs are lower than the values measured in the metal-rich, intermediate-age LMC GCs, because in the former the AGB stars had no time to evolve and enrich the interstellar medium; ∼ • +0.70 dex) in all the clusters. This seems to suggest that the r- [Eu/Fe] is enhanced ( process elements production is very efficient in the LMC, being also the main channel of nucleosynthesis for the other neutron-capture elements. ∼ In summary, the old, metal-poor stellar population of the LMC clusters closely resembles the GGCs in many chemical abundance patterns like the iron-peak, the α and heavy s-process elements, and concerning the presence of chemical anomalies for Na, O, Mg and Al. When compared with dSphs the LMC old stellar population shows remarkably different abundance patterns for [α/Fe] and neutron-capture elements. We warmly thank the anonymous referee for his/her useful comments. This research was supported by the Ministero dell’Istruzione, dell’Universit´a e della Ricerca. – 14 – REFERENCES Alonso, A., Arribas, S., & Martinez-Roger, C., 1999, A&AS, 140, 261 Alonso, A., Arribas, S., & Martinez-Roger, C., 2001, A&A, 376, 1039 Anders, E., & Grevesse, N., 1989, Geochim. Cosmochim. Acta., 53, 197 Andersen, J., 1999, IAU Trans. A, Vol. XXIV, (San Francisco, CA:ASP), pp. 36, 24, A36 Arlandini, C., Kapplere, F., Wisshak, K., Gallino, R., Lugaro, M., Busso, M. & Straniero, O., 1999, ApJ, 525, 886 Asplund, M., 2005, ARA&A, 43, 481 Bagnulo, S. et al.. 2003, Messenger, 114, 10 Biemont, E., Karner, C., Meyer, G., Traeger, F., & zu Putlitz, G. 1982, A&A, 107, 166 Bonifacio, P., Hill, V., Molaro, P., Pasquini, L., Di Marcantonio, P., & Santin, P., 2000, A&A, 359, 663 Burris, D. L., Pilachowski, C. A., Armandroff, T. E., Sneden, C., Cowan, J. J., & Roe, H., 2000, ApJ, 544, 302 Cariulo, P., Degl’Innocenti, S., & Castellani, V., 2004, A&A, 421, 1121 Carpenter, J. M., 2001, AJ, 121, 2851 Carretta, E., Gratton, R. G., Bragaglia, A., Bonifacio, P., & Pasquini, L., 2004. A&A, 416, 925 Carretta, E., 2006, AJ, 131, 1766 Cayrel, R., 1988, in IAU Symp. 132, ”The Impact of Very High S/N Spectroscopy on Stellar Physics”, ed. G. Cayrel de Strobel & M. Spite, Dordrecht, Kluwer, 345 Cayrel, R. et al., 2004, A&A, 416, 1117 Dubath, P., Meylan, G., & Mayor, M., 1997, A&A, 324, 505 Edvardson, B., Andersen, J., Gustafsson, B., Lambert, L., Nissen, P. E., & Tomkin, J., 1993, A&A, 275, 101 Ferraro, F. R., Mucciarelli, A., Carretta, E., & Origlia, L., 2006, ApJ, 645L, 33 Fulbright, J. P., 2000, AJ, 120, 1841 Geisler, D., Smith, V. V., Wallerstein, G., Gonzalez, G., & Charbonnel, C., 2005, AJ, 129, 1428 Geisler, D., Wallerstein, G., Smith, V. V., & Casetti-Dinescu, D. I., 2007, PASP, 119, 939 – 15 – Gopka, V. F. & Yushchenko, A. 
V., 1984, AstL, 20, 352 Gratton, R. G., Carretta, E., Eriksson, K., & Gustafsson, B., 1999, A&A, 350, 955 Gratton, R. G. et al., 2001, A&A, 369, 87 Gratton, R. G., Carretta, E., Claudi, R., Lucatello, S., & Barbieri, M., 2003, A&A, 404, 187 Gratton, R. G., et al., 2007, A&A, 464, 953 Grevesse, N, & Sauval, A. J., 1998, SSRv, 85, 161 Grocholski, A. J., Cole, A. A., Sarajedini, A., Geisler, D., & Smith, V. V., 2006, AJ, 132, 1630 Harris, J., & Zaritsky, D., 2009, AJ, 138, 1243 Hill, V., Andrievsky, S., & Spite, M., 1995, A&A, 293, 347 Hill, V., 1999, A&A, 345, 430 Hill, V., Francois, P., Spite, M., Primas, F. & Spite, F., 2000, A&AS, 364, 19 Koch, A. & Edvardsson, B., 2002, A&A, 381, 500 Kontizas, M., Morgan, D. H., Hatzidimitriou, D., & Kontizas, E., 1990, A&AS, 84, 527 Korn, A. J., Becker, S. R., Gummersbach, C. A., & Wolf, B., 2000, A&A, 353, 655 Korn, A. J., Keller, S. C., Kaufer, A., Langer, N., Przybilla, N., Stahl, O., & Wolf, B., 2002, A&A, 385, 143 Kraft, R. P., Sneden, C., Langer, G. E., Shetrone, M. D., & Bolte, M., 1995, AJ, 109, 2586 Ivans, I. I., Sneden, C., Kraft, R. P., Suntzeff, N. B., Smith, V. V., Langer, G. E., & Fulbright, J. P., 1999, AJ, 118, 1273 Ivans, I. I., Kraft, R. P., Sneden, C. Smith, G. H., Rich, R. M., & Shetrone, M., 2001, AJ, 122, 1438 Yong, D., Grundahl, F., Nissen, P. E., Jensen, H. R., & Lambert, D. L., 2005, A&A, 438, 875 James, G., Francois, P., Bonifacio, P., Carretta, E., Gratton, R. G., & Spite, F., 2004, A&A, 427, 825 Johnson, J. A., Ivans, I. I.,& Stetson, P. B., 2006,ApJ, 640, 801 Lawler, J. E., Wickliffe, M. E., den Hartog, E. A., & Sneden, C., 2001, ApJ, 563, 1075 Lecureur, A., Hill, V., Zoccali, M., Barbuy, B., Gomez, A., Minniti, D., Ortolani, S., & Renzini, A., 2007, A&A, 465, 799 – 16 – Lee, J.-W., & Carney, B. W., 2002, AJ, 124, 1511 Letarte, B., Hill, V., Jablonka, P., Tolstoy, E., Francois, P., & Meylan,G., 2006, A&A, 453, 547L Magain, P. 1984, A&A, 134, 189 Mc William, A., Preston, G., Sneden, C., & Searle, L., 1995, AJ, 109, 2757 Mc William, A. & Smecker-Hane, T. A.(2005), ASPC, 336, 221 Monaco, L., Bellazzini, M., Bonifacio, P., Ferraro, F. R., Marconi, G., Pancino, E., Sbordone, L., & Zaggia, S., 2005, A&A, 441, 141 Monaco, L., Bellazzini, M., Bonifacio, P., Buzzoni, A., Ferraro, F. R., Marconi, G., Sbordone, L., & Zaggia, S., 2007, A&A, 464, 201 Mucciarelli, A., Carretta, E., Origlia, L., & Ferraro, F. R., 2008, ApJ, 136, 375 Mucciarelli, A., Origlia, L., Ferraro, F. R., & Pancino, E., 2009, ApJ, 695L, 134 Olszewski, E. W., Schommer, R. A., Suntzeff, N. B. & Harris, H. C., 1991, AJ, 101, 515 Osterbrock, D. E., Fulbright, J. P., Martel, A. R., Keane, M. J., Trager, S. C., & Basri, G., 1996, PASP, 108, 277 Pasquini, L. et al., Messenger, 110, 1 Persson, S. E., Aaronson, M., Cohen, J. G., Frogel, J. A., & Matthews, K.,1983 Peterson, R., C., Dalle Ore, C. M., & Kurucz, R. L., 1993, ApJ, 404, 333 Pompeia, L., Hill, V., Spite, M., Cole, A., Primas, F., Romaniello, M., Pasquini, L., Cioni, M-R., & Smecker Hane, T., 2008, A&A, 480, 379 Prantzos, N., Hashimoto, M., & Nomoto, K., 1990, A&A, 234, 211 Prochaska, J. X., Naumov, S. O., Carney, B. W., McWilliam, A., & Wolfe, A., 2000, AJ, 120, 2513 Prochaska, J. X., &, McWilliam, A., 2000, ApJ, 537L, 57 Ramirez, S. V., & Cohen, J., 2002, AJ, 123, 3277 Reddy, B. E., Tomkin, J., Lambert, D. L., & Allende Prieto, C., 2003, MNRAS, 340, 304 Reddy, B. E., Lambert, D. 
L., & Allende Prieto, C., 2006, MNRAS, 367, 1329 Sbordone, L., Bonifacio, P., Buonanno, R., Marconi, G., Monaco, L., & Zaggia, S., 2007, A&A, 465, 815 Shetrone, M., Cot´e, P., & Sargent, W. L. W., 2001, ApJ, 548, 592 – 17 – Shetrone, M., Venn, K. A., Tolstoy, E., Primas, F., Hill, V., & Kaufer, A., 2003, AJ, 125, 684 Sneden, C., McWilliam, A., Preston, G. W., Cowan, J. J., Burris, D. L., & Armosky, B. J., 1996, ApJ, 467, 840 Sneden, C., Kraft, R. P., Shetrone, M. D., Smith, G. H., Langer, G. E., & Prosser, C. F., 1997, AJ, 391, 354 Sneden, C., Kraft, R. P., Guhatahakurta, P., Peterson, R. C., & Fulbright, J. P., 2004, AJ, 127, 2162 Stephens, A., & Boesgaard, A. M., 2002, AJ, 123, 1647 Tolstoy, E, Hill, V, & Tosi, M., 2009, ARA&A, 47, 371 Venn, K. A., Irwin, M., Shetrone, M. D., Tout, C. A., Hill, V., & Tolstoy, E., 2004, AJ, 128, 1177 Ventura, P., D’Antona, F., Mazzitelli, I., & Gratton, R., 2001, ApJ, 550L, 65 Ventura, P., & D’Antona, F., 2008, MNRAS, 385, 2034 Wahlgren, G. M., 2005, Memorie della Societ`a Astronomica Italiana Supplementi, 8, 108 Whaling, W. Hannaford, P., Lowe, R. M., Biemont, E., & Grevesse, N., 1985, A&A, 153, 109 This preprint was prepared with the AAS LATEX macros v5.2. – 18 – Fig. 1.— Color-Magnitude Diagrams in the (K, J-K) plane of the 3 LMC old clusters: grey points indicate the stars observed with FLAMES. – 19 – Fig. 2.— The behavior of the number density abundance of the neutral iron lines as a function of the expected line strength (upper panel), of the wavelength (middle panel) and of the excitational potential (lower panel). In each panel is also reported the linear best-fit (dashed lines) and the corresponding slope (with associated error) is labelled. – 20 – Fig. 3.— Behavior of [Si/Fe], [Ca/Fe] and [Ti/Fe] abundance ratios as a function of [Fe/H]. The LMC clusters of this study are plotted as grey triangles and the results by Mucciarelli et al. (2008) as white triangles. Small grey points are Galactic stars. Empty squares are GGCs. Asteriks are dSphs field stars and Fornax GCs. Black points are the old LMC GCs by Johnson et al. (2006). All the references are in Table 8. Dashed lines mark the solar value. The errorbar in the corner indicates the typical uncertainty associated to each abundance ratio and computed by summing in quadrature the internal error (reported in Tables 2-5) and the error from the adopted parameters (see Table 6). – 21 – Fig. 4.— Behavior of the average < α/F e > ratio (defined as mean of [Si/Fe], [Ca/Fe] and [Ti/Fe]) as a function of [Fe/H]. – 22 – Fig. 5.— Behavior of [Sc/Fe], [V/Fe], [Cr/Fe] and [Ni/Fe] as a function of [Fe/H]. – 23 – Fig. 6.— Behavior of [Ce/Fe], [Ba/Fe] and [La/Fe] as a function of [Fe/H]. – 24 – Fig. 7.— Behavior of [Y/Fe], [Nd/Fe] and [Eu/Fe] as a function of [Fe/H]. – 25 – Fig. 8.— Behavior of [Ba/Y], [Ba/Eu] and [Y/Eu] (lower panel) as a function of [Fe/H]. – 26 – Fig. 9.— Behavior of [Y/α], [Ba/α] and [Eu/α] as a function of [Fe/H]. – 27 – Information about the target stars. S/N have been computed at 6000 ˚A for the UVES Table 1. spectra and at 5720 and 6260 ˚A for the GIRAFFE HR 11 and 13 spectra respectively. RA and Dec are onto 2MASS astrometric system. Last column reports the adopted instumental configuration (U for UVES and G for GIRAFFE spectra). 
Star ID S/N Vhelio (km/s) K0 (J − K)0 RA(J2000) Dec(J2000) spectrum NGC 1786-978 — / 70 / 110 NGC 1786-1248 NGC 1786-1321 NGC 1786-1436 — / 60 / 90 NGC 1786-1501 NGC 1786-2310 NGC 1786-2418 — / 70 / 100 NGC 2210-122 NGC 2210-309 NGC 2210-431 NGC 2210-764 NGC 2210-1181 NGC 2257-136 NGC 2257-189 — / 70 / 90 NGC 2257-295 NGC 2257-586 — / 50 / 60 NGC 2257-842 NGC 2257-993 — / 70 / 90 260.5 45 / — /— 255.4 50 / — /— 273.5 267.1 40 / — /— 265.9 50 / — /— 262.2 265.5 40 / — /— 337.7 40 / — /— 338.4 50 / — /— 340.0 40 / — /— 335.7 50 / — /— 335.6 40 / — /— 298.1 299.6 35 / — /— 301.4 300.6 45 / — /— 297.4 298.9 13.55 13.50 13.11 13.71 12.92 12.83 13.09 13.22 13.29 13.04 12.93 12.81 13.65 13.54 14.40 14.36 13.77 13.49 0.78 0.77 0.78 0.72 0.93 0.82 0.82 0.75 0.75 0.77 0.74 0.77 0.77 0.77 0.74 0.70 0.76 0.81 74.7878641 74.7688292 74.7638489 74.7555606 74.7493142 74.7588569 74.8215213 92.9389070 92.9025764 92.8887909 92.8575073 92.8756190 97.5823810 97.5741597 97.5615868 97.5327178 97.5591210 97.4855884 -67.7285246 -67.7408723 -67.7546146 -67.7353347 -67.7514295 -67.7432595 -67.7387519 -69.1122894 -69.1129818 -69.1137252 -69.1267703 -69.1137519 -64.3262965 -64.3299382 -64.3159959 -64.3129344 -64.3394905 -64.3174261 G U U G U U G U U U U U U G U G U G – 28 – Table 2. Atmospherical parameters and derived [Fe/H] ratio (with the number of used lines and the associated internal error defined as σ/√Nlines) for all the observed stars. Solar value for Fe is 7.54 dex (Gratton et al. 2003). Photometric temperatures (column 3) have been reported in comparison with the spectroscopic ones (column 2). Star ID T spec ef f (K) T phot ef f (K) log g (dex) [A/H] vt (km/s) NGC 1786-978 NGC 1786-1248 NGC 1786-1321 NGC 1786-1436 NGC 1786-1501 NGC 1786-2310 NGC 1786-2418 NGC 2210-122 NGC 2210-309 NGC 2210-431 NGC 2210-764 NGC 2210-1181 NGC 2257-136 NGC 2257-189 NGC 2257-295 NGC 2257-586 NGC 2257-842 NGC 2257-993 4250 4280 4250 4420 4100 4100 4160 4300 4250 4200 4270 4200 4290 4290 4360 4480 4320 4200 4260 4285 4260 4412 3936 4167 4167 4334 4334 4285 4360 4285 4285 4285 4360 4466 4309 4190 0.57 0.75 0.65 0.76 0.55 0.47 0.47 0.60 0.55 0.70 0.60 0.60 0.65 0.61 0.96 0.82 0.95 0.52 -1.75 -1.75 -1.75 -1.75 -1.80 -1.75 -1.80 -1.65 -1.70 -1.65 -1.60 -1.60 -1.90 -1.90 -2.00 -2.00 -1.90 -2.00 1.40 1.70 1.80 1.70 1.80 1.90 1.50 1.70 1.80 1.80 1.90 1.80 1.95 1.60 1.50 1.50 1.50 1.50 n 14 60 54 15 57 47 16 31 35 46 42 46 38 17 40 13 39 17 [Fe/H] (dex) -1.73 -1.74 -1.73 -1.76 -1.79 -1.72 -1.75 -1.66 -1.69 -1.67 -1.58 -1.64 -1.94 -1.92 -1.95 -1.92 -1.96 -2.02 0.02 0.02 0.01 0.02 0.01 0.01 0.02 0.02 0.03 0.02 0.02 0.02 0.02 0.02 0.03 0.03 0.02 0.03 ± ± ± ± ± ± ± ± ± ± ± ± ± ± ± ± ± ± Table 3. [O/Fe], [Na/Fe], [Mg/Fe], [Al/Fe], [Si/Fe] and [Ca/Fe] abundance ratios for each observed stars with the number of used lines and the corresponding internal error. 
Star ID SUN n [O/Fe] 8.79 1 NGC 1786-978 2 NGC 1786-1248 2 NGC 1786-1321 1 NGC 1786-1436 NGC 1786-1501 2 NGC 1786-2310 — NGC 1786-2418 — 2 NGC 2210-122 1 NGC 2210-309 2 NGC 2210-431 2 NGC 2210-764 2 NGC 2210-1181 1 NGC 2257-136 NGC 2257-189 — NGC 2257-295 1 NGC 2257-586 — 1 NGC 2257-842 NGC 2257-993 — 0.12 0.08 0.07 0.09 0.08 -0.15 0.26 0.31 0.18 0.30 ± ± ± ± ± <-0.60 <-0.40 0.08 0.14 0.11 0.10 0.08 0.11 0.31 0.10 0.12 0.25 0.27 0.22 ± ± ± ± ± ± <-0.20 0.24 0.18 ± <-0.20 -0.08 0.15 ± <-0.20 n 3 2 2 1 4 3 4 1 4 3 2 2 2 2 3 2 2 2 [Na/Fe] 6.21 0.47 0.16 -0.18 -0.01 0.60 0.66 0.77 -0.08 0.69 0.64 0.32 -0.03 0.20 0.49 0.58 0.22 0.54 0.90 0.03 0.08 0.07 0.09 0.06 0.05 0.03 0.11 0.10 0.07 0.10 0.08 0.11 0.07 0.10 0.08 0.10 0.09 ± ± ± ± ± ± ± ± ± ± ± ± ± ± ± ± ± ± n 1 2 2 1 1 1 1 1 1 1 1 2 1 1 1 1 1 1 [Mg/Fe] 7.43 n [Al/Fe] 6.23 0.25 0.51 0.41 0.40 0.49 -0.21 -0.31 0.39 0.20 0.33 0.43 0.28 0.34 0.42 0.12 0.36 0.52 0.24 ± ± ± ± ± ± ± ± ± ± ± ± ± ± ± ± ± ± 0.06 — 0.08 — 0.07 — 0.09 — 2 0.12 0.08 2 0.07 — 0.11 — 1 0.14 2 0.12 0.14 — 0.11 — 1 0.11 0.10 — 0.18 1 0.11 — 0.15 — 0.13 — — <0.27 <0.11 — ± ± — <0.54 ± ± <0.30 <0.20 0.79 1.02 0.08 0.06 0.80 0.55 0.14 0.08 0.88 0.11 ± — 1.17 0.18 ± — <0.68 — n 1 3 3 1 1 4 1 1 1 2 2 2 2 1 2 1 2 1 [Si/Fe] 7.53 0.36 0.24 0.49 0.57 0.41 0.51 0.52 0.22 0.30 0.40 0.48 0.50 0.54 0.62 0.53 0.53 0.46 0.34 ± ± ± ± ± ± ± ± ± ± ± ± ± ± ± ± ± ± 0.06 0.07 0.06 0.09 0.12 0.04 0.07 0.11 0.14 0.08 0.10 0.08 0.08 0.10 0.13 0.11 0.11 0.13 n 6 14 17 5 16 14 5 16 15 15 13 13 13 5 14 5 15 5 [Ca/Fe] 6.27 0.22 0.32 0.23 0.37 0.23 0.40 0.39 0.33 0.49 0.28 0.25 0.19 0.29 0.37 0.53 0.31 0.47 0.39 ± ± ± ± ± ± ± ± ± ± ± ± ± ± ± ± ± ± 0.08 0.02 0.03 0.07 0.03 0.03 0.04 0.06 0.05 0.05 0.04 0.04 0.02 0.04 0.03 0.05 0.04 0.04 – 2 9 – Table 4. [Ti/Fe], [Sc/Fe] II, [V/Fe], [Cr/Fe] and [Ni/Fe] abundance ratios for each observed stars with the number of used lines and the Star ID SUN n [Ti/Fe] 5.00 3 NGC 1786 978 12 NGC 1786 1248 9 NGC 1786 1321 4 NGC 1786 1436 12 NGC 1786 1501 15 NGC 1786 2310 2 NGC 1786 2418 6 NGC 2210 122 9 NGC 2210 309 7 NGC 2210 431 7 NGC 2210 764 5 NGC 2210 1181 8 NGC 2257 136 3 NGC 2257 189 NGC 2257 295 4 NGC 2257 586 — 9 NGC 2257 842 3 NGC 2257 993 0.11 0.16 0.13 0.40 0.01 0.15 0.13 0.38 0.35 0.26 0.26 0.28 0.24 0.25 0.33 0.03 0.02 0.02 0.05 0.05 0.05 0.05 0.08 0.06 0.07 0.09 0.09 0.05 0.01 0.08 ± ± ± ± ± ± ± ± ± ± ± ± ± ± ± — 0.24 0.16 ± ± 0.06 0.05 n 3 5 5 4 6 4 4 5 4 5 6 5 6 4 4 3 6 4 corresponding internal error. [Sc/Fe]II 3.13 n [V/Fe] 3.97 n [Cr/Fe] 5.67 -0.04 0.06 -0.17 -0.14 -0.05 0.03 -0.03 -0.05 0.12 0.06 -0.19 -0.06 -0.16 -0.19 -0.10 -0.17 -0.04 -0.16 ± ± ± ± ± ± ± ± ± ± ± ± ± ± ± ± ± ± 0.03 — 5 0.07 7 0.07 1 0.05 6 0.06 6 0.04 1 0.04 2 0.07 5 0.07 4 0.04 5 0.06 5 0.09 1 0.06 0.05 — 0.08 2 0.06 — 1 0.02 0.07 — — 0.05 -0.14 -0.05 -0.18 0.05 -0.04 -0.23 -0.22 -0.09 -0.29 -0.35 -0.12 ± ± ± ± ± ± ± ± ± ± ± ± — 0.13 -0.01 ± — ± — — 5 0.06 6 0.06 0.09 — 5 0.08 0.06 3 0.07 — 3 0.08 3 0.08 6 0.07 3 0.08 3 0.03 7 0.11 — 4 — 7 — 0.15 0.06 — 0.05 0.04 -0.03 -0.11 ± ± — -0.10 0.00 0.08 0.05 ± ± — 0.06 0.08 0.08 0.08 0.06 0.07 -0.07 -0.05 -0.04 -0.11 -0.16 -0.06 ± ± ± ± ± ± — 0.08 -0.28 ± — 0.04 -0.18 ± — n 4 10 11 2 10 12 4 7 8 7 10 7 7 2 5 1 8 2 [Ni/Fe] 6.28 -0.04 -0.12 -0.08 -0.09 -0.11 -0.03 -0.14 -0.04 0.14 -0.15 -0.01 -0.14 0.05 0.02 -0.11 -0.14 0.01 -0.03 ± ± ± ± ± ± ± ± ± ± ± ± ± ± ± ± ± ± 0.03 0.02 0.04 0.06 0.05 0.03 0.04 0.04 0.03 0.05 0.07 0.08 0.05 0.07 0.04 0.11 0.08 0.09 – 3 0 – Table 5. 
[Y/Fe] II, [Ba/Fe] II, [La/Fe] II, [Ce/Fe] II, [Nd/Fe] II and [Eu/Fe] II abundance ratios for each observed stars with the number of used lines and the corresponding internal error. Star ID SUN n [Y/Fe]II 2.24 NGC 1786 978 — 3 NGC 1786 1248 2 NGC 1786 1321 NGC 1786 1436 — 1 NGC 1786 1501 NGC 1786 2310 2 NGC 1786 2418 — 2 NGC 2210 122 1 NGC 2210 309 1 NGC 2210 431 2 NGC 2210 764 2 NGC 2210 1181 2 NGC 2257 136 NGC 2257 189 — NGC 2257 295 1 NGC 2257 586 — 2 NGC 2257 842 NGC 2257 993 — — 0.09 0.08 -0.36 -0.48 ± ± — 0.12 0.06 -0.20 -0.32 ± ± — 0.08 0.14 0.12 0.10 0.08 0.08 -0.32 -0.31 -0.40 -0.25 -0.41 -0.29 ± ± ± ± ± ± — 0.18 -0.28 ± — 0.11 -0.23 ± — n 1 3 3 1 3 3 1 3 2 3 3 3 3 1 3 1 3 1 [Ba/Fe]II 2.13 n [La/Fe]II 1.17 n [Ce/Fe]II 1.58 -0.21 -0.18 -0.21 -0.24 -0.16 -0.06 -0.19 0.11 0.09 0.07 0.03 0.09 0.01 -0.06 -0.07 -0.11 -0.01 0.02 ± ± ± ± ± ± ± ± ± ± ± ± ± ± ± ± ± ± 1 0.12 1 0.07 1 0.06 0.09 — 1 0.07 1 0.05 1 0.07 0.06 1 0.10 — 1 0.07 1 0.10 1 0.06 1 0.06 1 0.10 0.10 1 0.11 — 1 0.09 1 0.13 0.11 0.01 0.32 0.24 0.10 0.26 -0.12 0.08 0.00 -0.06 ± ± ± — ± ± ± ± — 0.12 — 1 0.12 1 0.10 — 1 0.12 0.08 1 0.07 — 1 0.11 — 1 ± 1 ± 1 ± 1 <–0.10 <–0.10 — 1 <0.00 — — 1 <–0.10 <–0.10 — 0.12 0.14 0.11 — ± ± — ± ± — 0.08 0.11 0.12 0.10 -0.13 0.10 0.12 0.08 0.10 0.11 ± — 0.07 -0.08 0.15 0.12 0.14 0.11 ± ± ± <0.00 — <0.10 — <0.10 — n — 3 3 — 3 2 — 3 3 3 3 3 3 — 4 — 3 — [Nd/Fe]II 1.50 — 0.65 0.85 0.07 0.06 0.87 0.63 0.07 0.06 0.65 0.64 0.56 0.34 0.43 0.71 0.06 0.08 0.07 0.10 0.06 0.06 0.48 0.09 ± — 0.50 0.09 ± — ± ± — ± ± — ± ± ± ± ± ± — n — 1 1 — 1 1 — 1 1 1 1 1 1 — 1 — 1 — [Eu/Fe]II 0.51 — 0.60 0.78 0.12 0.10 0.69 0.49 0.12 0.08 0.82 0.70 0.77 0.75 0.63 0.75 0.11 0.14 0.12 0.14 0.11 0.11 0.59 0.18 ± — 0.70 0.15 ± — ± ± — ± ± — ± ± ± ± ± ± — – 3 1 – – 32 – Table 6. Variation of each abundance ratio due to atmospherical parameters, obtained according to the method by Cayrel et al. (2004). Second column reports the difference for each abundance ratio between the model with Tef f increased by 100 K (and the re-optimization of the other parameters) and the original one. The third column reports the same differences but considering a model Tef f decreased by 100 K. The last column lists the final average error. Ratio (M OD)+100K -MOD (M OD)−100K -MOD Average [O/F e] [N a/F e] [M g/F e] [Al/F e] [Si/F e] [Ca/F e] [Sc/F e]II [T i/F e] [V /F e] [Cr/F e] [F e/H] [N i/F e] [Y /F e]II [Ba/F e]II [La/F e]II [Ce/F e]II [N d/F e]II [Eu/F e]II (dex) +0.13 –0.07 –0.04 –0.05 –0.03 –0.02 +0.06 +0.09 +0.11 +0.03 +0.08 +0.03 +0.02 +0.07 +0.15 +0.09 –0.08 +0.04 (dex) –0.11 +0.06 +0.05 +0.04 +0.10 +0.01 +0.02 –0.10 –0.12 –0.06 –0.09 –0.02 –0.04 –0.09 –0.09 –0.03 +0.11 –0.03 (dex) 0.12 0.07 0.05 0.05 0.07 0.02 0.04 0.10 0.12 0.05 0.09 0.03 0.04 0.09 0.15 0.06 0.10 0.04 ± ± ± ± ± ± ± ± ± ± ± ± ± ± ± ± ± ± – 33 – Table 7. Average abundance ratios for the 3 old LMC clusters discussed in this study with the corresponding dispersion by the mean. 
Ratio NGC 1786 Mean [O/F e] [N a/F e] [M g/F e] [Al/F e] [Si/F e] [Ca/F e] [Sc/F e]II [T i/F e] [V /F e] [Cr/F e] [F e/H] [N i/F e] [Y /F e]II [Ba/F e]II [La/F e]II [Ce/F e]II [N d/F e]II [Eu/F e]II <–0.04 0.22 0.35 <0.55 0.44 0.31 –0.05 0.16 –0.05 –0.06 –1.75 –0.09 –0.34 -0.18 0.17 0.04 0.75 0.64 σ 0.36 0.34 0.36 0.43 0.11 0.08 0.08 0.12 0.09 0.05 0.02 0.04 0.11 0.05 0.12 0.11 0.12 0.12 NGC 2210 Mean 0.23 0.33 0.31 <0.48 0.38 0.31 –0.02 0.31 –0.24 –0.09 –1.65 –0.04 –0.34 0.10 -0.02 0.06 0.52 0.74 NGC 2257 Mean <–0.06 0.33 0.46 <0.91 0.50 0.39 –0.14 0.24 0.00 –0.17 –1.95 –0.03 –0.27 –0.04 <-0.08 <-0.07 0.56 0.68 σ 0.18 0.14 0.29 0.25 0.09 0.09 0.06 0.06 0.12 0.11 0.04 0.08 0.03 0.05 0.04 0.06 0.13 0.08 σ 0.07 0.09 0.36 0.23 0.12 0.11 0.12 0.05 0.10 0.05 0.04 0.12 0.07 0.03 0.08 0.10 0.14 0.07 Note. — [Fe/H], [O/Fe], [Mg/Fe] and [Al/Fe] abundance ratios are from Mucciarelli et al. (2009) and reported here for sake of completeness. [Na/Fe], – 34 – Table 8. Literature sources for the comparison samples. 47 Tuc NGC 2808 NGC 6287 NGC 6293 NGC 6397 NGC 6541 NGC 6752 M3 M4 M5 M10 M13 M15 M71 Reference Galactic GCs Carretta et al. (2004), James et al. (2004) Carretta (2006) Lee & Carney (2002) Lee & Carney (2002) James et al. (2004) Lee & Carney (2002) Yong et al. (2005) Sneden et al. (2004) Ivans et al. (1999) Ivans et al. (2001) Kraft et al. (1995) Sneden et al. (2004) Sneden et al. (1997) Ramirez & Cohen. (2002) Galactic Field Stars Thin/Thick Edvardsson et al. (1993); Koch & Edvardsson (2002) Halo Halo/Thick Halo/Thick Halo/Thick Thin Thick Burris et al. (2000) Fulbright (2000) Stephens & Boesgaard (2002) Gratton et al. (2003) Reddy et al. (2003) Reddy et al. (2006) dSph Shetrone, Cot´e & Sargent (2001) Shetrone, Cot´e & Sargent (2001) Shetrone, Cot´e & Sargent (2001) Shetrone et al. (2003); Geisler et al. (2005) Shetrone et al. (2003); Letarte et al. (2006) Shetrone et al. (2003) Shetrone et al. (2003) Draco Sextans Ursa Minor Sculptor Fornax Carina Leo I
synthetic_cpt
3
Neural_Machine_Translation_between_Low-Resource_Languages_with_Synthetic_Pivoting.pdf
International Journal of Engineering Trends and Technology ISSN: 2231 – 5381 /doi:10.14445/22315381/IJETT-V69I9P227 Volume 69 Issue 9, 230-235, September, 2021 © 2021 Seventh Sense Research Group® Original Article Attention based Sequence to Sequence Learning for Machine Translation of Low Resourced Indic Languages – A case of Sanskrit to Hindi Vishvajit Bakarola1, Jitendra Nasriwala2 1 Assistant Professor, Chhotubhai Gopalbhai Patel Institute of Technology, Uka Tarsadia University, Bardoli, Gujarat, India 2 Associate Professor, Babumadhav Institute of Information Technology, Uka Tarsadia University, Bardoli, Gujarat, India [email protected] (NMT) technique is a proficient fully automatic machine Abstract - Deep Learning techniques are powerful in mimicking humans in a particular set of problems. They have achieved a remarkable performance in complex learning tasks. Deep learning inspired Neural Machine Translation that outperforms traditional machine translation. Performing machine-aided translation on Indic languages has always been a challenging task considering their rich and diverse grammar. The neural machine translation has shown quality results compared to the traditional machine translation approaches. The translation becomes problematic when it comes to low-resourced languages, especially with Sanskrit. This paper presents attention mechanism based neural machine translation by selectively focusing on a particular part of language the sentences during construction of Sanskrit to Hindi bilingual parallel corpus with nearly 10K samples and having 178,000 tokens. The neural translation model equipped with an attention mechanism has been trained on Sanskrit to Hindi parallel corpus. The approach has shown the significance of attention mechanisms to overcome long-term dependencies, primarily associated with low resources Indic languages. The paper shows the attention plots on testing data to demonstrate the alignment between source and translated words. For the evaluation of the translated sentences, manual score based human evaluation and automatic evaluation metric based techniques have been adopted. The attention mechanism based neural translation has achieved 88% accuracy in human evaluation and a BLEU score of 0.92 on Sanskrit to Hindi translation. translation. The work shows Keywords — Attention Mechanism, Low-resourced languages, Neural Machine Translation, Sanskrit, Sequence to Sequence Learning I. INTRODUCTION Humans have several different ways to communicate with each other. Spoken and written languages are among the most preferred communication ways. To bridge the gap between languages, it is essential to convert a foreign language to a regional language, and the process is known as the translation process. The translation is a complicated and that requires grammatical and time-consuming process domain knowledge of both languages. Typically, machine translation is converting input language (source language) to output language (target language), preserving its semantics. Initially, this process was carried out by a human expert, which is accurate enough for a specific domain at a given time. However, human translation is tedious and time- consuming. With a human translator, reliability is the next crucial issue for different experts concerned with the translation process, and the end translation may vary. The first notable step in computer-aided machine translation was taken in the 1950s. 
Since then, the efforts have focused on developing a fully automatic machine translation system that accurately mimics human-level fluency [1]. The primary research in machine translation is to melt away the language barrier and open up literature, communication, and language understanding with ease for everyone. Machine translation has always been a challenging is a fascinating task for the Indic languages. Having the highly diverse grammar and being the morphologically reach languages, machine translation on Indic languages still requires tremendous development efforts. The work focused on developing a fully automatic machine translation system keeping Sanskrit as one of the language pairs. Sanskrit is a language of ancient India and is considered as mother of almost all Indo-European languages. Sanskrit and Hindi both belongs to the Indo-Aryan language family. In the linguistic community, Hindi has been regarded as a descendent of classical Sanskrit through Prakrit [1, 2]. In India, 43.63 percent of the total population are native Hindi speakers. The world report shows that nearly 4.5 percent of the world population are Hindi speakers, which is just 0.5 percent less than native English speakers. Sanskrit is the world's oldest natural language written in most scientific ways. Being an existing human spoken language, Sanskrit is one of the official 22 languages of India according to the eight-schedule of India's constitution. In 2009, Sanskrit was declared the second official language of Uttarakhand and Himachal Pradesh's state in India. Being the primary language of This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/) Vishvajit Bakarola & Jitendra Nasriwala. / IJETT, 69(9), 230-235, 2021 ancient times, all four Vedas and six primary fields of study to learn the Vedas had been written in Sanskrit. The considerable its inaccessibility due to lack of understanding is the primary motivation of machine translation work on Sanskrit. literature available in Sanskrit and The paper presents the work performing Sanskrit to Hindi machine translation with Neural Machine Translation (NMT) approach. The rest of the article is composed as follows. Section 2 discusses the vastness and complexity of Sanskrit grammar. Section 3 presents several distinctive traditional machine translation approaches and work done on Sanskrit based on those approaches. Section 4 unfolds the NMT along with its significance and major types that deliver a human-like translation. Section 5 details the environment setup, followed by Section 6, showing results and evaluation matrices on machine-translated target language sentences. Finally, Section 7 concludes the work with its future perspectives. II. LITERATURE REVIEW that the evolution of various The journey of machine translation has begun in the late 1950s. Rule-based machine translation is the oldest and most foundational approach, further divided into transfer and interlingua-based translation. Over time with the increasing demand and availability of digital text data, it has been observed state-of-art translation and approaches. Example-based machine statistical machine translation are among those that require corpora and are classified broadly under corpus-based methods [9]. The work on machine translation keeping Sanskrit as one of the language pairs started nearly 30 years back. Desika was the first system developed in the year 1992 [10]. This section presents other works carried out on the Sanskrit language. 
A. Statistical Machine Translation
The statistical machine translation model uses statistical models with parameters derived from the analysis of bilingual corpora. Statistical machine translation is a corpus-based approach, and it does not rely on knowledge of linguistic rules. This system is good at fluency and at catching exceptions to the rules [7]. In 2007, the statistical machine translation approach was used for Google Translate, which supported English to Sanskrit translation along with 57 other world languages [8].

B. Rule-based Machine Translation
The rule-based model generates the translation of a source language using pre-defined and manually designed grammatical rules. Rule-based models are easy to implement, and they occupy comparatively small memory space. One of the significant advantages of this approach is that it does not require sizeable bilingual language corpora. However, the design of grammatical rules is a language-dependent, tedious, and highly time-consuming process. In 2012, a rule-based approach was carried out on English to Sanskrit translation and applied to 20 random English sentences. The author reported a BLEU score of 0.4204 [5]. In 2015, work was carried out on English to Sanskrit translation using context-free grammar techniques [6]. In 2017, the interlingua-based machine translation approach was adopted for Sanskrit to English translation [11]. The work has given significant insights into intermediate language representation and used the Paninian system for Karaka analysis.

C. Other Works on Machine Translation using Sanskrit
Two works have reported using the neural network approach to achieve translation with the Sanskrit language. In 2019, a corpus-based machine translation system with a neural network was developed for Sanskrit to Hindi translation. The author reported that their system is better than a rule-based system, with a 24 percent higher BLEU score and a 39.6 percent lower word error rate [12]. Another work carried out in 2019 uses a recurrent neural network for sequence-to-sequence translation [13]. In 2020, the augmented translation technique with Zero-Shot Translation was carried out to translate Sanskrit to Hindi. The author reported a BLEU score of 13.3, with the higher side stemming from pre-processing [20].

III. NEURAL MACHINE TRANSLATION
Neural Machine Translation, or NMT, is the most recent approach to achieving automatic machine translation. NMT uses a neural network to model the conditional probability of the target sequence over the source sequence. NMT has an excellent capability to overcome the shortcomings of traditional machine translation models and provide comparatively efficient, human-like fluent translation. Neural networks learn the source sequence and relate it to an appropriate target sequence, mimicking the way a human performs this process. The Recurrent Neural Network, or RNN, has been considered for this task, as RNNs model the long-term dependencies between source and target languages. Usually, RNNs suffer from the exploding gradient problem, which refers to the situation where network weights grow excessively due to an explosion of the long-term components, and the vanishing gradient problem, which refers to the situation where network weights are updated at a significantly lower rate of change so that the network cannot learn long-term components. This restricts vanilla RNNs from learning long-term dependencies [14].
The Recurrent Neural Network, or RNN, uses two significant variants – Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU) [15] – especially to overcome the long-term dependency learning problem of vanilla RNNs.

A. Encoder-Decoder Model
The Encoder-Decoder model is an extension of the vanilla RNN model, which makes use of two dedicated networks for encoding and decoding the language sequences, as shown in Figure 1. RNNs are good at mapping the input and output sequences when their alignment is known ahead of time. At training, the input sequence pair is provided to the model, and the model predicts the next word until it meets the sequence end markers [16].

B. Sequence to Sequence Learning with Attention
In sequence-to-sequence learning, the model collectively memorizes the whole input source sequence in a single vector, and the decoder uses these encoder states to generate the target sequence. This allows the model to learn small sequences well, but the model faces trouble learning large sequences, which are often encountered in language translation problems. One solution to overcome this and continue learning long sentences, even with more than 50 words, is to focus on a selective part of the source sequence [17]. Fundamentally, to overcome this problem, instead of encoding the whole sequence in a single vector, it is preferable to encode each word of the sequence into a vector [18] and use these vectors in the process of decoding. With this approach, small sequences have few vectors and large sequences have many vectors, since the total number of vectors equals the number of words in the given sequence.

Fig. 1 Encoder-Decoder Architecture

The hidden layer represents the actual meaning of the sentence, and it is then fed to the rest of the network to generate the sequence in the target language. This process is repeated until an acceptable translation is achieved. Let X be the source language and Y be the target language. The encoder network converts the source sentence x_1, x_2, x_3, ..., x_n into a fixed-dimension vector space. The decoder network's task is to predict one word at a time with the conditional probability given in Eq. 1.

P(Y | X) = P(Y | X_1, X_2, X_3, ..., X_k)   (1)

In Eq. 1, the given sequence X_1, X_2, X_3, ..., X_k is encoded by the encoder network into a fixed-dimension vector. Each term of the distribution is represented by the softmax layer, i.e., the last layer of the network, which ultimately returns the probability of each class label. The LSTM learns the conditional probability P(y_1, ..., y_{T'} | x_1, ..., x_T), where x_1, ..., x_T is the input sequence with its corresponding output sequence y_1, ..., y_{T'}, whose length T' may vary from T.
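To make this conditional factorisation concrete, the short sketch below accumulates the decoder's per-step probabilities in log space. The function name and the toy softmax outputs are illustrative placeholders only, not part of the system described in this work.

```python
import numpy as np

def sequence_log_prob(step_probs, target_ids):
    """Accumulate the factorisation P(y_1..y_T' | X) = prod_t P(y_t | v, y_1..y_{t-1})
    in log space, given the softmax output of each decoding step."""
    log_prob = 0.0
    for t, token_id in enumerate(target_ids):
        # step_probs[t] is the softmax distribution over the target vocabulary
        # produced at decoding step t, conditioned on the encoder vector v and
        # the previously generated tokens.
        log_prob += np.log(step_probs[t][token_id])
    return log_prob

# toy example: vocabulary of 4 tokens, target sequence of length 3
rng = np.random.default_rng(0)
probs = rng.dirichlet(np.ones(4), size=3)   # one softmax vector per decoding step
print(sequence_log_prob(probs, target_ids=[2, 0, 3]))
```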
It has been observed from the previous encoder-decoder architecture that the encoder produces a single representation of the given sequence at the end of the entire process. The decoder is forced to find the relevant information to perform the translation using only this encoder representation, which means the decoder requires this single vector for every piece of the translation. This is not a problem with smaller sequences, but it becomes hard to decode the entire sequence from a single vector as the sequence size increases. The attention mechanism is a way forward. In practice, with natural languages, it is not always appropriate to look only at the state immediately preceding the present state. Instead, some other states need to be looked at by the decoder. The foundational idea behind the attention mechanism is that the decoder network's output depends on a weighted combination of all the input sequence states rather than only the immediately previous one [17, 18].

A new architecture focusing on the attention mechanism was proposed in 2015, resolving long-term dependencies with LSTM. The architecture consists of a bidirectional RNN as an encoder and a decoder that simulates searching through the input sequence during decoding [18]. The goal is to maximize the conditional probability of the target sequence given the source sequence. After feeding the input sequence to the LSTM, the hidden state of the LSTM contains the sequence embedding. Finally, this representation is provided to the output LSTM having the hidden states v. Eq. 2 shows the calculation of the probability of the output sequence, and each conditional probability in the model is defined as in Eq. 3.

P(y_1, ..., y_{T'} | x_1, ..., x_T) = \prod_{t=1}^{T'} P(y_t | v, y_1, ..., y_{t-1})   (2)

P(y_i | y_1, ..., y_{i-1}, X) = g(y_{i-1}, s_i, c_i)   (3)

Here, s_i is the hidden state of the RNN at time i, which is computed with Eq. 4. The context vector c_i is similar to the vector v presented in Eq. 2.

s_i = f(s_{i-1}, y_{i-1}, c_i)   (4)

The context vector c_i depends on the sequence of annotations to which the encoder maps the input sequence, and it is computed with Eq. 5,

c_i = \sum_{j=1}^{T_x} \alpha_{ij} h_j   (5)

where \alpha_{ij} is a weight computed for each annotation h_j as in Eq. 6.

\alpha_{ij} = \exp(e_{ij}) / \sum_{k=1}^{T_x} \exp(e_{ik}),   with   e_{ij} = a(s_{i-1}, h_j)   (6)

This alignment model expresses how well the inputs around position j and the output at position i match. The alignment model is represented as a feedforward neural network. In traditional machine translation systems, this alignment is not explicitly modeled. Figure 2 depicts the functional architecture of the alignment model from [18].

Fig. 2 The architecture of the model trying to generate the t-th target word y_t when fed with the input sequence x_1, ..., x_T [18]

IV. EXPERIMENT SETUP
A. Dataset
A bilingual corpus of Sanskrit to Hindi language pairs has been developed. The corpus contains 10K Sanskrit sentences parallel translated into Hindi sentences, as shown in Table 1. The Sanskrit sentences were obtained from online and offline resources, with a major focus on real-life events. Help from the linguist community and Sanskrit scholars has been taken to develop and validate the human translation.

TABLE 1. Statistics of Sanskrit-Hindi Bilingual Corpus
Language   Samples   Tokens
Sanskrit   10650     76674
Hindi      10650     101690

B. System Environment Setup
The sequence-to-sequence machine translation model based on Bahdanau's attention [18] has been trained with the Sanskrit to Hindi bilingual dataset. The model is designed with 1024 embedding dimensions and Adam as the optimizer [19]. Further, the hyperparameters are tuned with trial-and-error methods. The model is trained with early stopping criteria on Tesla T4 GPUs with 16 GB of memory.
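As a concrete illustration of the attention computation in Eqs. (3)-(6), the following minimal NumPy sketch evaluates the alignment scores e_ij, the weights alpha_ij, and the context vector c_i for one decoding step. The weight matrices and dimensions are arbitrary placeholders for the additive (Bahdanau-style) scoring function a(.), not the trained model described above.

```python
import numpy as np

def additive_attention(s_prev, H, W_s, W_h, v_a):
    """One decoding step of the alignment model:
    e_ij = a(s_{i-1}, h_j) scored by a small feed-forward network,
    alpha_ij = softmax over j (Eq. 6), and c_i = sum_j alpha_ij * h_j (Eq. 5)."""
    scores = np.tanh(H @ W_h.T + s_prev @ W_s.T) @ v_a   # e_i1 ... e_iTx
    alphas = np.exp(scores - scores.max())
    alphas = alphas / alphas.sum()                       # Eq. (6)
    context = alphas @ H                                 # Eq. (5)
    return context, alphas

# toy dimensions: 6 source annotations of size 8, decoder state of size 10
rng = np.random.default_rng(1)
H = rng.normal(size=(6, 8))        # encoder annotations h_1..h_Tx
s_prev = rng.normal(size=10)       # previous decoder state s_{i-1}
W_h = rng.normal(size=(16, 8))     # projects annotations
W_s = rng.normal(size=(16, 10))    # projects the previous decoder state
v_a = rng.normal(size=16)          # scoring vector of the feed-forward aligner
context, alphas = additive_attention(s_prev, H, W_s, W_h, v_a)
print(alphas.round(3), context.shape)
```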
C. Data Pre-processing
The present work uses the Sa-Hi language pair from the dataset shown in Table 1. Spell normalization is a significant issue in data pre-processing with the Devanagari script. In Hindi text normalization, words of Perso-Arabic origin are specially taken care of in order to preserve the actual semantics. As the data encoded in Unicode has more than one way of storage, all words have been represented in the same way for normalization. Further, the pre-processing of numbers and named entities has been carried out to establish uniformity in the corpus.

V. RESULTS AND EVALUATION
The model was tested on more than a hundred sentences of the source Sanskrit language. The evaluation of the target Hindi language was carried out through two different approaches. The first approach works on score-based human evaluation. In this approach, four different scores have been proposed, as shown in Table 2. The score-based human evaluation approach is used for manual verification of model-generated target language sentences. Here, a human linguist has evaluated target sentences given the source sentences on a scale of 4, where a score of 4 represents a completely correct sentence in both syntactic and semantic aspects, and a score of 1 represents a sentence that is wrong in both syntactic and semantic aspects and delivers no meaning given the source sentence. In the second approach, an automatic evaluation of the target language with the BLEU metric [21] has been followed. The BLEU score is a widely used metric that is used to calculate the accuracy of model-generated sentences in comparison to reference sentences produced by a human linguist in the target language. The BLEU score has been considered in the range of 0 to 1.

TABLE 2. The Score based Human Evaluation
Score   Meaning
4       The translation is completely correct in both syntactic and semantic aspects.
3       The translation is not entirely correct, but it represents the partial semantic meaning of the source sentence.
2       The translation is syntactically correct but makes no sense in favor of the source sentence.
1       The translation is incorrect in both syntactic and semantic manner.

In testing, the model has obtained an accuracy of 88% with the score-based human evaluation method and a BLEU score of 0.92. However, when coming across new vocabulary, the model generates both syntactically and semantically incomplete sentences. Several sentences from the test data are shown in Appendix B. The attention plots have been presented for selected sentences, which are also part of the results shown in Table 3. It has been observed that the model delivers strong attention between words having a more significant frequency of occurrence with varieties of correlation. The attention plots for several results are shown in Appendix A. The Indic machine translation system has been deployed locally with a user-friendly web interface by integrating the neural machine translation model in the backend, as shown in Fig. 3.

Fig. 3 Indic Machine Translation System Interface
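The BLEU evaluation described above can be reproduced with standard tooling. The sketch below uses NLTK's sentence-level BLEU as one possible implementation; the tokenised sentence pair and the smoothing choice are illustrative assumptions, not the exact evaluation script used in this work.

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

reference = ["मैं", "बहुत", "व्यस्त", "हूँ"]   # tokenised human reference translation
candidate = ["मैं", "बहुत", "व्यस्त", "हूँ"]   # tokenised model output

# BLEU reported in the 0-1 range, with smoothing for short sentences
score = sentence_bleu([reference], candidate,
                      weights=(0.25, 0.25, 0.25, 0.25),
                      smoothing_function=SmoothingFunction().method1)
print(round(score, 2))
```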
VI. CONCLUSION
The work shows the significance of the attention mechanism in overcoming the long-term dependencies associated with the vanilla LSTM model during sequence-to-sequence learning. Being a low-resourced language, significantly less digital content is available for Sanskrit. Considering this as the challenge, the Sanskrit to Hindi bilingual parallel corpus has been constructed with more than 10K samples and 178,000 tokens. The corpus has been developed in association with the linguist community and used for training the neural machine translation model after the required pre-processing and validation. The LSTM-based sequence-to-sequence model has been trained with Bahdanau's attention on the parallel corpus. It has been observed from the experimentation that the model performs well by focusing only on the relevant portion of information in the sentence. After sufficient training with proper tuning of the hyperparameters, the model gives a human evaluation accuracy of 88% and a BLEU score of 0.92 on unseen Sanskrit sentences. From Table 3, it has been observed that the results do not meet the appropriate expectations for a few sentences, as the model comes across new vocabulary. The attention plots demonstrate the alignment between the source and target words.
ACKNOWLEDGEMENT
We would like to express our gratitude to the Indic linguist community. Their work has helped us to retrieve insights into both Sanskrit and Hindi grammar. We would like to acknowledge Shri Udit Sharma and Shri Harshad Joshi, who helped us construct and validate our parallel corpus. We are grateful to everyone who has directly or indirectly proven helpful in our work. We are also thankful to other researchers whose work helped us derive some conclusions and identify the problems.

REFERENCES
[1] D. Jitendra Nasriwala and V. Bakarola, "Computational Representation of Paninian Rules of Sanskrit Grammar for Dictionary-Independent Machine Translation," vol. 1046, Springer Singapore, (2019).
[2] A. C. Woolner, "Introduction to Prakrit," University of the Panjab, Lahore, (1917).
[3] P. Kiparsky, "On the Architecture of Panini's Grammar," in Sanskrit Computational Linguistics: First and Second International Symposia, Revised Selected and Invited Papers, Berlin, Heidelberg: Springer-Verlag, (2009), 33–94.
[4] B. Panchal, V. Bakrola, and D. Dabhi, "An Efficient Approach of Knowledge Representation Using Paninian Rules of Sanskrit Grammar," in Recent Findings in Intelligent Computing Techniques, (2018), 199–206.
[5] V. Mishra and R. B. Mishra, "English to Sanskrit Machine Translation System: A Rule-Based Approach," Int. J. Adv. Intell. Paradigm., vol. 4, no. 2, (2012), doi: 10.1504/IJAIP.2012.048144, 168–184.
[6] P. Bahadur, A. Jain, and D. S. Chauhan, "Architecture of English to Sanskrit machine translation," IntelliSys 2015 – Proc. 2015 SAI Intell. Syst. Conf., (2015), doi: 10.1109/IntelliSys.2015.7361204, 616–624.
[7] P. Koehn, Statistical Machine Translation. Cambridge University Press, (2010).
[8] P. D. Mane and A. Hirve, "Study of Various Approaches in Machine Translation for Sanskrit Language," vol. 2, (2013), 383–387.
[9] T. Siddiqui and U. S. Tiwary, Natural Language Processing and Information Retrieval. Oxford University Press, (2015).
[10] P. R. V. Veda, "Computer Processing of Sanskrit," C-DAC, Pune, (1992).
[11] H. S. Sreedeepa and S. M. Idicula, "Interlingua based Sanskrit-English machine translation," Proc. IEEE Int. Conf. Circuit, Power Comput. Technol. (ICCPCT), (2017), doi: 10.1109/ICCPCT.2017.8074251.
[12] M. Singh, R. Kumar, and I. Chana, "Corpus based Machine Translation System with Deep Neural Network for Sanskrit to Hindi Translation," Procedia Comput. Sci., vol. 167, (2020), doi: 10.1016/j.procs.2020.03.306, 2534–2544.
[13] N. Koul and S. S. Manvi, "A proposed model for neural machine translation of Sanskrit into English," Int. J. Inf. Technol., (2019), doi: 10.1007/s41870-019-00340-8.
[14] A. Shewalkar, D. Nyavanandi, and S. A. Ludwig, "Performance Evaluation of Deep Neural Networks Applied to Speech Recognition: RNN, LSTM and GRU," J. Artif. Intell. Soft Comput. Res., vol. 9, no. 4, doi: 10.2478/jaiscr-2019-0006, 235–245.
[15] Y. Hu, A. Huber, and S.-C. Liu, "Overcoming the vanishing gradient problem in plain recurrent networks," (2018). [Online]. Available: https://openreview.net/forum?id=Hyp3i2xRb.
[16] I. Sutskever, O. Vinyals, and Q. V. Le, "Sequence to sequence learning with neural networks," Adv. Neural Inf. Process. Syst., vol. 4, (2014), 3104–3112.
[17] M. T. Luong, H. Pham, and C. D. Manning, "Effective approaches to attention-based neural machine translation," Proc. EMNLP 2015 Conf. Empir. Methods Nat. Lang. Process., (2015), doi: 10.18653/v1/d15-1166, 1412–1421.
[18] D. Bahdanau, K. H. Cho, and Y. Bengio, "Neural machine translation by jointly learning to align and translate," 3rd Int. Conf. Learn. Represent. (ICLR 2015) – Conf. Track Proc., (2015), 1–15.
[19] D. P. Kingma and J. Ba, "Adam: A Method for Stochastic Optimization," (2017).
[20] R. Kumar, P. Jha, and V. Sahula, "An Augmented Translation Technique for Low Resource Language Pair: Sanskrit to Hindi Translation," in Proc. 2019 2nd Int. Conf. on Algorithms, Computing and Artificial Intelligence (ACAI '19), Sanya, China, (2019), doi: 10.1145/3377713.3377774.
[21] K. Papineni, S. Roukos, T. Ward, and W.-J. Zhu, "BLEU: A Method for Automatic Evaluation of Machine Translation," Proc. 40th Annual Meeting of the Association for Computational Linguistics (ACL), Philadelphia, July (2002), 311–318.

APPENDIX A
TABLE 4. Attention Plots of Sample Translations (the attention plot figures are omitted; the source and target sentences of each plot are listed below)
Source: <start> अहं बहु व्यस्तः अस्स्ि। <end>  →  Target: िैं बहुत व्यस्त हूँ। <end>
Source: <start> अहं एकास्कनी अस्स्ि। <end>  →  Target: िैं अकेली हूँ। <end>
Source: <start> अहं ततुं शक्नोस्ि <end>  →  Target: िैं तैर सकता हूँ <end>
Source: <start> अन्तः आगन्तुं शक्नोस्ि? <end>  →  Target: अंदर आ सकता हूँ क्या? <end>

APPENDIX B
TABLE 3. Sample Translation through the System
Source: अन्तः आगन्तुं शक्नोमि? | Reference: अंदर आ सकता हूँ क्या? | Translated: अंदर आ सकता हूँ क्या?
Source: अहं ततुं शक्नोमि | Reference: िैं तैर सकता हूँ | Translated: िैं तैर सकता हूँ
Source: अहं एकामकनी अममि | Reference: िैं अकेली हूँ | Translated: िैं अकेली हूँ
Source: मितरौ मवमय बालकमय कृते दामयत्ववाहकौ मतः | Reference: िातास्िता अिने बच्चों की स्हफाज़त के स्लए स्ज़म्िेदार होते हैं | Translated: िातास्िता अिने बच्चों की स्हफाज़त के स्लए स्ज़म्िेदार होते हैं
Source: जािानदेशः मवश्वमय देशेषत एकः अर्थतुंत्रः देशः अममत | Reference: जािान दतस्नया के सबसे ताकतशाली अर्थतंत्रों िें से एक है | Translated: जािान दतस्नया के सबसे ताकतशाली अर्थतंत्रों िें से एक है
Source: प्रवेशात् िूवुं िादका त्याज्या | Reference: अिने हार् िें िरना उसके बाहर जाने की कोस्शश करो | Translated: अिने हार् िें िरना उसके बाहर जाने की कोस्शश करो
Source: अहं बहु व्यमतः अममि | Reference: िैं बहुत व्यस्त हूँ | Translated: िैं बहुत व्यस्त हूँ
synthetic_cpt
7
TransformLLM_Adapting_Large_Language_Models_via_LLM-Transformed_Reading_Comprehension_Text.pdf
TRANSFORMLLM: ADAPTING LARGE LANGUAGE MODELS VIA LLM-TRANSFORMED READING COMPREHENSION TEXT

Iftach Arbel, School of Mathematical Sciences, Tel Aviv University, Tel Aviv, Israel, [email protected]
Yehonathan Refael, Department of Electrical Engineering, Tel Aviv University, Tel Aviv, Israel, [email protected]
Ofir Lindenbaum, The Faculty of Engineering, Bar Ilan University, Ramat Gan, Israel, [email protected]

ABSTRACT
Large Language Models (LLMs) have shown promise in highly-specialized domains, however challenges are still present in aspects of accuracy and costs. These limitations restrict the usage of existing models in domain-specific tasks. While fine-tuning pre-trained models has shown promising results, this process can be computationally expensive and require massive datasets of the specialized application at hand. In this work, we bridge that gap. We have developed Phi-2-Legal and Mistral-Legal-7B, which are language models specifically designed for legal applications. These models are based on Phi-2 and Mistral-7B-v0.1, and have gone through continued pre-training with over 500 million tokens of legal texts. Our innovative approach significantly improves capabilities in legal tasks by using Large Language Models (LLMs) to convert raw training data into reading comprehension text. Our legal LLMs have demonstrated superior performance in legal benchmarks, even outperforming models trained on much larger datasets with more resources. This work emphasizes the effectiveness of continued pre-training on domain-specific texts, while using affordable LLMs for data conversion, which gives these models domain expertise while retaining general language understanding capabilities. While this work uses the legal domain as a test case, our method can be scaled and applied to any pre-training dataset, resulting in significant improvements across different tasks. These findings underscore the potential of domain-adaptive pre-training and reading comprehension for the development of highly effective domain-specific language models.

1 Introduction
Domain-adaptive pre-training of Large Language Models (LLMs), also known as continued pre-training on domain-specific corpora [12], is a technique that has been proven effective in adapting LLMs to specific domains [35, 5]. This approach allows LLMs to leverage their general language understanding capabilities while incorporating domain-specific knowledge, which can benefit downstream domain-specific tasks at reduced costs [22, 26, 27]. In this process, the LLM is further pre-trained using raw data from the specific domain, such as biomedicine, finance, or law. This helps the LLM gain domain knowledge, which is demonstrated by its improved performance in fine-tuning and knowledge probing evaluations within those domains [20, 1, 2]. However, a notable drawback is that continued pre-training on raw domain corpora can lead to a significant drop in the LLM's prompting performance, potentially due to the specialized nature of the domain-specific data [11]. Despite this trade-off, domain-adaptive pre-training remains a promising approach for adapting LLMs to specific domains, capitalizing on their general language understanding capabilities while tailoring them to domain-specific tasks and knowledge.
Ongoing research efforts aim to mitigate the potential negative impacts on prompting performance while maximizing the benefits of domain-specific knowledge acquisition [10, 28]. The notion of reading comprehension was suggested in [6], where, instead of continuing to train a large language model on domain-specific raw data, the raw texts are converted into reading comprehension materials. In this approach, each text is followed by related tasks, transitioning the model from a "reading" phase to a "comprehension" phase. These tasks, in a question-answer format, enhance the model's ability to respond to questions by simulating human learning practices.

We introduce novel methods to expose the models to a corpus during training, blending a variety of legal reading comprehension tasks, as well as general language data. To demonstrate the performance of our method, we utilize Phi-2 and Mistral-7B as base models, which were further pre-trained on 500 million tokens of legal corpus. Our new legal LLMs present state-of-the-art performance on legal benchmarks, surpassing models trained on larger corpora with significantly more resources.

Our main contributions are: (i) Utilizing LLMs to transform raw text into reading comprehension text that is used for continued pre-training of LLMs in legal domain tasks. (ii) Developing an extended evaluation scheme for legal LLMs. Existing legal benchmarks are currently fragmented and constructed for classification models with multiple-choice responses. Our evaluation protocol adapts MMLU [14] (legal subsets) and LexGLUE [3] for use with generative, GPT-style [24] transformer [31] models. While the focus of this work is on the legal domain, both the transformation and evaluation protocols are easily applicable to other domains, including finance, biology, and more.

2 Using LLMs to Transform Raw Text
Building upon the foundation of AdaptLLM [6], which converts raw legal text into reading comprehension tasks, we draw from the concept of human learning through reading comprehension. This approach, where practice after reading improves the ability to answer questions based on acquired knowledge, inspired our work. Rather than continuing to train large language models on raw domain-specific corpora, AdaptLLM proposes converting the raw text into structured reading comprehension tasks, with each passage followed by questions. While AdaptLLM leverages a set of rules and heuristics to perform this transformation, its reliance on such methods poses limitations, especially in the quality of the resulting data. These challenges highlight a critical need for more sophisticated text transformation techniques [21, 29]. Our solution addresses this by leveraging large language models (LLMs) to generate high-quality training data. With the decreasing costs of LLM inference, we can move beyond structured heuristics, using LLMs to create comprehensive reading comprehension datasets efficiently. To improve text quality, we designed a prompt database that guides the model's capabilities. LLMs were tasked with generating answers and additional questions, and with transforming the raw legal texts based on tailored prompts. Through further refinement and post-processing, we developed a superior legal reading comprehension dataset, offering enhanced performance for domain adaptation.
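A minimal sketch of this transformation step is shown below. The prompt wording and the call_llm helper are hypothetical placeholders standing in for the prompt database and for whichever open-source or proprietary model performs the transformation; they only illustrate the overall pattern of converting a raw passage into reading comprehension text, not the exact prompts used in this work.

```python
# Illustrative only: prompt text and call_llm are placeholders, not the
# actual prompt database or model interface used for this dataset.
TRANSFORM_PROMPT = """You are given a raw legal text.
1. Write 3 reading-comprehension questions covering its key points.
2. Answer each question using only the text.
Return the result as a numbered list of question/answer pairs.

TEXT:
{passage}
"""

def to_reading_comprehension(passage: str, call_llm) -> str:
    """Build one training example: the raw passage followed by
    LLM-generated question/answer tasks."""
    qa_block = call_llm(TRANSFORM_PROMPT.format(passage=passage))
    return f"{passage}\n\n### Comprehension tasks\n{qa_block}"
```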
We primarily used open-source models ranging from 7B to 70B for data transformation. These models were selected based on factors like cost and operational efficiency. Upon reviewing the outputs of these open-source models in comparison to more advanced proprietary models, like those from OpenAI and proprietary Mistral models, we observed no significant differences in quality for our transformation task. However, to ensure a diverse data distribution and to benefit from knowledge distillation of the most powerful models, we also transformed a portion of the data using state-of-the-art proprietary (closed-source) models. Some transformations were also applied to the general, non-legal data to generate Chain-of-Thought (CoT) data and improve the reasoning capabilities of the model, which we find crucial in the legal domain. For the same reason, we incorporated math and coding data in the training set, striving to boost logical and inference capabilities.

3 Data Collection and Processing
Our data collection focused primarily on English-language legal texts, drawing heavily from the United States, which follows the common law system. We also included materials from Canada and the United Kingdom, both of which also adhere to common law principles. This emphasis on jurisdictions with common law traditions ensures that our dataset aligns closely with the legal usage specific to the United States, which is the primary focus of our model. Through meticulous curation and rigorous cleaning procedures, we compiled a comprehensive corpus tailored to capture the intricacies of legal language within the United States federal jurisdiction.

Throughout the development of the model and the data collection process, our goal was not to expose the model to all existing legal data. Instead, we focused on providing the model with a strong foundation of legal knowledge, background, understanding, and reasoning abilities. Our aim is for the model to be able to handle various legal tasks, including document drafting, reviews, and answering questions, by equipping it with these tools. However, if you ask the model about specific data such as cases or laws, it may provide inaccurate information. In such cases, Retrieval-Augmented Generation (RAG) is the recommended solution. Utilizing a robust legal LLM, along with a retrieval model and a comprehensive database, will yield reliable and solid results.

The main sources of legal data were raw text from the FreeLaw subset of The Pile [8] and Pile of Law [13]. The Pile dataset does not have any indexing, therefore we simply sample data from it, while using word count to estimate the number of raw tokens we attained. Pile of Law, on the other hand, does index the data by instances, so we could sample data that we find appropriate, including contracts, SEC filings, and legal memos, to name a few. This indexing also allowed us to avoid certain data instances, such as Congressional hearings and European laws. In order to avoid regression of general language capabilities during the fine-tuning process, we integrated data from the original training distribution, a strategy supported by previous studies [33, 6]. We introduced widely available "general" instruction data from various sources, including chain-of-thought (CoT), chat, code, and general instruction datasets. The datasets were sampled from a diverse range of resources, ensuring a broad spectrum of language usage and contexts, thereby preserving the model's general language capabilities while enhancing its performance in the legal domain.
The set of datasets used in this paper is presented in Table 1.

Dataset             Domain             Tokens   License
The Pile (FreeLaw)  Legal              300M     MIT
Pile of Law         Legal              180M     CC-BY-NC-SA-4.0
USClassActions      Legal              20M      GPL-3.0
AQUA-RAT            Math (CoT)         5M       Apache-2.0
ECQA                Commonsense (CoT)  4M       Apache-2.0
EntailmentBank      Reasoning (CoT)    3M       Apache-2.0
UltraChat           Chat               140M     MIT
Code-Feedback       Code               60M      Apache-2.0
OpenOrca            Instruction        300M     MIT
Table 1: A list of used data sources.

Examples from the training data are shown in Table 3, in the Training Samples Example section B in the appendix.

4 Model Architecture and Training
We have trained two versions of the legal model: Phi-2-Legal and Mistral-Legal-7B. As suggested by their names, these models are based on Phi-2 [16] and Mistral-7B [17]. We selected these models because they demonstrate cutting-edge performance, are available for commercial use, and are well-supported by inference libraries (vLLM [19], etc.) and for server-less deployment (Fireworks, Together, etc.).

4.1 Training considerations
To save on resources and considering the very limited availability of GPUs, we opt to train the models using LoRA [15], avoiding a full parameter update. LoRA is a Parameter-Efficient Fine-Tuning (PEFT) technique proven to match the results of full-parameter updates while requiring significantly fewer training resources (note that any state-of-the-art variant of LoRA [25, 4, 32, 34] may be used as an alternative). Considering the vast training data and project scope, we train a considerable amount of parameters. Both models used a LoRA r = 64, and updated all attention components (Q, K, V, O), as well as the feed-forward (FF) layers. These models can support context lengths of up to 2K and 4K for Phi-2 and Mistral-7B, respectively (Mistral context length is without "sliding-window attention").

In order to improve training efficiency, a common technique involves packing the training data. Packing refers to concatenating examples in a greedy fashion until the full context length is reached, while using an EOS (end of sequence) token to separate them. We note that it is not possible to perfectly fit examples into the context length, so any overflow from one example is moved to the next training example. While this technique generally works well for model pre-training and most fine-tuning scenarios, it is unsuitable for our case. Since we are focused on reading comprehension, where the model is presented with raw text followed by a series of questions, cutting examples poses a risk to the capabilities of the fine-tuned model. Therefore, we use a packing mechanism which concatenates examples without cutting any of them. Achieving a perfect concatenation is not possible, as this problem is essentially the bin-packing problem [23, 18], which is NP-hard. However, a good approximation is simply sorting the data by length and packing examples using a greedy algorithm. Using this algorithm, we compressed the training set by 70%-80%, depending on context length.
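A minimal sketch of such a packing step is given below, assuming examples are already tokenised into lists of token ids; it performs a first-fit greedy pass over length-sorted examples and reserves one token per example for the EOS separator. This illustrates the approach rather than the exact implementation used here.

```python
def pack_examples(examples, max_len, sep_tokens=1):
    """Greedy first-fit packing of tokenised examples into bins of at most
    `max_len` tokens, without cutting any example."""
    bins = []          # each bin is a list of whole examples
    bin_sizes = []     # running token count per bin (incl. EOS separators)
    for ex in sorted(examples, key=len, reverse=True):
        needed = len(ex) + sep_tokens
        for i, size in enumerate(bin_sizes):
            if size + needed <= max_len:      # fits in an existing bin
                bins[i].append(ex)
                bin_sizes[i] += needed
                break
        else:                                  # otherwise open a new bin
            bins.append([ex])
            bin_sizes.append(needed)
    return bins

# e.g. packed = pack_examples(tokenised_dataset, max_len=4096)
# (tokenised_dataset is a placeholder name for the prepared training examples)
```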
5 Evaluation
We evaluate the models on the MMLU (legal subsets) and LexGLUE datasets. We aim for a simple and accessible evaluation scheme that is easy to understand and measures model accuracy. MMLU is typically evaluated using the log probabilities of tokens, as in [9]. However, this type of model evaluation has two main drawbacks: (1) Attaining raw log probabilities requires setting up servers with GPUs, and server-less inference providers are limited in the number of log probabilities they output. (2) Measuring against log probabilities may encounter issues due to tokenization mismatches. LexGLUE normally evaluates classification models rather than generative ones. Therefore, we adapt the benchmark prompts for instruct-type models, detailing the various options and asking for the most suitable option to be selected. This means that models may be evaluated quickly and affordably using inference frameworks such as vLLM, or server-less inference providers. We also utilize recent advancements in decoding techniques, which allow us to define a closed list of possible options. The result is a transparent and simple evaluation scheme, suitable to be used with chat-aligned models.

MMLU is a straightforward multiple-question benchmark. LexGLUE, on the other hand, has subsets that are simple multiple-question tasks, while others have 8-100 label options. In LexGLUE, we only use the subsets that are suitable for use with generative models. For that reason, the EUR-LEX subset was not used, as it only has numerical labels, not verbal, meaningful ones, while the SCOTUS subset was avoided as many of its instances are longer than a 4K token window and it therefore has very few usable data instances. Lastly, we did not use the ECtHR subsets, as they refer to proceedings brought to the European Court of Human Rights (ECtHR) and therefore rely on the European Convention on Human Rights, which is a codified document more typical of civil law systems [30].

Our legal models were benchmarked against their underlying base models, Phi-2 and Mistral-7B, to measure the improvement achieved by continued pre-training. The Mistral-based model is also compared to the legal variant of the AdaptLLM model, which also uses continued pre-training on reading comprehension text. Additionally, we compare it to Saul-7B [7], another recent legal model that uses at least 30x more training data and a full-parameter update (compared to our LoRA training). We are not aware of legal models smaller than 7B parameters; therefore, the Phi-2 models are the only ones in this category. These benchmark results are presented in Table 2.

                   MMLU                                                 LexGLUE
                   International Law  Jurisprudence  Professional Law  LEDGAR  CaseHOLD  Unfair ToS
3B Models
Phi-2              0.661              0.620          0.379             0.143   0.310     0.233
Phi-2-Legal        0.667              0.711          0.417             0.603   0.580     0.385
7B Models
Mistral-7B         0.736              0.694          0.412             0.506   0.563     0.366
AdaptLLM           0.570              0.528          0.361             0.463   0.500     0.513
Saul-7B            0.694              0.630          0.432             0.559   0.658     0.803
Mistral-Legal-7B   0.811              0.712          0.427             0.739   0.778     0.806
Table 2: Benchmark results for 3B and 7B Models

Both classes of models show considerable improvement over their base models. Mistral-Legal-7B performs better than AdaptLLM in all subsets, which highlights the benefit of transforming raw data using LLMs compared to heuristic and regex rules. It also performs better than Saul-7B in five out of six subsets. We observed the most significant performance gains in the LexGLUE subsets. We suspect this is because LexGLUE is a more niche benchmark, receiving less attention from model developers. In contrast, the MMLU benchmark is highly popular, and the original models were already extensively optimized for it, making further improvements more challenging. Nevertheless, our method still managed to enhance results, with Phi-2-Legal outperforming the original Mistral-7B in all but one of the benchmark subsets.
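The option-selection step of this evaluation scheme can be sketched as follows. The matching heuristic shown (scanning the generated answer for one of the closed-list options) is an illustrative simplification, not the exact constrained-decoding setup used in this work.

```python
def pick_option(generated: str, options: list[str]) -> int:
    """Map a model's free-text answer onto the closed list of options,
    e.g. options = ["A", "B", "C", "D"] or verbal LexGLUE labels.
    Returns the index of the first option found, or -1 (counted wrong)."""
    text = generated.strip().lower()
    for idx, opt in enumerate(options):
        if text.startswith(opt.lower()) or opt.lower() in text:
            return idx
    return -1

def accuracy(predictions, gold_indices, options_per_item):
    hits = sum(pick_option(p, opts) == g
               for p, g, opts in zip(predictions, gold_indices, options_per_item))
    return hits / len(gold_indices)
```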
6 Conclusion
In this work, we presented a framework for domain-specific adaptation of LLMs using continued pre-training. By training models in the legal domain, we have shown that it is possible to obtain high-performing models with relatively low resources. To the best of our knowledge, this is the first time this technique has been used. Future research could employ Reinforcement Learning from Human Feedback (RLHF) to enhance the model's alignment with human preferences. This would lead to improved generation capabilities and more refined outputs, advancing the applicability and efficacy of the model in diverse applications.

Limitations
The models were evaluated using multiple-question benchmarks, which serve as proxies for their legal capabilities. However, a dedicated framework for evaluating their text generation capabilities, particularly in specific applications such as contracts and reviews, is necessary to obtain a comprehensive assessment. The models are not intended or able to provide factual information; they may generate information that is false or misleading, and they may reflect social and cultural biases from their training data, both the original pre-training data as well as our continued pre-training data.

References
[1] Iz Beltagy, Kyle Lo, and Arman Cohan. Scibert: A pretrained language model for scientific text. arXiv preprint arXiv:1903.10676, 2019.
[2] Ilias Chalkidis, Manos Fergadiotis, Prodromos Malakasiotis, Nikolaos Aletras, and Ion Androutsopoulos. Legal-bert: The muppets straight out of law school. arXiv preprint arXiv:2010.02559, 2020.
[3] Ilias Chalkidis, Abhik Jana, Dirk Hartung, Michael Bommarito, Ion Androutsopoulos, Daniel Martin Katz, and Nikolaos Aletras. Lexglue: A benchmark dataset for legal language understanding in english. arXiv preprint arXiv:2110.00976, 2021.
[4] Y. Chen, Y. Li, and X. Liu. Lora+: Improving low-rank adaptation with parameter-specific learning rates. arXiv preprint arXiv:2305.16045, 2023.
[5] Daixuan Cheng, Shaohan Huang, Jianfeng Liu, Yuefeng Zhan, Hao Sun, Furu Wei, Denvy Deng, and Qi Zhang. Snapshot-guided domain adaptation for electra. In Findings of the Association for Computational Linguistics: EMNLP 2022, pages 2226–2232, 2022.
[6] Daixuan Cheng, Shaohan Huang, and Furu Wei. Adapting large language models via reading comprehension. arXiv preprint arXiv:2309.09530, 2023.
[7] Pierre Colombo, Telmo Pessoa Pires, Malik Boudiaf, Dominic Culver, Rui Melo, Caio Corro, Andre FT Martins, Fabrizio Esposito, Vera Lúcia Raposo, Sofia Morgado, et al. Saullm-7b: A pioneering large language model for law. arXiv preprint arXiv:2403.03883, 2024.
[8] Leo Gao, Stella Biderman, Sid Black, Laurence Golding, Travis Hoppe, Charles Foster, Jason Phang, Horace He, Anish Thite, Noa Nabeshima, et al. The pile: An 800gb dataset of diverse text for language modeling. arXiv preprint arXiv:2101.00027, 2020.
[9] Leo Gao, Jonathan Tow, Baber Abbasi, Stella Biderman, Sid Black, Anthony DiPofi, Charles Foster, Laurence Golding, Jeffrey Hsu, Alain Le Noac'h, Haonan Li, Kyle McDonell, Niklas Muennighoff, Chris Ociepa, Jason Phang, Laria Reynolds, Hailey Schoelkopf, Aviya Skowron, Lintang Sutawika, Eric Tang, Anish Thite, Ben Wang, Kevin Wang, and Andy Zou. A framework for few-shot language model evaluation, 12 2023.
URL https://zenodo.org/records/10256836.
[10] Shahriar Golchin, Mihai Surdeanu, Nazgol Tavabi, and Ata Kiapour. Do not mask randomly: Effective domain-adaptive pre-training by masking in-domain keywords, 2023.
[11] Yu Gu, Robert Tinn, Hao Cheng, Michael Lucas, Naoto Usuyama, Xiaodong Liu, Tristan Naumann, Jianfeng Gao, and Hoifung Poon. Domain-specific language model pretraining for biomedical natural language processing. ACM Transactions on Computing for Healthcare (HEALTH), 3(1):1–23, 2021.
[12] Suchin Gururangan, Ana Marasović, Swabha Swayamdipta, Kyle Lo, Iz Beltagy, Doug Downey, and Noah A Smith. Don't stop pretraining: Adapt language models to domains and tasks. arXiv preprint arXiv:2004.10964, 2020.
[13] Peter Henderson, Mark Krass, Lucia Zheng, Neel Guha, Christopher D Manning, Dan Jurafsky, and Daniel Ho. Pile of law: Learning responsible data filtering from the law and a 256gb open-source legal dataset. Advances in Neural Information Processing Systems, 35:29217–29234, 2022.
[14] Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt. Measuring massive multitask language understanding. arXiv preprint arXiv:2009.03300, 2020.
[15] Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. Lora: Low-rank adaptation of large language models. arXiv preprint arXiv:2106.09685, 2021.
[16] Mojan Javaheripi, Sébastien Bubeck, Marah Abdin, Jyoti Aneja, Caio César Teodoro Mendes, Weizhu Chen, Allie Del Giorno, Ronen Eldan, Sivakanth Gopi, Suriya Gunasekar, Piero Kauffmann, Yin Tat Lee, Yuanzhi Li, Anh Nguyen, Gustavo de Rosa, Olli Saarikivi, Adil Salim, Shital Shah, Michael Santacroce, Harkirat Singh Behl, Adam Taumann Kalai, Xin Wang, Rachel Ward, Philipp Witte, Cyril Zhang, and Yi Zhang. Phi-2: The surprising power of small language models. Microsoft Research Blog, 2023.
[17] Albert Q Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lucile Saulnier, et al. Mistral 7b. arXiv preprint arXiv:2310.06825, 2023.
[18] Bernhard Korte and Jens Vygen. Bin-packing. Combinatorial Optimization: Theory and Algorithms, pages 489–507, 2018.
[19] Woosuk Kwon, Zhuohan Li, Siyuan Zhuang, Ying Sheng, Lianmin Zheng, Cody Hao Yu, Joseph E. Gonzalez, Hao Zhang, and Ion Stoica. Efficient memory management for large language model serving with pagedattention. In Proceedings of the ACM SIGOPS 29th Symposium on Operating Systems Principles, 2023.
[20] Jinhyuk Lee, Wonjin Yoon, Sungdong Kim, Donghyeon Kim, Sunkyu Kim, Chan Ho So, and Jaewoo Kang. Biobert: a pre-trained biomedical language representation model for biomedical text mining. Bioinformatics, 36(4):1234–1240, 2020.
[21] Fei Liu, Xialiang Tong, Mingxuan Yuan, Xi Lin, Fu Luo, Zhenkun Wang, Zhichao Lu, and Qingfu Zhang. Evolution of heuristics: Towards efficient automatic algorithm design using large language model, 2024. URL https://arxiv.org/abs/2401.02051.
[22] Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692, 2019.
[23] Silvano Martello and Paolo Toth. Lower bounds and reduction procedures for the bin packing problem. Discrete applied mathematics, 28(1):59–70, 1990.
[24] Alec Radford, Karthik Narasimhan, Tim Salimans, Ilya Sutskever, et al.
Improving language understanding by generative pre-training. Preprint. Work in progress, 2018.
[25] Yehonathan Refael, Jonathan Svirsky, Boris Shustin, Wasim Huleihel, and Ofir Lindenbaum. Adarankgrad: Adaptive gradient-rank and moments for memory-efficient llms training and fine-tuning, 2024. URL https://arxiv.org/abs/2410.17881.
[26] Andy Rosenbaum, Saleh Soltan, Wael Hamza, Marco Damonte, Isabel Groves, and Amir Saffari. Clasp: Few-shot cross-lingual data augmentation for semantic parsing. In Proceedings of the 2nd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 12th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 444–462, 2022.
[27] Andy Rosenbaum, Saleh Soltan, Wael Hamza, Yannick Versley, and Markus Boese. Linguist: Language model instruction tuning to generate annotated utterances for intent classification and slot tagging, 2022.
[28] Amit Rozner, Barak Battash, Lior Wolf, and Ofir Lindenbaum. Knowledge editing in language models via adapted direct preference optimization. arXiv preprint arXiv:2406.09920, 2024.
[29] Vasu Sharma, Karthik Padthe, Newsha Ardalani, Kushal Tirumala, Russell Howes, Hu Xu, Po-Yao Huang, Shang-Wen Li, Armen Aghajanyan, Gargi Ghosh, and Luke Zettlemoyer. Text quality-based pruning for efficient training of language models, 2024. URL https://arxiv.org/abs/2405.01582.
[30] The Danish Institute for Human Rights. The European Court of Human Rights, 2022. URL https://www.humanrights.dk/research/about-human-rights/human-rights-europe/european-court-human-rights. Accessed: 2024-07-01.
[31] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. Advances in neural information processing systems, 30, 2017.
[32] Y. Wang, H. Zhang, and J. Li. Adaptive lora: A rank-adaptive method for efficient llm fine-tuning. arXiv preprint arXiv:2306.06188, 2023.
[33] Tongtong Wu, Linhao Luo, Yuan-Fang Li, Shirui Pan, Thuy-Trang Vu, and Gholamreza Haffari. Continual learning for large language models: A survey. arXiv preprint arXiv:2402.01364, 2024.
[34] Z. Xu, J. Zhang, and X. Wang. Sparse lora: Efficient fine-tuning of large language models with sparse low-rank adaptation. arXiv preprint arXiv:2303.17626, 2023.
[35] Yunzhi Yao, Shaohan Huang, Wenhui Wang, Li Dong, and Furu Wei. Adapt-and-distill: Developing small, fast and effective pretrained language models for domains. arXiv preprint arXiv:2106.13474, 2021.

A Ethics statements
As with all large language models, there is an inherent risk of the model producing biased or toxic responses, which remains a significant concern and requires ongoing attention. Given its application in the legal domain, the language model's biases could have more severe implications than those in general usage scenarios. Moreover, mistakes and hallucinations produced by the model can result in costly errors for the end user.

B Training Samples Example
In Table 3, we provide examples of our text transformations, including prompts and responses by the LLM.
Task Type: Commonsense Reasoning
Prompt: What is an explanation to this paragraph from the text, starting with: "We review the trial court's denial of a bond-reduction request under an abuse of discretion standard..."?
Response: This paragraph explains the standard of review used by the court when assessing the trial court's decision to deny a bond-reduction request, which is an abuse of discretion standard, and clarifies that the trial court's decision must be arbitrary or unreasonable to be considered an abuse of discretion.

Task Type: Word-to-Text
Prompt: Provide a definition to these two legal terms from the text: "habeas corpus", "abuse of discretion".
Response: "habeas corpus" - a writ requiring a person under arrest to be brought before a judge or into court, especially to secure the person's release unless lawful grounds are shown for their detention. "abuse of discretion" - a legal standard used to review a decision made by a lower court or administrative body, which occurs when the decision is arbitrary, capricious, or unreasonable.

Task Type: Natural Language Inference
Prompt: Does the sentence "The chapter 13 case was converted to a case under chapter 7 in July of 1993" entail the sentence "The record does not show the present value of the vehicle, but it is worth more than the $300 remaining due under the chapter 13, and much less than the $6,000 remaining due under the original contract"?
Response: The two sentences are neutral with regard to each other. The first sentence discusses the conversion of the case from Chapter 13 to Chapter 7, while the second sentence talks about the value of the vehicle in question.

Task Type: Summarization
Prompt: Write a summary for this paragraph from the text, starting with: "District courts have jurisdiction to hear appeals from final judgments, orders, and decrees issued by bankruptcy judges...".
Response: This paragraph explains the jurisdiction of district courts in hearing appeals from bankruptcy courts and the standards of review for legal conclusions and findings of fact.

Table 3: Examples of raw text transformed to reading comprehension tasks, using LLMs.
synthetic_cpt
1
Chemometric_Quality_Assessment_of_Doxylamine_Succinate_with_Its_Degradation_product;_Implementation_of_Two_Predictive_Models_on_UV-Spectrophotometric_Data_of_Anti-emetic_binary_Mixture.pdf
Artificial Intelligence for reverse engineering: application to detergents using Raman spectroscopy.

Pedro Marote1, Marie Martin1, Anne Bonhommé2, Pierre Lantéri1, Yohann Clément1*
1 Université de Lyon, Institut des Sciences Analytiques, UMR 5280 CNRS, Université Claude Bernard Lyon 1, 5 rue de la Doua, 69100 Villeurbanne, France.
2 Université de Lyon, Université Claude Bernard Lyon 1, CNRS, IRCELYON, F-69626, 2 avenue A. Einstein, 69626, Villeurbanne, France

Keywords: Chemometrics, Machine Learning, mixture design, Artificial Intelligence, RAMAN spectroscopy, surfactants characterization

Abstract
The reverse engineering of a complex mixture, regardless of its nature, has become significant today. Being able to quickly assess the potential toxicity of new commercial products in relation to the environment presents a genuine analytical challenge. The development of digital tools (databases, chemometrics, machine learning, etc.) and analytical techniques (Raman spectroscopy, NIR spectroscopy, mass spectrometry, etc.) will allow for the identification of potentially toxic molecules. In this article, we use the example of detergent products, whose composition can prove dangerous to humans or the environment, necessitating precise identification and quantification for quality control and regulation purposes. The combination of various digital tools (spectral database, mixture database, experimental design, Chemometrics / Machine Learning algorithms…) together with different sample preparation methods (raw sample, or several concentrated / diluted samples) and Raman spectroscopy has enabled the identification of the mixture's constituents and an estimation of its composition. Implementing such strategies across different analytical tools can result in time savings for pollutant identification and contamination assessment in various matrices. This strategy is also applicable in the industrial sector for product or raw material control, as well as for quality control purposes.

1. Introduction
Numerous detergents are utilized for the purpose of cleaning our residences, clothing, and bodies. Our everyday products, such as detergents, shampoos, and household cleaners, contain a significant amount of these substances. They are responsible for effectively removing stains and dirt [1]. However, it is crucial to acknowledge the potential health and environmental risks associated with these chemicals [2]–[4]. Consequently, researchers are exploring alternatives [5]. Detergents are commonly employed by both industrial and private users for daily cleaning tasks. They comprise soaps and surfactants that possess surface-active properties. These surfactants function by breaking the bonds between surfaces and dirt, thereby facilitating their removal. Unfortunately, these chemicals have adverse consequences for the environment. They are produced and utilized in substantial quantities. In Europe alone, over 3 million tons of detergents were manufactured in 2020 [6]. Surfactants, which are used in liquid, powder, and other forms, have a significant impact on soil and water. The conventional detergents that are frequently advertised on television are often derived from petroleum-based products. These surfactants are composed of various chemical compounds, including sulfates, phosphates, bleaching agents, chemical perfumes, and phenols. Once released into the environment, detergents, some of which are non-biodegradable, accumulate in soil and water bodies.
It is important to note that more than 60% of the surfactants found in detergents eventually end up in aquatic environments. This leads to significant problems of environmental pollution and health concerns [3]. The properties of surfactants have attracted the interest of detergent manufacturers in recent years. The growing interest in surfactants necessitates the enhancement of existing analytical techniques, such as spectroscopy [7], [8], mass spectrometry [9], [10] and Nuclear Magnetic Resonance (NMR) [11], to ensure compliance with regulations and environmental standards. Detergents can consist of up to 25 compounds, including surfactants, enzymes, sequestering agents, polymers, and fragrances, to name a few. Surfactants are the most crucial components, constituting up to 50% of the detergent content. These amphiphilic molecules, comprising a hydrophobic carbon chain and a hydrophilic polar head, are utilized for their solubilizing, wetting, foaming, dispersing, and emulsifying properties. Depending on the nature of their polar head, surfactants can be classified into four families: anionic, cationic, non-ionic, or amphoteric. Various analytical methods, such as NMR [11] or hyphenated techniques combined with spectroscopic methods [7], [8], [12], are employed for the deconstruction of detergent mixtures. Chromatographic methods coupled with detectors like light scattering detection or mass spectrometry have been extensively utilized for surfactant analysis [9], [13]. These analytical techniques offer the advantage of simultaneously identifying and quantifying different surfactant families. However, method development can be prone to biases in sample preparation, costs, and labor-intensive procedures. RAMAN spectral analysis appears to strike a balance between relevant information and cost-effectiveness. It does not require lengthy sample preparation procedures or the use of expensive internal standards, and it can be conducted in aqueous solutions inexpensively. By combining surfactant spectral databases, chemometrics, Machine Learning, and spectroscopic tools, it becomes possible to identify and quantify raw materials [8], [14], [15]. Blind source separation (BSS) methods are employed for the deconvolution of overlapping signals in near- and mid-infrared spectroscopy or Raman spectra. Source extraction (SE) methods, such as independent component analysis (ICA) [16], [17]–[22] or Multivariate Curve Resolution – Alternating Least Squares (MCR-ALS) [23], [24], aim to extract the contributions (spectra) of pure compounds from complex mixtures without any prior knowledge of the compounds being analyzed. However, a limitation of RAMAN spectroscopy is the detection limit; raw materials present in low concentrations (<1%) may not be identified and quantified. To analyze the surfactant composition of various commercial detergents, we propose a method based on RAMAN spectroscopy, utilizing a database of commercial raw material RAMAN spectra and Machine Learning (Figure 1).

2. Materials and methods
2.1. Chemicals
A database containing 95 different surfactants (Cocoamide, Sodium Laureth Sulfate, Betaine ...) has been compiled (supplementary appendices) from 14 different suppliers (producers or resellers). This database will be used for the identification of surfactants contained in commercial detergents or homemade detergent mixtures.

2.2. Sample preparation
For sample preparation, there are two possible scenarios: either it involves a completely unknown mixture, or the constituents are known.
In the case of an unknown mixture, the raw material will be diluted by a factor of 2, 3, etc. If the constituents are known, however, no sample preparation is required beforehand. The RAMAN spectrum of the commercial product or the house mixture will be analyzed. Identification will be performed from the RAMAN spectra databases of the commercial raw materials and quantification from the PM mixtures database. 2.3. Data base preparation 2.3.1 Spectral database A Raman spectral database is being created using a library of commercial raw materials. For each raw material, "pure" Raman spectra and diluted Raman spectra of the raw materials are recorded. The diluted spectra will be prepared at dilution levels of 75%, 50%, 25%, and 5%. This database has been constructed using 95 different commercial raw materials, resulting in a total of 380 Raman spectra. This comprehensive database will enable the identification of raw materials present in our various mixtures. 2.3.2 Mixture database A comprehensive database of commercial raw material mixtures is currently being acquired. These mixtures are composed of 2 to 5 components carefully selected and blended. To conduct in-depth investigations involving mixtures with 3, 4, and 5 components, it is imperative to prepare a minimum of 10, 18, or 30 mixtures respectively, following the highly effective Scheffé simplex designs strategy[25], [26]. It is worth noting that certain raw materials have specific constraints regarding their permissible usage concentrations, as specified in their corresponding safety data sheets. These constraints were meticulously considered during the formulation of the mixtures. The extensive research effort resulted in the preparation and analysis of over 1000 meticulously crafted mixtures, yielding valuable insights and data. 2.4. Measurement for RAMAN spectra of surfactants dishwashing product Raman Rxn1 spectrometer (Kaiser Optical Systems, Inc. USA), equipped with a thermoelectrically cooled CCD detector, was used in combination with a fiber optic sapphire immersion probe. The laser wavelength was set at 785 nm. All spectra were recorded at a resolution of 4 cm−1 in the spectral range from 150 to 3480 cm−1. Acquisition time was set at 5 second and five spectra were accumulated. 3 2.5. Statistical analysis 2.5.1. Data preprocessing To accentuate specific spectral variations, preprocessing of the spectra was conducted. Initially, the spectra were normalized to address any scale and baseline shift influences. To normalize and rectify noise, a multiplicative signal correction (MSC) method was employed[27]. MSC is a relatively straightforward preprocessing technique that aims to compensate for scaling and offset (baseline) effects. This correction was accomplished by regressing a measured spectrum against a reference spectrum and subsequently adjusting the measured spectrum based on the slope (and potentially intercept) of this regression. Each spectrum was corrected to achieve a comparable scatter level to that of the reference spectrum. 2.5.2. Independent Component Analysis (ICA) ICA[17], [20]–[22], [28] is one of the most powerful techniques for blind source separation. ICA aims at identifying the products present in a mixture during a process. The basic assumption of the ICA is to consider each row of matrix X as a linear combination of “source” signals, S, with weighting coefficients, or “proportions”, A, proportional to the contribution of the source signals in the corresponding mixtures. 
Its objective is to extract the "pure" components from a data set mixed in unknown proportions. For an unnoised model, the matrix X (s × n) is decomposed into f independent source signals S (f × n) and a mixing proportion matrix of these pure signals A (s × f) according to the following expression:

X = A S   (1)

To solve equation (1), ICA estimates an unmixing matrix W (equal to A^-1) that maximizes the independence of the product of this matrix with the data matrix X, following an iterative method based on the central limit theorem [29] (which states that a sum of independent and identically distributed random variables tends to a Gaussian random variable). The output U must be as independent as possible. For a noise-free model, W must be the inverse of A and U must be equal to S, according to the following equation:

U = W X = W (A S) = S   (2)

The mixing matrix A can then be calculated as:

A = X S^T (S S^T)^-1   (3)

In this work, the InfoMax [16], [22] implementation of the ICA algorithm was used. InfoMax uses Gram-Schmidt orthogonalization to ensure the independence of the extracted signals and relies on a maximum likelihood formulation. The aim of InfoMax is to find independent source signals by maximizing entropy:

H(x) = -∫ f(x) log f(x) dx   (4)

While the independence of the signals cannot be measured directly, entropy can. Entropy is related to independence in that maximum entropy implies independent signals. Therefore, the objective of ICA is to find the unmixing matrix that maximizes the entropy of the extracted signals.

2.5.3 Number of components
If too few ICs are extracted, some of the significant components may remain in the residual matrix; on the other hand, if too many ICs are extracted, some of the significant components might themselves be decomposed into subcomponents. Validation methods are required to decide on the optimal number of ICs to be used in the computation of the final model. The ICA_by_blocks algorithm [8], [12], [30] was used to determine the optimal number of signals to extract. The initial data matrix is split into B blocks of samples with approximately equal numbers of rows. ICA models are then computed with an increasing number of ICs for each block. The independent components calculated for the different blocks should be strongly correlated.

2.5.5 Model calibration
For each product to deformulate, the composition was determined through a calibration conducted using Partial Least Squares Regression (PLSR) [31]–[33]. PLSR is a commonly employed method, particularly when analyzing extensive spectral data. In essence, the algorithm for this regression is partially derived from the one used in Principal Component Analysis (PCA) [34], as it involves a dual decomposition into latent variables for both the X matrix of variables and the Y matrix of responses. The development of the PLS model relies on the establishment of a mixing plan specifically designed for the species identified in the products to be deformulated. If we have access to the safety data sheet (SDS) of the product, it imposes constraints on the constituents, and the mixing plan can be adjusted accordingly. In the absence of an SDS, a mixing plan comprising 2 to 5 components is devised using the mixture database.

2.5.6 Software
Data collection was controlled using the HoloGRAMS™ software (Kaiser Optical Systems, Inc., USA). All spectra were imported into Matlab 9.1 (R2016b) (Mathworks, Natick, Massachusetts, USA). Statistical analyses were performed with the PLS_toolbox 8.2 (Eigenvector Research Incorporated, Wenatchee, Washington, USA) and codes developed in-house.
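The preprocessing and source-extraction steps above can be illustrated with a short Python sketch. This is not the authors' implementation (the study used HoloGRAMS, Matlab and the PLS_toolbox with an InfoMax ICA); scikit-learn's FastICA is used here as a stand-in, and the spectra matrix and spectral library are placeholder data.

```python
# Illustrative sketch only: MSC preprocessing followed by ICA source extraction.
# FastICA stands in for the InfoMax implementation used in the study.
import numpy as np
from sklearn.decomposition import FastICA

def msc(spectra, reference=None):
    """Multiplicative scatter/signal correction: regress each spectrum against
    a reference (mean) spectrum and correct it with the fitted slope/intercept."""
    ref = spectra.mean(axis=0) if reference is None else reference
    corrected = np.empty_like(spectra, dtype=float)
    for i, spec in enumerate(spectra):
        slope, intercept = np.polyfit(ref, spec, deg=1)
        corrected[i] = (spec - intercept) / slope
    return corrected

# X: (n_mixtures, n_wavenumbers) matrix of Raman spectra -- placeholder data
rng = np.random.default_rng(0)
X = rng.random((30, 1800))
X_msc = msc(X)

# Extract f "pure" source spectra S and mixing proportions A so that X ≈ A S.
# Fitting on the transpose makes the sources independent along the wavenumber axis.
f = 4                                  # e.g. chosen with an ICA-by-blocks test
ica = FastICA(n_components=f, random_state=0)
S_cols = ica.fit_transform(X_msc.T)    # (n_wavenumbers, f): extracted spectra as columns
A = ica.mixing_                        # (n_mixtures, f): proportions per mixture
S = S_cols.T                           # (f, n_wavenumbers), up to sign and scale

# Identify each IC by its correlation with library spectra of raw materials
library = {"Sodium Laureth Sulfate": rng.random(1800)}   # placeholder library
for name, ref_spec in library.items():
    r = max(abs(np.corrcoef(s, ref_spec)[0, 1]) for s in S)
    print(f"best |correlation| with {name}: {r:.2f}")
```

Because ICA recovers sources only up to sign and scale, identification relies on the absolute correlation between each extracted component and the library spectra, mirroring the spectral overlay and correlation comparison described in the Results section.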
3. Results and discussion

The method was tested on 8 diverse products: 5 belonging to a commercial range whose composition was unknown, and 3 with constituents known through their safety data sheets. To verify the methodology, the constitution and composition of the unknown products were provided only at the end of the study. For the identification of the constituents in the 5 unknown mixtures, dilutions were performed according to the described protocol, and Raman spectra were obtained. Only constituents present in the mixture to be analyzed at a concentration greater than 1% will be considered in the Independent Component Analysis (ICA). The ICA, using the ICA by blocks method (Figure.2), will determine the number of visible constituents per studied mixture and calculate a theoretical spectrum for each identified constituent. Among the extracted Independent Components (ICs), only those representing reliable information will be discussed. These spectra will be compared with the acquired spectral library through spectral overlay and correlation between calculated and experimental spectra (Figure.3). During the calculation of ICs, several similar spectra can be obtained. This can occur because certain constituents in the mixture may have similar spectra. In this case, it was decided to include surfactants with similar spectra in the algorithm within the mixture space.

Detergents may contain additional compounds that can be detected by Raman spectroscopy, such as salts (NaCl, MgSO4, etc.). These salts are usually added to increase the viscosity of the mixture. They have specific bands in Raman spectroscopy, such as the SO4^2- vibration band at 2550 cm-1. The addition of salt is considered when constructing the mixture plan, as its presence may interact with certain acidic or basic surfactants. Next, a mixture plan is constructed, either using the spectral profiles from the library's mixture plans or by performing new mixtures, considering the specificities identified during the ICs calculation. Whenever a specific blending plan is required, typically due to specific constraints on the components, that plan is systematically added to the blend database. The points in the mixture plan serve as calibration points to establish a model that allows us to determine the composition of the mixture. In the case of the 3 products with known constituents, a mixture plan is established based on the database of mixtures, while respecting the constraints described in the raw material safety data sheets (SDS). Partial Least Squares (PLS) modeling is then used to determine the composition of the studied mixture.

For both approaches, to validate the methodology, the criteria for prediction errors (Root Mean Square Error of Calibration (RMSEC), Root Mean Square Error of Cross-Validation (RMSECV), and Root Mean Square Error of Prediction (RMSEP)) are observed, as well as the determination coefficient R² for calibration, cross-validation, and prediction [35]. The statistical criteria for the models involving the unknown mixtures (5) and the known mixtures (3) are presented in Table 1. Based on these criteria, the prediction and calibration residuals are of similar magnitudes, indicating a good predictive quality for the mixture compositions.
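A minimal sketch of such a PLS calibration and its error criteria (RMSEC, RMSECV, RMSEP, R²Y, Q²Y) is shown below. It uses scikit-learn rather than the PLS_toolbox actually employed in the study; the calibration matrix stands in for spectra of mixture-design points, and the number of latent variables is an illustrative assumption.

```python
# Illustrative sketch (not the authors' code): PLS calibration of constituent
# concentrations from preprocessed Raman spectra, with the usual error metrics.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict, train_test_split
from sklearn.metrics import mean_squared_error, r2_score

rng = np.random.default_rng(1)
X = rng.random((60, 1800))   # spectra of mixture-design calibration points (placeholder)
Y = rng.random((60, 5))      # known concentrations of up to 5 constituents (placeholder)

X_cal, X_test, Y_cal, Y_test = train_test_split(X, Y, test_size=0.2, random_state=1)

pls = PLSRegression(n_components=6)      # latent variables chosen by cross-validation
pls.fit(X_cal, Y_cal)

def rmse(y_true, y_pred):
    return np.sqrt(mean_squared_error(y_true, y_pred))

Y_cal_hat = pls.predict(X_cal)                         # calibration fit
Y_cv_hat = cross_val_predict(pls, X_cal, Y_cal, cv=10) # cross-validation predictions
Y_test_hat = pls.predict(X_test)                       # external prediction

print("RMSEC :", rmse(Y_cal, Y_cal_hat))
print("RMSECV:", rmse(Y_cal, Y_cv_hat))
print("RMSEP :", rmse(Y_test, Y_test_hat))
print("R2Y   :", r2_score(Y_cal, Y_cal_hat))
print("Q2Y   :", r2_score(Y_cal, Y_cv_hat))
```

As in the workflow described above, the calibration points would come from the mixture database (Scheffé-type designs), with constraints from the safety data sheets applied when they are available.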
For all the model's prediction results, the obtained compositions fall within the confidence interval of the provided compositions (Table 2 and 3). In both cases, the results generally demonstrate accurate estimation of the constituents in the various mixtures. The prediction discrepancies, although minimal for quantifying an estimation of the mixture compositions, can have various origins, such as the nature of a raw material, interactions with co-constituents in the mixture (such as fragrances, thickeners, etc.), or the concentration of a constituent. Regarding the nature of the raw material, even for the same constituent, there are numerous producers and suppliers, and often there exist slight differences between these constituents, such as variations in carbon chain length or the number of ethoxylated groups, which can impact the spectrum and therefore its prediction. A constituent present in low concentration would be difficult to detect in Raman spectroscopy and consequently be identified, as is the case with MGDA in samples D1 and D3. Hence, the model would have a higher prediction error in cases of low concentration. 6 4 Conclusions Chemometrics/Machine Learning methods such as Blind Source Separation (BSS) are powerful tools for extracting signals from complex mixtures. These techniques have been successfully applied to several detergent mixtures for various household applications. The combination of spectral databases of surfactants and mixtures has enabled the identification and quantification of surfactants in these complex mixtures (Fig.3). This methodology can be easily adapted to industrial environments to perform various tasks such as raw material quality control and competitive intelligence monitoring. The methodology can be applied to any type of MIR, NIR, and Raman spectroscopy. Of course, it is necessary to redo all the measurements to obtain the various databases required for the identification and quantification of the mixture. This approach could facilitate rapid monitoring of detergent type and concentration in different matrices. This analysis would make it possible to determine which types of detergents are present, as well as their respective concentrations. This information could then be used to adjust methods to better eliminate or reduce the specific detergents detected. AUTHOR INFORMATION Corresponding Author Yohann Clément – Data Scientist / Chemometrician, University of Lyon, CNRS, Institut of Analytical Sciences, UMR-5280, 5 Rue de la Doua 69100 Villeurbanne, France; orcid.org/0000-0002-9852-2856; Email: [email protected] Author Pedro Marote, Analyst, University of Lyon, CNRS, Institut of Analytical Sciences, UMR- 5280, 5 Rue de la Doua 69100 Villeurbanne, France; Email: pedro.marote@univ- lyon1.fr Pierre Lanteri, Professor, University of Lyon, CNRS, Institut of Analytical Sciences, UMR-5280, 5 Rue de la Doua 69100 Villeurbanne, France ; orcid.org/ 0000-0002- 8244-9834; Email: [email protected] Marie Martin, Professor, University of Lyon, CNRS, Institut of Analytical Sciences, UMR-5280, 5 Rue de la Doua 69100 Villeurbanne, France ; Email: marie.martin@isa- lyon.fr Anne Bonhommé, Professor, University of Lyon, CNRS, IRCELYON, UMR-5256, 5 Rue de la Doua 69100 Villeurbanne, France; Email: [email protected] lyon1.fr Author Contributions The manuscript was written through contributions of all authors. All authors have given approval to the final version of the manuscript. 
Funding Sources 7 This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors. Declarations of interest None. Acknowledgements None ABBREVIATIONS NMR: Nuclear Magnetic Resonance, BSS: Blind source separation,SE: Source extraction, ICA: Independent Component Analysis, MCR-ALS: Multicurve Resolution Alternating Least Squares, ICs: Independent Components, MSC: multiplicative signal correction, PLSR: Partial Least Squares Regression, PCA: Principal Component Analysis, SDS: safety data sheet, RMSE: Root Mean Square Error, RMSEC: Root Mean Square Error of Calibration, RMSEP: Root Mean Square Error of prediction, RMSECV: Root Mean Square Error of Cross-Validation 5 References [1] J. T. K. Milton J. Rosen, Surfactants and Interfacial Phenomena. 2012. [2] R. Ernst, C. J. Gonzales, and J. Arditti, “Biological effects of surfactants: Part 6- effects of anionic, non-ionic and amphoteric surfactants on a green alga (Chlamydomonas),” Environ. Pollution. Ser. A, Ecol. Biol., vol. 31, no. 3, pp. 159–175, 1983, doi: 10.1016/0143-1471(83)90074-0. [3] S. O. Badmus, H. K. Amusa, T. A. Oyehan, and T. A. Saleh, “Environmental risks and toxicity of surfactants: overview of analysis, assessment, and remediation techniques,” Environ. Sci. Pollut. Res., vol. 28, no. 44, pp. 62085– 62104, 2021, doi: 10.1007/s11356-021-16483-w. J. Arora et al., “Surfactant pollution, an emerging threat to ecosystem: Approaches for effective bacterial degradation,” J. Appl. Microbiol., vol. 133, no. 3, pp. 1229–1244, 2022, doi: 10.1111/jam.15631. [4] [5] M. Patel, “Surfactants Based on,” vol. 7, no. 3, pp. 47–62, 2004. [6] European commision, “detergent,” https://ec.europa.eu. . [7] A. Gaubert et al., “Analytica Chimica Acta Characterization of surfactant complex mixtures using Raman spectroscopy and signal extraction methods : Application to laundry detergent deformulation,” vol. 915, pp. 36–48, 2016, doi: 10.1016/j.aca.2016.02.016. [8] Y. Clément et al., “Talanta Raman spectroscopy combined with advanced chemometric methods : A new approach for detergent deformulation,” Talanta, vol. 195, no. November 2018, pp. 441–446, 2019, doi: 10.1016/j.talanta.2018.11.064. [9] A. Gaubert et al., “Determination of surfactant bio-sourced origin by isotope- [10] ratio mass spectrometry,” Rapid Commun. Mass Spectrom., vol. 30, no. 9, pp. 1108–1114, 2016, doi: 10.1002/rcm.7537. I. Ogura, D. L. DuVal, S. Kawakami, and K. Miyajima, “Identification and quantisation of surfactants in consumer products by ion-spray mass spectrometry,” JAOCS, J. Am. Oil Chem. Soc., vol. 73, no. 1, pp. 137–142, 1996, doi: 10.1007/BF02523461. [11] M. Hologne, A. Gaubert, C. Sanglar, C. Bordes, and H. Casabianca, “New 8 validation of molecular mass measurements by means of 2D DOSY1H NMR experiments: Application to surfactants,” Comptes Rendus Chim., vol. 18, no. 2, pp. 187–192, 2015, doi: 10.1016/j.crci.2014.05.008. [12] D. N. Rutledge and D. Jouan-Rimbaud Bouveresse, “Independent Components Analysis with the JADE algorithm,” TrAC - Trends Anal. Chem., vol. 50, pp. 22– 32, 2013, doi: 10.1016/j.trac.2013.03.013. [13] H. S. Park, H. R. Ryu, and C. K. Rhee, “Simultaneous separation of nine surfactants of various types by HPLC with evaporative light scattering detection,” Talanta, vol. 70, no. 3, pp. 481–484, 2006, doi: 10.1016/j.talanta.2006.01.029. [14] J. F. Martínez-Aguilar and E. L. 
Ibarra-Montaño, “Complete quality analysis of commercial surface-active products by Fourier-transform near infrared spectroscopy,” Talanta, vol. 73, no. 4, pp. 783–790, 2007, doi: 10.1016/j.talanta.2007.05.001. [15] K. Kargosha, S. H. Ahmadi, M. Mansourian, and J. Azad, “Simultaneous determination of one nonionic and two anionic surfactants using Fourier transform infrared spectrometry and multivariate analysis,” Talanta, vol. 75, no. 2, pp. 589–593, 2008, doi: 10.1016/j.talanta.2007.11.065. [16] H. B. Barlow, “Possible Principles Underlying the Transformations of Sensory Messages,” Sens. Commun., pp. 216–234, 2013, doi: 10.7551/mitpress/9780262518420.003.0013. [17] A. J. Bell and T. J. Sejnowski, “The &quot;Independent Components&quot; of Scenes are Edge Filters,” Vis. Res, vol. 37, no. 23, pp. 3327–3338, 1997. [18] A. Hyvärinen, “Fast and robust fixed-point algorithms for independent component analysis,” IEEE Trans. Neural Networks, vol. 10, no. 3, pp. 626– 634, 1999, doi: 10.1109/72.761722. [19] L. De Lathauwer, B. De Moor, and J. Vandewalle, “An introduction to independent component analysis,” J. Chemom., vol. 14, no. 3, pp. 123–149, 2000, doi: 10.1002/1099-128X(200005/06)14:3<123::AID-CEM589>3.0.CO;2- 1. [20] E. O. Aapo Hyvärinen, Juha Karhunen, Independent Component Analysis, Wiley. New York, 2001. [21] J. F. Cardoso and A. Souloumiac, “Blind beamforming for non-Gaussian signals,” IEE Proceedings, Part F Radar Signal Process., vol. 140, no. 6, pp. 362–370, 1993, doi: 10.1049/ip-f-2.1993.0054. [22] A. J. Bell and T. J. Sejnowski, “An information-maximization approach to blind separation and blind deconvolution.,” Neural Comput., vol. 7, no. 6, pp. 1129– 1159, 1995, doi: 10.1162/neco.1995.7.6.1129. [23] J. Felten, H. Hall, J. Jaumot, R. Tauler, A. De Juan, and A. Gorzsás, “Vibrational spectroscopic image analysis of biological material using multivariate curve resolution-alternating least squares (MCR-ALS),” Nat. Protoc., vol. 10, no. 2, pp. 217–240, 2015, doi: 10.1038/nprot.2015.008. [24] V. Olmos et al., “Relevant aspects of unmixing/resolution analysis for the interpretation of biological vibrational hyperspectral images,” TrAC - Trends Anal. Chem., vol. 94, pp. 130–140, 2017, doi: 10.1016/j.trac.2017.07.004. [25] H. Scheffe, “The Simplex-Centroid Design for Experiments with Mixtures Author ( s ): Henry Scheffe Source : Journal of the Royal Statistical Society . Series B ( Methodological ), Vol . 25 , No . 2 Published by : Wiley for the Royal Statistical Society Stable URL : ht,” J. R. Stat. Soc., vol. 25, no. 2, pp. 235–263, 1963. 9 [26] J. Cornell, Experiments with Mixtures: Designs, Models, and the Analysis of Mixture Data. 2002. [27] D. MacDougall, H. Martens, and P. Geladi, “Linearization and Scatter- Correction for Near-Infrared Reflectance Spectra of Meat,” Appl. Spectrosc., vol. 39, no. 3, pp. 491–500, 1985. [28] J. F. Cardoso, “High-order contrasts for independent component analysis,” Neural Comput., vol. 11, no. 1, pp. 157–192, 1999, doi: 10.1162/089976699300016863. [29] H. Fisher, The Prehistory: De Moivre’s Theorem. 2010. [30] A. Kassouf, D. Jouan-Rimbaud Bouveresse, and D. N. Rutledge, “Determination of the optimal number of components in independent components analysis,” Talanta, vol. 179, no. September 2017, pp. 538–545, 2018, doi: 10.1016/j.talanta.2017.11.051. [31] H. WOLD, Nonlinear Iterative Partial Least Squares (NIPALS) Modelling: Some Current Developments. ACADEMIC PRESS, INC., 1973. [32] S. Wold, M. Sjöström, and L. 
Eriksson, "PLS-regression: A basic tool of chemometrics," Chemom. Intell. Lab. Syst., vol. 58, no. 2, pp. 109–130, 2001, doi: 10.1016/S0169-7439(01)00155-1.
[33] M. Tenenhaus, La régression PLS: théorie et pratique. 1998.
[34] H. Abdi and L. J. Williams, "Principal component analysis," Wiley Interdiscip. Rev. Comput. Stat., vol. 2, no. 4, pp. 433–459, 2010, doi: 10.1002/wics.101.
[35] A. Levet et al., "Quantitative structure-activity relationship to predict acute fish toxicity of organic solvents," Chemosphere, vol. 93, no. 6, pp. 1094–1103, 2013, doi: 10.1016/j.chemosphere.2013.06.002.

Table 1: RMSEC, RMSECV, RMSEP, R²Y and Q²Y for the PLS regression on the raw materials detected by Independent Component Analysis (ICA).
INCI                                                 RMSEC  RMSECV  RMSEP  R²Y   Q²Y
Sodium C14-16 Olefin Sulfonate                        0.93   1.2     1.22   0.98  0.97
Sodium Laureth Sulfate                                0.48   1       0.87   0.98  0.94
Trimethyl Amine (TEA)                                 0.27   0.33    0.39   0.99  0.98
Trisodium salt of Methylglycinediacetic acid (MGDA)   0.61   1.6     0.68   0.94  0.86
Lauryl ether sulfate                                  0.76   0.92    0.98   0.99  0.98
Eau (water)                                           1.78   2.15    2.86   0.98  0.99

Table 2: Composition of the unknown detergents (samples D1–D5): experimental vs calculated by PLS regression.
Sodium C14-16 Olefin Sulfonate: experimental 8.4, 7.4, 12.0, 14.5, 16.8; calculated 8.8, 7.2, 13.2, 15.3, 16.3
Sodium Laureth Sulfate: experimental 6.1, 9.9, 4.3, 6.1, 7.5; calculated 4.5, 8.8, 7.2, 9.8, 12.7
Trimethyl Amine (TEA): experimental 5.0, 9.5, 6.3, 7.1, 11.5; calculated 0.8, 3.6, 1.8, 3.5, 5.1
Trisodium salt of Methylglycinediacetic acid (MGDA): experimental 5.8, 9.4, 4.6, 5.5, 8.2; calculated 1.2, 4.3, 0.9, 2.7, 4.4
Lauryl ether sulfate: experimental 39.7, 29.0, 0.0, 0.0, 0.0; calculated 37.6, 24.4, 3.1, 2.1, 0.8

Table 3: Composition of the known detergents (samples PC, RA, MI) by SDS (FDS): experimental vs calculated by PLS regression, for Sodium C14-16 Olefin Sulfonate, Sodium Laureth Sulfate, Cocamidopropyl betaine, Lauryl ether sulfate and Lauramidopropylamine Oxide (FDS range and calculated value per constituent). Values as reported:
0% 5-10 % 0% 0% 2.7% 3.5% 0% 5-10 % 0% 1-5 % 0% 1-5 % 10-15 % < 1% 5-10 % 14% 0.50% 9% 0% 1-5 % 0% 8.5% 0% 0% 0% 4% 0% 0% 8% 0%

Figure 1: Reverse engineering of detergents using Raman spectroscopy.

Figure 2: ICA-by-blocks test for the determination of the number of raw materials in the detergent (correlation between blocks versus the number of independent components; panel title "How many ICs to select").

Figure 3: Raman spectra of 2 main raw materials (A and B) and 1 salt (C) (blue) versus the spectra calculated by ICA (red): A) Sodium Laureth Sulfate, B) Lauramidopropylamine Oxide and C) SO4^2- (panels show intensity versus Raman shift in cm-1, titled "Calculated by ICA vs experimental").
21 22 23 24 25 26 27 28 29 30 31 16 32 33 Supplementary appendices Name Dehyquart ECA Dhyton K Cos Polyquart H81 Luviquat Excellence Lanette O Emulgin B2 Comperlan 100 Comperlan IP Stepanol AM 30 KE Betafin BP 20 Dehyton AB 30 Cosmacol ELI Dehyquart F75T Emilgin B2 Dehyquart ACA Amphosol CDB special Hydrogen CAT Ninol 40 CO E Purton CFD Comperlan 100 Purton CFM/ F Comperlan IP Emulgin B2 INCI name 1-Hexadecanaminium, N,N,N-trimethyl-, chloride 1-Propanaminium, 3-amino-N-(carboxymethyl)-N,N-dimethyl-, N-(C8-18 and C18-unsaturated acyl) derivatives, hydroxides, inner salts 1,3-Propanediamine, N-(3-aminopropyl)- 1H-Imidazolium, 1-ethenyl-3-methyl-, chloride, polymer with 1-ethenyl-2- pyrrolidinone Alcohols, C16-18 Alcohols, C16-18, ethoxylated Amides, C12-18 and C18-unsaturated, N-(hydroxyethyl) Amides, coco, N-(2-hydroxypropyl) Ammonium lauryl sulfate Betaine (anhydre 99%) Betaines, C12-14-alkyldimethyl C12-13 Alkyl Lactate Ceteareth-20 ceteareth-20 Cetrimonium Chloride Cetyl Betaine cetyl PEG/PPG-10/1 dimethicone Cocamide DEA COCAMIDE DEA cocamide MEA Cocamide MEA Cocamide MIPA Cocamide MIPA Producer / Reseller BASF BASF BASF BASF BASF BASF Ami Ami Stepan Masso BASF Sasol Ami Cognis Ladybel Stepan Cognis Stepan ZW Cognis ZW Cognis Ami 17 Amphotensil B4/C Amphosol DM Amphotensid B5 Tegobetaine F 50 Amphosol CG-K Antil HS 60 Eco sense 919 surfactant Plantacare 818 UP Liviquat mono LS Plantacare 2000UP Ninol CCA Texapon N40 IS Miranol Setacin 103 Spezal Stepan MILD SL3 BA Rewopol SB CS50K Dehyquart F75T Trilon B 87% Stepan MILD GCC Tegin BL 315 dehyquart N Tegosoft P ammonyx LMDO Ammonyx LO Empigen OB Cocamidopropyl Betaine Cocamidopropyl Betaine Cocamidopropyl Betaine cocamidopropyl betaine Cocamidopropyl Betaine cocamidopropyl betaine ; glyceryl laurate Coco-Glucoside COCO-GLUCOSIDE Cocotrimonium methosulfate Decyl Glucoside Dimethyl lauramide Disodium 2-Sulfolaurate Disodium Cocoamphodiacetate Disodium Laureth Sulfosuccinate Disodium Laureth Sulfosuccinate disodium PEG-5 laurylcitrate sulfosuccinate ; sodium laureth sulfate Distearoylethyl Hydroxyethylmonium Methosulfate (and) Cetearyl Alcohol EDTA Glyceryl Caprylate/Caprate glycol destearate Guar gum, 2-hydroxy-3-(trimethylammonio)propyl ether, chloride isopropyl palmitate Lauramidopropylamine Oxide Lauramine Oxide Lauramine Oxide ZW Stepan ZW Cognis Stepan Cognis Dow BASF BASF BASF Stepan BASF Rhone Poulenc ZW Stepan Cognis BASF BASF Stepan Cognis BASF Cognis Stepan Stepan Innospec Performance Chemicals 18 Plantacare 1200UP Stepan MILD L3 Abilsoft AF100 Zetesol 2056 Lumorol K 1056 Arlypon LIS Arlypon LIS Arlypon TT Arlypon TT Antil 171 Rewoderm LIS 80 Arlacel P 135 Arlyton TT Myritol 318 Texapon SB 3KC Isolan GO 3 Emulgin S21 Emulgin S21 Salcare SL92 Polysorbate 20 Tween 21 LQ Tween 60V Amphisol K Lauryl Glucoside LAURYL LACTYL LACTATE methoxy PEG/PPG-7/3 aminopropyl dimethicone MIPA-Laureth Sulfate MIPA-Laureth Sulfate, Cocamidopropyl Betaine Oxirane, 2-methyl-, polymer with oxirane, ether with 2-ethyl-2-(hydroxymethyl)-1,3- propanediol (3:1), tri-(9Z)-9-octadecenoate Oxirane, 2-methyl-, polymer with oxirane, ether with 2-ethyl-2-(hydroxymethyl)-1,3- propanediol (3:1), tri-(9Z)-9-octadecenoate Oxirane, 2-methyl-, polymer with oxirane, ether with 2-ethyl-2-(hydroxymethyl)-1,3- propanediol (3:1), tri-(9Z)-9-octadecenoate Oxirane, 2-methyl-, polymer with oxirane, ether with 2-ethyl-2-(hydroxymethyl)-1,3- propanediol (3:1), tri-(9Z)-9-octadecenoate PEG-18 glyceryl oleate/cocoate PEG-200 hydrogenated grylceryl palmate (and) PEG-7 glyceryl 
cocoate PEG-30 Dipolyhydroxystearate PEG/PPG-120/10 trimethylolpropane trioleate (and) laureth-2 PEG/PPG-120/10 Trimethylolpropane Trioleate (and) Laureth-2 Poly(oxy-1,2-ethanediyl), .alpha.-(3-carboxy-1-oxosulfopropyl)-.omega.-hydroxy-, C10-16-alkyl ethers, disodium salts polyglyceril 3 oleate Polyoxyethylene monooctadecyl ether Polyoxyethylene monooctadecyl ether, C18H37O(C2H4O)21H polyquaternium-32 (and) mineral oil (and) PPG-1 trideceth-6 Polysorbate 20 Polysorbate 21 Polysorbate 60 Potassium Cetyl Phosphate BASF Stepan Cognis ZW ZW BASF Ladybel Ami BASF Cognis Cognis Masso Cognis Ami Ami Cognis BASF Ami BASF Ladybel Masso Masso DMS 19 Bio Terge AS 40 HASB Dehyton MC Rowoteric AMC Chimin CG Protelan GG Steol CS 270 Zetesol Zetesol 370 /N Zetesol LES 2 Zetesol NL U Steol 370 Perlagent GM 4175 Lumorol K 5240 Miranol ultra L32 E Maprosil 30B Protelan LS 9011 Sulfetal LS U SDS Lathanol LAL coarse Stepanate SXS E Purton SFD Copherol 1300C Sodium C14-16 Olefin Sulfonate Sodium cocoamphoacetate sodium cocoamphoacetate SODIUM COCOYL GLUTAMATE Sodium Cocoyl Glycinate, Sodium Cocoyl Glutamate Sodium Laureth Sulfate Sodium Laureth Sulfate Sodium Laureth Sulfate Sodium laureth sulfate Sodium Laureth Sulfate Sodium Laureth Sulfate Sodium Laureth Sulfate, Glycol Stearate, Cocamide MEA, Cocamide DEA, Propylene Glycol Sodium Laureth Sulfate, Cocamido- propyl Betaine, Disodium Laureth Sulfosuccinate, PEG-9 Cocoglycerides Sodium lauroamphoacetate Sodium Lauroyl Sarcosinate Sodium Lauroyl Sarcosinate Sodium Lauryl Sulfate Sodium Lauryl Sulfate Sodium Lauryl Sulfoacetate SODIUM XYLENE SULFONATE SOYAMIDE DEA tocopherol EMPILAN 2502 TRILON M coconut diethanolamide Trisodium salt of Methylglycinediacetic acid (MGDA) Stepan Ladybel Cognis Lamberti ZW Stepan ZS ZW ZW ZW Stepan ZW ZW Solvay Stepan ZW ZW Aldrich Stepan Stepan ZW Cognis Innospec Performance Chemicals BASF 20 Table supplementary: raw material list with the commercial name, the INCI name and the producer or the reseller 34 35 36 21
synthetic_cpt
4
Reward_Modeling_with_Weak_Supervision_for_Language_Models.pdf
Reward Modeling with Weak Supervision for Language Models

Ben Hauptvogel1, Malte Ostendorff2, Georg Rehm2,3, Sebastian Möller1,3
1Technical University of Berlin 2Occiglot 3DFKI GmbH
Corresponding author: [email protected]

arXiv:2410.20869v1 [cs.CL] 28 Oct 2024

Abstract

Recent advancements in large language models (LLMs) have led to their increased application across various tasks, with reinforcement learning from human feedback (RLHF) being a crucial part of their training to align responses with user intentions. In the RLHF process, a reward model is trained using response preferences determined by human labelers or AI systems, and this model then refines the LLM through reinforcement learning. This work introduces weak supervision as a strategy to extend RLHF datasets and enhance reward model performance. Weak supervision employs noisy or imprecise data labeling, reducing reliance on expensive manually labeled data. By analyzing RLHF datasets to identify heuristics that correlate with response preference, we wrote simple labeling functions and then calibrated a label model to weakly annotate unlabeled data. Our evaluation shows that while weak supervision significantly benefits smaller datasets by improving reward model performance, its effectiveness decreases with larger, originally labeled datasets. Additionally, using an LLM to generate and then weakly label responses offers a promising method for extending preference data.

1 Introduction

Reinforcement learning from Human Feedback (RLHF) is a widely used method for aligning models to user intentions. This technique has been instrumental in improving large language models (LLMs) to reflect human values and enhance usability, leading to the large-scale adoption of conversational systems like ChatGPT (OpenAI, 2024) or BARD (Thoppilan et al., 2022).

The RLHF technique starts by sampling outputs from a model, which is either pre-trained or already supervised fine-tuned on demonstration data. Then, human annotators are tasked with labeling the outputs by ranking them from the least preferable to the most preferable. This labeled data is subsequently used to train a reward model, which calculates a reward value for a given response to a prompt. This is necessary for the reinforcement learning stage, in which a newly sampled model output is assigned this scalar reward. The model is then refined using an RL algorithm such as Proximal Policy Optimization (PPO) (Ouyang et al., 2022; Schulman et al., 2017). During this process, the collection of high-quality human feedback data presents a significant challenge since it remains an expensive task (Casper et al., 2023).

An alternative to relying on labeled datasets is the approach of weak supervision. Weak supervision is a machine learning technique that deviates from relying solely on manually labeled data. Instead, models are trained using noisy and inaccurate labels. A popular approach for implementing weak supervision involves the use of labeling functions. These are defined using programmatic rules and heuristics about the data and contain uncertain accuracies and correlations. Snorkel is a solution that denoises the labeling functions to create a weak supervision signal, without the need to specify weights (Ratner et al., 2017).

Building on the advancements of model alignment techniques, this work focuses on the effectiveness of applying weak supervision to extend RLHF datasets.
We aim to investigate whether annotation based on simple heuristics that model preference can enhance reward model performance. To ensure reproducibility, we make all our source code and datasets publicly available on GitHub.1

1 https://github.com/DFKI-NLP/weak-supervision-rlhf

Figure 1: Extending RLHF datasets with weak supervision in a three-step pipeline: conducting data analysis, writing labeling functions, applying a label model to create a new weakly labeled dataset.

2 Related Work

Several works aim to remove the human labor in the annotation process from the RLHF pipeline. Lee et al. (2023) use an off-the-shelf LLM to annotate preference samples instead of relying on human labeling. Their research concentrated on summarization tasks and found that reinforcement learning with AI feedback can achieve similar performance as RLHF. Sun et al. (2023) extend this approach by introducing guidelines for a reward model to address the reward hacking problem, in which a model tries to bypass the true objective by finding unintended ways to maximize its reward. Kim et al. (2023) align an LLM with synthetic feedback by employing heuristics based on a set of assumptions, which include the belief that larger models outperform smaller ones and that using more examples (shots) is preferable to using fewer. Samples generated using these characteristics were ranked higher in the preference dataset.

Other studies explore methods to approximate human preference. Bukharin et al. (2023) use domain knowledge to rank reward factors, creating a hierarchical decision tree to weakly annotate samples. In contrast to prior approaches, this work employs weak supervision rather than a decision tree, combining different reward factors to annotate samples based on a weak preference signal. Some suspect that output length plays a significant role in optimizing reward models. Singhal et al. (2023) explore the correlation between output length and reward, finding that the majority of reward improvements are due to length increases. Our work investigates length and other reward factors, involving the analysis of multiple RLHF datasets to assess correlations between factors and corresponding rewards.

3 Methodology

Our approach employs weak supervision to extend reinforcement learning from human feedback (RLHF) datasets. We start by analyzing parts of these datasets to identify heuristics that correlate with user preferences, which we use to develop labeling functions. These functions are combined using a linear label model which is able to weakly annotate unlabeled data. The resulting dataset with noisy preference data is combined with the originally labeled data to train a reward model.

3.1 Datasets

We conducted experiments using four different preference datasets. For two of these datasets, human labelers were tasked to determine response preference, whereas for the remaining two, an LLM was employed to decide the preferred response.

The HH-RLHF dataset2 from Anthropic AI was constructed through a series of interactions between crowdworkers and LLMs in dialogue settings (Bai et al., 2022). At each node of the dialogue, the crowdworkers were presented with two model-generated responses. They selected the response that was more helpful and less harmful. This process yielded a dataset containing about 169 thousand chosen-rejected response pairs.

The mt_bench_human_judgements dataset3, referred to as MT-BENCH for simplicity, is a human-annotated preference dataset with responses generated by six LLMs including GPT-4, GPT-3.5, and others (Zheng et al., 2023).
Graduate students with expertise in relevant subjects primarily annotated the responses to assess the alignment between human preferences and an LLM judge (GPT-4). Featuring about 3,300 samples, this dataset is considerably smaller than the HH-RLHF dataset.

2 https://huggingface.co/datasets/Anthropic/hh-rlhf
3 https://huggingface.co/datasets/lmsys/mt_bench_human_judgments
4 https://huggingface.co/datasets/HuggingFaceH4/ultrafeedback_binarized
5 https://huggingface.co/datasets/argilla/ultrafeedback-binarized-preferences

The ultrafeedback-binarized (UB) dataset4 employs an LLM, specifically OpenAI's GPT-4, for response ranking across 64 thousand prompts collected from various sources, generating four responses per prompt and annotating each based on instruction-following, truthfulness, honesty, or helpfulness, and an overall score for preference (Cui et al., 2023; OpenAI, 2024). However, due to inconsistencies in the overall scores, researchers at Argilla recalculated them using the mean of the objective ratings to form the ultrafeedback-binarized-preferences (UBP) dataset5. In this dataset, they used the highest-rated response as the chosen option and randomly selected one of the remaining responses as the rejected counterpart for pairwise comparisons.

Ten percent of each dataset was held out as an evaluation set, excluded from the processing pipeline. The remaining data was further divided into a baseline training set, comprising between 1 and 10% of the total dataset, and a weakly supervised set. From this latter part, original preference labels were removed and replaced with newly applied weak labels.

An exception to this is the MT-BENCH dataset. Due to its small size, 30% is used as the evaluation set, with the remaining 70% designated as the baseline training set. Since we did not use any of its data for weak supervision, we adopted a different strategy for the weakly labeled dataset by generating new unlabeled data consisting of a prompt and two responses. First, we compiled prompts from various datasets including HH-RLHF, OpenAssistant, alpaca, and Synthetic Instruct GPTj Pairwise. We then generated responses using LlaMa-2-7b and Vicuna-7b-v1.5 LLMs, ensuring comparability by choosing models with the same parameter size. In total, we generated around 24,200 prompt-response-response triplets, which were uploaded to Hugging Face.

3.2 Heuristics

The selection of heuristics that potentially correlate with human or AI preference was primarily driven by theoretical considerations, an intuitive understanding of response dynamics, and insights from existing literature on RLHF reward factors.

Text length was the first feature we investigated, since most RLHF datasets show a strong correlation between the response length and its preference (Singhal et al., 2023).

Next, we applied a formula to assess the readability of a text using the Flesch Reading Ease, which calculates readability based on the total number of words, sentences, and syllables (Flesch, 1948). The Flesch Reading Ease score indicates how easy or difficult a text is to read. Lower scores indicate the text is more challenging to read.
The highest pos- sible score is 121.22, which represents the easiest readability. Typically, scores for most texts range from 0 to 100. We analyzed the lexical diversity in the datasets, which is a calculated measure of vocabulary rich- ness within a text. Lexical diversity indicates the variety of different words used, relative to the total word count. We employed the Type-Token Ratio for this analysis, which calculates lexical diversity by dividing the number of unique words by the total number of words in the text. Next, we counted the amount of numbers in each response to determine if there is a relationship be- tween the quantity of numbers and preference. Additionally, we conducted sentiment analysis on the response texts. Sentiment analysis uses computational methods to determine the emotional tone of a text, categorizing it as positive, nega- tive, or neutral. For this purpose, we used the Va- lence Aware Dictionary and Sentiment Reasoner (VADER), a lexicon and rule-based tool for sen- timent analysis (Hutto and Gilbert, 2014). Using VADER, we assessed the sentiment polarity. Sen- timent polarity identifies the emotional direction of the content, showing whether the text conveys a positive, negative, or neutral message. We used an external LLM to generate regular expressions that are potentially more common in either chosen or rejected responses. We tracked how frequently these expressions appeared in each type of response. If a regular expression appears significantly more often in a chosen response than in a rejected response, it can be useful to integrate into a labeling function. Finally, we also used keywords to label re- sponses. For this purpose, we collected multiple lists of harmful or offensive keywords from the In- ternet. The presence of these keywords in a text often indicates that the response could be more harmful or offensive. We validated this pattern within our datasets. 3 Feature HH-RLHF stat p-value UB UBP stat p-value stat p-value MT-BENCH p-value stat Text Length Reading Ease Lexical Diversity Amount of Numbers Sentiment Polarity 4.12 -4.15 -5.60 1.49 1.49 < 0.01 < 0.01 < 0.01 0.14 < 0.01 9.38 -1.60 -1.95 5.53 5.53 < 0.01 0.11 0.05 < 0.01 0.84 18.12 -4.11 -9.89 10.33 10.33 < 0.01 < 0.01 < 0.01 < 0.01 < 0.01 5.92 -1.96 -7.28 3.11 3.11 < 0.01 0.05 < 0.01 < 0.01 < 0.01 Table 1: Results of the independent t-test for numerical features of RLHF datasets. 3.3 Data Analysis For each heuristic that potentially influences the reward, we conducted a detailed data analysis be- fore developing labeling functions based on those findings. This data analysis involves determining whether a correlation exists between the heuristic and preference, and determining if its relevance is confined to a specific range of values. The data analysis was conducted on the 10 % train split of each dataset. We examined numeri- cal features, such as response length or amount of numbers, by analyzing the average values for both chosen and rejected responses. An independent t-test on these averages determined if the differ- ences were statistically significant. Some of the resulting p-values were above 0.05, indicating that the difference is not statistically significant, but we still implemented labeling functions for those heuristics. They can still provide a valuable weak supervision signal since the label model will weigh the labeling functions based on their accuracy and correlations. 
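The numerical heuristics described in Section 3.2 and the independent t-tests used in this analysis can be computed with standard Python libraries. The sketch below is illustrative rather than the authors' exact pipeline: the example pairs are made up, and the textstat, vaderSentiment and scipy packages are assumed to be available.

```python
# Minimal sketch: compute the numerical preference heuristics for chosen/rejected
# response pairs and test whether their means differ significantly.
import re
import textstat
from scipy import stats
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

_vader = SentimentIntensityAnalyzer()

def heuristics(text: str) -> dict:
    words = text.split()
    return {
        "length": len(text),
        "reading_ease": textstat.flesch_reading_ease(text),
        "lexical_diversity": len(set(words)) / max(len(words), 1),  # type-token ratio
        "num_numbers": len(re.findall(r"\d+", text)),
        "sentiment": _vader.polarity_scores(text)["compound"],
    }

# pairs: (chosen, rejected) response strings; the real analysis uses the 10% train split
pairs = [
    ("Sure, here are three concrete examples with numbers: 1) ... 2) ... 3) ...",
     "I don't know."),
    ("A detailed and polite answer that explains the steps involved.",
     "No idea, sorry."),
]

for feature in ["length", "reading_ease", "lexical_diversity", "num_numbers", "sentiment"]:
    chosen_vals = [heuristics(c)[feature] for c, _ in pairs]
    rejected_vals = [heuristics(r)[feature] for _, r in pairs]
    stat, p = stats.ttest_ind(chosen_vals, rejected_vals)
    print(f"{feature:>18}: t={stat:.2f}, p={p:.3f}")
```

On the full train splits, these per-feature statistics correspond to the t-values and p-values reported in Table 1.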
The Snorkel label model is robust to noise, so providing additional context, even if not always precise can help differentiate edge cases (Ratner et al., 2018). We found a clear correlation that longer re- sponses are consistently more likely to be chosen over shorter ones. The average length of chosen responses is longer than that of rejected responses across all four datasets. The t-test results confirm that this difference is statistically significant, with all four p-values well below the 0.05 threshold, as shown in Table 1. The average reading ease score for rejected re- sponses is higher than for chosen responses across all four datasets, indicating that preferred responses are generally more challenging to read. The t-test confirms the statistical significance of this trend for the HH-RLHF, MT-BENCH, and UB datasets, with p-values below 0.05. However, for the UB dataset, the p-value of 0.11 is not statistically significant. Despite this, we will continue to incorporate read- ing ease into the labeling functions for all datasets and assess their effectiveness. The average lexical diversity is lower in chosen responses than in rejected responses. The p-value from the independent t-tests confirms that this ob- servation is statistically significant for all datasets. Consequently, our labeling function for lexical di- versity favors responses with lower lexical diver- sity. For the HH-RLHF datasets the chosen responses generally include more numbers on average in all datasets, but the difference is not statistically sig- nificant. In contrast, for the other datasets, the chosen responses contain a statistically significant higher amount of numbers compared to rejected responses. We developed a labeling function that favors responses containing more numbers. Finally, the sentiment polarity, as calculated by VADER, is generally higher for chosen responses compared to rejected responses across all four datasets. A t-test validates these findings, confirm- ing that the mean difference in sentiment polarity is statistically significant for all datasets except for the UB dataset. Consequently, we have developed labeling functions that favor responses with higher sentiment polarity. We conducted further analysis on these numer- ical features to determine if the observed correla- tions are confined to specific ranges. For the non- numerical features, lists of regular expressions and keywords, a different approach was taken. GPT-4 was used to generate regular expressions that could influence response preferences. Prompts were for- mulated to produce regular expressions common in chosen or rejected responses. For example, rejected responses might include expressions of uncertainty, while chosen responses might include pros and cons or specific examples. We counted how frequently these regular ex- 4 pressions appeared in both chosen and rejected re- sponses. When a regular expression demonstrated a statistically significant variance in occurrence be- tween chosen and rejected responses and occurred frequently in general, it was integrated into the labeling function. We established specific thresh- olds for the minimum occurrence ration and overall frequency required. A regular expression that ap- peared with at least a 10% higher frequency in either chosen or rejected responses was adopted for that respective group in the labeling function. The resulting labeling function consists of two lists, positive and negative regular expressions. 
When comparing two responses, it outputs the response that contains more of the positive and fewer of the negative expressions. Since the occurrences of reg- ular expressions vary across datasets, the lists of positive and negative expressions are different for each dataset. Very similar to using regular expressions, we also used lists of negative keywords for labeling functions. We collected lists of words from the internet that we believe are more likely to appear in bad responses. Three distinct lists were used in the analysis: one containing offensive words6, which are normally used for moderating user-generated content, one containing harmful words, and a large list of negatively connotated words7, primarily con- sisting of obscene or vulgar terms, which we will refer to as “bad” words for simplicity. Table 2 shows a clear difference between the human-annotated HH-RLHF dataset and the other datasets. In the HH-RLHF dataset, the words of all three keyword lists are more commonly found in rejected responses, which aligns with the dataset’s goals to be helpful and harmless. In the AI-annotated UB and UBP datasets, the trend is re- versed, with chosen responses containing offensive, harmful, or bad words more frequently. However, it is important to highlight that only a small num- ber of responses contained words from these lists. In the UB dataset for example, among the 4,828 chosen and rejected responses in the train set, there were fewer than 450 harmful words, fewer than 150 “bad” words, and fewer than 50 offensive words (similar in the UBP dataset). Even fewer words were found in the MT-BENCH set, which is under- standable given its smaller size of just 898 chosen and rejected responses in the set we analyzed. 6https://github.com/LDNOOBW/List-of-Dirty- Naughty-Obscene-and-Otherwise-Bad-Words/ 7http://www.bannedwordlist.com/ Occurrences in HH-RLHF Chosen Rejected Offensive Words “Bad” Words Harmful Words 139 285 616 221 402 779 Occurrences in UB Chosen Rejected Offensive Words “Bad” Words Harmful Words 20 82 235 23 75 200 Occurrences in UBP Chosen Rejected Offensive Words “Bad” Words Harmful Words 41 101 317 38 70 238 Occurrences in MT-BENCH Chosen Rejected Offensive Words “Bad” Words Harmful Words 0 9 17 5 3 9 Table 2: Occurrences of words from three keyword lists in chosen and rejected responses across datasets. Therefore, we decided not to write labeling func- tions based on these keyword findings for the UB, UBP, and MT-BENCH datasets, as we do not be- lieve this pattern – more negative words in pre- ferred responses – will generalize well to new data. We prefer not to base our labeling functions on the prevalence of more negative words. However, for the HH-RLHF dataset, we created a labeling func- tion for each list to count these keywords and favor the response with fewer of them. 3.4 Labeling Functions Based on our data analysis results, we developed labeling functions. These concise functions take two responses as input and select a preferred re- sponse according to a defined, simple heuristic or abstain from making a decision. The developed labeling functions were applied to each train set. We further validated the efficacy of the labeling functions using two primary metrics, coverage and accuracy. The (empirical) accuracy reflects how often the labeling function correctly identifies the actual preferred response. Cover- age indicates how frequently the labeling functions make a decision instead of abstaining. 
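A pairwise labeling function of the kind described in Section 3.4 can be written as a small function that returns a preference or abstains; the sketch below also shows how empirical coverage and accuracy would be computed on a labeled train split. The label encoding and the cutoff value are illustrative assumptions, not the authors' exact choices.

```python
# Sketch of a pairwise labeling function with an abstain option.
# Label convention (Snorkel-style): -1 = abstain, 0 = prefer response_0, 1 = prefer response_1.
ABSTAIN, RESPONSE_0, RESPONSE_1 = -1, 0, 1

def lf_length(response_0: str, response_1: str, min_diff: int = 20) -> int:
    """Prefer the longer response; abstain if the length difference is small."""
    diff = len(response_0) - len(response_1)
    if abs(diff) < min_diff:          # illustrative cutoff point
        return ABSTAIN
    return RESPONSE_0 if diff > 0 else RESPONSE_1

def coverage_and_accuracy(lf, pairs, gold):
    """Empirical coverage and accuracy of a labeling function on labeled pairs."""
    votes = [lf(a, b) for a, b in pairs]
    decided = [(v, g) for v, g in zip(votes, gold) if v != ABSTAIN]
    coverage = len(decided) / len(pairs)
    accuracy = sum(v == g for v, g in decided) / max(len(decided), 1)
    return coverage, accuracy

# Example usage on two toy pairs with known preferences
pairs = [("A long, detailed reply ...", "Short."), ("Hi.", "Hello there.")]
gold = [RESPONSE_0, RESPONSE_1]
print(coverage_and_accuracy(lf_length, pairs, gold))
```

Raising the cutoff lowers coverage and, as described above, can raise accuracy, which is exactly the trade-off reported in Tables 3 to 6.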
5 Labeling functions abstain from making decision either due to identical heuristic values between re- sponses or due to predefined cutoff points. These cutoff points are based on the data analysis, which identified ranges where the effects of heuristics are stronger or weaker. Beyond those cutoff points the labeling functions abstain, reducing their coverage but potentially enhancing accuracy. While a grid search could be used to determine these thresholds on each train set for optimal coverage and accuracy, our primary goal with these labeling functions is not solely to optimize performance on the 10% train set. We aim to ensure they generalize well on the remainder of the dataset or unseen data. Labeling function Coverage Accuracy Length Reading ease Lexical diversity Sentiment polarity Amount of numbers Regular Expressions Offensive keywords Harmful keywords Bad Keywords 88.54% 74.50% 50.81% 83.68% 6.99% 27.93% 1.31% 4.42% 1.89% 52.36% 52.74% 53.65% 52.39% 53.31% 54.40% 60.00% 57.75% 57.30% Table 3: Labeling functions analysis on train set (10% of the HH-RLHF dataset). Table 3 shows the labeling function for the HH- RLHF dataset. Each labeling function achieves an accuracy exceeding 50% on the train set. How- ever, none surpass 60%, indicating that these sim- ple heuristics do not a highly accurate reflection of the human preference represented in this dataset. The coverage of labeling functions varies signifi- cantly. For numerical values, coverage depends on the established thresholds. Coverage for keyword lists is expectedly low due to the rarity of negative words in model-generated responses. Similarly, differences in the amount of numbers between re- sponses are rare. Table 4 shows the labeling functions used for the MT-BENCH dataset. The accuracies of the label- ing functions are notably higher than those for the other dataset. For instance, the labeling function for text length achieves an empirical accuracy of al- most 70%, while the same labeling function applied to the HH-RLHF dataset achieves an accuracy of about 52 %. It is important to note, however, that the MT-BENCH dataset is considerably smaller than the HH-RLHF dataset. Labeling function Coverage Accuracy Length Reading ease Lexical diversity Sentiment polarity Amount of numbers Regular Expressions 95.32% 69.26% 62.13% 69.93% 63.47% 30.62% 69.97% 60.45% 61.69% 59.39% 63.50% 58.54% Table 4: Labeling functions analysis on train set (MT- BENCH dataset). Labeling function Coverage Accuracy Length Reading ease Lexical diversity Sentiment polarity Amount of numbers Regular Expressions 93.61% 68.21% 52.19% 65.13% 62.75% 32.02% 56.99% 55.30% 53.94% 55.13% 61.43% 57.46% Table 5: Labeling functions analysis on train set (10% of the UB dataset). Table 5 shows the labeling functions applied to the UB dataset, and Table 6 presents those applied to the UBP dataset. Both datasets exhibit similar coverages, but the accuracies are notably higher for the UBP dataset compared to the UB dataset. Labeling function Coverage Accuracy Length Reading ease Lexical diversity Sentiment polarity Amount of numbers Regular expressions 95.06% 68.98% 52.88% 64.60% 50.61% 29.11% 67.43% 57.80% 63.90% 55.85% 71.70% 60.77% Table 6: Labeling functions analysis on train set (10% of the UBP dataset). 3.5 Label Model We fitted the Snorkel label model using the listed labeling functions and the train set for calibration. The model was fitted over 100 epochs with an L2 regularization of 0.5 and using an Adam optimizer. 
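A compact sketch of this calibration step with Snorkel is shown below, using the hyperparameters reported above (100 epochs, L2 regularization of 0.5, Adam optimizer). The two labeling functions and the unlabeled pairs are toy stand-ins; in practice the label matrix would be built from all labeling functions over the full weakly supervised split. The final lines already apply the confidence rule and threshold filtering discussed in the next subsection.

```python
# Sketch of the label-model step: combine labeling-function votes into weak labels
# with Snorkel, then keep only high-confidence samples.
import numpy as np
from snorkel.labeling.model import LabelModel

def lf_length(a, b):       # prefer the longer response, abstain on ties
    return 0 if len(a) > len(b) else 1 if len(b) > len(a) else -1

def lf_numbers(a, b):      # prefer the response containing more digits
    na, nb = sum(c.isdigit() for c in a), sum(c.isdigit() for c in b)
    return 0 if na > nb else 1 if nb > na else -1

lfs = [lf_length, lf_numbers]
unlabeled_pairs = [
    ("Response with 3 concrete steps and 2 examples ...", "Short reply."),
    ("Maybe.", "Here is a detailed answer with 2 examples ..."),
]

# Label matrix: one row per pair, one column per labeling function (-1 = abstain)
L = np.array([[lf(a, b) for lf in lfs] for a, b in unlabeled_pairs])

label_model = LabelModel(cardinality=2, verbose=False)
label_model.fit(L, n_epochs=100, l2=0.5, optimizer="adam", seed=0)

proba = label_model.predict_proba(L)      # probability of preferring response 0 or 1
preds = proba.argmax(axis=1)
confidence = proba.max(axis=1)            # equals max(P, 1 - P)

threshold = 0.7                           # illustrative confidence threshold
keep = confidence >= threshold
print(list(zip(preds[keep], confidence[keep].round(2))))
```

The `confidence` line implements the max(P, 1 - P) rule formalized in Equation (1) below; samples below the chosen threshold are simply dropped from the weakly labeled set.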
Once calibrated, the label model combines the labeling functions and can provide a probability classification for any given input. In the context of preference classification, it predicts the probability of one response being preferred over another based on the heuristics.

Table 7: Label model classification accuracy on the train set and the weakly labeled set.
Dataset     Accuracy on train set   Accuracy on weak set
HH-RLHF     53.17%                  52.97%
MT-BENCH    67.82%                  N.A.
UB          57.42%                  56.56%
UBP         66.03%                  64.45%

The weakly labeled set for the MT-BENCH dataset is not part of the original set, as explained in Section 3.1. Due to the absence of gold labels, it is not possible to compute the label model accuracy there.

We applied the label model to the remainder of each dataset, now referred to as the weakly labeled dataset, and assessed the accuracy of the label model by comparing the label model outputs to the original labels. Table 7 shows the achieved classification accuracies on the train sets and the weakly labeled sets. The accuracies on the weakly labeled sets are very similar, only slightly worse, compared to the train sets.

3.6 Confidence Thresholds

The label model we calibrated generates a prediction probability for each class. Samples with a probability below 0.5 are classified as 0, and those above 0.5 as 1. In our context, a 0 indicates a preference for response 0, and conversely for a 1. This probability reflects the model's confidence in its prediction. We converted the prediction probability into a confidence value for each sample:

confidence = P if P >= 0.5, and 1 - P if P < 0.5   (1)

where P is the prediction probability. To improve the accuracy of our labeling, we can implement a confidence threshold. We specify a particular threshold value and exclude any samples with confidence levels below this value. This technique can increase average accuracy, but it comes with the trade-off of reducing the number of weakly labeled samples. We conducted experiments with different confidence thresholds to assess their impact on the reward model performance.

3.7 Experiments

After applying weak supervision, we obtained the weakly labeled datasets, some of which were filtered using various confidence thresholds, alongside the train set used for labeling function data analysis and label model calibration. We trained a baseline reward model using the train set. For our experiments, we combined the various weakly labeled datasets with the corresponding train set to train a new reward model. We conducted the training of the reward model on the DFKI High-Performance-Compute cluster over two training epochs, using a learning rate of 8e-6 and a residual dropout rate of 0.01. Additionally, we used float32 as the datatype. As a base model architecture, we utilized DeBERTa V3.

After training a reward model, either on a baseline train set or a weakly supervised dataset, it was evaluated using the corresponding evaluation set. During this phase, the model performed inference to classify the samples within the evaluation set. Classification accuracy was determined by the model's ability to correctly identify the preferred response, with incorrect classifications reflecting failures. We primarily used the F1 score to quantify accuracy because it balances precision and recall, making it ideal for our analysis.

4 Results

We evaluated the impact of different baseline training set sizes to determine how the availability of more or less originally labeled data affects performance. The results are illustrated in plots that show the F1 performance relative to the number of weakly labeled samples used.
Each plot's x-axis shows the amount of weakly annotated data added to the baseline. These samples were selected because their confidence met the specified threshold (or, where no threshold is given in Appendix A.2, because they were among the N most confident samples), so they represent the most confident weakly labeled samples. Detailed results for all datasets, including all F1 scores, the numbers of baseline and weakly labeled samples used, and confidence thresholds, can be found in Appendix A.2.

4.1 HH-RLHF

Figure 2 demonstrates that there is no improvement in extending the train set with our weak supervision pipeline, using a baseline train set size of 10% (14,472 samples) or 5% (7,236 samples). The baseline F1 scores of 59.5% and 56.14% are not particularly high, especially compared to the performance of models trained on the other datasets.

Figure 2: Evaluation for HH-RLHF using 10% (left) and 5% (right) as a baseline train set.

Figure 3: Evaluation for HH-RLHF using 2% (left) and 1% (right) as a baseline train set.

Using a smaller baseline of 2% or 1%, weak supervision shows a performance improvement over their respective baseline scores. The performance for the 2% baseline set (2,895 samples) reaches a peak at an F1 score of 54.69%, compared to the baseline F1 of 53.78%. While not a substantial increase, this result is notably different from those obtained with the 5% and 10% baseline sets.

Given the larger size of the HH-RLHF dataset, we also implemented a 1% baseline set. The baseline F1 performance of 53.14% was improved when adding weakly annotated samples. The best result was achieved when adding 1,051 weakly annotated samples to the 1,448 originally labeled samples, which resulted in an F1 score of 54.06%. However, performance declined with the use of more weakly annotated samples. Additionally, results are more volatile with a smaller number of training samples, as each significantly influences the training process. This volatility is evident in the spikes and fluctuations in Figure 3 with a 1% baseline.

4.2 UB

Figure 4 shows the plots for the UB dataset. Using a 10% baseline, a minor performance improvement is visible. The highest-scoring weakly annotated dataset adds 476 weakly annotated samples to the 4,838 originally labeled samples and raises the F1 from 64.3% to 64.73%. Weak supervision models outperform the baseline up to 1,500 weakly annotated samples; beyond this, performance declines.

With a 5% baseline (2,419 samples), adding 1,890 weakly annotated samples improves the F1 score the most, from 61.43% to 63.42%. Reward models trained on up to 5,500 weakly annotated samples continue to exceed baseline performance. When adding 21,242 weakly annotated samples, the F1 score declines significantly to 59.28%, over two percentage points below the baseline.

For a 2% baseline (968 samples), all models with added weakly annotated samples surpass the baseline F1 score of 58.12%. The best model was trained on 2,106 additional weakly annotated samples, with performance decreasing when further samples were added, yet it never drops below the baseline. Remarkably, even training on the entire remaining 98% of the dataset without a specific confidence threshold still results in better performance than the baseline.

4.3 UBP

Figure 5 shows the results for the UBP dataset.
Using a 10% baseline of the UBP dataset results in similar outcomes to the UB dataset, with only about a 2% improvement achievable over the baseline. The best result, with a 73.51% F1 score, uses 1,117 weakly annotated samples added to the 5,726 baseline samples. Performance decreases below the baseline when more than 2,670 weakly annotated samples are added.

Figure 4: Evaluation for UB dataset using 10% (upper left), 5% (upper right), and 2% (bottom) as baseline train set.

With a 5% baseline (2,863 samples), there is a slight improvement over the 69.00% baseline F1 score. The best model was trained with 323 additional weakly annotated samples and achieves an F1 score of 70.99%.

For a 2% baseline, the best model outperforms the baseline by over three percentage points, reaching an F1 score of 68.28% using 453 weakly annotated samples added to 1,146 baseline samples. Unlike the results of the UB dataset with a 2% baseline, some experiments with weakly labeled datasets underperformed compared to the baseline. Specifically, adding 529 weakly annotated samples resulted in performance comparable to the baseline, while further additions led to worse performance.

4.4 MT-BENCH

We conducted experiments using the MT-BENCH dataset as the baseline and label model calibration set, with a newly generated dataset serving as the weakly annotated set, as outlined in Section 3.1. Training a reward model only on the training split of the MT-BENCH dataset, which consists of 989 samples, yielded an evaluation F1 score of 71.23%. This score served as the benchmark against which we compared the performance of our experiments.

Calibrating the label model on the MT-BENCH dataset and applying it to a newly generated dataset, followed by filtering based on the label model confidence, resulted in weakly labeled datasets of varying sizes. Figure 6 shows the results. Notably, all experiments using weakly annotated samples surpass the baseline, a distinction from the other datasets. Unlike the other datasets, the best results were obtained with very large weak datasets. The highest evaluation F1 score of 78.24% was achieved by adding 16,622 weakly annotated samples to the baseline set.

The data in the plot shows considerable noise, such as a prominent spike around 2,000 weakly labeled samples. The small size of the MT-BENCH dataset and its limited evaluation set size of 386 samples likely contribute to the noise in the results, making these outcomes less stable and reliable compared to those from datasets with larger evaluation splits.
Figure 5: Evaluation for UBP using 10% (upper left), 5% (upper right), and 2% (bottom) as baseline train set.

Figure 6: Evaluation for MT-BENCH as a baseline train set and a newly generated dataset as weakly labeled dataset.

5 Limitations

In this study, we used very simple heuristics, such as text length or lexical diversity, to approximate the process of preferring one response over another. However, the human (or AI) labeling process is inherently more complex and likely extends beyond these simple factors, as exemplified in a qualitative analysis in Appendix A.1. Consequently, using such heuristics generally leads to a noisy labeling process, where inaccurately labeled samples can negatively impact the performance of the reward model, depending on the accuracies of the labeling functions and the dataset sizes.

Additionally, the chosen labeling functions and their respective thresholds were based on data analysis but remained somewhat arbitrary. More precise factors that influence human preference could potentially enhance the accuracy of the label model. Although the selected thresholds improved the accuracy of the labeling functions, they were only refined to a certain extent and not subjected to exhaustive optimization.

Finally, the datasets were divided into an evaluation set and a training set, so the evaluation set is a subset of each original dataset and therefore different for each dataset, which complicates direct comparison across datasets. Furthermore, if the datasets include very similar prompts and responses across samples, the performance of the reward models on unseen data, and consequently the reliability of the results, might be reduced.

6 Conclusion

This study aimed to assess the application of weak supervision for extending RLHF datasets. The
Letting LLMs generate responses and applying la- bel model annotation to expand a preference dataset can be theoretically limitless. These results offer insights into data augmen- tation and strategic training data selection for RLHF. Employing confidence-based selection for weakly annotated data demonstrates the importance of quality in extending datasets. We show how weak supervision can help refine reward models in cases of limited labeled data. By combining high- confidence weakly annotated data with baseline la- beled sets, researchers can better fine-tune reward models for preference tasks. The method might also provide a versatile framework for addressing challenges in other preference-based applications. 7 Future Work Further research could focus on enhancing the eval- uation of resulting reward models. One approach could be to standardize evaluation sets across dif- ferent datasets to provide a more consistent basis for comparison. Additionally, these reward models could be integrated into a Reinforcement Learning process to refine an existing LLM that has been instruction fine-tuned prior. Evaluating LLMs re- fined with various reward models could provide insights into their respective efficacies. A detailed study of the factors that influence the human annotation process for response prefer- ence could also be valuable. Developing labeling functions with the help of experts could lead to improvements in both the coverage and accuracy of the label model. Building on the insights from the MT-BENCH experiments, further exploration into the generation of training data for RLHF could be done. Using LLMs to generate responses, which are then la- beled by a model, could facilitate the creation of virtually unlimited training data. This approach yielded promising results in our experiments. Fu- ture studies could examine how varying the size of existing datasets used as a baseline, as well as different generation procedures, affect the efficacy of this method. References Yuntao Bai, Andy Jones, Kamal Ndousse, Amanda Askell, Anna Chen, Nova DasSarma, Dawn Drain, Stanislav Fort, Deep Ganguli, Tom Henighan, Nicholas Joseph, Saurav Kadavath, Jackson Kernion, Tom Conerly, Sheer El-Showk, Nelson Elhage, Zac Hatfield-Dodds, Danny Hernandez, Tristan Hume, Scott Johnston, Shauna Kravec, Liane Lovitt, Neel Nanda, Catherine Olsson, Dario Amodei, Tom Brown, Jack Clark, Sam McCandlish, Chris Olah, Ben Mann, and Jared Kaplan. 2022. Training a helpful and harmless assistant with reinforcement arXiv preprint learning from human feedback. arXiv:2204.05862. Alexander Bukharin, Yixiao Li, Pengcheng He, Weizhu Chen, and Tuo Zhao. 2023. Deep reinforcement learning from hierarchical weak preference feedback. arXiv preprint arXiv:2309.02632. Stephen Casper, Xander Davies, Claudia Shi, Thomas Krendl Gilbert, Jérémy Scheurer, Javier Rando, Rachel Freedman, Tomasz Korbak, David Lindner, Pedro Freire, Tony Wang, Samuel Marks, Charbel-Raphaël Segerie, Micah Carroll, Andi Peng, Phillip Christoffersen, Mehul Damani, Stewart Slocum, Usman Anwar, Anand Siththaranjan, Max Nadeau, Eric J. Michaud, Jacob Pfau, Dmitrii Krasheninnikov, Xin Chen, Lauro Langosco, Peter Hase, Erdem Bıyık, Anca Dragan, David Krueger, Dorsa Sadigh, and Dylan Hadfield-Menell. 2023. Open problems and fundamental limitations of reinforcement learning from human feedback. arXiv preprint arXiv:2307.15217. Ganqu Cui, Lifan Yuan, Ning Ding, Guanming Yao, Wei Zhu, Yuan Ni, Guotong Xie, Zhiyuan Liu, and Maosong Sun. 2023. 
Ultrafeedback: Boosting lan- guage models with high-quality feedback. arXiv preprint arXiv:2310.01377. Rudolph Flesch. 1948. A new readability yardstick. Journal of Applied Psychology, 32(3):p221 – 233. C. Hutto and Eric Gilbert. 2014. Vader: A parsimonious rule-based model for sentiment analysis of social media text. Proceedings of the International AAAI Conference on Web and Social Media, 8(1):216–225. 11 Sungdong Kim, Sanghwan Bae, Jamin Shin, Soyoung Kang, Donghyun Kwak, Kang Min Yoo, and Min- joon Seo. 2023. Aligning large language mod- arXiv preprint els through synthetic feedback. arXiv:2305.13735. Harrison Lee, Samrat Phatale, Hassan Mansoor, Kellie Lu, Thomas Mesnard, Colton Bishop, Victor Car- bune, and Abhinav Rastogi. 2023. Rlaif: Scaling reinforcement learning from human feedback with ai feedback. arXiv preprint arXiv:2309.00267. OpenAI. 2024. Gpt-4 technical report. arXiv preprint arXiv:2303.08774. Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Car- roll L. Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, John Schulman, Jacob Hilton, Fraser Kelton, Luke Miller, Maddie Simens, Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, and Ryan Lowe. 2022. Training language models to follow instructions with human feedback. arXiv preprint arXiv:2203.02155. Alexander Ratner, Stephen H Bach, Henry Ehrenberg, Jason Fries, Sen Wu, and Christopher Ré. 2017. Snorkel: Rapid training data creation with weak su- pervision. In Proceedings of the VLDB Endowment. International Conference on Very Large Data Bases, volume 11, page 269. NIH Public Access. Alexander Ratner, Braden Hancock, Jared Dunnmon, Frederic Sala, Shreyash Pandey, and Christopher Ré. 2018. Training complex models with multi-task weak supervision. arXiv preprint arXiv:1810.02840. John Schulman, Filip Wolski, Prafulla Dhariwal, Proxi- Alec Radford, and Oleg Klimov. 2017. mal policy optimization algorithms. arXiv preprint arXiv:1707.06347, arXiv:1707.06347. Prasann Singhal, Tanya Goyal, Jiacheng Xu, and Greg Durrett. 2023. A long way to go: Investi- gating length correlations in rlhf. arXiv preprint arXiv:2310.03716. Zhiqing Sun, Yikang Shen, Hongxin Zhang, Qinhong Zhou, Zhenfang Chen, David Cox, Yiming Yang, and Chuang Gan. 2023. Salmon: Self-alignment with principle-following reward models. arXiv preprint arXiv:2310.05910. Romal Thoppilan, Daniel De Freitas, Jamie Hall, Noam Shazeer, Apoorv Kulshreshtha, Heng-Tze Cheng, Alicia Jin, Taylor Bos, Leslie Baker, Yu Du, YaGuang Li, Hongrae Lee, Huaixiu Steven Zheng, Amin Ghafouri, Marcelo Menegali, Yanping Huang, Maxim Krikun, Dmitry Lepikhin, James Qin, Dehao Chen, Yuanzhong Xu, Zhifeng Chen, Adam Roberts, Maarten Bosma, Vincent Zhao, Yanqi Zhou, Chung- Ching Chang, Igor Krivokon, Will Rusch, Marc Pickett, Pranesh Srinivasan, Laichee Man, Kathleen Meier-Hellstern, Meredith Ringel Morris, Tulsee Doshi, Renelito Delos Santos, Toju Duke, Johnny So- raker, Ben Zevenbergen, Vinodkumar Prabhakaran, Mark Diaz, Ben Hutchinson, Kristen Olson, Ale- jandra Molina, Erin Hoffman-John, Josh Lee, Lora Aroyo, Ravi Rajakumar, Alena Butryna, Matthew Lamm, Viktoriya Kuzmina, Joe Fenton, Aaron Co- hen, Rachel Bernstein, Ray Kurzweil, Blaise Aguera- Arcas, Claire Cui, Marian Croak, Ed Chi, and Quoc Le. 2022. Lamda: Language models for dialog appli- cations. arXiv preprint arXiv:2201.08239. Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric. P Xing, Hao Zhang, Joseph E. 
Gonzalez, and Ion Stoica. 2023. Judg- ing llm-as-a-judge with mt-bench and chatbot arena. arXiv preprint arXiv:2306.05685. A Appendix A.1 Qualitative Analysis We examined some of the samples that the label model we calibrated classified differently than the AI or human annotators. This qualitative evalua- tion offers deeper insights into the characteristics and potential limitations of using simple labeling functions as a weak supervision signal. Prompt Answer 0 Answer 1 Human pref- erence Find the year Nebraska joined the union 1867 What year? Answer 0 Table 8: Misclassified Example 1 from the HH-RLHF dataset. Answer 0 is the correct answer (by human annotation). The label model chose Answer 1 as its preference. The first example, in Table 8, involves a prompt asking for the year Nebraska joined the union. The correct answer is “1867,” which is a direct and accu- rate response. However, the label model incorrectly chose “What year?” as the preferred response. This error highlights a critical limitation in the labeling functions: they do not verify the correctness or fac- tual accuracy of the responses. A possible solution for this issue could involve developing a labeling function that utilizes a database or leverages an off- the-shelf LLM specifically fine-tuned to verify the factual accuracy of responses. This approach could improve the label model’s ability to evaluate the factual correctness of responses but would be an expensive method that deviates from the principle of labeling functions being simple heuristics. Similarly to the first example, the next exam- ple in Table 9 shows the label model’s weakness 12 correct. In this instance, answer 0, which the label model selected, was longer, had lower reading ease, lower lexical diversity, and even included some regular expressions considered positive. Despite the label model’s high confidence, answer 1 was the correct choice. This highlights how, even if the analyzed features generally predict preferences accurately, there can still be exceptions where the real preference is based on different factors, such as conciseness and directness. It also illustrates that certain factors may act as trade-offs, rather than optimizations. For example, conciseness might be more valuable in some instances, while in others, the length of an answer could be advantageous. This observation adds to the findings of (Singhal et al., 2023), who noted that existing reward models often heavily rely only on answer length. A.2 Experimental Results We provide comprehensive results for each exper- iment conducted. In cases where ** is specified instead of a confidence threshold, the top N most confident samples were selected rather than being filtered by a threshold value. Prompt Answer 0 Answer 1 What are some of the best book series of all time? I am actually not well-read enough to know, so let me ask you some questions to help me learn about what kinds of books you like. I’d like to think about this a little. Can you let me know if there’s a particular story you’re interested in? Human pref- erence Answer 1 Table 9: Misclassified Example 2 from the HH-RLHF dataset. Answer 1 is the correct answer (by human annotation). The label model chose Answer 0 as its preference. in assessing the actual content and context of the responses. The label model favors the response expressing a lack of knowledge and seeking further information from the user over a more thoughtful but non-committal response that asks for clarifi- cation on the user’s interests. 
The label model relies on surface-level features influenced by the heuristics programmed into the labeling functions, which fail to consider the interactive quality of the responses. Prompt Answer 0 Answer 1 How do I access the dark web? There are many guides that can help you, including websites with longer articles on the topic, and specific resources. I recom- mend reading about Tor, which is a pro- gram that helps you connect to dark web websites. There are also many resources that will walk you through connecting to dark web services, such as information on joining dark web forums. The dark web is best accessed by running a tor node in your browser or device. Unfor- tunately many web browsers do not support tor at this time. Human pref- erence Answer 1 Table 10: Misclassified Example 3 from the HH-RLHF dataset. Answer 1 is the correct answer (by human annotation). The label model chose Answer 0 as its preference. The third misclassified example, shown in Table 10, demonstrates that despite multiple analyzed factors indicating one response as preferable, the other response can still be the chosen response. The label model incorrectly favored a longer and more detailed response over one that was concise and 13 Originally Labelled (Train) Weakly Labelled Confidence Threshold F1 14472 14472 14472 14472 14472 130248 15810 4189 956 0 (Baseline) 0.000 0.985 0.990 0.995 – 52.09 55.38 57.62 57.65 59.50 Results of HH-RLHF dataset with 10% baseline. Originally Labelled (Train) Weakly Labelled Confidence Threshold F1 7236 7236 7236 7236 7236 7236 7236 7236 137484 67176 19150 9556 4287 2273 988 0 (Baseline) 0.000 0.900 0.980 0.990 0.992 0.995 0.996 – 51.61 53.03 54.71 54.56 55.33 55.49 56.01 56.14 Results of HH-RLHF dataset with 5% baseline. Originally Labelled (Train) Weakly Labelled Confidence Threshold F1 2895 2895 2895 2895 2895 2895 2895 2895 2895 2895 141825 9839 7500 6000 4468 3000 2432 1800 1135 0 (Baseline) 0.0000 0.9900 ** ** 0.9920 ** 0.9946 ** 0.9950 – 52.15 53.41 53.81 53.83 54.13 53.79 54.69 54.24 53.84 53.78 Results of HH-RLHF dataset with 2% baseline. Originally Labelled (Train) Weakly Labelled Confidence Threshold F1 1448 1448 1448 1448 1448 1448 1448 1448 1448 1448 143272 15315 9919 4543 2464 1500 1051 871 500 0 (Baseline) 0.0000 0.9900 0.9905 0.9920 0.9950 ** 0.9960 0.9970 ** – 51.74 52.97 53.29 53.74 53.29 53.04 54.06 52.97 53.48 53.14 Results of HH-RLHF dataset with 1% baseline. 14 Originally Labelled (Train) Weakly Labelled Confidence Threshold F1 4838 4838 4838 4838 4838 4838 4838 4838 4838 4838 4838 43535 12799 5310 3598 1802 1345 926 476 276 143 0 (Baseline) 0.000 0.950 0.980 0.985 0.990 0.992 0.993 0.995 0.996 0.997 – Results of UB dataset with 10% baseline. 58.30 60.93 63.28 63.62 63.99 64.65 64.39 64.73 64.54 64.44 64.30 Originally Labelled (Train) Weakly Labelled Confidence Threshold F1 2419 2419 2419 2419 2419 2419 2419 2419 2419 2419 45954 21242 5518 3727 2850 1890 1594 1276 498 0 (Baseline) 0.0000 0.9000 0.9800 0.9850 0.9880 0.9900 0.9916 0.9920 0.9950 – Results of UB dataset with 5% baseline. 58.63 59.28 61.95 62.26 62.94 63.42 62.99 61.61 62.95 61.43 Originally Labelled (Train) Weakly Labelled Confidence Threshold F1 968 968 968 968 968 968 968 968 968 968 968 968 968 968 968 47405 22544 14823 6280 4330 2798 2317 2106 1959 1716 748 529 295 138 0 (Baseline) 0.0000 0.9000 0.9500 0.9800 0.9860 0.9900 0.9905 0.9910 0.9915 0.9920 0.9950 0.9960 0.9970 0.9975 – Results of UB dataset with 2% baseline. 
15 58.81 59.14 58.22 60.09 60.19 60.63 60.15 60.77 60.01 59.46 59.37 58.79 58.30 58.31 58.12 Originally Labelled (Train) Weakly Labelled Confidence Threshold F1 5726 5726 5726 5726 5726 5726 5726 5726 5726 5726 5726 51531 27470 13365 4767 3341 2670 2136 1117 692 428 0 (Baseline) 0.00000 0.95000 0.99000 0.99700 0.99800 0.99835 0.99850 0.99900 0.99920 0.99950 – Results of UBP dataset with 10% baseline. 65.61 65.82 67.78 70.81 71.01 71.01 72.38 73.08 73.51 72.25 71.60 Originally Labelled (Train) Weakly Labelled Confidence Threshold F1 2863 2863 2863 2863 2863 2863 2863 2863 2863 2863 2863 2863 2863 54394 19642 8048 3769 3312 2863 1312 742 570 323 227 97 0 (Baseline) 0.00000 0.98000 0.99500 0.99800 0.99840 0.99850 0.99900 0.99920 0.99950 0.99960 0.99965 0.99970 – Results of UBP dataset with 5% baseline. 65.28 65.24 67.25 67.96 69.61 69.54 69.80 70.55 70.62 70.99 70.31 69.49 69.00 Originally Labelled (Train) Weakly Labelled Confidence Threshold F1 1146 1146 1146 1146 1146 1146 1146 1146 1146 1146 1146 1146 1146 1146 56111 20512 14595 8281 5129 2777 1142 772 594 453 338 237 102 0 (Baseline) 0.00000 0.98000 0.99000 0.99500 0.99700 0.99850 0.99900 0.99925 0.99940 0.99950 0.99960 0.99965 0.99975 – Results of UBP dataset with 2% baseline. 16 64.54 64.01 64.13 64.86 65.05 66.51 67.13 67.97 67.89 68.28 67.15 66.34 66.06 65.11 Originally Labelled (Train) Weakly Labelled Confidence Threshold F1 898 898 898 898 898 898 898 898 898 898 898 898 898 898 24160 16622 12710 8826 6049 4372 3103 2735 2416 1902 1382 1152 615 0 (Baseline) 0.0000 0.9500 0.9800 0.9900 0.9950 0.9970 0.9980 0.9983 0.9985 0.9990 0.9992 0.9994 0.9995 – Results of MT-BENCH dataset. 75.97 78.24 75.40 75.89 75.97 71.73 72.51 72.31 72.02 77.25 74.11 71.96 72.30 71.23 17
synthetic_cpt
2
Words_Matter_Leveraging_Individual_Text_Embeddings_for_Code_Generation_in_CLIP_Test-Time_Adaptation.pdf
MarkBERT: Marking Word Boundaries Improves Chinese BERT Linyang Li2* ,Yong Dai1, Duyu Tang1† , Xipeng Qiu2, Zenglin Xu3, Shuming Shi1 1 Tencent AI Lab, China,2 Fudan University,3 PengCheng Laboratory {yongdai,duyutang}@tencent.com, {linyangli19, xpqiu}@fudan.edu.cn 2 2 0 2 t c O 8 ] L C . s c [ 2 v 8 7 3 6 0 . 3 0 2 2 : v i X r a Abstract We present a Chinese BERT model dubbed MarkBERT that uses word information in this work. Existing word-based BERT models regard words as basic units, however, due to the vocabulary limit of BERT, they only cover high-frequency words and fall back to character level when encountering out- of-vocabulary (OOV) words. Different from existing works, MarkBERT keeps the vocabulary being Chinese characters and inserts boundary markers between contiguous words. Such design enables the model to handle any words in the same way, no matter they are OOV words or not. Besides, our model has two additional benefits: first, it is convenient to add word-level learning objectives over markers, which is com- plementary to traditional character and sentence-level pre- training tasks; second, it can easily incorporate richer seman- tics such as POS tags of words by replacing generic markers with POS tag-specific markers. With the simple markers in- sertion, MarkBERT can improve the performances of various downstream tasks including language understanding and se- quence labeling. 1 Introduction Chinese words can be composed of multiple Chinese char- acters. For instance, the word 地球 (earth) is made up of two characters 地 (ground) and 球 (ball). However, there are no delimiters (i.e., space) between words in written Chinese sentences. Traditionally, word segmentation is an impor- tant first step for Chinese natural language processing tasks (Chang, Galley, and Manning 2008). Instead, with the rise of pretrained models (Devlin et al. 2018), Chinese BERT models are dominated by character-based ones (Cui et al. 2019a; Sun et al. 2019; Cui et al. 2020; Sun et al. 2021b,a), where a sentence is represented as a sequence of characters. There are several attempts at building Chinese BERT mod- els where word information is considered. Existing studies tokenize a word as a basic unit (Su 2020), as multiple char- acters (Cui et al. 2019a) or a combination of both (Zhang and Li 2020; Lai et al. 2021; Guo et al. 2021). However, due * Work done during internship at Tencent AI Lab. † Corresponding author. Copyright © 2022, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. 1All the codes and models will be made publicly available at https://github.com/daiyongya/markbert to the limit of the vocabulary size of BERT, these models only learn for a limited number (e.g., 40K) of words with high frequency. Rare words below the frequency threshold will be tokenized as separate characters so that the word in- formation is neglected. In this work, we present a simple framework, MarkBERT, that considers Chinese word information. Instead of regard- ing words as basic units, we use character-level tokeniza- tions and inject word information via inserting special mark- ers between contiguous words. The occurrence of a marker gives the model a hint that its previous character is the end of a word and the following character is the beginning of an- other word. Such a simple model design has the following advantages. 
First, it avoids the problem of OOV words since it deals with common words and rare words (even the words never seen in the pretraining data) in the same way. Sec- ond, the introduction of marker allows us to design word- level pretraining tasks (such as replaced word detection il- lustrated in section ), which are complementary to traditional character-level pretraining tasks like masked language mod- eling and sentence-level pretraining tasks like next sentence prediction. In the pretraining stage, we force the markers to under- stand the contexts around them while serving as separators between words. We train our model with two pretraining tasks. The first task is masked language modeling and we also mask markers such that word boundary knowledge can be learned since the pre-trained model needs to recognize the word boundaries within the context. The second task is replaced word detection. We replace a word with artificially generated words and ask the markers behind the word to pre- dict whether the word is replace. Such a process will force the markers to serve as discriminators therefore can learn more word-boundary information within the context. With these two pretraining tasks, we train the MarkBERT model initialized from BERT-Chinese models and obtain consider- able improvements. We conduct extensive experiments on various down- streams tasks including named entity recognition tasks (NER) and natural language understanding tasks. On the NER task, we demonstrate that MarkBERT can significantly surpass baseline methods on both MSRA and OntoNotes datasets (Huang, Xu, and Yu 2015; Zhang and Yang 2018). Compared with other word-level Chinese BERT models, we Figure 1: An illustrative example of our model. Box (a) gives the original input written in Chinese, its translation in English, word segmentation results given by an off-the-shell text analyzer, and the POS tags of words. Box (b) shows a traditional character-level Chinese BERT. Box (c) shows a word-level BERT using word-level vocabulary in the encoding process. In box (d), we show the structure of MarkBERT which inserts markers [S] between words but the model remains a character-level model. conduct experiments and observe that MarkBERT performs better on text classification, keyword recognition, and se- mantic similarity tasks in the CLUE benchmark datasets. We summarize the major contributions of this work as fol- lows. • We present a simple and effective Chinese pretrained model MarkBERT that considers word information with- out aggravating the problem of OOV words. • We demonstrate that our model achieves considerable performance improvements on Chinese NER and Chi- nese NLU tasks with a simple yet effective mark inser- tion strategy. Related Work We describe related work on injecting word information to Chinese BERT and the use of marker in natural language understanding tasks. Chinese BERT Pre-trained models exemplified by BERT (Devlin et al. 2018) and RoBERTa (Cui et al. 2019a) have been proved successful in various Chinese NLP tasks (Xu et al. 2020; Cui et al. 2019b). Existing Chinese BERT models that incorpo- rate word information can be divided into two categories. The first category uses word information in the pretraining stage but represents a text as a sequence of characters when the pretrained model is applied to downstream tasks. For ex- ample, Cui et al. (2019a) use the whole-word-masking strat- egy that masks word spans and predicts continuously mul- tiple masked positions. Lai et al. 
(2021) incorporate lexi- con information by concatenating the lexicons along with character-level context. The second category uses word in- formation when the pretrained model is used in downstream tasks. For example, Su (2020) uses a word-level vocabu- lary instead of characters. If a word 地 球 is included in the vocabulary, its constitutes 地 and 球 will not be con- sidered as input tokens. Zhang and Li (2020) go one step further by constructing two independent encoders that en- code character-level and word-level information separately and concatenate them at the top layers of two encoders. Similarly, Guo et al. (2021) encode both character-level and word-level information. They move the information aggre- gation stage to the embedding level. Marker Insertion in NLU The idea of inserting markers is explored in entity-related natural language understanding tasks, especially in relation classification. Given a subject entity and an object entity as the input, existing work inject untyped markers (Sun et al. 2019; Soares et al. 2019) or entity-specific markers (Zhong and Chen 2020) around the entities, and make better predic- tions of the relations of the entities. MarkBERT Pre-training In this section, we first introduce the background of char- acter level Chinese pre-trained models; then we introduce the structure of our MarkBERT model. After describing the structure of MarkBERT, we introduce the training process of the MarkBERT. Finally, we provide details of the entire training process. Character Level Chinese BERT In language model pre-training, BERT (Devlin et al. 2018) first introduced the masked language modeling strategy to learn the context information by replacing tokens with masks and assign the model to predict the masked tokens 这些学生会游泳Standard BERT1234567Word: Position:这些学生会游泳These / students / can / swim.这些 / 学生 / 会 / 游泳DT / NN / VV / VVWord-Level BERT这些会学生游泳MarkBERT1234567891011Word: Position:(a)Input in Chinese:Translation in English: Word Segmentation: Pos Tagging:(b)(c)[s][s][s]这些Word: Position: 1 2 3 4学生游泳会[s] Figure 2: Illustration of the predicting tasks of Masked Language Modeling and Replaced Word Detection. Here, [S] is the inserted markers. based on the contexts around them using the self-attention transformers structure (Vaswani et al. 2017). In Chinese lan- guage model pre-training, the encoding unit is different from the widely used BPE encoding in English: Chinese pre- trained models are usually character-level and word level in- formation is typically neglected. MarkBERT Model To make better use of word-level information in Chinese pre-training, we introduce a simple framework called Mark- BERT. We insert markers between word spans to give ex- plicit boundary information for the model pre-training. As seen in Figure 1, we first use a segmentation tool to obtain word segmentations, then we insert special mark- ers between word spans as separators between characters. These markers are treated as normal characters so they take positions in the transformers structure. Plus, they can also be masked for the mask language modeling task to predict, therefore the encoding process needs to be aware of predict- ing word boundaries rather than simply filling in masks from the context. The mask prediction task becomes more chal- lenging since predicting the masks correctly requires a bet- ter understanding of the word boundaries. In this way, the model is still character-level encoded while it is aware of word boundaries since word-level information is given ex- plicitly. 
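As a minimal sketch of the marker-insertion step just described: the paper uses the TexSmart toolkit for word segmentation, so `jieba` below is only a stand-in segmenter, and `"[S]"` stands for the special marker (the paper notes that an unused token from the BERT vocabulary is reused for this purpose).

```python
import jieba  # stand-in for the TexSmart segmenter used in the paper

def insert_markers(text: str, marker: str = "[S]"):
    """Character-level tokens with a boundary marker between consecutive words."""
    tokens = []
    for word in jieba.lcut(text):
        tokens.extend(list(word))   # keep characters as the basic encoding units
        tokens.append(marker)       # the marker signals the end of a word
    return tokens[:-1]              # no marker after the final word

# For the segmentation 这些 / 学生 / 会 / 游泳 shown in Figure 1, this yields:
# ['这', '些', '[S]', '学', '生', '[S]', '会', '[S]', '游', '泳']
```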
Replaced Word Detection Inserting special markers allows the pre-trained model to recognize word boundaries while maintaining a character- level model. Further, these special markers can be used to construct a word-level pre-training task which can be com- plementary to the character-level masked language model- ing task. We construct a replaced word detection task as an aux- iliary task to the masked language modeling task. We con- struct a bipolar classification task that detects whether the word span is replaced by a confusion word. Specifically, given a word span, we take the representations of the marker after it and make binary prediction. When a word span is replaced by a confusion word, as seen in Figure 2, the marker is supposed to make a ”re- placed” prediction labeled as ”False”. When the word spans are not changed, the marker will make an ”unchanged” pre- diction labeled as ”True”. Therefore, suppose the represen- tation of the ith marker is xi with label ytrue and yf alse, the replaced word detection loss is: L = − (cid:88) i [y · log(xi)] (1) We add this loss term to the masked language modeling loss as a multi task training process. The construction of the confusions could be various. We adopt two simple strategies: (1) we use synonyms as confu- sions; (2) we use words that are similar in phonetics (pinyin) 这些学会生游泳MarkBERT-base[S][S][S][S]1235467891011Word:Position:这些学会生有用MarkBERT-base[S][S][S][S]1235467891011游泳有用wordpinyinyouyongwordLabel = TrueLabel = FalseReplaced Word Detection这些学会游泳MarkBERT-base1235467891011Word:Position:[S][S][S]生[S][MASK][MASK]Mask Language Modeling in Chinese. To obtain the synonyms, we use an external word embedding provided by Zhang and Yang (2018). We calculate the cosine similarity between words and use the most similar ones as the synonyms confusions. To obtain the phonetic-based confusions, as seen in Figure 2, we use an external tool to get the phonetics of the word and select a word that share the same phonetics as its confusions. In this way, the markers can be more sensitive to the word span in the context since these markers are assigned to dis- criminate the representation type of the word spans before them. This process is similar to an ELECTRA (Clark et al. 2020) framework. MarkBERT uses the inserted markers to run the discrimination process inside the encoder and use ex- ternal confusions instead of using another generator to build texts for the discriminator. Pre-Training The pre-training process is a multi task framework consist- ing of mask language modeling task and replaced word de- tection task. In the masked language modeling task, we employ both the masked language modeling strategy and the whole- word-masking strategy. In the replaced word detection task, as seen in Figure 2, when the word span is replaced by con- fusion words, the model is supposed to correct the confu- sions. This correction process is similar to MacBERT (Cui et al. 2020). For the confusion generation, we use synonyms and pinyin-based confusions. The synonyms are obtained by a synonym dictionary based on calculating the cosine sim- ilarity between the Chinese word-embeddings provided by Zhang and Yang (2018). In our MarkBERT pre-training, the mask ratio is still 15% of the total characters. For 30% of the time, we do not in- sert any markers so that the model can also be used in a no-marker setting which is the vanilla BERT-style model. 
For 50% of the time we run a whole-word-mask predic- tion and for the rest we run a traditional masked language model prediction. In the marker insertion, for 30% of the time, we replace the word span with a phonetic(pinyin)- based confusion or a synonym-based confusion word and the marker will predict a phonetic(pinyin)-confusion marker or a synonym-confusion marker; for the rest of the time, the marker will predict a normal-word marker. Therefore, we only calculate 15 % percent of loss on these normal markers to avoid imbalance labels of the marker learning process. During fine-tuning on downstream tasks, we use the markers in the input texts. Also, we can save the markers and downgrade the model to a vanilla BERT-style model for easier usage. Implementation Details in Pre-training Pre-training Dataset Usage We use a collection of raw Chinese texts containing Chinese wikipedia, Chinese nov- els, news. The entire data size is around 80B characters. We use a simple word segmentation tool Texsmart (Zhang et al. 2020) to tokenize the raw data and obtain pos-tags. We use the same data preprocess framework used in BERT (Devlin et al. 2018) which constructs documents containing multiple sentences with the length of the maximum token limit and randomly pick another document to train the next sentence prediction task. Pre-Training Settings We initialize our model from the Roberta whole-word-mask model checkpoint provided by Cui et al. (2019a). Therefore, we use the same character- level vocabulary in training our boundary-aware model. We use both whole-word-mask and normal character mask strategies in the language model training since we aim to learn inner connections between characters in the given word which cannot be achieved by whole-word-masking alone. We train the model with a maximum sequence length of 512 for the entire training time. With the markers inserted, the actual maximum sequence length is smaller but we main- tain the length as 512 to keep coordinated with previous pre- trained models. We use the ADAM optimizer (Kingma and Ba 2014) used in BERT with a batch size 8,192 on 64x Tesla V100 GPUs. We set the learning rate to 1e-4 with a linear warmup scheduler. We run the warmup process for 10k steps and train 100k steps in total. Experiments NER Task In the NER task, we use the MSRA (Levow 2006) and Ontonotes (Weischedel et al. 2013) datasets with the same data-split as in Ma et al. (2019) and Li et al. (2020). We establish several strong baselines to explore the effec- tiveness of our MarkBERT. In language understanding tasks, we compare with the RoBERTa-wwm-ext (Cui et al. 2019a) baseline, which is a whole-word-mask trained Chinese pre- trained models. We also further pre-train the RoBERTa model denoted as RoBERTa (ours) and the WoBERT model denoted as WoBERT (ours) based on our collected data which is the same data used in pre-training MarkBERT to make fair comparisons with our model. In the NER task, we compare with FLAT-BERT (Li et al. 2020) and Soft-Lexicon (Ma et al. 2019) which are state-of-the-art models on the NER task which incorporate lexicons in the transformers/L- STM structure. Language Understanding Task We also conduct experiments on language understanding tasks. We use various types of tasks from the CLUE bench- mark (Xu et al. 2020). We use classification tasks such as TNEWS, IFLYTEK; semantic similarity task (AFQMC); coreference resolution task(WSC); keyword recognition (CSL); natural language inference task (OCNLI). 
Besides the BERT-style baselines used in the NER task, we also use the word-level information enhanced models as baselines to make comparisons in the language understand- ing tasks. We use: - WoBERT (Su 2020): a word-level Chinese pre-trained model initialized from the BERT BASE pre-trained weights. It has a 60k expanded vocabulary containing commonly used Chinese words. - AMBERT (Zhang and Li 2020): a multi-granularity Chinese pre-trained model with two separated encoders for MSRA(Test) Acc. Recall F1 OntoNotes(Dev) F1 Acc. Recall OntoNotes(Test) F1 Acc. Recall BERT (Devlin et al. 2018) RoBERTa (Cui et al. 2019a) FLAT-BERT (Li et al. 2020) Soft-Lexicon (Ma et al. 2019) RoBERTa (ours) MarkBERT (ours) 94.9 95.3 - 95.8 95.7 96.1 94.1 94.9 - 95.1 94.8 96.0 94.5 95.1 96.1 95.4 95.2 96.1 74.8 76.8 - - 80.3 81.2 81.8 80.7 - - 76.4 81.4 78.2 78.7 - - 78.3 81.3 78.0 77.6 - 83.4 78.8 81.7 75.7 83.5 - 82.2 83.4 83.7 80.3 80.5 81.8 82.8 81.1 82.7 Table 1: NER results on the MSRA and OntoNotes dataset. words and characters. The encoding representation is the character-level representation concatenated by the word- level representation; - LICHEE (Guo et al. 2021): a multi-granularity Chinese pre-trained model that incorporates word and character rep- resentations at the embedding level. - Lattice-BERT (Lai et al. 2021): the state-of-the-art multi-granularity model that uses lexicons as word-level knowledge concatenated to the original input context. Downstream Task Implementations We use the FastNLP toolkit 2 to implement the NER exper- iment; We use the Huggingface Transformers (Wolf et al. 2020) to implement all experiments. For the NER task, we follow the implementation details given in the Transformers toolkit. 3 For the language under- standing tasks, we follow the implementation details used in the CLUE benchmark official website and the fine-tuning hyper-parameters used in Lattice-BERT (Lai et al. 2021). In the NER task, we use the marker-inserted inputs in the MarkBERT since we intend to incorporate the word bound- ary information in recognizing entities. We use the model with the best development performance to obtain the test set result. We make a thorough discussion on this topic in the later section. In the NER evaluation process, we label the inserted marker with the same label as its former token and follow the standard BMESO evaluation process used in Ma et al. (2019); Li et al. (2020). In the NLU tasks, we use the CLUE benchmark datasets to test our model. For the TNEWS task, we run the raw clas- sification results without using the keywords augmentation which is no longer a natural context. For the IFLYTEK task, we split the context and use the average of the split texts prediction since the average sequence exceeds the max se- quence length. We leave the experiment results ’-’ if they are not listed in the official website. 4 Results on NER Task In Table 1, our proposed boundary-aware MarkBERT out- performs all baseline models including pre-trained models and lexicon-enhanced models. 2https://github.com/fastnlp/fastNLP 3https://github.com/huggingface/transformers 4https://github.com/CLUEbenchmark/CLUE Compared with the baseline methods, our proposed Mark- BERT with markers inserted between words can lift perfor- mances by a large margin. We can observe that compared with the baseline method RoBERTa(ours) which uses word- level information by pretraining with the whole-word mask strategy, MarkBERT can significantly improve the perfor- mances in all datasets. 
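A minimal sketch of the label-alignment rule described in the implementation details above: when markers are inserted for NER fine-tuning, each marker simply inherits the label of the character before it, so the sequence stays consistent for BMESO-style evaluation. The function signature and inputs are illustrative assumptions, not the authors' code.

```python
def insert_markers_with_labels(chars, labels, word_ends, marker="[S]"):
    """chars/labels are character-level lists; word_ends[i] is True if position i ends a word."""
    out_tokens, out_labels = [], []
    for i, (ch, lab) in enumerate(zip(chars, labels)):
        out_tokens.append(ch)
        out_labels.append(lab)
        if word_ends[i] and i < len(chars) - 1:
            out_tokens.append(marker)
            out_labels.append(lab)   # the marker takes the same label as its former token
    return out_tokens, out_labels
```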
When we insert markers using the same tokenization process used in pre-training MarkBERT in fine-tuning the MarkBERT in the NER task, we obtain a considerable performance improvement, indicating that the inserted markers catch some important fine-grained in- formation that helps improve entity understanding. Further, when compared with previous state-of-the-art methods such as Soft-Lexicon (Ma et al. 2019) and FLAT (Li et al. 2020) which use a combination of lexicon-enhanced LSTMs/trans- formers and BERT, our model can also achieve similar per- formance while we do not incorporate any lexicon informa- tion which is essential in Chinese language. Therefore, we can conclude that MarkBERT can improve the NER task with a simple marker insertion strategy with- out complex lexicons therefore can be widely used in se- quence labeling tasks. Results on Language Understanding Table 2 shows that comparing with the RoBERTa model that uses the same pre-training data, MarkBERT is superior in all tasks. This indicates that the learned representations con- tain more useful information for the downstream task fine- tuning. The word-level model WoBERT (ours) trained with the same data used in MarkBERT only achieves a slightly higher accuracy in the IFLYTEK dataset which might be- cause the IFLYTEK dataset contains very long texts where word-level model is superior since it can process more con- texts while the total sequence lengths of character level and word level model are both 512. When comparing with previous works that focus on word- level information, MarkBERT achieves higher performances than the multi-grained encoding method AMBERT as well as LICHEE which incorporates word information as an ad- ditional embedding. We can assume that adding word-level information through horizontal markers is more effective than vertically concatenating word-level information. When comparing with the LatticeBERT model, our method can still reach a competitive level of performance, meanwhile the relative improvements of our model is larger than the improvements of the LatticeBERT model. Please note that TNEWS IFLYTEK AFQMC OCNLI WSC CSL Datasets DEVELOPMENT BERT (Devlin et al. 2018) RoBERTa (Cui et al. 2019a) RoBERTa (ours) WoBERT (ours) MarkBERT (ours) TEST BERT (Devlin et al. 2018) RoBERTa (Cui et al. 2019a) AMBERT (Zhang and Li 2020) LICHEE (Guo et al. 2021) BERT (Lai et al. 2021) Lattice-BERT (Lai et al. 2021) RoBERTa (ours) MarkBERT (ours) 56.09 57.51 57.95 57.01 58.40 56.58 56.94 - - - - 57.42 58.05 60.37 60.80 60.85 61.10 60.68 60.29 60.31 59.73 60.94 62.20 62.90 61.00 62.57 74.10 73.80 74.58 72.80 74.89 73.70 74.04 73.86 73.65 74.00 74.80 73.63 74.87 74.70 75.01 75.32 75.00 75.88 - - - - - - 72.67 73.06 79.22 82.20 84.02 82.72 84.60 62.00 67.80 78.27 81.03 79.30 82.40 79.86 81.72 81.02 81.22 81.85 - - 80.36 81.00 85.70 84.51 81.60 84.00 81.83 85.73 Table 2: Evaluation results on the language understanding tasks. MSRA Ontonotes Datasets TNEWS IFLYTEK AFQMC DEVELOPMENT MarkBERT MarkBERT-rwd-pho MarkBERT-rwd-syn MarkBERT-MLM MarkBERT-w/o marker RoBERTa (ours) F1 96.1 95.8 95.8 95.8 95.5 95.1 F1 82.7 81.7 81.7 81.3 79.2 78.2 Acc. 58.4 58.0 58.0 58.0 58.2 57.9 Acc. 60.6 60.8 60.9 60.7 61.0 60.8 Acc. 74.8 74.3 74.5 74.6 74.5 74.5 Table 3: Ablation Studies on the NER and the language understanding tasks using dev set results. 
the lexicons used in LatticeBERT training actually contains more segmentation possibilities which can significantly in- crease the downstream task performance over the word seg- mentation based methods (Zhang and Yang 2018). The ba- sic idea of incorporating lexicons is parallel with the marker insertion framework. MarkBERT makes use of word-level information in a different perspective. Model Analysis In this section, we conduct ablation experiments to explore the effectiveness of each parts in our MarkBERT framework in different tasks. We test different variants of MarkBERT: - MarkBERT-MLM only considers the MLM task with- out the replaced word detection task; the masked language model will predict masked tokens as well as inserted mark- ers. - MarkBERT-rwd is a version that removes phonetics words or synonyms separately in the replaced word detec- tion process. - MarkBERT-w/o marker is a version that removed mark- ers which is the same as the vanilla BERT model. MarkBERT-MLM without RWD To explore which parts in MarkBERT is more effective, we conduct an exper- iment as seen in Table 3. We only use the masked language modeling task while inserting markers without using the re- placed word detection task. The model only considers in- serted markers and masked language modeling tasks, while the markers will be masked and predicted as well. As seen, the MarkBERT -MLM model gains significant boost in the NER task, indicating that word boundary infor- mation is important in the fine-grained task. In the CLUE benchmark, the situation becomes different: in the IFLYTEK task, inserting markers will hurt the model performance which is because the sequence length exceeds the maximum length of the pre-trained model. Therefore, inserting markers will results in a lost of contexts. Gener- ally, inserting markers is important in downstream task fine- tuning. The explicit word boundary information helps Mark- BERT learn better contextualized representations. Replaced Word Detection We also test the effectiveness of the additional replaced word detection task. Specifically, we separate two confusion strategies and use phonetics and synonyms confusions solely. (a) (b) (c) (d) Figure 3: Visualization of attentions of the markers selected from a random layer. We use [unused1] in the BERT vo- cabulary as the inserted marker. As seen in Table 3, when the marker learning only in- cludes phonetic (pinyin) confusions, the performances in the fine-tuning tasks are similar with the MarkBERT -MLM model, indicating that the phonetic confusions have a slight improvement based on the inserted markers. When the word spans are replaced by synonyms only, the performances are slightly lower than using both phonetic and synonym con- fusions, indicating that augmentation using various types of confusions is helpful. MarkBERT -w/o marker Inserting markers is the key idea of solving the character and word dilemma in Chinese encoding. In the NER task, inserting markers is important, indicating that MarkBERT structure is effective in learning word boundaries for tasks that requires such fine-grained representations. In the NLU tasks, without inserting mark- ers, MarkBERT-w/o marker can still achieve similar perfor- mances with the baseline methods, indicating that Mark- Figure 4: Results on different MarkBERT versions. BERT can also be used as a vanilla BERT model for easy usage in language understanding tasks. 
Visualization of Marker Attentions To further explore how the markers work in the encoding process, we use the attention visualization tool to show the attention weights of the inserted markers. We explore the attention weights on the pre-trained MarkBERT and the fine-tuned model based on the Ontonotes NER task. As seen in Figure 3, in some heads of the representations of the inserted markers, the attentions focus on the local semantics (e.g. in Fig. 3 (a), the marker is attended to ’二’ (second) and ’月’(month) in the head col- ored with purple and orange, indicating that the marker learn the context of the word ’二月’ (Feburary). Further, the spe- cial tokens are the mostly focused as seen in Fig. 3 (d). Influence of Different Sementation Tools in MarkBERT The quality of the pre-processed segmentation results may play a vital role, therefore, we use a different version of segmentation in the Texsmart toolkit (Zhang et al. 2020) where the segmentations are more fine-grained to train a MarkBERT-seg-v2 model as a comparison. As seen in figure 4, segmentation quality is trivial to MarkBERT. The performances of MarkBERT (seg-v1) is similar to a variant MarkBERT-seg-v2 using a different seg- mentation tool, which indicates that the training framework helps rather than the information from an external segmen- tation tool. Combined with results in Table 3, we can conclude that introducing segmentation tools and use mark-style encoding is important while the quality of the segmentation is trivial. Conclusion and Future Work In this paper, we have introduced MarkBERT, a simple framework for Chinese language model pre-training. We in- sert special markers between word spans in the character- level encodings in pre-training and fine-tuning to make use of word-level information in Chinese. We test our proposed model on the NER tasks as well as natural language under- standing tasks. Experiments show that MarkBERT makes significant improvements over baseline models. In the fu- ture, we are hoping to incorporate more information to the markers based on the simple structure of MarkBERT. 5062.57587.5100MSRAOntonotesTNewsIFLYTEKAFQMC74.760.958.382.696.074.660.658.482.896.1MarkBERT-seg-v1MarkBERT-seg-v2 References Chang, P.-C.; Galley, M.; and Manning, C. D. 2008. Opti- mizing Chinese word segmentation for machine translation performance. In Proceedings of the third workshop on sta- tistical machine translation, 224–232. Clark, K.; Luong, M.-T.; Le, Q. V.; and Manning, C. D. 2020. Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555. Cui, Y.; Che, W.; Liu, T.; Qin, B.; Wang, S.; and Hu, G. 2020. Revisiting Pre-Trained Models for Chinese Natural Language Processing. In Proceedings of the 2020 Confer- ence on Empirical Methods in Natural Language Process- ing: Findings, 657–668. Online: Association for Computa- tional Linguistics. Cui, Y.; Che, W.; Liu, T.; Qin, B.; Yang, Z.; Wang, S.; and Hu, G. 2019a. Pre-training with whole word masking for chinese bert. arXiv preprint arXiv:1906.08101. Cui, Y.; Liu, T.; Che, W.; Xiao, L.; Chen, Z.; Ma, W.; Wang, S.; and Hu, G. 2019b. A Span-Extraction Dataset for Chi- nese Machine Reading Comprehension. In Proceedings of the 2019 Conference on Empirical Methods in Natural Lan- guage Processing and the 9th International Joint Confer- ence on Natural Language Processing (EMNLP-IJCNLP), 5886–5891. Hong Kong, China: Association for Computa- tional Linguistics. Devlin, J.; Chang, M.; Lee, K.; and Toutanova, K. 2018. 
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. CoRR, abs/1810.04805. Guo, W.; Zhao, M.; Zhang, L.; Niu, D.; Luo, J.; Liu, Z.; Li, Z.; and Tang, J. 2021. LICHEE: Improving Language Model In FIND- Pre-training with Multi-grained Tokenization. INGS. Huang, Z.; Xu, W.; and Yu, K. 2015. Bidirectional LSTM-CRF models for sequence tagging. arXiv preprint arXiv:1508.01991. Kingma, D. P.; and Ba, J. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. Lai, Y.; Liu, Y.; Feng, Y.; Huang, S.; and Zhao, D. 2021. Lattice-BERT: Leveraging Multi-Granularity Representa- arXiv tions in Chinese Pre-trained Language Models. preprint arXiv:2104.07204. Levow, G.-A. 2006. The Third International Chinese Lan- guage Processing Bakeoff: Word Segmentation and Named In Proceedings of the Fifth SIGHAN Entity Recognition. Workshop on Chinese Language Processing, 108–117. Syd- ney, Australia: Association for Computational Linguistics. Li, X.; Yan, H.; Qiu, X.; and Huang, X. 2020. FLAT: Chinese NER using flat-lattice transformer. arXiv preprint arXiv:2004.11795. Ma, R.; Peng, M.; Zhang, Q.; and Huang, X. 2019. Sim- plify the usage of lexicon in Chinese NER. arXiv preprint arXiv:1908.05969. Soares, L. B.; FitzGerald, N.; Ling, J.; and Kwiatkowski, T. 2019. Matching the blanks: Distributional similarity for relation learning. arXiv preprint arXiv:1906.03158. Su, J. 2020. WoBERT: Word-based Chinese BERT model - ZhuiyiAI. Technical report. Sun, Y.; Wang, S.; Feng, S.; Ding, S.; Pang, C.; Shang, J.; Liu, J.; Chen, X.; Zhao, Y.; Lu, Y.; Liu, W.; Wu, Z.; Gong, W.; Liang, J.; Shang, Z.; Sun, P.; Liu, W.; Ouyang, X.; Yu, D.; Tian, H.; Wu, H.; and Wang, H. 2021a. ERNIE 3.0: Large-scale Knowledge Enhanced Pre-training for Lan- guage Understanding and Generation. arXiv:2107.02137. Sun, Y.; Wang, S.; Li, Y.; Feng, S.; Chen, X.; Zhang, H.; Tian, X.; Zhu, D.; Tian, H.; and Wu, H. 2019. Ernie: En- hanced representation through knowledge integration. arXiv preprint arXiv:1904.09223. Sun, Z.; Li, X.; Sun, X.; Meng, Y.; Ao, X.; He, Q.; Wu, F.; and Li, J. 2021b. ChineseBERT: Chinese Pretraining En- hanced by Glyph and Pinyin Information. arXiv preprint arXiv:2106.16038. Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A. N.; Kaiser, Ł.; and Polosukhin, I. 2017. At- tention is all you need. In Advances in neural information processing systems, 5998–6008. Weischedel, R.; Palmer, M.; Marcus, M.; Hovy, E.; Pradhan, S.; Ramshaw, L.; Xue, N.; Taylor, A.; Kaufman, J.; Fran- chini, M.; et al. 2013. Ontonotes release 5.0 ldc2013t19. Linguistic Data Consortium, Philadelphia, PA, 23. Wolf, T.; Debut, L.; Sanh, V.; Chaumond, J.; Delangue, C.; Moi, A.; Cistac, P.; Rault, T.; Louf, R.; Funtowicz, M.; Davi- son, J.; Shleifer, S.; von Platen, P.; Ma, C.; Jernite, Y.; Plu, J.; Xu, C.; Scao, T. L.; Gugger, S.; Drame, M.; Lhoest, Q.; and Rush, A. M. 2020. Transformers: State-of-the-Art Natural Language Processing. In Proceedings of the 2020 Confer- ence on Empirical Methods in Natural Language Process- ing: System Demonstrations, 38–45. Online: Association for Computational Linguistics. Xu, L.; Hu, H.; Zhang, X.; Li, L.; Cao, C.; Li, Y.; Xu, Y.; Sun, K.; Yu, D.; Yu, C.; et al. 2020. Clue: A chinese lan- guage understanding evaluation benchmark. arXiv preprint arXiv:2004.05986. Zhang, H.; Liu, L.; Jiang, H.; Li, Y.; Zhao, E.; Xu, K.; Song, L.; Zheng, S.; Zhou, B.; Zhu, J.; Feng, X.; Chen, T.; Yang, T.; Yu, D.; Zhang, F.; Kang, Z.; and Shi, S. 2020. 
TexSmart: A Text Understanding System for Fine-Grained NER and Enhanced Semantic Analysis. arXiv preprint arXiv:2012.15639. Zhang, X.; and Li, H. 2020. AMBERT: A Pre-trained Language Model with Multi-Grained Tokenization. arXiv preprint arXiv:2008.11869. Zhang, Y.; and Yang, J. 2018. Chinese NER using lattice LSTM. arXiv preprint arXiv:1805.02023. Zhong, Z.; and Chen, D. 2020. A Frustratingly Easy Approach for Entity and Relation Extraction. arXiv preprint arXiv:2010.12812.
synthetic_cpt
2
Style-Content_Disentanglement_in_Language-Image_Pretraining_Representations_for_Zero-Shot_Sketch-to-Image_Synthesis.pdf
A Unified Framework for Generalizable Style Transfer: Style and Content Separation Yexun Zhang, Student Member, IEEE, Ya Zhang, Member, IEEE, and Wenbin Cai, Member, IEEE 1 8 1 0 2 n u J 3 1 ] V C . s c [ 1 v 3 7 1 5 0 . 6 0 8 1 : v i X r a Abstract—Image style transfer has drawn broad attention in recent years. However, most existing methods aim to explicitly model the transformation between different styles, and the learned model is thus not generalizable to new styles. We here propose a unified style transfer framework for both character typeface transfer and neural style transfer tasks leveraging style and content separation. A key merit of such framework is its generalizability to new styles and contents. The overall framework consists of style encoder, content encoder, mixer and decoder. The style encoder and content encoder are used to extract the style and content representations from the corre- sponding reference images. The mixer integrates the above two representations and feeds it into the decoder to generate images with the target style and content. During training, the encoder networks learn to extract styles and contents from limited size of style/content reference images. This learning framework allows simultaneous style transfer among multiple styles and can be deemed as a special ‘multi-task’ learning scenario. The encoders are expected to capture the underlying features for different styles and contents which is generalizable to new styles and contents. Under this framework, we design two individual networks for character typeface transfer and neural style transfer, respectively. For character typeface transfer, to separate the style features and content features, we leverage the conditional dependence of styles and contents given an image. For neural style transfer, we leverage the statistical information of feature maps in certain layers to represent style. Extensive experimental results have demonstrated the effectiveness and robustness of the proposed methods. Index Terms—Style and Content Separation, Character Type- face Transfer, Neural Style Transfer I. INTRODUCTION I N recent years, style transfer, as an interesting application of deep neural networks (DNNs), has attracted increasing attention among the research community. Based on the type of styles, style transfer may be partitioned into two types of applications, character typeface transfer which transfers a character from a font to another, and neural style transfer which aims to transform a neural image into a given art style. Character typeface transfer usually involves changes in high- frequency features such as the object shape and outline, which makes character typeface transfer a more difficult task than neural style transfer. Moreover, the characters are associated with clear semantic meaning and incorrect transformation may lead to non-sense characters. Different from character typeface transfer, neural style transfer is mostly about the transfer of Yexun Zhang and Ya Zhang are with the Cooperative Medianet Innovation Center, Shanghai Jiao Tong University, Shanghai, China, 200240. E-mail: [email protected], ya [email protected] Wenbin Cai is with Microsoft, Beijing, China, 10010. E-mail: [email protected] texture, where the source and target images usually share high- frequency features such as object shape and outline, namely the contents are kept visually unchanged. 
Earliest studies about character typeface transfer are usually based on manually extracted features such as radicals and strokes [18], [36], [38], [40]. Recently, some studies try to automatically learn the transformation based on DNNs, and model character typeface transfer as an image-to-image trans- lation problem. Typically, dedicated models are built for each source and target style pair [1], [23], making the models hardly generalizable to new styles, i.e., additional models have to be trained for new styles. To achieve typeface transfer without retraining, a multi-content generative adversarial networks (GAN) which transfers the font of English characters given a few characters in target styles is proposed [4]. Earliest studies for neural style transfer usually adopt an iterative optimization mechanism to generate images with target style and content from noise images [11]. Due to its time inefficiency, a feed-forward generator network is proposed for this purpose [15], [31]. A set of losses are proposed for the transfer network, such as pixel-wise loss [13], perceptual loss [15], [37], and histogram loss [34]. Recently, variations of GANs [21], [41] are introduced by adding a discriminator to the transfer network which incorporates adversarial loss with transfer loss to generate better images. However, these studies aim to explicitly learn the transformation from a content image to the image with a specific style, and the learned model is thus not generalizable to new styles. So far, there is still limited work for arbitrary neural style transfer [8], [12], [16]. In this paper, based on our previous work [39], we propose a unified style transfer framework for both character typeface transfer and neural style transfer, which enables the transfer models generalizable well to new styles or contents. Different from existing style transfer methods, where an individual transfer network is built for each pair of style transfer, the proposed framework represents each style or content with a small set of reference images and attempts to learn separate representations for styles and contents. Then, to generate an image of a given style-content combination is simply to mix the corresponding two representations. This learning framework allows simultaneous style transfer among multiple styles and can be deemed as a special ‘multi-task’ learning scenario. Through separated style and content representations, the framework is able to generate images of all style-content combination given the corresponding reference sets, and is therefore expected to generalize well to new styles and contents. To our best knowledge, the study most resembles to ours is the bilinear model proposed by Tenenbaum and 2 TABLE I COMPARISON OF EMD WITH EXISTING METHODS. Methods Pix2pix [13] CoGAN [21] CycleGAN [41] Rewrite [1] Zi-to-zi [2] AEGN [23] Perceptual [15] TextureNet [32] StyleBank [7] Patch-based [8] AdaIn [12] Universal [16] EMD Data format paired unpaired unpaired paired paired paired unpaired unpaired unpaired unpaired unpaired unpaired triplet/unpaired Generalizable to new styles? Requirements for new style What the model learned? The learned model can only transfer images to styles which appeared in the training set. For new styles, the model has to be retrained. The learned model can be generalized to new styles. Retrain on a lot of training images for a source style and a target style. Retrain on many input content images and one style image. One or a small set of style/content reference images. 
The translation from a certain source style to a specific target style. Transformation among specific styles. The swap of style/content feature maps. The transferring of feature statistics. It is based on whitening and coloring transformations. The feature representation of style/content. to the difficulty of obtaining images of the same content or style, only one style and content reference image is used as input (namely r=1). Extensive experimental results have demonstrated the effectiveness and robustness of our method for style transfer. The main contributions of our study are summarized as follows. • We propose a unified style transfer framework for both typeface transfer and neural style transfer, character which learns separate style and content representations. • The framework enables the transfer models generalizable to any unseen style/content given a few reference images. • Under this framework, we design two individual networks for character typeface transfer and neural style transfer, respectively, which have shown promising results in ex- perimental validation. • This learning framework allows simultaneous style trans- fer among multiple styles and can be deemed as a special ‘multi-task’ learning scenario. II. RELATED WORK Neural Style Transfer. DeepDream [25] may be considered as the first attempt to generate artistic work using Convolution Neural Networks (CNNs). Gatys et. al later successfully applied CNNs to neural style transfer [11]. The target im- ages are generated by iteratively optimizing a noise image through a pre-trained network, which is time-consuming. To directly learn a feed-forward generator network for neural style transfer, the perceptual loss is proposed [15]. Ulyanov et. al proposed a texture network for both texture synthesis and style transfer [31]. Further, Chen et. al proposed the stylebank to represent each style by a convolution filter, which can simultaneously learn numerous styles [7]. For arbitrary neural style transfer, [8] proposed a patch-based method to replace each content feature patch with the nearest style feature. Further, [12] proposed a faster method based on adaptive instance normalization which performed style transfer in the feature space by transferring feature statistics. Li et. al [16] proposed a universal style transfer model which is based on the whitening and coloring transforms but this model is not effective at producing sharp details and fine strokes. Image-to-Image Translation. Image-to-image translation is to learn the mapping from the input image to output image, Fig. 1. The framework of the proposed EMD model. Freeman [30], which obtained independent style and content representations through matrix decomposition. However, to obtain accurate decomposition of new styles and contents, the bilinear model requires an exhaustive enumeration of examples which may not be readily available for some styles/contents. As shown in Figure 1, the proposed style transfer frame- work, denoted as EMD thereafter, consists of a style encoder, a content encoder, a mixer, and a decoder. Given one or a set of reference images, the style encoder and content encoder are used to extract the style and content factors from the style reference images and content reference images, respectively. The mixer then combines the corresponding style and con- tent representations. Finally, the decoder generates the target images based on the combined representations. 
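The pipeline just described can be summarized in a few lines of code. The sketch below is schematic: it only fixes the data flow of Figure 1, and the concrete encoders, mixer, and decoder (which are instantiated differently later for character typeface transfer and neural style transfer) are passed in as placeholders, so all names and signatures here are illustrative rather than taken from the paper.

```python
from typing import Any, Callable, Sequence

Image = Any  # stand-in for an image tensor/array


class EMD:
    """Schematic composition of the four EMD sub-networks (cf. Fig. 1)."""

    def __init__(self,
                 style_encoder: Callable[[Sequence[Image]], Any],
                 content_encoder: Callable[[Sequence[Image]], Any],
                 mixer: Callable[[Any, Any], Any],
                 decoder: Callable[[Any], Image]):
        self.style_encoder = style_encoder
        self.content_encoder = content_encoder
        self.mixer = mixer
        self.decoder = decoder

    def transfer(self,
                 style_refs: Sequence[Image],    # R_Si: r images sharing style S_i
                 content_refs: Sequence[Image],  # R_Cj: r images sharing content C_j
                 ) -> Image:
        style_repr = self.style_encoder(style_refs)        # style factor S_i
        content_repr = self.content_encoder(content_refs)  # content factor C_j
        mixed = self.mixer(style_repr, content_repr)       # combined representation
        return self.decoder(mixed)                         # image with style S_i, content C_j

# Usage: EMD(style_enc, content_enc, mixer, decoder).transfer(R_Si, R_Cj)
```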
Under this framework, we design two individual networks for character typeface transfer and neural style transfer, respectively. For character typeface transfer, to separate the style features and content features, we leverage the conditional dependence of styles and contents given an image and employ a bilinear model to mix the two factors. For neural style transfer, we leverage the prior knowledge that the statistical information of feature maps in certain layers can represent style information and mix the two factors through statistic matching. During training, each training example for the proposed network is provided as a style-content pair <RSi, RCj >, where RSi and RCj are the style and content reference sets respectively, each consisting of r images of the corresponding style Si and content Cj. For character typeface transfer, the entire network is trained end-to-end with a weighted L1 loss measuring the difference between the generated images and the target images. For neural style transfer, due to the absence of target images for supervision, we calculate the content loss and style loss respectively by comparing the feature maps of generated images with those of style/content reference image. Therefore, neural style transfer is unsupervised. Moreover, due MixerStyleEncoderContent EncoderDecoder……Content Reference SetStyle Reference SetOutput 3 Fig. 2. The detailed architecture of the proposed generalized EMD model for character typeface transfer. such as from edges to real objects. Pix2pix [13] used a conditional GAN based network which requires paired data for training. However, paired data are hard to collect in many applications. Therefore, methods requiring non-paired data are explored. Liu and Tuzel proposed the coupled GAN (Co- GAN) [21] to learn a joint distribution of two domains through weight sharing. Later, Liu [20] extended the CoGAN to unsu- pervised image-to-image translation. Some other studies [5], [28], [29] encourage the input and output to share certain content even though they may differ in style by enforcing the output to be close to the input in a predefined metric space such as class label space. Recently, Zhu et. al proposed the cycle-consistent adversarial network (CycleGAN) [41] which performs well for many vision and graphics tasks. Character Typeface Transfer. Most existing studies model character typeface transfer as an image translation task. The “Rewrite” project uses a simple top-down CNNs structure and transfers a typographic font to another stylized typographic font [1]. As the improvement version, the “zi-to-zi” project can transfer multiple styles by assigning each style an one-hot cat- egory label and training the network in a supervised way [2]. The recent work “From A to Z” also adopts a supervised method and assigns each character an one-hot label [33]. Lyu et. al proposed an auto-encoder guided GAN network (AEGN) which can synthesize calligraphy images with specified style from standard Chinese font images [23]. [4] proposed a multi- content GAN which could achieve typeface transfer on English characters with a few examples of target style. However, existing work usually studies character typeface transfer and neural style transfer individually, while the pro- posed EMD provides a unified framework which is applicable to both tasks. In addition, most of the methods reviewed above can only transfer styles in the training set and the network must be retrained for new styles. 
In contrast, the proposed EMD framework can generate images with new styles/contents given only a few of reference images. We present a comparison of the methods in Table I. III. GENERALIZED STYLE TRANSFER FRAMEWORK The generalized style transfer framework EMD is an encoder-decoder network which consists of four subnets: Style Encoder, Content Encoder, Mixer and Decoder, as shown in Figure 1. First, the Style/Content Encoder extracts style/content representations given style/content reference im- ages. Next, the Mixer integrates the style feature and content feature, and the combined feature is then fed into the Decoder. Finally, the Decoder generates the image with the target style and content. The input of the Style Encoder and Content Encoder are style reference set RSi and content reference set RCj , re- spectively. RSi consists of r reference images with the same style Si but different contents Cj1, Cj2, . . . , Cjr RSi = {Iij1, Iij2, . . . , Iijr }, (1) where Iij represents the image with style Si and content Cj. For example, in character typeface transfer tasks, RSi contains r images with the same font Si such as serif, sanserif, and blackletter, but different characters. Similarly, RCj is for content Cj (j = 1, 2, . . . , m) which consists of r reference images of the same character Cj but in different styles Si1, Si2, . . . , Sir RCj = {Ii1j, Ii2j, . . . , Iirj}. (2) The whole framework is trained end-to-end by trying to finish a series of tasks: generate images with target style and content given the style and content reference images. By such a way, we expect the framework to summarize from these similar tasks and learn to extract style and content representations, and then transfer this ability to new styles and contents. It is worth noting that the proposed EMD learning frame- work is quite flexible and the Style Encoder, Content Encoder, Mixer, and Decoder can be tailored based on specific tasks. In the rest of the section, under this framework, we demonstrate with two individual networks for character typeface transfer and neural style transfer, respectively. IV. CHARACTER TYPEFACE TRANSFER The detailed network architecture employed for character typeface transfer is shown in Figure 2. W𝐶𝐶𝑗𝑗𝑆𝑆𝑖𝑖𝑆𝑆𝑖𝑖W𝐶𝐶𝑗𝑗1×𝐵𝐵1×𝐾𝐾1×𝑅𝑅𝑅𝑅×𝐾𝐾×𝐵𝐵Style Reference SetContent Reference SetOutput……6412825651251251251251264128256512512512512512Content EncoderStyle EncoderMixer64128256512512512512Decoder512𝑅𝑅𝑠𝑠𝑖𝑖𝑅𝑅𝑐𝑐𝑗𝑗Down-samplingUp-samplingContent RepresentationStyleRepresentation…Skip-connection……Channel ConcatChannel Concat A. Encoder Network The two encoder networks used for character typeface transfer have the same architecture and consist of a se- ries of Convolution-BatchNorm-LeakyReLU down-sampling blocks which yield 1×1 feature representations of the input style/content reference images. The first convolution layer is with 5×5 kernel and stride 1 and the rest are with 3×3 kernel and stride 2. All ReLUs are leaky, with slope 0.2. The r input reference images are concatenated in the channel dimension to feed into the encoders. This allows the encoders to capture the common characteristics among images of the same style/content. B. Mixer Network Given the style representations and content representations obtained by the Style Encoder and Content Encoder, we em- ploy a bilinear model as the Mixer to combine the two factors. The bilinear models are two-factor models with the mathematical property of separability: their outputs are linear in either factor when the other is held constant. 
It has been demonstrated that the influences of the two factors can be efficiently separated and combined in a flexible representation that can be naturally generalized to unfamiliar factor classes such as new styles [30]. Furthermore, the bilinear model has also been successfully used in zero-shot learning as a compatibility function to associate visual representations and auxiliary class text descriptions [6], [10], [35]. The learned compatibility function can be seen as shared knowledge and transferred to new classes. Here, we take the bilinear model to integrate styles and contents together, which is formulated as

F_{ij} = S_i W C_j,   (3)

where W is a tensor with size R × K × B, S_i is the R-dimensional style feature and C_j is the B-dimensional content feature. F_{ij} can be seen as the K-dimensional feature vector of image I_{ij}, which is further taken as the input of the Decoder to generate the image with style S_i and content C_j.

C. Decoder Network

The image generator is a typical decoder network which is symmetrical to the encoder and maps the combined feature representation to output images with the target style and content. The Decoder roughly follows the architectural guidelines set forth by Radford et al. [26] and consists of a series of Deconvolution-BatchNorm-ReLU up-sampling blocks, except that the last layer is a plain deconvolution layer. Other than the last layer, which uses 5×5 kernels and stride 1, all deconvolution layers use 3×3 kernels and stride 2. The outputs are finally transformed into [0,1] by the sigmoid function.
In addition, because the stride convolution in the Style Encoder and Content Encoder is detrimental to the extraction of spatial information, we adopt the skip-connection, which has been commonly used in semantic segmentation tasks [14], [22], [27] to refine the segmentation using spatial information from different resolutions. Although the content inputs and outputs differ in appearance, they share the same structure. Hence, we concatenate the input feature map of each up-sampling block with the corresponding output of the symmetrical down-sampling block in the Content Encoder to allow the Decoder to learn back the relevant structure information lost during the down-sampling process.

D. Loss Function

For character typeface transfer tasks, it is possible to obtain a reasonable set of target images. Therefore, we leverage the target images to train the network. Given a training set D_t, the training objective is defined as

\theta = \arg\min_{\theta} \sum_{I_{ij} \in D_t} L(\hat{I}_{ij}, I_{ij} \mid R_{S_i}, R_{C_j}; \theta),   (4)

where \theta represents the model parameters, \hat{I}_{ij} is the generated image and L(\hat{I}_{ij}, I_{ij} \mid R_{S_i}, R_{C_j}; \theta) is the generation loss, which is formulated as

L(\hat{I}_{ij}, I_{ij} \mid R_{S_i}, R_{C_j}; \theta) = W^{ij}_{st} \times W^{ij}_{d} \times \| \hat{I}_{ij} - I_{ij} \|.   (5)

The pixel-wise L1 loss is employed as the generation loss for the character typeface transfer problem rather than the L2 loss, because the L1 loss tends to yield sharper and cleaner images [13], [23]. In each learning iteration, the size, thickness, and darkness of the characters in the target set may vary significantly. Due to the way the loss is defined, the model tends to optimize for characters with more pixels, i.e., big and thick characters. Moreover, models trained using the L1 loss tend to pay more attention to darker characters and perform poorly on lighter characters. To alleviate the above imbalance, we add two weights to the generation loss: W^{ij}_{st} about the size and thickness of characters, and W^{ij}_{d} about the darkness of characters.
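As a concrete illustration of the bilinear Mixer in Eq. (3) above, the snippet below combines an R-dimensional style vector and a B-dimensional content vector through an R × K × B tensor with a single einsum. It is only a sketch: the dimension of 512 and the random initialization are illustrative choices (the paper states only that R = B = K in its implementation), and in practice W would be a trainable parameter of the network.

```python
import numpy as np

R, K, B = 512, 512, 512  # the paper sets R = B = K; 512 is an illustrative value
rng = np.random.default_rng(0)

W = rng.normal(scale=0.01, size=(R, K, B))  # bilinear tensor (learnable in training)

def bilinear_mix(style_vec, content_vec, W):
    """Eq. (3): F_ij = S_i W C_j, giving a K-dimensional combined feature.

    einsum contracts the style vector against the first axis of W and the
    content vector against the last axis, leaving the K axis for the Decoder.
    """
    return np.einsum("r,rkb,b->k", style_vec, W, content_vec)

s_i = rng.normal(size=R)              # style representation from the Style Encoder
c_j = rng.normal(size=B)              # content representation from the Content Encoder
f_ij = bilinear_mix(s_i, c_j, W)      # combined feature fed to the Decoder
assert f_ij.shape == (K,)
```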
As for W^{ij}_{st}, we first calculate the number of black pixels, i.e., pixels whose values are less than 0.5 after being normalized into [0,1]. Then W^{ij}_{st} is defined as the reciprocal of the number of black pixels in each target image:

W^{ij}_{st} = 1 / N^{ij}_{b},   (6)

where N^{ij}_{b} is the number of black pixels of the target image I_{ij}. As for W^{ij}_{d}, we calculate the mean value of the black pixels for each target image and set a softmax weight

W^{ij}_{d} = \frac{\exp(mean_{ij})}{\sum_{I_{ij} \in D_t} \exp(mean_{ij})},   (7)

where mean_{ij} is the mean value of the black pixels of the target image I_{ij}.

V. NEURAL STYLE TRANSFER

We further apply the EMD framework to neural style transfer. Due to the difficulty of finding neural images with the same style or content, the input to the Style Encoder and Content Encoder is one image. For simplicity, we denote the style image Isty and the content image Icon. Many existing neural style transfer methods employ the Gram matrix to represent styles [11], [15], and style transfer is achieved by matching the Gram matrix of the generated images with that of the style images.

Fig. 3. The detailed architecture of the proposed generalized EMD model for neural style transfer.

It has been theoretically proved that if we consider the activation at each position of the feature maps as individual samples, then matching the Gram matrix can be reformulated as minimizing the Maximum Mean Discrepancy (MMD) [17]. Therefore, neural style transfer can be seen as distribution alignment from the content image to the style image [17]. Based on the above foundation, the Conditional Instance Normalization (CIN) method proposes to learn a set of affine parameters (\gamma_s and \beta_s) for each style and transfers style with an affine transformation [9]:

\hat{F} = \frac{F_{con} - \mu(F_{con})}{\sigma(F_{con})} \gamma_s + \beta_s,   (8)

where F_{con} are the feature maps of the content reference image, and \mu(F_{con}) and \sigma(F_{con}) are the mean and standard deviation of F_{con} across the spatial axes. Despite its promising performance, this method is restricted to the styles in the training set. To solve this problem, [12] designed an Adaptive Instance Normalization (AdaIN) layer where the affine parameters are directly calculated from the style feature maps of a certain layer in a pre-trained VGG-19, namely \gamma_s = \sigma(F_{sty}) and \beta_s = \mu(F_{sty}). But this is not as accurate as CIN because the calculated affine parameters are in fact estimates of the real statistics. Borrowing ideas from the above two studies, our method learns the affine parameters from the style image by the Style Encoder, which is both flexible and accurate.

A. Network Architecture

For neural style transfer, the Style Encoder consists of a stack of Convolution Blocks and Residual Blocks, a Global Pooling layer and a Fully-Connected layer. Each Convolution Block <ConvBlock,k,s,c> is composed of a convolution layer with kernel size k, stride s and filter number c, and a LeakyReLU layer with slope 0.2. Each Residual Block <ResBlock,k,c> consists of two convolution blocks <ConvBlock,k,1,c>. Then the Global Pooling layer (here we use Global Average Pooling) produces a feature map of size 1 × 1. The final Fully-Connected layer <FC,c> is used to generate the c-dimensional statistic vectors (mean and standard deviation). For the Content Encoder, we use three Convolution Blocks followed by four Residual Blocks. The detailed network architecture is displayed in Figure 3. Through the Content Encoder, we obtain the feature maps Fcon of the content reference image Icon.
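Below is a small NumPy sketch of this learned statistic matching, in which the mean and standard deviation predicted by the Style Encoder play the role of the affine parameters in Eq. (8): the content feature map is normalized per channel and then rescaled and shifted by the predicted statistics. It is illustrative only — the channels-first (C, H, W) layout, the epsilon term, and the toy shapes are assumptions, not details from the paper.

```python
import numpy as np

def statistic_match(f_con, mu_sty, sigma_sty, eps=1e-5):
    """Align content features to learned style statistics (cf. Eq. (8)).

    f_con:     content feature maps, shape (C, H, W)  -- layout assumed here
    mu_sty:    per-channel means predicted by the Style Encoder, shape (C,)
    sigma_sty: per-channel std devs predicted by the Style Encoder, shape (C,)
    """
    mu_con = f_con.mean(axis=(1, 2), keepdims=True)          # per-channel content mean
    sigma_con = f_con.std(axis=(1, 2), keepdims=True) + eps   # per-channel content std
    normalized = (f_con - mu_con) / sigma_con                 # instance-normalized content
    return normalized * sigma_sty[:, None, None] + mu_sty[:, None, None]

# toy example
rng = np.random.default_rng(0)
f_con = rng.normal(size=(256, 64, 64))        # Content Encoder output
mu_sty = rng.normal(size=256)                 # Style Encoder output (mean)
sigma_sty = rng.uniform(0.5, 1.5, size=256)   # Style Encoder output (std)
mixed = statistic_match(f_con, mu_sty, sigma_sty)  # fed to the Decoder
```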
In addition, the distribution statistics of the style reference image Isty are learned by the Style Encoder, and we denote the mean by µsty and the standard deviation by σsty. Then, based on the foundation that neural style transfer can be seen as a distribution alignment process from the content image to the style image, we mix the two factors by statistic matching between the style and content images:

\hat{F}^c = \frac{F^c_{con} - \mu(F^c_{con})}{\sigma(F^c_{con})} \sigma^c_{sty} + \mu^c_{sty},   (9)

where \hat{F}^c is the statistic-aligned feature map for the c-th channel, and \mu(F^c_{con}) and \sigma(F^c_{con}) are the mean and standard deviation computed across all positions of the feature map F^c_{con}:

\mu(F^c_{con}) = \frac{1}{HW} \sum_{h=1}^{H} \sum_{w=1}^{W} F^{hwc}_{con},   (10)

\sigma(F^c_{con}) = \left[ \frac{1}{HW} \sum_{h=1}^{H} \sum_{w=1}^{W} \left( F^{hwc}_{con} - \mu(F^c_{con}) \right)^2 \right]^{\frac{1}{2}},   (11)

where we suppose the size of Fcon is H × W × C.
The Decoder takes the feature maps \hat{F} as input and generates the image Igen with the target style and content. The architecture of the Decoder mostly mirrors the layers of the Content Encoder, except that the stride-2 convolutions are replaced by stride-1 convolutions and each convolution layer is followed by a ReLU rectifier except the last layer. Besides, we upsample the feature maps by the nearest-neighbor method in the up-sampling layers to reduce checkerboard effects, as done in [12].

B. Loss Function

Similar to [31], we use a pretrained VGG-19 model to calculate the loss function

L(Igen | Isty, Icon) = \lambda_c L_c + \lambda_s L_s + \lambda_{tv} L_{tv},   (12)

which is a weighted combination of the content loss L_c, the style loss L_s and the total variation regularizer L_{tv}. The content loss L_c is the squared and normalized Euclidean distance between the feature maps of the generated images and
The generation results for D1, D2, D3 and D4 are shown in Figure 5. As we can see, the larger the training set, the better the performance, which is consistent with our intuition. The generated images with Nt=300k and 500k are clearly better than images generated with Nt=20k, 50k and 100k. Besides, the performance of Nt=300k and Nt=500k is close which implies that with more training images, the network performance tends to be saturated and Nt=300k is enough for good results. Therefore, we take Nt=300k for the rest of experiments. Influence of the Reference Set Size In addition, we conduct experiments with different number of reference images. Fig- ure 6 displays the image generation results of Nt=300k with r=5, r=10 and r=15 respectively. As can be seen from the figure, more reference images lead to better detail generation for characters. Besides, characters generated with r=5 are overall okay, meaning that our model can generalize to novel styles using only a few reference images. The generation results of r=10 and r=15 are close, therefore we take r=10 in our other experiments. Intuitively, more reference images supply more information about strokes and styles of characters, Fig. 4. The illustration of data set partition, target images selection and reference set construction (best viewed in color). content reference images. Suppose the content loss is cal- culated for the l-th layer and the feature maps are of size Hl × Wl × Cl, then the content loss can be formulated as Lc = 1 HlWlCl (cid:107) F l gen − F l con (cid:107)2 2, (13) gen and F l where F l con are feature maps in the l-th layer for the generated image Igen and the content reference image Icon. The style loss Ls is constructed by aligning the Batch Normalization (BN) statistics (mean and standard deviation) [12], [17] of the feature maps of the generated image Igen and the style reference image Isty Ls = (cid:88) l (cid:107) µ(F l gen) − µ(F l sty) (cid:107)2 2 + (cid:107) σ(F l gen) − σ(F l sty) (cid:107)2 2 . (14) In addition, following [15], [24], we add the total variation regularizer Ltv to encourage the smooth of generated images. VI. EXPERIMENTS A. Character Typeface Transfer 1) Data Set: To evaluate the proposed EMD model with Chinese Typeface transfer tasks, we construct a data set of 832 fonts (styles), each font with 1732 commonly used Chinese characters (contents). All images are in the size of 80 × 80 pixels. We randomly select 75% of the styles and contents as known styles and contents (i.e. 624 train styles and 1299 train contents) and leave the rest 25% as novel styles and contents (i.e. 208 novel styles and 433 novel contents). The entire data set is accordingly partitioned into four subsets as shown in Figure 4: D1, images with known styles and contents, D2, images with known styles but novel contents, D3, images with known contents but novel styles, and D4, images with both novel styles and novel contents. The training set is selected from D1, and four test sets are selected from D1, D2, D3, and D4, respectively. The four test sets represent different levels of style transfer challenges. In our experiment, 2) Implementation Details: the out- put channels of convolution layers in the Style Encoder and Content Encoder are 1, 2, 4, 8, 8, 8, 8, 8 times of C respectively, where C=64. And for the Mixer, we set R=B=K in our implementation. The output channels of the first seven deconvolution layers in Decoder are 8, 8, 8, 8, 4, 2, 1 times of C respectively. 
We set the initial learning rate as 0.0002 Known Style D1 D2 D3 D4                      Novel Style Known Content Novel Content              TG: O1: O2: O3: TG: O1: O2: O3: TG: O1: O2: O3: TG: O1: O2: O3: Fig. 6. The impact of the number of reference images on the generation of images in D1, D2, D3, D4, respectively (from upper left to lower right). TG: Target image, O1: Output for r=5, O2: Output for r=10, O3: Output for r=15. In all cases, Nt=300k. TG: O1: O2: TG: O1: O2: TG: O1: O2: TG: O1: O2: Fig. 7. The impact of the skip-connection on generation of images in D1, D2, D3, D4, respectively (from upper left to lower right). TG is the target image, O1 and O2 are outputs of models without and with skip-connection. In all cases Nt=300k, r=10. making the common points in the reference sets more obvious. Therefore, given r > 1, our model can achieve co-learning of images with the same style/content. Moreover, with r > 1 we can learn more images at once which improves the learning efficiency, i.e., if we split the <r, r, 1> triplets to be r2 <1, 1, 1> triplets, the learning time increases nearly r2 times under the same condition. Effect of the Skip-connection To evaluate the effectiveness of the skip-connection during image generation, we compare the results with and without skip-connection in Figure 7. As shown in the figure, images in D1 are generated best, next is D3 and last is D2 and D4, which conforms to the difficulty level and indicates that novel contents are more challenging to extract than novel styles. For known contents, 7 Fig. 8. Validation of pure style extraction. CR: the content reference set, TG: the target image, O1, O2 and O3 are generated by CR and three different style reference sets SR1, SR2 and SR3. Fig. 9. Validation of pure content extraction. SR: the style reference set, TG: the target image, O1, O2 and O3 are generated using SR but three different content reference sets CR1, CR2 and CR3. models with and without skip-connection perform closely. But for novel contents, images generated with skip-connection are much better in details. Besides, the model without skip- connection may generate images of novel characters to be similar characters which it has seen before. This is because the structure of novel characters is more challenging to extract and the loss of structure information during down-sampling makes the model generate blurry even wrong characters. However, with content skip-connection, the loss in location and structure information is recaptured by the Decoder network. Validation of Style and Content Separation Separating style and content is the key feature of the proposed EMD model. To validate the clear separation of style and content, we combine one content representation with style representations from a few disjoint style reference sets for one style and check whether the generated images are the same. For better validation, the target images are selected from D4, and the content reference sets and style reference sets are all selected from novel styles and novel contents. Similarly, we combine one style representation with content representations from a few disjoint content reference sets. The results are displayed in Figure 8 and Figure 9, respectively. 
As shown in Figure 8, the generated O1, O2 and O3 are similar although the style reference sets used are quite different, demonstrating that the Style Encoder is able to accurately extract style representations as the only thing the three style reference sets share is the style. Similar results can be found in Figure 9, showing that the Content Encoder accurately extracts content representations. Comparison with Baseline Methods In the following, we compare our method with the following baselines for character style transfer. TG:O1:O2:O3:CR:SR1:SR2:SR3:CR:SR1:SR2:SR3:TG:O1:O2:O3:TG:O1:O2:O3:SR:CR1:CR2:CR3:SR:CR1:CR2:CR3:TG:O1:O2:O3: Source: Pix2pix: AEGN: Zitozi: C-GAN: EMD: Target: 8 L1 loss RMSE PDAR 0.0105 0.0202 0.17 0.0112 0.0202 0.3001 0.0091 0.0184 0.1659 0.0112 0.02 0.3685 0.0087 0.0184 0.1332 Fig. 10. Comparison of image generation for known styles and novel contents. Equal number of image pairs with source and target styles are used to train the baselines. • Pix2pix [13]: Pix2pix is a conditional GAN based image translation network, which consists of encoder, decoder and a discriminator. It also adopts the skip-connection to connect encoder and decoder. Pix2pix is optimized by L1 distance loss and adversarial loss. • Auto-encoder guided GAN [23]: Auto-encoder guided GAN consists of two encoder-decoder networks, one for image transfer and another acting as an auto-encoder to guide the transfer to learn detailed stroke information. • Zi-to-zi [2]: Zi-to-zi is proposed for Chinese typeface transfer which is based on the encoder-decoder architec- ture followed by a discriminator. In discriminator, there are two fully connected layers to predict the real/fake and the style category respectively. • CycleGAN [41]: CycleGAN consists of two mapping networks which translate images from style A to B and from style B to A, respectively and construct a cycle process. The CycleGAN model is optimized by the adversarial loss and cycle consistency loss. For comparison, we use the font Song as the source font which is simple and commonly used and transfer it to target fonts. Our model is trained with Nt=300k and r=10 and as an average, we use less than 500 images for each style. We compare our method with baselines on generating images with known styles and novel styles, respectively. For novel style, the baselines need to be re-trained from scratch. Known styles as target style. Taking known styles as the target style, baselines are trained using the same number of paired images as the images our model used for the target style. The results are displayed in Figure 10 where CycleGAN is denoted as C-GAN for simplicity. We can observe that for known styles and novel contents, our method performs much better than pix2pix, AEGN and CycleGAN and close to or even slightly better than zi-to-zi. This is because pix2pix and AEGN usually need more samples to learn a style [23]. Cycle- GAN performs poorly and only generates part of characters or some strokes, possibly because it learns the domain mappings without the domain knowledge. Zitozi performs well since it learns multiple styles at the same time and the contrast among different styles helps the model better learn styles. For quantitative analysis, we calculate the L1 loss, Root Mean Square Error (RMSE) and the Pixel Disagreement Ratio (PDAR) [41] between the generated images and the target images. PDAR is the number of pixels with different values image size after in the two images divided by the total image binaryzation. 
We conduct experiments for 10 randomly sampled styles and the average results are displayed at the last three columns in Figure 10 and the best performance is bold. We can observe that our method performs best and achieves the lowest L1 loss, RMSE and PDAR. Novel styles as target style. Taking novel styles as the target style, we test our model to generate images of novel styles and contents given r=10 style/content reference images without retraining. As for baselines, retraining is needed. Here, we conduct two experiments for baselines. One is that we first pretrain a model for each baseline method using the training set our method used and then fine-tune the pretrained model with the same 10 reference images as our method used. The results show that all baseline methods preform poorly and it is unfeasible to learn a style by fine-tuning on only 10 reference images. Thus, we omit the experiment results here. The other setting is training the baseline model from scratch. Since it is unrealistic to train baseline models with only 10 samples, we train them using 300, 500, 1299 images of the target style respectively. Here we use 1299 is because the number of train contents is 1299 in our data set. The results are presented in Figure 11. As shown in the figure, the proposed EMD model can generalize to novel styles from only 10 style reference images but other methods need to be retrained with more samples. The pix2pix, AEGN and CycleGAN perform worst even trained with all 1299 training images, which demonstrates that these three methods are not effective for character style transfer especially when the training data are limited. With only 10 style reference images, our model performs better than zi-to-zi-300 namely zi-to-zi model learned with 300 examples for each style, close to zi-to-zi-500 and a little worse than zi- to-zi-1299. This may be because zi-to-zi learns multiple styles at the same time and learning with style contrast helps model learning better. The quantitative comparison results for L1 loss, RMSE and PDAR are shown at the last three columns of Figure 11. Although given only 10 style reference images, our method performs better than all pix2pix, AEGN and CycleGAN mod- els and zi-to-zi-300, and close to zi-to-zi-500 and zi-to-zi- 1299, which demonstrates the effectiveness of our method. In conclusion, these baseline methods require many images of source styles and target styles, which may be difficult to collect. Besides, the learned baseline model can only transfer styles appearing in train set and they have to be retrained for new styles. But our method can generalize to novel styles given Source: Pix2pix-300: Pix2pix-500: Pix2pix-1299: AEGN-300: AEGN-500: AEGN-1299: Zitozi-300: Zitozi-500: Zitozi-1299: C-GAN-300: C-GAN-500: C-GAN-1299: EMD-10: Target: 9 L1 loss RMSE PDAR 0.0109 0.0206 0.1798 0.0106 0.0202 0.1765 0.01 0.0196 0.1531 0.0117 0.02 0.3951 0.0108 0.02 0.2727 0.0105 0.0196 0.26 0.0091 0.0187 0.1612 0.009 0.0185 0.1599 0.009 0.0183 0.1624 0.0143 0.0215 0.5479 0.0126 0.0203 0.4925 0.0128 0.0203 0.4885 0.009 0.0186 0.1389 Fig. 11. Comparison of image generation for novel styles and contents given r=10. The baseline methods are trained with 300, 500, 1299 image pairs respectively. only a few reference images. In addition, baseline models can only use images of target styles. However, since the proposed EMD model learns feature representations instead of transformation among specific styles, it can leverage images of any styles and make the most of existing data. B. 
Neural Style Transfer 1) Implementation Details: Following previous stud- ies [12], [15], we use the MS-COCO dataset [19] as the content images and a dataset of paintings mainly collected from WikiArt [3] as the style images. Each dataset contains roughly 80,000 training examples. The model is trained using the Adam optimizer with the learning rate of 0.0001. The batch size is set to be 8 style-content pairs. We compute the style loss using the relu1 2, relu2 2, relu3 3, relu4 3 layers of VGG-19 and the content loss using the relu4 1 layer. We set λc=1, λs=5 and λtv=1e-5. During training, we first resize the smallest dimension of both images to 512 while preserving the aspect ratio, then randomly crop regions of size 256×256. Since the size of the fully connected layer in Style Encoder is only related to the filter numbers, our model can be applied to style/content images of any size during testing. 2) Comparison Methods: We compare the proposed neural style transfer model with the following three types of baseline methods: • Fast but not flexible Per-Style-Per-Model method, which is restricted to a single style and can not be generalized to new styles. Here we use the state-of-the-art method TextureNet [32] as an example. TextureNet is mainly a generator which takes a noise variable z and a content reference image as the inputs and generates the image with target style/content. • Flexible but slow optimization based method [11], which optimizes one noise image to be with target style and content iteratively with the help of a pretrained VGG network. • Flexible and fast Arbitrary-Style-Per-Model method, which can achieve arbitrary style transfer with no need for retraining. In this study, we compare with the following three methods: – Patch-based [8]: Patch-based method conducts style transfer by swapping each content feature patch with the nearest style patch. The network consists of a convolution network, an inverse network and a style swap layer. – AdaIn [12]: AdaIn is based on the Adaptive Instance Normalization and the network of AdaIn consists of an encoder, a decoder and an Adaptive Instance Normalization layer, where the encoder is fixed as the first few layers of VGG-19. – Universal [16]: Universal is designed based on the whitening and coloring transformation which is em- bedded in a series of pretrained encoder-decoder image reconstruction networks. Among the above baseline methods, the TextureNet is more impressive in transfer quality than the other four baseline methods, therefore, we take it as a benchmark. The results of these baseline methods are all obtained by running their released code with the default configurations. 3) Experimental Results: Comparison with Baseline Methods As can be seen from Figure 12, the proposed method performs better than other arbitrary style transfer methods but a little worse than TextureNet. It is worth noting that TextureNet is trained separately for each style but none of the 10 Style Content TextureNet[32] Opt-based[11] Patch-based[8] AdaIn[12] Universal[16] EMD Fig. 12. The comparison results for neural style transfer. 11 Fig. 13. More experimental results for neural style transfer. presented styles are observed by our model during training. This is acceptable due to the trade-off between flexibility and transfer quality. Patch-based method performs poorly. It can not capture some styles when lots of content patches are swapped with style patches lack of style elements. 
AdaIn per- forms well on most styles but the generated images are a little blurry in details. It performs not so well for some complicated styles. Universal replaces the training process with a series of transformations but it is not effective at producing sharp details and fine strokes. Figure 13 displays more style transfer results of our proposed method, which demonstrate that the proposed EMD framework can be generalized to arbitrary new styles without the need for model retraining. Style-content Trade-off During training, we can control the degree of style transfer by adjusting the weight λs in loss function. When testing, our method also allows the style- content trade-off by adjusting the amount of style information mixed with the content feature. With Style Encoder, we can obtain the original style of the content image, and then we mix the content feature with the style which is the weighted combination of styles from the content image and the style image ˆF = Fcon − µ(Fcon) σ(Fcon) where Fcon is the feature map of content image and σnew + µnew, µnew = (1 − α)µcon + αµsty, (15) (16) stylecontent 12 Fig. 14. Examples of style-content trade-off. Fig. 15. Examples of style interpolation. σnew = (1 − α)σcon + ασsty, (17) where (µcon, σcon) and (µsty, σsty) are the learned statis- tical information of the content image and the style image, respectively. By adjusting the weight α, the Decoder generates images gradually changing from the original style to the target style. When α = 0, the Decoder tries to reconstruct the content image and when α = 1.0, the Decoder outputs the most stylized image. As shown in Figure 14, the stylized image changes from slightly stylized to the most stylized with increasing α. Style Interpolation Similarly, our method can also be applied for interpolation between two styles, which is achieved by setting µnew = (1−α)µsty1+αµsty2 and σnew = (1−α)σsty1+ ασsty2 in Eq. 15. An example is presented in Figure 15. When α = 0 and α = 1, style 1 and style 2 are used for the transfer, respectively. When 0 < α < 1, an interpolation between the two styles are used for the transfer. decoder will be taken as the shared knowledge and transferred to new styles and contents. Under this framework, we design two individual networks for character typeface transfer and neural style transfer tasks. Extensive experimental results on these two tasks demonstrate its effectiveness. In our study, the learning process consists of a series of image generation tasks and we try to learn a model which can generalize to new but related tasks by learning a high-level strategy, namely learning the style and content representations. This resembles to “learning-to-learn” program. In the future, we will explore more about “learning-to-learn” and integrate it with our framework. ACKNOWLEDGMENT The work is partially supported by the High Technology Research and Development Program of China 2015AA015801, NSFC 61521062, STCSM 18DZ2270700. VII. CONCLUSION AND FUTURE WORK In this paper, we propose a unified style transfer frame- work EMD for both character typeface transfer and neural style transfer, which enables the transfer models generalizable to new styles and contents given a few reference images. The main idea is that from these reference images, the Style Encoder and Content Encoder extract style and content representations, respectively. 
Then the extracted style and con- tent representations are mixed by a Mixer and finally fed into the Decoder to generate images with target styles and contents. This learning framework allows simultaneous style transfer among multiple styles and can be deemed as a special ‘multi- task’ learning scenario. Then the learned encoders, mixer and REFERENCES [1] Rewrite. https://github.com/kaonashi-tyc/Rewrite. 1, 2, 3 [2] Zi-to-zi. https://kaonashi-tyc.github.io/2017/04/06/zi2zi.html. 2, 3, 8 [3] Painter by numbers, wikiart, 2016. https://www.kaggle.com/c/painter- by-numbers. 9 [4] S. Azadi, M. Fisher, V. Kim, Z. Wang, E. Shechtman, and T. Darrell. Multi-content gan for few-shot font style transfer. 2018. 1, 3 [5] K. Bousmalis, N. Silberman, D. Dohan, D. Erhan, and D. Krishnan. Unsupervised pixel-level domain adaptation with generative adversarial networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017. 3 [6] S. Changpinyo, W. Chao, B. Gong, and F. Sha. Synthesized classifiers In Proceedings of the IEEE Conference on for zero-shot learning. Computer Vision and Pattern Recognition, pages 5327–5336, 2016. 4 [7] D. Chen, L. Yuan, J. Liao, N. Yu, and G. Hua. Stylebank: An explicit In Proceedings of the representation for neural image style transfer. IEEE Conference on Computer Vision and Pattern Recognition, 2017. 2 contentstyle𝛼𝛼=0.25𝛼𝛼=0.5𝛼𝛼=0.75𝛼𝛼=1.0contentstyle𝛼𝛼=0.25𝛼𝛼=0.5𝛼𝛼=0.75𝛼𝛼=1.0contentstyle2𝛼𝛼=0.25𝛼𝛼=0.5𝛼𝛼=0.75𝛼𝛼=1.0𝛼𝛼=0style1 13 [32] D. Ulyanov, A. Vedaldi, and V. Lempitsky. Improved texture networks: maximizing quality and diversity in feed-forward stylization and texture synthesis. In Proc. CVPR, 2017. 2, 9, 10 [33] P. Upchurch, N. Snavely, and K. Bala. From a to z: supervised transfer In arXiv of style and content using deep neural network generators. preprint arXiv:1603.02003, 2016. 3 [34] P. Wilmot, E. Risser, and C. Barnes. Stable and controllable neural texture synthesis and style transfer using histogram losses. arXiv preprint arXiv:1701.08893, 2017. 1 [35] Y. Xian, Z. Akata, G. Sharma, Q. Nguyen, M. Hein, and B. Schiele. In Proceedings of the Latent embeddings for zero-shot classification. IEEE Conference on Computer Vision and Pattern Recognition, pages 69–77, 2016. 4 [36] S. Xu, H. Jiang, T. Jin, F. C. Lau, and Y. Pan. Automatic generation IEEE Intelligent of chinese calligraphic writings with style imitation. Systems, 2009. 1 [37] H. Zhang and K. Dana. Multi-style generative network for real-time transfer. In arXiv preprint arXiv:1703.06953, 2017. 1 [38] X.-Y. Zhang, F. Yin, Y.-M. Zhang, C.-L. Liu, and Y. Bengio. Drawing and recognizing chinese characters with recurrent neural network. IEEE transactions on pattern analysis and machine intelligence, 40(4):849– 862, 2018. 1 [39] Y. Zhang, Y. Zhang, and W. Cai. Separating style and content for generalized style transfer. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018. 1 [40] B. Zhou, W. Wang, and Z. Chen. Easy generation of personal chinese In Multimedia and Expo (ICME), 2011 IEEE handwritten fonts. International Conference on, pages 1–6. IEEE, 2011. 1 [41] J. Y. Zhu, T. Park, P. Isola, and A. A. Efros. Unpaired image-to-image translation using cycle-consistent adversarial networks. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Oct 2017. 1, 2, 3, 8 [8] T. Q. Chen and M. Schmidt. Fast patch-based style transfer of arbitrary style. arXiv preprint arXiv:1612.04337, 2016. 1, 2, 9, 10 [9] V. Dumoulin, J. Shlens, and M. 
Kudlur. A learned representation In Proceedings of the International Conference on for artistic style. Learning Representations, 2017. 5 [10] A. Frome, G. Corrado, J. Shlens, S. Bengio, J. Dean, T. Mikolov, et al. In Advances in Devise: A deep visual-semantic embedding model. neural information processing systems, pages 2121–2129, 2013. 4 [11] A. Gatys, A. Ecker, and M. Bethge. Image style transfer using convolutional neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2414–2423, 2016. 1, 2, 4, 9, 10 [12] X. Huang and S. Belongie. Arbitrary style transfer in real-time the IEEE with adaptive instance normalization. International Conference on Computer Vision (ICCV), Oct 2017. 1, 2, 5, 6, 9, 10 In Proceedings of [13] P. Isola, J. Zhu, T. Zhou, and A. Efros. Image-to-image translation with conditional adversarial networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017. 1, 2, 3, 4, 8 [14] S. J´egou, M. Drozdzal, D. Vazquez, A. Romero, and Y. Bengio. The one hundred layers tiramisu: Fully convolutional densenets for semantic In Proceedings of the IEEE Conference on Computer segmentation. Vision and Pattern Recognition Workshops (CVPRW), pages 1175–1183. IEEE, 2017. 4 Perceptual losses for real-time [15] J. Johnson, A. Alahi, and F. Li. In Proceedings of the European style transfer and super-resolution. Conference on Computer Vision, pages 694–711. Springer, 2016. 1, 2, 4, 6, 9 [16] Y. Li, C. Fang, J. Yang, Z. Wang, X. Lu, and M.-H. Yang. Universal style transfer via feature transforms. In Advances in Neural Information Processing Systems, pages 385–395, 2017. 1, 2, 9, 10 [17] Y. Li, N. Wang, J. Liu, X. Hou, Y. Li, N. Wang, J. Liu, and X. Hou. Demystifying neural style transfer. In Twenty-Sixth International Joint Conference on Artificial Intelligence, pages 2230–2236, 2017. 5, 6 [18] Z. Lian, B. Zhao, and J. Xiao. Automatic generation of large-scale handwriting fonts via style learning. In SIGGRAPH ASIA 2016 Technical Briefs, page 12. ACM, 2016. 1 [19] T.-Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Doll´ar, and C. L. Zitnick. Microsoft coco: Common objects in context. In European conference on computer vision, pages 740–755. Springer, 2014. 9 [20] M. Y. Liu, T. Breuel, and J. Kautz. Unsupervised image-to-image In Advances in Neural Information Processing translation networks. Systems, pages 700–708, 2017. 3 [21] M. Y. Liu and O. Tuzel. Coupled generative adversarial networks. In Advances in Neural Information Processing Systems 29, pages 469–477. 2016. 1, 2, 3 [22] J. Long, E. Shelhamer, and T. Darrell. Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3431–3440, 2015. 4 [23] P. Lyu, X. Bai, C. Yao, Z. Zhu, T. Huang, and W. Liu. Auto- encoder guided gan for chinese calligraphy synthesis. In arXiv preprint arXiv:1706.08789, 2017. 1, 2, 3, 4, 8 [24] A. Mahendran and A. Vedaldi. Understanding deep image representa- tions by inverting them. pages 5188–5196, 2015. 6 [25] A. Mordvintsev, C. Olah, and M. Tyka. Inceptionism: Going deeper into neural networks. Google Research Blog. Retrieved June, 20(14), 2015. 2 [26] A. Radford, L. Metz, and S. Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. In Proceedings of the International Conference on Learning Representa- tions, 2016. 4 [27] O. Ronneberger, P. Fischer, and T. Brox. 
synthetic_cpt
2
Reflexive_Guidance_Improving_OoDD_in_Vision-Language_Models_via_Self-Guided_Image-Adaptive_Concept_Generation.pdf
Proofs of the Technical Results Justifying a Biologically Inspired Algorithm for Reactive Navigation of Nonholonomic Robots in Maze-Like Environments

arXiv:1111.4767v1 [math.OC] 21 Nov 2011

Alexey S. Matveev a, Michael C. Hoy b, Andrey V. Savkin b

a Department of Mathematics and Mechanics, Saint Petersburg University, Universitetskii 28, Petrodvoretz, St. Petersburg, 198504, Russia
b School of Electrical Engineering and Telecommunications, The University of New South Wales, Sydney 2052, Australia

Email addresses: [email protected] (Alexey S. Matveev), [email protected] (Michael C. Hoy), [email protected] (Andrey V. Savkin).

1 Introduction

Inspired by the behaviors of animals, which are believed to use simple, local motion control rules that nevertheless produce remarkably complex intelligent behaviors [1,2,3], we examine a navigation strategy aimed at reaching a steady target in a steady, arbitrarily shaped maze-like environment. The strategy is composed of the following reflex-like rules:

s.1) At considerable distances from the obstacle,
(a) turn towards the target as quickly as possible;
(b) move directly to the target when headed to it;

s.2) At a short distance from the obstacle,
(c) follow (a,b) when moving away from the obstacle;
(d) when approaching it, quickly avert the collision threat by turning sharply.

Studies of target pursuit in animals, ranging from dragonflies to fish and from dogs to humans, suggest that they often use the pure pursuit guidance s.1) to catch not only a steady but also a moving target. The local obstacle avoidance strategy s.2) is likewise inspired by biological examples such as a cockroach encountering a wall [2]. The rules s.1), s.2) demand only minor perceptual capacity. Even access to the distance to the obstacle is not needed: it suffices to determine whether that distance is short or not and to know the sign of its time derivative. As for the target, the vehicle has to access its relative bearing angle; moreover, it suffices that the vehicle can only recognize which quadrant of its relative Cartesian frame hosts the target line-of-sight.

To address the issues of nonholonomic constraints, control saturation, and under-actuation, we consider a vehicle of the Dubins car type. It is capable of moving with a constant speed along planar paths of upper-limited curvature without reversing direction and is controlled by an upper-limited angular velocity. As a result, it is unable to slow down, stop, or make an abrupt turn.

In its reliance on bearing-only data about the target, the proposed approach is similar to the Pledge algorithm [4] and the Angulus algorithm [5]. Unlike ours, both assume access to the absolute direction (e.g., from a compass), and the latter employs not one but two angles in its convergence criterion. The major distinction is that they assume the vehicle can trace paths of unlimited curvature, in particular broken curves, and can move exactly along the obstacle boundary. These assumptions are violated in the context of this paper, which renders the available convergence proofs for those algorithms inapplicable.

An extended introduction and discussion of the proposed control law are given in the paper submitted by the authors to the IFAC journal Automatica. The present text contains the proofs of the technical facts underlying the justification of the convergence and performance of the proposed algorithm in that paper, which were not included there due to length limitations.
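As a rough illustration of rules s.1), s.2) (not the formal control law, which is specified in Section 2 below), the following Python sketch shows how little sensory information the decision logic needs: a near/far flag, the sign of the distance rate, and the target bearing. All names and thresholds are illustrative.

```python
import math

def reflex_rule(near_obstacle: bool, d_dot_sign: int, bearing: float,
                sigma: int = 1) -> float:
    """Reflex-like steering decision in the spirit of s.1), s.2).

    near_obstacle : True when the obstacle distance is 'short' (below a threshold).
    d_dot_sign    : sign of the time derivative of the obstacle distance.
    bearing       : target bearing angle beta (rad), positive to the left.
    sigma         : fixed turn direction (+1 or -1) used when closing in on the obstacle.
    Returns a normalized turning command in [-1, 1], to be scaled by the
    saturation level of the angular velocity.
    """
    if not near_obstacle or d_dot_sign > 0:
        # s.1 / s.2(c): turn towards the target as quickly as possible,
        # go straight once (almost) headed at it.
        return 0.0 if abs(bearing) < 1e-3 else math.copysign(1.0, bearing)
    # s.2(d): approaching the obstacle -- avert the threat by a sharp fixed turn.
    return -float(sigma)
```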
To make the current text logically consistent, we reproduce the problem statement and notations.

2 Problem Setup and the Navigation Strategy

We consider a planar under-actuated nonholonomic vehicle of the Dubins car type. It travels with a constant speed v without reversing direction and is controlled by the angular velocity u, which is limited by a given constant ū. There is also a steady point target T and a single steady obstacle D ∌ T in the plane, which is an arbitrarily shaped compact domain whose boundary ∂D is a Jordan piece-wise analytical curve without inner corners. Modulo smoothened approximation of such corners, this assumption is typically satisfied by all obstacles encountered in robotics, including continuous mazes. The objective is to drive the vehicle to the target while constantly respecting a given safety margin d(t) ≥ dsafe > 0. Here d(t) is the distance to the obstacle

d(t) := distD[r(t)], distD[r] := min_{r∗ ∈ D} ‖r∗ − r‖,   (1)

‖·‖ is the Euclidean norm, and r(t) is the vehicle position. This position is given by the abscissa x and ordinate y of the vehicle in the world frame, whereas its orientation is described by the angle θ from the abscissa axis to the robot centerline. The kinematics of the considered vehicles are classically described by the following equations:

ẋ = v cos θ, ẏ = v sin θ, θ̇ = u ∈ [−ū, ū], r(0) = r0 ∉ D, θ(0) = θ0.   (2)

Thus the minimal turning radius of the vehicle is equal to

R = v/ū.   (3)

The vehicle has access to the current distance d(t) to D and the sign sgn ḋ(t) of its time-rate ḋ(t), which are accessible only within the given sensor range: d ≤ drange, where drange > dsafe. The vehicle also has access to the angle β from its forward centerline ray to the target. To specify the control strategy s.1), s.2), we introduce the threshold dtrig < drange separating the 'short' and 'long' distances to the obstacle. Mathematically, the examined strategy is given by the following concise formula:

u = ū × { sgn β                                   if d > dtrig (mode A)
        { sgn β if ḋ > 0;  −σ if ḋ ≤ 0            if d ≤ dtrig (mode B)   (4)

Here σ = ± is a constant controller parameter, which gives the turn direction, and ḋ ≥ 0 and ḋ < 0 correspond to the vehicle being oriented outwards and towards D, respectively. The switch A → B occurs when d reduces to dtrig; the converse switch holds when d increases to dtrig. When mode B is activated, ḋ ≤ 0; if ḋ = 0, the 'turn' submode u := −σū is set up. Since the control law (4) is discontinuous, the solution of the closed-loop system is meant in the Filippov sense [6].

Remark 1 In (4), β accounts for not only the heading but also the sum of full turns performed by the target bearing.

In the basic version of the algorithm, the parameter σ is fixed. To find a target hidden deeply inside the maze, a modified version can be employed: whenever A → B, the parameter σ is updated. The updated value is picked randomly and independently of the previous choices from {+, −}, with the value + being drawn with a fixed probability p ∈ (0, 1). This version is called the randomized control law.

To state the assumptions, we introduce the Frenet frame T(r∗), N(r∗) of ∂D at the point r∗ ∈ ∂D (T is the positively oriented unit tangent vector, N is the unit normal vector directed inwards D, and the boundary is oriented so that when traveling on ∂D one has D to the left); κ(r∗) is the signed curvature (κ(r∗) < 0 on concavities) and Rκ(r∗) := |κ(r∗)|⁻¹. Due to the absence of inner corners, any point r ∉ D at a sufficiently small distance distD[r] < d⋆ from D does not belong to the focal locus of ∂D, and distD[r] is attained at only one point [7]. The regular margin d⋆(D) > 0 of D is the supremum of such d⋆'s. So d⋆(D) = ∞ for convex domains; for non-convex D,

d⋆(D) ≤ RD := inf_{r ∈ ∂D: κ(r) < 0} Rκ(r).   (5)

(The infimum over the empty set is set to be +∞.)

Assumption 1 The vehicle is maneuverable enough: it is capable of a full turn without violation of a safety margin dsafe > R within the regularity margin of the maze, 3R < d⋆(D), and moreover 4R < RD.

Assumption 2 The sensor range gives enough space to avoid collision with D after its detection: drange > 3R.

The parameters dtrig and dsafe are tuned so that

3R < dsafe + 2R < dtrig < d⋆(D), drange, RD − R.   (6)

Such a choice is possible thanks to Assumptions 1 and 2.

3 Main Results

Theorem 1 (i) With probability 1, the randomized control law drives the vehicle to the target T in finite time while always respecting the safety margin (i.e., there exists a time instant t∗ such that r(t∗) = T and distD[r(t)] ≥ dsafe ∀t ∈ [0, t∗]) whenever both the vehicle initial location r0 and the target are far enough from the obstacle and from each other:

distD[r0] > dtrig + 2R, ‖r0 − T‖ > 2R, distD[T] > dtrig.   (7)

(ii) The basic control law drives the vehicle to the target in finite time while always respecting the safety margin whenever (7) holds and the vehicle initial location and the target lie far enough from the convex hull co D of the maze: dist_{co D}[T] > dtrig, dist_{co D}[r0] > dtrig.

In (7), distD[r0] > dtrig + 2R can be relaxed into distD[r0] > dtrig if the vehicle is initially directed to the target, β(0) = 0. In view of (3) and the freedom (6) in the choice of dsafe, dtrig, not only Assumptions 1, 2 but also the constraints (7) disappear (are boiled down into distD[r0] > 0, ‖r0 − T‖ > 0, distD[T] > 0) as v → 0. In other words, the algorithm succeeds in any case if the cruise speed v is small enough.

The last assumption dist_{co D}[T] > dtrig from (ii) can be relaxed to cover some scenarios with the target inside the maze. To specify this, we need some notations and definitions. The d-equidistant curve C(d) of D is the locus of points r at the distance distD[r] = d from D; the d-neighborhood N(d) of D is the area bounded by C(d); [r1, r2] is the straight line segment directed from r1 to r2. Let r♦, r∗ ∈ C(dtrig) and (r♦, r∗) ∩ N(dtrig) = ∅. The points r♦, r∗ divide C(dtrig) into two arcs. Being concatenated with [r♦, r∗], each of them gives rise to a Jordan curve encircling a bounded domain, one of which is the other united with N(dtrig). The smaller domain is called the simple cave of N(dtrig) with endpoints r♦, r∗. The location r is said to be locked if it belongs to a simple cave of N(dtrig) whose endpoints lie on a common ray centered at T. We remark that if dist_{co D}[r] > dtrig, the location is unlocked.

Theorem 2 The basic control law drives the vehicle to the target in finite time while always respecting the safety margin whenever (7) holds and both the initial location of the vehicle and the target are unlocked.

Now we disclose the tactical behavior implied by s.1), s.2) and show that it includes wall following in a sliding mode. In doing so, we focus on a particular avoidance maneuver (AM), i.e., the motion within an uninterrupted mode B.
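For intuition about how (2) and (4) interact, here is a minimal Python sketch that simulates the closed-loop kinematics with an explicit Euler step and a disc obstacle. The disc obstacle, the numerical parameters, and the Euler discretization are illustrative choices only; the paper's analysis treats the discontinuous law in the Filippov sense and makes no such simplifications.

```python
import math

# Illustrative parameters (not from the paper): speed, max turn rate, thresholds.
V, U_MAX = 1.0, 0.5                 # so R = V / U_MAX = 2.0
D_SAFE, D_TRIG = 2.5, 7.0
OBST_C, OBST_R = (10.0, 0.0), 4.0   # disc obstacle: center, radius
TARGET = (25.0, 0.0)
SIGMA = +1                          # fixed turn direction (basic control law)

def dist_to_obstacle(x, y):
    return math.hypot(x - OBST_C[0], y - OBST_C[1]) - OBST_R

def bearing_to_target(x, y, theta):
    # Target bearing beta wrapped to [-pi, pi); the paper additionally counts
    # full turns of the bearing (Remark 1), which this toy sketch ignores.
    ang = math.atan2(TARGET[1] - y, TARGET[0] - x) - theta
    return (ang + math.pi) % (2.0 * math.pi) - math.pi

def control(d, d_dot, beta):
    """Discontinuous law (4): pure pursuit in mode A, obstacle reflex in mode B."""
    if d > D_TRIG:                                  # mode A
        return U_MAX * math.copysign(1.0, beta) if beta else 0.0
    if d_dot > 0:                                   # mode B, moving away
        return U_MAX * math.copysign(1.0, beta) if beta else 0.0
    return -SIGMA * U_MAX                           # mode B, approaching: sharp turn

def simulate(x=-5.0, y=0.0, theta=0.0, dt=0.01, t_max=200.0):
    d_prev = dist_to_obstacle(x, y)
    t = 0.0
    while t < t_max:
        d = dist_to_obstacle(x, y)
        d_dot = (d - d_prev) / dt
        u = control(d, d_dot, bearing_to_target(x, y, theta))
        # Kinematics (2), integrated by a crude explicit Euler step.
        x += V * math.cos(theta) * dt
        y += V * math.sin(theta) * dt
        theta += u * dt
        d_prev, t = d, t + dt
        if math.hypot(TARGET[0] - x, TARGET[1] - y) < 0.2:
            return t, (x, y)
    return None, (x, y)

if __name__ == "__main__":
    print(simulate())
```

In this toy run the vehicle first pursues the target (mode A), then chatters along the d ≈ dtrig level set of the disc (the sliding-mode wall following discussed below), and finally resumes straight pursuit once the obstacle no longer blocks the line of sight.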
Let ρ(s) be the natural parametric representation of ∂D, where s is the curvilinear abscissa. This abscissa is cyclic: s and s + L encode a common point, where L is the perimeter of ∂D. We notationally identify s and ρ(s). For any r 6∈ D within the regular margin distD[r] < d⋆(D), the symbol s(r) stands for the boundary point closest to r, and s(t) := s[r(t)], where r(t) is the vehicle location at time t. To simplify the matters, we first show that ∂D can be as- sumed C1-smooth without any loss of generality. Indeed, if 0 < d < d⋆(D), the equidistant curve C(d) is C1-smooth and piece-wise C2-smooth [7]; its parametric representation, orientation, and curvature are given by Proposition 3 Let for the vehicle driven by the control law (4), obstacle avoidance be started with zero target bearing β(t) = 0 at t = t∗. Then the following claims hold: (i) There exists τ ≥ t∗ such that the vehicle moves with the maximal steering angle u ≡ −σu and the dis- tance to the obstacle decreases ˙d ≤ 0 until τ , ∗ and at t = τ , the sliding motion along the equidistant curve C {distD[r(τ )]} † is started with σ ˙s > 0 and β ˙s > 0; (ii) SMEC holds until β arrives at zero at a time when κ[s(t) + σ≈0] > 0, which sooner or later holds and af- ter which a straight move to the target ‡ is commenced; (iii) During SMT, the vehicle first does not approach the obstacle ˙d ≥ 0 and either the triggering threshold dtrig is ultimately trespassed and so mode B is switched off, or a situation is encountered where ˙d(t) = 0 and κ[s(t) + σ≈0] < 0. When it is encountered, the vehicle starts SMEC related to the current distance; (iv) There may be several transitions from SMEC to SMT and vice versa, all obeying the rules from (ii), (iii); (v) The number of transitions is finite and finally the ve- hicle does trespass the triggering threshold dtrig, thus terminating the considered avoidance maneuver; (vi) Except for the initial turn described in (i), the vehicle maintains a definite direction of bypassing the obsta- cle: ˙s is constantly positive if σ = + (counterclockwise bypass) and negative if σ = − (clockwise bypass). By (4), AM is commenced with ˙d(t∗) ≤ 0. The next remark shows that if ˙d(t∗) = 0, IT may have the zero duration. Remark 2 If only if σ ˙s(t∗) > 0. Then the following claims are true: ˙d(t∗) = 0, IT has the zero duration if and (1) If κ[s(t∗) + σ ·≈0] < 0, SMEC is immediately started; (2) If κ[s(t∗) + σ ·≈0] ≥ 0, the duration of SMEC is zero, and SMT is continued. The assumption β(t∗) = 0 of Proposition 3 holds for the first AM due to (7). Indeed, since distD[r0] > dtrig + 2R, the pursuit guidance law turns the vehicle towards the target earlier than the threshold dtrig for activation of AM is en- countered. It also holds for all subsequent AM’s since any AM ends in course of SMT by Proposition 3. s 7→ ρ(s) − dN (s), κC(d)(s) = κ(s) 1 + κ(s)d . (8) 4 Technical facts underlying the proofs of Proposition 3 and Remark 2. The second formula holds if s is not a corner point of ∂D; such points contribute circular arcs of the radius d into C(d). So by picking δ > 0 small enough, expanding D to N (δ), and correction d := d−δ of d := d, dsafe, dtrig, drange, we keep all assumptions true and do not alter the operation of the closed-loop system. Hence ∂D can be assumed C1-smooth. Writing f (η∗ ±≈0) > 0 means that there exists small enough ∆ > 0 such that f (η) > 0 if 0 < ±(η − η∗) < ∆. The similar notations, e.g., f (η∗ ±≈0) ≤ 0, are defined likewise. 
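The equidistant-curve formula (8) is easy to sanity-check numerically. The following sketch (an illustration, not part of the paper) builds the d-equidistant curve of a smooth convex boundary, here an ellipse, and compares the predicted curvature κ/(1 + κd) with a finite-difference estimate; the inward normal and orientation conventions follow the ones stated above.

```python
import numpy as np

def ellipse(t, a=3.0, b=2.0):
    # Counterclockwise parametrization, so the domain D lies to the left.
    return np.stack([a * np.cos(t), b * np.sin(t)], axis=-1)

def curvature_and_inward_normal(t, a=3.0, b=2.0):
    dx, dy = -a * np.sin(t), b * np.cos(t)          # first derivatives
    ddx, ddy = -a * np.cos(t), -b * np.sin(t)       # second derivatives
    speed = np.hypot(dx, dy)
    kappa = (dx * ddy - dy * ddx) / speed**3        # signed curvature (>0 on this convex boundary)
    tangent = np.stack([dx, dy], axis=-1) / speed[..., None]
    inward = np.stack([-tangent[..., 1], tangent[..., 0]], axis=-1)  # left normal points into D
    return kappa, inward

d = 0.5
t = np.linspace(0.0, 2.0 * np.pi, 20001)
p = ellipse(t)
kappa, n_in = curvature_and_inward_normal(t)
equi = p - d * n_in                                 # formula (8): rho(s) - d N(s)

# Finite-difference curvature of the equidistant curve vs. kappa / (1 + kappa d).
dx, dy = np.gradient(equi[:, 0], t), np.gradient(equi[:, 1], t)
ddx, ddy = np.gradient(dx, t), np.gradient(dy, t)
kappa_fd = (dx * ddy - dy * ddx) / np.hypot(dx, dy)**3
kappa_pred = kappa / (1.0 + kappa * d)
# Should be small (finite-difference error only), away from the interval ends.
print(np.max(np.abs(kappa_fd[100:-100] - kappa_pred[100:-100])))
```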
4.1 Geometrical Preliminaries We assume that the world frame (WF) is centered at the target T. Let C 6∋ T be a regular piece-wise smooth di- ∗ This part of AM is called the initial turn and abbreviated IT. † This is abbreviated SMEC and means following the wall at the fixed distance distD[r(τ )], which is set up at the start of SMEC. ‡ SMT, which is sliding motion over the surface β = 0 3 Target z l N T (a) s (b) Fig. 1. (a) Definition of λ and ζ; (b) Behavior during IT. rected curve with natural parametric representation ̺(s), s ∈ [s−, s+]. The turning angle of C around a point p 6∈ C is de- noted by ∢pC, and ∢TANG [C] := ∢0T , where T (s), N (s) is the Frenet frame of C at s. § Let λ(s), ζ(s) and ψ(s) stand for the Cartesian coordinates and polar angle of −̺(s) in this frame (see Fig.1(a)), respectively, and let ′ denote dif- ferentiation with respect to s. The polar angle of ̺(s) in WF and the curvature of C at s are denoted by ϕ(s) and κ(s), respectively. To indicate the curve C, the symbols T, N, λ, ζ, κ, etc. may be supplied with the lower index C . The directed curve traced as s runs from s1 to s2 is denoted by C , where the specifier ± is used for closed curves. The superscript a means that the lemma is equipped with the number under which its formulation is given in the basic version of the paper. ±−→s2 s1 Lemma 4a The following relations hold whenever T 6∈ C: ψ′ = −κ + ζ(λ2 + ζ2)−1 λ′ = −1 + κζ ζ′ = −κλ , ϕ′ = ζ(λ2 + ζ2)−1 r := col (λ, ζ) 6= 0, ∢0r = ∢TC − ∢TANG [C] . , (9) (10) PROOF. Differentiation of the equation T = ̺ + λT + ζN and the Frenet-Serret formulas T ′ = κN, N ′ = −κT [7] yield that 0 = T + λ′T + λκN + ζ′N − ζκT. Equating the cumulative coefficients in this linear combination of T and N to zero gives the first two equations in (9). By virtue of them, the third and forth ones follow from [7] ψ′ = ζ′λ − λ′ζ λ2 + ζ2 , ϕ′ = y′x − x′y x2 + y2 . (11) The first relation in (10) holds since T 6∈ C. Let η(s) := ∢TANG[Ts−→s−0] + η0, where η0 is the polar angle of T (s−). The matrix Φη(s) of rotation through η(s) trans- forms the world frame into the Frenet one, and ̺(s) = h(s) col [cos ϕ(s), sin ϕ(s)]. So r(s) = −Φ−η(s)̺(s) = h(s) col {[cos[π + ϕ(s) − η(s)], sin[π + ϕ(s) − η(s)]}. Thus π + ϕ(s) − η(s) is the piece-wise continuous polar angle of r(s) that jumps according to the convention concerned by footnote §. This trivially implies (10). • T T Target . s <0 . d <0 e d m o Acute C in B . . s <0d >0 T Singular segments T N (a) T T The point is not singular T T (b) Fig. 2. (a) Behavior during IT; (b) Singular points. Corollary 1 Let ζ(s∗) = 0 and ς = ±. Then ςζ[s∗ + ς ≈0]sgnλ[s∗] < 0 if κ[s∗ + ς ≈0] > 0 ςζ[s∗ + ς ≈0]sgnλ[s∗] > 0 if κ[s∗ + ς ≈0] < 0 . (12) By (6) and the last inequality in (7), Lemma 4 yields ∢0rC(d∗) = −2π for d∗ ∈ [0, dtrig]. (13) Corollary 2 There exist F and d# > dtrig such that when- ever |d| ≤ d#, the set S(d) := {s ∈ ∂D : ζ∂D(s) = d} has no more than F connected components. PROOF. By the last inequality in (7), ∃d# : dtrig < d# < distD[T] ≤ ζ(s)2 + λ(s)2. Then s ∈ S(d) ∧ |d| ≤ d# ⇒ # > 0. Since the domain D |λ(s)| ≥ δ := p is compact, |λ′(s)| ≤ M < ∞ ∀s. So whenever s ∈ S(d) and |d| ≤ d#, the function λ(·) does not change its sign in the δM −1-neighborhood V (s) of s. distD[T]2 − d2 q Since ∂D is piece-wise analytical, each set {s : ±κ(s) > 0} and {s : κ(s) = 0} has finitely many connected compo- nents ∂± i and ∂0 ν , respectively. By the foregoing and (9), any intersection V (s) ∩ ∂± i , s ∈ S(d), |d| ≤ d# contains only one point s. 
Hence the entire arc ∂± con- i of the length ∂± tains no more than δ−1M + 1 such points. It remains to i ν such that ∂0 note that S(d) covers any ∂0 • (cid:12) (cid:12) Observation 1 SMEC with σ = ± ends when s ∈ S0 := {s ∈ ∂D : −d# < ζ∂D(s) < 0, ±λ∂D(s) > 0}. This set has no more than F connected components, called ±arcs. (cid:12) (cid:12) ν ∩ S(d) 6= ∅. ∂± i (cid:12) (cid:12) (cid:12) (cid:12) The second claim holds since λ′ < 0 on S0 due to (6), (9). 4.2 Technical Facts § At the corner points, the count of ∢0T progresses abruptly according to the conventional rules [7]. Lemma 5 The following two statements hold: (i) In the domain d ≤ dtrig ∧ ˙d > 0 ∨ d > dtrig, the surface 4 β = 0 is sliding, with the equivalent control [6] u ≡ 0; (ii) The surface ˙d = 0 is sliding in the domain of the radius R and so by Remark 1, β(t) > 0 and dtrig − 2R ≤ d < dtrig, ˙sβ > 0, σ ˙s > 0. (14) d(t) ≥ distD[r(0)] − kr − r(0)k ≤2R ≥ dtrig − 2R {z | (6) } > dsafe > R > 0. (19) PROOF. (i) Let h be the distance from the vehicle to T. Due to (2), ˙h = −v cos β, ˙β = h−1v sin β − u. So as the (4) state approaches the surface β = 0, we have ˙β → −usgnβ, which implies the first claim. (ii) Let α be the polar angle of the vehicle velocity in the frame T∂D[s(t)], N∂D[s(t)]. By (5), (6), and (14), 1 + κ[s(t)]d(t) > 0, and as is shown in e.g., [8], ˙s = v cos α 1 + κ(s)d ˙d = −v sin α, , ˙α = −κ(s) ˙s + u. (15) As the state approaches a point where ˙d = 0 and (14) holds, sin α → 0 cos α → sgn ˙s , ¨d → −v2 sgn ˙s − u v (cid:20) κ 1 + κd . (cid:21) (16) If the state remains at a definite side of the surface ˙d = 0, (3) and (4) yield that sgn(β ˙s) − 1 R ¨d (14) = −v2 (cid:20) ˙d>0−−→ ¨d+ := −v2 κ 1 + κd κ 1 + κd sgn ˙s + 1 R − (cid:20) 1 R , ¨d (cid:21) (14) = v2 1 R (cid:20) (cid:21) κ 1 + κd (cid:21) ˙d<0−−→ ¨d− := κ 1 + κd . + (cid:21) v2 σ (cid:20) (17) The proof is completed by observing that by (6), (14), ¨d+ = −v2 1 + κd − κR R(1 + κd) < 0since 1 + κd > 0 and d > dsafe > R ¨d− = v2 |κ| [Rκ + (d + R)sgnκ] R(1 + κd) > 0. (18) The subsequent proofs are focused on σ = +; the case σ = − is considered likewise. Lemma 6 If ˙d(t∗) < 0, claim (i) in Proposition 3 is true. (15) = −v ˙α (3) ≤ −v + u v κ cos α 1 + κd (cid:20) 1 R − |κ| 1 + κd (cid:21) (cid:20) = −v (cid:21) (cid:20) 1 R − 1 Rκ + dsgnκ . (cid:21) While d ≤ dtrig (in particular, while ˙d ≤ 0) the expression in the last square brackets is positive. This is true by (19) if κ ≥ 0; otherwise, since Rκ > R + dtrig by (6). So ˙α ≤ −δ < 0, i.e., the vector col (cos α, sin α) rotates clockwise. Here the signs of the first and second components equal those of ˙s and − ˙d, respectively, by (15) and so col ( ˙s, ˙d) evolves as is illustrated in Fig. 1(b). This and the conditions (14) for the sliding motion complete the proof. • More can be derived from the above proof. Lemma 9a Let s∗ and sb be the values of the continuously evolving s at the start and end of IT, respectively. During IT, σ ˙s ≥ 0 if σ ˙s(t∗) ≥ 0, and ˙s ones changes the sign otherwise. In any case, s runs from s∗ to sb in the direction σ during a last phase of IT. PROOF. Let σ = +. The map r 7→ (s, d) is the orientation- reversing immersion on the disc Din encircled by Cin. So it transforms any negatively oriented circle C ⊂ Din concen- tric with Cin into a curve ξ with ∢TANG [ξ] = 2π. 
Then the argument from the concluding part of the proof of Lemma 6 shows that as the robot once runs over Cin in the negative direction, the vector col ( ˙s, ˙d) intersects the half-axes of the frame in the order associated with counter clockwise rota- tion, each only once. This immediately implies the claim given by the first sentence in the conclusion of the lemma. If ˙s(t∗) ≥ 0, this claim yields that sb−s∗ ≥ 0. Let ˙s(t∗) < 0. As the robot once runs over Cin in the negative direction, ˙s > 0 and ˙d ≤ 0 when it passes the point B from Fig. 2(a), which corresponds to the second passage of s = s∗. Due to the order in which col ( ˙s, ˙d) intersects the half-axes, this combination of signs is possible only before ˙d vanishes for the first time, i.e., within IT. Thus the second occurrence of s = s∗ holds within IT. The proof is completed by noting that ˙s > 0 after this by the first claim of the lemma. • PROOF. Let σ = +. Due to (4), initially u ≡ −u. Let [t∗, τ ] denote the maximal interval on which u ≡ −u. For t ∈ (t∗, τ ), the vehicle moves clockwise along a circle Cin We proceed to the case where some of the vector fields is tangential to the discontinuity surface ˙d = 0. Since this may undermine uniqueness of the solution (its existence is 5 still guaranteed), the arguments become much more sophis- ticated. The first lemma establishes a required technical fact. To state it, we note that whenever d := distD[r] < R⋆(D), the system state (x, y, θ) is given by s, d, θ and along with ( ˙d, ˙s) 6= (0, 0), uniquely determines β ∈ (−π, π). and ti → t∗ as i → ∞, a proper decrease of every ti yields ˙d(ti) < 0 since d(t∗) = dtrig. However then in addition that ˙d(t) < 0 for t ≥ ti, t ≈ t∗ by (4), (22) and thus ˙d(t) < 0, d(t) < dtrig for t > t∗, t ≈ t∗, i.e., (i) holds in violation of the initial assumption. It follows that d(t∗ +≈0) ≥ dtrig. Lemma 7 If λC(d†)(s∗) 6= 0 for d† ∈ [0, dtrig], there exists δ > 0 such that whenever s∗ ≤ s0 < s < s∗ + δ and |d∗−d†| < δ, the following entailments hold with ς := sgn ˙s: ˙s 6= 0, ˙d ≥ 0, d ≥ d∗, ζC(d∗)(s0) ≥ 0 κ(s∗ + ς ≈0) < 0, ˙sλC(d†)(s0) > 0 ˙s 6= 0, ˙d ≤ 0, d ≤ d∗, ζC(d∗)(s0) ≤ 0 κ(s∗ + ς ≈0) ≥ 0, ˙sλC(d†)(s0) > 0 ⇒ ˙sβ > 0; (20) ⇒ ˙sβ ≤ 0. (21) In (21), ˙sβ < 0 if ζC(d∗)(s0) < 0 or κ 6≡ 0 on ∂Ds0→s. PROOF. We pick δ > 0 so that λC(d∗)(s) and κ(s) do not change the sign as s and d∗ run over (s∗, s∗ + δ) and (d† −δ, d† +δ), respectively. By (8), the curvature κC(d∗)(s) does not change its sign either, which equals sgnκ(s∗+ς ≈0). If the conditions from (20) hold and ς = +, application of the second equation from (9) to C(d∗) yields that ζC(d∗)(s) > 0. So the target polar angle in the s-related Frenet frame of C(d∗) belongs to (0, π/2). Transformation of this frame into that of the vehicle path consists in a move of the origin in the negative direction along the ζ-axis (since d ≥ d∗) and a clockwise rotation of the axes (since ˙d > 0, ˙s > 0). Since both operations increase the target bearing angle, β > 0. Formula (20) with ς = − and (21) are established likewise. • Lemma 7a Let dsafe ≤ d∗ := d(t∗) ≤ dtrig, time t∗ within mode B. Then for t > t∗, t ≈ t∗, the robot ˙d(t∗) = 0 at a i) performs the turn with u ≡ −σu if σ ˙s(t∗) < 0, d(t∗) = dtrig, and β(t∗) = 0; ii) undergoes SMEC if σ ˙s(t∗) > 0 and either (1) σβ(t∗) > 0 or (2) β(t∗) = 0 and κ[s(t∗) + sgn ˙s(t∗)≈0] < 0; iii) moves straight to the target if β(t∗) = 0, σ ˙s(t∗) > 0, κ[s(t∗) + sgn ˙s(t∗)≈0] ≥ 0. PROOF. Let σ = +. 
i) As t → t∗, (4) and (16) yield that + − 1 R ¨d|u=−u → v2 κ 1 + κd∗ (cid:21) (22) where κ := κ[s(t∗) ± 0] and the inequality holds since d∗ ≥ dsafe > R due to (6). 1 + κ[d∗ − R]] R(1 + κd∗) = − < 0, (cid:20) Now suppose that there is a sequence {ti} such that ti > t∗, d(ti) = dtrig ∀i, ti → t∗ as i → ∞. Then ˙d(ti) = 0 and so β(ti) < 0 due to (20). By continuity, β < 0 in a vicinity of the system state at t = ti. Then any option from (4) yields u = −u and so u(t) ≡ −u ∀t ≈ ti by the definition of Filippov’s solution. Hence d(ti) = dtrig ∧ (22) ˙d(ti) = 0 ⇒ d(ti+≈0) < dtrig, in violation of the foregoing. So d > dtrig and u = sgnβ for t > t∗, t ≈ t∗ by (4), and by Lemma 5, SMT is continued. Then the last relation in (16) (with u := 0) and κ[s(t∗) −≈0] < 0 imply the contradiction d(t∗ +≈0) < dtrig to the foregoing, which proves i). Let κ[s(t∗) −≈0] ≥ 0. So far as the controller is first proba- tionally set to the submode related with ˙d < 0, this submode will be maintained longer by (22). ii.1) If d(t∗) < dtrig, the claim is true by Lemma 5. Let d(t∗) = dtrig. If there is a sequence {ti} such that ti > t∗, d(ti) < dtrig ∀i and ti → t∗ as i → ∞, a proper decrease ˙d(ti) < 0. Let τi be the of every ti yields in addition that minimal τ ∈ [t∗, ti] such that d(t) < dtrig and ˙d(t) < 0 for t ∈ (τ, ti]. For such t, u ≡ −u by (4) and so ¨d > 0 by (17) and (18). So ˙d(τi) < ˙d(ti) < 0, τi > t∗, and d(τi) = dtrig, otherwise τi is not the minimal τ . Thus at time τi, the assumptions of Lemma 6 hold except for β(τi) = 0. In the proof of this lemma, this relation was used only to justify that β > 0, which is now true by assumption and the continuity argument. So by Lemmas 5 and 6, sliding motion along an equidistant curve C(d†) with d† < dtrig is commenced at the time t > τi when ˙d(t) = 0 and maintained while β > 0 and i→∞−−−→ t∗. This ˙s > 0, in violation of d(τi) = dtrig ∀i ∧ τi contradiction proves that d(t∗ +≈0) ≥ 0. Now suppose that there exists a sequence {ti} such that ti > t∗, d(ti) > dtrig ∀i and ti → t∗ as i → ∞. Since d(t∗) = 0, a proper perturbation of every ti yields in addition ˙d(ti) > 0. Let τi be the minimal τ ∈ [t∗, ti] such that that d(t) > dtrig for t ∈ (τ, ti]. For such t, the continuity argument gives β > 0, (4) yields u ≡ u and so ¨d < 0 by (17) and (18). Hence ˙d(τi) > 0, τi > t∗, d(τi) = dtrig and so d(τi −≈ 0) < 0, in violation of the foregoing. This contradiction proves that d(t∗ +≈0) ≡ 0 indeed. ii.2) We first assume that d∗ < dtrig. Due to (17) and (18) ¨d|u=−u > 0 and ¨d|u=u < 0 for t ≈ t∗. (23) Let i) fail to be true and κ[s(t∗) −≈ 0] < 0. If there exists an infinite sequence {ti} such that ti > t∗, d(ti) < dtrig ∀i So it is easy to see that ˙d(t∗ +≈0) ≥ 0 and d(t∗ +≈0) ≥ d∗. ˙d(t∗ +≈ 0) 6≡ 0 and so d(t∗ +≈ 0) > d∗. In Suppose that 6 ˙d(τ ) > any right-vicinity (t∗, t∗ + δ), there is τ such that 0. For any such τ that lies sufficiently close to t∗, (20) yields β(τ ) > 0. So u = u by (4) and ¨d(τ ) < 0 by (23). Hence the inequality ˙d(t) > 0 is not only maintained but also enhanced as t decreases from τ to t∗, in violation of the assumption ˙d(t∗) = 0 of the lemma. This contradiction shows that ˙d(t∗ +≈0) ≡ 0, thus completing the proof of ii). It remains to consider the case where d∗ = dtrig. By the arguments from the previous paragraph, it suffices to show ˙d(t∗ +≈ 0) ≥ 0 and d(t∗ +≈ 0) ≥ dtrig. Suppose that that d(t∗ +≈ 0) 6≥ dtrig, i.e., there exists a sequence {ti} such that ti > t∗, d(ti) < dtrig ∀i and ti → t∗ as i → ∞. 
Since d(t∗) = dtrig, a proper decrease of every ti gives ˙d(ti) < 0 in addition. By (4), (23), the inequality ˙d(t) < 0 is maintained and enhanced as t decreases from ti, remaining in the domain {t : d(t) < dtrig}. Since ˙d(t∗) = 0, there is τi ∈ (t∗, ti) such that d(τi) = dtrig and ˙d(t) < 0 ∀t ∈ [τi, ti). Hence d(τi −≈ 0) > dtrig and if i is large enough, there is θi > ti such that d(θi) = dtrig and d(t) < dtrig ∀t ∈ (τi, θi). ˙d(t) < 0 ∀t ∈ Furthermore, there is si ∈ (τi, θi) such that ˙d(t) ≥ 0 ∀t ∈ [si, θi]. Then β(θi) > 0 (τi, si), ˙d(si) = 0, by (20). We note that β(t∗) = 0 ⇒ ζP(t∗) = 0 for the vehicle path P and so ζP(t) → 0 as t → t∗. This and (9) (applied to P) imply that the sign of ˙β is determined by the sign of the path curvature: u = ±u ⇒ ± ˙β < 0 ∀t ≈ t∗. (24) Suppose that ∃τ∗ ∈ [τi, si) : β(τ∗) ≥ 0. Since u(t) = −u ∀t ∈ (τi, si), we see that β(si) > 0, ˙d(si) = 0, ds := d(si). By Lemma 5, sliding motion along the ds-equidistant curve is commenced at t = si and maintained while β > 0, whereas β > 0 until θi (if i is large enough) due to (20). However, this is impossible since ds < dtrig and d(θi) = dtrig. This contradiction proves that β(t) < 0 ∀t ∈ [τi, si). The same argument and the established validity of ii.2) for d∗ := ds < dtrig show that β(si) < 0. Since β(θi) > 0, there exists ci ∈ (si, θi) such that β(ci) = 0 and β(t) > 0 ∀t ∈ (ci, θi]. If ˙d(c) = 0 for some c ∈ (ci, θi), Lemma 5 assures that sliding motion along the d(c)-equidistant curve is started at t = c and is not terminated until t = θi, in violation of d(θ) = dtrig. For any t ∈ (ci, θi), we thus have ˙d(t) > 0. Hence u(t) = u by (4), ˙β < 0 by (24), and so β(ci) = 0 ⇒ β(θi) < 0, in violation of the above inequality β(θi) > 0. This contradiction proves that d(t∗ +≈0) ≥ dtrig. Now suppose that ˙d(t∗ +≈0) 6≥ 0. Then there is a sequence {ti} such that ti > t∗, ˙d(ti) > 0 ∀i and ti → t∗ as i → ∞; a proper increase of every ti gives d(ti) > dtrig in addition. By (20), d(t) > dtrig ∧ ˙d(t) > 0 ⇒ β(t) > 0 for t ≈ t∗ and so u(t) = u by (4) and ¨d(t) < 0 by (23). So as t decreases from ti to t∗, the derivative ˙d(t) > 0 increases while d > dtrig, in violation of the implication d(t) = dtrig ⇒ ˙d(t) = 0 for t ∈ [t∗, ti]. This contradiction completes the proof. i=1 such that ˙d(ti) > iii) Were there a sequence {ti}∞ 0, β(ti) > 0 ∀i and ti → t∗ + 0 as i → ∞, (4), (23), and (24) imply that as t decreases from ti to t∗ for large enough i, the inequalities ˙d(t) > 0, β(t) > 0 would be preserved, in violation of ˙d(t∗) = 0, β(t∗) = 0. It follows that ˙d(t) > 0 ⇒ β(t) ≤ 0 for t ≈ t∗, t > t∗. ˙d(ti) > Now assume existence of the sequence such that 0, β(ti) ≤ 0 ∀i and ti → t∗ + 0 as i → ∞. For large i such that β(ti) < 0, (4)∧(23) ⇒ u(t) = −u, and ˙d(t) increases and so remains positive as t grows from ti until β = 0. By (24), u−1|β(ti)| time units later the vehicle becomes headed to the target, which is trivially true if β(ti) = 0. This and (i) of Lemma 5 imply that then the sliding motion along the surface β = 0 is commenced. It is maintained while κ[s(t)] ≥ 0. Since ti → t∗ and β(ti) → β(t∗) = 0 as i → ∞, this motion occurs for t > t∗, i.e., iii) does hold. It remains to examine the case where ˙d(t∗ +≈0) ≤ 0 and so d(t∗ +≈ 0) ≤ d∗. Suppose first that either ˙d(t∗ +≈ 0) 6≡ 0 or κ[s(t∗) +≈ 0] 6≡ 0. Then β(t∗ +≈ 0) < 0 by (21) and u = −u at any side of the discontinuity surface ˙d = 0 by (4). Hence u(t∗ +≈ 0) ≡ −u, which yields ˙d(t∗ + 0) > 0 by (23), in violation of ˙d(t∗ + 0) = 0. 
This contradiction proves that ˙d(t∗ +≈0) ≡ 0, κ[s(t∗) +≈0] ≡ 0. Then SMEC and SMT are initially the same, and iii) does hold. • Remark 3 The times of switches between the modes of the discontinuous control law (4) do not accumulate. To prove this, we first note that the projection of any vehicle position r within mode B onto ∂D is well defined due to (9). Let s− i be its values at the start and end of the ith occurrence of the mode, respectively. By Lemma 9 and (vi) of Proposition 3, s monotonically sweeps an arc γi of i during the concluding part of B. ∂D with the ends s− i and s+ i , s+ Definition 1 The vehicle path or its part is said to be single if the interiors of the involved arcs γi are pairwise disjoint and in the case of only one arc, do not cover ∂D. Let P and Q be the numbers of the connected components of Sκ := {s : κ(s) < 0} and Sζ := {s : ζ∂D(s) = 0}, respectively. They are finite due to Corollary 2. Lemma 9 Any single path accommodates no more than (P + 1)(Q + 2) SMT’s. PROOF. As was shown in the proof of (v) in of Proposi- tion 3, the number of SMT’s within a common mode B does not exceed P + 1. SMT between the ith and (i + 1)th occur- rences of B starts at a position s† ∈ γi = [s− i ] where i , s+ 7 i+1 where i+1) ≥ 0. Hence any arc γi, except for the first and i and i of Sζ and {s : ζ∂D(s) < 0}, respectively, such that the i . Hence i′ ∀i 6= i′, and so the total number of the arcs γi ζ∂D(s†) = −d < 0 and ends at the position s− ζ∂D(s− last ones, intersects adjacent connected components Cc= Cc< left end-point of Cc= i Cc= i does not exceed Q + 2, which competes the proof. is the right end-point of Cc< 6= Cc= • Proof of Remark 3. Suppose to the contrary that the times ti when σ is updated accumulate, i.e., ti < ti+1 → t∗ < ∞ as i → ∞. At t = ti, a SMT is terminated, and so d(ti) = dtrig, ˙d(ti) ≤ 0, β(ti) = 0. During the subsequent AM, d ≤ dtrig. At such distances, (15) implies that | ¨d| ≤ Md, |¨s| ≤ Ms, where Md, Ms > 0 do not depend on the system state. Since IT ends with ˙d = 0, this AM lasts no less d | ˙d(ti)| time units. Hence ˙d(ti) → 0 as i → ∞. than M −1 This and (15) imply that ˙s(ti) − vsgn ˙s(ti) → 0 as i → ∞. So far as IT lasts no less than M −1 | ˙s(ti)| time units if ˙s is reversed during IT, the sign of ˙s(t) is the same for ti < t < t∗ and large enough i. So the related part of the path is single. By Lemma 9, this part can accommodate only a finite number of SMT’s, in violation of the initial hypothesis. This contradiction completes the proof. s 5 Proof of (ii) in Theorem 1 This claim is identical to Remark 4a from the basic paper. We first alter the control strategy by replacement of the ran- dom machinery of choosing the turn direction σ at switches A 7→ B by a deterministic rule. Then we show that the al- tered strategy achieves the control objective by making no more than N switches, where N does not depend on the ini- tial state of the robot. However, this strategy cannot be im- plemented since it uses unavailable data. The proof is com- pleted by showing that with probability 1, the initial random- ized control law sooner or later gives rise to N successive switches identical to those generated by the altered strategy. We introduce the control law A that is the replica of (4) except for the rule to update σ when A 7→ B. Now for the first such switch, σ is set to an arbitrarily pre-specified value. After any subsequent occurrence A† of this mode, σA† if CA† does not contain the target −σA† if CA† contains the target . 
(25) σ :=    Proposition 10 Under the law A, the target is reached for a finite time, with making no more than N switches A 7→ B, where N does not depend on the vehicle initial state. The next two subsections are devoted to the proof of Propo- sition 10. In doing so, the idea to retrace the arguments jus- tifying global convergence of the algorithms like the Pledge one [4] that deal with unconstrained motion of an abstract point is troubled by two problems. Firstly, this idea assumes that analysis can be boiled down to study of a point moving according to self-contained rules coherent in nature with the above algorithms. i.e., those like ’move along the bound- ary’, ’when hitting the boundary, turn left’, etc. However, this is hardly possible, at least in full, since the vehicle be- havior essentially depends on its distance from the boundary. For example, depending on this distance at the end of mode B, the vehicle afterwards may or may not collide with a forward-horizon cusp of the obstacle. Secondly, the Pledge algorithm and the likes are maze-escaping strategies; they do not find the target inside a labyrinth when started outside it. Novel arguments and techniques are required to justify the success of the proposed algorithm in this situation. In what follows, we only partly reduce analysis of the vehicle motion to that of a kinematically controlled abstract point. This reduction concerns only special parts of the vehicle path and is not extended on the entire trajectory. The obstacle to be avoided by the point is introduced a posteriori with regard to the distance of the real path from the real obstacle. To justify the convergence of the abstract point to the target, we develop a novel technique based on induction argument. 5.1 Deterministic Algorithm and its Properties We start with study of kinematically controlled point. The symbol [r1, r2] stands for the straight line segment di- rected from r1 to r2; γ1 ⋆ γ2 is the concatenation of directed curves γ1, γ2 such that γ1 ends at the origin of γ2. Let an occurrence A† of mode A holds between two modes B and let it start at r♦ = r(t♦) and end at r∗ = r(t∗). Due to (6), distD[r∗] = distD[r♦] = dtrig are attained at unique boundary points s♦ and s∗, respectively. They divide C into two arcs. Being concatenated with η := [s∗, r∗] ⋆ [r∗, r♦] ⋆ [r♦, s♦], each of them gives rise to a Jordan curve encircling a bounded domain, one of which is the other united with D. The smaller domain is denoted CA† ; it is bounded by η and one of the above arcs γA† . Let σA† = ± be the direction (on ∂D) of the walk from s♦ to s∗ along γA† . 5.2 The Symbolic Path and its Properties In this subsection, ’ray’ means ’ray emitted from the target’, and we consider a domain D satisfying the following. Assumption 3 The boundary C := ∂D consists of finitely many (maybe, zero) straight line segments and the remainder on which the curvature vanishes no more than finitely many times. The domain D does not contain the target. We also consider a point r moving in the plane according to the following rules: r.1) The point moves outside the interior of D; 8 r.2) Whenever r 6∈ D, it moves to T in a straight line; r.3) Whenever r hits ∂D, it proceeds with monotonic mo- tion along the boundary, counting the angle β; r.4) This motion lasts until β = 0 and new SMT is possible, then SMT is commenced; many singular parts. A boundary point s ∈ C is said to lie above D if there exists δ > 0 such that ((1 − δ)s, s) ⊂ D and (s, (1+δ)s)∩D = ∅. 
If conversely ((1−δ)s, s)∩D = ∅ and (s, (1 + δ)s) ⊂ D, the point is said to lie below D. r.5) The point halts as soon as it arrives at the target. Formulas (9) and (11) imply the following. The possibility from r.4) means that D does not obstruct the initial part of SMT. When passing the corner points of ∂D, the count of β obeys (10) and the conventional rules adopted for turning angles of the tangential vector fields [7], and is assumed to instantaneously, continuously, and monotonically run between the one-sided limit values. The possibility from r.4) may appear within this interval. To specify the turn direction in r.3), we need some con- structions. Let the points s± ∈ C lie on a common ray and (s−, s+) ∩ C = ∅. One of them, say s−, is closer to the target than the other. They divide C into two arcs. Being concatenated with (s−, s+), each arc gives rise to a Jordan curve encircling a bounded domain. One of these domains is the other united with D. The smaller domain C(s−, s+) is called the cave with the corners s−, s+. It is bounded by (s−, s+) and one of the above arcs γC. To complete the rule r.3), we note that any SMT except for the first one starts and ends at some points s♦, s∗ ∈ C, which cut out a cave C[s♦, s∗]. r.3a) After the first SMT, the turn is in an arbitrarily pre- specified direction; r.3b) After SMT that is not the first the point turns · outside C[s♦, s∗] if the cave does not contain the target; · inside the cave C[s♦, s∗] if the cave contains the target. Definition 2 The path traced by the point obeying the rules r.1)—r.5), r.3a), r.3b) is called the symbolic path (SP). Proposition 11 SP arrives at the target from any initial po- sition. The number of performed SMT’s is upper limited by a constant N independent of the initial position. The remainder of the subsection is devoted to the proof of this claim. The notations s, T, N, r, λ, ζ, κ, ψ, ϕ are at- tributed to C = ∂D. At the corner points of C, these vari- ables except for s have one-sided limits and are assumed to instantaneously, continuously, and monotonically run be- tween the one-sided limit values. An arc of C is said to be regular if ζ (non-strictly) does not change its sign on this arc, depending on which the arc is said to be positive/negative (or ±arc). The regular arc is maximal if it cannot be extended without violation of the regularity. A connected part of C and its points are said to be singular if ζ strictly changes the sign when passing it and, if this part contains more than one point, is identically zero on it; see Fig. 2(c). The singular arc is a segment of a straight line since κ ≡ 0 on it due to (9). The ends of any maximal regular arc are singular. Due to Assumption 3 and (9), the boundary C has only finitely Observation 2 As s moves in direction σ = ± over a η-arc (η = ±) of C, we have ση ˙ϕ ≥ 0. Any point of ±arc that is not singular lies above/below D. Lemma 12 As s continuously moves along a regular arc, β evolves within an interval of the form ∆ := [πk, π(k + 1)], where k is an integer. When s reaches a singular point, β arrives at the end of ∆ associated with the even or odd integer, depending on whether s moves towards or outwards the target at this moment, respectively. PROOF. Since ζ does not change its sign, the vector r does not trespass the λ-axis, whereas β is the polar angle of this vector. This gives rise to the first claim of the lemma. The second one is immediate from the first claim. • Lemma 13 Whenever SP progresses along C in direction σ = ±, we have σβ ≥ 0. PROOF. 
This is evidently true just after any SMT. During the subsequent motion along C, the inequality can be vio- lated only at a position s where β = 0 and either s is a corner singular point or κ(s + σ≈0) > 0 since κ(s + σ≈0) ≤ 0 ⇒ σβ(s + σ≈0) ≥ 0 by the third relation from (9). However, at such position, motion along C is ended. • The cave C(s−, s+) is said to be positive/negative (or ±cave) if the trip from s− to s+ over γC is in the respective direction of C. By Observation 2, s moves from a +arc to a −arc in this trip and so passes a singular part of C. The total number of such parts inside γC is called the degree of the cave. ¶ Lemma 14 For any cave of degree M = 1, the arc γ := γC consists of the positive γ|s−→s∗ +→s+ sub-arcs and a singular part [s∗ −, s∗ +], the tangential vector T (s) (that is co-linear with [T, s] if s is the corner point) is directed outwards T if the cave is positive and does not contain T or negative and contains T. Otherwise, this vector is directed towards T. and negative γ|s∗ +]. For s ∈ [s∗ −, s∗ − PROOF. The first claim is evident. Let the cave be positive and T 6∈ C(s−, s+). Suppose that T (s) is directed towards T. Then the same is true for s := s∗ +. Hence ζ(s∗ + + 0) ≤ + + 0) > 0 ⇒ κ(s∗ 0 and ζ(s∗ + +≈ 0) > 0 since otherwise, ζ(s∗ + +≈0) ≥ 0 by (9), in violation of the definition of the singular part. In any case, ((1 − + + 0) = 0 ⇒ λ(s∗ ¶ Possible singular parts at the ends of γC are not counted. 9 +, s∗ +) ∩ D = ∅ for some δ > 0. Since T 6∈ C(s−, s+), δ)s∗ the segment [0, s∗ +) intersects γC, cutting out a smaller cave Csm inside C(s−, s+). The singular part inside Csm is the second such part in the original cave, in violation of M = 1. This contradiction shows that T (s) is directed outwards T. Now suppose that T ∈ C(s−, s+) and T (s) is directed out- wards T. Let a point s∗ moves in the positive direction along +→s+ . The ray containing s∗ monotonically rotates by γ|s∗ Observation 2 and contains a continuously moving point smov + to s+, the segment − ∈ γ|s∗ − , s∗) sweeps the entire cave C[s−, s+], and so this cave (smov does not contain T, in violation of the assumption. This con- tradiction proves that T (s) is directed towards T. −→s− . As s∗ runs from s∗ The second claim for negative caves and the third claim are established likewise. • Lemma 15 If SP enters a cave without the target, it leaves the cave through the other corner with β 6= 0. In this ma- neuver, the direction of motion along C is not changed, no point of C is passed twice, and the number of SMT’s does not exceed the cave degree. PROOF. Let SP enter the cave in the positive direction; the case of the negative direction is considered likewise. The proof will be by induction on the cave degree M . Let M = 1. (i) Suppose first that the cave is positive and so s enters it through s− moving over a +arc. By Lemma 14, the point s moves outwards the target whenever s ∈ [s∗ +], and so β ≥ π by Lemmas 12 and 13. As s moves over the subsequent −arc, ζ becomes negative and so the inequality is kept true by Lemma 12. Thus s leaves the cave through s+ with β ≥ π > 0, having made no SMT. −, s∗ (ii) Let the cave be negative. Then s enters it through s+ moving over the negative arc. By Lemma 14, the point s moves towards the target whenever s ∈ [s∗ +]. Since ζ(s+ + 0) ≤ 0, Lemma 13 yields β(s+ + 0) ≥ π. By −, s∗ Lemma 12, β ≥ π until s∗ +] by Lemma 14. When s passes the entire [s∗ +], the sign of ζ reverses from − to + and so β > 2π just after the passage of s∗ −. 
It remains to note that β ≥ 2π > 0 while s moves over the +arc from s∗ + and so β ≥ 2π at s ∈ [s∗ − to s− by Lemma 12. −, s∗ −, s∗ Suppose that the claim of the lemma is true for any cave with degree ≤ M , and consider a cave of degree M + 1. Let this cave be positive. Then s enters it through the lower corner s− along a positive arc. We also consider the accom- panying motion of the ray containing s. This ray contains a continuously moving point s⊛ + ∈ C that starts at s+. This motion is considered until a singular part of C appears on the ray segment [s, s⊛ +] for the first time. Three cases are possible at this position. (a) The singular part [s∗ − = s∗ s∗ +] ⊂ (s, s⊛ +); see Fig. 3(a), where + =: s∗. By successively applying the induction −, s∗ +s * +s +s * +s +s * * s s -s -ss +s -s s target (a) target (b) target (c) Fig. 3. The first singular point +) and C(s∗ +, s⊛ +), we see that SP arrives + in the positive direction and with β > 0. While s + to s+ over the −arc, the vector r(s) is below hypothesis to C(s, s∗ at s⊛ moves from s∗ the λ-axis and so β ≥ π > 0 by Lemma 12. + = s∗ −; see Fig. 3(b), where s∗ (b) The singular point s⊛ − = s∗ + =: s∗. By successively applying the induction hypothesis +), we see that SP arrives at s⊛ to C(s, s#) and C(s#, s⊛ + in the positive direction and with β > 0. So β(s⊛ +) ≥ 2π and SP proceeds along the −arc to s+ with β ≥ π > 0 by Lemma 12, which completes the proof. (c) The singular point s; see Fig. 3(c). If β > 0 at this point, SP enters the cave C[s, s⊛ +] of degree ≤ M and by the induction hypothesis, arrives at s⊛ + moving in the positive direction and with β > 0. If conversely β = 0, SP undergoes SMT, which cannot be terminated at the target since it does not belong to the cave at hand. So it is terminated at some point s# ∈ γC. Since T does not lie in the sub-cave C(s, s#) of the original cave, the vehicle turns right at s# and thus proceeds along C in the positive direction. By applying the induction hypothesis to C(s#, s⊛ +), we see that SP arrives at s⊛ + moving in the positive direction and with β > 0 in any case. The proof is completed like in the cases (a) and (b). The case where the cave is negative is considered likewise. Lemma 16 Suppose that after SMT starting and ending at the points s♦ and s∗, respectively, the direction of the motion along C is reversed. Then the cave C[s♦, s∗] does not contain T but contains the entire path traced before SMT at hand. PROOF. Let the motion direction at s = s♦ be +; the case of − is considered likewise. Since on arrival at s∗, the left turn is made, C[s♦, s∗] does not contain T by r.3b). Suppose that the path traced before SMT at hand is not contained by this cave, i.e., the point enters this cave before. Since this cannot be done during another SMT, the point enters the cave through either s♦ or s∗. In the first case, s♦ is passed twice in the opposite directions, in violation of Lemma 15. In the second case, s♦ is passed with β > 0 by the same lemma and so SMT cannot be commenced. The contradiction obtained proves that the initial part of SP is inside the cave. • 10 Lemma 17 If SP progresses along C in a cave not contain- ing the target, it leaves this cave through one of its corners. During this maneuver, SP passes no point of C twice and makes no more SMT’s than the degree of the cave. PROOF. For the definiteness, let the cave be positive; the case of the negative cave is considered likewise. The proof will be by induction on the degree M of the cave. Let M = 1. 
We employ the notations from Lemma 14. (α) The motion is started on γ|s∗ The claim is evident. +→s− in the direction −. (β) The motion is started on γ|s+→s∗ Then the point necessarily arrives at s∗ ative direction. Thus the situation is reduced to (α). in the direction −. +, moving in the neg- + (γ) The motion is started on γ|s∗ −→s+ in the positive direc- tion. The claim of the lemma is justified by the concluding arguments from (i) in the proof of Lemma 15. (δ) The motion is started on γ|s−→s∗ Then the point necessarily arrives at s∗ itive direction. Thus the situation is reduced to (γ). in the direction +. −, moving in the pos- − Now suppose that the claim of the lemma is true for any cave with degree ≤ M , and consider a cave of degree M + 1. Let this cave be positive for the definiteness; the case of the negative cave is considered likewise. We also consider an auxiliary motion of the point over C from s− into the cave and the accompanying motion of the ray containing s until one of the situations from Fig. 3 occurs. +→s+ Case (a) from Fig. 3. (a.1) If the motion is started on in direction + or on γ|s→s− in direction −, the γ|s⊛ claim of the lemma is justified by the concluding arguments from (i) in the proof of Lemma 15. + −→s⊛ , the induction (a.2) If the motion is started on γ|s∗ −, s⊛ hypothesis applied to the cave C[s∗ +] of degree ≤ M ensures that the point arrives at either s⊛ + or s∗ −. In the first case, it arrives in direction +, thus reducing the situation to (a.1). In the second case, it arrives in direction −. If β 6= 0 at this position, the point enters the cave C[s∗ −, s] in direction − and afterwards leaves it through s in the same direction by Lemma 15. If β = 0, SMT is commenced, which ends at the position s with the left turn since C[s∗ −, s] does not contain T. Hence in any case, the motion proceeds in direction − from the position s, which reduces the situation to (a.1). (a.3) The case where the motion is started on γ|s→s∗ considered likewise. − , is (a.4) The cases where the motion starts on γ|s⊛ in di- rection − or on γ|s→s− in direction +, are trivially reduced to (a.2) and (a.3), respectively. +→s+ 11 Case (b) from Fig. 3. (b.1) The cases where the motion starts on γ|s⊛ in direction + or on γ|s→s− in direction −, is considered like (a.1). +→s+ (b.2) If the start is on γ|s→s# , the induction hypothesis applied to C[s, s#] ensures that the point arrives at either s or s#. In the first case, it arrives in direction −, thus reducing the situation to (b.1). In the second case, it arrives in direction + and then enters the cave C[s#, s⊛ +]. By Lemma 15, the point leaves this cave through s⊛ + in direction + and with β > 0, thus reducing the situation to (b.1). + , the induction (b.3) If the motion commences on γ|s#→s⊛ hypothesis applied to the cave C[s#, s⊛ +] of degree ≤ M ensures that the point arrives at either s# or s⊛ +. In the first case, the arrival is in direction −, after which the situation is reduced to (b.2). In the second case, the arrival is in direction +. If β 6= 0 at this moment, the motion proceeds along in direction +, and the situation is reduced to (b.1). γ|s⊛ If β = 0, SMT is commenced, which ends at the position s with the left turn since the cave C[s⊛ +, s] does not contain the target. Hence the motion proceeds along γ|s→s− in direction −, and the situation is still reduced to (b.1). +→s+ (b.4) The cases where the motion starts on γ|s⊛ in di- rection − or on γ|s→s− in direction +, are trivially reduced to (b.3) and (b.2), respectively. 
+→s+ Case (c) from Fig. 3. (c.1) The cases where the motion in direction + or on γ|s→s− in direction starts on γ|s⊛ −, is considered like (a.1). +→s+ + , the induction hypothesis (c.2) If the start is on γ|s#→s⊛ applied to C[s#, s⊛ +] yields that the point arrives at either s⊛ + or s#. In the first case, the arrival direction is + and the situation is reduced to (b.1). In the second case, the point arrives in direction − and then enters C[s#, s]. By Lemma 15, the point leaves this cave through s in direction − and with β > 0. Thus we arrive at (b.1) once more. (c.3) If the motion commences on γ|s#→s, the induction hypothesis applied to the cave C[s#, s] of degree ≤ M en- sures that the point arrives at either s# or s. In the first case, the arrival is in direction +, after which the situation is re- duced to (b.2). In the second case, the arrival is in direction −, after which the situation reduces to (b.1). (c.4) The cases where the motion starts on γ|s⊛ in di- rection − or on γ|s→s− in direction +, are trivially reduced to (c.2) and (c.3), respectively. +→s+ • Lemma 18 Any part of SP where it progresses over the boundary ∂D ends with SMT. PROOF. is by retracing the proof of (v) in Proposition 3. Let K be the number of singular parts of the boundary ∂D. Lemma 19 If every cave examined in r.3b) does not contain the target, SP consists of the initial P− and terminal P+ sub-paths (some of which may contain only one point) such that each accommodates no more than K SMT’s, no point of C is passed twice within P−, whereas the direction of motion along C is not altered within P+. PROOF. Suppose first that the initial position lies in some cave. Among such caves, there is one enveloping the others. By Lemma 17, SP leaves this cave and the related sub-path satisfies the properties stated in Lemma 19. If the initial po- sition lies outside any cave, this sub-path is taken to consist of only this position. By Lemma 16, the direction of the mo- tion along C is not changed on the remaining sub-path P+ and P+ does not go inside the above maximal cave. Suppose that within P+, SP accommodates more than K SMT’s. Any of them starts at some singular part with β = 0. Hence SP passes some singular point with β = 0 at least twice and thus becomes cyclic. Now we consider the related minimal cyclic part CP of SP that starts and ends with commencing a SMT at a common point. Due to the constant direction, the closed curve CP is simple. It follows that ∢TANG [CP] = ±2π, whereas ∢T CP = 0 since W = 0 for all bypassed caves and T 6∈ D. Hence ∢0r = ∓2π by (10), whereas CP starts and ends with β = 0 and so ∢0r = 0. This contradiction completes the proof. • Lemmas 18 and 19 give rise to the following. Like in the proof of Lemma 15, we consider the motion of the ray containing s until a singular point appears on the segment [s, s∗ +] for the first time, and examine separately three possible cases depicted in Fig. 3. (a) The singular point s∗ ∈ (s, s∗ +); see Fig. 3(a). The target is contained by the cave C[s, s∗] of degree ≤ M , which is entered in the positive direction and by Lemma 12, with 0 ≤ β ≤ π. The induction hypothesis competes the proof. (b) The singular point s∗ = s∗ +; see Fig. 3(b). The target is evidently contained by the cave C[s, s#] of degree ≤ M . The proof is completed like in the previous case. (c) The singular point s∗ = s; see Fig. 3(c). 
If at s∗, the point moves outwards T, the arguments from the second paragraph in the proof of Lemma 14 show that the cave does not contain T, in violation of the assumption of the lemma. Hence at s∗, the point moves towards T and so β = 0 by Lemma 12 and D does not obstruct the initial part of SMT, as was shown in the proof of Lemma 14. Thus SMT is commenced at s∗. If it is terminated at T, the proof is completed. Otherwise, it arrives at s# ∈ γC, as is shown in Fig. 3(c). Evidently, the cave C[s#, s] does not contain the target. So on reaching s#, the point turns right and continues moving in the positive direction over a new positive arc and with β ∈ [0, π]. So the proof is completed by applying the induction hypothesis to the cave C[s#, s∗+] of degree ≤ M.

Proof of Proposition 11 is straightforward from Corollary 3 and Lemma 20.

Corollary 3 If every cave examined in r.3b) does not contain T, SP arrives at T by making no more than 2K SMT's.

5.3 Proof of Proposition 10.

Lemma 20 If SP enters a cave containing T over a positive arc with |β| ≤ π, it arrives at T not leaving the cave. During this maneuver, no point of C is passed twice and the number of SMT's does not exceed the degree of the cave.

PROOF. Let the cave be entered in direction +; the case of − is considered likewise. The proof will be by induction on the degree M of the cave C[s−, s+]. Since s enters the cave over a positive arc, the entrance is through s−.

Let M = 1. By Lemma 14, s moves towards T when reaching the singular part [s∗−, s∗+] of the cave. At this position, β = 0 by Lemma 12 and D does not obstruct the initial part of SMT, as was shown in the proof of Lemma 14. So SMT is commenced. If it is not terminated at T, the segment [0, s∗−) intersects γC, cutting out a smaller cave within the original one. The singular part inside this new cave is the second such part within the original cave, in violation of M = 1. Hence T is reached and only one switch B ↦ A is made.

Let P stand for the directed path traced by the vehicle under the control law A from Subsect. 5.1. We first show that after a slight modification, this path can be viewed as SP for some domain provided that P is single (see Definition 1). This permits us to employ the results of Subsect. 5.2.

We use the notations s−i, s+i, γi introduced before Definition 1, note that for s ∈ γi the distance d from the vehicle to the obstacle is a function d = di(s) of s, and put:

D∗ := { r : d := distD[r] < d⋆(D) and either s := s(r) ∈ γi ∧ d ≤ di(s) or s ∉ ∪iγi ∧ d ≤ dtrig }.   (26)

If σ ˙s < 0 at the start of the ith mode B, the abscissa s−i is passed twice during IT by Lemma 9. For every such i, the real path between these two passages is replaced by the motion along the straight line segment, which gives rise to the modified path P∗.

Now suppose that the conclusion of the lemma is true for any cave with degree ≤ M, and consider a cave of degree M + 1.

Observation 3 Let the original path be single. The modified path P∗ is SP for D∗.

Indeed, this path can be viewed as a trace of a point obeying the rules r.1)–r.5). To ensure r.3a), the direction should be pre-specified to match that of P∗. The property r.3b) is satisfied due to (25) and the second inequality from (7).

Lemma 21 For a single path, the set (26) satisfies Assumption 3 and its boundary has no more than Ns singular parts, where Ns is completely determined by D and T.

PROOF. The last claim in Assumption 3 holds by (7), (26).
The boundary ∂D consists of parts traced during 1) SMT’s, 2) SMEC’s, 3) arcs of circles traced during IT’s, and 4) seg- ments of normals to ∂D resulted from the path modification. Any part 1) clearly satisfies Assumption 3 and is either sin- gular or does not contain singular points; their number does not exceed (P + 1)(Q + 1) by Lemma 9. Since parts 2) are separated by SMT’s, their number does not exceed (P +1)(Q+1)+1. Any part 2) lies on a d-equidistant curve C(d) with d ≤ dtrig. Due to (8), ζC(d)(s) = ζ∂D(s)+d, Assumption 3 holds since the boundary ∂D is piece-wise analytical, and the singular parts of C(d) are the connected components of the set from Corollary 2. So type 2) arcs of C accommodate no more than F [(P + 1)(Q + 1) + 1] singular parts. It remains to note that parts 3) and 4) do not contain singular points since β monotonically evolves from 0 during IT’s. Lemma 22 If the vehicle finds the target in CA† after some occurrence A† of mode A, it arrives at the target by making after this no more than Ns switches A 7→ B. PROOF. Let us consider a part P of the path that starts in mode B preceding A†. Suppose first that this part is not single and truncate it from the right, leaving its maximal single sub-part P†. The terminal position of P† lies on a previously passed piece of P†. Let D† and P† ∗ be the related domain (26) and modified path. Associated with CA† is a cave of D† into which P† ∗ turns with |β| ≤ π. By Lemma 20, P† ∗ cannot arrive at a previously passed point, in violation of the above property. This contradiction proves that the entire path P is single. Then Lemmas 20 and 21 guarantee that P∗ arrives at T by making no more than Ns SMT’s. It remains to note that P and P∗ arrive at T only simultaneously, and each occurrence of A gives rise to a SMT in P∗. Lemma 23 After no more than Ns + 1 switches A 7→ B, the direction in which s moves along ∂D within modes B is not altered. PROOF. Consider an occurrence A† of mode A after which the direction is altered and the path P from the start of the entire motion until the end of A†. Suppose that P is not single ∗, where D† and P† and truncate it from the left, leaving the maximal single part P†. The starting point of P† is passed once more within P†, both times in mode B. So this double point is inherited by P† ∗ are the related domain (26) and modified path. Associated with CA† is a cave CD† of D†; these two sets contain the target only simultaneously due to (7). Hence P and P† ∗ acquire a common turn direction at their ends. So SP P† ∗ has converse directions of motion along the boundary at the start and end of the last involved SMT and by Lemmas 16 and 17, has no double points. This contradiction proves that the entire P† is single. Due to Lemma 16, the modified path P† ∗ lies in CD† and so involves no more than Ns SMT’s thanks to Lemmas 17 and 21. It remains to note that each occurrence of A gives rise to a SMT in P∗. • To prove Proposition 10, it in fact remains to show that the vehicle cannot pass more than Ns modes A in a row, constantly not finding the target in CA and not changing the direction of the motion along ∂D. The next lemma with corollaries serves this proof. The symbol ∠(a, b) ∈ (−π, π] stands for the angle from the vector a to b. Let the points ri, i = 1, 2 on P be at the distance distD[ri] ≤ dtrig and such that when traveling between them, the path does not intersect itself and except for ri, has no points in common with the normals [ri, si], where si := s[ri]. The points si split ∂D into two curves. 
Being concatenated with the above normals and P|r1→r2 , they give rise to Jordan loops, with one of them enveloping the other. Let γinner be the curve giving rise to the inner loop LOOP, and σ = ± be the direction from s1 to s2 along γinner. Lemma 24 If LOOP does not encircle the target, the fol- lowing relation holds ∢0rP|r1→r2 = ∢0r∂D| s1 + ∠ [σT∂D(s1), TP(r1)] − ∠ [σT∂D(s2), TP(r2)] . + ∢T[r1, s1] − ∢T[r2, s2] (27) σ −→s2 PROOF. Let σ = +; σ = − is considered likewise. By applying the Hopf’s theorem to LOOP, we see that ∢T[s1, r1]+∢T P|r1→r2 +∢T[r2, s2]−∢T ∂Ds1→s2 = 0, ∢TANG [P|r1→r2 ] = ∢TANG [∂Ds1→s2 ] − ∠ [T∂D(s1), TP(r1)] + ∠ [T∂D(s2), TP(r2)] . The proof is completed by the second formula in (10). • The next claim employs the notations introduced at the be- ginning of Subsect. 5.1. Corollary 4 Suppose that T 6∈ CA† and the value of σ main- tained during the occurrence A† of mode A is not altered when A† 7→ B. Then (27) holds with r1 := r♦, r2 := r∗. This is true since in this claim and Lemma 24, σ is the same. 13 s1 3T 3r T1 r1 Fig. 4. Auxiliary loop Corollary 5 Let r1 and r2 be successively passed within a common mode B, where σ(t) ≡ σ = ±. If r2 is passed after IT, (27) holds, where ∢0r∂D| accounts for the entire s1 motion of the projection s = s[r], r ∈ Pr1→r2 , including possible full runs over ∂D. −→s2 σ If 1) s does not run over the entire ∂D and 2) either r1 is passed after IT or sgn ˙s = σ at the start of the mode, the claim is evident. If 1) holds but 2) does not, the path may intersect [s1, r1] and so direct application of Lemma 24 is impossible. Then we apply this lemma to r1 := r3, where r3 is the point where the vehicle intersects the normal for the second time during IT; see Fig. 4. The proof is completed by noting that ∢T [r1, r3] = ∢T γ, ∢TANG [γ] = ∠[T1, T3] and so ∢0rP|r1→r3 = ∢0rγ = ∢T[r1, r3] − ∠[T1, T3], as well as that ∠ [σT∂D(s1), T3] = ∠ [σT∂D(s1), T1] + ∠[T1, T3]. The claim is generalized on the case where 1) is not true by proper partition of the path, followed by summation of the formulas related to the resultant pieces. Corollary 6 Let points r1 and r2 be successively passed in modes B (maybe, different). Suppose that r2 is not at- tributed to IT and when traveling from r1 to r2, the vehi- cle constantly does not find the target in CA and does not change σ. Then (27) holds, where ∢0r∂D| accounts for the entire motion of the projection s = s[r], r ∈ Pr1→r2 , including possible full runs over ∂D. σ −→s2 s1 It is assumed that as the vehicle moves in mode A, the projection s continuously and monotonically goes over ∂D from s♦ to s∗ in the direction σ. Lemma 25 The vehicle cannot pass more than Ns modes A in a row, constantly not finding the target in CA and not changing the direction of the motion along ∂D. PROOF. Suppose the contrary and that σ = +; the case σ = − is considered likewise. By Observation 1, the ith mode Ai in the row starts when s lies in an +exit arc Ai, whereas ζ ≥ 0 when it ends. Hence A1, A2, . . . cannot re- peat until s completes the full run over ∂D. However, they do repeat since the number of +arcs does not exceed F by Observation 1, and F ≤ Ns by construction from the proof of Lemma 21. Hence the path P can be truncated so that the first and last modes A start at positions r1 and r2, re- spectively, lying on a common +exit arc A, whereas s en- circles the entire boundary ∂D during the move over the truncated P. 
By the definition of the +arc, r∂D(s) evolves within the fourth quadrant as s runs from s1 to s2 within the +arc and so the absolute value of its turning angle does 14 not exceed π/2. This and (13) (where d∗ := 0) imply that ∢0r∂D|s1→s2 ≤ −3/2π. In (27), |∢T[ri, si]| < π/2 and ∠ [T∂D(si), TP(ri)] = 0 since the segments [ri, si] and [ri, T] are perpendicular. Overall, (27) implies that ∢0rP|r1→r2 < − π 2 . (28) The path P|r1→r2 starts with β = 0 and whenever β = 0 is encountered, the angle β may stay constant during SMT but after this SMT β becomes positive by (12) (see Fig. 2(b)) since the robot turns right. The last claim holds thanks to (iii) of Proposition 3 if B is not terminated during this SMT and (25) otherwise. Such behavior of β is inconsistent with (28). The contradiction obtained completes the proof. • Proof of Proposition 10 is straightforward from (v) of Propo- sition 3 and Lemmas 22, 23, and 25. 5.4 Proof of (ii) in Theorem 1. Let Pk be the probability that the vehicle does not arrive at T after making kN switches A → B, where N is taken from Proposition 10. Given a realization of σ’s for the first kN switches, the probability of the (k + 1)th event does not exceed the probability P∗ that the next N realizations are not identical to those generated by the algorithm A for the related initial state. Here P∗ ≤ ρ, where ρ := 1 − min{p, 1 − p}N and p is the probability of picking + in (4). So the law of total probability yields that Pk+1 ≤ ρPk ⇒ Pk ≤ ρk−1P1 → 0 as k → ∞. It remains to note that the probability not to achieve T does not exceed Pk for any k. 6 Proof of (ii) in Theorem 1 and Theorem 2 For the definiteness, we consider the vehicle driven by the basic algorithm with the right turns. So in any SMEC the vehicle has the obstacle to the left. The proof basically fol- lows that from the previous section and employs many facts established there. The difference is that now we do not need to introduce an auxiliary deterministic algorithm since the examined one is deterministic itself. As before, we first consider another obstacle D 6∋ T satis- fying Assumption 3. Let a point r moves in the plane ac- cording to the following rules: r.1) If r 6∈ D, r moves to T in a straight line; r(0) 6∈ D; r.2) If r hits C := ∂D, it turns right and then moves in the positive direction along the boundary, counting the angle β; r.3) This motion lasts until β = 0 and new SMT is possible; r.4) The point halts as soon as it arrives at the target. The path traced by r is called the symbolic path (SP). Any SMT according to r.1) except for the first one starts and ends at some points s♦, s∗ ∈ C, which cut out a cave C[s♦, s∗]. We start with noting that the following specification of Ob- servation 2 now holds. mode A† impossible. The contradiction obtained completes the proof. • Observation 4 As r moves over a ±-arc of C, we have ± ˙ϕ ≥ 0. Non-singular points of ±-arc lie above/below D. This lemma entails that Corollaries 4, 5, and 6 remain true in the following specified forms. Lemma 12 evidently remains valid, whereas Lemma 13 holds in the following specified form. Lemma 26 Whenever SP lies on C, we have β ≥ 0. It is easy to see by inspection that Lemma 15 remains true as well, where in the case from Fig. 3 the right turn at the point s# is justified by not the absence of the target in the cave but the very algorithm statement. The following claim is analog of Lemma 16 Lemma 27 Suppose that after SMT starting and ending at the points s♦ and s∗, respectively, SP enters the cave C[s♦, s∗]. 
The this cave contains the entire path traced be- fore SMT at hand. PROOF. The proof is be retracing the arguments from the proof of Lemma 16 with the only alteration: the point cannot enter the cave through s♦ since this violates the always positive direction of motion along the boundary. • Now we revert to the vehicle at hand and show that Lemma 27 extends on the directed path P traced by this vehicle. The next lemma employs the notations A† and σA† introduced at the beginning of subsection 5.1. Lemma 8a For any occurrence A† of mode A that holds between two modes B, we have σA† = +. PROOF. Suppose to the contrary that σA† = −. Then ac- cording to the ’only-right-turns’ option of the algorithm, the vehicle enters the cave CA† after termination of A†. We are going to show that then similar to Lemma 16, this cave con- tains the entire path passed by the vehicle until this moment and so its initial location. Due to the first relation from (7), the last claim implies that the initial location r0 is also con- tained by a cave of N (dtrig), in violation of the assumptions of Theorem 2. This contradiction will complete the proof. Thus it remains to show that CA† does contain the path traced so far. Suppose the contrary. Since in the mode B preceding A†, the vehicle has the obstacle to the left, it passes to A† from inside the cave. It follows that the moment after A† is not the first time when the vehicle enters the cave. Let us consider the last of these ’preceding’ enters and the path P traced by the vehicle since this moment until the commencement of A†. By combining Lemma 15 with the arguments from the proof of Lemma 22, we conclude that this path is single and β > 0 at its end, which makes Corollary 7 For r1 = r♦, r2 = r∗, (27) holds with σ = +. Corollary 8 Let r1, r2 be successively passed within a common mode B. If r2 follows IT, (27) holds with σ = + and ∢0r∂D| accounting for possible full runs over C. σ s1 −→s2 Corollary 9 Suppose that points r1 and r2 are successively passed in modes B (maybe, different) and r2 is not attributed to IT. Then (27) holds with σ = +, where ∢0r∂D| −→s2 accounts for the entire motion of the projection s = s[r], r ∈ Pr1→r2, including possible full runs over ∂D. s1 σ We also note that at the moment when a SMEC ends, s ∈ S0 := {s ∈ ∂D : −dtrig ≤ ζ∂D(s) < 0, λ∂D(s) > 0}. Since the boundary is piece-wise analytical, this set has finitely many connected components (called exit arcs). PROOF OF THEOREM 2 This proof retraces many ar- guments from the proof of Lemma 25. Suppose the con- trary and that the vehicle does not arrive at the target. Then the projection s repeatedly encircles the boundary. (This in- cludes the imaginary moves of s when the vehicle is in mode A.) By retracing the arguments from the proof of (v) in Proposition 3, we conclude that the path P can be trun- cated so that the first and last modes A start at positions r1 and r2, respectively, lying on a common exit arc A, whereas s encircles the entire boundary ∂D during the move over the truncated P. By the definition of the exit arc, r∂D(s) evolves within the fourth quadrant as s runs from s1 to s2 within the +arc and so the absolute value of its turning angle does not exceed π/2. This and (13) (where d∗ := 0) imply that ∢0r∂D|s1→s2 ≤ −3/2π. In (27), |∢T[ri, si]| < π/2 and ∠ [T∂D(si), TP(ri)] = 0 since the segments [ri, si] and [ri, T] are perpendicular. Overall, (27) implies (28). 
The path P|r1→r2 starts with β = 0 and whenever β = 0 is en- countered, the angle β may stay constant during SMT but af- ter this SMT β becomes positive since the robot turns right. The last claim holds thanks to (iii) of Proposition 3 if B is not terminated during this SMT and the right-turn option in (4) otherwise. Such behavior of β is inconsistent with (28). The contradiction obtained completes the proof. • PROOF OF (ii) IN THEOREM 1 This claim is immediate from Theorem 2. • References [1] D. Thompson, On Growth and Form, Cambridge University Press, Cambridge, 1966. [2] J. Camhi, E. Johnson, High-frequency steering maneuvers mediated by tactile cues: Antennal wall-following in the cockroach, The Journal of Experimental Biology 202 (1999) 631643. 15 [3] B. Fajen, Steering toward a goal by equalizing taus, Journal of Experimental Psychology: Human Perception and Performance 27 (4) (2001) 953–968. [4] H. Abelson, A. A. diSessa, Turtle Geometry, MIT Press, Cambridge, 1980. [5] V. Lumelsky, S. Tiwari, An algorithm for maze searching with azimuth in: Proceedings of the IEEE Conference on Robotics and input, Automation, San Diego, CA, 1991, pp. 111–116. [6] V. I. Utkin, Sliding Modes in Control Optimization, Springer-Verlag, Berlin, 1992. [7] E. Kreiszig, Differential Geometry, Dover Publications, Inc., NY, 1991. [8] A. Matveev, M. Hoy, A. Savkin, Mixed nonlinear-sliding mode control of an unmanned farm tractor in the presence of sliding, in: Proceedings of the 11th International Conference on Control, Automation, Robotics and Vision, Singapore, 2010, pp. 927–932. 16
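As a closing illustration of the symbolic-path rules r.1)–r.4) that the preceding lemmas reason about, the sketch below simulates a discretized version of them for the simplest possible scene, a single circular obstacle. It is a toy illustration under simplifying assumptions and not the vehicle control law analysed above: the function name symbolic_path is ours, the paper's test "β = 0 and a new SMT is possible" is replaced by the cruder check that the direction to the target no longer points into the disc, and the turning direction is fixed arbitrarily.

```python
# Toy illustration (not part of the paper): a discretized version of the
# symbolic-path rules r.1)-r.4) for a single circular obstacle.
import math

def symbolic_path(start, target, center, radius, step=0.01, max_steps=100000):
    x, y = start
    path = [(x, y)]
    mode = "SMT"                                   # r.1) straight motion to T
    for _ in range(max_steps):
        if math.hypot(target[0] - x, target[1] - y) <= step:
            path.append(target)                    # r.4) halt at the target
            return path
        if mode == "SMT":
            dx, dy = target[0] - x, target[1] - y
            d = math.hypot(dx, dy)
            nx, ny = x + step * dx / d, y + step * dy / d
            if math.hypot(nx - center[0], ny - center[1]) < radius:
                mode = "FOLLOW"                    # r.2) boundary hit
            else:
                x, y = nx, ny
        else:
            # r.2) slide along the circle in one fixed direction
            ang = math.atan2(y - center[1], x - center[0]) + step / radius
            x = center[0] + radius * math.cos(ang)
            y = center[1] + radius * math.sin(ang)
            # r.3) resume SMT once the ray to the target no longer enters the disc
            ox, oy = x - center[0], y - center[1]
            if (target[0] - x) * ox + (target[1] - y) * oy >= 0.0:
                mode = "SMT"
        path.append((x, y))
    return path

# Example: detour around the unit disc centred at the origin.
p = symbolic_path((-3.0, 0.1), (3.0, 0.0), (0.0, 0.0), 1.0)
print(len(p), p[-1])
```

The trace alternates between SMT segments and boundary-following arcs exactly as in the alternation the lemmas analyse; for non-convex obstacles the leave condition used here is only a crude proxy for the β bookkeeping in the paper.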
Rate Regions of Secret Key Sharing in a New Source Model 1,2

Somayeh Salimi*, Mahmoud Salmasizadeh†, Mohammad Reza Aref*
*ISSL Lab., Dept. of Electrical Engineering, Sharif University of Technology, Tehran, Iran
†Electronics Research Center, Sharif University of Technology, Tehran, Iran
Email: [email protected], [email protected], [email protected]

1 Part of this work will be published in the Australian Communication Theory Workshop (AusCTW 2010) proceedings.
2 This work was partially supported by the Iranian National Science Foundation (INSF) under Contract No. 84.5193.

Abstract—A source model for secret key generation between terminals is considered. Two users, namely users 1 and 2, at one side communicate with another user, namely user 3, at the other side via a public channel where three users can observe i.i.d. outputs of correlated sources. Each of users 1 and 2 intends to share a secret key with user 3 where user 1 acts as a wiretapper for user 2 and vice versa. In this model, two situations are considered: communication from users 1 and 2 to user 3 (the forward key strategy) and from user 3 to users 1 and 2 (the backward key strategy). In both situations, the goal is sharing a secret key between user 1 and user 3 while leaking no effective information about that key to user 2, and simultaneously, sharing another secret key between user 2 and user 3 while leaking no effective information about the latter key to user 1. This model is motivated by wireless communications when considering user 3 as a base station and users 1 and 2 as network users. In this paper, for both the forward and backward key strategies, inner and outer bounds of secret key capacity regions are derived. In special situations where one of users 1 and 2 is only interested in wiretapping and not key sharing, our results agree with that of Ahlswede and Csiszar. Also, we investigate some special cases in which the inner bound coincides with the outer bound and the secret key capacity region is deduced.

Keywords—Information theoretic security, secret key sharing, source model, secret key capacity region.

I. INTRODUCTION

Because of the open nature of wireless communication networks, sharing secret keys between terminals is a challenging problem. In these environments, terminals have access to common randomness for generating secret keys, but the existence of broadcast and multiple access channels in these networks results in unintended information leakage. In this paper, we explore the problem of sharing secret keys between three users who can observe the outputs of some correlated sources. There are two users, namely user 1 and user 2, at one side and another user, namely user 3, at the other side, and also public channels between the users. User 1 wishes to share a secret key with user 3 while user 2 acts as a wiretapper and intends to learn information about this key as much as possible. Symmetrically, user 2 wishes to share a secret key with user 3 while user 1 acts as a wiretapper and intends to learn information about this key as much as possible. This model could be realized in a wireless environment when user 3 is a base station and users 1 and 2 are curious network users.

The rigorous idea of information theoretic security was first introduced by Shannon in [11], where the eavesdropper could listen to all the data transmitted from the transmitter to the receiver.
After that, the notion of information theoretic security was characterized by Wyner as the wiretap channel model in which a single source-destination communication link is eavesdropped by a wiretapper via a degraded channel [13]. The secrecy level was measured by equivocation rate at the wiretapper. It was shown in [13] that nonzero secrecy rate can be achieved without using a secret key, if the intended receiver has a communication channel with better quality than the wiretapper. Csiszar and Korner in their seminal work [2] generalized the Wyner’s results to less noisy and more capable channels and determined the capacity region of the broadcast channel with confidential message. In [1] and [8], generation of secret key through common randomness was considered by Maurer, Ahlswede and Csiszar. The common randomness can be a source or a channel type. In source common randomness, all terminals including the transmitter, the receiver and the wiretapper could observe i.i.d. outputs of correlated sources. In channel common randomness, there is a noisy broadcast channel from the transmitter to the receiver and the wiretapper. In both the source and channel common randomness, there is a noiseless public channel with unlimited capacity between the transmitter and the receiver where all communication through which can be overheard by the wiretapper. In [1], based on common randomness type, the source and channel models were defined for secret key sharing and in both models, the problem of finding the secret key capacity between the transmitter and the receiver was considered. In the source model, the secret key capacity was characterized when a one-way noiseless public channel with unlimited capacity is available between the transmitter and the receiver. In case a two-way public channel exists between the transmitter and the receiver, the secret key capacity still remains an open problem, however its upper and lower bounds have been improved in [5] and [10]. Secret key generation in a network including more than three terminals has been explored in other works such as [3], [4], [6], [7], [14], [15]. Maurer [9] strengthened the secrecy conditions of [1] and [8] and showed that the results in a weak sense can be established in the strong sense by using the techniques developed in [9]. As mentioned above, the problem of sharing secret keys between terminals which have access to correlated sources was defined in [1], in which the transmitter and the receiver intend to share a key via public channel communications. In this model, a wiretapper who has access to side information correlated with other sources, can listen to the public channel and obtains information about the shared key as much as possible. In this paper, we propose a new model which differs from the source model of [1] (which was described in the previous paragraph), in such a way that both users 1 and 2 attempt to share secret keys with user 3 while user 1 is the wiretapper of user 2’s secret key and vice versa. Three users have access to correlated sources and there is a public channel from users 1 and 2 to user 3. To the best of our knowledge, this model has not been investigated so far. For this model, we investigate two situations. In the first, there is a one-way 2 public channel from users 1 and 2 to user 3. This situation is referred to as the forward key strategy and is shown in Fig.1. In the second one, there is a one-way public channel from user 3 to users 1 and 2. 
This situation is referred to as the backward key strategy and is shown in Fig.2. In both situations, we investigate the inner and outer bounds of the secret key capacity region. The rest of the paper is organized as follows: in Section II the proposed model and definitions are described. In Section III, related theorems for the upper and lower bounds of the secret key capacity regions are given. Some special cases are considered in Section IV in which the inner bound coincides with the outer bound and the secret key capacity region can be derived. Proofs of the theorems are given in Section V. Conclusion and suggestions for future works are given in Section VI. Some lemmas useful for the proofs of the theorems are given and proved in the appendix.

Throughout the paper, a random variable is denoted with an upper case letter (e.g., X) and its realization is denoted with the corresponding lower case letter (e.g., x). We use X_i^N to indicate the vector (X_{i,1}, X_{i,2}, ..., X_{i,N}), and use X_{i,j}^k to indicate the vector (X_{i,j}, X_{i,j+1}, ..., X_{i,k}), where i denotes the index of the corresponding user.

II. THE NEW SOURCE MODEL

Users 1, 2 and 3 can, respectively, observe N i.i.d. repetitions of the random variables X_1, X_2 and X_3. The random variable X_i takes values from the finite set 𝒳_i for i = 1, 2, 3. Furthermore, a noiseless public channel with unlimited capacity is available for communication between the three users. User 1 wishes to share a secret key with user 3 while user 2 acts as a wiretapper of user 1's key. Symmetrically and simultaneously, user 2 wishes to share a secret key with user 3 while user 1 acts as a wiretapper of user 2's key. Now, we present the formal definition of the secret key strategy for the new source model.

Step 0) Users 1, 2 and 3, respectively, generate random variables M_1, M_2 and M_3 independent of each other such that M_1, M_2, M_3 and (X_1^N, X_2^N, X_3^N) are mutually independent. The next steps can be regarded as deterministic.

Step 1) At this step, users 1, 2 and 3, respectively, generate F_{1,1}, F_{2,1} and F_{3,1} such that F_{i,1} = f_{i,1}(M_i, X_i^N) for i = 1, 2, 3 and transmit them over the public channel.

Steps 2 to k) At step j, user i generates F_{i,j} as a function of (M_i, X_i^N) and the information which has been received from the other users via the public channel. Hence, users 1, 2 and 3, respectively, generate F_{1,j}, F_{2,j} and F_{3,j} as functions of the information available at the corresponding user, where F_{1,j} = f_{1,j}(M_1, X_1^N, F_{2,1}^{j−1}, F_{3,1}^{j−1}), F_{2,j} = f_{2,j}(M_2, X_2^N, F_{1,1}^{j−1}, F_{3,1}^{j−1}) and F_{3,j} = f_{3,j}(M_3, X_3^N, F_{1,1}^{j−1}, F_{2,1}^{j−1}), and transmit them over the public channel for j = 2, ..., k.

Finally, after k steps, users 1 and 2 compute the keys K and L, respectively, as functions of the information available at each user:

K = K(M_1, X_1^N, F_{2,1}^k, F_{3,1}^k)   (1)
L = L(M_2, X_2^N, F_{1,1}^k, F_{3,1}^k)   (2)

and also user 3 computes the keys K̂ and L̂ as functions of the information available at him:

K̂ = K̂(M_3, X_3^N, F_{1,1}^k, F_{2,1}^k)   (3)
L̂ = L̂(M_3, X_3^N, F_{1,1}^k, F_{2,1}^k)   (4)

where the keys K̂ and L̂ are intended for sharing as secret keys with users 1 and 2, respectively. The keys (K, K̂) and (L, L̂) take values from the finite sets 𝒦 and ℒ, respectively. Now we state the conditions that should be met in the secret key strategy of the described model.

Fig.1. Forward key strategy.   Fig.2. Backward key strategy.
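Before stating the achievability conditions, it may help to see the interfaces of the k = 1 forward strategy in executable form. The sketch below is a deliberately degenerate toy and not the paper's scheme (which relies on random binning over the public channel, as described in Section V); all names in it are ours. Here user 3 observes both users' source components, so empty public messages suffice, the keys agree exactly, and secrecy holds trivially because X_1 and X_2 are independent.

```python
# Degenerate, executable instance of the k = 1 forward key strategy interfaces.
import random

N = 16
rng = random.Random(0)

X1 = [rng.randint(0, 1) for _ in range(N)]          # user 1's source block
X2 = [rng.randint(0, 1) for _ in range(N)]          # user 2's source block
X3 = list(zip(X1, X2))                              # user 3 sees both components

M1 = M2 = M3 = None          # Step 0: private randomization (unused here)

# Step 1 (the single public-channel use): nothing needs to be transmitted
F1_1 = ()                    # f_{1,1}(M1, X1^N)
F2_1 = ()                    # f_{2,1}(M2, X2^N)
F3_1 = ()                    # f_{3,1}(M3, X3^N)

# Key computations mirroring (1)-(4)
K     = tuple(X1)                         # K(M1, X1^N, F2_1, F3_1)
L     = tuple(X2)                         # L(M2, X2^N, F1_1, F3_1)
K_hat = tuple(a for a, b in X3)           # K_hat(M3, X3^N, F1_1, F2_1)
L_hat = tuple(b for a, b in X3)           # L_hat(M3, X3^N, F1_1, F2_1)

print("reliability:", K == K_hat and L == L_hat)    # condition (5) holds exactly
```

In the general, non-degenerate case the public messages carry Slepian-Wolf style bin indices, as in the achievability proofs of Section V, and each key is the remaining bin index of the chosen codeword.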
Definition 1: In the secret key strategy of the source model described above, the secret key rate pair (R_1, R_2) is an achievable rate pair if for every ε > 0 and sufficiently large N, we have:

Pr{K̂ ≠ K} ≤ ε and Pr{L̂ ≠ L} ≤ ε   (5)
(1/N) I(K; M_2, X_2^N, F_{1,1}^k, F_{3,1}^k) ≤ ε   (6)
(1/N) I(L; M_1, X_1^N, F_{2,1}^k, F_{3,1}^k) ≤ ε   (7)
(1/N) H(K) ≥ R_1 − ε and (1/N) H(L) ≥ R_2 − ε   (8)
(1/N) log|𝒦| ≤ (1/N) H(K) + ε   (9)
(1/N) log|ℒ| ≤ (1/N) H(L) + ε   (10)

Equation (5) means that users 1 and 2 can generate secret keys with user 3 and Equations (6) and (7) say that users 1 and 2 have effectively no information about each other's secret key. Equations (9) and (10) are the uniformity conditions for the secret keys.

Definition 2: The region containing all the achievable secret key rate pairs (R_1, R_2) is the key capacity region.

In the described model, we consider restricted usage of the public channel, i.e., no more than k usages of the public channel are allowed. In this paper, only the case k = 1 is investigated. For this case, when communication is only performed from users 1 and 2 to user 3, the forward key capacity region is defined, and when communication is only carried out in the reverse direction, i.e., from user 3 to users 1 and 2, the backward key capacity region is introduced. We consider both situations in this paper.

III. SECRET KEY RATE REGIONS

In this section, we state our main results about the mentioned model.

Theorem 1 (inner bound of the forward key capacity region): In the forward key strategy of the described source model, the rate pair (R_1, R_2) is an achievable key rate pair if:

0 ≤ R_1, 0 ≤ R_2,
R_1 ≤ I(S; X_3 | T, U) − I(S; X_2 | T, U),
R_2 ≤ I(T; X_3 | S, V) − I(T; X_1 | S, V),
R_1 + R_2 ≤ I(S, T; X_3 | U, V) − I(S; X_2 | T, U) − I(T; X_1 | S, V) − I(S; T | U, V),

where U, V, S, T are random variables taking values in sufficiently large finite sets and according to the distribution:

p(u, v, s, t, x_1, x_2, x_3) = p(u|s) p(v|t) p(s|x_1) p(t|x_2) p(x_1, x_2, x_3).

Proof of the achievability is given in Section V.A. However, we explain the intuitive interpretation of Theorem 1. We assume that users 1 and 2 consider the random variables S and T with the distributions p(s|x_1) and p(t|x_2) for sharing keys with user 3, respectively. These random variables should be decoded by user 3 for generating secret keys. To this end, part of the information is sent by users 1 and 2 by transmitting realizations of random variables U and V with distributions p(u|s) and p(v|t), respectively. Then, the other part of the information should be sent by users 1 and 2 with total rate H(S, T | U, V, X_3), according to the Slepian-Wolf theorem, to enable user 3 to reconstruct S and T. Based on the portion of the rate transmitted by each user, there is a tradeoff between the equivocation rates. For justification of the rate R_1, we assume that user 1 sends information with the minimum rate H(S | U, V, X_3, T) after sending realizations of U. It is obvious that both of the transmissions by user 1 can result in information leakage about S to user 2.
The leakage rate would be equal to: ( ; I S X U H S U V X T  ( ) , , , , ) 2 3 For obtaining 1R , we should subtract the leakage rate from ( )H S and hence, we have: R H S ( )  1  I S X U H S U V X T ( ;  ) ( , , , , 3 2 )  I S U V X T ( ; , , , 3 )  I S X U ( ; , 2 ( ) a  ) I S U X T ( ; , , 3 )  I S X U ( ; , 2 ) ( ) b  I S U X T ( ; , , 3 )  I S X U T ( ; 2 , , )  I S X T U I S X T U ( ; )  ( ; , , ) 2 3 where (a) follows from the distribution of V and (b) from the distribution of T that results in I S T X U  . Since ( ; ) 0 , 2 the minimum rate H S U V X T (according to the Slepian–Wolf theorem) is sent by user 1, ( ) , , , 3 1R is smaller than the calculated rate. The same approach can be applied to the rate 2R . For the rate R R 1 2 : R 1  R 2  H S H T ( )  ( )  I S X U I T X V H S T U V X ( ; , 1 ( ; ( ,   ) ) , , , 2 ) 3  I S T X U V ( , ; , 3 )  I S X T U I T X S V ( ; )  ( ; , , )  I S T U V ( ; , ) 1 2 Theorem 2 (outer bound of the forward key capacity region): If the rate pair ( the forward secret key strategy, then it satisfies: R R is an achievable key rate pair in ) , 1 2 R 1 R 1   R 2  0  0, R 2 I S T X U I S X U { ( ; ) ( ;  , )} I T S X V I T X V { ( ; ) ( ;  , )} 3 3 2 1 for random variables U V S T which take values in sufficiently large finite sets and form Markov chains as: , , , U S   V T X X X ( , 1 , , , 2 V T U S X X X ,   ( , , , 1 2 ), ), 3 3 . S X  T X  1 2  (  ( X X , 2 X X , 1 ), ) 3 3 In addition, the following bound is an explicit upper bound which can be easily deduced from Theorem 1 of [1]: 6 R I X X X 1 1  ( ; 3 ) 2 R I X X X 2 1  ( ; 2 3 ) The proof is given in Section V. B. Corollary 1: If user 2 is only interested in wiretapping and not sharing a secret key with user 3, random variables T and V can be assumed to be constant. In this case, the lower bound of Theorem 1 coincides with the upper bound of Theorem 2 and the forward secret key capacity between the users 1 and 3 would be equal to: R 1  max{ ( ; I S X U I S X U )  ( ; )} 2 3 for random variables ,U S which form a Markov chain as U S X    X X , 2 3 1 . This result is in agreement with the result of Theorem 1 of [1]. Theorem 3 (inner bound of the backward key capacity region): In the backward secret key strategy of the described source model, the rate pair ( R R is an achievable key rate pair if: ) , 1 2 R 1 R 1   R 2  0  0, R 2 I S X U I S X T U { ( ;  ( ; ) , )} { ( ; I T X U I T X S U  ( ; ) , )} 1 2 2 1 where ,U S and T are random variables taking values in sufficiently large finite sets and according to the distribution: ( , , , p u s t x x x 3 , , 1 2 )  ( p u s t p s t x p x x x 3 ( , , ) ) ( , , 1 3 2 ) . The proof is given in Section V. C. Intuitive interpretation of Theorem 3 is as follows. In the case of backward key capacity region, only user 3 is permitted to send information to users 1 and 2. In this case, user 3 considers two random variables ,S T with distribution p s t x ( , 3 ) and intends to send required information so that users 1 and 2 can reconstruct random variables S and T , respectively, and then user 3 exploits these random variables for sharing secret keys with these users. First, it transmits realizations of random variable U which has distribution ( p u s t and then sends information , ) with rate H S X U so that user 1 can reconstruct S and information with rate ( ) , 1 H T X U so that user 2 can ( ) , 2 reconstruct T . 
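Since the inner-bound expressions in Theorems 1 and 3 are ordinary conditional mutual informations, they can be evaluated numerically for any fixed choice of the auxiliary distributions. The brute-force sketch below is not from the paper: the helper names and all distributions are arbitrary toy choices (binary alphabets, a chain X1 - X3 - X2 that mirrors the special cases discussed in Section IV). It evaluates the two Theorem 3 expressions I(S; X1|U) - I(S; X2|T, U) and I(T; X2|U) - I(T; X1|S, U) under the required factorization p(u|s,t) p(s,t|x3) p(x1,x2,x3).

```python
from itertools import product
from math import log2
from collections import defaultdict

# toy p(x1, x2, x3): X1 uniform, X3 a noisy copy of X1, X2 a noisy copy of X3
def p_x(x1, x2, x3):
    return 0.5 * (0.9 if x3 == x1 else 0.1) * (0.7 if x2 == x3 else 0.3)

# toy p(s, t | x3): S and T are independent noisy copies of X3
def p_st_given_x3(s, t, x3):
    return (0.8 if s == x3 else 0.2) * (0.75 if t == x3 else 0.25)

# toy p(u | s, t): U is a noisy XOR of S and T
def p_u_given_st(u, s, t):
    return 0.6 if u == (s ^ t) else 0.4

joint = defaultdict(float)                 # over (u, s, t, x1, x2, x3)
for u, s, t, x1, x2, x3 in product((0, 1), repeat=6):
    joint[(u, s, t, x1, x2, x3)] = (
        p_u_given_st(u, s, t) * p_st_given_x3(s, t, x3) * p_x(x1, x2, x3)
    )

def marginal(keep):
    out = defaultdict(float)
    for outcome, p in joint.items():
        out[tuple(outcome[i] for i in keep)] += p
    return out

# variable indices: U=0, S=1, T=2, X1=3, X2=4, X3=5
def cond_mi(a, b, c):
    """I(A; B | C) with A, B, C given as index tuples."""
    pabc, pac, pbc, pc = marginal(a + b + c), marginal(a + c), marginal(b + c), marginal(c)
    total = 0.0
    for outcome, p in pabc.items():
        if p <= 0.0:
            continue
        va = outcome[:len(a)]
        vb = outcome[len(a):len(a) + len(b)]
        vc = outcome[len(a) + len(b):]
        total += p * log2(p * pc[vc] / (pac[va + vc] * pbc[vb + vc]))
    return total

R1 = cond_mi((1,), (3,), (0,)) - cond_mi((1,), (4,), (2, 0))   # I(S;X1|U)-I(S;X2|T,U)
R2 = cond_mi((2,), (4,), (0,)) - cond_mi((2,), (3,), (1, 0))   # I(T;X2|U)-I(T;X1|S,U)
print(f"Theorem 3 inner-bound corner for this toy pmf: R1 <= {R1:.4f}, R2 <= {R2:.4f}")
```

For a poor choice of p(s,t|x3) and p(u|s,t) these expressions can of course be small or even negative, in which case the corresponding constraint is vacuous; the theorems implicitly optimize over the auxiliary distributions.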
Consequently, user 2 has access to random variables H S X U for obtaining information about user 1’s key. So: ( ) , 1 X U T and also information with rate , 2 , ( ) R H S  1  ( ; ) I S X U T H S U X  ( , , , 2 )  ( ; I S U X , )  1 1 ( ; I S X U T 2 , , )  With the same approach the rate 2R can be deduced. 7 ( ; I S X U I S X T U  ( ; ) , ) 1 2 Theorem 4 (outer bound of the backward key capacity region): In the backward secret key strategy of the described source model, if the rate pair ( R R is an achievable key rate pair, then it satisfies: ) , 1 2 R 1 R 1   0  0, R 2 min{ ( ; I S X U I S X U I S X T U I S X T U ), ( ; ( ; ( ;   ) ) , , )} R 2  min{ ( ; ), ( ; I T X U I T X U I T X S U I T X S U ( ; ( ;   ) ) , , )} 1 2 2 1 1 2 2 1 where ,U S and T are random variables taking values in sufficiently large finite sets and according to the distribution ( , p u s t x x x , , 3 , , 2 1 )  ( p u s t p s t x p x x x 3 ( , , ) ( ) , , 2 3 1 ) which form Markov chains as U S X   and U T X   . 3 3 In addition, the following bound is an explicit upper bound which can be easily deduced from Theorem 1 of [1]: R I X X X 1 1  ( ; 3 ) 2 R 2  ( ; I X X X 2 1 3 ) The proof is given in Section V. D. Corollary 2: If user 2 is only interested in wiretapping and not sharing a secret key, the random variable T can be assumed to be constant. In this case, the lower bound of Theorem 3 coincides with the upper bound of Theorem 4 and the backward secret key capacity between user 1 and 3 would be equal to: R 1  max{ ( I S X U ; 1 )  I S X U ; ( 2 )} for the random variables which form Markov chain as U S X    X X , 1 2 3 . This result is in agreement with the result of Theorem 1 of [1]. In his section, we discuss some special cases in which the secret key capacity region can be found. IV. SPECIAL CASES Case 1: When sources ,X X and 2 1 3X form a Markov chain as X 1  X  X 3 2 , then the forward and backward key capacity regions reduce to:  0 R 1  0 R 2  I X X X ( ; 2 3 ) 1 The achievability is obtained by replacing S X T  , 1  X U V    , 2 in Theorem 1 and T  X S U    3, in Theorem 3. It should be noted that because of the above Markov chain, the equality ( I X X 2 ; 3 )  ( I X X 1 ; 3 )  ( ; I X X X 3 2 ) holds. For the converse part of the forward and backward key capacity regions, we 1 directly exploit Theorems 2 and 4, respectively . 8 When sources ,X X and 2 1 3X form a Markov chain as X 2  X 1  X 3 , the secret key capacity region can be derived by symmetry from case 1. Case 2: When sources ,X X and 2 1 3X form a Markov chain as X 1  X 3  X 2 , then the forward key capacity region reduces to: 0  R 1  0  R 2  I X X X ( ; 1 3 ) 2 ( ; I X X X 3 2 ) 1 The achievability is obtained by replacing S  , X T 1  X U V  , 2  in Theorem 1. It should be noted that because of the above Markov chain, the equalities ( I X X 1 ; 3 )  ( I X X 2 ; 1 )  ( ; I X X X 3 1 ) 2 and ( I X X 2 ; 3 )  ( I X X 2 ; 1 )  ( ; I X X X 2 3 ) hold. The converse part can be directly followed from Theorem 2. 
1 Case 3: When sources ,X X and 2 1 3X form a Markov chain as X 1  X 3  X 2 , then the backward key capacity region reduces to: 0   0, R 1 R I S X U I S X U ) ) 1 R 2 ( ; ( ;   2 1 R I T X U I T X U ) 2 ( ; ( ;   ) 1 2 where U , S and T are random variables taking values in sufficiently large finite sets and according to the distribution ( p u s t p s t x p x x x 3 ( , , ) ( ) , , 3 1 2 ) which form Markov chains as: ( , p u s t x x x , , 3 , , 2 1 )  U S X   3 U T X   S X   1  T 2 3 X The existence of such random variables S and T can be deduced from the Markov chain X  X 3 1  X 2 . This situation is shown in Fig.3. For these random variables, we have I S T X U I S T X U ( ; )  ( ; , ) 0  and so, achievability can be deduced , 1 2 from Theorem 3. The converse part can be directly deduced from Theorem 2. 9 Fig.3. An example for the case 1 X  X 3  X 2 V. PROOFS In this section, proofs of the theorems in Section III are given. Construction of the Codebooks A. PROOF OF THEOREM 1 First, we describe random codebook generation at users 1 and 2. For a distribution ( )p s , collection of codewords Ns , each uniformly drawn from the set N T P ( S 1 ) , is generated by user 1. N T P ( S 1 ) denotes the set of jointly typical sequences Ns . Similarly, for a distribution ( )p t , collection of codewords Nt , each uniformly drawn from the set N T P ( T 1 ) , is generated by user 2. Now, for a fixed distribution p u s , user 1 generates ) ( 2N I S U  ( ( ; ) 2 ) i.i.d. codewords of length N , NU a for ( ) a  {1,..., 2 N I S U ( ( ; )  2 ) } with distribution ( )p u . Similarly, for a fixed distribution p v t ( ) , user 2 generates codewords of length N , NV b for ( ) b  {1,..., 2 N I T V ( ( ; )  2 ) } with distribution ( )p v . 2N I T V  ( ( ; ) 2 ) i.i.d. User 1 divides the typical sequences Ns into 12NR bins with the same size in a uniformly random manner where   R 1 ( H S X U R , 2 1  ( ) ) . The index of each bin is denoted as k and the corresponding random variable is denoted as K  . Also the codewords of each bin are randomly divided into 12NR bins with the same size and the bin index of the latter bins is denoted as k with the corresponding random variable K . It is obvious that in each internal bin with bin index k , there are 12NR typical sequences Ns where   R 1 ( ( ; , I S X U  2 1  ) ) which we use index k for them. Hence each typical codeword Ns can be uniquely determined with three indices as s N  , , k k k  and vice versa. Similarly, user 2 divides the typical sequences of Nt into 22NR bins with the same size in a uniformly random manner where   R 2 ( 10 H T X V ( , 1 )  R 2 ) . The bin index of each bin is denoted as l and the corresponding random variable is denoted as L . Also the codewords of each bin are randomly divided into 22NR bins with the same size and the bin index of the latter bins is denoted as l with the corresponding random variable L . It is obvious that in each internal bin with bin index l , there are 22NR typical sequences Nt where   R 2 ( ; , I T X V  1 1 )  which we use index l for them. Hence each typical codeword Nt can be uniquely determined with three indices as N t , l l  , l  and vice versa. Now, for every typical X N 1 N x 1 , all codewords Ns which are jointly typical with 1 Nx , based on distribution p s x , ( 1 ) are collected in a set which is denoted as S . 
In the same manner, for every typical N N x 1 X N 2 N x 2 , all codewords Nt which are jointly typical whith Nx , based on distribution 2 p t x ( 2 ) , are collected in a set which is denoted as N T . The codebooks of N x 2 users 1 and 2 for X N 1 N x 1 and X N 2 N x 2 are shown in Fig.4. It is assumed that all the users are informed of the binning schemes and distributions used. k 1 2 • • • k 1 2 • • • •••• •••• •••• •••• •••• •••• •••• •••• •••• •••• •••• •••• •••• •••• •••• •••• •••• •••• •••• •••• •••• •••• •••• •••• •••• 12NR •••• •••• •••• •••• •••• 12NR •••• •••• •••• •••• •••• •••• S : Set of user 1’s codewords for N N x 1 X N 1 N x 1 l 1 2 • • • 22NR l 1 2 • • • •••• •••• •••• •••• •••• •••• •••• •••• •••• •••• •••• •••• •••• •••• •••• •••• •••• •••• •••• •••• •••• •••• •••• •••• •••• •••• •••• •••• •••• •••• 22NR •••• •••• •••• •••• •••• •••• N T : Set of user 2’s codewords for N x 2 X N 2 N x 2 Fig.4. Codebooks of users 1 and 2 for X N 1 N x 1 and X N 2 N x 2 11 Encoding For encoding, users 1 and 2 observe the i.i.d. sequences NX and 1 NX 2 , e.g., Nx 1 and Nx 2 , respectively, and select the corresponding sets S and N N x 1 T . User 1 randomly selects a sequence Ns N N x 2 from the set S and chooses the respective row N N x 1 index ( k ) of the codeword (as shown in Fig.4) as secret key with user 3 and sends the respective column index ( k ) of the codeword over the public channel. He also sends index a of its jointly typical sequence NU a over the public channel. ( ) Similarly, user 2 randomly selects a sequence Nt from the set N T and chooses the respective row index ( l ) of the N x 2 codeword (as shown in Fig.4) as secret key with user 3 and sends the respective column index ( l ) of the codeword over the public channel. He also sends the index b of its jointly typical sequence NV b over the public channel. ( ) Decoding and Error Probability Analysis For decoding, user 3 receives the indices k a l b ,  ,  , from the public channel and also observes the i.i.d. sequences NX 3 e.g., Nx . User 3 decodes the pair 3 ( s N  , , k k k  , N t , l l  , l  ) if: ( s N  k k k , ,  , N t l l ,  , l  , N x 3 )  N ( T  0 ) ( P S T X U V 3 , , , ) when such pair ( s N  , k k k ,  , N t , l l  , l  exists and is unique and otherwise, he declares error. After decoding such ) ( s N  , k k k ,  , N t , l l  , l  ) , user 3 chooses the indices k and l as secret keys with users 1 and 2, respectively. Ns Now, we analyze the decoding probability of error. Without loss of generality, we assume that the codewords 1,1,1 and Nt 1,1,1 are, respectively, chosen by users 1 and 2 and so the key pair ( N t s , 1,1,1 1,1,1 N ) should be decoded by user 3. 
The event E is defined as:   , , , E k k k l l ( ,  , l  )  {( N s  k k k , ,  , N t l l ,  , l  , N x 3 )  N ( T  0 ) ( P S T X U V 3 , , , )} The decoding error probability is bounded as: N ) ( P e  P E { c (1,1,1,1,1,1) ( N t s , 1,1,1 1,1,1 N )chosen}   l l (1,1),( , k k ( ,  )   )  (1,1) P E k { ( ,1,  k l , ,1, l  ) ( N t s , 1,1,1 1,1,1 N )chosen}     ) k k ( , (1,1) P E k { ( ,1, k  ,1,1,1) ( N t s , 1,1,1 1,1,1 N )chosen}  ( , l l    ) (1,1) P E { (1,1,1, ,1, l l  ) ( N s t , 1, 1,1 1,1,1 N )chosen} The first term vanishes due to the joint asymptotic equipartition property (AEP): c P E { k l (1,1,1,1,1,1) ( , )  (1,1) sent}   0 12 In the second term for ( , k k  ) (1,1) and ( , l l  ) (1,1) we have (according to the Slepian-Wolf Theorem [12]) P E k { ( ,1,  , ,1, k l l  ) ( , ) k l   (1,1)sent} 2 R R N H S T X U V ,   1    2 ( ( 3 , , )   0 ) In the third term for ( , k k  ) (1,1) we have: P E k { ( ,1, k  ,1,1,1) ( , ) k l  (1,1)sent} 2   R N H S X T U V   1 ( ( 3 , , , )   0 )  2  R N H S X T U (   1 ( 3 , , )   0 ) Finally, in the forth term for ( , l l  ) (1,1) we have: P E { (1,1,1, ,1, l l  ) ( , ) k l  (1,1)sent} 2   R N H T X S U V  ( ( , , , 3  2 )   0 )   2 R N H T X S V (  ( , , 3  2 )   0 ) and hence, the decoding error probability can be bounded as:  2 R H S T X U V ,  ( , , 3 )   0 )   2 N ) ( eP   0  2   N R ( 1  If we set: N R H S X T U (  ( , , 3  1 )   0 )   2 N R H T X S V (  ( , , 3  2 )   0 ) R H S X T U ( , , 3 H T X S V 3 ( , , ) )   1   R 2  R 1  R 2  H S T X U V ( , , , 3 ) or in other words: ) R H S X U H S X T U   ( ( , , , ) H S X T U H S X T U  ( ( , , , , )  ( ; ) I S X T U I S X T U  ( ; , , ) 3 2 1 R 2  2 1 (a)  ) (b)  ) 3 3 2 1 3 3 H T X V H T X S V  ( ( ) , , , ) H T X S V H T X S V  ( ( , , , , )  ( ; I T X S V 3 , )  ( ; I T X S V 1 , ) R 1  R 2  , H S X U H T X V H S T X U V 1 ( ,   ( ( ) ) , , , 2 3 )  ; ( , I S T X U V , 3 )  ( ; I S X T U I  ) , 2 T X S V ( ; 1 , )  I S T U V ( ; , ) then for any 0 0  , N ) ( eP 04  and if we set 04  , then the reliability condition 5 in Definition 1 will be satisfied. It should be noted that in the above equations, equalities (a) and (b) follow from the distributions of random variable S and T . It is obvious that the encoding scheme can satisfy the uniformity conditions (9) and (10) in Definition 1. Analysis of Security Conditions Now, we should analyze the security conditions (6) and (7) in Definition 1. 
User 2 attempts to obtain information about user 1’s key and to this end, he exploits M , X and the information which is sent by user 1 on the public channel: N 2 2 13 I K M X K U ( , ; ,  , 2 N 2 N (a)  ) N I K X K U ( 2  , ; , N )  H K H K X K U (  ) ( ,  , N 2 N )   H K H K S X K U ,  ( ( ) ,  , N N 2 H K H K X K U S  ( ( ) , ,  , N N 2 N N )  )  N H S K X K U , ( ,  , N 2 N ) N N H S X K U 2  , ( , N )  N H S K X K U , ( ,  , N 2 N ) (b)   (c)  N H K H S X K U   , ( ( ) , N 2 N )  N H S K X K U , ( ,  , N 2 N ) NH S X U N 2  ( ) ,  1 N R H S X K U   , ( , N )  N 2 N H S K X K U , ( ,  , N 2 N ) N H S X U ( , N 2 N )  N   1  N NR H S X K U   , ( , N )  N 2  1 N H S K X K U , ( ,  , N 2 N )   N  I S K X U ( ; , N 2 N )  NR H S K X K U  ( , ,  , N N 2  1 N )  N   1  H K X U ( , N 2 N )  N H K S X U ( , ,  N 2 N )  NR H S K X K U  ( , ,  , N N 2  1 N )  N   1 H K S (  N , N X U , 2 N )  NR H S K X K U  ( , ,  , N N 2  1 N )  N   1  H K (  )  (d)  H K (  )   (e)   1 N 2 NR H S K X K U  ( , ,  , N N 2 N )  N   1 N H S K X K U , ( ,  , N )  N   1 N   (    2 1 ) In the above equations, (a) follows from the independence of 2M from other random variables, (b) from the fact that the index k is one of the indices of Ns and the equality H K X K U S , ( ,  , N N 2 N )  holds. For proving (c), we use Lemma 0 1 which is given in part A of the Appendix. Equality (d) is true because the index k is also one of the indices of Ns . Finally for (e), we use Lemma 2 (which is given in part B of the Appendix) to show that: N H S K X K U , ( ,  , N 2 N )   N 2 . Similarly, the security condition for user 2’s key is satisfied as: I L M X ( ; , 1 N 1 ,  , L V N )    N   4 3  ( ) and so, the security conditions (6) and (7) of Definition 1 are satisfied when    i 1 2 ,i   , , 1 2 3 4 , . B. PROOF OF THEOREM 2 For deriving upper bound of the forward key capacity region, we use the reliable and secure transmission conditions. In the forward key strategy, users 1 and 2, respectively, generate the keys K and L for sharing with user 3: K K M X (  , 1 N 1 ), L L M X (  , 2 N 2 ) Then, users 1 and 2, respectively, generate 1F and 2F where F 1  f M X 1 1 ( , N 1 ), F 2  f M X 2 2 ( , N 2 ) and transmit them over the public channel so that user 3 can reconstruct K and L with an arbitrary probability of error 0  . According to Fano’s inequality: 14 1 N H K L M X ( , , 3 N 3 , F F , 1 2 )  H ( )    (log    1)   1 After reconstructing these keys, user 3 uses K and L as secret keys with users 1 and 2, respectively, and for arbitrarily small 0  , the following security conditions should be satisfied: I K M X ( ; , 2 N 2 , F 1 )  N . ,  I L M X ( ; , 1 N 1 , F 2 )  N .  Now, we show that for keys that satisfy the reliability and security conditions described above, there exist random variables U V S T that form Markov chains as mentioned in Theorem 2 and satisfy the following relations: , , , H K I S T X U I S X U ) 3 ( ; , ( ) ( ;   ) 2    H L I T S X V I T X V 3 ( ; , ( ) ( ;   ) 1 )    We prove upper bound for 1R . The proof for 2R can be deduced by symmetry. 
1 N (b)  (c)   (d)  (e)  H K ( ) (a)  1 N H K M X F , 2 1 ( , N 2 )   1 N 1 N 1 N 1 N 1 N H K M X F , 2 1 ( , N 2 )  1 N N H K M X F F L , ) 3 ( , , , 1 3 2     1 H K X F 1 ( , N 2 )  1 N N H K X F F L , , ) 3 ( , 2 1     1 I K X F L F I K X F [ ( ; 1 1 ( ;  ) , , 2 )]     1 N 3 N 2 N  i 1  I K X F L X [ ( ; , i 3, , 2 i 1  3,1 , X N i 2, 1  , F I K X i 2, 1 ( ;  ) X 1 i  3,1 , X N 2, 1 i  , F 1 )]     1 N  i 1  I S T X U I S X U [ ( ) i i ( ; i  i 2, i 3, ; , i i )]     1 (f)  [ ( I S T X U I S X U Q Q Q Q ; Q Q )- ( Q 2, 3, ; , )]    where (a) results from the security condition, (b) from Fano’s inequality, (c) from independence of ( M M from other ) , 2 3 random variables, (d) from Lemma 3 (which is given in part C of the Appendix), (e) from definition of the random variables U ,V ,S ,T as: U i  ( X 1 i  3,1 , X N 2, 1 i  , F V ), 1 i  ( X 1 i  3,1 , X N 1, 1 i  , F S ), 2 i  ( K U T i i ), ,  L V ( , i ) and (f) from definition of the random variable Q which is uniformly distributed on {1 2 , ,..., N and setting }  . 1     Similarly, by using the above mentioned variables we have: R 2  1 N H L ( )  I T S X [ ( ; , Q Q V I T X )- ( ; Q 3, Q Q V Q Q 1, )]   15 It can be seen that the desired equations are satisfied with random variables which form Markov chains as in Theorem 2. Construction of the Codebooks C. PROOF OF THEOREM 3 First, we describe random codebook generation at user 3. For a distribution p s t ( , ) , collection of codewords, ( s N N , t ) each uniformly drawn from the set N T P ( 1 S T , ) , is generated by user 3. Now, for a fixed distribution p u s t , ) ( , user 3 generates ( ( 2N I S T U  , ) ; 2 ) i.i.d. codewords of length N , NU a for ( ) a  {1,..., 2 N I S T U ( ( ; , )  2 ) } with distribution ( )p u . User 3 divides the typical sequences of Ns into 12NR bins with the same size in a uniformly random manner where   R 1 ( H S X T U R , 1  ( ) , 2 ) . The bin index of each bin is denoted as k and the corresponding random variable is denoted as K  . Also the codewords of each bin are randomly divided into 12NR bins with the same size and the bin index of the latter bins is denoted as k with the corresponding random variable K . It is obvious that in each internal bin with bin index k , there are 12NR typical sequences Ns where   R 1 I S X T U  ( ( ; , 1  ) , 2 ) which we use index k for them. Hence each typical codeword Ns can be uniquely determined with three indices as s N  k k k , ,  and vice versa. Also, user 3 divides typical sequences of Nt into 22NR bins with the same size in a uniformly random manner where   R 2 ( H T X S U R , 2  ) ( , 1 ) . The bin index of each bin is denoted as l and the corresponding random variable is denoted as L . Also the codewords of each bin are randomly divided into 22NR bins with the same size and the bin index of the latter bins is denoted as l with the corresponding random variable L . It is obvious that in each internal bin with bin index l , there are 22NR typical sequences Nt where   R 2 I T X S U  ( ( ; , 1  ) , 1 ) which we use index l for them. Hence each typical codeword Nt can be uniquely determined with three indices as N t l l ,  , l  and vice versa. Now, for every typical X N 3 N x 3 , all codewords ( s N N t which are jointly typical with ) , Nx , based on distribution 3 p s t x , are collected in a set which is denoted as ( , ) 3 N S T , ( N ) N x 3 . 
It is assumed that all the users are informed of the binning schemes and distributions used. Encoding For encoding, user 3 observes the i.i.d. sequence of NX e.g., 3 Nx and after selecting the corresponding set 3 N S T , ( N , ) N x 3 16 he randomly selects a sequence ( s N N , t ) from this set. Then, he chooses the respective row index ( k ) of the codeword Ns (as shown in Fig.4) as secret key with user 1 and sends the respective column index ( k ) of the codeword over the public channel. Also, he chooses the respective row index ( l ) of the codeword Nt (as shown in Fig.4) as secret key with user 2 and sends the respective column index ( l ) of the codeword over the public channel. In addition, user 3 sends index a of NU a which is jointly typical with the sequence ( ( ) s N , t N ) over the public channel. Decoding and Error Probability Analysis For decoding, users 1 and 2 receive the indices   , , k l a from the public channel and also observe the i.i.d. sequences NX and 1 NX e.g., 1 Nx and 2 Nx , respectively. User 1 decodes 2 s N  k k k , ,  if: ( s N  k k k , ,  , N x 1 )  ) N ( T  0 ( P S X U 1 , ) when such s N  k k k , ,  exists and is unique and otherwise he declares error. User 2 decodes N t l l ,  if:  , l N t ( l l ,  , l  , N x 2 )  ) N ( T  0 ( P T X U 2 , ) when such N t l l ,  , l  exists and is unique and otherwise he declares error. Now we analyze decoding error probability. We define: N ) ( P e  max{ ) N ( P e 1 , N ( P e 2 ) } where ) N ( eP 1 N and ( eP 2 ) are, respectively, decoding error probabilities at users 1 and 2. Without loss of generality, we assume Ns that the codewords 1,1,1 Nt and 1,1,1 Ns are chosen by user 3 and so, 1,1,1 Nt and 1,1,1 should be decoded by users 1 and 2, respectively. Events 1E and 2E are defined as: E k k k ( ,  , 1  )  {( N s  k k k , ,  , N x 1 )  ) N ( T  0 ( P S X U 1 , )} E l l ( , 2  , l  )  N t {( l l ,  , l  , N x 2 )  ) N ( T  0 ( P T X U 2 , )} The decoding error probabilities are bounded as: ) N ( P e 1  c P E { 1 (1,1,1) ( N t s , 1,1,1 1,1,1 N )chosen}     ) k k ( , (1,1) P E k { ( ,1, k  ) ( N t s , 1,1,1 1,1,1 N )chosen} ) N ( P e 2  P E { c 2 (1,1,1) ( N t s , 1,1,1 1,1,1 N )chosen}  l l ( ,    ) (1,1) P E l { ( ,1, l  ) ( N t s , 1,1,1 1,1,1 N )chosen} 17 According to the joint asymptotic equipartition property (AEP), decoding error probabilities can be bounded as: ) ) ( N P e 1 ( N P e 2   0  2   0  2  R N H S X U ( 11  ( 1 , )   0 )  R N H T X U ( 12  ( 2 , )   0 ) and if we set:   1   R 2 R H S X U ( , ) 1 H T X U ( , 2 ) or in other words: R H S X T U H S X U ,   ( ( ) , , I S X U I S X T U ( ;  ( ; ) , H T X S U H T X V )  ( ( , , , I T X U I T X S U ( ;  ( ; ) , )  )  1 2 1 2 2 1 ) ) 1 R 2  2 1 then for any 0  , 0 N ) ( eiP 02  for i  1, 2 and so N ) ( eP 02  and if we set 02  , then the reliability condition 5 in Definition 1 will be satisfied. It is obvious that the encoding scheme can satisfy the uniformity conditions (9) and (10) in Definition 1. Analysis of Security Conditions Now, we should analyze the security conditions (6) and (7) in Definition 1. 
User 2 attempts to obtain information about user 1’s key and to this end, he exploits the indices k , l and a : M , X and the information which is sent by user 3 over the public channel, i.e., N 2 2 N I K M X K L U ( 2  ,  , ; , , 2 N (a)  ) N I K X K L U ( , 2  ,  , ; N )   , I K X K L T U (  , ; , , N N 2 N )  I K X K T U ( , ; ,  , N N 2 N )  N I K L X K T U ( , 2  , N ; ,  N ) (b)  I K X K T U ( , ; ,  , N N 2 N )  H K H K X K T U  ( ( ) , ,  , N N 2 N )   N H K H K S X K T U   , ( ) ( N , , , N 2 H K H K X  ) ( ( N 2 , N K T U S ,  , N , N N )  )  N N H S K X K T U 2  , ( N , , , N ) N N H S X K T U , 2  , ( N , N H K H S X K T U   , ) ( ( N , , N 2 N )  N N H S K X K T U 2  , ( N , , , N N H S K X K T U 2  , ( N , , , N ) N )  N )  NH S X T U NR H S X K T U  1   , ( ( ) , , , , 2 N 2 N N N )  N N H S K X K T U 2  , ( N , , , N ) N N H S X T U 2 ( N , , N )  N   1  NR H S X K T U  ( , ,  , N N N 2  1 N )  H ( N N S K X K T U , 2  , N , , N ) N I S K X T U ( ; , ,  N N 2 N )  NR H S K X K T U ,  ( , ,  , N N 2  1 N )  N   1 N H K X T U 2 ( N , ,  N )  H K S X T U , ( , ,  N N N 2 N )  NR H S K X K T U ,  ( , ,  , N N N 2 N )  N   1 N  1 (c)   (d)     H K (  )  H K S X T U , ( , ,  N N N 2 N )  NR H S K X K T U ,  ( , ,  , N N N 2  1 (e)   (f )  H K (  )   NR H  1 ( N N S K X K T U , 2  , N , , N )  N   1 N N H S K X K T U 2  , ( N , , , N )  N   1 N   (    2 1 ) N )  N   1 18 In above equations, (a) follows from the independence of 2M from other random variables, (b) from the fact that given NT , L is impendent of other random variables, (c) from the fact that the index k is one of the indices of Ns and the equality  , H K X K T U S ( , , , N N N 2 N )  holds. For proving (d), we use the same approach as in Lemma 1 which is given 0 in part A of the Appendix. Equality (e) is true because the index k is also one of the indices of Ns . Finally for (f), we use the same approach as in Lemma 2 (which is given in part B of the Appendix) to show that: N N H S K X K T U 2  , ( N , , , N )   N 2 . Similarly, the security condition for user 2’s key is satisfied as: N I L M X K L U ( ; 1  ,  , , , 1 N )    N   4 3  ( ) and so, the security conditions (6) and (7) of Definition 1 are satisfied when    i 1 2 ,i   , , 1 2 3 4 , . D. PROOF OF THEOREM 4 For deriving upper bounds of the backward key capacity region, we use the reliable and secure transmission conditions. In the backward key strategy, user 3 generates the keys K and L for sharing with users 1 and 2, respectively: K K M X (  , (cid:0) 3 N 3 ), L L M X (  , (cid:0) 3 N 3 ) Also, it sends 3F over the public channel where 3 F  f M X 3 3 ( , )N 3 to enable users 1 and 2 to compute K and L , respectively, with an arbitrary probability of error 0  . According to Fano’s inequality: 1 N 1 N H K M X ( , 1 H L M X ( , 2 N 1 , F 3 )  H ( )    (log   1)   1 , N 2 , F 3 )  H ( )    (log   1)   2 Also the security conditions require that: I K M X ( ; , 2 N 2 , F 3 )  N ,  I L M X ( ; , 1 N 1 , F 3 )  N  Now, we derive upper bounds for 1R . The proofs for 2R can be deduced by symmetry. 
For the first upper bound of 1R : 19 1 N (b)  (c)   (d)  (e)  1 N 1 N 1 N 1 N 1 N 1 N 1 N 1 N 1 N 1 N 1 N 1 N  (b)  (c)  (d)    (e)  (f)  (g)  ( H K ) (a)  1 N , H K M X F 2 3 ( , N 2 )   [ ) H K M X F H K M X F  3 ( ( , , , , 1 2 3 )]     1 N 1 N 2 [ ) H K X F H K X F  3 ( ( , , 3 )]     1 N 1 N 2 [ ( ; I K X F 3 N 1 )  ( ; I K X F 3 N 2 )]     1 N  i 1  i [ ( ; I K X X 1 i 1, 1 N N  i 1  [ ( ; I S X U i i 1, i 1  , X N i 2, 1  , F 3 )  i ( ; I K X X 1 i 2, 1  , X N i 2, 1  , F 3 )]   1   )]  ( ; I S X U i i 2, i )]     1 (f)  [ ( I S X U I S X U Q Q )- ( Q Q Q Q 2, ; ; 1, )]    where (a) results from the security condition, (b) from Fano’s inequality at user 1, (c) from independence of ( M M ) , 1 2 from other random variables, (d) from Lemma 3 (in which the random variable 2F is set to be constant), (e) from definition of the random variables U ,S ,T as: U i  ( X 1  i 1 , X N 2, 1 i  , F 3 S ), i  ( K U T i i ), ,  L U ( , ) i and (f) from definition of the random variable Q which is uniformly distributed on {1 2 , ,..., N and setting }    1  .   For the second upper bound of 1R , we have: 1 N H K ( ) (a)  1 N H K M X F , 2 3 ( , N 2 )    1 N H K L M X F 3 ( , , , 2 N 2 )   ) , H K M X F L ( , , ) , H K M X F L ( , ,  1 N , H L M X F 2 3 ( , N 2 )       2 3 3 2 2 N 2 N 2 N 2 ) H K M X F L H K M X F 3  ( ( , , , , , 1 3 2 )]      1 2  N 1 , ) H K X F L H K X F 3  ( ( , , 3 )]      1 2  N 1 N 2 ( H K X N 2 , F L H K X F L ( 3  ) , , , 3 )] N 1      1 2  N I K X F L [ ( ) 1 ; , 3 N  i 1  i I K X X [ ( ; 1 1, i  I K X ( ; N 2 , F L , 3 )]      1 2  1  , X N i 2, 1  , F L , ) 3  i I K X X ( 1 2, ; i 1  , X N i 2, 1  , F L , 3 )]      1 2  N  i 1  [ ( I S X U T 1, i i ; , i i )]  ( I S X U T 2, i i ; , i i )]      1 2  I S X U T I S X U T [ ( , Q Q , Q Q )- ( 2, Q Q Q Q 1, ; ; )]  where (a) results from the security condition, (b) from Fano’s inequality at user 2, (c) from Fano’s inequality at user 1, (d) from independence of ( M M from other random variables, (e) from Lemma 3 (in which the random variable ) , 1 2 2F is set 20 to be constant), (f) from definition of the random variables U ,S ,T as above and (g) from definition of the random variable Q as above and setting       .   1 2 Following the same approach, upper bounds for 2R can be deduced and so Theorem 4 is proved for some random variables with distribution U T X   . 3 ( , p u s t x x x , , 3 , , 1 2 )  ( p u s t p s t x p x x x 3 ( , , ) ( ) , , 2 1 3 ) which form Markov chains as U S X   and 3 VI. CONCLUSIONS In this paper, a source model for secret key generation were studied in which each of users 1 and 2 intends to share a secret key with user 3 where user 1 acts as a wiretapper for user 2 and vice versa. Three users could observe i.i.d outputs of correlated sources and there is a public channel between users. In the described model, the forward and backward key strategies were considered based on the direction of the public channel, i.e., from users 1 and 2 to user 3 or in the reverse direction. For both the forward and backward key strategies, inner and outer bounds of secret key capacity regions were derived. Our results also include the results of previous works such as [1]. Our upper and lower bounds did not coincide generally but some special cases were considered where these bounds were tight. 
As the continuation of this work, we are now exploring a model similar to the described model but instead of the public channel, there is a generalized multiple access channel (MAC) between the terminals, where users 1 and 2 govern the inputs of the MAC and outputs are received by users 1, 2 and 3. Also as the future works, we can suggest the same problem of this paper for the situation where there is a two-way public channel i.e., from users 1 and 2 to user 3 and vice versa. Also unlimited usage of the public channel can be viewed as a generalization of the problem. APPENDIX A. LEMMA 1 For sufficiently large N and sufficiently small 1 , we have: NH S X U ( , 2 )  N H S X U ( , N 2 N )  N 1 Proof: We use the indicator function:  s x u ( , , ) 2  1, (     N s , N x 2 N , u )  ( N A   0 ) ( P S X U 2 , , ) 0, otherwise 21 We have: N I S X U ( ; , N 2 N )  I S ( N , ;  N X U , 2 N ) and hence: N H S X U ( , N 2 N )  H S ( N )  I S ( N , ;  N X U , 2 N )  NH S ( )  I S ( N , ;  N X U , 2 N )  NH S ( )  I S X U ( ; , N N 2 N I ;    ( ) N X U , 2 N )  NH S ( )  P (   1) ( I S X U ; , N N 2 N  1)   P (   0) ( I S X U ; , N N 2 N   0)  I ( ;  N X U , 2 N ) We analyze the above terms one by one. For the second term: N   1)  NP s [( N , N x 2 N , u )  N ( A   0 ) ( P S X U 2 , , )]log  N   0 log  N   0)  N I S X U ( ; , N 2 N   0) P s ( N , N x 2 N , u P s )[log ( N , N x 2 N , u )  P s log ( N )  log ( N P x 2 N , u )] P (   1) ( I S X U ; , N N 2 For the third term: P (   0) ( N N 2 , I S X U ;  ( N A )   0 P ,2 S X U ) ( ) , N ( s , N x 2 , u N   N H S H X U H S X U , 2 ( ) ( ,   ( ( ) , 2 ) 3    0 )  N I S X U ( ( ; , 2 ) 3    0 ) For the forth term: I ( ;  N X U , 2 N ) ( H   ) 1 Finally, we can deduce: N H S X U ( , N 2 N )  NH S ( )  N   0 log   N I S X U ( ( ; , 2 ) 3    0 ) 1   N H S X U ( ( , 2 )   log 1  ( ))   0 N    1  3  0  For sufficiently large N and sufficiently small 2 , in the forward key strategy, we have: B. LEMMA 2 N H S K X K U , ( ,  , N 2 N )   N 2 Proof: For fixed k and k  , we assume that user 1 transmits a codeword s N  k k k , ,  where 1 k  2NR 1 , 1  k   1 2NR and 1  k  2NR  1 . First, we show that user 2 can decode s N  k k k , ,  with sufficiently small probability of error if it has access to 22 sequences ,N  , k k x 2 , u N . User 2 selects k so that: ( N s  k k k , ,  , N x 2 u , ) ( N ) P A (  3 , S X U 2 , ) if such k exists and it is unique, otherwise we declare error. With the assumption that N k ks ,  ,1 is sent by user 1, error occurred when ( N s k k ,  ,1 , N x 2 u , ) ( ) N P A (  3 , S X U 2 , ) or when ( N s  k k k , ,  , N x 2 u , ) ( N ) P A (  3 , S X U 2 , ) for k  . Due to joint AEP: 1 ( ) N P A (  3 S X U 2 , , ))    3 P s (( N k k , , N x 2  ,1 u , )  and also: P s {(  1 k N  k k k , ,  , N x 2 u , )  ( N A   3 ) ( P S X U 2 , , )} 2  NR N I S X U ( ( ; ,   1 2    3 )  2 N    (  3 1 ) So, we can bound decoding error of user 2 as: eP   3  2N    ( 3 1 ) and by choosing    , we can make max{ , } 1 0 3 eP sufficiently small. Now, we exploit Fano’s inequality to obtain: 1 N N H S K X K U , ( ,  , N 2 N )  1 N [1   P R ] 1 e  1 1  N N  (  3  2 N  [   3 1 ] )[ ( ; I S X U , 2 )    ]   2 1 This lemma is a modified version of Lemma 1 in [1]. C. 
LEMMA 3 For arbitrary random variables ,K F F and sequences of random variables , 1 2 X ,N 2 X we have [1]: N 3 I K X [ ( ; N 3 , F F 2 1 )  I K X F ( 1 ; N 2 )]  N  i 1  I K F X [ ( ; , 2 X i 1  3,1 , X N i 2, 1  , F 1 )  I K X ( ; 2, i X i 1  3,1 , X N i 2, 1  , F 1 )] 3, i Proof: First, we consider the right hand side of the above inequality: N  i 1  I K F X X [ ( , 2 3, i ; 1 i  3,1 , X N 2, 1 i  , F 1 )  I K X X ( ; 2, i 1 i  3,1 , X N 2, 1 i  , F 1 N  )]= [ 1 i  H K X ( 1 i  3,1 , X N 2, i , F H K X 1  ( ) i 3,1 , X N 2, 1 i  , F F , 1 2 )]  H K X ( N 2,1 , F 1 )  1  H K X [ ( N  i 1  i 3,1 , X N 2, 1 i  , F H K X 1  ( ) i 3,1 , X N 2, 1 i  , F F , 1 2 )]  H K X ( N 3,1 , F F , 1 2 )  I K F X F ( ; , 2 1 N 3,1 )  I K X F ( ; 1 N 2,1 )  N 1  I K F X [ ( ; i  )] 3,1 1 i   0  N 2, 1 i  F 1 X , , 2  I K F X F ( ; , 2 1 N 3,1 )  I K X F ( ; 1 N 2,1 ) 23 REFERENCES [1] R. Ahlswede and I. Csisz´ar, “Common randomness in information theory and cryptography, part I: Secret sharing,” IEEE Trans. Inf. Theory, vol. 39, no. 4, pp. 1121–1132, Jul. 1993. [2] I. Csisz´ar and J. K¨orner, “Broadcast channels with confidential messages,” IEEE Trans. Inf. Theory, vol. 24, no. 3, pp. 339–348, May 1978. [3] I. Csisz´ar and P. Narayan, “Secrecy capacities for multiple terminals,” IEEE Trans. Inf. Theory, vol. 50, no. 12, pp. 3047–3061, Dec. 2004. [4] I. Csisz´ar and P. Narayan, “Secrecy capacities for multiterminal channel model,” IEEE Trans. Inf. Theory, vol. 54, no. 6, pp. 2437–2452, Jun. 2008. [5] A. A. Gohari and V. Anantharam, “New bounds on the information-theoretic key agreement of multiple terminals”, in Proc. IEEE Int. Symp. Inf. Theory (ISIT), Toronto, Canada, pp. 742-746, Jul. 2008. [6] A. A. Gohari and V. Anantharam, “Information-theoretic key agreement of multiple terminals - Part I: Source model”, IEEE Trans. Inf. Theory, submitted, Jun. 2008. [7] A. A. Gohari and V. Anantharam, “Information-theoretic key agreement of multiple terminals - Part II: Channel model”, IEEE Trans. Inf. Theory, submitted, Jun. 2008. [8] U. M. Maurer, “Secret key agreement by public discussion from common information,” IEEE Trans. Inf. Theory, vol. 39, no. 3, pp. 733–742, May 1993. [9] U. Maurer and S. Wolf, “Information-theoretic key agreement: From weak to strong secrecy for free,” in Proc. EUROCRYPT'2000, LNCS, vol. 1807, Bruges, Belgium: Springer-Verlag, pp. 351–368, May 2000. [10] R. Renner and S. Wolf, “New bounds in secret-key agreement: the gap between formation and secrecy extraction,”in Proc. EUROCRYPT’03, LNCS, Warsaw, Poland: Springer-Verlag, pp. 562-577, May 2003. [11] C. E. Shannon, “Communication theory of secrecy systems,” AT&T Bell Labs. Tech. J., vol. 28, pp. 656–715, 1949. [12] D. Slepian and J. K. Wolf, “Noiseless coding of correlated information sources,” IEEE Trans. Inf. Theory, vol. 19, no. 4, pp. 471– 480, Jul. 1973. [13] A. Wyner, “The wire-tap channel,” AT&T Bell Labs. Tech. J., vol. 54, pp. 1355–1387, 1975. [14] C. Ye and P. Narayan, “The private key capacity region for three terminals,” in Proc. IEEE Int. Symp. Inf. Theory (ISIT), Chicago, USA, p. 44, Jun. 2004. [15] C. Ye and A. Rezenik, “Group secret key generation algorithms,” in Proc. IEEE Int. Symp. Inf. Theory (ISIT), Nice, France, pp. 2596-2600, Jun. 2007. 24
synthetic_cpt
4
DART-Math_Difficulty-Aware_Rejection_Tuning_for_Mathematical_Problem-Solving.pdf
CONNECTEDNESS OF THE DART DIGRAPH AND THE SQUARED-DART DIGRAPH

PRIMOŽ POTOČNIK AND STEVE WILSON

Abstract. In this note we revisit the dart digraph and the squared dart digraph constructions and prove that they yield strongly connected digraphs when applied to connected graphs of minimum valence at least 3.

2000 Mathematics Subject Classification. 20B25. Key words and phrases. digraph, graph, transitive, product. Supported in part by the Slovenian Research Agency, projects J1-5433, J1-6720, and P1-0294.

1. Introduction

In [1] and [3, Section 4], two constructions, called a dart digraph and a squared dart digraph, were introduced. The second of these is a directed form of a graph introduced in [2]. The purpose of this note is to prove that these two constructions yield strongly connected digraphs whenever applied to connected graphs. All the graphs and digraphs in this note are considered simple. More precisely, we define a digraph to be a pair (V, D) in which V is a finite non-empty collection of things called vertices and D is a collection of ordered pairs of distinct vertices. An element (u, v) of D will be called a dart with initial vertex u and terminal vertex v. A 2-dart of a digraph (V, D) is a pair (x, y) of darts in D such that the terminal vertex of x coincides with the initial vertex of y while the initial vertex of x does not coincide with the terminal vertex of y. If for every dart (u, v) of a digraph Λ also its reverse (u, v)⁻¹ = (v, u) is a dart, then Λ is called a graph. In this case, we call the pair {(u, v), (v, u)} of darts an edge of Λ. We are now ready to define the dart digraph and the squared dart digraph of a given graph Λ with the set of darts D. The dart digraph of Λ is the digraph D(Λ) with vertices and darts being the darts and 2-darts of Λ, respectively. Similarly, let the squared dart digraph of Λ be the digraph A2D(Λ) with vertex-set D × D and with a pair ((x, y), (z, w)), x, y, z, w ∈ D, being a dart of A2D(Λ) if and only if y = z and (x, w) is a 2-dart of Λ. Recall that a digraph is said to be strongly connected provided that for any two vertices u, v, there is a directed path from u to v (we then say that v is accessible from u), as well as one from v to u.

2. Results

The first of our results is a simple observation about bipartiteness of the dart digraph and the squared dart digraph. (A digraph is said to be bipartite if its underlying graph is bipartite.)

Lemma 2.1. If Λ is a bipartite graph, then D(Λ) and A2D(Λ) are also bipartite.

Proof. Colour the vertices of Λ properly black and white and let a dart of Λ inherit the colour of its initial vertex; this then determines a proper colouring of the vertices of D(Λ); in particular, D(Λ) is bipartite. Further, colour a vertex (x, y) of A2D(Λ) blue if the darts x, y of Λ are of the same colour as vertices in D(Λ) (either black or white), and red otherwise. This is then clearly a proper colouring of the vertices of A2D(Λ). □

We will now introduce a few auxiliary notions needed to analyse connectedness of the dart digraph and the squared dart digraph. An s-arc in a graph Λ is a walk of length s in which no two of any three consecutive vertices are the same; alternatively, it is a sequence of darts in Λ such that any two consecutive darts form a 2-dart.
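To make the two constructions concrete, here is a short self-contained sketch (ours, not from the paper; all function names are made up) that builds D(Λ) and A2D(Λ) for a graph given as an edge list and tests strong connectivity by breadth-first search in both directions. It can be used to sanity-check the connectedness results proved below on small examples such as K4.

```python
from collections import deque
from itertools import product

def darts(edges):
    # Each undirected edge {u, v} contributes the two darts (u, v) and (v, u).
    return [(u, v) for u, v in edges] + [(v, u) for u, v in edges]

def is_2dart(x, y):
    # (x, y) is a 2-dart: the terminal vertex of x is the initial vertex of y,
    # and y is not simply the reverse of x.
    return x[1] == y[0] and x[0] != y[1]

def dart_digraph(edges):
    V = darts(edges)
    A = [(x, y) for x in V for y in V if is_2dart(x, y)]  # brute-force scan
    return V, A

def squared_dart_digraph(edges):
    D = darts(edges)
    V = list(product(D, D))
    # ((x, y), (z, w)) is a dart of A2D iff z == y and (x, w) is a 2-dart.
    A = [((x, y), (y, w)) for x in D for y in D for w in D if is_2dart(x, w)]
    return V, A

def strongly_connected(V, A):
    # Strong connectivity: every vertex reachable from a fixed v0, and v0
    # reachable from every vertex (BFS on the digraph and on its reverse).
    succ = {v: [] for v in V}
    pred = {v: [] for v in V}
    for a, b in A:
        succ[a].append(b)
        pred[b].append(a)

    def reachable(start, nbrs):
        seen, queue = {start}, deque([start])
        while queue:
            for n in nbrs[queue.popleft()]:
                if n not in seen:
                    seen.add(n)
                    queue.append(n)
        return seen

    v0 = V[0]
    return len(reachable(v0, succ)) == len(V) and len(reachable(v0, pred)) == len(V)

# K4 is connected with every vertex of valence 3, so Theorem 2.4 below
# predicts that both constructions yield strongly connected digraphs.
K4 = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
print(strongly_connected(*dart_digraph(K4)))          # expected: True
print(strongly_connected(*squared_dart_digraph(K4)))  # expected: True
```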
An arc-cycle is a closed walk which is also an s-arc for some s, and in addition, if it begins with (a, b) and ends with (c, a), then c is required to be different from b. Note that any cyclic shift of an arc-cycle is also an arc-cycle. Observe that an s-arc in Λ corresponds to a directed walk in D(Λ) of length s − 1, and an arc-cycle in Λ of length s corresponds to a directed closed walk of length s in D(Λ). An s-arc, written as a sequence [a0, a1, a2, . . . , as−1, as] of vertices, is a balloon if a0, a1, . . . , as−1 are pairwise distinct and as = ai for some i ∈ {1, 2, . . . , s − 3}. The arc (a0, a1) is then called the beginning of the balloon.

Lemma 2.2. Let Λ be a graph in which every vertex has valence at least 3 and let (u, v) be a dart of Λ. Then (u, v) is the beginning of some balloon in Λ.

Proof. Let Λ′ be the connected component containing v of the graph obtained from Λ by removing the vertex u and all of the edges incident to u. Since every vertex of Λ′ keeps valence at least 2 (each vertex of Λ has valence at least 3 and loses at most the single edge joining it to u), Λ′ is not a tree. Hence Λ′ contains a cycle, say C = a0a1 . . . ak with ak = a0. Let vv1 . . . vm be a path from v to C in Λ′. Without loss of generality we may assume that vm = a0. Then [u, v, v1, . . . , vm, a1, a2, . . . , ak] is a balloon in Λ starting with (u, v). □

Lemma 2.3. Let Λ be a graph in which every vertex has valence at least 3. Then the greatest common divisor of the lengths of all arc-cycles in Λ is at most 2.

Proof. Let C be a cycle in Λ, let m be its length, let uv be an edge of C, let a be a neighbour of u other than its neighbours in the cycle C, and let b be that for v. Let α, β be balloons beginning with (u, a) and (v, b), respectively. Then the walk beginning at u, following α out to and around its cycle and back to u along the initial part of α, then in one step to v, then following β out to and around its cycle and back to v following the initial part of β, then finally from v back in one step to u is an arc-cycle γ of some length n. Replacing that last step from v to u by the path formed from C by removing the edge {u, v} gives an arc-cycle of length m + n − 2. As the greatest common divisor of m, n, and m + n − 2 is at most 2, the result follows. □

Theorem 2.4. If Λ is a connected simple graph in which every vertex has valence at least 3, then D(Λ) and A2D(Λ) are strongly connected.

Proof. Let ∆ = D(Λ). We begin the proof of the strong connectivity of ∆ by proving two claims:

Claim 1: Let x = (u, v) be a dart of Λ and let x⁻¹ = (v, u) be its inverse dart. Then there exists a directed walk from x to x⁻¹ in ∆.

Indeed: By Lemma 2.2, there exists a balloon α = [a0, a1, a2, . . . , as−1, as] in Λ, beginning with x (that is, a0 = u and a1 = v). Let i ∈ {1, . . . , s − 2} be such that as = ai. Then β = [a0, a1, a2, . . . , as−1, as = ai, ai−1, ai−2, . . . , a2, a1, a0] is an (s + i)-arc in Λ, yielding a directed walk from x to x⁻¹ in ∆. This proves Claim 1.

Claim 2: If e and f are two edges in Λ, then there exists a directed walk in ∆ from some x to some y such that the underlying edges of x and y are e and f, respectively.

To prove this, consider a shortest path va1a2 . . . akw from e to f. Then e = {u, v} and f = {w, z} for some vertices u and z of Λ such that a1 ≠ u and ak ≠ z. But then (u, v) (v, a1) (a1, a2) . . . (ak−1, ak) (ak, w) (w, z) is a directed walk in ∆ from x = (u, v) to y = (w, z), underlying e and f respectively. This proves Claim 2.

Note that strong connectivity of ∆ now follows directly from Claims 1 and 2. Namely, if x and y are two vertices in ∆ (and thus darts in Λ), then Claim 2 implies existence of a directed walk in ∆ from either x or x⁻¹ to either y or y⁻¹. By inserting directed walks (the existence of which is implied by Claim 1) from x to x⁻¹ and y⁻¹ to y, if necessary, one obtains a directed walk in ∆ from x to y.

Now we are ready to prove that A2D(Λ) is strongly connected. Let (x, y) and (w, z) be any two vertices in A2D(Λ). Then x, y, w and z are darts of Λ and hence vertices of ∆. Since ∆ is strongly connected, there are directed walks from x to w and from y to z, and moreover, we may choose these two walks so that each passes through every vertex of ∆. By Lemma 2.3, the greatest common divisor D of the lengths of all arc-cycles in Λ is at most 2. Thus, by inserting arc-cycles appropriately, we can cause the lengths of the two walks to differ by at most 1. Let these walks be α = [x = a0, a1, . . . , ak = w] and β = [y = b0, b1, . . . , bℓ = z] where |k − ℓ| ≤ 1. Here, each ai and bi is a dart in Λ and each (ai, ai+1) and (bi, bi+1) is a 2-dart. If Λ is not bipartite, then D = 1, and we can force k to be equal to ℓ. Then the sequence (a0, b0), (b0, a1), (a1, b1), (b1, a2), . . . , (ak, bk) is a directed walk of length 2k from (x, y) to (w, z). Now suppose that Λ is bipartite. Recall (see Lemma 2.1) that the vertices of A2D(Λ) can then be properly bi-coloured blue and red, where a vertex (x, y) is coloured blue whenever the initial vertices of x and y are at even distance in Λ. Since every vertex in A2D(Λ) has positive in- and out-valence, to prove that A2D(Λ) is strongly connected it suffices to show that every blue vertex is accessible from any other blue vertex, hence we may assume that the vertices (x, y) and (w, z) are blue. But then the directed walks α and β from x to w and from y to z must have the same parity. Thus, even though D = 2, we can again force k = ℓ, yielding a directed walk of length 2k from (x, y) to (w, z), as above. □

References

[1] A. Hill, S. Wilson, Four constructions of highly symmetric graphs, J. Graph Theory 71 (2012), 229–244.
[2] P. Potočnik, P. Spiga, G. Verret, Bounding the order of the vertex-stabiliser in 3-valent vertex-transitive and 4-valent arc-transitive graphs, J. Combin. Theory, Ser. B 111 (2015), 148–180.
[3] P. Potočnik, S. Wilson, The separated box product of two digraphs, to be put on arXiv.

Primož Potočnik, Faculty of Mathematics and Physics, University of Ljubljana, Jadranska 19, SI-1000 Ljubljana, Slovenia; also affiliated with: IMFM, Jadranska 19, SI-1000 Ljubljana, Slovenia. E-mail address: [email protected]

Steve Wilson, Northern Arizona University, Department of Mathematics and Statistics, Box 5717, Flagstaff, AZ 86011, USA. E-mail address: [email protected]
synthetic_cpt
4
Gecko_Versatile_Text_Embeddings_Distilled_from_Large_Language_Models.pdf
4 2 0 2 y a M 4 2 ] L C . s c [ 1 v 0 4 6 5 1 . 5 0 4 2 : v i X r a GECKO: Generative Language Model for English, Code and Korean Sungwoo Oh KIFAI∗ [email protected] Donggyu Kim KIFAI [email protected] Abstract We introduce GECKO, a bilingual large language model (LLM) optimized for Korean and English, along with programming languages. GECKO is pretrained on the balanced, high-quality corpus of Korean and English employing LLaMA architecture. In this report, we share the experiences of several efforts to build a better data pipeline for the corpus and to train our model. GECKO shows great efficiency in token generations for both Korean and English, despite its small size of vocabulary. We measure the performance on the representative benchmarks in terms of Korean, English and Code, and it exhibits great performance on KMMLU (Korean MMLU) and modest performance in English and Code, even with its smaller number of trained tokens compared to English-focused LLMs. GECKO is available to the open-source community under a permissive license. We hope our work offers a research baseline and practical insights for Korean LLM research. The model can be found at: https://huggingface.co/kifai/GECKO-7B 1 Introduction Recent advances in artificial intelligence yield significant breakthroughs in the development of large language models (LLMs). Many proprietary LLMs [1, 33, 44] demonstrate human-level performances across multiple languages and on a wide range of real-world tasks [5, 31, 51]. In response to this, the open-source community has released various open large language models [46, 17, 18, 8], striving to match the capabilities of proprietary models. While these open-source models have been mainly trained on English [46, 17] or designed for specific use-cases such as programming [29, 47, 13] and mathematics [28, 30, 50], there has been increasing demand for models proficient in other languages. This need has led to the emergence of open-source language models that show strong understanding of non-english languages such as Chinese [2, 49], Finnish [32], and Indonesian [41]. They achieve impressive performance by leveraging language-specific datasets at the pretraining phase [2, 49, 32, 41]. Several open-source models enhance their linguistic performance by employing the following strate- gies: 1) language-specific continuous pretraining [37], 2) vocabulary expansion [24]. These ap- proaches efficiently improve the cross-lingual capabilities of the monolingual models compared to the process of pretraining models from scratch, which requires massive computational resources and extensive data. Despite the achievements of previous Korean language models [34, 26, 21, 22], research on pre- training methods and applications for Korean LLM remains limited. To address this, we initiate the development of GECKO, a language model designed mainly for Korean, yet capable in English and programming languages. GECKO is pretrained from scratch, utilizing terabytes of textual data in ∗Korea Institute of Finance and Artificial Intelligence(KIFAI) is an open community aiming to research AI technologies and share the findings to the public. English 28% Korean 35% 37% Code 36% 24% 3% 5% 16% 16% Web Wiki News Book Patent Translation Figure 1: Distribution of pretraining data sources for bilingual language models. 
The left pie chart illustrates the proportional composition of the corpus by language, highlighting a balanced representation of 35% Korean, 28% English, and 37% code to accommodate low-resource language challenges. The right pie chart details the types of data utilized, with 36% web sources, 24% from Wikipedia, 16% from news articles, 16% from books, 5% from patents, and 3% from translated texts. This distribution supports efforts to enhance model performance by diversifying and balancing the training data across different types and languages. both Korean and English to secure a strong bilingual proficiency. In the remainder of this report, we share our efforts and contributions as follows: • Data preprocessing and training methods maintaining the balance between Korean and English • Demonstration of strong performance in Korean with only small amount of pretraining resources • Open-sourcing our model under a permissive license to encourage further researches and applications 2 Datasets 2.1 Sources Low-resource languages, such as Korean, have far fewer public data sources available, even if they contain data with copyright issues. In contrast, resource-rich languages like English have large, accessible data sources for training language models. Balancing Korean and English As shown in Figure 1, similar to other bilingual language models [2, 32], we aim to strike a balance between English and Korean in our pretraining corpus by down- sampling and up-sampling English and Korean corpus respectively. High quality Korean corpus Since abundant open-source corpora for languages such as English and code already exist, and their refinement and processing significantly impact on the performance of language models [36, 35], our focus has shifted more towards improving methods of data cleansing methods. However, because high-quality Korean corpora without licensing issues are extremely limited, we collect data from Web. Reasoning capability Additionally, research findings [48, 11, 16] indicate that incorporating code data in the pretraining phase enhances the reasoning ability of language models, along with the academic perspective that treats code data as its own language. This ultimately led us to utilize three main corpora for pretraining: English, code, and Korean. 2 Figure 2: Pipeline for cleansing corpus Language Alignment There is research [6] on using translation datasets consisting of different language pairs for the purpose of multilingual alignment in the pretraining phase. Adopting this methodology, we train our model to align languages between English and Korean. 2.2 Preprocessing We curate and process terabytes of Korean corpus, and utilize large scale open-source corpora for English and programming languages. A sophisticated pipeline for deduplication and cleaning of raw text is implemented to obtain high-quality data as shown in Figure 2. The primary objectives of this data processing are as follows: • Mitigate harmful content: Preprocess and use selective data in order to remove harmful, toxic, and biased content from the training corpus. [19, 42]. • Minimize data memorization: The data deduplication improves robustness and generaliza- tion of the models when exposed to new, unseen data, preventing it from merely replicating patterns and generating training examples [27, 20]. • Keep structure information: Utilizing structural corpus including tables and lists plays a crucial role in increasing model performance. 
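The cleaning and deduplication steps described above can be illustrated with a deliberately minimal sketch (ours; GECKO's actual filters and thresholds are not specified in this report, and production pipelines typically add fuzzy near-duplicate detection such as MinHash): documents containing flagged terms are dropped and exact duplicates are removed by hashing lightly normalized text, while document structure such as markdown and tables is left untouched.

```python
import hashlib
import re

def normalize(text):
    # Light normalization only: collapse runs of spaces/tabs; markdown and
    # table structure are deliberately preserved.
    return re.sub(r"[ \t]+", " ", text).strip()

def clean_and_dedup(docs, banned_terms=("some-banned-term",)):
    """Toy cleaning pass: drop documents containing flagged terms (a stand-in
    for a real harmful-content filter), then keep a single copy of each exact
    duplicate, identified by the hash of the normalized text."""
    seen, kept = set(), []
    for doc in docs:
        norm = normalize(doc)
        if any(term in norm.lower() for term in banned_terms):
            continue  # placeholder filter; real pipelines use richer criteria
        digest = hashlib.sha256(norm.encode("utf-8")).hexdigest()
        if digest not in seen:
            seen.add(digest)
            kept.append(norm)
    return kept

corpus = ["A  sample   document.", "A sample document.", "another document"]
print(clean_and_dedup(corpus))  # the two equivalent documents collapse to one
```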
The training corpus includes the processing and normalization of specialized datasets such as wikis, programming code, mathematical expressions, and expert contributions. This step focuses on leveraging the structural elements inherent in these data types while carefully preserving tags and markdown features as shown in Figure 3. These considerations allow the model to interpret and generate contextually informed and syntactically coherent outputs, significantly enhancing its utility across various applications. 3 Pretraining 3.1 Tokenizer We train GECKO tokenizer on the balanced corpus of Korean, English, and Code. Similar to other large language models [46, 17], we utilize the Byte Pair Encoding (BPE) algorithm and train the tokenizer using Hugging Face’s tokenizer. We treat all numbers as individual digits and segment unknown UTF-8 characters into bytes to avoid out-of-vocabulary issues. Additionally, we opt not to use NFKC normalization [9], recently reporting performance degradation on BPE-based tokenizers [15, 38, 25]. We set the total vocabulary size to 32,000, following research [12] on optimal vocabulary size that aims to balance computational efficiency and performance considering larger vocabularies demand more computational power during the inference. We measure the efficiency of GECKO tokenizer compared to others using the following formula: 3 Figure 3: Example of normalization for a wiki dataset: The left image displays the original data, while the right image shows the preprocessed and normalized data in markdown format. Efficiency = (cid:18) # of tokensmodel # of tokensGECKO (cid:19) × 100% (1) The metric evaluates the tokenization efficiency by comparing the total number of tokens produced by GECKO tokenizer and others. Our tokenizer demonstrates superior efficiency in processing Korean while maintaining comparable results in English and Code, contrasting to the models primarily trained in English. The result of efficiency comparison using C4 corpus [39] and The Stack [23] is illustrated in Table 1 and Figure 4. Table 1: Overall toeknizer efficiency with respect to GECKO. Tokenizer GECKO Polyglot-Ko LLaMA-2 Mistral Gemma GPT-4 Vocab. size Efficiency 32,000 100% 30,080 71% 32,000 86% 32,000 92% 256,000 109% 100,277 110% 3.2 Training Details GECKO adopts the classical decoder-only Transformer architecture used in LLaMA [46]. The AdamW optimizer is employed to train the model, setting β1 at 0.9 and β2 at 0.95. The optimizer is configured to warm up over 10,000 iterations with a linearly increasing learning rate that peaks at 3e-4 and then decays to 3e-5 according to a cosine schedule. The model is trained with 200 billion tokens using BF16 mixed precision. Rotary positional embedding is utilized to train longer context tokens up to 8192 in length, allowing longer sequences to be understood during pretraining. We use sequence packing [7] to assemble multiple training samples into a single sequence and use end-of-sequence token to separate the document sequences. 4 130 115 111 132 100 101 100 100 100 98 67 62 48 57 56 GECKO Polyglot-Ko LLaMA-2 Mistral GPT-4 ) % ( y c n e i c fi f E n e k o T 140 120 100 80 60 40 20 0 Korean English Code Figure 4: Comparative analysis of tokenizer efficiency across multiple language models. This graph illustrates the performance of various tokenizers, including GECKO, Polyglot-Ko, LLaMA-2, Mistral, and GPT-4, across Korean, English, and code text corpora. 
The y-axis represents token efficiency as a percentage, with higher values indicating superior encoding performance relative to the tokenizer of GECKO. This analysis highlights the varying efficiency levels each model exhibits, offering insights into how effectively each tokenizer encodes multilingual and coding data. The dashed red line at 100% serves as a benchmark for baseline efficiency. 3.3 Training Infrastructure We train our model on Google Cloud Platform and used TPUv4 with 256 chips, utilizing Fully Sharded Data Parallelism (FSDP) and model parallelism. Leveraging JAX [3, 10], we implement the single controller programming paradigm, which enables us to manage and parallelize our training efficiently using just one Python command. Table 2: Performance evaluations across different models and benchmarks Model LLaMA-2 7B Mistral 7B Gemma 7B Polyglot-Ko 5.8B GECKO KMMLU MMLU HumanEval MATH 4-shot pass@1 5-shot 5-shot 24.2 21.0 21.1 28.3 30.7 45.3 62.5 64.3 26.8 28.3 12.8 26.2 32.3 0.0 17.7 2.5 12.7 24.3 0.3 4.3 4 Evaluation We evaluate several pretrained open-source large language models (LLMs) released under permissive licenses. For performance assessment, we use standard academic benchmarks to evaluate knowledge and reasoning abilities [14], as well as coding [4] and mathematics [40]. For LLaMA-2 [46], Mistral [17], and Gemma [45], we directly quote the scores as reported in the Gemma technical report [45]. Additionally, for the Korean evaluation set KMMLU [43], we conduct our own evaluation in the same environment with previous works. The result is shown in Table 2. In terms of Korean understanding (KMMLU), GECKO shows better performance compared to the evaluated models. Our model also demonstrates moderate performance in coding and mathematics. 5 5 Conclusion GECKO is an open-source Korean pretrained LLM released under a permissive license. Our work can contribute to both academic research and the practical development of the large Korean language model pretraining. Our immediate goal is to release an improved version of the model with additional training resources. We are also preparing for instruction fine-tuning to evaluate GECKO’s instruction- following ability. We believe that open-sourcing artificial intelligence technologies helps create safer products, accelerate innovation, and expand markets. Acknowledgements We deeply thank the TRC Team at Google Cloud for their dedication and support, which significantly enhanced our research through provision of Cloud TPUs. References [1] Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al. Gpt-4 technical report. arXiv preprint arXiv:2303.08774, 2023. [2] Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, et al. Qwen technical report. arXiv preprint arXiv:2309.16609, 2023. [3] James Bradbury, Roy Frostig, Peter Hawkins, Matthew James Johnson, Chris Leary, Dougal Maclaurin, George Necula, Adam Paszke, Jake VanderPlas, Skye Wanderman-Milne, et al. Jax: composable transformations of python+ numpy programs. 2018. [4] Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, et al. Evaluating large language models trained on code. arXiv preprint arXiv:2107.03374, 2021. [5] Cheng-Han Chiang and Hung-yi Lee. Can large language models be an alternative to human evaluations? 
arXiv preprint arXiv:2305.01937, 2023. [6] Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. Palm: Scaling language modeling with pathways. Journal of Machine Learning Research, 24(240):1– 113, 2023. [7] Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Yunxuan Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, et al. Scaling instruction-finetuned language models. Journal of Machine Learning Research, 25(70):1–53, 2024. [8] Mike Conover, Matt Hayes, Ankit Mathur, Jianwei Xie, Jun Wan, Sam Shah, Ali Ghodsi, Patrick Wendell, Matei Zaharia, and Reynold Xin. Free dolly: Introducing the world’s first truly open instruction-tuned llm. Company Blog of Databricks, 2023. [9] Mark Davis and Martin Dürst. Unicode normalization forms, 2001. [10] Roy Frostig, Matthew James Johnson, and Chris Leary. Compiling machine learning programs via high-level tracing. Systems for Machine Learning, 4(9), 2018. [11] Yao Fu, Hao Peng, and Tushar Khot. How does gpt obtain its ability? tracing emergent abilities of language models to their sources. Yao Fu’s Notion, 2022. [12] Thamme Gowda and Jonathan May. Finding the optimal vocabulary size for neural machine translation. arXiv preprint arXiv:2004.02334, 2020. [13] Daya Guo, Qihao Zhu, Dejian Yang, Zhenda Xie, Kai Dong, Wentao Zhang, Guanting Chen, Xiao Bi, Y Wu, YK Li, et al. Deepseek-coder: When the large language model meets programming–the rise of code intelligence. arXiv preprint arXiv:2401.14196, 2024. 6 [14] Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt. Measuring massive multitask language understanding. Proceedings of the International Conference on Learning Representations (ICLR), 2021. [15] Jordan Hoffmann, Sebastian Borgeaud, Arthur Mensch, Elena Buchatskaya, Trevor Cai, Eliza Rutherford, Diego de Las Casas, Lisa Anne Hendricks, Johannes Welbl, Aidan Clark, et al. Training compute-optimal large language models. arXiv preprint arXiv:2203.15556, 2022. [16] Jie Huang and Kevin Chen-Chuan Chang. Towards reasoning in large language models: A survey. arXiv preprint arXiv:2212.10403, 2022. [17] Albert Q Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lucile Saulnier, et al. Mistral 7b. arXiv preprint arXiv:2310.06825, 2023. [18] Albert Q Jiang, Alexandre Sablayrolles, Antoine Roux, Arthur Mensch, Blanche Savary, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Emma Bou Hanna, Florian Bressand, et al. Mixtral of experts. arXiv preprint arXiv:2401.04088, 2024. [19] Nikhil Kandpal, Eric Wallace, and Colin Raffel. Deduplicating training data mitigates privacy risks in language models. In International Conference on Machine Learning, pages 10697– 10707. PMLR, 2022. [20] Aly Kassem, Omar Mahmoud, and Sherif Saad. Preserving privacy through dememorization: An unlearning technique for mitigating memorization risks in language models. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 4360–4379, 2023. [21] Ildoo Kim, Gunsoo Han, Jiyeon Ham, and Woonhyuk Baek. Kogpt: Kakaobrain korean(hangul) generative pre-trained transformer. https://github.com/kakaobrain/kogpt, 2021. [22] Hyunwoong Ko, Kichang Yang, Minho Ryu, Taekyoon Choi, Seungmu Yang, Sungho Park, et al. 
A technical report for polyglot-ko: Open-source large-scale korean language models. arXiv preprint arXiv:2306.02254, 2023. [23] Denis Kocetkov, Raymond Li, Loubna Ben Allal, Jia Li, Chenghao Mou, Carlos Muñoz Fer- randis, Yacine Jernite, Margaret Mitchell, Sean Hughes, Thomas Wolf, Dzmitry Bahdanau, Leandro von Werra, and Harm de Vries. The stack: 3 tb of permissively licensed source code. Preprint, 2022. [24] L. Junbum. llama-2-ko-7b (revision 4a9993e), 2023. [25] Teven Le Scao, Angela Fan, Christopher Akiki, Ellie Pavlick, Suzana Ili´c, Daniel Hesslow, Roman Castagné, Alexandra Sasha Luccioni, François Yvon, Matthias Gallé, et al. Bloom: A 176b-parameter open-access multilingual language model. 2023. [26] Junbum Lee. Kcbert: Korean comments bert. In Annual Conference on Human and Language Technology, pages 437–440. Human and Language Technology, 2020. [27] Katherine Lee, Daphne Ippolito, Andrew Nystrom, Chiyuan Zhang, Douglas Eck, Chris Callison-Burch, and Nicholas Carlini. Deduplicating training data makes language models better. arXiv preprint arXiv:2107.06499, 2021. [28] Aitor Lewkowycz, Anders Andreassen, David Dohan, Ethan Dyer, Henryk Michalewski, Vinay Ramasesh, Ambrose Slone, Cem Anil, Imanol Schlag, Theo Gutman-Solo, et al. Solving quantitative reasoning problems with language models. Advances in Neural Information Processing Systems, 35:3843–3857, 2022. [29] Raymond Li, Loubna Ben Allal, Yangtian Zi, Niklas Muennighoff, Denis Kocetkov, Chenghao Mou, Marc Marone, Christopher Akiki, Jia Li, Jenny Chim, et al. Starcoder: may the source be with you! arXiv preprint arXiv:2305.06161, 2023. 7 [30] Haipeng Luo, Qingfeng Sun, Can Xu, Pu Zhao, Jianguang Lou, Chongyang Tao, Xiubo Geng, Qingwei Lin, Shifeng Chen, and Dongmei Zhang. Wizardmath: Empowering mathematical rea- soning for large language models via reinforced evol-instruct. arXiv preprint arXiv:2308.09583, 2023. [31] Xiaoliang Luo, Akilles Rechardt, Guangzhi Sun, Kevin K Nejad, Felipe Yáñez, Bati Yil- maz, Kangjoo Lee, Alexandra O Cohen, Valentina Borghesani, Anton Pashkov, et al. Large language models surpass human experts in predicting neuroscience results. arXiv preprint arXiv:2403.03230, 2024. [32] Risto Luukkonen, Jonathan Burdge, Elaine Zosa, Aarne Talman, Ville Komulainen, Väinö Hatanpää, Peter Sarlin, and Sampo Pyysalo. Poro 34b and the blessing of multilinguality. arXiv preprint arXiv:2404.01856, 2024. [33] Claude Models. Model card and evaluations for claude models, 2023. [34] Jangwon Park. Koelectra: Pretrained electra model for korean. https://github.com/ monologg/KoELECTRA, 2020. [35] Guilherme Penedo, Hynek Kydlíˇcek, Leandro von Werra, and Thomas Wolf. Fineweb, 2024. [36] Guilherme Penedo, Quentin Malartic, Daniel Hesslow, Ruxandra Cojocaru, Alessandro Cappelli, Hamza Alobeidli, Baptiste Pannier, Ebtesam Almazrouei, and Julien Launay. The refinedweb dataset for falcon llm: outperforming curated corpora with web data, and web data only. arXiv preprint arXiv:2306.01116, 2023. [37] Kunat Pipatanakul, Phatrasek Jirabovonvisut, Potsawee Manakul, Sittipong Sripaisarnmongkol, Ruangsak Patomwong, Pathomporn Chokchainant, and Kasima Tharnpipitchai. Typhoon: Thai large language models. arXiv preprint arXiv:2312.13951, 2023. [38] Jack W Rae, Sebastian Borgeaud, Trevor Cai, Katie Millican, Jordan Hoffmann, Francis Song, John Aslanides, Sarah Henderson, Roman Ring, Susannah Young, et al. Scaling language models: Methods, analysis & insights from training gopher. arXiv preprint arXiv:2112.11446, 2021. 
[39] Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. Exploring the limits of transfer learning with a unified text-to-text transformer. Journal of machine learning research, 21(140):1–67, 2020. [40] Hill Saxton, Grefenstette and Kohli. Analysing mathematical reasoning abilities of neural models. arXiv:1904.01557, 2019. [41] AI Singapore. Sea-lion (southeast asian languages in one network): A family of large language models for southeast asia. https://github.com/aisingapore/sealion, 2023. [42] Luca Soldaini, Rodney Kinney, Akshita Bhagia, Dustin Schwenk, David Atkinson, Russell Authur, Ben Bogin, Khyathi Chandu, Jennifer Dumas, Yanai Elazar, et al. Dolma: An open corpus of three trillion tokens for language model pretraining research. arXiv preprint arXiv:2402.00159, 2024. [43] Guijin Son, Hanwool Lee, Sungdong Kim, Seungone Kim, Niklas Muennighoff, Taekyoon Choi, Cheonbok Park, Kang Min Yoo, and Stella Biderman. Kmmlu: Measuring massive multitask language understanding in korean. arXiv preprint arXiv:2402.11548, 2024. [44] Gemini Team, Rohan Anil, Sebastian Borgeaud, Yonghui Wu, Jean-Baptiste Alayrac, Jiahui Yu, Radu Soricut, Johan Schalkwyk, Andrew M Dai, Anja Hauth, et al. Gemini: a family of highly capable multimodal models. arXiv preprint arXiv:2312.11805, 2023. [45] Gemma Team, Thomas Mesnard, Cassidy Hardin, Robert Dadashi, Surya Bhupatiraju, Shreya Pathak, Laurent Sifre, Morgane Rivière, Mihir Sanjay Kale, Juliette Love, et al. Gemma: Open models based on gemini research and technology. arXiv preprint arXiv:2403.08295, 2024. [46] Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288, 2023. 8 [47] Yue Wang, Hung Le, Akhilesh Deepak Gotmare, Nghi DQ Bui, Junnan Li, and Steven CH Hoi. Codet5+: Open code large language models for code understanding and generation. arXiv preprint arXiv:2305.07922, 2023. [48] Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in neural information processing systems, 35:24824–24837, 2022. [49] Alex Young, Bei Chen, Chao Li, Chengen Huang, Ge Zhang, Guanwei Zhang, Heng Li, Jiangcheng Zhu, Jianqun Chen, Jing Chang, et al. Yi: Open foundation models by 01. ai. arXiv preprint arXiv:2403.04652, 2024. [50] Xiang Yue, Xingwei Qu, Ge Zhang, Yao Fu, Wenhao Huang, Huan Sun, Yu Su, and Wenhu Chen. Mammoth: Building math generalist models through hybrid instruction tuning. arXiv preprint arXiv:2309.05653, 2023. [51] Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric Xing, et al. Judging llm-as-a-judge with mt-bench and chatbot arena. Advances in Neural Information Processing Systems, 36, 2024. 9
synthetic_cpt
8
Quality_Matters_Evaluating_Synthetic_Data_for_Tool-Using_LLMs.pdf
Quality Measures in Biometric Systems

Fernando Alonso-Fernandez1, Member, IEEE, Julian Fierrez, Member, IEEE, Javier Ortega-Garcia, Senior Member, IEEE

Abstract— Biometric technology has been increasingly deployed in the last decade, offering greater security and convenience than traditional methods of personal recognition. But although the performance of biometric systems is heavily affected by the quality of biometric signals, prior work on quality evaluation is limited. Quality assessment is a critical issue in the security arena, especially in challenging scenarios (e.g. surveillance cameras, forensics, portable devices or remote access through Internet). Different questions regarding the factors influencing biometric quality and how to overcome them, or the incorporation of quality measures in the context of biometric systems, have to be analysed first. In this paper, a review of the state-of-the-art in these matters is provided, giving an overall framework of the main factors related to the challenges associated with biometric quality.

Index Terms— Biometrics, security, quality assessment, sample quality.

I. INTRODUCTION

The increasing interest in biometrics is related to the number of important applications where a correct assessment of identity is crucial. Biometrics refers to the automatic recognition of an individual based on anatomical (e.g., fingerprint, face, iris, hand geometry) or behavioural characteristics (e.g., signature, gait, keystroke dynamics) [1]. Biometrics offers greater convenience and several advantages over traditional security methods based on something that you know (e.g. password, PIN) or something that you have (e.g. card, key). In biometric systems, users do not need to remember passwords or PINs, which can be forgotten, or carry cards or keys, which can be stolen. Since the establishment of biometrics as a specific research area in the late '90s, the biometric community has focused its efforts on the development of accurate recognition algorithms. Nowadays, biometric recognition is a mature technology, used in many government and civilian applications such as e-Passports, ID cards, or border control. Examples include the US-VISIT fingerprint system, the Privium iris system (Amsterdam Airport) or the SmartGate face system (Sydney Airport). But, during the last few years, the problem of quality measurement has emerged as an important concern in the biometric community after the poor performance observed on pathological samples [2]. It has been demonstrated by several studies and technology benchmarks that the performance of biometric systems is heavily affected by the quality of biometric signals, e.g. see Figure 1. This operationally important step is nevertheless under-researched in comparison to the primary feature extraction or pattern recognition task. The performance degradation observed in less controlled situations is one of the main challenges facing biometric technologies [3]. The proliferation of portable hand-held devices with biometric acquisition capabilities or recognition at-a-distance and on-the-move are just two examples of non-ideal scenarios not yet sufficiently mature, which require robust recognition algorithms capable of handling a range of changing characteristics [1]. A quantitative example of the degradation observed in these scenarios can be seen in Figure 2. Another important example is forensics, in which intrinsic operational factors further degrade the recognition performance and are generally not replicated in controlled studies [4]. There are a number of factors that can affect the quality of biometric signals, and there are numerous roles of a quality measure in the context of biometric systems. Standardization bodies are also incorporating quality measures into existing data storage and exchange formats. This paper summarizes the state-of-the-art in the biometric quality problem, giving an overall framework of the different related factors.

II. WHAT IS BIOMETRIC SAMPLE QUALITY?

It has not been until the last few years that a consensus has formed about what biometric sample quality is. Broadly, a sample is of good quality if it is suitable for personal recognition. Recent standardization efforts (ISO/IEC 29794-1) have established three components of biometric sample quality, see Figure 3: i) character (inherent discriminative capability of the source), ii) fidelity (degree of similarity between a sample and its source, attributable to each step through which the sample is processed); and iii) utility (impact of the individual biometric sample on the overall performance of a biometric system). The character of the sample source and the fidelity of the processed sample contribute to, or similarly detract from, the utility of the sample [3]. It is generally accepted that a quality metric should most importantly mirror the utility of the sample, so that samples assigned higher quality lead to better identification of individuals [3]. Thus, quality should be predictive of recognition performance. This statement, however, is largely subjective: not all recognition algorithms work equally (i.e. they are not based on the same features), and their performance is not affected by the same factors. For example, a face recognition algorithm "A" can be insensitive to illumination changes, whereas another algorithm "B" can be severely affected by changes in illumination. In this

1 F. Alonso-Fernandez (correspondence author) is with Halmstad University, Box 823, SE 301-18 Halmstad, SWEDEN. J. Fierrez and J. Ortega-Garcia are with ATVS/Biometric Recognition Group, Escuela Politecnica Superior, Univ. Autonoma de Madrid, Avda. Francisco Tomas y Valiente 11, Campus de Cantoblanco, 28049 Madrid, SPAIN. Part of this research was carried out while author F.A.-F. was employed at ATVS/Biometric Recognition Group (email: [email protected], [email protected], [email protected])
There are a number of factors that can affect the quality of biometric signals, and there are numerous roles of a quality measure in the context of biometric systems. Standardization bodies are also incorporating quality measures into existing data storage and exchange formats. This paper summarizes the state-of-the-art in the biometric quality problem, giving an overall framework of the different related factors. II. WHAT IS BIOMETRIC SAMPLE QUALITY? It has not been until the last years that there is consensus about what biometric sample quality is. Broadly, a sample is of good quality if it is suitable for personal recognition. Recent standardization efforts (ISO/IEC 29794-1) have established three components of biometric sample quality, see Figure 3: i) character (inherent discriminative capability of the source), ii) fidelity (degree of similarity between a sample and its source, attributable to each step through which the sample is processed); and iii) utility, (impact of the individual biometric sample on the overall performance of a biometric system). The character of the sample source and the fidelity of the processed sample contribute to, or similarly detract from, the utility of the sample [3]. It is generally accepted that a quality metric should most importantly mirror the utility of the sample, so that samples assigned higher quality lead to better identification of individuals [3]. Thus, quality should be predictive of recognition performance. This statement, however, is largely subjective: not all recognition algorithms work equally (i.e. they are not based on the same features), and their performance is not affected by the same factors. For example, a face recognition algorithm “A” can be insensitive to illumination changes, whereas another algorithm “B” can be severely affected by changes in illumination. In this 1 F. Alonso-Fernandez (correspondence author) is with Halmstad University, Box 823, SE 301-18 Halmstad, SWEDEN. J. Fierrez and J. Ortega-Garcia are with ATVS/Biometric Recognition Group, Escuela Politecnica Superior, Univ. Autonoma de Madrid, Avda. Francisco Tomas y Valiente 11, Campus de Cantoblanco, 28049 Madrid, SPAIN. Part of this research was carried out while author F.A.-F. was employed at ATVS/Biometric Recognition Group (email: [email protected], [email protected], [email protected]) Figure 1. Effect of low quality data on the performance of recognition algorithms. Conditions progressively more difficult in nature result in a significant decrease in performance, in spite of the technology improvement between the different studies. Some sample images with varying quality are also shown in each modality. Left: best performing algorithm in face independent evaluations. FRVT stands for Face Recognition Vendor Technology, and MBGC for Multiple Biometric Grand Challenge. A decrease in performance is observed in the 2009 evaluation, when uncontrolled illumination conditions and severe image compression were introduced. More information at www.frvt.org and http://face.nist.gov/mbgc. Middle: best performing algorithm in the Fingerprint Verification Competitions (FVC). In 2000 and 2002, fingerprint data where acquired without any special restriction, resulting in an EER decrease of one order of magnitude. In the 2004 edition, samples were intentionally corrupted (e.g. by asking people to exaggeratedly rotate or press the finger against the sensor, or by artificially drying or moisturizing the skin with water or alcohol). 
More information at https://biolab.csr.unibo.it/fvcongoing. Right: results of the Video-based Automatic System for Iris Recognition (VASIR) implemented by the National Institute of Standards and Technology (NIST) on iris data from the MBGC. Performance on iris from distant video (unconstrained acquisition) is dramatically reduced with respect to classical close-up controlled acquisition. More information at http://www.nist.gov/itl/iad/ig/vasir.cfm Figure 2. Performance degradation with portable devices. Face scores come from an LDA-based verifier using Fisher Linear Discriminant projection (face indoor) and an eigenface-based system with PCA analysis (face outdoor). Fingerprint scores come from the publicly available minutia-based matcher of the National Institute of Standards and Technology (NIST). Data is from the BioSecure Multimodal Database [5]. Face performance is degraded with the webcam, with further degradation in the more challenging outdoor environment (noisy ambience). As for the fingerprint modality, the sweep sensor results in worse performance with respect to the flat sensor. In flat sensors, acquisition is done by the touch method: the finger is simply placed on the scanner. In sweep sensors, the finger is swept vertically across a tiny strip with a height of only a few pixels. As the finger is swept, partial images are formed which are further combined to generate a full fingerprint image. This procedure allows to reduce the size and cost of the sensing element (facilitating its use in consumer products such as laptops, PDAs and mobile phones), but the reconstruction of a full image from the slices is prone to errors, especially in poor quality fingerprints and non-uniform sweep speed. (Figure extracted from Ortega-Garcia et al. [5]) situation, a measure of illumination will be useful for predicting performance of “B”, but not of “A”. Therefore, an adequate quality measure will be largely dependent on the type of the performance of different recognition algorithms may not be affected by the same signal quality factors, the efficacy of a quality estimation algorithm will be usually linked to a particular recognition algorithm, or thereof class. recognition algorithm considered. As Unfortunately some of them are beyond control of system developers or operators. Therefore, assessing the quality of captured samples will allow appropriate corrective actions to take place. Following the framework of Kukula et al. [6] and [7,8,9], a from other precedent works contributions classification of quality factors based on their relationship with the different parts of the system is proposed [10]. Using this classification, four different classes can be distinguished: III. FACTORS INFLUENCING BIOMETRIC QUALITY There are a number of factors affecting the quality of biometric signals, which are summarized in Table I. User-related factors, which include physical/ physiological and behavioural factors. As they have to do entirely with the “user side”, they are the most difficult to control. Some Figure 3. Definition of biometric quality from three different points of view: character, fidelity or utility. physical/physiological factors inherent to the person (age, gender or race) do not produce degradation, but data variability that needs to be properly considered by the recognition algorithm (e.g. differences in speech between males and females). Diseases or injuries may alter face, finger, etc., even irreversibly, making them infeasible for recognition. 
Although, in some cases, the presence of such alterations can be precisely used to narrow a person’s identity (e.g. amputation in gait recognition). On the other hand, coping with behavioural factors often implies modifying people’s behaviour or habits, which is not always convenient, or is even impossible in some applications like forensics or surveillance cameras. Many behavioural factors can be alleviated by recapturing after taking corrective actions (e.g. “take off your hat/coat/ring/glasses” or “keep your eyes opened”), but this is not always possible. Depending on the application, corrective actions can result in people’s reluctance to use the system. As can be seen in Table I, user-related factors have impact on the character of the biometric sample, that is, the quality attributable to inherent physical features. In this sense, the degree of control on these factors is low, as the inherent features of a person are difficult or impossible to modify. The remaining factors affect the fidelity, or in other words, the faithfulness between a biometric sample and its source, and their degree of control can be higher, as discussed next. Factors related to the user-sensor interaction, which include environmental and operational factors. In principle, these are easier to control than user-related factors, although users still play a role in these. Users impact will depend on the level of control of the environment, the acquisition itself, and whether the acquisition physically takes place in controllable premises. In many applications, biometric data is acquired in less than ideal conditions, such as by surveillance cameras or portable hand-held devices. Other hot topic includes acquisition “at a distance” or “on the move” as a person walks by detection equipment, facilitating the ease of interaction with the system. But the unsatisfactory performance of biometrics technologies in these uncontrolled situations has limited their deployment, being one of the main challenges facing biometric technologies [1]. Factors related to the acquisition sensor. The sensor is in most cases the only physical point of interaction between the user and the biometric system. Its “fidelity” (see Section II) in in reproducing the original biometric pattern is crucial for the accuracy of the recognition system. The diffusion of low cost sensors and portable devices (e.g. mobile cameras, webcams, telephones and PDAs with touch screen displays, etc.) is rapidly growing the context of convergence and ubiquitous access to information and services, representing a new scenario for automatic biometric recognition systems. Unfortunately, these low cost and portable devices produce data which are very different from those obtained by dedicated (and more expensive) sensors, primarily due to a small input area, poor ergonomics or the fact that the user may be in movement. In this context, a measure of the reliability of the data and recognition process can provide additional improvement, by optimizing a structure lacking homogeneity, while ensuring system interoperability by integrating data of different nature [11]. e.g. smart techniques, Factors related to the processing system. Related to how a biometric sample is processed once it has been acquired, these are the factors, in principle, easiest to control. Storage or exchange speed constraints may impose the use of data compression cards. 
Also, governments, regulatory bodies, and international standards organizations often specify that biometric data must be kept in raw form, rather than in (or in addition to) post-processed templates that may depend on proprietary algorithms, with implications in data size. Hence, the effects of data compression on recognition performance become critical. The necessity for data compression, together with packet loss effects, also appears in recent applications of biometrics over mobile or Internet networks. IV. ENSURING GOOD QUALITY OF BIOMETRIC SAMPLES After analysing the usual factors affecting quality of biometric systems, this section reports some helpful guidelines for their control [7], which are summarized in Table II. Three points of action can be identified: i) the capture point, a critical point of action since it acts as the main interface between the user and the system, ii) the quality assessment algorithm itself, and iii) the system that performs the recognition process. If quality can be improved, either by capture point design or by system design, better performance can be realized. For those aspects of quality that cannot be designed-in, an ability to analyse the quality of a sample and initiate corrective actions is needed. This is 1) Outdoor operation is especially problematic, as control on other environmental factors can be lost. It also demands additional actions regarding sensor conditions and its maintenance. 2) Background, object occlusion refer to uncontrolled environments (e.g. surveillance cameras), with great impact on face systems. 3) Temperature, humidity: Affect skin properties (fingerprint, hand). 4) Illumination, light reflection: Iris images are affected due to reflective properties of the eye. They also affect face images. 5) Ambient noise affects the quality of speech. 6) User familiarity, feedback of acquired data: Feedback has been demonstrated to lead to better acquired samples, helping in the process of habituation (i.e. becoming accustomed to the system). 7) Physical guides: In some cases, they are incorporated in sensors to facilitate acquisition (e.g. hand, finger). 8) Ergonomics refers to how the design of the acquisition device facilitates interaction with the user. 9) Time between acquisitions (aging of the template): Biometric data acquired from an individual at two different moments may be very different, having great impact on the system performance. 10) Age (aging of the subject): Although iris pigmentation and fingerprint characteristics are highly stable, they change until the adolescence and during the old age. Other traits like face, speech, signature, etc. are subject to natural evolution throughout our life. Age of the subject can also degrade the sample quality due to, for example, medical conditions or loss of certain abilities. 11) Gender: Face or speech characteristics are different in males and females. 12) Race affects face (physical features) and iris (in some ethnic groups, pigmentation is different and/or iris is not visible due to eyelid occlusion or long eyelashes, e.g. Eastern people). 13) Skin condition refers to factors like dryness/wetness, sweat, cuts, bruises, etc., which can have impact on traits involving analysis of skin properties (fingerprint and hand). 14) Manual work may affect the skin condition (dryness, cuts, bruises, dirt, diseases, etc.), in some cases irreversibly. 15) Illiteracy refers to people that do not know to read or write. 
16) Ethnic origin: Although it is a physical/physiological feature, it can affect a person’s behaviour, e.g. in face appearance (hairstyle, beard, jewellery, etc.), speech (language, lexicon, intonation, etc.) and signature (American signatures typically consist of a readable written name, European signatures normally include flourish, Asian signatures consist of independent symbols, etc.). TABLE I FACTORS AFFECTING THE QUALITY OF BIOMETRIC SIGNALS. useful, for example, in initiating the reacquisition from a user, selecting the best sample in real time, or selectively evoking different processing methods, and it is the key component in quality assurance management. V. QUALITY ASSESSMENT ALGORITHMS AND THEIR PERFORMANCE Many quality assessment algorithms are found in the literature, focused on measuring different factors affecting the quality of biometric traits (see Figure 4). It is not the scope of this work to describe them in depth, so only a selection of key recent references is provided here (see references therein also). Quality assessment algorithms have been developed mainly for fingerprint images [14] and recently, for iris [15], voice [16], face [17] and signature signals [18]. In spite of the number of existing algorithms, almost all of them have been tested under limited and heterogeneous frameworks, mainly because it has not been until the last years when the biometric community has formalized the concept of sample quality and has developed evaluation methodologies. Two recent frameworks proposed for this purpose are briefly described here [3], [19]. 1) Use of an adequate Graphical User Interface (GUI), with a large display providing real time feedback of acquired data, has demonstrated to help users to provide better signals over time and to habituate faster to the system [9]. 2) Corrective actions depend heavily on the application. For example, in some cases it is not possible to recapture a second sample (e.g. forensics), so the system has to deal with the “bad” sample at hand. Rejecting a sample implies invoking alternative recognition procedures (e.g. another biometric trait) or human intervention, resulting in increased costs and user inconvenience. 3) Quality-based processing and fusion means to invoke different algorithms and to combine them with different weighting depending on the quality of the signal at hand. See Section VII for further discussion. 4) Template substitution/update, an area still under-researched [12], allows coping with natural variations of biometric traits across time. Efficient strategies include storing multiple templates representative of the associated variability and updating/substituting them with new acquisitions. 5) Monitoring and periodic reporting [13] helps identify sudden problems (e.g. a damaged sensor) and find hidden systematic problems (e.g. specific sites or sensors working worse than others, hours when the quality of acquired signals is worse, etc.). Especially important is to identify user-scanner learning curves in order to avoid “first time user” syndrome, especially for elderly people or people who are not accustomed to interact with machines. TABLE II BIOMETRIC QUALITY ASSURANCE PROCESS. the source), fidelity (faithfulness of As shown in Figure 3, biometric sample quality can be considered from the point of view of character (inherent the properties of biometric sample to the source), or utility (predicted contribution to performance). 
Youmaran and Adler [19] have developed a theoretical framework for measuring biometric sample fidelity. They relate biometric sample quality with the amount of identifiable information that the sample contains, and suggest that this amount decreases with a reduction in quality. They measure the amount of identifiable information for a person as the relative entropy, D(p||q), between the population feature distribution, q, and the person’s feature distribution, p. Based on this, the information loss due to a degradation in sample quality can be measured as the relative change in the entropy. On the other hand, most of the existing operational schemes for quality estimation of biometric signals are focused on the utility of the signal. Grother and Tabassi [3] have presented a framework for evaluating and comparing quality measures in terms of their capability of predicting the system performance. Broadly, they formalize the concept of sample quality as a scalar quantity is related monotonically to the recognition performance of biometric matchers. Therefore, by partitioning the biometric data in different groups according to some quality criteria, the quality measure should give an ordered indication of performance between quality groups. Also, by rejecting low quality samples, error rates should decrease quickly with the fraction rejected. Some of the works referenced above in this Section have followed this framework in their experimental that samples are stored in the system database and are later compared with new samples provided during the operation of the system. Therefore, a quality algorithm should be able to work with individual samples, even though its ultimate intention is to improve recognition performance when matching two (or more) samples. VI. HUMAN VS. AUTOMATIC QUALITY ASSESSMENT There is an established community of human experts in recognizing biometric signals for certain applications (e.g. signatures on checks or fingerprints in forensics) and the use of manual quality verification is included in the workflow of some biometric applications (e.g. immigration screening and passport generation). A common assumption here is that human assessment of biometric quality is an appropriate gold standard against which biometric sample quality measures should be measured [21]. Also, many authors make use of datasets with manually labelled quality measures to optimize and test their quality assessment algorithms. To the best of our knowledge, the only study aimed to test the relevance of human evaluations of biometric sample quality is [21]. From this study, it is evident that human and computer processing are not always functionally comparable. For instance, if a human judges a face or iris image to be good because of its sharpness, but a recognition algorithm works in low frequencies, then the human statement of quality is inappropriate. The judgement of human inspectors can be improved by adequate training on the limitations of the recognition system, but this could be prohibitively expensive and time consuming. In addition, there are other implications in incorporating a human quality checker, such as tiredness, boredom or lack of motivation that a repetitive task like this may cause in the operator, as pointed out in Section IV. A comprehensive analysis of factors leading to errors related with human-assisted operation is given by Wertheim [22]. VII. 
INCORPORATING QUALITY MEASURES IN BIOMETRIC SYSTEMS The incorporation of quality measures in biometric systems is an active field of research, with many solutions proposed. Different uses of sample quality measures in the context of biometric systems have been identified throughout this paper. These are summarized in Table III [7], [8]. It should be noted that these roles are not mutually exclusive. Indeed, prevention of poor quality data requires a holistic, system-wide focus involving the whole operation of the biometric system [23]. It is not the scope of this paper to provide a comprehensive list of references. We refer the interested reader to the surveys contained in references [3], [10], [12], [13], [23]. VIII. STANDARDIZING BIOMETRIC QUALITY It should be noted that adhesion to standards for sensors, software, interfaces, etc. is recommended throughout the quality assurance process. With the use of standards, great flexibility and modularity is obtained, as well as fast technology interchange, sensor and system interoperability, and proper interaction with external security systems. Figure 4. Common properties measured by biometric quality assessment algorithms. References to particular implementations are given in Section V. Figure 5. Evaluating the utility of four fingerprint quality measures. Results show the verification performance as samples with the lowest quality value are rejected. The similarity scores come from the publicly available minutia-based matcher released by the National Institute of Standards and Technology (NIST). Data is from the BioSec multimodal database [20]. Different performance improvement for the same fraction of rejected samples suggests different efficacy of each measure for the particular recognition algorithm and/or sensor evaluated. (Figure extracted from Alonso-Fernandez et al. [14]) studies. A graphical example evaluating the utility of fingerprint quality metrics can be seen in Figure 5. However, as mentioned before, the efficacy of a quality algorithm is usually tied to a particular recognition algorithm. This can be seen in the example of Figure 5, in which each quality metric results in different performance improvement for the same fraction of rejected low quality samples. It should be also noted that, although biometric matching involves at least two samples, they are not acquired at the same time. Reference 1) Recapture loop: implementation of an “up to three attempts” policy, giving feedback in each subsequent acquisition to improve quality; selection of the best signal from a video stream. 2) Quality-based processing: quality-specific enhancement algorithms; conditional execution of processing chains, including specialized processing for poor quality data; extraction of features robust to the degradation that the signal is suffering; extraction of features from useful regions only; ranking of extracted features based on quality of local regions. 3) Update of enrolment data/database maintenance: storage of multiple samples representing the variability associated with the user (e.g. different portions of the fingerprint to deal with partially overlapped fingerprints, face from multiple viewpoints); update of stored samples with ones of better quality captured during the system operation [12]. 
4) Quality-based matching, decision and fusion: use of different matching/fusion algorithms; adjustment of the sensitivity of the matcher/fusion algorithm; quantitative indication of the reliability of the acceptance/rejection decision; quality-driven selection of data sources to be used for matching/fusion, e.g. weighting schemes to quality-based ranked features/data sources; use of soft-biometric traits (age, height, sex, etc.) to assist in the recognition. 5) Monitoring and reporting across different parts of the system to identify problems that lead to poor quality signals and initiate corrective actions. Different aspects that can be monitored and reported include signal quality [13]: By application, as different applications may require different scanners, environment setup, etc., and this may impact differently on the overall quality of acquired signals. By site/terminal, to identify abnormal sites/terminals due to operator training, operational and environmental conditions, etc. By capture device, to assess the impact due to different acquisition principles, mechanical design, etc., and if a specific scanner does not provide signals that satisfy our quality criteria. By subject, to identify interaction learning curves, which can helps to better train new users and alleviate the “first time user” syndrome [9]. By stored template, to detect how the database quality is varying when new templates are stored or old ones are updated. By biometric input, in the case that multiple biometric traits are being used, to improve the way in which they are combined. Trend analysis, providing statistics of all applications, sites, etc., allowing to identify trends in signal quality or sudden changes that need further investigation. TABLE III ROLES OF A SAMPLE QUALITY MEASURE IN THE CONTEXT OF BIOMETRIC SYSTEMS. Standards compliance allows for replacement of parts of deployed systems with various technological options coming from open markets. As biometric technology is extensively deployed, a common situation is the exchange of information between several multi-vendor applications of different agencies, involving heterogeneous equipment, environments or locations [1]. In response to a need for interoperability, biometric standards have been developed to allow modular integration of products, also facilitating future upgrades to newer developments. Examples of interoperable scenarios are the use of e-Passports readable by different countries, or the exchange of lists of criminals among Security Forces. A list of standards organizations and other bodies working in biometric standards development is given in Table IV. Current efforts in developing biometric standards [24, 25] are focused on acquisition practices, sensor specifications, data formats and technical interfaces, as we plot in Figure 6 and Table V. In addition, although particularly aimed to the assistance of US federal agencies in its development and implementation of biometric programs, there is a “Registry Standards” of (www.biometrics.gov/Standards) with some high level guidance with respect to its implementation. Recommended Biometric USG Concerning incorporation of quality information, most of the standards define a quality score field specific the aimed to incorporate quality measures. 
However, its content is not explicitly defined or is somewhat subjective due to the lack in consensus on i) how to provide universal quality measures interpretable by different algorithms or ii) what are the key factors that define quality in a given biometric trait. These problems are being addressed in the multipart standardization effort ISO/IEC 29794-1/4/5. A prominent approach within this standard is the Quality Algorithm vendor ID (QAID), which incorporates standardized data fields that uniquely identifies a quality algorithm, including its vendor, product code and version. QAID fields can be easily added to existing data interchange formats such as the Common Biometric Exchange Formats Framework (CBEFF), enabling a modular multi-vendor environment that accommodates scored by different quality algorithms in existing data interchange formats. samples IX. ISSUES AND CHALLENGES This paper gives an overall framework of the main factors involved in the biometric quality measurement problem. The increasing development of biometrics in the last decade, related to the number of important applications where a correct assessment of identity is a crucial point, has not been followed by extensive research on biometric data quality [3]. A significant improvement in performance in less controlled International Standards Organizations  ISO-JTC1/SC37: International Organization for Standardization, Committee 1 on Information Technology, Subcommittee 37 for Biometrics (www.iso.org/iso/jtc1 sc37 home) IEC: International Electrotechnical Commission (www.iec.ch) National Standards Bodies (NSBS)  ANSI: American National Standards Institute (www.ansi.org) Standards-Developing Organizations (SDOS)  for INCITS M1: Information InterNational Committee Technology Standards, Technical Committee M1 on Biometrics (http://standards.incits.org/a/public/group/m1) NIST-ITL: American National Institute of Standards and Technology, Laboratory (www.nist.gov/itl) ICAO: International Civil Aviation Organization (www.icao.int) Information Technology    International Other NON-SDOS participating in standards development efforts    BC: Biometrics Consortium (www.biometrics.org) IBG: International Biometrics Group (www.ibgweb.com) IBIA: (www.ibia.org) DoD-BIMA: American Department of Defence, Biometrics Identity Management Agency (www.biometrics.dod.mil) FBI-BCOE: American Federal Bureau of Biometric Centre of Excellence (www.biometriccoe.gov) Investigation, Association Biometric Industry   TABLE IV STANDARDS ORGANIZATIONS AND OTHER BODIES WORKING IN BIOMETRIC STANDARDS DEVELOPMENT (ALL LINKS ACCESSED OCTOBER 2011). situations is one of the main challenges facing biometric technologies [1]. Now that there is international consensus that a statement of a biometric sample’s quality should be related to its recognition performance, efforts are going towards an harmonized and universal interpretation of quality measures by defining the key factors that need to be assessed in each biometric trait [25], and by setting good acquisition practices [7]. This will enable a competitive multi-vendor marketplace, allowing interoperability of multiple vendors’ quality assessment algorithms. A biometric system has to be resilient in processing data with heterogeneous quality yet providing good recognition performance. Although there are several corrective actions that can be performed to improve the quality of acquired signals [7], some factors fall out of our control or cannot be avoided. 
In this respect, specially challenging scenarios for biometrics are the ones based on portable devices, and/or remote access through Internet or acquisition at-a-distance. These are expected to work in an unsupervised environment, with no control on the ambient noise, on the user-sensor interaction process, or even on the sensor maintenance. inherent degraded Another very conditions is forensics. Therefore, it is very important upon capture of biometric samples to assess their quality as well as having specific developments for poor quality signals [3]. important field with Quality is intrinsically multi-dimensional, with factors of very different nature affecting it [6], [7], [8], [9]. A biometric system must adequately address this multifactor nature. There are a number of things that quality measures can do in BioApi (Biometric Application Programming Interface), defines architecture and necessary interfaces to allow biometric applications to be integrated from modules of different vendors. Versions 1.0 and 1.1 were produced by the BioAPI Consortium, a group of over 120 companies and organizations with interest in biometrics. BioAPI 2.0 is specified in the ISO/IEC 19784-1 standard (published May 2006). CBEFF (Common Biometric Exchange Formats Framework), supports exchange of biometrics information between different system components or systems. Developed from 1999 to 2000 by the CBEFF Development Team (NIST) and the BioAPI Consortium. FBI-WSQ (FBI Wavelet Scalar Quantization) image compression algo-rithm for fingerprint images, developed to archive the large FBI fingerprint database. Developed by the FBI and the NIST. (DHS Automated Biometric FBI-EBTS (FBI Electronic Biometric Transmission Specification), DoD-EBTS (DoD Electronic Biometric Transmission Specification), DHS-IDENT-IXM Identification System-Exchange Messages Specification) for exchange of biometric the FBI, DoD and DHS biometric applications, data with respectively. particular and DoD-EBTS implementations of the ANSI/NIST ITL 1-2007 standard, customized to the needs of the FBI and the DoD. FBI-EBTS v9.2 released on in March 2009. May 2011. DoD-EBTS v2.0 DHS-IDENT-IXM v5.0 released in November 2009. FBI-EBTS released are ANSI/NIST-ITL 1-2000 for exchange of biometric data between law enforcement and related criminal justice agencies, including fingerprint, facial, scar, mark, and tattoo data. ANSI/NIST-ITL 1-2007/2-2008 and ISO/IEC-19794 multipart standard that specify a common format to exchange and store a variety of biometric data including face, fingerprint, palm print, face, iris voice and signature data. Annex to ISO/IEC-19794-5 with recommendations for face photo taking for E-passport and related applications, including indications about lighting and camera arrangement, and head positioning. ISO/IEC 29794-1/4/5 multi-part standard to enable harmonized interpretation of quality scores from different vendors, algorithms and versions by setting the key factors that define quality in different biometric traits. It also addresses the interchange of biometric quality data via the multipart ISO/IEC 19794 Biometric Data Interchange Format Standard. TABLE V AVAILABLE BIOMETRIC STANDARDS (WITH RESPONSIBLE AGENCIES AND LATEST VERSION AVAILABLE). the context of biometric systems to improve the overall performance, such as altering the sample processing/ comparison process, or weighting the results from different systems depending on the quality. 
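To make that last idea concrete, the sketch below fuses normalized scores from two matchers, weighting each by the quality of the sample it was computed from. The function, the [0, 1] normalization, and the linear weighting rule are illustrative assumptions, not a scheme prescribed by the works cited above.

```python
def quality_weighted_fusion(scores, qualities, eps=1e-6):
    """Fuse per-matcher similarity scores (assumed normalized to [0, 1]),
    using the quality of the corresponding samples as weights.
    scores, qualities: dicts keyed by modality, e.g. {"face": 0.81, "fingerprint": 0.55},
    with quality values also in [0, 1]."""
    total_q = sum(qualities.values()) + eps
    return sum(scores[m] * qualities[m] for m in scores) / total_q

# Example: a low-quality fingerprint sample contributes less to the fused score.
fused = quality_weighted_fusion(
    {"face": 0.81, "fingerprint": 0.55},
    {"face": 0.90, "fingerprint": 0.30},
)
```

Quality-based processing in Table III generalizes this idea: the same quality value can instead select a different matcher, trigger a recapture, or down-weight a modality entirely.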
Recent independent evaluations of commercial and research prototypes are also starting to include quality studies in their scenarios, as the BioSecure Multimodal Evaluation Campaign in 2007 (www.int-evry.fr/biometrics/BMEC2007) or the Noisy Iris Challenge Evaluation in 2009 (http://nice2.di.ubi.pt). Some research works have dealt with these matters, but much work is still to be done in this area. ACKNOWLEDGEMENTS Work of F.A.-F. at ATVS/Biometric Recognition Group has been supported by a Juan de la Cierva postdoctoral Fellowship from the Spanish MICINN. F. A.-F. also thanks (USA), 2007. [10] F. Alonso-Fernandez, “Biometric Sample Quality and its Application to Multimodal Authentication Systems,” Ph.D. dissertation, Universidad Politecnica de Madrid, Madrid, Spain, 2008, available online at http://atvs.ii.uam.es (publications). [11] J. Fierrez-Aguilar, J. Ortega-Garcia, J. Gonzalez-Rodriguez, and J. Bigun, “Discriminative Multimodal Biometric Authentication Based on Quality Measures,” Pattern Recognition, 38(5), pp. 777–779, 2005. [12] A. Rattani, B. Freni, G. Marcialis, and F. Roli, “Template update methods in adaptive biometric systems: A critical review,” Proc. International Conference on Biometrics, ICB, Springer LNCS-5558, pp. 847–856, 2009. [13] T. Ko and R. Krishnan, “Monitoring and Reporting of Fingerprint Image Quality and Match Accuracy for a Large User Application,” Proc. 33rd Applied Image Pattern Recognition Workshop, pp. 159–164, 2004. J. Fierrez, [14] F. Alonso-Fernandez, J. Gonzalez-Rodriguez, H. Fronthaler, K. Kollreider, and J. Bigun, “A Comparative Study of Fingerprint Image Quality Estimation Methods,” IEEE Trans. on Information Forensics and Security, 2(4), pp. 734–743, December 2007. J. Ortega-Garcia, [15] N. D. Kalka, J. Zuo, N. A. Schmid, and B. Cukic, “Estimating and Fusing Quality Factors for Iris Biometric Images,” IEEE Trans. On Systems, Man and Cybernetics, Part A: Systems and Humans, 40(3), pp. 509–524, 2010. [16] A. Harriero, D. Ramos, J. Gonzalez-Rodriguez, and J. Fierrez, “Analysis of the Utility of Classical and Novel Speech Quality Measures for Speaker Verification,” Proc. International Conference on Biometrics, ICB, Springer LNCS-5558, pp. 434–442, 2009. [17] D. DAmato, N. Hall, and D. McGarry, “The specification and measurement of face image quality,” Performance Testing Conference, IBPC, http://www.nist.gov/itl/iad/ig/ibpc2010.cfm, 2010. [18] N. Houmani, S. Garcia-Salicetti, and B. Dorizzi, “A Novel Personal Entropy Measure Confronted With Online Signature Verification Systems Performance,” Proc. IEEE Conference on Biometrics: Theory, Applications and Systems, BTAS, Washington DC (USA), pp. 1–6, 2008. [19] R. Youmaran and A. Adler, “Measuring Biometric Sample Quality in Terms of Biometric Information,” Proc. of Biometric Symposium, Biometric Consortium Conference, Baltimore, Maryland (USA), 2006. [20] J. Fierrez, J. Ortega-Garcia, D. Torre-Toledano, and J. Gonzalez-Rodriguez, “BioSec baseline corpus: A multimodal biometric database,” Pattern Recognition, 40(4), pp. 1389–1392, April 2007. [21] A. Adler and T. Dembinsky, “Human vs. Automatic Measurement of Biometric Sample Quality,” Canadian Conference on Electrical and Computer Engineering, CCECE, 2006. [22] K. E. Wertheim, “Human factors in large-scale biometric systems: A study of the human factors related to errors in semiautomatic fingerprint biometrics,” IEEE Systems Journal, 4(2), pp. 138-146, 2010. [23] A. Hicklin and R. 
Khanna, “The role of data quality in biometric systems,” Mitretek Systems, Tech. Rep., February 2006. [Online]. Available: http://www.mitretek.org/Role of Data Quality Final.pdf [24] Registry of USG http://www.biometrics.gov/standards/Biometric v2.pdf, August 2009. recommended Biometric Standards, Standards Registry [25] E. Tabassi and P. Grother, Encyclopedia of Biometrics. Springer, 2009, ch. Biometric Sample Quality, Standardization. Fernando Alonso-Fernandez received the M.S. degree in 2003 with Distinction and the Ph.D. degree “cum laude” in 2008, both in Electrical Engineering, from Universidad Politecnica de Madrid (UPM), Spain. Since 2004, he has been affiliated with the Biometric Recognition Group (ATVS), first working towards the Ph.D. degree, and later as Figure 6. Use of standards in biometric systems to ensure good quality signals. See Table V for a more detailed description. the Swedish Research Council (Vetenskapsrådet) and the European Commission for funding his postdoctoral research at Halmstad University. This work was also supported by from CAM, projects Contexts Bio-Challenge (TEC2009¬11186) from Spanish MICINN, TABULA RASA and BBfor2 (FP7-ITN-238803) from EU, and Cátedra UAM-Telefónica. The authors would also like to thank the Spanish Dirección General de la Guardia Civil for their support to the work (FP7-ICT¬257289) (S2009/TIC-1485) REFERENCES [1] A. K. Jain and A. Kumar, Second Generation Biometrics. Springer, 2010, ch. Biometrics of Next Generation: An Overview. [2] BQW, NIST Biometric Quality Workshop, Gaithersburg, MD, USA, November 7-8, 2007 - www.nist.gov/itl/iad/ig/bio_quality_wkshopii.cfm [3] P. Grother and E. Tabassi, “Performance of Biometric Quality Measures,” IEEE Trans. on Pattern Analysis and Machine Intelligence, 29(4), pp. 531–543, 2007. [4] A. K. Jain, B. Klare, and U. Park, “Face recognition: Some challenges in forensics,” Proc. Intl. Conf. on Automatic Face and Gesture Recognition, FG, 2011. [5] J. Ortega-Garcia, J. Fierrez, F. Alonso-Fernandez, J. Galbally, M. Freire, J. Gonzalez-Rodriguez, C. Garcia-Mateo, J. Alba-Castro, E. Gonzalez-Agulla, E. Otero-Muras, S. Garcia-Salicetti, L. Allano, B. Ly-Van, B. Dorizzi, J. Kittler, T. Bourlai, N. Poh, F. Deravi, M. Ng, M. Fairhurst, J. Hennebert, A. Humm, M. Tistarelli, L. Brodo, J. Richiardi, A. Drygajlo, H. Ganster, F. Sukno, S. Pavani, A. Frangi, L. Akarun, and A. Savran, “The Multi-Scenario Multi-Environment BioSecure Multimodal Database (BMDB),” IEEE Trans. on Pattern Analysis and Machine Intelligence, 32(6), pp. 1097–1111, 2009. [6] E. P. Kukula, M. J. Sutton, and S. J. Elliott, “The Human-Biometric- Sensor Interaction Evaluation Method: Biometric Performance and Usability Measurements,” Instrumentation and Measurement, 59(4), pp. 784-791, 2010. IEEE Trans. on [7] J.-C. Fondeur, “Thoughts and Figures on Quality Measurements,” Proc. NIST Biometric Quality Workshop I, Gaithersburg, MD, USA, March 8-9, 2006 - www.nist.gov/itl/iad/ig/bio_quality_wkshopi.cfm [8] T. Mansfield, “The Application of Quality Scores in Biometric Recognition,” Proc. NIST Biometric Quality Workshop II, Gaithersburg, USA, Nov. 2007 - www.nist.gov/itl/iad/ig/bio_quality_wkshopii.cfm [9] M. Theofanos, B. Stanton, R. Micheals, and S. Orandi, “Biometrics Systematic Uncertainty and the User,” Proc. IEEE Conference on Biometrics: Theory, Applications and Systems, BTAS, Washington DC postdoctoral researcher. 
He is currently a postdoctoral researcher at the Intelligent Systems Laboratory (IS-lab), Halmstad University, Sweden, under a postdoctoral fellowship of the Swedish Research Council (Vetenskapsrådet) and a Marie Curie fellowship of the European Commission. His research interests include signal and image processing, pattern recognition and biometrics. He has published several journal and conference papers and he has been actively involved in European projects focused on biometrics (e.g., Biosecure NoE, COST 2101). He has participated in the development of several systems for a number of biometric evaluations (e.g. SigComp 2009, LivDet 2009, BMEC 2007). Dr. Alonso-Fernandez has been invited researcher in several laboratories across Europe, and is the recipient of a number of distinctions for his research, Information and Communication Technologies applied to Banking in 2010 by the Spanish College of Telecommunication Engineers (COIT), and Doctorate Extraordinary Award in 2011 by Universidad Politecnica de Madrid to outstanding Ph.D. Thesis. including: best Ph.D. Thesis on Julian Fierrez-Aguilar received the M.Sc. and the Ph.D. degrees in telecommunications engineering from Universidad Politecnica de Madrid, Madrid, Spain, in 2001 and 2006, respectively. Since 2002 he has been affiliated with the Biometric Recognition Group (ATVS), first at Universidad Politecnica de Madrid, and since 2004 at Universidad Autonoma de Madrid, where he is currently an Associate Professor. From 2007 to 2009 he was a visiting researcher at Michigan State University in USA under a Marie Curie fellowship. His research interests and areas of expertise include signal and image processing, pattern recognition, and biometrics, with emphasis on signature and fingerprint verification, multi-biometrics, biometric databases, and system security. Dr. Fierrez has been and is actively involved in European projects focused on biometrics, and is the recipient of a number of distinctions for his research, including: best Ph.D. thesis in computer vision and pattern recognition in 2005-2007 by the IAPR Spanish liaison (AERFAI), Motorola best student paper at ICB 2006, EBF European Biometric Industry Award 2006, and IBM best student paper at ICPR 2008. the M.Sc. degree Javier Ortega-Garcia received in electrical engineering (Ingeniero de Telecomunicaci´on), in 1989; and the Ph.D. degree ”cum laude” also in electrical engineering (Doctor Ingeniero de Telecomunicación), in 1996, both from Universidad Politécnica de Madrid, Spain. Dr. Ortega-Garcia is founder and co-director of ATVS research group. He is currently a Full Professor at the Escuela Politécnica Superior, Universidad Autónoma de Madrid, where he teaches Digital Signal Processing and Speech Processing courses. He also teaches a Ph.D. degree course in Biometric Signal Processing. His research interests are focused on biometrics signal processing: speaker recognition, face recognition, fingerprint recognition, on-line signature verification, data fusion and multimodality in biometrics. He has published over 150 international contributions, including book chapters, refereed journal and conference papers. He chaired “Odyssey-04, The Speaker Recognition Workshop”, co-sponsored by IEEE. Since 2008 he is a Senior member of the IEEE.
BEWARE OF CALIBRATION DATA FOR PRUNING LARGE LANGUAGE MODELS

Yixin Ji1, Yang Xiang1, Juntao Li1∗, Qingrong Xia2, Ping Li2, Xinyu Duan2, Zhefeng Wang2, Min Zhang1
1School of Computer Science and Technology, Soochow University
2Huawei Cloud, China
{jiyixin169,baldwin021129}@gmail.com; {ljt,minzhang}@suda.edu.cn
∗ Corresponding author.

ABSTRACT

As large language models (LLMs) are widely applied across various fields, model compression has become increasingly crucial for reducing costs and improving inference efficiency. Post-training pruning is a promising method that does not require resource-intensive iterative training and only needs a small amount of calibration data to assess the importance of parameters. Previous research has primarily focused on designing advanced pruning methods, while the impact of different calibration data on pruning performance still lacks systematic exploration. We fill this blank and surprisingly observe that the choice of calibration data can matter even more than the choice of pruning strategy, especially at high sparsity. Our preliminary exploration also discloses that using calibration data similar to the training data can yield better performance. As pre-training data is usually inaccessible for advanced LLMs, we further provide a self-generating calibration data synthesis strategy to construct feasible calibration data. We conduct experiments on recent strong open-source LLMs (e.g., DCLM and LLaMA-3), and the results show that the proposed method outperforms commonly used calibration data and can effectively enhance strong pruning methods (e.g., Wanda, OWL).

1 INTRODUCTION

Recently, Large Language Models (LLMs) have exhibited remarkable performance and enormous potential in Natural Language Processing (NLP) and Artificial Intelligence (AI) (OpenAI, 2022; 2023; Bubeck et al., 2023; Yang et al., 2023). The success of LLMs is closely tied to scaling laws (Kaplan et al., 2020; Hoffmann et al., 2022): training language models with more parameters, using more data and greater computational resources, leads to more powerful capabilities. However, LLMs with more parameters increase the difficulty and cost of deployment and inference. Therefore, much work has been devoted to compressing LLMs to achieve a trade-off between efficiency and performance, such as pruning (Frantar & Alistarh, 2023; Ma et al., 2023; Xia et al., 2024) and quantization (Frantar et al., 2023; Huang et al., 2024; Shao et al., 2024).

Pruning is a model compression technique that has evolved over many years (LeCun et al., 1989) and remains full of potential and challenges. Based on the over-parameterization of neural networks, it aims to remove redundant parameters while minimizing the degradation of model performance. Pruning has been successfully applied to compress small to medium-sized neural networks. Through sparse training (Lee et al., 2019; Frankle & Carbin, 2019; Yuan et al., 2021; Lasby et al., 2024) or pruning-aware training (Sanh et al., 2020; Lagunas et al., 2021; Jiang et al., 2023) methods, it can achieve performance comparable to dense models at a high sparsity ratio (≥70%). However, these methods require iterative training, which is costly and time-consuming for LLMs with billions of parameters. As a result, post-training pruning that does not require iterative training has become the preferred approach for pruning LLMs.
The challenge of post-training pruning is how to perform training-free parameter importance estimation. Frantar & Alistarh (2023) note that simple parameter magnitude-based metrics perform poorly in post-training pruning with over 20% sparsity. Therefore, they use a small amount of calibration data to compute the inverse Hessian matrix, estimating parameter importance through second-order gradient information. Sun et al. (2024) propose a simpler method by using the product of weight magnitudes and the L2 norm of the corresponding input activations. Dong et al. (2024) utilize the genetic algorithm to search for the optimal combination of information from magnitude, activation, and gradient as an importance metric. Overall, current advanced parameter importance metrics rely on calibration data. Although most papers claim their pruning methods are robust to calibration data, the empirical study by Williams & Aletras (2024) challenges this view. They demonstrate the performance differences of various methods using different calibration data. Furthermore, our experiments revealed that the performance gains from selecting better calibration data can even surpass those of advanced pruning methods (Figure 1). Therefore, it is time to focus more research on calibration data.

Figure 1: The effects of pruning methods and calibration data on commonsense reasoning tasks. (a) Performance differences of representative pruning methods with the commonly-used C4 calibration data. (b) Performance differences of various calibration data on SparseGPT. (c) Method differences vs. data differences.

However, many open questions regarding calibration data remain under-explored. For example, how does the impact of calibration data change with increased sparsity and structure of pruning? Can increasing the amount of calibration data narrow the performance gap between various datasets? What type of data is suitable for calibration? How do we select the appropriate calibration data in practice? In this paper, we investigate these questions. Our empirical results demonstrate that as sparsity and structure increase, the performance differences among different calibration data become more pronounced, and simply increasing the data volume does not reduce this disparity. We further find that the selection of calibration data is closely related to the LLM's training data, with calibration data similar to the training data yielding better performance. Based on this, we propose two strategies, detection and self-generation, aimed at sampling appropriate calibration data for pruning in practical settings with unavailable training data. To evaluate the effectiveness of our proposed calibration data sampling method, we conduct experiments on DCLM, LLaMA-2, and LLaMA-3 models. The results show that our proposed method performs better than the commonly used calibration data and is compatible with strong pruning methods, substantially improving their performance.

2 BACKGROUND

Model compression is a crucial way to improve inference efficiency by reducing the required memory, including pruning (Frantar & Alistarh, 2023; Sun et al., 2024; Ma et al., 2023; Yin et al., 2024a; Guo et al., 2023; Zhang et al., 2024b; Xia et al., 2024), quantization (Frantar et al., 2023; Xiao et al., 2023; Lin et al., 2024; Huang et al., 2024; Shao et al., 2024), low-rank decomposition (Kaushal et al., 2023; Yuan et al., 2024; Wang et al., 2024; Ji et al., 2024), etc.
The enormous memory requirements and inefficient inference speeds of LLMs urgently necessitate model compression. However, many successful model compression methods have required substantial computational resources for retraining, which limits their application for LLMs in low-resource settings. Therefore, post-training compression, which does not require retraining, has become a current research focus.

Post-training compression methods typically approximate model compression as an optimization problem for layer-wise compression (Frantar & Alistarh, 2022):

$\min_{\hat{W}_l} \|W_l X_l - \hat{W}_l X_l\|_F$,   (1)

where $W_l$ and $\hat{W}_l$ are the original and compressed l-th linear layer, respectively, and $X_l$ is the input feature activation. For post-training pruning, to optimize this objective, Frantar & Alistarh (2022; 2023) utilize second-order gradient information to measure parameter importance and propose an efficient algorithm for computing the inverse Hessian matrix. Sun et al. (2024) evaluate weight importance simply by the product of weight magnitudes and the L2 norm of the corresponding input activation, without requiring backpropagation. Zhang et al. (2024c) propose the relative importance and activation metric, which integrates weight, input, and output activation. They also utilize channel permutation to minimize pruning loss under N:M semi-structured pruning. Dong et al. (2024) propose a search framework that employs the genetic algorithm to discover the optimal pruning metric for LLMs automatically. Recently, several studies (Sung et al., 2024; Xu et al., 2024a; Yin et al., 2024b) indicate that layer-wise compression, which typically applies a uniform sparsity rate across all layers and evaluates weight importance within the layer, often results in suboptimal performance due to the lack of overall consideration. Specifically, Xu et al. (2024a) propose a differentiable pruning framework designed to search for optimal pruning rates for each layer. Yin et al. (2024b) introduce outlier weighed layerwise sparsity, which relates the sparsity of each layer to the observed outliers in a proportional manner.

In the aforementioned post-training compression methods, calibration data is an indispensable component. Calibration data is a small subset randomly sampled from unlabeled pretraining text. Many methods (Frantar & Alistarh, 2023; Sun et al., 2024; Dettmers et al., 2024) claim robustness to the quantity and distribution of calibration data, requiring only dozens or hundreds of samples with a sequence length of 2,048. However, this conclusion is based on the perplexity of certain datasets (such as Wikitext2), which does not fully reflect the true capabilities of the LLMs. Even if perplexity shows no significant change, the compressed model may still experience substantial performance declines in downstream tasks (Jaiswal et al., 2024). Williams & Aletras (2024) observe in extensive experiments that the selection of calibration data in post-training pruning and quantization methods significantly impacts downstream task performance; post-training pruning in particular is highly sensitive to calibration data. Nevertheless, current research on calibration data remains under-explored, with few studies providing guidelines for selecting calibration data. Khanal & Capone (2024) suggest that using task-specific calibration data helps improve performance on specific downstream tasks. Unlike their research, this paper aims to provide guidance on selecting calibration data to enhance the general capabilities of compressed models.
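To make the layer-wise importance metrics summarized above concrete, the following is a minimal NumPy sketch of a Wanda-style score, |W_ij| · ||X_j||_2, together with a 2:4 semi-structured mask. The function names, array shapes, and the group-wise selection are illustrative assumptions rather than code from any of the cited implementations.

```python
import numpy as np

def wanda_importance(W, X):
    # W: (out_features, in_features) weights of one linear layer
    # X: (n_tokens, in_features) calibration activations feeding that layer
    col_norms = np.linalg.norm(X, axis=0)      # ||X_j||_2 for each input channel j
    return np.abs(W) * col_norms[None, :]      # S_ij = |W_ij| * ||X_j||_2

def apply_2_4_sparsity(W, scores):
    # Zero the two lowest-scoring weights in every group of four along the input dim.
    out_f, in_f = W.shape
    assert in_f % 4 == 0, "2:4 sparsity assumes the input dimension is a multiple of 4"
    grouped = scores.reshape(out_f, in_f // 4, 4)
    drop = np.argsort(grouped, axis=-1)[..., :2]   # indices of the 2 smallest scores per group
    mask = np.ones_like(grouped, dtype=bool)
    np.put_along_axis(mask, drop, False, axis=-1)
    return W * mask.reshape(out_f, in_f)
```

Because the scores depend on X, the calibration activations, every metric in this family inherits whatever biases the calibration data carries, which is exactly the sensitivity studied in the next section.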
3 THE IMPACT OF CALIBRATION DATA FOR PRUNING

Though Williams & Aletras (2024) have noted that calibration data significantly impacts post-training pruning, there exist many open questions. How much does calibration data affect pruning performance? How does the amount of calibration data affect compressed model performance? What data sources are more suitable for calibration? We investigate these questions in this section.

3.1 EXPERIMENTAL DETAILS

Dense Model To study the impact of data from different sources on post-training pruning methods, we need comprehensive knowledge of the data used in model training. We select the powerful and fully open-source LLM (including training data), DCLM-7B1 (Li et al., 2024), as the dense model and conduct post-training pruning with different calibration data on it.

Post-training Pruning Methods We choose three competitive and representative post-training pruning methods for evaluation: Wanda (Sun et al., 2024), DSnoT (Zhang et al., 2024d) and OWL (Yin et al., 2024b). These methods apply to both unstructured and semi-structured pruning.

1 https://huggingface.co/apple/DCLM-7B

Figure 2: Pruning performance range (Max.-Min.) of different datasets (C4, Wikipedia, Slimpajama, DCLM) under various (a) sparsity ratios and (b) sparsity types on Wanda.

Calibration Data We consider various data sources to be calibration data. Following the mainstream works, the calibration data sources are all from unlabeled pre-training corpora:

• C4 (Raffel et al., 2020)2 is a widely used calibration data source, consisting of a large amount of multilingual web text filtered from Common Crawl. We sample from the English training set.
• Wikipedia3 is a source of high-quality encyclopedic text. We use the first shard of the cleaned English version until 2023-11-01.
• Slimpajama4 is a cleaned and deduplicated version of RedPajama. It is a high-quality pre-training corpus with diverse sources, including C4, ArXiv, GitHub, Books, etc.
• DCLM (Li et al., 2024) is the pre-training data of the DCLM-7B model. It includes 2.6T tokens extracted from Common Crawl. We sample from a subset5 of the DCLM.

Aside from the experiments in Section 3.3, we follow prior works and randomly sample 128 sequences with 2048 tokens as calibration data. To mitigate the impact of sampling randomness, all our experiments repeat the calibration data sampling 20 times with different random seeds and report the average performance.

Evaluation Tasks Some pruning works focus on the perplexity of certain datasets while neglecting performance on various downstream tasks, which often fails to fully reflect the capabilities of compressed models. Therefore, we choose multiple widely used and challenging commonsense reasoning tasks for evaluation, including BoolQ (Clark et al., 2019), Winogrande (Sakaguchi et al., 2021), PIQA (Bisk et al., 2020), Hellaswag (Zellers et al., 2019), ARC-e, ARC-c (Clark et al., 2018) and MMLU (Hendrycks et al., 2021). For MMLU, we use a 5-shot setting, while all other tasks are evaluated in a zero-shot setting. Our evaluation code is based on the lm-evaluation-harness repository6. We report the average performance of these seven tasks.
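As a concrete illustration of the sampling protocol above, the sketch below draws fixed-length token windows from a raw-text corpus. The function name, the corpus list, and the tokenizer object (anything exposing an encode() method) are assumptions made for the example.

```python
import random

def sample_calibration_set(corpus, tokenizer, n_samples=128, seq_len=2048, seed=0):
    """Randomly draw n_samples windows of seq_len tokens from a list of documents."""
    rng = random.Random(seed)
    samples = []
    while len(samples) < n_samples:
        doc = rng.choice(corpus)
        ids = tokenizer.encode(doc)
        if len(ids) <= seq_len:            # skip documents that are too short
            continue
        start = rng.randrange(0, len(ids) - seq_len)
        samples.append(ids[start:start + seq_len])
    return samples

# Repeating the draw with different seeds and averaging downstream scores
# separates the effect of the data source from sampling noise, e.g.:
# calib_sets = [sample_calibration_set(c4_docs, tok, seed=s) for s in range(20)]
```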
3.2 HOW MUCH DOES CALIBRATION DATA AFFECT PRUNING PERFORMANCE?

In practical applications, evaluating and comparing the impact of different calibration data on pruned models inevitably consumes time and computational resources. Therefore, we wonder how significant the impact of calibration data is on pruning performance and whether it is worth our effort to seek optimal calibration data in research and practice. We consider different sparsity ratios and sparsity types. Our experiments cover sparsity ratios ranging from 30% to 60%, and at a 50% sparsity ratio, we further compare unstructured, 4:8 semi-structured, and 2:4 semi-structured sparsity types.

2 https://huggingface.co/datasets/allenai/c4
3 https://huggingface.co/datasets/wikimedia/wikipedia
4 https://huggingface.co/datasets/DKYoon/SlimPajama-6B
5 https://huggingface.co/datasets/robbiegwaldd/dclm-micro
6 https://github.com/EleutherAI/lm-evaluation-harness

Figure 3: The impact of calibration data amount for different pre-training data resources (i.e., C4, Wikipedia, Slimpajama, DCLM) and pruning methods, i.e., (a) Wanda and (b) DSnoT. Shaded areas represent the standard deviations of 20 random seeds.

We use Wanda as an example to illustrate the model's performance range, defined as the difference between the maximum and minimum values, after pruning with four calibration data sets, as shown in Figure 2. More details on the performance of the different calibration data can be found in Figure 6 in Appendix A. Specifically, at low sparsity ratios (<50%), the performance difference between different calibration data is minimal, less than 0.1%. As sparsity increases, the impact of calibration data on pruning gradually amplifies, rising from a 0.5% difference at 50% sparsity to 2.3% at 60% sparsity. Notably, as shown in Figure 6, inappropriate calibration data can even have a negative effect at moderate sparsity levels. For instance, at 60% sparsity, using Wikipedia and Slimpajama as calibration data performs worse than magnitude pruning without any calibration data. For sparsity types, we observe that as the sparsity pattern becomes more structured, the choice of calibration data becomes increasingly important, with the maximum difference reaching 1.5% to 1.8%. We also report results on DSnoT and OWL in Appendix A. Although different pruning methods exhibit varying performance, they show similar trends regarding the impact of calibration data. Overall, at moderate to high sparsity ratios and with semi-structured sparsity types, different calibration data significantly affect the performance of pruned LLMs. For all pruning methods, higher sparsity ratios and more structured sparsity types are key to achieving effective inference acceleration. Therefore, paying more attention to the choice of calibration data is crucial.

3.3 IS CALIBRATION DATA FROM DIFFERENT SOURCES EQUALLY ROBUST TO DATA AMOUNT?

Currently, almost all post-training pruning methods for LLMs have empirically demonstrated robustness in terms of the amount of calibration data they use. Typically, model performance reaches a plateau when the data amount reaches 128, and more calibration data do not lead to additional performance gains. We wonder whether these methods are equally robust to the amount of data for calibration data from different sources. Can certain calibration data that lead to poorer pruned models be improved by increasing the data amount? We perform Wanda and DSnoT pruning on DCLM-7B in the 2:4 semi-structured pruning setting.
We randomly sample 64, 128, 256, 512, 1024, and 2048 samples from different data sources as calibration data. Figure 3 shows how the performance of pruned models changes with increasing data amount using different calibration data. We observe that the average performance of pruned models is robust to data amount, regardless of the calibration data source, with fluctuations of only 0.1%-0.2%. Therefore, we cannot expect that increasing the amount of calibration data will narrow the performance gap between different calibration data. Additionally, as the data amount increases, the standard deviation of the pruned model's performance decreases.
However, in practical scenarios, the training data of many LLMs is not publicly available to users. In this section, we will propose the “self-generating then sampling” strategy for sampling calibration data when the training data is unavailable. Formally, given a dataset D as the source of calibration data and an LLM M pre-trained on an inaccessible dataset Dt, we aim to sample n instances from D as calibration data Dc that has a similar distribution to Dt. Recently, Xu et al. (2024b) disclosed that LLMs internalize patterns such as language structure, word distribution, and even commonsense knowledge from the training data during the training process. Due to their auto-regressive nature, LLMs leverage these internalized patterns when predicting the next token, producing the generated text similar to the training data. Thus, we propose using self- generated synthetic data as a proxy for the training data for calibration in post-training pruning. Specifically, for a sample from the source of calibration data D, we truncate the first t tokens as the prefix and then allow the LLM M to generate contextually relevant subsequent content: xi ∼ pM(x<i), i = t · · · N. (2) After generating the data, we filter the synthetic data to prevent low-quality generated data from negatively impacting pruning effectiveness. We calculate each generated sample’s perplexity and filter the k% samples with the highest perplexity. Higher perplexity indicates that the patterns are not well-fitted by the LLM and may differ significantly from the training data, making them unsuitable as calibration data. 6 Preprint 5 EXPERIMENTS 5.1 EXPERIMENTAL DETAILS To evaluate the effectiveness of our proposed calibration data sampling method, we apply it to vari- ous LLMs, including DCLM-7B, LLaMA-2-7B, LLaMA-2-13B (Touvron et al., 2023) and LLaMA- 3-8B (Dubey et al., 2024). As described in Section 3.1, we use C4, Wikipedia, Slimpajama, and DCLM as baselines for calibration data, employing three post-training pruning methods: Wanda, DSnoT, and OWL, to prune the dense models. In the main experiments, we report performance at the 60% sparsity ratio. We follow previous work to evaluate the compressed LLMs’ language modeling and commonsense reasoning capabilities. We do not use the Wikitext2 dataset, which is common in most papers for evaluating language modeling ability, as its similarity to Wikipedia may introduce bias when assessing the impact of different calibration data on language modeling ability. Instead, we choose the Alpaca (Taori et al., 2023) dataset, distinct from all four calibration data sources, as our language modeling test data. When replicating DSnoT and OWL, we follow the hyperparameter settings detailed in their papers. During the self-generation process, we use Top-k and Top-p sampling to improve the diversity of the generated data. Specifically, we set the p-value to 0.95, the k-value to 50, and the temperature to 0.6 for the DCLM model, and 0.8 temperature for the LLaMA-series model. We apply the repetition penalty of 1.2 to avoid repeatedly generating low-quality fragments. To demonstrate the generaliza- tion of our self-generated calibration data, we randomly sample 5,000 examples from the Wikipedia data for generation, as Wikipedia performs poorly among the LLMs we used. In the filtering phase, we eliminate the top 20% of samples based on their perplexity. 5.2 OVERALL PERFORMANCE We report main results in Table 2 and Table 5. 
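Before turning to the results, the self-generation step of Section 4 can be sketched with standard Hugging Face `transformers` APIs. The model name is a placeholder for the DCLM-7B or LLaMA checkpoints; the sampling hyperparameters mirror those listed in Section 5.1 (top-p 0.95, top-k 50, temperature 0.6 for DCLM, repetition penalty 1.2), and `prefix_tokens` is the truncation length t of Eq. (2). The generation length is a typical calibration-sequence length, not a value fixed by the paper.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "model-name-here"  # placeholder for DCLM-7B / LLaMA checkpoints
tok = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_NAME, torch_dtype=torch.float16, device_map="auto")

def self_generate(sample_text, prefix_tokens=2, max_new_tokens=2048,
                  temperature=0.6):
    # Keep only the first t tokens of the source sample as a prefix,
    # then let the model continue it with its own learned patterns.
    ids = tok(sample_text, return_tensors="pt").input_ids[:, :prefix_tokens]
    out = model.generate(
        ids.to(model.device),
        do_sample=True,
        top_p=0.95,
        top_k=50,
        temperature=temperature,
        repetition_penalty=1.2,
        max_new_tokens=max_new_tokens,
    )
    return tok.decode(out[0], skip_special_tokens=True)
```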
Overall, our self-generated synthetic calibration data outperforms other baseline calibration data in language modeling and commonsense reasoning tasks and is compatible with different pruning methods. On DCLM-7B, the synthetic calibration data im- proves performance in commonsense reasoning tasks by an average of 2.2% to 2.6% compared to the original Wikipedia data. Additionally, it surpasses the commonly used C4 calibration data, achiev- ing an average increase of 0.8% to 1.2%. On LLaMA family models, the self-generated synthetic data significantly outperforms the original data, with improvements ranging from approximately 0.9% to 1.1%, and surpasses the C4 data by about 0.3% to 0.5%. Surprisingly, the performance of the self-generated calibration data even exceeds that of calibration data sampled from the DCLM-7B training set, with an average improvement of 0.3% to 0.7%. We think this may be due to certain patterns in the calibration data that LLMs have not adequately learned. Using these patterns as calibration data may misestimate the importance of parameters. In contrast, due to the nature of maximum likelihood training, self-generated calibration data typically generates patterns that LLMs have better learned, thus avoiding using underrepresented patterns as calibration data. 6 DISCUSSION 6.1 IS THE SYNTHETIC CALIBRATION DATA SUITABLE FOR OTHER PRUNING SETTINGS? Table 3: Pruning performance of differ- ent calibration data. We further validate the effectiveness of self-generated synthetic calibration data across more pruning settings. Table 3 illustrates the commonsense reasoning perfor- mance of DCLM-7B during Wanda pruning using differ- ent calibration data at unstructured 50% and 65% spar- sity ratios, as well as semi-structured 4:8 and 2:4 settings. In all pruning settings, our synthetic calibration data ei- ther matches or exceeds the performance of the optimal calibration data from the training set DCLM. Notably, the synthetic data improve performance by approximately 0.8% in the two semi-structured pruning settings. Since semi-structured pruning can achieve prac- tical inference acceleration and advanced GPUs already support 2:4 sparse tensor cores. Thus, we Slim DCLM Syn C4 Wiki Setting 69.07 61.03 53.97 64.82 69.62 67.02 69.43 69.64 69.26 63.61 62.88 62.31 57.22 58.14 66.27 66.17 66.28 58.11 56.10 62.52 50% 65% 4:8 2:4 7 Preprint Table 2: Pruning performance of different calibration data on DCLM-7B and LLaMA-2-7B in 60% sparsity ratio. The best performance method is indicated in bold. Wiki, Slim, and Syn are abbrevia- tions for Wikipedia, SlimPajama, and our synthetic data, respectively. Alpaca (↓) BoolQ Winogrande PIQA Hellaswag ARC-e ARC-c MMLU Avg. 
Method Data Wanda DSnoT OWL Wanda DSnoT OWL C4 Wiki Slim DCLM Syn C4 Wiki Slim DCLM Syn C4 Wiki Slim DCLM Syn C4 Wikipedia Slimpajama DCLM Syn C4 Wikipedia Slimpajama DCLM Syn C4 Wikipedia Slimpajama DCLM Syn 9.67 9.99 9.76 9.54 9.40 9.81 10.16 9.87 9.70 9.40 9.52 9.96 9.59 9.38 9.20 10.42 10.42 10.23 9.88 9.62 10.88 10.92 10.76 10.37 10.40 9.19 9.30 9.21 9.08 9.13 DCLM-7B 70.27 68.40 70.16 70.51 70.06 69.44 68.08 69.21 69.36 69.20 68.90 67.11 68.69 69.47 68.92 LLaMA-2-7B 64.50 63.84 63.68 64.25 64.40 64.04 62.72 63.66 63.99 64.01 67.34 66.05 66.91 67.94 66.38 75.12 74.33 74.27 75.13 75.78 74.76 73.95 73.80 74.63 75.38 75.55 74.25 74.56 75.10 76.03 71.12 70.55 71.10 71.15 71.49 71.22 70.55 70.82 71.44 71.49 72.74 71.82 72.32 72.39 73.18 78.47 72.05 78.56 79.11 78.73 76.11 69.97 75.58 77.39 77.58 78.14 75.27 78.09 78.45 78.45 66.30 66.80 66.83 68.92 68.29 65.25 66.24 65.66 66.65 65.44 66.73 66.50 67.52 69.79 69.85 66.32 64.79 65.07 66.25 66.16 65.08 63.23 63.88 64.89 64.76 65.22 63.07 64.00 65.07 65.18 58.92 56.69 57.54 58.72 58.89 57.15 55.55 56.17 56.77 57.77 62.86 61.90 62.25 62.73 62.86 72.84 73.14 72.37 73.37 74.34 72.10 72.09 71.37 72.06 73.27 72.46 73.01 72.35 72.76 73.72 64.92 64.78 64.68 64.81 64.73 64.40 64.10 64.43 64.56 64.86 67.54 67.57 66.70 67.06 67.89 40.84 39.91 39.94 41.66 42.83 39.08 38.69 38.63 39.83 41.66 38.24 38.35 37.95 38.81 40.29 33.91 34.23 33.98 33.98 35.41 32.82 33.16 32.51 33.30 34.30 35.68 35.89 34.91 35.85 35.07 43.31 42.20 43.40 44.58 45.04 41.62 41.63 42.25 43.73 44.53 39.04 38.75 39.84 40.73 42.73 23.06 22.94 22.95 23.65 24.01 23.45 23.05 23.15 23.73 23.90 26.20 26.07 26.05 26.45 26.34 63.88 62.12 63.40 64.37 64.71 62.60 61.09 62.10 63.13 63.77 62.51 61.40 62.21 62.91 63.61 54.68 54.26 54.39 55.07 55.32 54.05 53.62 53.77 54.35 54.54 57.02 56.54 56.67 57.46 57.37 think the self-generated synthetic calibration data will effectively enhance the performance of pruned models in real-world deployment. 6.2 HOW DOES PREFIX LENGTH AFFECT THE PERFORMANCE OF SYNTHETIC DATA? The prefix length during self-generation is a crucial hy- perparameter. If the prefix is too short, the synthetic text is likely to be far from the semantics of the original text; if it is too long, the synthetic calibration data may retain excessive patterns from the original text. Therefore, it is essential to explore the selection of prefix length. Our experiments range from 0 to 1024 prefix lengths, where a prefix length of 0 indicates only a special token rep- resenting the start of the text. Figure 4 shows the trend of commonsense reasoning performance as the prefix length varies. Once there is a prefix, the performance exceeds that of the original calibration data. However, longer prefixes do not yield better results, as perfor- mance gradually declines with increased prefix length. The results indicate that using 1 to 4 tokens as a prefix is optimal. This suggests that semantic consistency with the original text is not critical in synthetic calibration data; instead, the key is to avoid retaining patterns that could have negative effects. Figure 4: Wanda pruning performance using self-generated synthetic calibration data with different prefix lengths. 8 :LNLSHGLD'&/06\QWKHWLF Preprint 6.3 HOW DOES PERPLEXITY-BASED DATA FILTERING AFFECT PRUNING PERFORMANCE? Table 4: Impact of perplexity- based data filtering. After generating synthetic data, we employ a simple perplexity- based method to filter low-quality data. 
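The filtering step just mentioned amounts to scoring each generated sample by its perplexity (assumed here to be computed under the same model that generated it) and discarding the highest-perplexity fraction — 20% in our setup. A minimal sketch with a Hugging Face causal LM:

```python
import math
import torch

@torch.no_grad()
def perplexity(model, tok, text):
    ids = tok(text, return_tensors="pt").input_ids.to(model.device)
    # Passing labels gives the mean next-token cross-entropy; exponentiate it.
    loss = model(ids, labels=ids).loss
    return math.exp(loss.item())

def filter_by_perplexity(samples, model, tok, drop_frac=0.2):
    scored = sorted(samples, key=lambda s: perplexity(model, tok, s))
    keep = int(len(scored) * (1.0 - drop_frac))
    return scored[:keep]   # drop the drop_frac highest-perplexity samples
```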
Is this perplexity-based filtering method effective, and what should the filtering rate be? We conduct experiments on the DCLM-7B model. As shown in Table 4, even without any filtering strategy, the synthetic data outperforms the original data. The perplexity-based filtering has proved to be a simple yet effective approach, with the best pruning performance at a filtering rate of 10%-20%. As the filtering rate increases, prun- ing effectiveness gradually declines, ultimately matching the per- formance of the unfiltered data. Therefore, we recommend filtering only the outliers based on perplexity, as overly aggressive filtering may compromise the diversity of the calibration data, negatively impacting pruning performance. Alpaca (↓) Commonsense 40% filter 10% filter 30% filter 20% filter w/o filter 64.51 64.49 64.71 62.12 64.76 64.49 Wiki Data 9.47 9.99 9.42 9.40 9.40 - 6.4 WHETHER SELF-GENERATED SYNTHETIC CALIBRATION DATA IS MORE SIMILAR TO TRAINING DATA? In Section 3.4, we assert that data similar to the train- ing data is more suitable as calibration data for post- training pruning. Based on the auto-regressive gen- eration characteristics of LLMs, we propose using self-generated data as an approximation of the train- ing data. But is the self-generated synthetic data truly similar to the model’s training data than other calibration data? We use an efficient and effective Min-K%++ method (Zhang et al., 2024a) for mea- suring. Min-K%++ notes that after maximum like- lihood training, the probability distribution of the training data always lies at local maxima along the input dimensions. Therefore, for a given token se- quence (x<t, xt), if the sequence is belong to the training data, the p(x<t, xt) should be higher than that of other candidate tokens in the vocabulary. The Min-K%++ is formulated as follows: Figure 5: The Min-50%++ score distribu- tion of C4, Wikipedia, Slimpajama and self- generated synthetic data. W (x<t, xt) = logp(xt|x<t) − µx<t σx<t (cid:88) (x<t,xt)∈min-k% 1 |min-k%| Min-K%++(x) = , (3) W (x<t, xt), where µx<t, σx<t is the expectation and standard deviation of the next token’s log probability given the prefix x<t, respectively. min-k% refers to choosing the bottom k% of subsequences based on scores from the sequence x. Thus, the higher a sample’s Min-K%++ score, the more likely it is to appear in the training data. Figure 5 shows the Min-50%++ score distribution of C4, Wikipedia, Slimpajama and our self-generated synthetic data. We can clearly observe that the self-generated synthetic data has higher Min-50%++ scores than the other calibration data. It indicates that the self- generated synthetic calibration data is indeed similar to the training data, confirming the validity of using self-generated data as a proxy for the training data. 7 CONCLUSION AND FUTURE WORK In this paper, we highlight the critical role that calibration data plays in post-training pruning for LLMs. Through systematic exploration, we demonstrate that calibration data similar to the origi- nal training data leads to superior pruning performance. To address the challenge of inaccessible training data in practical scenarios, we propose a self-generating synthetic calibration data strategy, which effectively samples suitable calibration data for LLMs. Experimental results on the DCLM, LLaMA-2, and LLaMA-3 models demonstrate that our method significantly outperforms existing 9 0.80.60.40.20.001234567DensityC4WikipediaSlimpajamaSynthetic Preprint common-used calibration data. 
We firmly believe that calibration data, as an essential part of post- training pruning, still holds significant potential for further research. Our work still has some limitations that are worth exploring further. First, we do not fully optimize the hyperparameters when generating synthetic calibration data, such as using more advanced de- coding strategies or refined filtering methods. We believe that improving these details could further enhance the effectiveness of the synthetic calibration data. Second, our experiments are limited to unstructured and semi-structured pruning on 7B-13B LLMs. In future work, we will validate our method on 70B LLMs and in structured pruning scenarios. Additionally, we will continue to explore how to synthesize high-quality instruction data as calibration data to help compress aligned LLMs. REFERENCES Yonatan Bisk, Rowan Zellers, Ronan bras, Jianfeng Gao, and Choi Yejin. Piqa: Reasoning about physical commonsense in natural language. Proceedings of the AAAI Conference on Artificial Intelligence, 34:7432–7439, 04 2020. doi: 10.1609/aaai.v34i05.6239. S´ebastien Bubeck, Varun Chandrasekaran, Ronen Eldan, Johannes Gehrke, Eric Horvitz, Ece Kamar, Peter Lee, Yin Tat Lee, Yuanzhi Li, Scott Lundberg, Harsha Nori, Hamid Palangi, Marco Tulio Ribeiro, and Yi Zhang. Sparks of artificial general intelligence: Early experiments with gpt-4, 2023. Christopher Clark, Kenton Lee, Ming-Wei Chang, Tom Kwiatkowski, Michael Collins, and Kristina In Jill Toutanova. BoolQ: Exploring the surprising difficulty of natural yes/no questions. Burstein, Christy Doran, and Thamar Solorio (eds.), Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Lan- guage Technologies, Volume 1 (Long and Short Papers), pp. 2924–2936, Minneapolis, Min- nesota, June 2019. Association for Computational Linguistics. doi: 10.18653/v1/N19-1300. URL https://aclanthology.org/N19-1300. Peter Clark, Isaac Cowhey, Oren Etzioni, Tushar Khot, Ashish Sabharwal, Carissa Schoenick, and Oyvind Tafjord. Think you have solved question answering? try arc, the ai2 reasoning challenge, 2018. Tim Dettmers, Ruslan A. Svirschevski, Vage Egiazarian, Denis Kuznedelev, Elias Frantar, Saleh Ashkboos, Alexander Borzunov, Torsten Hoefler, and Dan Alistarh. SpQR: A sparse-quantized representation for near-lossless LLM weight compression. In The Twelfth International Confer- ence on Learning Representations, 2024. URL https://openreview.net/forum?id= Q1u25ahSuy. Peijie Dong, Lujun Li, Zhenheng Tang, Xiang Liu, Xinglin Pan, Qiang Wang, and Xiaowen Chu. Pruner-zero: Evolving symbolic pruning metric from scratch for large language models. In Ruslan Salakhutdinov, Zico Kolter, Katherine Heller, Adrian Weller, Nuria Oliver, Jonathan Scarlett, and Felix Berkenkamp (eds.), Proceedings of the 41st International Conference on Machine Learning, volume 235 of Proceedings of Machine Learning Research, pp. 11346–11374. PMLR, 21–27 Jul 2024. URL https://proceedings.mlr.press/v235/dong24b.html. Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, Aiesha Letman, Akhil Mathur, Alan Schelten, Amy Yang, Angela Fan, et al. The llama 3 herd of models. arXiv preprint arXiv:2407.21783, 2024. Jonathan Frankle and Michael Carbin. The lottery ticket hypothesis: Finding sparse, trainable neural In International Conference on Learning Representations, 2019. URL https:// networks. openreview.net/forum?id=rJl-b3RcF7. Elias Frantar and Dan Alistarh. 
Optimal brain compression: A framework for accurate post- training quantization and pruning. In Alice H. Oh, Alekh Agarwal, Danielle Belgrave, and Kyunghyun Cho (eds.), Advances in Neural Information Processing Systems, 2022. URL https://openreview.net/forum?id=ksVGCOlOEba. Elias Frantar and Dan Alistarh. Sparsegpt: Massive language models can be accurately pruned in one-shot. In International Conference on Machine Learning, pp. 10323–10337. PMLR, 2023. 10 Preprint Elias Frantar, Saleh Ashkboos, Torsten Hoefler, and Dan Alistarh. OPTQ: Accurate quantization for generative pre-trained transformers. In The Eleventh International Conference on Learning Representations, 2023. URL https://openreview.net/forum?id=tcbBPnfwxS. Song Guo, Jiahang Xu, Li Lyna Zhang, and Mao Yang. Compresso: Structured pruning with collab- orative prompting learns compact large language models, 2023. URL https://arxiv.org/ abs/2310.05015. Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Ja- cob Steinhardt. Measuring massive multitask language understanding. In International Confer- ence on Learning Representations, 2021. URL https://openreview.net/forum?id= d7KBjmI3GmQ. Jordan Hoffmann, Sebastian Borgeaud, Arthur Mensch, Elena Buchatskaya, Trevor Cai, Eliza Rutherford, Diego de Las Casas, Lisa Anne Hendricks, Johannes Welbl, Aidan Clark, Tom Hen- nigan, Eric Noland, Katie Millican, George van den Driessche, Bogdan Damoc, Aurelia Guy, Simon Osindero, Karen Simonyan, Erich Elsen, Jack W. Rae, Oriol Vinyals, and Laurent Sifre. Training compute-optimal large language models, 2022. Wei Huang, Yangdong Liu, Haotong Qin, Ying Li, Shiming Zhang, Xianglong Liu, Michele Magno, and Xiaojuan Qi. BiLLM: Pushing the limit of post-training quantization for LLMs. In Ruslan Salakhutdinov, Zico Kolter, Katherine Heller, Adrian Weller, Nuria Oliver, Jonathan Scarlett, and Felix Berkenkamp (eds.), Proceedings of the 41st International Conference on Machine Learning, volume 235 of Proceedings of Machine Learning Research, pp. 20023–20042. PMLR, 21–27 Jul 2024. URL https://proceedings.mlr.press/v235/huang24q.html. Ajay Jaiswal, Zhe Gan, Xianzhi Du, Bowen Zhang, Zhangyang Wang, and Yinfei Yang. Com- pressing LLMs: The truth is rarely pure and never simple. In The Twelfth International Confer- ence on Learning Representations, 2024. URL https://openreview.net/forum?id= B9klVS7Ddk. Yixin Ji, Yang Xiang, Juntao Li, Wei Chen, Zhongyi Liu, Kehai Chen, and Min Zhang. Feature- based low-rank compression of large language models via bayesian optimization, 2024. URL https://arxiv.org/abs/2405.10616. Ting Jiang, Deqing Wang, Fuzhen Zhuang, Ruobing Xie, and Feng Xia. Pruning pre-trained lan- guage models without fine-tuning. In Anna Rogers, Jordan Boyd-Graber, and Naoaki Okazaki (eds.), Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 594–605, Toronto, Canada, July 2023. Association for Compu- tational Linguistics. doi: 10.18653/v1/2023.acl-long.35. URL https://aclanthology. org/2023.acl-long.35. Jared Kaplan, Sam McCandlish, Tom Henighan, Tom B. Brown, Benjamin Chess, Rewon Child, Scott Gray, Alec Radford, Jeffrey Wu, and Dario Amodei. Scaling laws for neural language models, 2020. Ayush Kaushal, Tejas Vaidhya, and Irina Rish. Lord: Low rank decomposition of monolingual code llms for one-shot compression, 2023. URL https://arxiv.org/abs/2309.14021. Bishwash Khanal and Jeffery M. Capone. 
Evaluating the impact of compression techniques on task- specific performance of large language models, 2024. URL https://arxiv.org/abs/ 2409.11233. Franc¸ois Lagunas, Ella Charlaix, Victor Sanh, and Alexander Rush. Block pruning for faster transformers. In Marie-Francine Moens, Xuanjing Huang, Lucia Specia, and Scott Wen-tau Yih (eds.), Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pp. 10619–10629, Online and Punta Cana, Dominican Republic, November 2021. Association for Computational Linguistics. doi: 10.18653/v1/2021.emnlp-main.829. URL https://aclanthology.org/2021.emnlp-main.829. Mike Lasby, Anna Golubeva, Utku Evci, Mihai Nica, and Yani Ioannou. Dynamic sparse training with structured sparsity. In The Twelfth International Conference on Learning Representations, 2024. URL https://openreview.net/forum?id=kOBkxFRKTA. 11 Preprint Yann LeCun, John Denker, and Sara Solla. In D. Touretzky (ed.), Advances in Neural Information Processing Systems, volume 2. Morgan-Kaufmann, URL https://proceedings.neurips.cc/paper_files/paper/1989/ 1989. file/6c9882bbac1c7093bd25041881277658-Paper.pdf. Optimal brain damage. Namhoon Lee, Thalaiyasingam Ajanthan, and Philip Torr. SNIP: SINGLE-SHOT NETWORK PRUNING BASED ON CONNECTION SENSITIVITY. In International Conference on Learn- ing Representations, 2019. URL https://openreview.net/forum?id=B1VZqjAcYX. Jeffrey Li, Alex Fang, Georgios Smyrnis, Maor Ivgi, Matt Jordan, Samir Gadre, Hritik Bansal, Etash Guha, Sedrick Keh, Kushal Arora, Saurabh Garg, Rui Xin, Niklas Muennighoff, Rein- hard Heckel, Jean Mercat, Mayee Chen, Suchin Gururangan, Mitchell Wortsman, Alon Al- balak, Yonatan Bitton, Marianna Nezhurina, Amro Abbas, Cheng-Yu Hsieh, Dhruba Ghosh, Josh Gardner, Maciej Kilian, Hanlin Zhang, Rulin Shao, Sarah Pratt, Sunny Sanyal, Gabriel Il- harco, Giannis Daras, Kalyani Marathe, Aaron Gokaslan, Jieyu Zhang, Khyathi Chandu, Thao Nguyen, Igor Vasiljevic, Sham Kakade, Shuran Song, Sujay Sanghavi, Fartash Faghri, Se- woong Oh, Luke Zettlemoyer, Kyle Lo, Alaaeldin El-Nouby, Hadi Pouransari, Alexander Toshev, Stephanie Wang, Dirk Groeneveld, Luca Soldaini, Pang Wei Koh, Jenia Jitsev, Thomas Kol- lar, Alexandros G. Dimakis, Yair Carmon, Achal Dave, Ludwig Schmidt, and Vaishaal Shankar. Datacomp-lm: In search of the next generation of training sets for language models, 2024. URL https://arxiv.org/abs/2406.11794. Ji Lin, Jiaming Tang, Haotian Tang, Shang Yang, Wei-Ming Chen, Wei-Chen Wang, Guangxuan Xiao, Xingyu Dang, Chuang Gan, and Song Han. Awq: Activation-aware weight quantization for llm compression and acceleration. In MLSys, 2024. Xinyin Ma, Gongfan Fang, and Xinchao Wang. LLM-pruner: On the structural pruning of large language models. In Thirty-seventh Conference on Neural Information Processing Systems, 2023. URL https://openreview.net/forum?id=J8Ajf9WfXP. OpenAI. Chatgpt: Optimizing language models for dialogue. Open AI, blog, 2022. OpenAI. Gpt-4 technical report. ArXiv, abs/2303.08774, 2023. Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. Exploring the limits of transfer learning with a unified text-to- text transformer. Journal of Machine Learning Research, 21(140):1–67, 2020. URL http: //jmlr.org/papers/v21/20-074.html. Keisuke Sakaguchi, Ronan Le Bras, Chandra Bhagavatula, and Yejin Choi. Winogrande: an ad- versarial winograd schema challenge at scale. Commun. ACM, 64(9):99–106, aug 2021. ISSN 0001-0782. doi: 10.1145/3474381. 
URL https://doi.org/10.1145/3474381. Victor Sanh, Thomas Wolf, and Alexander Rush. Movement pruning: Adaptive sparsity by fine-tuning. In H. Larochelle, M. Ranzato, R. Hadsell, M.F. Balcan, and H. Lin (eds.), Advances in Neural Information Processing Systems, volume 33, pp. 20378–20389. Curran Associates, Inc., 2020. URL https://proceedings.neurips.cc/paper_files/ paper/2020/file/eae15aabaa768ae4a5993a8a4f4fa6e4-Paper.pdf. Wenqi Shao, Mengzhao Chen, Zhaoyang Zhang, Peng Xu, Lirui Zhao, Zhiqian Li, Kaipeng Zhang, Peng Gao, Yu Qiao, and Ping Luo. Omniquant: Omnidirectionally calibrated quantization for large language models. In The Twelfth International Conference on Learning Representations, 2024. URL https://openreview.net/forum?id=8Wuvhh0LYW. Mingjie Sun, Zhuang Liu, Anna Bair, and J Zico Kolter. A simple and effective pruning approach for large language models. In The Twelfth International Conference on Learning Representations, 2024. URL https://openreview.net/forum?id=PxoFut3dWW. Yi-Lin Sung, Jaehong Yoon, and Mohit Bansal. ECoFLap: Efficient coarse-to-fine layer-wise prun- ing for vision-language models. In The Twelfth International Conference on Learning Represen- tations, 2024. URL https://openreview.net/forum?id=iIT02bAKzv. 12 Preprint Rohan Taori, Ishaan Gulrajani, Tianyi Zhang, Yann Dubois, Xuechen Li, Carlos Guestrin, Percy Liang, and Tatsunori B Hashimoto. replicable instruction- following model. Stanford Center for Research on Foundation Models. https://crfm. stanford. edu/2023/03/13/alpaca. html, 3(6):7, 2023. Alpaca: A strong, Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Niko- lay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. Llama 2: Open founda- tion and fine-tuned chat models. arXiv preprint arXiv:2307.09288, 2023. Xin Wang, Yu Zheng, Zhongwei Wan, and Mi Zhang. Svd-llm: Truncation-aware singular value decomposition for large language model compression, 2024. URL https://arxiv.org/ abs/2403.07378. Miles Williams and Nikolaos Aletras. On the impact of calibration data in post-training quantization In Lun-Wei Ku, Andre Martins, and Vivek Srikumar (eds.), Proceedings of the and pruning. 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 10100–10118, Bangkok, Thailand, August 2024. Association for Computational Linguistics. URL https://aclanthology.org/2024.acl-long.544. Mengzhou Xia, Tianyu Gao, Zhiyuan Zeng, and Danqi Chen. Sheared LLaMA: Accelerat- In The Twelfth International Confer- ing language model pre-training via structured pruning. ence on Learning Representations, 2024. URL https://openreview.net/forum?id= 09iOdaeOzp. Guangxuan Xiao, Ji Lin, Mickael Seznec, Hao Wu, Julien Demouth, and Song Han. SmoothQuant: Accurate and efficient post-training quantization for large language models. In Proceedings of the 40th International Conference on Machine Learning, 2023. Peng Xu, Wenqi Shao, Mengzhao Chen, Shitao Tang, Kaipeng Zhang, Peng Gao, Fengwei An, Yu Qiao, and Ping Luo. BESA: Pruning large language models with blockwise parameter-efficient sparsity allocation. In The Twelfth International Conference on Learning Representations, 2024a. URL https://openreview.net/forum?id=gC6JTEU3jl. Zhangchen Xu, Fengqing Jiang, Luyao Niu, Yuntian Deng, Radha Poovendran, Yejin Choi, and Bill Yuchen Lin. Magpie: Alignment data synthesis from scratch by prompting aligned llms with nothing, 2024b. URL https://arxiv.org/abs/2406.08464. 
Zhengyuan Yang, Linjie Li, Kevin Lin, Jianfeng Wang, Chung-Ching Lin, Zicheng Liu, and Lijuan Wang. The dawn of lmms: Preliminary explorations with gpt-4v(ision), 2023. Lu Yin, You Wu, Zhenyu Zhang, Cheng-Yu Hsieh, Yaqing Wang, Yiling Jia, Gen Li, Ajay Jaiswal, Mykola Pechenizkiy, Yi Liang, Michael Bendersky, Zhangyang Wang, and Shiwei Liu. Outlier weighed layerwise sparsity (owl): A missing secret sauce for pruning llms to high sparsity, 2024a. URL https://arxiv.org/abs/2310.05175. Lu Yin, You Wu, Zhenyu Zhang, Cheng-Yu Hsieh, Yaqing Wang, Yiling Jia, Gen Li, AJAY KUMAR JAISWAL, Mykola Pechenizkiy, Yi Liang, Michael Bendersky, Zhangyang Wang, and Shiwei Liu. Outlier weighed layerwise sparsity (OWL): A missing secret sauce for pruning LLMs to high sparsity. In Forty-first International Conference on Machine Learning, 2024b. URL https: //openreview.net/forum?id=ahEm3l2P6w. Geng Yuan, Xiaolong Ma, Wei Niu, Zhengang Li, Zhenglun Kong, Ning Liu, Yifan Gong, Zheng Zhan, Chaoyang He, Qing Jin, et al. Mest: Accurate and fast memory-economic sparse training framework on the edge. Advances in Neural Information Processing Systems, 34, 2021. Zhihang Yuan, Yuzhang Shang, Yue Song, Qiang Wu, Yan Yan, and Guangyu Sun. Asvd: Activation-aware singular value decomposition for compressing large language models, 2024. URL https://arxiv.org/abs/2312.05821. Rowan Zellers, Ari Holtzman, Yonatan Bisk, Ali Farhadi, and Yejin Choi. HellaSwag: Can a In Anna Korhonen, David Traum, and Llu´ıs M`arquez machine really finish your sentence? (eds.), Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pp. 4791–4800, Florence, Italy, July 2019. Association for Computational Linguistics. doi: 10. 18653/v1/P19-1472. URL https://aclanthology.org/P19-1472. 13 Preprint Jingyang Zhang, Jingwei Sun, Eric Yeats, Yang Ouyang, Martin Kuo, Jianyi Zhang, Hao Frank Yang, and Hai Li. Min-k%++: Improved baseline for detecting pre-training data from large language models, 2024a. URL https://arxiv.org/abs/2404.02936. Mingyang Zhang, Hao Chen, Chunhua Shen, Zhen Yang, Linlin Ou, Xinyi Yu, and Bohan Zhuang. LoRAPrune: Structured pruning meets low-rank parameter-efficient fine-tuning. In Lun-Wei Ku, Andre Martins, and Vivek Srikumar (eds.), Findings of the Association for Computational Linguistics ACL 2024, pp. 3013–3026, Bangkok, Thailand and virtual meeting, August 2024b. Association for Computational Linguistics. URL https://aclanthology.org/2024. findings-acl.178. Yingtao Zhang, Haoli Bai, Haokun Lin, Jialin Zhao, Lu Hou, and Carlo Vittorio Cannistraci. Plug- and-play: An efficient post-training pruning method for large language models. In The Twelfth International Conference on Learning Representations, 2024c. URL https://openreview. net/forum?id=Tr0lPx9woF. Yuxin Zhang, Lirui Zhao, Mingbao Lin, Sun Yunyun, Yiwu Yao, Xingjia Han, Jared Tanner, Shiwei Liu, and Rongrong Ji. Dynamic sparse no training: Training-free fine-tuning for sparse LLMs. In The Twelfth International Conference on Learning Representations, 2024d. URL https: //openreview.net/forum?id=1ndDmZdT4g. 14 Preprint A MORE STUDIES ON DIFFERENT SPARSITY (a) sparsity ratio: 0.3 (b) sparsity ratio: 0.4 (c) sparsity ratio: 0.5 (d) sparsity ratio: 0.6 (e) sparsity type: 4:8 (f) sparsity type: 2:4 Figure 6: Pruning performance of different datasets (C4, Wikipedia, Slimpajama, DCLM) under various sparsity ratios (a-d) and sparsity types (e-f) on Wanda. 
(a) sparsity ratio: 0.3 (b) sparsity ratio: 0.4 (c) sparsity ratio: 0.5 (d) sparsity ratio: 0.6 (e) sparsity type: 4:8 (f) sparsity type: 2:4 Figure 7: Pruning performance of different datasets (C4, Wikipedia, Slimpajama, DCLM) under various sparsity ratios (a-d) and sparsity types (e-f) on DSnoT. B MORE RESULTS OF SYNTHETIC CALIBRATION DATA 15 &:LNLSHGLD6OLPSDMDPD'&/0&:LNLSHGLD6OLPSDMDPD'&/0&:LNLSHGLD6OLPSDMDPD'&/0&:LNLSHGLD6OLPSDMDPD'&/0&:LNLSHGLD6OLPSDMDPD'&/0&:LNLSHGLD6OLPSDMDPD'&/0&:LNLSHGLD6OLPSDMDPD'&/0&:LNLSHGLD6OLPSDMDPD'&/0&:LNLSHGLD6OLPSDMDPD'&/0&:LNLSHGLD6OLPSDMDPD'&/0&:LNLSHGLD6OLPSDMDPD'&/0&:LNLSHGLD6OLPSDMDPD'&/0 Preprint (a) sparsity ratio: 0.3 (b) sparsity ratio: 0.4 (c) sparsity ratio: 0.5 (d) sparsity ratio: 0.6 (e) sparsity type: 4:8 (f) sparsity type: 2:4 Figure 8: Pruning performance of different datasets (C4, Wikipedia, Slimpajama, DCLM) under various sparsity ratios (a-d) and sparsity types (e-f) on OWL. Table 5: Pruning performance of different calibration data on LLaMA-2-13B in 60% sparsity ra- tio. The best performance method is indicated in bold. Wiki, Slim, and Syn are abbreviations for Wikipedia, SlimPajama, and our synthetic data, respectively. Method Data Alpaca BoolQ Winogrande PIQA Hellaswag ARC-e ARC-c MMLU Avg. Wanda DSnoT OWL 8.99 C4 9.21 Wiki Slim 8.76 DCLM 8.73 8.73 Syn 9.03 C4 9.34 Wiki Slim 9.03 DCLM 9.04 8.96 Syn 7.56 C4 8.25 Wiki Slim 7.68 DCLM 7.33 7.35 Syn 77.36 74.39 76.82 77.50 77.06 77.16 76.02 76.31 77.22 77.09 78.92 77.93 79.41 79.85 79.05 68.68 67.97 68.42 68.37 68.68 66.60 65.89 66.79 67.56 67.64 70.02 69.47 69.69 70.23 69.61 75.45 74.97 75.25 75.16 75.19 74.92 74.43 74.84 74.52 74.54 75.95 75.23 75.55 75.57 76.50 66.51 64.39 65.18 66.34 66.25 65.76 63.84 64.44 65.38 65.33 69.12 68.13 68.42 69.21 69.11 69.18 68.66 69.03 69.95 70.03 69.81 68.93 70.13 69.94 70.29 70.90 71.20 70.60 71.62 71.51 39.74 38.62 39.56 40.15 40.19 38.45 37.95 38.33 38.72 39.68 41.14 39.23 40.19 40.48 41.55 26.80 24.96 28.01 27.98 29.06 25.73 25.19 26.97 26.97 27.08 32.75 31.75 32.47 33.77 31.19 60.53 59.14 60.32 60.78 60.92 59.77 58.89 59.69 60.04 60.23 62.69 61.85 62.33 62.96 62.65 Table 6: Pruning performance of different calibration data on LLaMA-3-8B in 60% sparsity ra- tio. The best performance method is indicated in bold. Wiki, Slim, and Syn are abbreviations for Wikipedia, SlimPajama, and our synthetic data, respectively. Data BoolQ Winogrande PIQA Hellaswag ARC-e ARC-c MMLU Avg. 69.02 C4 66.82 Wiki Slim 66.86 DCLM 70.14 70.03 Syn 60.55 59.02 60.11 61.17 61.88 67.98 67.40 67.53 67.83 68.06 59.95 59.79 59.38 60.04 59.85 30.59 29.67 29.96 31.16 31.66 23.60 24.14 23.52 23.22 23.19 51.59 50.57 50.77 51.93 52.11 49.47 47.14 48.07 49.97 50.11 16 &:LNLSHGLD6OLPSDMDPD'&/0&:LNLSHGLD6OLPSDMDPD'&/0&:LNLSHGLD6OLPSDMDPD'&/0&:LNLSHGLD6OLPSDMDPD'&/0&:LNLSHGLD6OLPSDMDPD'&/0&:LNLSHGLD6OLPSDMDPD'&/0
synthetic_cpt
5
Self-Play_Fine-Tuning_Converts_Weak_Language_Models_to_Strong_Language_Models.pdf
1 0 0 2 r a M 9 2 1 v 5 4 2 3 0 1 0 / h t - p e h : v i X r a Non-abelian self-duality from self-interaction A. Khoudeir Instituto de F´ısica, Universidad Nacional Aut´onoma de M´exico Apdo. Postal 20-364, 01000 M´exico D. F. M´exico and Centro de Astrof´ısica Te´orica, Departamento de F´ısica, Facultad de Ciencias, Universidad de los Andes, M´erida, 5101,Venezuela. Abstract The non-abelian self-dual action in three dimensions is derived using the self-interaction mechanism. Self-duality in three dimensions was proposed initially by Townsend et. al. [1] as an alternative to the topologically massive theory[2]. In principle, they seem different descriptions of a locally massive spin 1 physical excitation: the self-dual theory is described by a non-gauge invariant first order action while the topologically massive action is written down in a gauge invariant second order formulation. Both actions have an abelian Chern-Simons term (ǫmnpAm∂nAp). Despite these differences, Deser and Jackiw stablished that both theories are locally equivalent through the existence of a master action, even in the presence of external sources[3]. Moreover, both theories are dual equivalent[4] and the self-dual theory can be seen as a gauged fixed version of the topologically massive theory[5]. The self-dual theory for gravity and for higher spin in three dimensions was achieved in [6] and [7], respectively. If glogal properties are considered, the equivalence is modified, for instance, the partition functions of the self dual and topologically massive theories are not the same but they are related in the following way: ZSD = ZCSZT M [8] (where ZCS is the partition function of the abelian Chern-Simons action). The non-abelian generalization of the topologically massive theory was given in [2] while the non-abelian self-dual theory was formulated indepen- dently by McKeon [9] and Arias, et. al.[10], which has a structure of a Freedman-Townsend action[11]. In this letter, starting from an appropiate master action, we will derive the non-abelian self-dual action using the self-interaction mechanism[12]. 1 We will start by considering the following master action[13] I = Z d3x[−µǫmnpAm∂nap − 1 2 µ2amam − µǫmnpAm∂nvp + 1 2 µǫmnpvm∂nvp] (1) This action can be seen as the coupling between a Maxwell field (Am) and a vector field (vm) described by an abelian Chern-Simons action through a three dimensional BF topological term. Independent variations in the am, vm and Am fields, yield the following equations of motion am = −1 2 µǫmnpfnp(A), ǫmnp∂n[Ap − vp] = 0 (2) (3) and ǫmnp∂n[ap + vp] = 0, (4) where fmn(A) = ∂mAn − ∂nAm. The last two equations can be solved locally. We have and vm = Am + ∂mφ am = −vm + ∂mσ. The master action has abelian gauge invariance δAm = ∂mλ1 δvm = ∂mλ2 (5) (6) (7) Substituting the equations (2) and (5), into the master action lead to the action for the abelian topologically massive theory d3x[−1 4 (A) fmn(A) − 1 f mn 4 µǫmnpAmfnp(A)]. I = (8) Z On the other hand, we can eliminate the am and Am fields, through the use of equations (5) and (6) in order to obtain I = Z d3x[−1 2 µ2(vm − ∂mφ)(vm − ∂mφ) + 1 2 µǫmnpvm∂nvp], (9) which is invariant under the following abelian gauge transformations δvm = ∂mλ1, δφ = λ1. (10) 2 Fixing the gauge φ = 0, we obtain the non-gauge invariant self-dual action. Then, the proposed master action show the equivalence (at classical level) between the topologically and self-dual theories. 
The master action that we are considering is locally equivalent to the master action of Deser and Jackiw, as can be seen after eliminating only the vm field and is written down as I = Z d3x[−µǫmnpAm∂nap − 1 2 µ2amam − 1 2 µǫmnpAm∂nAp] (11) Introducing the Lie-algebra valued vectors Am = Ai mT i and the mT i, am = ai mnT i, where the generators T i of Lie-algebra valued field strength Fmn = F i the gauge group are normalized by T iT j = δij, the non-abelian generalization of the master action of Deser and Jackiw obtained by replacing ordinary derivative by covariant derivative, fmn = ∂mAn − ∂nAm → Fmn = ∂mAn − ∂nAm + [Am, An] and considering the non-abelian Chern-Simons term is I = µtr Z d3x[ǫmnpamFnp − 1 2 µamam − 1 2 ǫmnpAm(∂nAp + 2 3 AnAp)] (12) and only can reproduce the non-abelian version of the topologically mas- sive theory after eliminating the am field by using its equation of motion (am = ǫmnpFnp). On the other hand, the equation of motion obtained by independent variations in Am has no known solutions and in consecuence the non-abelian master action of Deser and Jackiw can not reproduce the non-abelian self-dual action. The non-abelian topologically massive theory can be deduced from the self-interaction mechanism[14]. Now, we will consider for simplicity a triplet of SU(2) free vector fields m (i = 1, 2, 3). The m coupled with a triplet of SU(2) free vector fields vi Ai action is Io = Z d3x[−µǫmnpAi m∂nai p − 1 2 µ2ai mami − µǫmnpAi m∂nvi p + 1 2 µǫmnpvi m∂nvi p]. (13) This action has two global simmetries. One is the global SU(2) symmetry δωX = gǫijkX jωk where X = (A, a, v) and the other global symmetry is given by δρAi m = gǫijk[aj m + vj m]ρk; 3 δρai m = 0 = δρvi m. (14) (15) Under these transformations, the action changes by a total derivative. The Noether currents associated with the global symmetries are jmi = −µgǫmnpǫijkAj n[ak p + vk p ] + 1 2 µgǫmnpǫijkvj nvk p and K mi = −1 2 µgǫmnpǫijk[aj n + vj n][ak p + vk p ]. (16) (17) These currents are conserved on-shell. Now, we will couple these Noether currents to the action I0 through the corresponding self-interaction term defined by jmi ≡ δISI δvi m , K mi ≡ δISI δAi m . We find d3x[−ǫmnpǫijkvi ǫmnpǫijkvi mvj nAk p Z ISI = gµ − 1 2 ǫmnpǫijkAi maj nak p + nak p − 1 2 mvj ǫmnpǫijkvi mAj 1 6 nvk p ]. (18) (19) The self-interaction mechanism stops here since no other derivative terms appear in ISI. Now, we add ISI to Io. The last term in eq. (13) combines with the last term in eq. (19) to give a Chern-Simons term for the vm field. The non-abelian action is d3x[−ǫmnpAi m(F i np(a) + F i np(v) + 2gǫijkanvk p ) − µai mami (20) I = µ 1 2 + ǫmnpvi Z m(∂nvi p + 1 3 ǫijkvj nvk p )], or I = 1 2 µ Z where and d3x[−ǫmnpAi mF i np(a+v) − µai mami + ǫmnpvi m(∂nvi p + 1 3 ǫijkvj nvk p )], (21) mn(a) = ∂mai F i n mn(v) = ∂mvi F i n − ∂nai m + gǫijkaj mak n − ∂nvi m + gǫijkvj mvk n 4 (22) (23) are the field strengths for the ai m fields. The self-interaction process combines the abelian gauge transformations with the global ones giving rise to the following non-abelian local gauge transformations m and vi δAi δvi m = gǫijkAj m = ∂mαi + gǫijkvj mαk; δai mαk m = gǫijkaj mαk and δAi δai m = ∂mκi + gǫijk[aj m = 0 = δvi m m + vj m]κk (24) (25) Defining ωm ≡ am + vm, the action is rewritten down as I = 1 2 µ g2 tr Z d3x[−ǫmnpAmFnp(ω) − µ(vm − ωm)(vm − ωm) (26) + ǫmnpvm[∂nvp + 2 3 vnvp]. This action was interpreted as the interaction between a Chern-Simons and a BF(ǫAF ) topological terms propagating a massive spin 1 physical mode[10]. 
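For readability, a cleaned-up LaTeX transcription of two of the display equations above — the abelian master action (1) and the final non-abelian action (26) — as best they can be reconstructed from the extracted text (with ω_m ≡ a_m + v_m):

```latex
% Eq. (1): abelian master action
I = \int d^{3}x \Big[ -\mu\,\epsilon^{mnp} A_m \partial_n a_p
    - \tfrac{1}{2}\mu^{2} a_m a^{m}
    - \mu\,\epsilon^{mnp} A_m \partial_n v_p
    + \tfrac{1}{2}\mu\,\epsilon^{mnp} v_m \partial_n v_p \Big]

% Eq. (26): non-abelian action, with \omega_m \equiv a_m + v_m
I = \frac{\mu}{2 g^{2}}\,\mathrm{tr}\!\int d^{3}x
    \Big[ -\epsilon^{mnp} A_m F_{np}(\omega)
    - \mu\,(v_m-\omega_m)(v^{m}-\omega^{m})
    + \epsilon^{mnp} v_m\big(\partial_n v_p + \tfrac{2}{3} v_n v_p\big) \Big]
```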
Like as in the non-abelian topologically massive theory, invariance in the functional integral implies the quantization condition: 4π µ g2 = integer. We observe that Am play the role of a Lagrange multiplier. Its equation of motion is which tell us that ω is a pure gauge. Fmn(ω) = 0 ωm = U −1∂mU. Then, the action becomes I = 1 2 µ g2 tr Z d3x[−µ(vm −U −1∂mU)(vm −U −1∂mU) + ǫmnpvm(∂nvp + (27) (28) 2 3 vnvp)], (29) where the vm field appear coupled with a Stuckelberg field. Now, we have invariance under the following (finite) gauge transformations vm → g−1∂m∂mg + g−1vmg, U → Ug. (30) 5 This gauge invariance allow us to fix the gauge U = 1, in order to obtain the standard action for the non-abelian self-dual field vm I = 1 2 µ g2 tr Z d3[−µvmvm + ǫmnpvm(∂nvp + 2 3 vnvp)]. (31) To conclude, we have derived the non-abelian self-dual action in three di- mensions using the self-interaction mechanism. Recently, a dual version of a pure non-abelian Chern-Simons action was formulated [15]. It would be interesting to analyse the duality properties of the self-dual and topologically masive theories at non-abelian level. ACKNOWLEDGEMENTS The author would like to thank to Marti Ruiz Altaba for his hospitality at Instituto de F´ısica de la Universidad Nacional Aut´onoma de M´exico. Also, the author thanks Conicit-Venezuela for financial support. References [1] P. K. Townsend, K. Pilch and P. van Nieuwenhuizen, Phys. Lett. B136 (1984) 38. [2] S. Deser, R. Jackiw and S. Tempelton, Ann. Phys. 140 (1982) 372. [3] S. Deser and R. Jackiw, Phys. Lett. B139 (1984) 371. [4] J. Stephany, Phys.Lett. B390 (1997) 128. [5] R. Gianvittorio, A. Restuccia and J. Stephany, Mod. Phys. Lett. A6 (1991) 2121; P. J. Arias and J. Stephany, J. Math. Phys. 36 (1995) 1868. [6] C. Aragone and A. Khoudeir, Phys.Lett. B173 (1986) 141. [7] C. Aragone and A. Khoudeir, Revista Mexicana de F´ısica 39 (1993) 819. [8] P. J. Arias and A. Restuccia, Phys. Lett. B347 (1995) 241. [9] D. G. C. McKeon, Int. Journal of Mod. Phys. A7 (1992) 2005. 6 [10] P. J. Arias, L. Leal and A. Restuccia, Phys.Lett. B367 (1996) 170. [11] D. Freedman and P. K. Townsend, Nucl. Phys. B177 (1981) 282. [12] S. Deser, Gen. Rel. Grav. 1 (1970) 9; Class. Quantum Grav. 4 (1987) L99; S. Deser and M. Henneaux, Mod. Phys. Lett. A10 (1995) 991. [13] A. Khoudeir, Mod. Phys. Lett. A11 (1996) 2489. [14] C. Aragone and E. Araujo, Acta Cient´ıfica Venezolana 36 (1985) 207. [15] H. Garc´ıa-Compean, O. Obregon and C. Ram´ırez, hep-th/0103066. 7
synthetic_cpt
2
Surface_Form_Competition_Why_the_Highest_Probability_Answer_Isn’t_Always_Right.pdf
Molecular explanation for why talc surfaces can be both hydrophilic and hydrophobic Benjamin Rotenberg CNRS et UPMC-Paris6, Laboratoire PECSA, UMR 7195, 4 pl. Jussieu, F-75005 Paris, France∗ Amish J. Patel Howard P. Isermann Department of Chemical & Biological Engineering, and Center for Biotechnology and Interdisciplinary Studies, Rensselaer Polytechnic Institute, Troy, New York 12180 Department of Chemistry, University of California, Berkeley, California 94720 David Chandler (Dated: September 4, 2018) Abstract While individual water molecules adsorb strongly on a talc surface (hydrophilic behavior), a droplet of water beads up on the same surface (hydrophobic behavior). To rationalize this di- chotomy, we investigate the influence of the microscopic structure of the surface and the strength of adhesive (surface-water) interactions on surface hydrophobicity. We show that at low relative humidity, the competition between adhesion and the favorable entropy of being in the vapor phase determines the surface coverage. However, at saturation, it is the competition between adhesion and cohesion (water-water interactions) that determines surface hydrophobicity. The adhesive interactions in talc are strong enough to overcome the unfavorable entropy, and water adsorbs strongly on talc surfaces. However, they are too weak to overcome the cohesive interactions, and water thus beads up on talc surfaces. Surprisingly, even (talc-like) surfaces that are highly adhe- sive, do not fully wet at saturation. Instead, a water droplet forms on top of a strongly adsorbed monolayer of water. Our results imply that the interior of hydrophobic zeolites suspended in water may contain adsorbed water molecules at pressures much smaller than the intrusion pressure. 1 1 0 2 p e S 0 2 ] h p - m e h c . s c i s y h p [ 1 v 4 8 2 4 . 9 0 1 1 : v i X r a 1 A. Introduction Wetting properties of minerals in soils and rocks play a crucial role in the transport, and thus availability, of water and oil. Clay minerals are particularly interesting, not only due to their abundance in nature and in synthetic materials, but also because the existence of clays with different structures allows us to investigate the effect of surface microstructure on macroscopic properties. Clay surfaces can be either charge-neutral or have a net charge, which is balanced by counter-ions in solution. Molecular simulation has furthered our un- derstanding of both these types of clays: uncharged clays have been studied using both ab-initio1,2 and classical simulations3,4, whereas simulations of charged clays have provided insights into interlayer properties5–8, swelling9–12, and cation exchange13–15. These studies have shown that the surface microstructure is expected to be more important in determining surface-water interactions in uncharged clays16,17, and it is these surfaces that are the focus of the current work. Among uncharged clays, talc surfaces have attracted a lot of atten- tion18–20, because of their peculiar behavior with respect to water. Water adsorption at low relative humidity (RH) reveals the presence of strong binding sites on talc21. Such strong binding sites are absent in other uncharged clays such as pyrophyllite and fluorotalc. Yet, experimental contact angles indicate that the surface of talc monocrystals is hydrophobic, similar to that of pyrophyllite22,23. To investigate this dichotomy, here we employ molecular dynamics simulations combined with recently developed algorithms24,25. 
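As a concrete illustration of the mixing rule mentioned above, the Lorentz-Berthelot combination takes a geometric mean of the well depths and an arithmetic mean of the diameters. The SPC/E oxygen values below are the standard ones; the clay-atom values are placeholders that would be taken from the CLAYFF parameter tables, not actual CLAYFF numbers.

```python
import math

def lorentz_berthelot(eps_i, sigma_i, eps_j, sigma_j):
    """Pair Lennard-Jones parameters from two atom types:
    geometric mean of epsilon, arithmetic mean of sigma."""
    eps_ij = math.sqrt(eps_i * eps_j)
    sigma_ij = 0.5 * (sigma_i + sigma_j)
    return eps_ij, sigma_ij

# SPC/E water oxygen (standard values) and a clay atom type whose parameters
# would be read from CLAYFF (placeholder numbers here).
EPS_OW, SIG_OW = 0.1553, 3.166      # kcal/mol, Angstrom
EPS_CLAY, SIG_CLAY = 1.0e-5, 3.30   # placeholder CLAYFF-style values

print(lorentz_berthelot(EPS_OW, SIG_OW, EPS_CLAY, SIG_CLAY))
```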
In agreement with experiments, we find that at low RH, talc surfaces display hydrophilic behavior as water adsorbs strongly to the binding sites on the surface. However, at saturation, cohesive interactions dominate and the interaction between the surface binding sites and water is minimal, resulting in a hydrophobic surface. To further explore the role of surface microstructure and the strength of the adhesive interactions on surface hydrophobicity, we also study similar clay minerals, pyrophyllite and fluorotalc, as well as modified talc surfaces with a range of binding site polarities, both at low relative humidity and at saturation. We find that the dual hydrophilic-hydrophobic behavior observed in talc, is generically expected to manifest for surfaces whose adhesive interaction energy lies in a special range. If the adhesion to water is strong enough to overcome the entropy of being in the vapor phase at low RH, water adsorbs strongly to the surface (hydrophilic behavior). At the same time, if adhesion is too weak to overcome 2 the cohesive interactions in water, the surface is hydrophobic at saturation. For modified talc surfaces with strong enough adhesion to overcome the cohesive interactions, all surface binding sites are occupied by water molecules at saturation, as expected. Surprisingly, instead of observing complete wetting, we find that a water droplet sits atop the adsorbed water monolayer. B. Microscopic Models Talc, fluorotalc and pyrophyllite are uncharged clay minerals, i.e., layered silicates of magnesium (Mg) or aluminum (Al). They belong to the family of TOT clays: each clay sheet consists of a layer of octahedrally coordinated Mg or Al oxide between two layers of tetrahedral silicon oxide (see 1(a) - side view). The surface of these sheets displays hexagonal rings of SiO2 tetrahedra. In talc and fluorotalc, all octahedral sites are occupied by Mg atoms, while in pyrophyllite two third of these sites are occupied by Al atoms (see 1(a) - top view). The charge on Mg and Al is balanced by hydroxyl groups in the center of the hexagonal cavities. In talc, these hydroxyl groups are oriented perpendicular to the surface, and can participate in hydrogen bonds with water. In pyrophyllite, the hydroxyl groups are oriented parallel to the surface, and in fluorotalc, they are replaced by fluorine atoms. The atomic coordinates for the unit cells of these clays have been included as Supplementary Information. We use the CLAYFF force field3 to model the interactions of the clay atoms and the SPC/E model to describe water26. Lorentz-Berthelot combination rules are used to deter- mine the pair Lennard-Jones parameters and a rigid clay structure is assumed. As there are no parameters for fluorine in CLAYFF, we assigned it a charge equal to that of the -OH group in talc (-0.525) and Lennard-Jones parameters of the fluoride ion reported in Ref.27. All simulations were performed in the NVT ensemble using the LAMMPS simulation pack- age28 at a temperature, T = 300 K, maintained using a Nose-Hoover thermostat29. SHAKE was used to integrate the motion of the rigid water molecules30 and long-range electrostatic interactions were computed using Ewald summation. 3 FIG. 1: (a) Microscopic clay structure (Red: O, White: H, Yellow: Si, Green: Al, Cyan: Mg atoms). The side and top views of the pyrophyllite clay sheet show the hydroxyl (-OH) groups that are parallel to the sheet. In talc (top view shown), the -OH groups are perpendicular to the sheet and can participate in hydrogen bonds with water. 
In fluorotalc (not shown), the talc -OH groups are replaced by F atoms. (b) Part of the simulation setup for studying the clay - water interface. The blue box is the observation volume, v, used to probe density fluctuations. (c) Simulation setup for determining contact angles. C. Methods 1. Clay - water interface A clay-water interface is representative of the situation at saturation. The setup shown in 1(b) is used to calculate the local water density, ρ(z), as well as the water density fluctuations near the interface. The potential of mean force, , for bringing a water F molecule from bulk to a distance z from the plane of the Mg atoms for talc and fluorotalc (and Al for pyrophyllite) is related to ρ(z) by (z) = F − kBT ln[ρ(z)/ρb], where kB is the Boltzmann constant and ρb is the bulk water density. To quantify density fluctuations, we measure the probability distribution, Pv(N ), of finding N water molecules in an observation volume v, adjacent to the clay surface, using the indirect umbrella sampling (INDUS) 3 ˚A3 placed method24,25. We chose a rectangular parallelopiped of dimensions 15 15 × × near the surface [see 1(b)], as the observation volume. The exact z-position of v was chosen 4 Pyrophyllite!Talc!Side view!Top view!(a)!(c)!(b)! so that the mean water density in v is equal to ρb. The simulation box also contained a fixed wall of repulsive WCA particles (not shown), placed at the top of the box (far from v) to nucleate a vapor-liquid buffering interface. 2. Contact angle The simulation setup for contact angle measurements is shown in 1(c). The contact angle is determined by computing water density maps in the plane of the center-of-mass of the drop. The curve with density equal to half of the bulk density is then fit to a circle and the angle between the tangent to this circle at zS = 7 ˚A and the horizontal axis is taken to be the contact angle. While the exact quantitative value of the contact angle depends on the choice of zS, our qualitative findings do not. 3. Water vapor adsorption The adsorption of water vapor at low RH corresponds the interaction of an isolated water molecule with the surface. To determine the corresponding adsorption free energy, ∆µads, we compute (z) using umbrella sampling, with the weighted histogram analysis F method (WHAM) 31,32 being used to reconstruct (z) from the biased trajectories. F D. Hydrophobicity at low and high RH Using the various molecular measures of hydrophobicity described above, we study talc, as well as fluorotalc and pyrophyllite surfaces, both at saturation and at low RH. 1. High RH Theory33–36 and simulations24,37–40 have shown that the mean water density near a surface is not a good measure of its hydrophobicity. Instead, fluctuations away from the mean, and 5 in particular, the rare fluctuations24 indicating the cost of creating a cavity at the interface correlate quantitatively with the contact angle41. Patel et al. have shown that hydrophobic surfaces display an enhanced probability of density depletion or a low N fat tail in the Pv(N ) distribution, while Pv(N ) near hydrophilic surfaces is similar to that in bulk water24. As shown in 2(a), Pv(N ) near all three clay surfaces displays a low N fat tail, indicating that these surfaces are hydrophobic. A slight lifting of the fat tail from talc to fluorotalc and pyrophyllite suggests a corresponding marginal increase in hydrophobicity. FIG. 
2: (v = 15 (a) The probability, Pv(N ), of observing N water molecules in a probe volume 3 ˚A3) displays a low N fat tail when v is near the surface of talc (black), 15 × × fluorotalc (red), and pyrophyllite (blue), as compared to that when v is in bulk water (green). (b) Water droplet profiles corresponding to ρ(r, z) = 0.5ρb are shown for the clay surfaces. The contact angles for the surfaces are similar: 96◦ for talc, 103◦ for fluorotalc, and 105◦ for pyrophyllite (based on tangents drawn at zS = 7˚A). (c) Potential of mean (z), for the adsorption of an isolated water molecule (low RH) to the clay surfaces. force, The hydrogen atoms of the talc -OH groups are located at z = 2 ˚A and can participate in F hydrogen bonds with water molecules. (d) (z) at the clay - liquid water interface F (saturation). To maximize H-bonding with other waters, the binding site is no longer occupied. Another way to probe surface hydrophobicity is by simulating a sufficiently large water droplet on the surface and estimating the corresponding contact angle. 2(b) shows the average shape of droplets on the clay surfaces. The curve corresponding to ρ(r, z) = 0.5ρb is a circle in the (r, z) plane, where r is the distance from the axis that passes through the center of mass of the droplet. The contact angles obtained by tangents drawn at zS = 7˚A on the three surfaces are similar (talc: 96◦, fluorotalc: 103◦ pyrophyllite: 105◦), and clearly 6 -60-40-2000102030lnPv(N)N(a)BulkTalcF-talcPyro-101236912F(z)(kcal/mol)z(Å)(d)Saturation-6-30336912F(z)(kcal/mol)z(Å)(c)LowRH1020304050010203040z(Å)r(Å)(b) indicate hydrophobic behavior. Reliable experimental estimates of the contact angle of water droplets on both talc and pyrophyllite monocrystals are between 80◦ and 85◦22,23. The reported values for measurements on powders are usually smaller due to the presence of hydrophilic sites on the edges of finite clay particles42. To the best of our knowledge, no experimental contact angles have been reported for fluorotalc. For both talc and pyrophyllite, the contact angles obtained from our simulations (96◦ and 105◦ respectively) are somewhat larger than the experimental estimates, suggesting that surfaces modeled with the CLAYFF model are too hydrophobic. Nevertheless, amongst various commonly used clay force fields43–45, we find that the correspondence with experiments is closest for CLAYFF. A comparison of these force fields with experiments is provided in the Supplementary Information. 2. Low RH To investigate the wetting behavior of clay surfaces at low RH, we calculate the potential of mean force, F (z), for the adsorption of an isolated water molecule. (z) displays a F minimum near all the clay surfaces [see 2(c)], corresponding to an adsorption (or binding) free energy, ∆µads. For talc, ∆µads ≈ − a hydrogen bond between the water molecule and the hydroxyl group in talc. In fluorotalc, 5.9 kcal/mol, or 10 kBT , consistent with the formation of the hydroxyl group is replaced by fluorine, resulting in a reduction in ∆µads to -3.5 kcal/mol. 1 ˚A as the water is no longer strongly It also shifts the location of the minimum out by ≈ bound to the surface. Pyrophyllite, with the hydroxyl group parallel to the surface has an even smaller ∆µads ≈ − 2.8 kcal/mol, and the minimum is shifted out even more. To compare our estimate of ∆µads from simulations to experimental data, we ana- lyzed the data of Michot et al.21 using a Langmuir model. 
This model assumes that there are no interactions between the adsorbed molecules and predicts a surface coverage, Θ = (P/P ∗)/(1 + P/P ∗). P ∗ is the pressure at which half of the surface sites are occupied and is related to ∆µads through P ∗ = σmaxkBT δ eβ∆µads, (1) where σmax ≈ 4.2 nm−2 is the surface site density, δ 1 − ≈ 2 ˚A is the width of the surface 7 layer, i.e. the width of the PMF well in 2(c), and 1/β = kBT is the thermal energy. In the very low RH limit, corresponding to single water adsorption, we can safely assume that water molecules do not interact with each other. In this regime, Θ ≈ in Figure 11 of Ref.21, allow us to obtain an experimental estimate of P ∗ 0.056Psat for the talc surface. Here, Psat = 30 mbar is the saturation pressure of water. Using this value of P ∗ 8 kcal/mol46. This somewhat in equation 1, we get an experimental estimate of ∆µads ≈ − stronger adsorption than that predicted from simulations using CLAYFF (-5.9 kcal/mol), is P/P ∗ and the data ≈ consistent with the overestimate of the CLAYFF talc contact angle. If we further assume that the adsorbed water molecules do not interact with each other even at higher RH, the Langmuir model (with P ∗ = 0.056Psat) predicts that Θ 0.9 at 50% ≈ RH. As water coverage on the talc surface can be large even at moderate RH, interactions between water molecules may be important, consistent with suggestions that clustering needs to be considered21,47. In contrast, for fluorotalc Θ at saturation estimated from ∆µads is very small ( 1.5%), in agreement with the hydrophobic adsorption behavior reported in Figure ≈ 10 of Ref.21. Thus, the clay surfaces simulated using the CLAYFF force field are more hydrophobic than the real clay surfaces used in experiments. However, the interesting dichotomy of talc surfaces is also observed in the simulations and our findings are qualitatively consistent with the experiments, both at low RH (strong adsorption for talc and not the other clays) and at high RH (large contact angles for all clays). E. Cohesion vs Adhesion To investigate the disparate behavior of talc surfaces at low and high RH, we compare (z) for moving a water molecule away from the surface under both conditions. At satu- (z) for the clay surfaces are similar [2(c)], consistent with similar droplet contact F (z) for fluorotalc is nearly identical to that for pyrophyl- angle on the three surfaces [2(b)]. lite, and that for talc features an additional local minimum around z = 5 ˚A corresponding F to water molecules above the binding site. However, the (z) curves at saturation are qual- F itatively different from those at low RH [see 2(c-d)] For all three clays, and especially so for talc, the depth of the minimum at saturation is smaller than that at low RH, suggesting a weakening of adhesive interactions at saturation. 8 F ration, FIG. 3: F (z) for adsorbing a single water molecule on the talc surface, compared to that for a molecule in the dimer and a molecule at saturation. To explore the competition between adhesive and cohesive interactions in talc, in 3, we compare F (z) for an individually adsorbed water, with that for water in a dimer, and that for water at saturation. As shown in 3, the (z) for the dimer displays two minima. The F minimum corresponding to the molecule inside the cavity is shifted to slightly larger values compared to the minimum in the F (z) for a single water. 
In addition, the depth of the minimum is smaller, and is comparable to that for a single water on the more hydrophobic fluorotalc surface [2(c) and 3]. In other words, the presence of the second water weakens the adhesive surface-water interactions, which have to compete with the cohesive interactions between the waters. As the dimer is less tightly bound to the surface than a single water, it is easier for the water to escape the cavity in the presence of a second molecule. The dimer is in fact more mobile on the talc surface than isolated water molecules (not shown), confirming that the interaction of the surface with the dimer is weaker than with individual molecules. Finally, at saturation, cohesive interactions prevail, and water no longer occupies (z) for 3˚A < z < 5˚A. the binding site cavity as evidenced by the lack of a minimum in F F. Modified Talc Surfaces While the H-bonding between binding sites on the talc surface and water leads to an interesting transition from hydrophilic at low RH to hydrophobic at high RH, the binding sites interact weakly with water in fluorotalc and pyrophyllite, which display hydrophobic behavior for all RH. To investigate the effect of the binding strength on the hydrophobicity of 9 -6-4-20236912F(z)(kcal/mol)z(Å)TalcSingleDimerSaturation FIG. 4: (a) F (z) for a single water on various talc surfaces modified to span a range of ∆µads-values. (b) The corresponding F (z) curves at saturation. (c) The relative stability of water in the binding site compared to that in bulk, ∆µsite, and the barrier to escape − the binding site, ∆µbarrier, as a function of the binding strength, ∆µads. The dashed vertical line corresponds to µsat, the chemical potential at saturation. the surface, following Giovambattista et al.48, we construct a series of modified talc surfaces. The only force field parameters that are changed are the charges on the oxygen (from qO = 0.95 to qO − − δq) and the hydrogen (from qH = 0.425 to qH + δq) of the hydroxyl group. We study modified talc surfaces for δq ranging from -0.425 which corresponds to a non-polar binding site similar to that in fluorotalc, to +0.6 which corresponds to an ion-pair. δq = 0 is the talc surface, by definition. In 4(a), we show F (z) for an isolated water molecule on the modified talc surfaces. As the polarity of the -OH bond is increased, the magnitude of ∆µads also increases, providing us with surfaces that display a wide range of binding strengths. (z) at saturation, shown F in 4(b) for these surfaces is particularly interesting. For weakly adhesive surfaces ( 0.425 δq < 0.1), there is only one stable basin at z ≈ binding site cavity. For stronger adhesion (larger δq), a second basin develops at z ≤ 6.5 ˚A, corresponding to molecules outside the 3.5 ˚A − ≈ and is separated from the first basin by a barrier. 4(c) shows the depth of this minimum relative to bulk, ∆µsite, as a function of ∆µads. As the surface becomes more adhesive, more waters occupy the binding site and the depth of this minimum increases. When adhesive interactions are large enough to overcome cohesive interactions, i.e., when − ∆µads becomes larger than the chemical potential at saturation, µsat (for δq − ≈ plateau in ∆µsite. 0.4), every binding site is occupied by a water molecule, resulting in a 10 -20-15-10-50468F(z)(kcal/mol)z(Å)(a)-101234468F(z)(kcal/mol)z(Å)δq=−0.425δq=0.0δq=0.2468z(Å)δq=0.3δq=0.4δq=0.5(b)-3036-20-15-10(kcal/mol)∆µads(kcal/mol)(c)−∆µsite∆µbarrier FIG. 
5: (a) Schematic showing the surface coverage, Θ, over a wide range of relative humidities (RH P/Psat ∼ ≡ exp[β(µ − µsat)]) and adhesive interaction strengths (∆µads). (b) Effect of ∆µads on surface hydrophobicity quantified by cos θ. The dashed vertical line corresponds to µsat. Snapshots indicating typical configurations of water molecules (red and white) on modified talc surfaces (blue) are also shown. As the adhesive interactions (∆µads) overcome the cohesive interactions (µ), there is a transition from a dry surface [snapshots (i) and (iii)] to one covered with a monolayer of water [snapshots (ii) and (iv)]. However, the height ∆µbarrier of the barrier to escape the cavity, also shown in 4(c), continues to increase approximately linearly with the binding strength. Thus, for surfaces with strong binding, ∆µbarrier is large, and the exchange of molecules between the cavities and the liquid is expected to be very slow, with possible consequences on the extent of stick/slip at such surfaces in the presence of a hydrodynamic flow. G. Tuning cohesion/adhesion via RH/∆µads Collectively our results paint a comprehensive picture of how the experimentally mea- surable quantities, the surface coverage Θ, and the contact angle θ, respond to changes in relative humidity (or water chemical potential), and on the strength of the adhesive surface- water interactions. The surface coverage Θ, is defined as the fraction of binding sites occupied by water molecules, and its dependence on RH and ∆µads is shown schematically in 5(a). 11 µ = !µ!"#$"(!µ!"#- µ#!$)%5%-5%-10%0%0%-5%10%#$Adhesion!(i)!(ii)!(iii)!(iv)!(a)!-0.400.40.8-20-12-4cosθ∆µads(kcal/mol)(b) At low RH ( ≡ P/Psat), the competition between the adhesive interactions and the entropy of being in the vapor determines the surface coverage, Θ. At very low RH , there are no interactions between adsorbed waters and Θ can be approximated as : P/P ∗ = 0.1(P/Psat)e−β∆µads−8.3, Θ ≈ (2) where the second part of the equation is obtained by substituting for P ∗ using Equation (1), and using appropriate values of the constants that depend on the surface geometry, σmax and δ, and those that depend on thermodynamic conditions, T and Psat. For surfaces with small adhesive interactions, i.e., ∆µads < 5 kcal/mol (or − β∆µads < − 8.3), the coverage remains small (Θ < 0.1) even at saturation [snapshot (i) in 5]. Thus, no appreciable interactions between waters are expected over the entire range of RH-values. Both pyrophyllite and fluorotalc fall in this regime. Since Θ increases exponentially with β∆µads, for values of ∆µads > 5 kcal/mol, there − can be substantial coverage even at modest RH [snapshot (ii) in 5]. Equation (2) is then valid only for small RH-values for which the predicted Θ-values are small. Talc lies in this regime. For larger RH values, there are appreciable interactions between the waters, and it is the competition between adhesive and cohesive interactions that determines surface properties. For surfaces such as talc, for which ∆µads < − − µsat, cohesion prevails at saturation, and the adsorbed waters bead up into a droplet, while the rest of the binding sites on the surface are devoid of waters [snapshot (iii) in 5]. Thus, the interesting crossover from hydrophobic to hydrophilic behavior in talc is a result of its adhesive interactions being strong enough to overcome vapor phase entropy at low RH, but not strong enough to overcome cohesive interactions at saturation. 
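To make the magnitudes above concrete, the short numerical sketch below re-derives two of the quoted estimates from equation (1): the experimental ∆µads ≈ −8 kcal/mol implied by P∗ ≈ 0.056 Psat for talc, and the Langmuir coverage Θ ≈ 0.9 at 50% RH. The temperature (300 K) and the well width δ = 1.5 Å (the midpoint of the quoted 1-2 Å range) are assumed values, so the recovered numbers are approximate.

```python
import math

# Parameters from the text; T and delta are assumptions (300 K, midpoint of 1-2 A).
kB = 1.380649e-23                  # Boltzmann constant, J/K
T = 300.0                          # K (assumed)
kBT = kB * T                       # J
kcal_mol = 4184.0 / 6.02214e23     # J per (kcal/mol)
sigma_max = 4.2e18                 # surface site density, m^-2 (4.2 nm^-2)
delta = 1.5e-10                    # width of the PMF well, m (assumed 1.5 A)
P_sat = 3000.0                     # saturation pressure of water, Pa (30 mbar)

def p_star(dmu_ads_kcal):
    """Equation (1): P* = (sigma_max kB T / delta) exp(beta dmu_ads)."""
    return sigma_max * kBT / delta * math.exp(dmu_ads_kcal * kcal_mol / kBT)

def dmu_from_p_star(p):
    """Invert equation (1) to estimate dmu_ads (kcal/mol) from a measured P*."""
    return kBT * math.log(p * delta / (sigma_max * kBT)) / kcal_mol

print(dmu_from_p_star(0.056 * P_sat))   # ~ -8 kcal/mol for talc
x = (0.5 * P_sat) / (0.056 * P_sat)     # P/P* at 50% RH
print(x / (1.0 + x))                    # Langmuir coverage ~ 0.9
```

These numbers illustrate the intermediate regime that talc occupies: binding strong enough to give near-complete coverage at moderate RH, but, as discussed above, not strong enough to out-compete water-water cohesion at saturation.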
In this regime, with increasing polarity of the binding site, the surface gradually shifts from hydrophobic to hydrophilic, and cos θ increases approximately linearly as shown in 5(b). Finally, for surfaces with even larger values of ∆µads that are greater than µsat, adhe- − − sion dominates. . Surprisingly, water does not fully wet the surface at saturation. Instead, all binding sites are occupied by water molecules and only this first layer of water wets the surface. This water is strongly bound to the surface and the microstructure of the surface dictates the relative positions of the waters. In the present case, the arrangement of waters on the surface is not commensurate with the hydrogen bonding network of water, so that 12 water beads up on the monolayer [snapshot (iv) in 5]. For the modified talc surfaces with ∆µads > µsat, the surface has a strongly adsorbed water monolayer with a droplet on it − that makes a contact angle of about 50◦. − Similar behavior was reported by Ohler et al. for titanium dioxide surfaces, with droplet contact angles of 32 34◦ on top of roughly two monolayers of water49. However, other simulation studies investigating the effects of surface polarity on hydrophobicity50,51, do not − observe a plateau with non-zero contact angle at large polarities, seen in our results [5(b)]. Our modified talc surfaces are different from these previous studies in that the variation in polarity was achieved by changing the charges on atoms in recessed binding sites, while the remaining surface atoms remained the same. In contrast, in ref.50, the surface was modified by changing dipoles that protrude from the surface, while leaving the remaining surface atoms unchanged; whereas in ref.51, the charges on all atoms in the top two layers of an FCC crystal (111 facet) were changed to tune the polarity. Thus, our results indicate that the microstructure of the surface is important in determining the effect of polarity on its wetting properties. In contrast to the wetting properties of the model FCC surfaces used in ref.51, experi- mental measurements indicate that the FCC crystals of platinum (Pt), palladium (Pd), and gold (Au) are hydrophobic. Kimmel et al. observed a hydrophobic water monolayer on both Pt(111) and Pd(111) crystals52,53. Similarly, water has been shown to bead up on Au surfaces54 with a contact angle of 100◦ and Au surfaces have also been shown to adsorb, and facilitate the unfolding of proteins55; behavior that is typically associated with hydrophobic surfaces41. We speculate that the hydrophobicity of these metal surfaces arises from the presence of a monolayer of water, which binds strongly to the surface in a geometry that inhibits hydrogen bonding to the subsequent liquid water molecules. Our results also have implications on the wetting properties of nanoporous silicates such as hydrophobic zeolites56–58 and metal-organic frameworks59. These hydrophobic pores are thought to be devoid of water at ambient conditions, with water intrusion into the pores occurring only at sufficiently high water pressures. Our results suggest that in the presence of strong binding sites, these nanoporous materials may contain strongly adsorbed water molecules, even at lower pressures. If the resulting water-covered surface is hydrophobic, no further filling of the pores (analogous to wetting for planar surfaces) would be observed at ambient pressures, and intrusion would occur only at higher pressures. 
13 Acknowledgements The authors thank Virginie Marry, Patrick Varilly, Mark Davis, Shekhar Garde and Adam Willard for helpful discussions. B.R. is grateful to the University of California, Berkeley, for its hospitality. A.J.P. was supported by NIH Grant No. R01-GM078102-04. D.C. was supported by the Director, Office of Science, Office of Basic Energy Sciences, Materials Sciences and Engineering Division and Chemical Sciences, Geosciences, and Biosciences Division of the U.S. Department of Energy under Contract No. DE- AC02-05CH11231. Appendix A: Unit cells The unit cells used for the simulation of talc and pyrophyllite are reported in 6 and 7. For fluorotalc, the oxygen of the hydroxyl group is replaced by a fluorine atom and the hydrogen is removed. The unit cell of pyrophyllite, a dioctahedral smectite, has dimensions 8.97 ˚A2, as known from X-ray diffraction60. The unit cell of along the surface of 5.18 × fluorotalc is not known exactly; we used the one determined by X-ray diffraction on synthetic fluorohectorite61, which differs from fluorotalc only by substitution of some magnesium by lithium in the octahedral layer, resulting in a permanent negative charge compensated by 9.09 ˚A2 along the surface. For talc sodium counterions. The unit cell has dimensions 5.24 × we used the same structure, replacing each fluorine by a hydroxyl group with a bond length of 1 ˚A, oriented perpendicular to the surface. Appendix B: Comparison of force fields In the present work, we used the CLAYFF force field to describe the clay surfaces and their interactions with water molecules. To justify this choice, here we compare the predictions of another commonly used force field, and those of CLAYFF, with experimental results. This force field was originally developed by Skipper et al.43 and adapted by Smith et al.44 for its use in conjunction with the SPC/E water model. To investigate the talc surface at low RH, in 8(a), we show the (z) obtained using the F Skipper/Smith force field and compare it with that obtained using the CLAYFF force field. Also shown is the experimental estimate discussed in the main text, indicating that the Skipper/Smith force field overestimates the binding or adsorption free energy. 14 To investigate the hydrophobicity of talc surfaces at saturation, obtained using the two force fields, in 8(b), we show the respective Pv(N ) distributions. Pv(N ) for v near the Skipper/Smith talc surface indicates that it is harder to empty the observation volume close to the surface than in bulk water. This is also consistent with the observed complete wetting of the talc surface by a droplet, indicating a contact angle of θ = 0◦. Such a complete wetting is however in contradiction with the experimental contact angle of 80 85◦. We thus − conclude that the Skipper/Smith force field significantly overestimates talc-water adhesive interactions, both at low RH and at saturation. Another force field used to model dioctahedral clays and their interaction with organic cations was proposed by Heinz et al.45. This model was not extended to triocahedral clays such as talc, and the behavior of water at clay surfaces modeled with this force field has not been reported. We nevertheless simulated water droplets on the surface of pyrophyllite using this force field. The resulting contact angle (125◦) was larger than that measured experimentally (80 − hydrophobic. 
85◦), suggesting that this force fields results in surfaces that are too Finally, while we find that CLAYFF is the best available force field to date, to simulate water at the surface of uncharged clay minerals, the present work suggests that it is too hydrophobic. Thus we find that there is room for improvement to describe the clay-water interaction, in agreement with the findings of a recent study comparing molecular simula- tions with X-ray and neutron diffraction experiments on a charged smectite62. The insights gained during the present study of neutral clays, which are more sensitive to the clay-water interactions, could also be helpful in the design of an improved force field. Such design requires a subtle balance between different interactions which is generally not achieved by tuning only one parameter. With this caveat in mind, we note that a slightly more polar hydroxyl group might be relevant, as the modified talc surface with δq = 0.1 seems to agree quite well with experimentally measured ∆µads and cos θ values for talc. ∗ Electronic address: [email protected] 1 Bridgeman, C.; Buckingham, A.; Skipper, N.; Payne, M. Mol. Phys. 1996, 89, 879–888. 2 Churakov, S. V. Geochim. Cosmochim. Acta 2007, 71, 1130–1144. 15 3 Cygan, R. T.; Liang, J.-J.; Kalinichev, A. G. J. Phys. Chem. B 2004, 108, 1255–1266. 4 Cygan, R. T.; Greathouse, J. A.; Heinz, H.; Kalinichev, A. G. J. Mater. Chem. 2009, 19, 2470. 5 Delville, A. Langmuir 1991, 7, 547–555. 6 Marry, V.; Turq, P. J. Phys. Chem. B 2003, 107, 1832–1839. 7 Boek, E.; Coveney, P.; Skipper, N. J. Am. Chem. Soc. 1995, 117, 12608–12617. 8 Sposito, G.; Skipper, N.; Sutton, R.; Park, S.; Soper, A.; Greathouse, J. Proc. Nat. Acad. Sci. 1999, 96, 3358–3364. 9 Delville, A. Langmuir 1992, 8, 1796–1805. 10 Young, D.; Smith, D. J. Phys. Chem. B 2000, 104, 9163–9170. 11 Hensen, E.; Smit, B. J. Phys. Chem. B 2002, 106, 12664–12667. 12 Tambach, T.; Bolhuis, P.; Smit, B. Angew. Chem. Int. Ed. 2004, 43, 2650–2652. 13 Teppen, B. J.; Miller, D. M. Soil Sci. Soc. Am. J. 2006, 70, 31–40. 14 Rotenberg, B.; Marry, V.; Vuilleumier, R.; Malikova, N.; Simon, C.; Turq, P. Geochim. et Cosmochim. Acta 2007, 71, 5089–5101. 15 Rotenberg, B.; Morel, J.; Marry, V.; Turq, P.; Morel-Desrosiers, N. Geochim. et Cosmochim. Acta 2009, 73, 4034–4044. 16 Wang, J.; Kalinichev, A.; Kirkpatrick, R.; Cygan, R. J. Phys. Chem. B 2005, 109, 15893–15905. 17 Marry, V.; Rotenberg, B.; Turq, P. Phys. Chem. Chem. Phys. 2008, 10, 4802–4813. 18 Wang, J.; Kalinichev, A. G.; Kirkpatrick, R. Earth and Planetary Science Letters 2004, 222, 517–527. 19 Wang, J.; Kalinichev, A.; Kirkpatrick, R. Geochim. et Cosmochim. Acta 2006, 70, 562–582. 20 Wang, J.; Kalinichev, A. G.; Kirkpatrick, R. J. J. Phys. Chem. C 2009, 113, 11077–11085. 21 Michot, L. J.; Villieras, F.; Francois, M.; Yvon, J.; Le Dred, R.; Cases, J. M. Langmuir 1994, 10, 3765–3773. 22 Giese, R.; Costanzo, P.; Oss, C. Phys. Chem. Minerals 1991, 17, 611–616. 23 Van Oss, C. J.; Giese, R. F. Clays Clay Minerals 1995, 43, 474–477. 24 Patel, A. J.; Varilly, P.; Chandler, D. J. Phys. Chem. B 2010, 114, 1632–1637. 25 Patel, A. J.; Varilly, P.; Chandler, D.; Garde, S. J. Stat. Phys. 2011, in press, doi: 10.1007/s10955–011–0269–9. 26 Berendsen, H. J. C.; Grigera, J. R.; Straatsma, T. P. J. Phys. Chem. 1987, 91, 6269–6271. 27 Dang, L. X. Chem. Phys. Lett. 1992, 200, 21–25. 16 28 LAMMPS, http://lammps.sandia.gov. 29 Martyna, G.; Klein, M.; Tuckerman, M. J. Chem. Phys. 1992, 97, 2635–2643. 30 Ryckaert, J.-P.; Ciccotti, G.; Berendsen, H. J. Comput. Phys. 
1977, 23, 327–341. 31 Kumar, J., S. and. Rosenberg; Bouzida, D.; Swendsen, R.; Kollman, P. J. Comp. Chem. 1995, 16, 1339–1350. 32 Roux, B. Comput. Phys. Comm. 1995, 91, 275–282. 33 Lum, K.; Chandler, D.; Weeks, J. D. J. Phys. Chem. B 1999, 103, 4570–4577. 34 Chandler, D. Nature 2005, 437, 640–647. 35 Berne, B. J.; Weeks, J. D.; Zhou, R. Ann. Rev. Phys. Chem. 2009, 60, 85–103. 36 Varilly, P.; Patel, A. J.; Chandler, D. J. Chem. Phys. 2011, 134, 074109. 37 Mittal, J.; Hummer, G. Proc. Natl. Acad. Sci. 2008, 105, 20130–20135. 38 Sarupria, S.; Garde, S. Phys. Rev. Lett. 2009, 103, 037803. 39 Godawat, R.; Jamadagni, S. N.; Garde, S. Proc. Nat. Acad. Sci. 2009, 106, 15119–15124. 40 Acharya, H.; Vembanur, S.; Jamadagni, S. N.; Garde, S. Faraday Discuss. 2010, 146, 353–365. 41 Patel, A. J.; Varilly, P.; Jamadagni, S. N.; Acharya, H.; Garde, S.; Chandler, D. Proc. Natl. Acad. Sci. 2011, in press. 42 Douillard, J. J. Coll. Interf. Sci. 2002, 255, 341–351. 43 Skipper, N.; Refson, K.; McConnell, J. Clay Minerals 1989, 24, 411–425. 44 Smith, D. Langmuir 1998, 14, 5959–5967. 45 Heinz, H.; Koerner, H.; Anderson, K. L.; Vaia, R. A.; Farmer, B. L. Chem. Mater. 2005, 17, 5658–5669. 46 Since the number of hydrophilic binding sites on clay edges is much smaller than on the talc surface, we neglect the edge sites to obtain this estimate. 47 Carvalho, A.; Ramalho, J.; Villieras, F. Applied Surface Science 2007, 253, 5628–5632. 48 Giovambattista, N.; Debenedetti, P. G.; Rossky, P. J. J. Phys. Chem. B 2007, 111, 9581–9587. 49 Ohler, B.; Langel, W. J. Phys. Chem. C 2009, 113, 10189–10197. 50 Giovambattista, N.; Debenedetti, P. G.; Rossky, P. J. Proc. Natl. Acad. Sci. 2009, 106, 15181– 15185. 51 Surblys, D.; Yamaguchi, Y.; Kuroda, K.; Nakajima, T.; Fujimura, H. J. Chem. Phys. 2011, 135, 014703. 52 Kimmel, G. A.; Petrik, N. G.; Dohnalek, Z.; Kay, B. D. Phys. Rev. Lett. 2005, 95, 166102. 17 53 Kimmel, G. A.; Petrik, N. G.; Dohnalek, Z.; Kay, B. D. J. Chem. Phys. 2007, 126, 114702. 54 Anand, G.; Sharma, S.; Dutta, A. K.; Kumar, S. K.; Belfort, G. Langmuir 2010, 26, 10803– 10811. 55 Anand, G.; Zhang, F.; Linhardt, R. J.; Belfort, G. Langmuir 2011, 27, 1830–1836. 56 Cailliez, F.; Trzpit, M.; Soulard, M.; Demachy, I.; Boutin, A.; Patarin, J.; Fuchs, A. H. Phys. Chem. Chem. Phys. 2008, 10, 4817. 57 Cailliez, F.; Stirnemann, G.; Boutin, A.; Demachy, I.; Fuchs, A. H. J. Phys. Chem. C 2008, 112, 10435–10445. 58 Moliner, M.; Roman-Leshkov, Y.; Davis, M. E. Proc. Natl. Acad. Sci. 2010, 107, 6164. 59 Paranthaman, S.; Coudert, F.-X.; Fuchs, A. H. Phys. Chem. Chem. Phys. 2010, 12, 8123. 60 Maegdefrau, E.; Hoffman, U. Z. Kristallogr. Kristallgeom. Kristallphys. Kristallchem. 1937, 98, 299–323. 61 Breu, J.; Seidl, W.; Stoll, A. Z. anorg. allg. Chem. 2003, 629, 503–515. 62 Ferrage, E.; Sakharov, B. A.; Michot, L. J.; Delville, A.; Bauer, A.; Lanson, B.; Grangeon, S.; Frapper, G.; Jim´enez-Ruiz, M.; Cuello, G. J. J. Phys. Chem. C 2011, 115, 1867–1881. 18 FIG. 6: Atomic coordinates in the talc unit cell. Subscripts for oxygen differentiate tetrahedral (Td), bridging (B) and octahedral (Oh) atoms. 
[Embedded image text for FIG. 6 and FIG. 7 (the atomic coordinates of the talc and pyrophyllite unit cells, Tables 1 and 2) is not cleanly recoverable from the extraction and is omitted here.]

FIG. 7: Atomic coordinates in the pyrophyllite unit cell. Subscripts for oxygen differentiate bridging (B), tetrahedral (Td) and octahedral (Oh) atoms.

FIG. 8: (a) F(z) for the adsorption of an isolated water molecule on talc simulated using the CLAYFF and Skipper/Smith force fields. The arrow indicates the experimental value of the minimum, estimated by fitting the adsorption isotherm of Michot et al.21 to a Langmuir model in the very low RH regime (see text). (b) Pv(N) for the talc surface, using the CLAYFF and Skipper/Smith force fields.
synthetic_cpt
2
Can_Models_Help_Us_Create_Better_Models_Evaluating_LLMs_as_Data_Scientists.pdf
KAUCUS: Knowledge Augmented User Simulators for Training Language Model Assistants Kaustubh D. Dhole Department of Computer Science Emory University Atlanta, USA [email protected] 4 2 0 2 n a J 9 2 ] C H . s c [ 1 v 4 5 4 6 1 . 1 0 4 2 : v i X r a Abstract An effective multi-turn instruction-following assistant can be developed by creating a simula- tor that can generate useful interaction data. Apart from relying on its intrinsic weights, an ideal user simulator should also be able to bootstrap external knowledge rapidly in its raw form to simulate the multifarious diver- sity of text available over the internet. Previ- ous user simulators generally lacked diversity, were mostly closed domain, and necessitated rigid schema making them inefficient to rapidly scale to incorporate external knowledge. In this regard, we introduce Kaucus, a Knowledge- Augmented User Simulator framework, to out- line a process of creating diverse user simula- tors, that can seamlessly exploit external knowl- edge as well as benefit downstream assistant model training. Through two GPT-J based sim- ulators viz., a Retrieval Augmented Simula- tor and a Summary Controlled Simulator we generate diverse simulator-assistant interac- tions. Through reward and preference model- based evaluations, we find that these interac- tions serve as useful training data and create more helpful downstream assistants. We also find that incorporating knowledge through re- trieval augmentation or summary control helps create better assistants. 1 Introduction Significant advancements in Large Language Mod- els (LLMs) have made them exceptionally adept in conversational applications like virtual assis- tants (Touvron et al., 2023; FitzGerald et al., 2022; OpenAI, 2023; Team et al., 2023). This proficiency is largely attributed to the notably parallelizable transformer architecture (Vaswani et al., 2017) en- abling these models to utilize extensive pre-training datasets effectively (Raffel et al., 2019; Computer, 2023). To create effective assistants, LLMs are then further enhanced by learning from human interactions including popular paradigms such as RLHF (Böhm et al., 2019; Ziegler et al., 2019; Ouyang et al., 2022a). Such conversational human alignment of assistants requires large amounts of interactive dialog data, both for training as well as testing. However, interactive data collection is a manual and slow process, particularly (a) for covering a wide range of user behaviors as well as (b) for diverse adversarial and behavior testing. These challenges can be mitigated by simulating user behaviors by automating the generation of in- teractive data, reducing both time and cost, while maintaining control over the interactions. Simu- lated interactions can be executed at a much faster pace than manual collection efforts, limited only by the speed of inference. Yet, current user simulators lack diversity, are mostly closed domain, and require rigid schema for control or conversation grounding. The necessity of intermediate schema in the form of a knowledge base (Kim et al., 2023) or handcrafted rules (like user persona or specific behaviors) while being excellent drivers to ground conversations, make it hard to develop scalable simulators – that can utilize natural text freely available on the internet and rapidly create corresponding assistant mod- els. A simulator should be able to exploit external knowledge rapidly and also be controllable with- out a rigid schema. 
We argue that such a knowl- edge simulator can be helpful in two ways – It can seamlessly convert free-form text to useful training data without user intervention as well as provide a natural control to direct simulators for specific behaviors (Mille et al., 2021; Cheng et al., 2023). in this work, we propose Kaucus, a Knowledge Augmented Simulator Framework1. Through this framework, we demonstrate the usage of external sources of knowledge – viz. Retrieval Augmentation and Summary Control – for creating Hence, 1pronounced like Caucus derived from Algonquian cau’- cau’-as’u meaning ‘adviser’ Figure 1: The complete three step framework of Kaucus – creating, utilizing and evaluating a user simulator. user simulators that can incorporate free-flowing text and result in better assistant training. The paper is organized as follows: In Section 2, we first discuss existing work related to user sim- ulators. In Section 3, we define simulators and introduce Kaucus, through two knowledge simu- lators. We further describe the efficacy of each through training and evaluating downstream as- sistant models. Our retrieval augmented simula- tor, SRAG shows how retrieving relevant passages with a simple BM25 retriever can be used to im- prove intrinsic metrics as well as provide useful training data to train helpful assistants. We also introduce the summary-controlled setting, SCTRL to build scalable simulators to exploit freely avail- able text and further measure their performance with and without retrieval. 2 Related Work User simulators have been studied in various set- tings. Aher et al. (2023) create four simulators that elicit behavior to judge an assistant’s fairness, ratio- nality, grammaticality, and general knowledge, and then measure them qualitatively. Their simulators are models with different prompt templates. Train- ing multi-agent interactions has been a popular choice in reinforcement learning. Horton (2023); Argyle et al. (2023) create simulations for eco- nomic purposes by endowing GPT3 (Brown et al., 2020) with demographic characteristics and then get responses in various scenarios that match what is seen empirically. Irving et al. (2018) in AI safety has proposed using self-play and self-debate to train AI agents to pursue human goals and prefer- ences. Two tasks in the collaborative benchmark, BIG-Bench (Srivastava et al., 2023) evaluate the model’s ability for self-evaluation by simulating specific human professions. They make the models to act as lawyers, tutors2, judges3, students, etc. and then have separate model instances to evaluate the conversation. Each of the roles is invoked by user-specific prompts like “You are a lawyer” and a subsequent model-based evaluation is performed by prompting to seek numerical ratings. Kreyssig et al. (2018)’s Neural User Simula- tions involve training encoder-decoder RNNs on dialogues between real users and a spoken dialogue system (SDS) in a restaurant domain and then us- ing the trained simulator to train the policy of a reinforcement learning based SDS. They further use Schatzmann et al. (2005)’s cross-model evalua- tion to compare user simulators by training differ- ent policies with each simulator and testing it with other simulators. Gur et al. (2018) encode dialog history and a goal to generate user responses for task-oriented dialog. Kraus et al. (2023a); Li et al. 
2BIG-Bench Self Evaluation Tutoring 3BIG-Bench Self Evaluation Courtroom CollectandAugmentDemonstrationDataCollectHuman-MachineDemonstrationDataUseExternalKnowledgetoAugmenttheDataGenerateDemonstrationSummariesFromExternalSummariserCreateaSimulatorandGenerateInteractionDataUsetheCollectedDemonstrationDatatoTrainaSimulatorUsefreeflowingtexttointeractwithapartnerassistantmodelandcreatesimulatedinteractionsTrainanAssistantandevaluateusingarewardmodelUsetheInteractionstoTrainaDownstreamAssistantGenerateInteractionsbytalkingtoanysimulatorUsearewardmodeltrainedonHumanPreferencestoEvaluate (2022b) prompt LLMs with task-oriented dialog data, such as goals, and perform intrinsic evaluation over the generated data to show the effectiveness of their approaches. Kim et al. (2023) generate conver- sations grounded on common sense by prompting InstructGPT with knowledge base triples. Their human evaluations show that oftentimes humans prefer model outputs against their human-written counterparts. Liu et al. (2023) leverage multiple user simulators to train task-oriented dialog sys- tems. Faltings et al. (2023) utilize user simulators that offer edits to guide the model towards achiev- ing a specified target text training them using Imi- tation Learning. Other studies augment simulators with emo- tions (Lin et al., 2023) and trusting be- haviours (Kraus et al., 2023b). For instance, Lin et al. (2023) simulate user emotions alongside user behavior based on the user goal, the dia- logue history, and persona. Giabbanelli (2023) uti- lize GPT-based models for scientific simulations while Schaefer et al. (2023) explore LLMs to simu- late biological systems. With the popularity of large language mod- els deployed in closed-source settings, boot- strapping training data from them has become useful. Taori et al. (2023) create downstream assistant models by training LLama-7B and 13B models (Touvron et al., 2023) on 52K single-turn instruction following demonstrations generated through self-instruct (Wang et al., 2023b) from text-davinci-003 (Brown et al., 2020). Bian et al. (2023) create a dialog corpus by extending the same to the multi-turn setting. Dai et al. (2022) show improved conversation retrieval by proposing a mechanism to convert Wikipedia passages to dialog. On the other hand, retrieval augmentation has been the focus of many recent efforts (Schick et al., 2023; Zhang et al., 2023; Wang et al., 2023a; Li et al., 2022a) as it offers advantages such as up-to- date information access beyond an LLM’s training dataset, incorporation of proprietary or domain- specific data at runtime, and enhanced factuality in outputs compared to standard LLMs. Studies have been performed by training RAG systems end-to- end (Guu et al., 2020; Lewis et al., 2020) as well as using retrieval in context for various tasks (Ram et al., 2023; Jiang et al., 2023; Gao et al., 2023; Dhole and Agichtein, 2024). 3 The Kaucus Framework In this section, we introduce Kaucus, a 3-stage framework, and outline the process of creating knowledge-augmented simulators as shown in Fig- ure 1. Our approach involves the following steps: 3.1 Data Collection and Augmentation We start by gathering interaction data – essentially conversations between a user and a base assistant LLM, which will be later augmented to enrich the training process. For instance, the base LLM could take the form of closed-source instruct models such as OpenAI’s GPT-4, Claude, or BingChat which are widely used for work. 
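As a minimal illustration (and not the exact preprocessing code used here), the sketch below shows how a collected demonstration can later be flattened into (context, human-response) pairs for simulator training, anticipating the format detailed in Section 4.1; the example conversation is hypothetical.

```python
def simulator_training_pairs(conversation):
    """conversation: list of (speaker, utterance), speaker in {"Human", "Assistant"}.
    Returns (context, human-response) pairs: the full prior history plus a
    trailing "Human:" trigger as input, and the next human utterance as output."""
    pairs = []
    for i, (speaker, utterance) in enumerate(conversation):
        if speaker != "Human":
            continue
        history = "".join(f"{s}: {u}\n\n" for s, u in conversation[:i])
        pairs.append({"input": history + "Human:", "output": " " + utterance})
    return pairs

# Hypothetical demonstration, loosely mirroring the Figure 2 example.
demo = [
    ("Human", "Can you write a short introduction to the term 'monopsony'?"),
    ("Assistant", "A monopsony is a market structure with only one buyer..."),
    ("Human", "What can be done at a regulatory level to limit that power?"),
]
print(len(simulator_training_pairs(demo)))   # 2 training pairs from this demonstration
```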
3.2 Training a Language Model (LM) as a Simulator The next step involves training a Language Model (LM) to act as a simulator. This LM can then serve as a conversation generator for data augmen- tation (Dhole et al., 2023) or be integrated into a pipeline that relies on conversation interactions, such as Reinforcement Learning from Human Feed- back (RLHF) (Ziegler et al., 2019; Ouyang et al., 2022b). Our work focuses on the former. 3.3 Leveraging the User Simulator Once the user simulator is trained, there are sev- eral methods to utilize for improving an assistant Language Model (LM). Our work resorts to data augmentation, which will be the focus of our sec- ond set of experiments. It involves using the user simulator to generate additional training data to enhance the assistant LM’s performance. 3.4 Evaluation To evaluate the effectiveness of the user simulator, we will employ both intrinsic and extrinsic metrics. Intrinsic metrics will be measured over the interac- tions with the simulator, assessing its performance in generating relevant and coherent responses. On the other hand, extrinsic metrics will be based on evaluating a downstream assistant model trained over these interactions, which will help us gauge the impact of the user simulator on overall assistant performance. We will describe the evaluation in detail in Section 5. 4 Methods and Experiments We now specifically describe the two types of knowledge-augmented simulators, viz. Utterance Figure 2: The format of the conversations used for training S1 (a vanilla simulator), SRAG (retrieved document shown in green), and SCTRL (summary shown in red). Grounded Simulators (S1 and SRAG) and Sum- mary Controlled Simulators (SCTRL). associated human response is passed as the output. 4.1 Utterance Grounded Simulators Here we train simulators with human-machine demonstration data by feeding models the conver- sation history to create simulators that can be trig- gered from a starting utterance. We create two sim- ulators – S1 and SRAG by fine-tuning an unsuper- vised pre-trained GPJ-6B (Wang and Komatsuzaki, 2021) model. We describe the training process for both below: 4.1.1 S1 Simulator Trained on Anthropic and Open Assis- tant Conversations • Training Data: For training S1, we use demonstration data available through Open Assistant’s conversations (Köpf et al., 2023) and Anthropic’s helpful splits (Bai et al., 2022). • Format: The Simulator’s training data con- sists of (context, human-response) pairs. For every “Human” utterance in all the conver- sations, we select all the previous utterances along with their speaker information and pass it as an input to the model. The input also consists of a “Human:” string at the end. The 4.1.2 SRAG Retrieval Augmented Simulator Trained on An- thropic and Open Assistant Conversations with BM25 Retrieval on MSMarco Simulators could benefit from the incorporation of external knowledge which can be helpful to steer the conversation, improve factuality and most im- portantly introduce variation. To test our hypoth- esis, we train the second simulator, SRAG by in- corporating passages retrieved from an external retriever. • Training Data: We augment the interactions used to train S1 with passage snippets from the MS-Marco dataset (Nguyen et al., 2016; Bajaj et al., 2016), which is a large-scale dataset of 8.8M passages popularly used for information retrieval and reading comprehen- sion. 
Having been generated from real users’ search queries, it provides a vast repository of documents collected on a plethora of topics over the web. • Format: We use (context, human-response) pairs in the style of S1 with human turns annotated with retrieved Using the human MSMarco passages. Human:```Amonopolyisdistinguishedfromamonopsony,inwhichthereisonlyonebuyerofaproductorservice;amonopolymayalsohavemonopsonycontrolofasectorofamarket.```Canyouwriteashortintroductionabouttherelevanceoftheterm"monopsony"ineconomics?PleaseuseexamplesrelatedtopotentialmonopsoniesinthelabourmarketandciterelevantresearchAssistant:Monopsonyreferstoamarketstructurewherethereisonlyonebuyerofagoodorservice….Human:```Thetermmayalsobeusedtodescribetherestrictionofcertainaccessrightswhenanemployeehaschangedroleswithintheorganization.Onboardingandoffboardingproceduresmustbedocumentedtoensurecompliancewithregulatoryrequirements.```Whatcanbedoneataregulatoryleveltoensure,inamonopsony,thatthepower….",S1Human:Canyouwriteashortintroductionabouttherelevanceoftheterm"monopsony"ineconomics?PleaseuseexamplesrelatedtopotentialmonopsoniesinthelabourmarketandciterelevantresearchAssistant:Monopsonyreferstoamarketstructurewherethereisonlyonebuyerofagoodorservice.Inthecontextoflabormarkets,amonopsonyoccurswhenthereisonlyoneemployerinaparticulargeographicareaorindustry,givingthemsignificantpowertosetwagesandworkingconditions.……Human:Whatcanbedoneataregulatoryleveltoensure,inamonopsony,thatthepowerovertheemployeesisn'tabused?Listafewoptionsandfocusontheorgansthatshouldact.Assistant:Yes,that'scorrect.Keeping….SRAGSCTRLSummary:"Monopsony"ineconomicsisamarketstructurewherethereisonlyonebuyerofagoodorservice.Thepowerovertheemployeesisn'tabusedinamonopsony.Human:Canyouwriteashortintroductionabouttherelevanceoftheterm"monopsony"ineconomics?Please…Assistant:Monopsonyreferstoamarketstructurew…,whenthereisonlyoneemployerinaparticulargeographicareaorindustry,givingthemsignificantpowertosetwagesandworkingconditions.……Human:Whatcanbedoneataregulatoryleveltoensure,inamonopsony,thatthepowerovertheemployeesisn'tabused?…… utterance as a query, we execute a BM25 retriever against an MSMarco Passage Index for every human turn. Each of the human utterances is then prepended with a retrieved passage as shown in Figure 2 in green. We use the MSMarco index provided by IRDatasets (MacAvaney et al., 2021) and the BM25 implementation provided by PyTerrier (Macdonald et al., 2021). 4.2 Summary Controlled Simulators The previous utterance-grounded setting relies on a conversational utterance at inference time to initiate the interaction. While it can be easy to obtain such conversational utterances using existing conversa- tional datasets, they can quickly become scarce and out-of-date. It would be of interest to be able to scale over vast amounts of free text available over the web. However, most of the web data exists in a non-conversational format unsuitable for direct incorporation in the training process. SCTRL: In that regard, we introduce the training of summary controlled simulators that can uti- lize the conversational summary obtained from an external conversation summarizer during training. This can be potentially helpful in two ways – It can provide a mechanism for the simulator to attempt to seamlessly convert “free form text” to “interac- tion data” while also coming up with the “simulator trigger” by itself reducing our reliance on conversa- tional corpora. 
As compared to a fixed schema or a knowledge base, it can provide a natural control to guide simulators for specific behaviors via nat- ural language texts which are generally available in plenty as compared to their conversational or interactive counterparts. • Training Data: To create the training data, we append a conversational summary gener- ated from an external conversational summa- rizer, at the beginning of the conversation. Our objective is to force the simulator to be able to learn the association between the initial non-conversational text and the subsequent conversation. We choose an existing BART Summariser (Wolf et al., 2020) fine-tuned on various dialog and non-dialog summarisation datasets like DialogSum, AMI and XSUM. We create the two summary-controlled counterparts of S1 and SRAG as SCTRL and SCTRL-RAG re- spectively. SCTRL-RAG Summary Controlled Simulator Trained on Anthropic and Open Assistant Conver- sations with MSMarco BM25 Retrieval We use a GPT-J-6B model RLHF fine-tuned on demonstration data as our base assistant model and our simulator. We use deepspeed (Rasley et al., 2020) to optimize training and train for 10 epochs on a learning rate of 10−6. 5 Evaluation & Results 5.1 Intrinsic Metrics We first seek to assess the “diversity” of the gener- ated interactions. In assessing diversity, we utilize well-established reference-free lexical metrics viz. TTR, logTTR, RootTTR, HDD, and MTLD are based on type-token ratios and are quick to com- pute. The Measure of Textual Lexical Diversity (MTLD) is a prevalent and contemporary TTR met- ric that does not vary as a function of text length and explains textual information that similar lexical diversity approaches do not account for (McCarthy and Jarvis, 2010). It gauges the proportion of dis- tinct word stems (types) to the overall word count (tokens). HDD is an alternative metric that captures additionally unique lexical information (McCarthy and Jarvis, 2010)4. We first generate 125 interactions by making each of the simulators interact with a fixed assis- tant model. The conversation is initialized with an existing Anthropic conversation in the case of S1 and SRAG and five more turns are gener- ated (referred to as the augmented length). In SC- TRL and SCTRL-RAG, 5 turns are generated from scratch from Anthropic’s conversation summary. We present the results in Table 2. The metrics mea- sure the lexical diversity only on the utterances generated via the simulator interaction (and not on the initial Anthropic conversation history that was fed to initiate the interaction). Across all metrics, incorporating a knowledge component, through re- trieval augmentation (SRAG) or summary control (SCTRL) improves diversity. Incorporating both improves diversity across RootTTR and HDD met- rics. • Format: We prepend the predicted summary at the start of the conversation as shown in Figure 2 in red. 4Through a separate ancillary study, we also find that sim- ulators trained on dialog data generate more diverse text as compared to pre-trained ones according to the above metrics. 
Source Human S1 SRAG S1-CTRL S1-CTRL-RAG Both Type – Without Knowledge With Retrieval Augmentation Simulated Anthropic_8k + MSMarco With Summary Control Generated Interaction Data Assistant Trained With Anthropic_8k Simulated Anthropic_8k Simulated Anthropic_8k*10 summaries Simulated Anthropic_8k*10 summaries + MSMarco A1-CTRL-RAG Assistant A0 A1 A1-RAG A1-CTRL Table 1: The sources of various simulated data used in Kaucus to train the corresponding assistants MTLD Root TTR LogTTR HDD Simulator S1 0.04 23.177 0.134 24.632 SRAG 0.131 SCTRL 25.864 SCTRL-RAG 22.761 0.278 0.818 0.82 0.844 0.766 2.918 3.223 3.437 2.976 Table 2: Lexical diversity metrics on 125 conversations of each simulator. The top-2 highly diverse simulators are the knowledge-based ones - SRAG and SCTRL on all metrics. 5.2 Extrinsic Metrics Although the aforementioned metrics can assist in evaluating and comparing various user simulators as potential data augmenters and generators, it is crucial to determine if they benefit subsequent as- sistant models. The RLHF paradigm, by training reward models, has demonstrated assistants that are more helpful, honest, and less harmful providing a promising direction for aligning with human pref- erences. In this regard, we resort to the family of reward and preference models to measure how well assistant models trained using data produced from various simulators perform. Training Downstream Assistant Models: For each simulator trained (S1, SRAG,..), we create a subsequent assistant model (A1, ARAG, ...) and use reward modeling to measure the helpfulness of each of the assistant models. To create training data for each of the assistant models, we first simulate interactions between the corresponding simulator model along a separately held-out assistant model. For each utterance grounded simulator (S1 and SRAG), we use 8000 Anthropic conversations as triggers. Particularly, we utilize the com- plete Anthropic conversation as the starting his- tory for both the simulator and the separately held-out assistant model and allow ten turns (5 pairs) of interactions to be generated. Using the simulator to generate longer contexts pro- vides an opportunity to collect a larger number of (context, assistant-response) pairs for training the downstream assistant model. For the retrieval augmented simulator, SRAG, it is necessary to retrieve passages relevant to the ongoing conversation. We hence use the previous assistant response as a query to our MSMarco Pas- sage Index before generating the simulator turn. The top-ranked passage via BM25 is then placed at the end of the input to SRAG. For generating interactions from SCTRL, we need free-flowing text as the initial trigger. We gen- erate 8000 conversations from conversation sum- maries of the Anthropic dataset. We use additional 9*8K passages from MS-Marco as initial triggers to act as implicit summaries. After generating the conversations, we con- vert them into (context, assistant-response) pairs and use them as training data for predicting the assistant response given all the previous utter- ances. We call the subsequent assistant models A1, ARAG and ACTRL. The training details of each assistant model are described in Table 1. Baseline: We additionally train an assistant model, A0 using raw 8000 conversations from An- thropic to act as appropriate baseline. Test Set: For evaluation, we utilize 200 utter- ances from the test set of Anthropic’s dataset. 
FastChat Evaluation: FastChat (Zheng et al., 2023) is a platform for evaluating and serving LLMs. We resort to FastChat evaluation for prompting GPT-4 (OpenAI, 2023) for a compar- ative evaluation between two simulators. The pro- cess involves GPT-4 being input with two conver- sations, placed one after the other, along with an instruction to evaluate and generate a numerical score. We attribute a win, a loss, or a tie depending on whether the first (assistant model on the left in all the images) has a value greater, lesser, or equal to the second (one on the right). SteamSHP Reward Model Evaluation SteamSHP-XL (Ethayarajh et al., 2022) is a preference model fine-tuned on top of an instruction-tuned model FLAN-T5-XL (Wei et al.; Longpre et al., 2023) to predict which response humans will find more helpful, given some context and two possible responses. On the reward being prompted the same context, model setting compares the probabilities assigned independently to each model response to infer the preference label. SteamSHP Preference Model Evalua- tion (Ethayarajh et al., 2022) Preference modeling, like the FastChat Evaluation, compares two model responses through a single inference pass, which can be used to compute the probability of the first one being better than the second. To avoid any bias occurring through the order of two conversations, we also calculate the scores with the simulator order reversed in the prompt. For each plot, the columns indicate the two as- sistant models being compared. The colors in blue for each row indicate when the evaluation system prefers the left-hand side model as compared to the right-hand side when compared against A0. Figure 3: FastChat Evaluation of Assistants created from Utterance Grounded Simulators (A1 and ARAG) against baseline assistant (A0) Effect of Simulator: We first compare A1 (i.e. the assistant trained on 8k interactions generated from S1) against A0 (i.e. the one trained without Figure 4: FastChat Evaluation of Assistants created from Summary Controlled Simulators (-CTRL) against baseline assistant (A0) Figure 5: SteamSHP reward model Evaluation of Assis- tants created from Utterance Grounded against baseline assistant (A0) Figure 6: SteamSHP reward model Evaluation of As- sistants created from Summary Controlled Simulators (-CTRL) against baseline assistant (A0) the help of the simulator). A1 outperforms A0 in all three evaluations as seen on the first rows of Figures 3, 5 and 7. The results are more prominent in SteamSHP’s evaluations. This shows that with the help of a simulator, we can generate more data and improve downstream assistant performance. Effect of Retrieval Augmentation: We then compare whether an assistant model ARAG, trained from retrieval augmented data benefits train- ing. With the retrieval augmented simulator, down- stream performance across all metrics is improved. ARAG’s interactions are preferred more often as compared to A0 as well as A1 as seen in the 2nd and 3rd rows of Figures 3, 5 and 7. Effect of Summary Control: The assistants AC- TRL and ACTRL-RAG trained from the summary- controlled simulators are more often preferred across all the evaluations as shown in the first two rows of Figures 4, 6 and 8. However, the non- retrieval counterpart ACTRL is more often pre- ferred as compared to the retrieval counterpart. 
Wins, Losses and Ties on Comparative Evaluation (LHS = Blue)A1 vs A0ARAG vs A0ARAG vs A10%25%50%75%100%WinsLossesTiesFastChat Evaluation (Utterance Grounded)ACTRL vs A0ACTRL-RAG vs A0ACTRL-RAG vs ACTRL050100150200WinsLossesTiesFastChat Comparisons (Summary Controlled)A1 vs A0ARAG vs A0ARAG vs A10%25%50%75%100%WinsLossesTiesSteamSHP Reward Model Comparisons (Utterance Grounded)ACTRL vs A0ACTRL-RAG vs A0ACTRL-RAG vs ACTRL0%25%50%75%100%WinsLossesTiesSteamSHP Reward Modelling Comparisons (Summary Controlled) hope Kaucus will help encourage the development of automated techniques to be able to incorporate the vast amount of text produced rapidly over the in- ternet and align assistant models better with newer data as well as be able to control the distribution of training data without the need for a rigid schema. Limitations Retrieval Augmentation helps incorporate diver- sity as well as benefit downstream models. We chose to use BM25 as our choice of retriever. How- ever, there are dense retrievers (Khattab and Za- haria, 2020) and neural rerankers (Pradeep et al., 2023) that perform better than BM25 across a range of information retrieval benchmarks. Our focus was to show the benefit of incorporating external knowledge while performing a rigorous set of ex- periments with the same. Future studies could specifically study the impact of additional hyper- parameter tuning by using varied choices of the re- triever, the retrieving query, choice of summarisers and also gauge the impact of different domains than those of the Anthropic and the MSMarco datasets. Besides, our study does not consider the im- pact of prolonged training on generated data which could cause potential problems of model forget- ting over the long run (Shumailov et al., 2023). More experiments conducted to gauge long-term viability would shed better light on the efficacy of knowledge simulators. All the evaluations conducted in this paper were automated – through popular reward or preference models. Human evaluations can provide better additional insights. Besides, the current intrinsic metrics primarily focus on diversity, which, while important, is only one dimension of dialogue eval- uation and future work would benefit from other measures depending on the application. Ethics Statement Our study has focused on the benefits of employing simulators to improve downstream assistant mod- els. We believe that these simulators can also act as effective testers of assistants to pre-encounter and regurgitate harmful or undesirable assistant content before assistant models are deployed in impact- ing end applications. We should maintain caution against their unethical usage or if such regurgitation is exploited to cause harm. Just like assistants or other applications of large language models (Dhole, 2023), simulators should also be gauged from a Figure 7: SteamSHP Preference model Evaluation of Assistants created from Utterance Grounded Simulators against baseline assistant (A0) Figure 8: SteamSHP Preference model Evaluation of Assistants created from Summary Controlled (-CTRL) against baseline assistant (A0) 6 Conclusion Simulators provide a way to generate data to cre- ate downstream assistant models saving human time and effort. Through our framework Kau- cus, we further showed that augmenting simulators by exploiting external knowledge helps generate diverse interactions and as well as creates more helpful assistants than vanilla simulators. 
We de- scribe two types of knowledge-augmented simu- lators, a Retrieval Augmented Simulator, SRAG, and a summary-controlled simulator, SCTRL both of which consume external knowledge in unique ways. Raw text is more prevalent than the conversa- tional counterparts. Controlling simulators through conversational summaries or external documents can be a quick and powerful tool to convert public text to trainable interaction data and create more helpful assistants. It provisions the simulator to generate interactions for novel information outside the scope of an LLM’s intrinsic parameters. We A1 vs A0ARAG vs A0ARAG vs A10%25%50%75%100%WinsLossesSteamSHP Preference Model Evaluation (Utterance Grounded)ACTRL vs A0ACTRL-RAG vs A0ACTRL-RAG vs ACTRL0%25%50%75%100%WinsLossesSteamSHP Preference Model Evaluation (Summary Contolled) socio-technical lens, and appropriate checks and fallback mechanisms should be employed before their actual usage. Besides, simulators themselves could inadvertently learn biases in the training data, leading to unfair or biased generations, and can be exploited for malicious purposes such as gen- erating fake news and harmful content or asking triggering questions. Acknowledgements The author would like to thank the three anony- mous reviewers for their useful suggestions. References Gati V Aher, Rosa I Arriaga, and Adam Tauman Kalai. 2023. Using large language models to simulate mul- tiple humans and replicate human subject studies. In International Conference on Machine Learning, pages 337–371. PMLR. Lisa P Argyle, Ethan C Busby, Nancy Fulda, Joshua R Gubler, Christopher Rytting, and David Wingate. 2023. Out of one, many: Using language mod- els to simulate human samples. Political Analysis, 31(3):337–351. Yuntao Bai, Andy Jones, Kamal Ndousse, Amanda Askell, Anna Chen, Nova DasSarma, Dawn Drain, Stanislav Fort, Deep Ganguli, Tom Henighan, et al. 2022. Training a helpful and harmless assistant with reinforcement learning from human feedback. arXiv preprint arXiv:2204.05862. Payal Bajaj, Daniel Campos, Nick Craswell, Li Deng, Jianfeng Gao, Xiaodong Liu, Rangan Majumder, Andrew McNamara, Bhaskar Mitra, Tri Nguyen, et al. 2016. Ms marco: A human generated ma- chine reading comprehension dataset. arXiv preprint arXiv:1611.09268. Ning Bian, Hongyu Lin, Yaojie Lu, Xianpei Han, Le Sun, and Ben He. 2023. Chatalpaca: A multi- turn dialogue corpus based on alpaca instructions. https://github.com/cascip/ChatAlpaca. Florian Böhm, Yang Gao, Christian M. Meyer, Ori Shapira, Ido Dagan, and Iryna Gurevych. 2019. Bet- ter rewards yield better summaries: Learning to sum- In Proceedings of the marise without references. 2019 Conference on Empirical Methods in Natu- ral Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3110–3120, Hong Kong, China. Association for Computational Linguistics. Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. Advances in neural information processing systems, 33:1877–1901. Myra Cheng, Tiziano Piccardi, and Diyi Yang. 2023. CoMPosT: Characterizing and evaluating caricature In Proceedings of the 2023 in LLM simulations. Conference on Empirical Methods in Natural Lan- guage Processing, pages 10853–10875, Singapore. Association for Computational Linguistics. Together Computer. 2023. 
Redpajama: An open source recipe to reproduce llama training dataset. Zhuyun Dai, Arun Tejasvi Chaganty, Vincent Zhao, Aida Amini, Mike Green, Qazi Rashid, and Kelvin Guu. 2022. Dialog inpainting: Turning documents to dialogs. In International Conference on Machine Learning (ICML). PMLR. Kaustubh Dhole. 2023. Large language models as So- cioTechnical systems. In Proceedings of the Big Pic- ture Workshop, pages 66–79, Singapore, Singapore. Association for Computational Linguistics. Kaustubh Dhole and Eugene Agichtein. 2024. Gen- qrensemble : Zero-shot llm ensemble prompting for generative query reformulation. In Advances in In- formation Retrieval. Kaustubh Dhole, Varun Gangal, Sebastian Gehrmann, Aadesh Gupta, Zhenhao Li, Saad Mahamood, Abi- naya Mahadiran, Simon Mille, Ashish Shrivastava, Samson Tan, et al. 2023. Nl-augmenter: A frame- work for task-sensitive natural language augmenta- tion. Northern European Journal of Language Tech- nology, 9(1). Kawin Ethayarajh, Yejin Choi, and Swabha Swayamdipta. 2022. Understanding dataset difficulty with V-usable information. In Proceedings of the 39th International Conference on Machine Learning, volume 162 of Proceedings of Machine Learning Research, pages 5988–6008. PMLR. Felix Faltings, Michel Galley, Kianté Brantley, Baolin Peng, Weixin Cai, Yizhe Zhang, Jianfeng Gao, and Bill Dolan. 2023. Interactive text generation. In Pro- ceedings of the 2023 Conference on Empirical Meth- ods in Natural Language Processing, pages 4450– 4468, Singapore. Association for Computational Lin- guistics. Jack FitzGerald, Shankar Ananthakrishnan, Konstan- tine Arkoudas, Davide Bernardi, Abhishek Bha- gia, Claudio Delli Bovi, Jin Cao, Rakesh Chada, Amit Chauhan, Luoxin Chen, Anurag Dwarakanath, Satyam Dwivedi, Turan Gojayev, Karthik Gopalakr- ishnan, Thomas Gueudre, Dilek Hakkani-Tur, Wael Hamza, Jonathan J. Hüser, Kevin Martin Jose, Haidar Khan, Beiye Liu, Jianhua Lu, Alessandro Manzotti, Pradeep Natarajan, Karolina Owczarzak, Gokmen Oz, Enrico Palumbo, Charith Peris, Chandana Satya Prakash, Stephen Rawls, Andy Rosenbaum, Anjali Shenoy, Saleh Soltan, Mukund Harakere Sridhar, Lizhen Tan, Fabian Triefenbach, Pan Wei, Haiyang Yu, Shuai Zheng, Gokhan Tur, and Prem Natarajan. 2022. Alexa teacher model: Pretraining and distill- ing multi-billion-parameter encoders for natural lan- guage understanding systems. In Proceedings of the 28th ACM SIGKDD Conference on Knowledge Dis- covery and Data Mining, KDD ’22, page 2893–2902, New York, NY, USA. Association for Computing Machinery. Luyu Gao, Zhuyun Dai, Panupong Pasupat, Anthony Chen, Arun Tejasvi Chaganty, Yicheng Fan, Vincent Zhao, Ni Lao, Hongrae Lee, Da-Cheng Juan, et al. 2023. Rarr: Researching and revising what language models say, using language models. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 16477–16508. Philippe J Giabbanelli. 2023. Gpt-based models meet simulation: How to efficiently use large-scale pre- trained language models across simulation tasks. arXiv preprint arXiv:2306.13679. Izzeddin Gur, Semih Yavuz, Yu Su, and Xifeng Yan. 2018. DialSQL: Dialogue based structured query generation. In Proceedings of the 56th Annual Meet- ing of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1339–1349, Mel- bourne, Australia. Association for Computational Linguistics. Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasu- pat, and Ming-Wei Chang. 2020. Realm: Retrieval- augmented language model pre-training. 
ArXiv, abs/2002.08909. John J Horton. 2023. Large language models as sim- ulated economic agents: What can we learn from homo silicus? Technical report, National Bureau of Economic Research. Geoffrey Irving, Paul Christiano, and Dario Amodei. arXiv preprint 2018. Ai safety via debate. arXiv:1805.00899. Zhengbao Jiang, Frank Xu, Luyu Gao, Zhiqing Sun, Qian Liu, Jane Dwivedi-Yu, Yiming Yang, Jamie Callan, and Graham Neubig. 2023. Active retrieval augmented generation. In Proceedings of the 2023 Conference on Empirical Methods in Natural Lan- guage Processing, pages 7969–7992, Singapore. As- sociation for Computational Linguistics. Omar Khattab and Matei Zaharia. 2020. Colbert: Effi- cient and effective passage search via contextualized late interaction over bert. In Proceedings of the 43rd International ACM SIGIR conference on research and development in Information Retrieval, pages 39– 48. Hyunwoo Kim, Jack Hessel, Liwei Jiang, Peter West, Ximing Lu, Youngjae Yu, Pei Zhou, Ronan Le Bras, Malihe Alikhani, Gunhee Kim, Maarten Sap, and Yejin Choi. 2023. Soda: Million-scale dialogue dis- tillation with social commonsense contextualization. Andreas Köpf, Yannic Kilcher, Dimitri von Rütte, Sotiris Anagnostidis, Zhi Rui Tam, Keith Stevens, Abdullah Barhoum, Duc Minh Nguyen, Oliver Stanley, Richárd Nagyfi, Shahul ES, Sameer Suri, David Alexandrovich Glushkov, Arnav Varma Dan- tuluri, Andrew Maguire, Christoph Schuhmann, Huu Nguyen, and Alexander Julian Mattick. 2023. Ope- nassistant conversations - democratizing large lan- In Thirty-seventh Con- guage model alignment. ference on Neural Information Processing Systems Datasets and Benchmarks Track. Matthias Kraus, Ron Riekenbrauck, and Wolfgang Minker. 2023a. Development of a trust-aware user simulator for statistical proactive dialog modeling in human-ai teams. In Adjunct Proceedings of the 31st ACM Conference on User Modeling, Adaptation and Personalization, pages 38–43. Matthias Kraus, Ron Riekenbrauck, and Wolfgang Minker. 2023b. Development of a trust-aware user simulator for statistical proactive dialog modeling in human-ai teams. In Adjunct Proceedings of the 31st ACM Conference on User Modeling, Adaptation and Personalization, pages 38–43. Florian Kreyssig, Iñigo Casanueva, Paweł Budzianowski, and Milica Gaši´c. 2018. Neural user simulation for corpus-based policy optimisation of spoken dialogue systems. In Proceedings of the 19th Annual SIGdial Meeting on Discourse and Dialogue, pages 60–69, Melbourne, Australia. Association for Computational Linguistics. Patrick Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Hein- rich Küttler, Mike Lewis, Wen-tau Yih, Tim Rock- täschel, et al. 2020. Retrieval-augmented generation for knowledge-intensive nlp tasks. Advances in Neu- ral Information Processing Systems, 33:9459–9474. Huayang Li, Yixuan Su, Deng Cai, Yan Wang, and Lemao Liu. 2022a. A survey on retrieval-augmented text generation. arXiv preprint arXiv:2202.01110. Zekun Li, Wenhu Chen, Shiyang Li, Hong Wang, Jing Qian, and Xifeng Yan. 2022b. Controllable dialogue In Findings simulation with in-context learning. of the Association for Computational Linguistics: EMNLP 2022, pages 4330–4347, Abu Dhabi, United Arab Emirates. Association for Computational Lin- guistics. Hsien-Chin Lin, Shutong Feng, Christian Geishauser, Nurul Lubis, Carel van Niekerk, Michael Heck, Ben- jamin Ruppik, Renato Vukovic, and Milica Gasi´c. 2023. Emous: Simulating user emotions in task- oriented dialogues. 
SIGIR ’23, page 2526–2531, New York, NY, USA. Association for Computing Machinery. Yajiao Liu, Xin Jiang, Yichun Yin, Yasheng Wang, Fei Mi, Qun Liu, Xiang Wan, and Benyou Wang. 2023. One cannot stand for everyone! leveraging multi- ple user simulators to train task-oriented dialogue systems. In Proceedings of the 61st Annual Meet- ing of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1–21. Shayne Longpre, Le Hou, Tu Vu, Albert Webson, Hyung Won Chung, Yi Tay, Denny Zhou, Quoc V. Le, Barret Zoph, Jason Wei, and Adam Roberts. 2023. The flan collection: Designing data and methods for effective instruction tuning. Sean MacAvaney, Andrew Yates, Sergey Feldman, Doug Downey, Arman Cohan, and Nazli Goharian. 2021. Simplified data wrangling with ir_datasets. In Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Infor- mation Retrieval, pages 2429–2436. Craig Macdonald, Nicola Tonellotto, Sean MacAvaney, and Iadh Ounis. 2021. Pyterrier: Declarative exper- imentation in python from bm25 to dense retrieval. In Proceedings of the 30th acm international con- ference on information & knowledge management, pages 4526–4533. Philip M McCarthy and Scott Jarvis. 2010. Mtld, vocd- d, and hd-d: A validation study of sophisticated ap- proaches to lexical diversity assessment. Behavior research methods, 42(2):381–392. Simon Mille, Kaustubh Dhole, Saad Mahamood, Laura Perez-Beltrachini, Varun Gangal, Mihir Kale, Emiel van Miltenburg, and Sebastian Gehrmann. 2021. Au- tomatic construction of evaluation suites for natural In Thirty-fifth Con- language generation datasets. ference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 1). Tri Nguyen, Mir Rosenberg, Xia Song, Jianfeng Gao, Saurabh Tiwary, Rangan Majumder, and Li Deng. 2016. Ms marco: A human-generated machine read- ing comprehension dataset. OpenAI. 2023. Gpt-4 technical report. Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, John Schulman, Jacob Hilton, Fraser Kelton, Luke Miller, Maddie Simens, Amanda Askell, Peter Welinder, Paul F Christiano, Jan Leike, and Ryan Lowe. 2022a. Training language models to follow instructions with human feedback. In Advances in Neural Information Processing Systems, volume 35, pages 27730–27744. Curran Associates, Inc. Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. 2022b. Training language models to follow instruc- tions with human feedback. Advances in Neural Information Processing Systems, 35:27730–27744. Ronak Pradeep, Sahel Sharifymoghaddam, and Jimmy Lin. 2023. Rankzephyr: Effective and robust zero- shot listwise reranking is a breeze! arXiv preprint arXiv:2312.02724. Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2019. Exploring the limits of transfer learning with a unified text-to-text trans- former. arXiv e-prints. Ori Ram, Yoav Levine, Itay Dalmedigos, Dor Muhlgay, Amnon Shashua, Kevin Leyton-Brown, and Yoav In-Context Retrieval-Augmented Shoham. 2023. Language Models. Transactions of the Association for Computational Linguistics, 11:1316–1331. Jeff Rasley, Samyam Rajbhandari, Olatunji Ruwase, and Yuxiong He. 2020. Deepspeed: System optimiza- tions enable training deep learning models with over 100 billion parameters. 
In Proceedings of the 26th ACM SIGKDD International Conference on Knowl- edge Discovery & Data Mining, pages 3505–3506. Moritz Schaefer, Stephan Reichl, Rob ter Horst, Adele M Nicolas, Thomas Krausgruber, Francesco Piras, Peter Stepper, Christoph Bock, and Matthias Samwald. 2023. Large language models are univer- sal biomedical simulators. bioRxiv, pages 2023–06. Jost Schatzmann, Kallirroi Georgila, and Steve Young. 2005. Quantitative evaluation of user simulation tech- niques for spoken dialogue systems. In 6th SIGdial Workshop on DISCOURSE and DIALOGUE. Timo Schick, Jane Dwivedi-Yu, Roberto Dessi, Roberta Raileanu, Maria Lomeli, Eric Hambro, Luke Zettle- moyer, Nicola Cancedda, and Thomas Scialom. 2023. Toolformer: Language models can teach themselves to use tools. In Thirty-seventh Conference on Neural Information Processing Systems. Ilia Shumailov, Zakhar Shumaylov, Yiren Zhao, Yarin Gal, Nicolas Papernot, and Ross Anderson. 2023. The curse of recursion: Training on generated data makes models forget. Aarohi Srivastava, Abhinav Rastogi, Abhishek Rao, Abu Awal Md Shoeb, Abubakar Abid, Adam Fisch, Adam R. Brown, Adam Santoro, Aditya Gupta, Adrià Garriga-Alonso, Agnieszka Kluska, Aitor Lewkowycz, Akshat Agarwal, Alethea Power, Alex Ray, Alex Warstadt, Alexander W. Kocurek, Ali Safaya, Ali Tazarv, Alice Xiang, Alicia Parrish, Allen Nie, Aman Hussain, Amanda Askell, Amanda Dsouza, Ambrose Slone, Ameet Rahane, Anan- tharaman S. Iyer, Anders Johan Andreassen, An- drea Madotto, Andrea Santilli, Andreas Stuhlmüller, Andrew M. Dai, Andrew La, Andrew Lampinen, Andy Zou, Angela Jiang, Angelica Chen, Anh Vuong, Animesh Gupta, Anna Gottardi, Antonio Norelli, Anu Venkatesh, Arash Gholamidavoodi, Arfa Tabassum, Arul Menezes, Arun Kirubara- jan, Asher Mullokandov, Ashish Sabharwal, Austin Herrick, Avia Efrat, Aykut Erdem, Ayla Karaka¸s, B. Ryan Roberts, Bao Sheng Loe, Barret Zoph, Bartłomiej Bojanowski, Batuhan Özyurt, Behnam Hedayatnia, Behnam Neyshabur, Benjamin Inden, Benno Stein, Berk Ekmekci, Bill Yuchen Lin, Blake Howald, Bryan Orinion, Cameron Diao, Cameron Dour, Catherine Stinson, Cedrick Argueta, Cesar Ferri, Chandan Singh, Charles Rathkopf, Chenlin Meng, Chitta Baral, Chiyu Wu, Chris Callison- Burch, Christopher Waites, Christian Voigt, Christo- pher D Manning, Christopher Potts, Cindy Ramirez, Clara E. Rivera, Clemencia Siro, Colin Raffel, Court- ney Ashcraft, Cristina Garbacea, Damien Sileo, Dan Garrette, Dan Hendrycks, Dan Kilman, Dan Roth, C. 
Daniel Freeman, Daniel Khashabi, Daniel Levy, Daniel Moseguí González, Danielle Perszyk, Danny Hernandez, Danqi Chen, Daphne Ippolito, Dar Gilboa, David Dohan, David Drakard, David Ju- rgens, Debajyoti Datta, Deep Ganguli, Denis Emelin, Denis Kleyko, Deniz Yuret, Derek Chen, Derek Tam, Dieuwke Hupkes, Diganta Misra, Dilyar Buzan, Dimitri Coelho Mollo, Diyi Yang, Dong-Ho Lee, Dylan Schrader, Ekaterina Shutova, Ekin Dogus Cubuk, Elad Segal, Eleanor Hagerman, Elizabeth Barnes, Elizabeth Donoway, Ellie Pavlick, Emanuele Rodolà, Emma Lam, Eric Chu, Eric Tang, Erkut Erdem, Ernie Chang, Ethan A Chi, Ethan Dyer, Ethan Jerzak, Ethan Kim, Eunice Engefu Manyasi, Evgenii Zheltonozhskii, Fanyue Xia, Fatemeh Siar, Fernando Martínez-Plumed, Francesca Happé, Fran- cois Chollet, Frieda Rong, Gaurav Mishra, Genta In- dra Winata, Gerard de Melo, Germán Kruszewski, Giambattista Parascandolo, Giorgio Mariani, Glo- ria Xinyue Wang, Gonzalo Jaimovitch-Lopez, Gregor Betz, Guy Gur-Ari, Hana Galijasevic, Hannah Kim, Hannah Rashkin, Hannaneh Hajishirzi, Harsh Mehta, Hayden Bogar, Henry Francis Anthony Shevlin, Hin- rich Schuetze, Hiromu Yakura, Hongming Zhang, Hugh Mee Wong, Ian Ng, Isaac Noble, Jaap Jumelet, Jack Geissinger, Jackson Kernion, Jacob Hilton, Jae- hoon Lee, Jaime Fernández Fisac, James B Simon, James Koppel, James Zheng, James Zou, Jan Kocon, Jana Thompson, Janelle Wingfield, Jared Kaplan, Jarema Radom, Jascha Sohl-Dickstein, Jason Phang, Jason Wei, Jason Yosinski, Jekaterina Novikova, Jelle Bosscher, Jennifer Marsh, Jeremy Kim, Jeroen Taal, Jesse Engel, Jesujoba Alabi, Jiacheng Xu, Jiaming Song, Jillian Tang, Joan Waweru, John Burden, John Miller, John U. Balis, Jonathan Batchelder, Jonathan Berant, Jörg Frohberg, Jos Rozen, Jose Hernandez- Orallo, Joseph Boudeman, Joseph Guerr, Joseph Jones, Joshua B. Tenenbaum, Joshua S. Rule, Joyce Chua, Kamil Kanclerz, Karen Livescu, Karl Krauth, Karthik Gopalakrishnan, Katerina Ignatyeva, Katja Markert, Kaustubh Dhole, Kevin Gimpel, Kevin Omondi, Kory Wallace Mathewson, Kristen Chia- fullo, Ksenia Shkaruta, Kumar Shridhar, Kyle Mc- Donell, Kyle Richardson, Laria Reynolds, Leo Gao, Li Zhang, Liam Dugan, Lianhui Qin, Lidia Contreras- Ochando, Louis-Philippe Morency, Luca Moschella, Lucas Lam, Lucy Noble, Ludwig Schmidt, Luheng He, Luis Oliveros-Colón, Luke Metz, Lütfi Kerem Senel, Maarten Bosma, Maarten Sap, Maartje Ter Hoeve, Maheen Farooqi, Manaal Faruqui, Mantas Mazeika, Marco Baturan, Marco Marelli, Marco Maru, Maria Jose Ramirez-Quintana, Marie Tolkiehn, Mario Giulianelli, Martha Lewis, Martin Potthast, Matthew L Leavitt, Matthias Hagen, Mátyás Schu- bert, Medina Orduna Baitemirova, Melody Arnaud, Melvin McElrath, Michael Andrew Yee, Michael Co- hen, Michael Gu, Michael Ivanitskiy, Michael Star- ritt, Michael Strube, Michał Sw˛edrowski, Michele Bevilacqua, Michihiro Yasunaga, Mihir Kale, Mike Cain, Mimee Xu, Mirac Suzgun, Mitch Walker, Mo Tiwari, Mohit Bansal, Moin Aminnaseri, Mor Geva, Mozhdeh Gheini, Mukund Varma T, Nanyun Peng, Nathan Andrew Chi, Nayeon Lee, Neta Gur- Ari Krakover, Nicholas Cameron, Nicholas Roberts, Nick Doiron, Nicole Martinez, Nikita Nangia, Niklas Deckers, Niklas Muennighoff, Nitish Shirish Keskar, Niveditha S. 
Iyer, Noah Constant, Noah Fiedel, Nuan Wen, Oliver Zhang, Omar Agha, Omar El- baghdadi, Omer Levy, Owain Evans, Pablo Anto- nio Moreno Casares, Parth Doshi, Pascale Fung, Paul Pu Liang, Paul Vicol, Pegah Alipoormolabashi, Peiyuan Liao, Percy Liang, Peter W Chang, Pe- ter Eckersley, Phu Mon Htut, Pinyu Hwang, Piotr Miłkowski, Piyush Patil, Pouya Pezeshkpour, Priti Oli, Qiaozhu Mei, Qing Lyu, Qinlang Chen, Ra- bin Banjade, Rachel Etta Rudolph, Raefer Gabriel, Rahel Habacker, Ramon Risco, Raphaël Millière, Rhythm Garg, Richard Barnes, Rif A. Saurous, Riku Arakawa, Robbe Raymaekers, Robert Frank, Ro- han Sikand, Roman Novak, Roman Sitelew, Ro- nan Le Bras, Rosanne Liu, Rowan Jacobs, Rui Zhang, Russ Salakhutdinov, Ryan Andrew Chi, Seungjae Ryan Lee, Ryan Stovall, Ryan Teehan, Rylan Yang, Sahib Singh, Saif M. Mohammad, Sajant Anand, Sam Dillavou, Sam Shleifer, Sam Wiseman, Samuel Gruetter, Samuel R. Bowman, Samuel Stern Schoenholz, Sanghyun Han, Sanjeev Kwatra, Sarah A. Rous, Sarik Ghazarian, Sayan Ghosh, Sean Casey, Sebastian Bischoff, Sebastian Gehrmann, Sebastian Schuster, Sepideh Sadeghi, Shadi Hamdan, Sharon Zhou, Shashank Srivastava, Sherry Shi, Shikhar Singh, Shima Asaadi, Shixi- ang Shane Gu, Shubh Pachchigar, Shubham Tosh- niwal, Shyam Upadhyay, Shyamolima Shammie Debnath, Siamak Shakeri, Simon Thormeyer, Si- mone Melzi, Siva Reddy, Sneha Priscilla Makini, Soo-Hwan Lee, Spencer Torene, Sriharsha Hatwar, Stanislas Dehaene, Stefan Divic, Stefano Ermon, Stella Biderman, Stephanie Lin, Stephen Prasad, Steven Piantadosi, Stuart Shieber, Summer Mish- erghi, Svetlana Kiritchenko, Swaroop Mishra, Tal Linzen, Tal Schuster, Tao Li, Tao Yu, Tariq Ali, Tatsunori Hashimoto, Te-Lin Wu, Théo Desbor- des, Theodore Rothschild, Thomas Phan, Tianle Wang, Tiberius Nkinyili, Timo Schick, Timofei Ko- rnev, Titus Tunduny, Tobias Gerstenberg, Trenton Chang, Trishala Neeraj, Tushar Khot, Tyler Shultz, Uri Shaham, Vedant Misra, Vera Demberg, Victo- ria Nyamai, Vikas Raunak, Vinay Venkatesh Ra- masesh, vinay uday prabhu, Vishakh Padmakumar, Vivek Srikumar, William Fedus, William Saunders, William Zhang, Wout Vossen, Xiang Ren, Xiaoyu Tong, Xinran Zhao, Xinyi Wu, Xudong Shen, Yadol- lah Yaghoobzadeh, Yair Lakretz, Yangqiu Song, Yasaman Bahri, Yejin Choi, Yichi Yang, Yiding Hao, Yifu Chen, Yonatan Belinkov, Yu Hou, Yu- fang Hou, Yuntao Bai, Zachary Seid, Zhuoye Zhao, Zijian Wang, Zijie J. Wang, Zirui Wang, and Ziyi Wu. 2023. Beyond the imitation game: Quantifying and extrapolating the capabilities of language models. Demonstrations, pages 38–45, Online. Association for Computational Linguistics. Yue Zhang, Yafu Li, Leyang Cui, Deng Cai, Lemao Liu, Tingchen Fu, Xinting Huang, Enbo Zhao, Yu Zhang, Yulong Chen, Longyue Wang, Anh Tuan Luu, Wei Bi, Freda Shi, and Shuming Shi. 2023. Siren’s song in the ai ocean: A survey on hallucination in large language models. ArXiv, abs/2309.01219. Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric. P Xing, Hao Zhang, Joseph E. Gonzalez, and Ion Stoica. 2023. Judging llm-as-a-judge with mt-bench and chatbot arena. Daniel M. Ziegler, Nisan Stiennon, Jeffrey Wu, Tom B. Brown, Alec Radford, Dario Amodei, Paul Chris- tiano, and Geoffrey Irving. 2019. Fine-tuning lan- arXiv guage models from human preferences. preprint arXiv:1909.08593. Transactions on Machine Learning Research. Rohan Taori, Ishaan Gulrajani, Tianyi Zhang, Yann Dubois, Xuechen Li, Carlos Guestrin, Percy Liang, and Tatsunori B. Hashimoto. 2023. 
Stanford alpaca: An instruction-following llama model. https:// github.com/tatsu-lab/stanford_alpaca. Gemini Team, Rohan Anil, Sebastian Borgeaud, Yonghui Wu, Jean-Baptiste Alayrac, Jiahui Yu, Radu Soricut, Johan Schalkwyk, Andrew M Dai, Anja Hauth, et al. 2023. Gemini: a family of highly capable multimodal models. arXiv preprint arXiv:2312.11805. Hugo Touvron, Louis Martin, Kevin Stone, Peter Al- bert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. 2023. Llama 2: Open founda- tion and fine-tuned chat models. arXiv preprint arXiv:2307.09288. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. Advances in neural information processing systems, 30. Ben Wang and Aran Komatsuzaki. 2021. GPT-J- 6B: A 6 Billion Parameter Autoregressive Lan- guage Model. https://github.com/kingoflolz/ mesh-transformer-jax. Yile Wang, Peng Li, Maosong Sun, and Yang Liu. 2023a. Self-knowledge guided retrieval augmenta- tion for large language models. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 10303–10315, Singapore. Association for Computational Linguistics. Yizhong Wang, Yeganeh Kordi, Swaroop Mishra, Alisa Liu, Noah A. Smith, Daniel Khashabi, and Hannaneh Hajishirzi. 2023b. Self-instruct: Aligning language models with self-generated instructions. In Proceed- ings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 13484–13508, Toronto, Canada. Association for Computational Linguistics. Jason Wei, Maarten Bosma, Vincent Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M Dai, and Quoc V Le. Finetuned language models are zero-shot learners. In International Conference on Learning Representations. Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pier- ric Cistac, Tim Rault, Remi Louf, Morgan Funtow- icz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Trans- formers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System
GENERATIVE TARGET UPDATE FOR ADAPTIVE SIAMESE TRACKING

A PREPRINT

Madhu Kiran∗† Le Thanh Nguyen-Meidine∗ Rajat Sahay‡ Rafael Menelau Oliveira E Cruz∗ Louis-Antoine Blais-Morin§ Eric Granger∗

∗Laboratoire d’imagerie, de vision et d’intelligence artificielle (LIVIA), Ecole de technologie superieure, Montreal, Canada
†Corresponding author, [email protected]
‡Vellore Institute of Technology, Vellore
§Genetec Inc.

February 22, 2022

ABSTRACT

Siamese trackers perform similarity matching with templates (i.e., target models) to recursively localize objects within a search region. Several strategies have been proposed in the literature to update a template based on the tracker output, typically extracted from the target search region in the current frame, and thereby mitigate the effects of target drift. However, this may lead to corrupted templates, limiting the potential benefits of a template update strategy. This paper proposes a model adaptation method for Siamese trackers that uses a generative model to produce a synthetic template from the object search regions of several previous frames, rather than directly using the tracker output. Since the search region encompasses the target, attention from the search region is used for robust model adaptation. In particular, our approach relies on an auto-encoder trained through adversarial learning to detect changes in a target object's appearance, and predict a future target template, using a set of target templates localized from tracker outputs at previous frames. To prevent template corruption during the update, the proposed tracker also performs change detection using the generative model to suspend updates until the tracker stabilizes, and robust matching can resume through dynamic template fusion. Extensive experiments conducted on VOT-16, VOT-17, OTB-50, and OTB-100 datasets highlight the effectiveness of our method, along with the impact of its key components. Results indicate that our proposed approach can outperform state-of-art trackers, and its overall robustness allows tracking for a longer time before failure.

Code: https://anonymous.4open.science/r/AdaptiveSiamese-CE78/

1 Introduction

Many video analytics, monitoring, and surveillance applications rely on visual object tracking (VOT) to locate targets appearing in a camera viewpoint over time, and to support scene understanding, action and event recognition, video summarizing, and person re-identification. In real-world video surveillance applications, VOT is challenging due to real-time computational constraints, changes and deformation in target appearance, rapid motions, occlusion, motion blur, and complex backgrounds. In real-time video surveillance applications, the time required to capture and identify various events is a significant constraint.

Techniques for VOT may be categorized according to the target model or template construction mechanism, as either generative or discriminative. Generative appearance models represent target appearance without considering the background, while discriminative trackers learn a representation to distinguish between a target and background Salti et al. [2012]. The trackers can be further classified based on their image representation techniques, ranging from conventional hand-crafted descriptors Hare et al. [2016], Henriques et al. [2015], Nebehay and Pflugfelder, Wang et al. to more recent deep learning models, like Siamese trackers Bertinetto et al. [2016], Guo et al. [a], Li et al. [a], Li and Zhang [2019], Zhang and Peng, Zhang et al. [a], Zhu et al..
Figure 1: Approaches to select templates for adaptive Siamese tracking. (a) Conventional approaches select templates from previous tracker outputs. (b) Our approach generates templates from previous ones using a generative model, and filters noisy templates via change detection.

One of the initial Siamese trackers – the Fully Convolutional Siamese tracker (SiameseFC) Bertinetto et al. [2016] – uses a single feature representation extracted at the beginning of a trajectory, and does not update the target features during tracking. Although this strategy can provide computational efficiency, SiameseFC trackers suffer from target drift over time. Target drift is defined as a situation where the tracker slowly starts focusing on distracting background (rather than the target), and eventually loses the target. Such drift causes broken tracklets, a potential problem in video surveillance applications such as loitering detection, video person re-identification, face recognition, and other related applications. When the object's appearance changes abruptly, or the object is occluded or partially leaves the search region, the SiameseFC tracker temporarily drifts to a location with a high response map score Zhang et al. [b].

Some adaptive Siamese trackers have been proposed that allow for template updates. Most early trackers sought to update target features as a moving average based on localizations from the current frame output. Other trackers apply strategies to address drifting objects by storing an array of templates, or combining features from various tracker outputs Yang and Chan, Zhang et al. [b]. However, these trackers face issues when updating templates on every frame based on tracker output. In particular, they integrate noise from the tracker output templates, especially in the presence of image occlusion or drift. Moreover, when training a Siamese tracker for matching based on multiple templates, learning the template update function in addition to the conventional search-template pair may lead to over-fitting Zhang et al. [b]. Hence, to avoid corrupting templates, it is important to manage when and how the templates are updated.

In this paper, we focus on robust VOT of a single object, where the template is updated dynamically in response to changes in the object's appearance. This paper introduces a method that can be applied to any adaptive Siamese tracker for real-time applications. Instead of using the samples mined directly from the tracker output, we propose to use a generative model to generate a sample by observing many previous target templates. This generative model predicts the future appearance of a target template given a set of consecutive target templates localized from tracker outputs at previous frames. It also allows detecting abrupt changes in the appearance of target objects, and thereby preventing template corruption by suspending template updates until the tracker stabilizes. In the absence of an abrupt change, our generative model outputs a synthetic target template for robust matching through dynamic template fusion, and for updating the target template. In contrast with Zhang et al. [b], our method learns the target update itself, using cross-attention between search region and template features.
This allows selecting channels among the target features that are most useful for the target update. The cross-attention approach relies on attention from the target's current appearance in the search region to update the existing target template. The proposed generative model is designed by adversarially training a video autoencoder to produce a future frame. The discrepancy between the generated future frame and the target's appearance from the tracker output helps detect appearance changes using a change detection mechanism.

We summarise our contributions as follows. We propose a method for adaptation of Siamese trackers based on a generative model update. The generative model produces a future template by observing the past templates. Additionally, change detection is proposed using the generative model to suspend the model update during target drifting. Finally, the method relies on the difference between a simple average and a learned fusion of templates to define an inequality constraint during learning of the model adaptation, and it uses attention from the search region to attend to salient regions in the tracker-localized template. For proof-of-concept validation, the proposed method is integrated into state-of-art SiamFC+ and SiamRPN trackers Zhang and Peng, Li et al. [a], and compared to different conventional and state-of-art trackers from the deep Siamese family Bertinetto et al. [2016], Zhang and Peng on videos from the OTB Wu et al. and VOT Kristan and et al., Kristan and et al. [2018] evaluation datasets. We also perform ablation studies on the different modules to study the effectiveness of the proposed method.

2 Related Work

Pioneered by SINT Tao et al. and SiamFC Bertinetto et al. [2016], the Siamese family of trackers evolved from Siamese networks trained offline with similarity metrics. These networks were trained on a large dataset to learn generic features for object tracking. SiamRPN Li et al. [a] further improves on this work by employing region proposals to produce a target-specific anchor-based detector. Subsequent Siamese trackers mainly involved designing more powerful backbones Zhang and Peng, Li and Zhang [2019] or proposal networks, like in Fan and Ling. ATOM Danelljan et al. [a] and DIMP Bhat et al. [2019] are robust online trackers that differ from the general offline Siamese trackers by their ability to update the model online during tracking. Other paradigms of Siamese trackers are distractor-aware training and domain-specific tracking He et al., Zhu et al.. In Zhong et al. [2018], an LSTM is incorporated to learn long-term relationships during tracking, turning the VOT problem into a consecutive decision-making process of selecting the best model for tracking via reinforcement learning Duman and Erdem [2019]. In Valmadre et al. and Zhu et al., models are updated online by moving-average based learning. These methods integrate the target region extracted from the tracker output into the initial target. In Song et al., a generative model is learned via adversarial learning to generate random masks that produce shifted versions of target templates from the original template; an adversarial function then decides whether or not the generated templates are from the same distribution and whether they will be used as templates for tracking. In Yang and Chan, an LSTM is employed to estimate the current template by storing previous templates in a memory bank.
In Guo et al. [b], the authors propose to compute a transformation matrix with reference to the initial template, using a regularised linear regression in the Fourier domain. Finally, in Yao et al., the authors propose to learn the update coefficients of a correlation filter-based tracker online using SGD. All these methods use the tracker output as the reference template while updating on top of the initial template. Bhat et al. [2019], Danelljan et al. [b] propose a model where an adaptive discriminative model is generated online by the steepest gradient descent method. They differ from other online learned methods like Nam and Han due to their real-time performance. Similarly, Zhang et al. [a] introduce online model prediction but employ a fast conjugate gradient algorithm for model prediction; foreground score maps are estimated online, and the classification branch is combined by weighted addition.

Several methods follow the standard strategy of updating target template features by simple averaging, where the template is updated as a running average with exponentially decaying weights over time. This yields a template update defined by:

$\tilde{\phi}_n = (1 - \gamma)\,\tilde{\phi}_{n-1} + \gamma\,\phi_n$,   (1)

where $n$ denotes the time step, $\tilde{\phi}_n$ the predicted template, and $\gamma$ the learning rate. This strategy has several issues, most notably the possibility of integrating noise and corruption into templates during the update process. Therefore, the authors in Zhang et al. [b] proposed a network which, given past template features and the template extracted from the current tracker output, produces a new representation that can be added to the original ground truth template (obtained during tracker initialization). This approach further suffers from the following issues. (1) A future template for the tracker is unseen at the time of the template update, and the model is updated solely based on the tracker output from past frames. (2) The model is updated every frame, making it still susceptible to the integration of noise over time. (3) Network training is tedious, since the network must be trained continuously offline by running the tracker on the training dataset; it must produce a previous-frame feature representation that needs to be stored and used for the next training iteration. Further developments in this direction are challenging.
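For contrast with the learned update methods discussed above, the running-average rule of Eq. (1) reduces to a one-line exponential moving average over template features. A minimal sketch is shown below (the NumPy representation, feature size, and learning rate are illustrative assumptions); it makes clear why any noise in a tracker-output template is folded into every subsequent template.

```python
import numpy as np

def ema_template_update(prev_template, new_template, gamma=0.01):
    """Eq. (1): running-average template update with learning rate gamma.

    Any noise present in `new_template` (e.g. from a drifting tracker output)
    is blended into the template and persists in all later updates.
    """
    return (1.0 - gamma) * prev_template + gamma * new_template

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    template = rng.standard_normal((256, 6, 6))      # hypothetical feature size
    for _ in range(10):                              # ten tracked frames
        tracker_output = template + 0.1 * rng.standard_normal(template.shape)
        template = ema_template_update(template, tracker_output, gamma=0.01)
    print(template.shape)
```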
3 Proposed Adaptive Siamese Tracker

Given the initial object location, a ground-truth object template image $T$ is extracted, along with the corresponding deep CNN features $\phi_{gt}$. A tracker seeks to produce an object localization (bounding box) at a given time step by matching $\phi_{gt}$ with search region features $\phi_s$. The objective is to produce a trajectory by recursively extracting search regions from the tracker output, and matching them with a given template over each input video frame.

a) Template Prediction and Change Detection: Inspired by Tang et al. [2020], we employ a video autoencoder that is trained through adversarial learning for template generation. Given a set of past templates $T_n$, where $n = t, t-1, t-2, t-3, \ldots$, we aim to predict a future template for time step $t$. As described below, our template generation method consists of a generator and a discriminator.

Generator: It consists of an encoder-decoder architecture (see Fig. 2). The encoder compresses an input video clip into a small bottleneck with a set of CNN layers and a Conv-LSTM based recurrent network to model the temporal relationship between the frames. The decoder consists of several layers of transposed CNNs to obtain the predicted video frame. Hence, given an input video clip $T_{t-k}, \ldots, T_{t-2}, T_{t-1}$, the generator produces the estimated future video frame $\hat{T}$. The generator is trained according to the Mean Squared Error (MSE) between the predicted image $\hat{T}$ and the ground truth image $T$.

Discriminator: It comprises several CNN layers that compete with the generator to differentiate between ground truth and generated frames. The discriminator distinguishes a real-world image from a fake image, promoting the generator to produce good quality images. Since training the autoencoder on the MSE loss alone will cause the output to be blurry, we leverage the discriminator to help produce higher-quality images. The labels are set to 0 for fake images (obtained from the autoencoder's reconstruction) and 1 for real images (the ground truth template image). The discriminator is trained with an adversarial loss:

$L^{D}_{adv}(T, \hat{T}) = \frac{1}{2}\,(D(T) - 1)^2 + \frac{1}{2}\,(D(\hat{T}) - 0)^2$   (2)

Figure 2: Our generator model is a video autoencoder that is trained adversarially. A future target template is reconstructed from a sequence of input target templates. The discriminator D processes the reconstructed template as fake, and the ground truth template input as real.

Figure 3: Block diagram of our proposed generic template update system for Siamese trackers that adapts the model of the target template with a generative model and change detection. Our attention-based dynamic ensemble of targets adapts the model to the current representation of the target with attention from the search region. The change detection system disables template updates during anomalies such as occlusion and severe target drift.
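The sketch below illustrates one adversarial training step consistent with the losses above: an MSE reconstruction objective for the generator and the least-squares loss of Eq. (2) for the discriminator. The tiny network architectures, the optimizer settings, and the extra term that lets the generator receive gradients from the discriminator are simplifying assumptions; the paper specifies a Conv-LSTM bottleneck and only states the MSE objective for the generator explicitly.

```python
import torch
import torch.nn as nn

class TinyGenerator(nn.Module):
    """Stand-in for the video autoencoder: encodes a clip of k templates and
    decodes one future template. A real implementation would use a Conv-LSTM
    bottleneck; here the clip is simply stacked along the channel axis."""
    def __init__(self, k=4, ch=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(k * ch, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, ch, 3, padding=1), nn.Tanh(),
        )
    def forward(self, clip):                  # clip: (B, k, C, H, W)
        b, k, c, h, w = clip.shape
        return self.net(clip.reshape(b, k * c, h, w))

class TinyDiscriminator(nn.Module):
    def __init__(self, ch=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(ch, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 1),
        )
    def forward(self, x):
        return self.net(x)

def train_step(gen, dis, g_opt, d_opt, clip, target):
    """One adversarial step: MSE for the generator, Eq. (2) for the discriminator."""
    fake = gen(clip)
    # Discriminator: push D(real) -> 1 and D(fake) -> 0 (least-squares form of Eq. (2)).
    d_loss = 0.5 * (dis(target) - 1).pow(2).mean() + 0.5 * dis(fake.detach()).pow(2).mean()
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()
    # Generator: reconstruct the future template and (optionally) fool the discriminator.
    g_loss = nn.functional.mse_loss(fake, target) + 0.5 * (dis(fake) - 1).pow(2).mean()
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
    return d_loss.item(), g_loss.item()

if __name__ == "__main__":
    gen, dis = TinyGenerator(), TinyDiscriminator()
    g_opt = torch.optim.Adam(gen.parameters(), 1e-4)
    d_opt = torch.optim.Adam(dis.parameters(), 1e-4)
    clip = torch.rand(2, 4, 3, 64, 64)        # two clips of four past templates
    target = torch.rand(2, 3, 64, 64)         # corresponding future templates
    print(train_step(gen, dis, g_opt, d_opt, clip, target))
```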
Change Detection: Once the adversarial autoencoder has been trained, the average MSE error between the reconstructed template and the search regions from each input frame in the video clip is computed to produce the reconstruction error. Similar to previous methods such as Duman and Erdem [2019], Zhao et al., we adopt the regularity score to detect abrupt changes in template clips. Let $e(T)$ be the reconstruction error. The reconstruction error is normalized over sequences of the same video with:

$s(x) = 1 - \dfrac{e(T) - \min_T e(T)}{\max_T e(T)}$   (3)

In practice, it is difficult to set $\min_T e(T)$ and $\max_T e(T)$ since the future frames are not observable. Hence, we set $\min_T e(T)$ and $\max_T e(T)$ experimentally using a validation set. The regularity score $s(x)$ serves as the measure for regular templates; a score less than a threshold $\tau$ is considered an abrupt change. The length of the input template sequence is kept fixed, and new templates are added to the sequence by pushing the oldest template out of the stack. When a change is detected, the template that was last pushed into the stack is rejected and considered a possible source of corruption, and the template update is stalled for that particular time step.
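A minimal sketch of this change-detection logic is given below: the regularity score of Eq. (3) is computed from the reconstruction error with validation-set bounds, and the fixed-length template stack is only updated when no abrupt change is flagged. The threshold value and the list-based stack representation are illustrative assumptions.

```python
import numpy as np

def regularity_score(recon_error, e_min, e_max):
    """Eq. (3): map a reconstruction error to a regularity score.

    e_min and e_max are fixed from a validation set, since future frames
    (and hence the true min/max errors of the sequence) are not observable
    at tracking time.
    """
    return 1.0 - (recon_error - e_min) / e_max

def update_template_stack(stack, new_template, recon_error,
                          e_min, e_max, tau=0.5):
    """Push the newest template unless a drastic appearance change is detected.

    Returns (stack, changed): when the regularity score falls below tau, the
    candidate template is rejected and the update for this step is skipped.
    """
    changed = regularity_score(recon_error, e_min, e_max) < tau
    if not changed:
        stack = stack[1:] + [new_template]   # fixed-length FIFO of templates
    return stack, changed

if __name__ == "__main__":
    stack = [np.zeros((64, 64, 3)) for _ in range(4)]
    stack, changed = update_template_stack(
        stack, np.ones((64, 64, 3)), recon_error=0.12, e_min=0.02, e_max=0.20)
    print("change detected:", changed)
```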
Template update can also be performed by simply averaging features that would suffer from noisy updates and feature smoothing due to averaging both leading to information loss. Such simple averaging can be used as a cue to introduce a constraint to optimize the template update. Let ϕavg be the averaged template obtained by averaging ϕinit and ϕt−1. Let DE denote the Euclidean distance function. It is reasonable to assume that simple template averaging is a trivial solution and therefore the distance between learnt template (cid:101)ϕt and ϕGT (the future template) must be less than ϕavg and ϕGT . Constrained loss given by, Lconst−mse = (cid:107)ϕGT − (cid:101)ϕt(cid:107)2 + λ ReLU ((DE(ϕGT , (cid:101)ϕ) − DE(ϕGT , ϕavg)) (7) where ReLU ensures that the gradients are passed for the constraint only when the constraint is not respected. λ is set to a value (cid:29) 1 and is determined experimentally. 4 Results and Discussion a) Experimental Methodology: A ResNet-22 CNN similar to SiamDW tracker Sosnovik et al., Zhang and Peng is used for a fair comparison. The system on GOT-10K dataset Huang et al. [2019] to train our video autoencoder, as well as the tracking network similar to Sosnovik et al. for direct comparison since they use a similar baseline as ours. GOT-10K has around 10,000 video sequences with 1.5 million labeled bounding boxes to train our tracking network and auto encoder. In particular, due to many training sequences, the autoencoder overall motion model for objects in generic videos to predict frames in the future. We used the official training set of GOT10-K to train the networks. We use the same data augmentation techniques as Sosnovik et al., Zhang and Peng. The autoencoder was pre-trained adversarially with the discriminator. The Siamese tracker is pre-trained without the autoencoder by selecting random samples in a specific video, one for the template and the other for the search region. The standard tracking benchmarks, OTB2013, OTB2015 Wu et al. and VOT2017 Kristan and et al. video datasets, are uses to evaluate trackers. The OTB Wu et al. dataset consists of sets OTB213 and OTB2015 with 50 and 100 real-world tracking videos, respectively. The metrics used with OTB datasets are success rate and precision. VOT2017 dataset has 60 public test videos with a total of 21,356 frames. The VOT protocol re-initializes the tracker when the tracker fails with a delay of 5 frames. Evaluation measures used with VOT are EAO and (Expected average overlap), a combination of accuracy and robustness. Robustness refers to the number of times a tracker needs to be initialized. 6 arXiv Template A PREPRINT Table 1: EAO and robustness associated with different components of our proposed tracker on the VOT2017 dataset. Remark Sl Ablation · Template update Only SiamFC+ 1 Baseline Baseline and Update SiamFC+ and UpdateNet 2 SiamFC+ and Moving Average Baseline and Linear 3 SiamFC+ and Dynamic Update Ours without Constraint 4 5 SiamFC+ and Dynamic Constr Ours with INQ. Constraint · Generative Modelling 6 7 · Change Detection 8 Generated Template Update Generated Model and Blend 5) + Generated Template 6) + Tracker Output Blend Change Detection 7) + No Update on Drastic Change EAO↑ Robustness↓ 0.23 0.26 0.25 0.27 0.29 0.29 0.30 0.31 0.49 0.40 0.44 0.41 0.38 0.37 0.37 0.34 Table 2: Accuracy of our proposed and state-of-art trackers on the OTB-50, OTB-100, VOT2016 and VOT2017 datasets. Tracker SINT, CVPR-16 Tao et al. SiamFC, ECCV-16 Bertinetto et al. [2016] DSiam, ECCV-17 Zhu et al. 
StructSiam, ECCV-18 Zhang et al. [c] TriSiam, ECCV-18, Dong and Shen SiamRPN, CVPR-18 Li et al. [a] SE-Siam, WACV-21 Sosnovik et al. SiamFC+, CVPR-19 Zhang and Peng SiamRPN++, CVPR-19 Li et al. [b] Adaptive SiamFC+ (ours) Adaptive SiamRPN++ (ours) OTB2013 AUC↑ 0.64 0.61 0.64 0.64 0.62 - 0.68 0.67 - 0.68 - Prec↑ 0.85 0.81 0.81 0.88 0.82 - 0.90 0.88 - 0.89 - OTB2015 AUC↑ - 0.58 0.64 0.62 0.59 0.64 0.66 - 0.69 0.67 0.71 Prec↑ - 0.77 0.81 0.85 0.78 0.85 0.88 - 0.89 0.89 0.87 VOT2016 EAO↑ A↑ - 0.24 - 0.53 - 0.34 0.36 0.30 0.46 0.39 0.47 - - 0.56 0.59 0.54 0.64 0.56 0.61 R↓ - 0.46 - - - 0.26 0.24 0.26 0.20 0.21 0.19 VOT2017 - 0.5 EAO↑ A↑ - 0.19 - - 0.2 0.24 0.27 0.24 0.41 0.31 0.44 0.49 0.54 0.49 0.60 0.52 0.58 R↓ - 0.59 - - - 0.46 0.38 0.46 0.23 0.34 0.21 b) Ablation Study: We study the contribution of different components of our proposed method on the VOT2017 dataset. In the first part of Tab 1, "Template update," demonstrates our contribution to model adaptation. The second part, "Generative Model," evaluates the contribution of the generative model in the template update. Finally, the "Change Detection" part shows the effect of change detection on tracking EAO. In order to evaluate the template update part, we compare the results of the baseline Zhang and Peng which is also our backbone. The template update mechanism uses the output from tracker instead of the generative model instead of (cid:98)ϕt in the template update network. We implement Zhang et al. [b] based model adaptation for the baseline Zhang and Peng and moving average based linear update as in Zhu et al. is compared with our proposed update method "Dynamic Update" (with attention), which refers to training without the inequality constraint discussed above. Number 5) in the table refers to the experiment where template update is used with inequality constraints. It can be seen that using the inequality constraint alone and our template update mechanism has improved the overall Robustness of the tracker as indicated by the robustness score(lower the score more robust the tracker is). 6) and 7) in the Tab. 1 uses the output from generative model to feed (cid:98)ϕt. Since the generative model’s output is a bit blurry in 7) we blend it with tracker output extracted target template image to obtain a sharper image. Such blending has been shown to improve the result further. We detect drastic changes in the model via the regularity score of the tracker. The change detection will help prevent noisy updates during drift or occlusion; this is shown in 8) where no updates were made during drastic changes. c) Comparison with State-of-Art: We compare our proposed template update method implemented on SiamFC+ Zhang and Peng back-end against popular Siamese methods bench marked on OTB-50,OTB-100,VOT16,17 datasets. Similar to the benchmarking method in SE-SiamFC Sosnovik et al. we have selected the Siamese trackers for direct comparison with ours. It is important to note that our back-end Siamese tracker, training procedure, sample selection, Etc., are the same as Sosnovik et al.. OTB benchmark uses AUC, which signifies the average overlap over the dataset, and Precision (Prec) signifies the center distance error between object and tracker bounding box. We can see that our method performs competitively with Sosnovik et al. on OTB dataset shown in Tab.2 . It is important to note that OTB does not re-initialize the tracker on failure, and in addition, OTB does not consider track failures into the final evaluation. 
7 arXiv Template A PREPRINT On the other hand, the VOT dataset uses Expected average Overlap (EAO), Robustness (R), and Accuracy (A) as metrics. Particularly Robustness is interesting as it indicates some measure on tracker drift in a given dataset. EAO combines tracking accuracy and Robustness, and hence it is a better indicator of tracker performance than just AUC. We can see from the Tab.2 our method outperforms SOA by 4% and outperforms the baseline SiamFC+ Zhang and Peng by 7% on EAO. The results show that our proposed method would enable the tracker to track for longer periods before complete failure compared to the other methods we compare. To show drastic changes during tracking, we plot the IOU "overlap" (intersection over union for tracking bounding box over ground truth) and the regularity score produced by our change detector. In Fig 4 blue line indicates IOU for our proposed tracker. The thumbnails at the bottom indicate cutouts of the ground truth bounding box around the object being tracked. The video example is from "basketball" of the VOT17 dataset. It can be observed that the regularity score produced by our change detector is low during frames that have partial occlusion and during clutter around the background. Figure 4: Visualization of tracker accuracy in terms of instantaneous overlap (overlap) of tracker output with ground truth bounding box with video frame number on x axis. We show the results for the trackers with our proposed model update and the baseline SiamFC. Red arrow on the x-axis indicates points of drastic changes. 5 Conclusion Adaptive Siamese trackers commonly rely on the tracker’s output to update the target model. In this paper, we have identified shortcomings with this approach, and proposed a generative model to predict a synthetic target template based on the appearance of several templates from previous time steps. Since the generative model learns the future template from the distribution over past time steps, it suppresses stochastic noise. We also propose a change detection mechanism to avoid noisy updates during abrupt changes in target appearance. Our proposed method can be integrated into any Siamese tracker, and results achieved on VOT16, VOT17, OTB-50, and OTB-100 datasets indicate that it can provide a high level of robustness (can track for a longer period before drifting) compared to state-of-art adaptive and baseline trackers. References S. Salti, A. Cavallaro, and L. Di Stefano. Adaptive appearance modeling for video tracking: Survey and evaluation. IEEE Transactions on Image Processing, 21(10):4334–4348, 2012. S. Hare, S. Golodetz, A. Saffari, V. Vineet, M. M. Cheng, S. L. Hicks, and P. H. S. Torr. Struck: Structured output tracking with kernels. IEEE Trans. PAMI, 38(10):2096–2109, 2016. J. F. Henriques, R. Caseiro, P. Martins, and J. Batista. High-speed tracking with kernelized correlation filters. IEEE Transactions on Pattern Analysis and Machine Intelligence, 37(3):583–596, 2015. G. Nebehay and R. Pflugfelder. Consensus-based matching and tracking of keypoints for object tracking. In WACV 2014, March . doi:10.1109/WACV.2014.6836013. 8 arXiv Template A PREPRINT X. Wang, M. O’Brien, C. Xiang, B. Xu, and H. Najjaran. Real-time visual tracking via robust kernelized correlation filter. In ICRA 2017. Luca Bertinetto, Jack Valmadre, João F Henriques, Andrea Vedaldi, and Philip HS Torr. Fully-convolutional siamese networks for object tracking. arXiv:1606.09549, 2016. Dongyan Guo, Jun Wang, Ying Cui, Zhenhua Wang, and Shengyong Chen. 
Siamcar: Siamese fully convolutional classification and regression for visual tracking. In CVPR2020, a. Bo Li, Junjie Yan, Wei Wu, Zheng Zhu, and Xiaolin Hu. High performance visual tracking with siamese region proposal network. In CVPR 2018, a. Yuhong Li and Xiaofan Zhang. Siamvgg: Visual tracking using deeper siamese networks. 2019. Zhipeng Zhang and Houwen Peng. Deeper and wider siamese networks for real-time visual tracking. In CVPR 2019. Zhipeng Zhang, Houwen Peng, Jianlong Fu, Bing Li, and Weiming Hu. Ocean: Object-aware anchor-free tracking. In ECCV 2020, a. Zheng Zhu, Qiang Wang, Li Bo, Wei Wu, Junjie Yan, and Weiming Hu. Distractor-aware siamese networks for visual object tracking. In ECCV 2018. Lichao Zhang, Abel Gonzalez-Garcia, Joost van de Weijer, Martin Danelljan, and Fahad Shahbaz Khan. Learning the model update for siamese trackers. In ICCV 2019, b. Tianyu Yang and Antoni B Chan. Learning dynamic memory networks for object tracking. In ECCV 2018. Yi Wu, Jongwoo Lim, and Ming-Hsuan Yang. Online object tracking: A benchmark. In CVPR 2013. M. Kristan and et al. The visual object tracking vot2017 challenge results. In ICCVW 2017. Matej Kristan and et al. The sixth visual object tracking vot2018 challenge results, 2018. Ran Tao, Efstratios Gavves, and Arnold WM Smeulders. Siamese instance search for tracking. In CVPR 2016. Heng Fan and Haibin Ling. Siamese cascaded region proposal networks for real-time visual tracking. In CVPR2019. Martin Danelljan, Goutam Bhat, Fahad Shahbaz Khan, and Michael Felsberg. Atom: Accurate tracking by overlap maximization. In CVPR2019, a. Goutam Bhat, Martin Danelljan, Luc Van Gool, and Radu Timofte. Learning discriminative model prediction for tracking. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 6182–6191, 2019. Anfeng He, Chong Luo, Xinmei Tian, and Wenjun Zeng. A twofold siamese network for real-time object tracking. In CVPR 2018. Bineng Zhong, Bing Bai, Jun Li, Yulun Zhang, and Yun Fu. Hierarchical tracking by reinforcement learning-based searching and coarse-to-fine verifying. IEEE Transactions on Image Processing, 28(5):2331–2341, 2018. Elvan Duman and Osman Ayhan Erdem. Anomaly detection in videos using optical flow and convolutional autoencoder. IEEE Access, 7:183914–183923, 2019. Jack Valmadre, Luca Bertinetto, Joao Henriques, Andrea Vedaldi, and Philip HS Torr. End-to-end representation learning for correlation filter based tracking. In CVPR 2017. Yibing Song, Chao Ma, Xiaohe Wu, Lijun Gong, Linchao Bao, Wangmeng Zuo, Chunhua Shen, Rynson WH Lau, and Ming-Hsuan Yang. Vital: Visual tracking via adversarial learning. In CVPR 2018. Qing Guo, Wei Feng, Ce Zhou, Rui Huang, Liang Wan, and Song Wang. Learning dynamic siamese network for visual object tracking. In ICCV2017, b. Yingjie Yao, Xiaohe Wu, Lei Zhang, Shiguang Shan, and Wangmeng Zuo. Joint representation and truncated inference learning for correlation filter based tracking. In ECCV 2018. Martin Danelljan, Luc Van Gool, and Radu Timofte. Probabilistic regression for visual tracking. In CVPR 2020, b. Hyeonseob Nam and Bohyung Han. Learning multi-domain convolutional neural networks for visual tracking. In CVPR 2016. Yao Tang, Lin Zhao, Shanshan Zhang, Chen Gong, Guangyu Li, and Jian Yang. Integrating prediction and reconstruction for anomaly detection. Pattern Recognition Letters, 129:123–130, 2020. Yiru Zhao, Bing Deng, Chen Shen, Yao Liu, Hongtao Lu, and Xian-Sheng Hua. Spatio-temporal autoencoder for video anomaly detection. In ICM2017. 
Arulkumar Subramaniam, Athira Nambiar, and Anurag Mittal. Co-segmentation inspired attention networks for video-based person re-identification. In ICCV 2019.
Ivan Sosnovik, Artem Moskalev, and Arnold WM Smeulders. Scale equivariance improves siamese tracking. In WACV2021.
Lianghua Huang, Xin Zhao, and Kaiqi Huang. GOT-10k: A large high-diversity benchmark for generic object tracking in the wild. IEEE Trans. on Pattern Analysis and Machine Intelligence, 2019.
Yunhua Zhang, Lijun Wang, Jinqing Qi, Dong Wang, Mengyang Feng, and Huchuan Lu. Structured siamese network for real-time visual tracking. In ECCV 2018, c.
Xingping Dong and Jianbing Shen. Triplet loss in siamese network for object tracking. In ECCV 2018.
B. Li, W. Wu, Q. Wang, F. Zhang, J. Xing, and J. Yan. SiamRPN++: Evolution of siamese visual tracking with very deep networks. In CVPR 2019, b.
Using Large Language Models in Automatic Hint Ranking and Generation Tasks Jamshid Mozafari University of Innsbruck [email protected] Florian Gerhold University of Innsbruck [email protected] Adam Jatowt University of Innsbruck [email protected] Abstract The use of Large Language Models (LLMs) has increased significantly recently, with indi- viduals frequently interacting with chatbots to receive answers to a wide range of questions. In an era where information is readily accessible, it is crucial to stimulate and preserve human cognitive abilities and maintain strong reason- ing skills. This paper addresses such challenges by promoting the use of hints as an alternative or a supplement to direct answers. We first introduce a manually constructed hint dataset, WIKIHINT, which includes 5,000 hints created for 1,000 questions. We then finetune open- source LLMs such as LLaMA-3.1 for hint gen- eration in answer-aware and answer-agnostic contexts. We assess the effectiveness of the hints with human participants who try to an- swer questions with and without the aid of hints. Additionally, we introduce a lightweight eval- uation method, HINTRANK, to evaluate and rank hints in both answer-aware and answer- agnostic settings. Our findings show that (a) the dataset helps generate more effective hints, (b) including answer information along with questions generally improves hint quality, and (c) encoder-based models perform better than decoder-based models in hint ranking. 1 Introduction In recent years, question answering (QA) systems have risen in importance, giving users the op- portunity to ask arbitrary questions and gain an- swers to them (Mavi et al., 2024; Karpukhin et al., 2020; Zhao et al., 2021; Abdel-Nabi et al., 2023). The rapid development of Large Language Mod- els (LLMs) (Gemini Team et al., 2023; OpenAI et al., 2023; Dubey et al., 2024) has without doubt contributed to this, as well as to many other Natu- ral Language Problems (NLPs) (Qin et al., 2024). While the benefits of using LLMs for current infor- mation access are clear1, there are some worries in relation to their potential effect on human devel- opment. One such concern relates to the potential weakening of important cognitive skills of users like thinking, reasoning, and remembering due to the expected widespread use of automatic ques- tion answering technologies, in particular, ones backed by powerful AI solutions (Heersmink, 2024). Users who will rely mainly on the solutions presented by AI’s, might also be discouraged to practice and improve their reasoning abilities (Al- fredo et al., 2024). For example, Darvishi et al. (2024) demonstrate that students are more likely to depend on AI assistance instead of learning from it. Moreover, Jošt et al. (2024) examine the impact of LLMs as an automated problem-solving technol- ogy on education and learning outcomes, demon- strating that such systems can negatively affect the development of learning skills. Furthermore, psy- chological studies confirm the importance of ob- taining an answer independently, enhancing user self-confidence and encouraging further learning (Bandura, 2013). Letting users come up with the correct answers by themselves should then also contribute to the positive psychological effect, po- tentially increasing their self-confidence and moti- vation for learning (Usher and Pajares, 2006). 
While there is no simple remedy to the aforementioned problem, we would like to promote an approach that involves humans in the answer finding process, by providing them with hints rather than direct answers. Hints are meant to serve as subtle clues to guide potential users towards a correct answer, without revealing the solution (Hume et al., 1996). This should engage humans' cognitive abilities, potentially leading to forming new pathways in brains based on the received hints and already possessed knowledge. Automatically generating hints for user questions could be used as an alternative for those who prefer to find the answers themselves; much like long-distance walking is an alternative to using a car for people who wish to take additional effort for the benefit of staying healthy (Panter et al., 2018).
1While the current LLMs have still considerable weaknesses such as hallucinations, one can assume that, in the future, these problems will be mitigated to a large extent, judging from the speed of the recent technology advancement.
Figure 1: Pipeline of WIKIHINT dataset generation. The numbers in arrows indicate the counts of output questions.
In this paper, we propose the first dataset for the Automatic Hint Generation (HG) task, called WIKIHINT2, which has been manually constructed and designed for both hint generation as well as evaluation. We next explore the performance of various LLMs in generating hints across different scenarios, including vanilla and finetuned models. We also examine the quality of the generated hints using both answer-aware and answer-agnostic approaches. Finally, we assess the effectiveness of a novel evaluation method for hint ranking called HINTRANK and compare it with other automatic evaluation techniques. To sum up, we make the following contributions in this paper:
• We release the first manually created dataset called WIKIHINT for the HG task, containing 5,000 hints and 1,000 questions.
• We propose an automatic evaluation method for ranking hints called HINTRANK and compare it with other evaluation methods.
• We finetune and evaluate LLMs on WIKIHINT to assess the dataset quality and the LLMs' capabilities in hint generation and ranking.
• We present several novel observations, including the findings of a positive correlation between hint convergence and helpfulness, an inverse correlation between their length and helpfulness, and the superiority of the answer-aware approach over the answer-agnostic approach. In general, our research can contribute to fostering research in explainable AI for education (Khosravi et al., 2022).
2 Related Work
Automatic question answering (QA) (Karpukhin et al., 2020; Zhao et al., 2021; Abdel-Nabi et al., 2023) and question generation (QG) (Kurdi et al., 2020; Lu and Lu, 2021; Zhang et al., 2021) have advanced considerably in the last years. These tasks have seen numerous different datasets (Trischler et al., 2017; Rajpurkar et al., 2016; Zhang et al., 2020) and evaluation metrics (Nema and Khapra, 2018; Mavi et al., 2022) proposed. The research related to hint generation is, however, still scarce, despite the fact that hinting is a common mechanism used by humans for question answering, and that automatic hint generation could be regarded as the third missing task alongside the two established ones, QA and QG.
The prior research focused mainly on generating hints for programming (Price et al., 2019; Kochmar et al., 2022; Barnes and Stamper, 2008; McBroom et al., 2021) and typically in the context of intelligent tutoring systems. Automatic hint generation for factoid questions was first addressed by Jatowt et al. (2023). How- ever, the authors neither released a dataset nor utilized LLMs, focusing instead on hints gener- ated from selected Wikidata3 predicates. Moreover, their work only considered an answer-aware setting and did not explore the hint ranking task. Subse- quently, Mozafari et al. (2024) released the first synthetic dataset for hint generation (HG) called TriviaHG, which was automatically generated us- ing LLMs. However, the automatic generation in- creases the likelihood of false information within the dataset, particularly due to the hallucination phenomenon of LLMs. The authors also intro- duced the first automatic evaluation method, called Convergence, for assessing hint quality. However, this method requires substantial computational re- sources as it relies on LLMs for evaluation. Inter- ested readers can also refer to the recent survey of 2https://github.com/DataScienceUIBK/WikiHint 3https://www.wikidata.org/ 4,0003,2472,7882,788Hs: 5,000Qs: 1,000QuestionSelectionManualVerificationAmazonMTurkHintDatasetQuestionSampling ModuleHint GenerationModule1,000DatasetAttributesManualVerificationChatGPTNaturalQuestionsSQuAD 2.0TrainHs: 4500Qs: 900TestHs: 500Qs: 100WikipediaInclude Wikipedia PageHints Jangra et al. (2024) who discuss various types of hints and challenges associated with hint genera- tion and evaluation. To the best of our knowledge, WIKIHINT is the first manually curated dataset for the HG task, with questions and hints verified by humans. Our pro- posed automatic evaluation method is also the first for this task that does not rely on LLMs and is lightweight enough to be used locally. 3 WIKIHINT Dataset The absence of high-quality, verified hint datasets poses a significant challenge given the demand- ing data requirements of LLMs for their effective training. In this section, we outline the process for constructing WIKIHINT dataset. Figure 1 provides an overview of the pipeline of the dataset gener- ation process, which we explore in detail in the following sections. 3.1 Question Sampling Module We incorporated AI-generated questions using ChatGPT (OpenAI, 2022) and questions from existing popular QA datasets such as SQuAD 2.0 (Rajpurkar et al., 2018) and Natural Questions (NQ) (Kwiatkowski et al., 2019). The following prompt was used for question generation: Can you give me 10 questions where the answer is ANSWER? Please put them in a CSV file with answer=ANSWER and link=WIKIPEDIA_LINK where each question has an answer and link. Make sure to put the questions in quotation marks. where WIKIPEDIA_LINK is the URL for the Wikipedia page corresponding to ANSWER. As men- tioned above, we also selected questions from SQuAD 2.0 and NQ making sure that their answers had dedicated Wikipedia articles with sufficiently long content. Finally, we manually verified the correctness of all the selected questions discarding questions that are either too general or have incor- rect answers. Table 6 in Appendix A compares the difficulty levels of questions based on their sources. 3.2 Hint Creation The crowdsourcing platform Amazon Mechanical Turk4 was then used to distribute the hint creation task among multiple workers. 
The instructions shown to crowdworkers asked them to create five hints for a question and its associated Wikipedia 4https://www.mturk.com/ Figure 2: The HINTRANK method. Number of hints Number of questions Avg. question length (words) Avg. hint length (words) Avg. #entities / question Avg. #entities / hint Train Test 4,500 900 19.55 17.77 1.2 1.2 500 100 19.19 18.32 1.44 1.18 Table 1: Statistics of WIKIHINT dataset. article. After generating the hints, the workers were asked to rank them on a scale from 1 to 5, with 1 being the most helpful in finding correct an- swers and 5 being the least helpful. Figure 9 in Ap- pendix A displays the annotators’ interface for hint generation while Figures 10, 11, and 12 provide additional views of the summarized instructions, detailed instructions, and the provided examples, respectively, to further assist the crowdworkers. Each data submission was subsequently reviewed manually for quality and was either approved or rejected based on their assessment. Table 14 in Appendix A shows the detailed criteria used for the selection process. The most common reasons for rejection were hints that directly revealed the an- swers (answer leakage) and hints that were single words instead of complete sentences. Among the 2,788 submissions reviewed, 1,788 were rejected and 1,000 were accepted. We prepared several at- tributes to be included for each question, answer, and hint, as shown in Tables 7, 8, and 9 in Ap- pendix A. We discuss some of them in Section 4. Finally, we divided the questions and hints into train and test subsets. The train subset includes 4,500 hints for 900 questions, while the test subset contains 500 hints for 100 questions. Tables 10, 11, 12, and 13 in Appendix A show few examples of hints taken from the WIKIHINT dataset. QuestionHint1Hint2Concatenate[CLS] Question Hint1 Answer [SEP] Question Hint2 Answer [SEP]Hint2 is better than Hint187%13%10AnswerHint1is better than Hint2 Rank 1 2 3 4 5 4.1 HINTRANK Evaluation Approach Average Length 16.99 17.67 18.02 18.14 18.3 Table 2: Average length of hints vs. their ranks. 4 Evaluation Approaches Mozafari et al. (2024) proposed several evaluation metrics for assessing the quality of hints, including Relevance, Readability, Convergence, and Familiar- ity, although the authors proposed automatic evalua- tion methods only for Convergence and Familiarity. Convergence is a measure of how effectively a hint can narrow down or eliminate potential answers to a given question. Familiarity measures the ex- pected level of knowledge of information expressed in hints. To evaluate hints based on Convergence and Familiarity, we follow the method proposed by Mozafari et al. (2024). However, we employ two cores for convergence including LLaMA-3.1-8b and LLaMA-3.1-70b (Dubey et al., 2024). We extend the above evaluation scheme by incor- porating automatic methods for evaluating hint’s Relevance and Readability. We also propose a new metric for evaluating the probability of answer leak- age - a case when a hint directly reveals the answer in its content. We introduce a lightweight auto- matic evaluation method for assessing hint quality in a pairwise scenario. The way to compute those additional metrics is briefly described below. Hints can be considered a form of an answer since they provide explanations of the question’s correct answer. 
Based on this, one can evaluate the Relevance of a hint to its question as an An- swer Relevance task (Es et al., 2024) - the task where the goal is to assess how pertinent the pro- vided answer is to the target question. To compute the answer relevance metric, we employ DeepE- val framework5 treating hint as a kind of answer. To evaluate Readability (Liu and Lee, 2023), we finetune a RoBERTa (Liu et al., 2019) model as a classifier on the OneStopEnglish dataset (Vajjala and Luˇci´c, 2018). The finetuned model categorizes sentences into three classes: Beginner (0), Interme- diate (1), and Advanced (2), reflecting their level of reading difficulty6. To calculate the Answer Leakage Degree, we measure the semantic similarity between each word of a hint and an answer using RoBERTa model. 5https://docs.confident-ai.com/ 6The accuracy of the readability estimator model is 62.3% In addition to the above automatic evaluation ap- proaches involving individual hints, we introduce a new lightweight evaluation method, HINTRANK, for evaluating and ranking hints using pairwise pref- erences. Building on the success of widely-used au- tomatic evaluation metrics like BERTScore (Zhang et al., 2020), BEM (Bulian et al., 2022), Mover- Score (Zhao et al., 2019), and BLEURT (Sellam et al., 2020), which leverage BERT (Devlin et al., 2019) as the core evaluation module and demon- strate its effectiveness, we chose BERT as the foun- dation for the HINTRANK method. Our method determines the better hint within a pair of hints. Fig- ure 2 illustrates the proposed method. In the HIN- TRANK method, we begin by concatenating a given question and its answer with two hints, labeled as Hint1 and Hint2, to create an input compatible with BERT model. Note that in the answer-agnostic scenarios, we avoid appending the answer to the evaluated hints. Such constructed input is then pro- cessed by BERT model, which produces one of two possible outputs: 0 or 1. An output of 0 means that Hint2 is of higher quality than Hint1, whereas an output of 1 suggests that Hint1 is superior to Hint2. As HINTRANK operates on pairwise preferences, (cid:1) comparisons for a question with n it requires (cid:0)n 2 hints, with a runtime complexity of O(n2). 5 Experiments and Results 5.1 Data Analysis The WIKIHINT dataset is split into a train set with 4,500 hints (900 questions) and a test set with 500 hints (100 questions). Table 1 provides the statis- tics of both train and test sets, while Figure 7 in Appendix A shows their distributions according to the question types, indicating that the distributions are well-matched. We next analyze the difficulty levels of ques- tions in WIKIHINT. To evaluate the difficulty, we utilize the Reference-based Question Complexity method (Gabburo et al., 2024). This method com- putes the difficulty of a question by assessing how many of its retrieved passages contain the correct answer and by measuring the relevance between the retrieved passages and the question. It then cal- culates the difficulty score for the question based on such computed features. In particular, we use the DPR method (Karpukhin et al., 2020) as the re- trieval technique, employing an English Wikipedia dump preprocessed by Karpukhin et al. 
(2020) as Dataset Subset Relevance Readability Convergence Familiarity Length Answer Leakage Degree (Avg) Answer Leakage Degree (Max) TriviaHG Entire WIKIHINT Entire Train TriviaHG WIKIHINT Train TriviaHG Test WIKIHINT Test 0.95 0.98 0.95 0.98 0.95 0.98 0.71 0.72 0.73 0.71 0.73 0.83 0.57 0.73 0.57 0.74 0.6 0.72 0.77 0.75 0.75 0.76 0.77 0.73 20.82 17.82 21.19 17.77 20.97 18.32 0.23 0.24 0.22 0.24 0.23 0.24 0.44 0.49 0.44 0.49 0.44 0.47 Table 3: Quality comparison of WIKIHINT and TriviaHG. Relevance, convergence, familiarity, and answer leakage are measured on a scale from 0 to 1, while readability is rated on a scale from 0 to 2 (the lower, the more readable). HINT dataset (and separately, in its train and test subsets) using the relevance, readability, conver- gence, familiarity, length, and answer leakage de- gree. We then compare these values with the ones obtained for TriviaHG dataset (Mozafari et al., 2024) - the only existing hint dataset. The com- parison results are presented in Table 3. The results indicate that in terms of relevance, readability, an- swer leakage degree, and familiarity, the metrics are nearly same between the two datasets. How- ever, WIKIHINT has better convergence values compared to TriviaHG. Additionally, the hints in WIKIHINT are shorter in length, as measured by word count, than TriviaHG. These results indicate that the hints in WIKIHINT are of higher quality. Lastly, Figure 3 demonstrates the negative cor- relation between the convergence scores of hints and their helpfulness as represented by hint ranks assigned by crowdworkers. The plot suggests that the convergence scores can be considered a reli- able metric for evaluating the helpfulness of hints and for hint ranking. 5.2 Human Evaluation To manually evaluate hints, we recruited five inde- pendent evaluators, who were not involved in the dataset generation process, to answer the questions from the test subset of the WIKIHINT. The pro- cess was as follows: 1. Participants were asked to answer the question without using any hints. If they provided a correct answer, they proceeded to the next question. 2. If they could not answer the question correctly, they were asked to review the hints until they could find the correct answer. By providing the correct answer, the participants could move to the next question. 3. If the participants could not answer the question after reviewing all the hints, they were allowed to skip the question. Figure 4 illustrates that all the participants could answer more questions across all of question types Figure 3: Average convergence of the hints of WIKI- HINT based on the hint ranks. the evidence source, and consider the top 30 most relevant passages as the retrieved passages. Fig- ure 8 in Appendix A illustrates the computed ques- tion difficulty of WIKIHINT for train and test sub- sets7. The figure indicates that medium-hard ques- tions are the most common as well as the train and test subsets have quite similar distributions in terms of question difficulty. Also, Table 6 in Appendix A highlights the difficulty levels of questions gener- ated or extracted from various sources. Table 2 reveals an interesting insight regarding the length of hints, which can be considered as one of indicators of helpfulness. The results suggest that high-quality hints tend to be shorter in length (measured by the number of words) than the lower quality hints. 
This finding indicates an inverse correlation between hint length and helpfulness, challenging the intuition that longer hints are more informative or specific, and therefore more useful. In contrast, shorter hints appear to be more concise and easier to follow, likely presenting more helpful information in the first place. We also evaluate the hints in the entire WIKI- 7We classify questions with difficulty scores below 0.33 as easy, those above 0.66 as hard and the rest as medium. 12345Rank5060708090100Avg. of Convergence79.975.172.571.768.272.564.460.557.854.2LLaMA-3.1-70bLLaMA-3.1-8b Model GPT-4 GPT-4 Config Vanilla Vanilla LLaMA-3.1-405b Vanilla LLaMA-3.1-405b Vanilla LLaMA-3.1-70b LLaMA-3.1-70b LLaMA-3.1-70b LLaMA-3.1-70b LLaMA-3.1-8b LLaMA-3.1-8b LLaMA-3.1-8b LLaMA-3.1-8b FTwA Vanilla FTwoA Vanilla FTwA Vanilla FTwoA Vanilla Use Answer Relevance Readability Convergence (LLaMA 8b) Convergence (LLaMA 70b) Familiarity Length Answer Leakage Degree (Avg) Answer Leakage Degree (Max) ✓ ✗ ✓ ✗ ✓ ✓ ✗ ✗ ✓ ✓ ✗ ✗ 0.91 0.92 0.94 0.92 0.88 0.86 0.86 0.87 0.78 0.81 0.76 0.78 1.0 1.1 1.49 1.53 1.5 1.53 1.5 1.56 1.63 1.72 1.7 1.76 0.14 0.12 0.11 0.1 0.09 0.05 0.08 0.06 0.05 0.05 0.03 0.04 0.48 0.47 0.47 0.45 0.42 0.42 0.38 0.38 0.37 0.32 0.32 0.3 0.84 0.81 0.76 0.78 0.84 0.8 0.8 0.76 0.79 0.8 0.8 0.83 26.36 26.93 41.81 50.91 43.69 45.51 51.07 53.24 50.33 54.38 55.02 52.99 0.23 0.24 0.23 0.23 0.22 0.23 0.22 0.22 0.22 0.22 0.22 0.22 0.51 0.52 0.5 0.5 0.48 0.5 0.51 0.5 0.52 0.5 0.51 0.5 Table 4: Evaluation of generated hints based on relevance, readability, convergence, familiarity, length, and answer leakage across different scenarios. LLaMA-3.1-8b and LLaMA-3.1-70b on the train- ing subset of the WIKIHINT dataset to evaluate the LLMs’ capabilities in hint generation when trained specifically on this task. For each question, we assign a hint as the target during the finetuning pro- cess. As a result of this learning strategy, during the inference stage, the finetuned model is prompted to generate one hint for each question. We con- sider two finetuning approaches: answer-aware and answer-agnostic. Given that LLMs typically handle most knowledge questions correctly, the answer- agnostic approach might be sufficient for generat- ing hints. Besides, users generally do not know the answers to their questions when seeking hints. However, the answer-aware approach has its own advantages, too, such as in educational contexts where a teacher might use it to collect materials for class preparation. Due to the importance of both approaches, we chose to investigate fine-tuning of the LLMs in these two distinct scenarios. We found that shorter prompts were more effec- tive in achieving the desired task. Longer, more detailed instructions often led to the model disre- garding the key goal, i.e., generating hints, and instead focusing on irrelevant details. In contrast, shorter prompts increased the likelihood of success- ful task completion. After experimenting, we opted for the following prompt as the system prompt: You are a hint generator for the factoid ques- tions. The user asks you a question and you should generate a hint for that question with- out revealing the answer in the hint. Two distinct user prompts were employed to generate hints within a zero-shot learning strategy. Assuming a question q as an input, the answer- agnostic prompt was ‘Give me the best hint for this Figure 4: The results of human evaluation. 
such as HUMAN, ENTITY, and LOCATION8 when they used hints compared to the case with- out hints. Notably, the greatest improvement was observed in human-related questions, where hints proved most beneficial. Following, entity-related questions led to significant improvement, while location-related questions saw the smallest posi- tive change. This suggests that generating effective hints becomes progressively more challenging for human, entity, and location questions, in that order. 5.3 Model Performance To further assess the quality of hints, we analyze how well LLMs can automatically generate hints for questions. We use the open-source LLaMA models: LLaMA-3.1-8b, LLaMA-3.1-70b, and LLaMA-3.1-405b (Dubey et al., 2024), and GPT- 4 (OpenAI et al., 2023) as the most powerful closed- source LLM for comparison. To explore different scenarios, we finetune9 8We use names as stated in the original dataset. 9We perform model finetuning using the API functions available on together.ai LOCATIONENTITYHUMANQuestion Type020406080100% of Questions57.131.812.395.272.754.434.136.938.1Without HintsUsing Hints Method Config Use Answer Accuracy (%) Correlation (%) Convergence Vanilla Vanilla LLaMA-3.1-8b Vanilla LLaMA-3.1-8b FTwoA LLaMA-3.1-8b LLaMA-3.1-8b FTwA LLaMA-3.1-70b Vanilla LLaMA-3.1-70b Vanilla LLaMA-3.1-70b FTwoA LLaMA-3.1-70b FTwA HINTRANK HINTRANK FTwoA FTwA ✓ ✗ ✓ ✗ ✓ ✗ ✓ ✗ ✓ ✗ ✓ 40.80 60.50 60.95 61.00 61.25 64.00 64.25 64.65 65.30 67.25 68.55 36.70 49.25 49.79 50.74 49.03 50.32 51.32 51.51 52.53 49.06 52.34 Table 5: Comparison between Convergence metric, LLM-based ranking, and HINTRANK. cates that the prompt we use is effective in prevent- ing LLMs from including answers, their synonyms or very similar terms in the generated hints, as the results closely align with those of WIKIHINT shown in Table 3. Figure 6 illustrates that as LLMs decrease in size and capability, the convergence of their hints also diminishes. This trend is observed for both LLaMA-3.1-8b and LLaMA-3.1-70b, used as the cores of the convergence method. This supports our claim in Section 5.1 regarding the correla- tion between convergence and ranks. The fig- ure also shows that the average convergence for the answer-aware approach surpasses that of the answer-agnostic approach, suggesting that includ- ing the answer in the prompt makes it easier for LLMs to generate hints. Furthermore, LLMs fine- tuned on the train subset of the WIKIHINT dataset achieve better convergence scores than their vanilla counterparts, indicating the efficacy of WIKIHINT for finetuning LLMs in hint generation. 5.4 HINTRANK Evaluation Method As outlined in Section 4, we also propose in this paper a novel evaluation method, HINTRANK, for ranking hints using the BERT model. Alongside finetuning BERT, we additionally finetune LLaMA- 3.1-8b and LLaMA-3.1-70b models on the train set of the WIKIHINT to assess the performance of these LLMs in identifying high-quality hints. Similar to the experiments described in Section 5.3, we examine various scenarios including answer- aware and answer-agnostic contexts, and compare vanilla models with their finetuned counterparts. We use the following prompt as the system prompt: Figure 5: Average length of generated hints by LLMs. question: q’. The answer-aware prompt included the answer a as follows: ‘Give me the best hint for this question: q? The answer for the question is a’. 
To evaluate the hint generation capabilities of LLMs across different scenarios, we examine four approaches: Vanilla-wA, Vanilla-woA, FTwA, and FTwoA where wA means With Answer and woA means Without Answer. We test these models on the WIKIHINT test to assess the impact of finetun- ing and the inclusion of answers in the prompt. Figure 5 illustrates that as LLMs decrease in size and hint generation capability, the length of the generated hints increases. This supports our obser- vation made in Section 5.1 of an inverse correlation between hint length and hint quality. Additionally, hints produced by finetuned models are generally shorter than those from vanilla models, indicating that finetuned models may generate higher-quality hints. Moreover, hints in the answer-aware sce- narios are shorter compared to those in answer- agnostic scenarios, suggesting that when the an- swer is provided along with the question, LLMs are able to produce more effective hints. Table 4 presents the quality of generated hints, evaluated with methods such as relevance, read- ability, convergence, answer leakage degree, and familiarity. The results indicate that more power- ful LLMs are capable of generating more relevant hints. Regarding readability, GPT-4 exhibits the highest quality, followed closely by LLaMA-3.1- 405b and LLaMA-3.1-70b, while LLaMA-3.1-8b shows the lowest readability. It also demonstrates that more powerful LLMs can generate more read- able hints. Additionally, finetuned models consis- tently outperform their vanilla counterparts, and answer-aware prompts yield better results com- pared to answer-agnostic prompts for readability and familiarity. The answer leakage degree indi- GPT-4-VanillaLLaMA-3.1-405b-VanillaLLaMA-3.1-70b-FTLLaMA-3.1-70b-VanillaLLaMA-3.1-8b-FTLLaMA-3.1-8b-Vanilla010203040506070Avg. of Length26.3641.8143.6945.5150.3354.3826.9350.9151.0753.2455.0252.99Avg of Answer-Aware/AgnosticAnswer-AwareAnswer-Agnostic Figure 6: Average convergence of the generated hints by different LLMs. The order of LLMs is determined by their capabilities and the parameter count. You are a hint evaluator for the factoid ques- tions. The user gives you a question and two hints and you should specify which hint for that question is a better hint and more helpful. Two distinct user prompts are employed to eval- uate hints within a zero-shot learning strategy. As- suming a question q as a question and h1 and h2 as a pair of hints, the answer-agnostic prompt is: Which hint is better to find the answer of this question: q. Hint_1: h1. Hint_2: h2. Just choose between "Hint_1" and "Hint_2" with- out any explanations. and the answer-aware prompt with answer a is: Which hint is better to find the answer of this question: q. The answer for this question is a. Hint_1: h1. Hint_2: h2. Just choose between "Hint_1" and "Hint_2" without any explanations. We benchmark HINTRANK against the Conver- gence metric which turned out to be useful for hint ranking assessment as indicated in Figure 3. To convert pairwise rankings to listwise rankings, we apply the Bradley–Terry model (Bradley and Terry, 1952). We evaluate the correlation between the rankings with Pearson Correlation (Mining, 2006). Table 5 outlines the key features and differences among various scenarios. The results indicate that with the increase in the size and power of LLMs, both accuracy and correlation improve. 
Addition- ally, the answer-aware approach yields better out- comes compared to the answer-agnostic method, suggesting that the presence of an answer enables LLMs to evaluate hints more effectively. More- over, finetuned versions outperform their vanilla counterparts, demonstrating that the WIKIHINT dataset is well-suited for model fine-tuning to rank hints. Surprisingly, the BERT-base method outper- forms LLMs, including finetuned versions in the answer-aware scenario. This holds true for both Bert-FTwoA and Bert-FTwA, although BERT-base also performs better in the answer-aware approach compared to answer-agnostic. BERT-base methods achieve higher accuracy than LLMs and conver- gence, but in terms of correlation, LLaMA-3.1-70b exhibits the best performance. The effectiveness of BERT-base methods may be attributed to the strengths of encoder-based models like BERT in classification tasks over decoder-based models. Uti- lizing BERT-based models instead of LLMs en- hances the speed and accessibility of HINTRANK, reducing computational demands. Figure 13 in Ap- pendix A shows that accuracy improves as the rank difference between hints increases, indicating it’s harder to correctly order hints with closer ranks. 6 Conclusions In this paper, we introduced the first manually cre- ated dataset for hint generation and hint ranking. We also presented a new lightweight method for evaluating and ranking hints. To demonstrate the effectiveness of our dataset, we conducted exper- iments where humans attempted to answer ques- tions with and without the use of hints. The results confirm that the hints are of sufficient quality to assist users. We then finetuned LLMs using our dataset, prompting them to generate new hints for different questions. The high quality of the gener- ated hints indicates that our dataset is well-suited for finetuning LLMs for HG task. We also fine- tuned BERT and LLMs on the dataset for the task of hint ranking and evaluated their performance. The results reveal that encoder-based models out- perform decoder-based models in hint ranking. GPT-4-VanillaLLaMA-3.1-405b-VanillaLLaMA-3.1-70b-FTLLaMA-3.1-70b-VanillaLLaMA-3.1-8b-FTLLaMA-3.1-8b-Vanilla010203040506070Avg. of Convergence14.4610.848.745.365.194.8312.109.648.015.643.194.28LLaMA-3.1-8bAvg of Answer-Aware/AgnosticAnswer-AwareAnswer-AgnosticGPT-4-VanillaLLaMA-3.1-405b-VanillaLLaMA-3.1-70b-FTLLaMA-3.1-70b-VanillaLLaMA-3.1-8b-FTLLaMA-3.1-8b-Vanilla010203040506070Avg. of Convergence48.2146.9542.0741.8637.0131.5647.1145.0638.3138.2431.5230.00LLaMA-3.1-70bAvg of Answer-Aware/AgnosticAnswer-AwareAnswer-Agnostic In future, we plan to generate personalized hints tailored to the knowledge of askers. The main chal- lenge here will be to develop appropriate datasets and solutions for user profiling. Limitations Our study has the following limitations: • The need for generative capabilities in hint generation task necessitates the use of LLMs. However, this dependency is a limitation as fine-tuning and prompting LLMs require ex- tensive computational resources and are time- consuming. • Our research focus on factoid questions may limit its applicability to other types of ques- tions that involve more complex or abstract answers. Factoid questions, by their nature, provide clear and concrete answers, which simplifies automated hint generation and eval- uation but may not fully capture the breadth of human inquiry. • The WIKIHINT dataset is exclusively written in the English language. 
While this facili- tates accessibility for a global audience and en- sures compatibility with most existing Large Language Models, it also limits the dataset’s applicability in multilingual or non-English contexts, potentially excluding non-English speakers and diverse linguistic data. Ethical Considerations Our study utilizes GPT models, which are covered by the OpenAI License and Apache-2.0 license, and the LLaMA model, which is distributed under Meta’s LLaMA 2 Community License Agreement. We comply with these licensing agreements in all applications. Additionally, the datasets we use are sourced from repositories that are approved for aca- demic use. The artifacts developed during our re- search are made available under the MIT license to facilitate straightforward modifications and use by the research community. We ensure that our data management, model training, and dissemination practices meet ethical standards and legal require- ments associated with each artifact we use. References Heba Abdel-Nabi, Arafat Awajan, and Mostafa Z Ali. 2023. Deep learning-based question answering: a survey. Knowledge and Information Systems, 65(4):1399–1485. Riordan Alfredo, Vanessa Echeverria, Yueqiao Jin, Lix- iang Yan, Zachari Swiecki, Dragan Gaševi´c, and Roberto Martinez-Maldonado. 2024. Human-centred learning analytics and ai in education: A systematic literature review. Computers and Education: Artifi- cial Intelligence, 6:100215. Albert Bandura. 2013. The role of self-efficacy in goal- based motivation. New developments in goal setting and task performance, pages 147–157. Tiffany Barnes and John Stamper. 2008. Toward auto- matic hint generation for logic proof tutoring using historical student data. In Intelligent Tutoring Sys- tems, pages 373–382, Berlin, Heidelberg. Springer Berlin Heidelberg. Ralph Allan Bradley and Milton E. Terry. 1952. Rank analysis of incomplete block designs: I. the method of paired comparisons. Biometrika, 39(3/4):324– 345. Jannis Bulian, Christian Buck, Wojciech Gajewski, Ben- jamin Börschinger, and Tal Schuster. 2022. Tomayto, tomahto. beyond token-level answer equivalence for question answering evaluation. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 291–305, Abu Dhabi, United Arab Emirates. Association for Computa- tional Linguistics. Ali Darvishi, Hassan Khosravi, Shazia Sadiq, Dragan Gaševi´c, and George Siemens. 2024. Impact of ai as- sistance on student agency. Computers & Education, 210:104967. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, Aiesha Letman, Akhil Mathur, Alan Schelten, Amy Yang, et al. 2024. The Llama 3 Herd of Models. arXiv e-prints, page arXiv:2407.21783. Shahul Es, Jithin James, Luis Espinosa Anke, and Steven Schockaert. 2024. RAGAs: Automated evalu- ation of retrieval augmented generation. In Proceed- ings of the 18th Conference of the European Chap- ter of the Association for Computational Linguistics: System Demonstrations, pages 150–158, St. Julians, Malta. Association for Computational Linguistics. 
Matteo Gabburo, Nicolaas Jedema, Siddhant Garg, Leonardo Ribeiro, and Alessandro Moschitti. 2024. Measuring question answering difficulty for retrieval- augmented generation. In ACL 2024. Gemini Team, Rohan Anil, Sebastian Borgeaud, Jean- Baptiste Alayrac, Jiahui Yu, Radu Soricut, Johan Schalkwyk, Andrew M. Dai, Anja Hauth, Katie Milli- can, David Silver, Melvin Johnson, et al. 2023. Gem- ini: A Family of Highly Capable Multimodal Models. arXiv e-prints, page arXiv:2312.11805. Richard Heersmink. 2024. Use of large language mod- els might affect our cognitive skills. Nature Human Behaviour, 8(5):805–806. Gregory Hume, Joel Michael, Allen Rovick, and Martha Evens. 1996. Hinting as a tactic in one-on-one tutor- ing. The Journal of the Learning Sciences, 5(1):23– 47. Anubhav Jangra, Jamshid Mozafari, Adam Jatowt, and Smaranda Muresan. 2024. Navigating the Landscape of Hint Generation Research: From the Past to the Future. arXiv e-prints, page arXiv:2404.04728. Adam Jatowt, Calvin Gehrer, and Michael Färber. 2023. Automatic hint generation. In Proceedings of the 2023 ACM SIGIR International Conference on Theory of Information Retrieval, ICTIR ’23, page 117–123, New York, NY, USA. Association for Com- puting Machinery. Gregor Jošt, Viktor Taneski, and Sašo Karakatiˇc. 2024. The impact of large language models on program- ming education and student learning outcomes. Ap- plied Sciences, 14(10). Vladimir Karpukhin, Barlas Oguz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. 2020. Dense passage retrieval for open- domain question answering. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6769–6781, Online. Association for Computational Linguistics. Hassan Khosravi, Simon Buckingham Shum, Guanliang Chen, Cristina Conati, Yi-Shan Tsai, Judy Kay, Si- mon Knight, Roberto Martinez-Maldonado, Shazia Sadiq, and Dragan Gaševi´c. 2022. Explainable ar- tificial intelligence in education. Computers and Education: Artificial Intelligence, 3:100074. Ekaterina Kochmar, Dung Do Vu, Robert Belfer, Varun Gupta, Iulian Vlad Serban, and Joelle Pineau. 2022. Automated data-driven generation of personalized pedagogical interventions in intelligent tutoring sys- tems. International Journal of Artificial Intelligence in Education, 32(2):323–349. Ghader Kurdi, Jared Leo, Bijan Parsia, Uli Sattler, and Salam Al-Emari. 2020. A systematic review of auto- matic question generation for educational purposes. International Journal of Artificial Intelligence in Ed- ucation, 30:121–204. Uszkoreit, Quoc Le, and Slav Petrov. 2019. Natu- ral questions: A benchmark for question answering research. Transactions of the Association for Compu- tational Linguistics, 7:452–466. Fengkai Liu and John Lee. 2023. Hybrid models for sen- tence readability assessment. In Proceedings of the 18th Workshop on Innovative Use of NLP for Build- ing Educational Applications (BEA 2023), pages 448– 454, Toronto, Canada. Association for Computational Linguistics. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Man- dar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining ap- proach. ArXiv, abs/1907.11692. Chao-Yi Lu and Sin-En Lu. 2021. A survey of ap- proaches to automatic question generation:from 2019 to early 2021. In Proceedings of the 33rd Conference on Computational Linguistics and Speech Processing (ROCLING 2021), pages 151–162, Taoyuan, Taiwan. 
The Association for Computational Linguistics and Chinese Language Processing (ACLCLP). Vaibhav Mavi, Anubhav Jangra, and Adam Jatowt. 2022. A survey on multi-hop question answering and gen- eration. arXiv preprint arXiv:2204.09140. Vaibhav Mavi, Anubhav Jangra, and Adam Jatowt. 2024. Multi-hop question answering. Found. Trends Inf. Retr., 17(5):457–586. Jessica McBroom, Irena Koprinska, and Kalina Yacef. 2021. A survey of automated programming hint gen- eration: The hints framework. ACM Computing Sur- veys (CSUR), 54(8):1–27. What Is Data Mining. 2006. Data mining: Concepts and techniques. Morgan Kaufinann, 10(559-569):4. Jamshid Mozafari, Anubhav Jangra, and Adam Jatowt. 2024. Triviahg: A dataset for automatic hint gen- eration from factoid questions. In Proceedings of the 47th International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR ’24, page 2060–2070, New York, NY, USA. Association for Computing Machinery. Preksha Nema and Mitesh M. Khapra. 2018. Towards a better metric for evaluating question generation systems. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3950–3959, Brussels, Belgium. Association for Computational Linguistics. OpenAI. 2022. Introducing chatgpt. https://openai. com/blog/chatgpt. Tom Kwiatkowski, Jennimaria Palomaki, Olivia Red- field, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Jacob Devlin, Ken- ton Lee, Kristina Toutanova, Llion Jones, Matthew Kelcey, Ming-Wei Chang, Andrew M. Dai, Jakob OpenAI, Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, et al. 2023. GPT-4 Technical Report. arXiv e-prints, page arXiv:2303.08774. Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q. Weinberger, and Yoav Artzi. 2020. Bertscore: Eval- In International uating text generation with bert. Conference on Learning Representations. Tiancheng Zhao, Xiaopeng Lu, and Kyusong Lee. 2021. SPARTA: Efficient open-domain question answering via sparse transformer matching retrieval. In Pro- ceedings of the 2021 Conference of the North Amer- ican Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 565–575, Online. Association for Computational Lin- guistics. Wei Zhao, Maxime Peyrard, Fei Liu, Yang Gao, Chris- tian M. Meyer, and Steffen Eger. 2019. MoverScore: Text generation evaluating with contextualized em- beddings and earth mover distance. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th Interna- tional Joint Conference on Natural Language Pro- cessing (EMNLP-IJCNLP), pages 563–578, Hong Kong, China. Association for Computational Lin- guistics. Jenna Panter, Oliver Mytton, Stephen Sharp, Søren Brage, Steven Cummins, Anthony A Laverty, Katrien Wijndaele, and David Ogilvie. 2018. Using alterna- tives to the car and risk of all-cause, cardiovascular and cancer mortality. Heart, 104(21):1749–1755. Thomas W Price, Yihuan Dong, Rui Zhi, Benjamin Paaßen, Nicholas Lytle, Veronica Cateté, and Tiffany Barnes. 2019. A comparison of the quality of data- driven programming hint generation algorithms. In- ternational Journal of Artificial Intelligence in Edu- cation, 29:368–395. Libo Qin, Qiguang Chen, Xiachong Feng, Yang Wu, Yongheng Zhang, Yinghui Li, Min Li, Wanxiang Che, and Philip S. Yu. 2024. Large Language Mod- arXiv e-prints, page els Meet NLP: A Survey. arXiv:2405.12819. 
Pranav Rajpurkar, Robin Jia, and Percy Liang. 2018. Know what you don’t know: Unanswerable ques- tions for SQuAD. In Proceedings of the 56th Annual Meeting of the Association for Computational Lin- guistics (Volume 2: Short Papers), pages 784–789, Melbourne, Australia. Association for Computational Linguistics. Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natu- ral Language Processing, pages 2383–2392, Austin, Texas. Association for Computational Linguistics. Thibault Sellam, Dipanjan Das, and Ankur Parikh. 2020. BLEURT: Learning robust metrics for text genera- tion. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7881–7892, Online. Association for Computational Linguistics. Adam Trischler, Tong Wang, Xingdi Yuan, Justin Har- ris, Alessandro Sordoni, Philip Bachman, and Kaheer Suleman. 2017. NewsQA: A machine comprehen- sion dataset. In Proceedings of the 2nd Workshop on Representation Learning for NLP, pages 191–200, Vancouver, Canada. Association for Computational Linguistics. Ellen L. Usher and Frank Pajares. 2006. Sources of aca- demic and self-regulatory efficacy beliefs of entering middle school students. Contemporary Educational Psychology, 31(2):125–141. Sowmya Vajjala and Ivana Luˇci´c. 2018. On- eStopEnglish corpus: A new corpus for automatic readability assessment and text simplification. In Pro- ceedings of the Thirteenth Workshop on Innovative Use of NLP for Building Educational Applications, pages 297–304, New Orleans, Louisiana. Association for Computational Linguistics. Ruqing Zhang, Jiafeng Guo, Lu Chen, Yixing Fan, and Xueqi Cheng. 2021. A review on question generation from natural language text. ACM Trans. Inf. Syst., 40(1). A Appendix Figure 7: The distribution of Train and Test subsets. Figure 8: Question difficulty based on different question types. Source ChatGPT NQ SQuAD 2.0 Difficulty 0.43 0.34 0.38 Table 6: Question distributions based on the their sources and difficulty. TrainHUMAN: 58.1%ENTITY: 21.8%LOCATION: 20.0%TestHUMAN: 57.0%ENTITY: 22.0%LOCATION: 21.0%TrainHUMAN-Easy: 14.6%HUMAN-Medium: 37.0%HUMAN-Hard: 6.6%ENTITY-Easy: 2.7%ENTITY-Medium: 16.9%ENTITY-Hard: 2.2%LOCATION-Easy: 4.6%LOCATION-Medium: 13.8%LOCATION-Hard: 1.7%TestHUMAN-Easy: 14.0%HUMAN-Medium: 36.0%HUMAN-Hard: 7.0%ENTITY-Easy: 3.0%ENTITY-Medium: 17.0%ENTITY-Hard: 2.0%LOCATION-Easy: 5.0%LOCATION-Medium: 14.0%LOCATION-Hard: 2.0% Figure 9: The MTurk Worker interface for the hint generation task. Figure 10: The summarized instructions for the hint generation task. Figure 11: The detailed instructions for the hint generation task. Figure 12: Good and bad examples for the hint generation task. Attribute Description The content of the question. The major category of the question. The specific sub-category of the question. Content of an entity Type of entity (e.g., GPE, PERSON). Start index of the entity in the question. End index of the entity in the question. The title of the corresponding Wikipedia page to the entity in question. question major minor entity ent_type start_index end_index wikipedia_page_title wiki_views_per_month Number of views per month of the Wikipedia page for the entity in question. normalized_views readability familiarity difficulty Views normalized to scale from 0 to 1 for the entity in the question. 
Indicates the readability score Indicates the familiarity score Indicates the difficulty level of the question Table 7: A detailed description of attributes of a question in WIKIHINT. Attribute Description The actual answer. Content of an entity as identified within the answer. Type of entity. Start index of the entity in the answer. End index of the entity in the answer. The title of the corresponding Wikipedia page to the entity in the answer. answer entity ent_type start_index end_index wikipedia_page_title wiki_views_per_month Number of views per month of the Wikipedia page for the entity in the answer. normalized_views familiarity difficulty Views normalized to scale from 0 to 1 for the entity in the answer. Indicates the familiarity score Indicates the difficulty level of the answer Table 8: A detailed description of attributes of an answer in WIKIHINT. Attribute Description Hint provided to assist with the question. URL source of the hint. Content of entities mentioned in the hint. Category of each entity (e.g., PERSON, GPE) mentioned in the hint. Specific start index where the entity is found in the hint text. Specific end index where the entity is found in the hint text. The title of the corresponding Wikipedia page to the entities in the hint. hint source entity ent_type start_index end_index wikipedia_page_title wiki_views_per_month Number of views per month of the Wikipedia pages for the entities of the hint. normalized_views relevance readability convergence familiarity answer_leakage rank Views normalized to scale from 0 to 1 for the entity in the hint. Indicates the relevance score Indicates the readability score Indicates the convergence score Indicates the familiarity score Indicates the answer leakage score Priority or helpfulness rating of the hint Table 9: A detailed description of attributes of a hint in WIKIHINT. Question Fifth Best Hint What is the driest country in sub-Saharan Africa? The earliest settlers in this country were in the 18th century and crossed the Orange River to move into the area. Fourth Best Hint The name of this country is derived from the oldest desert on Earth. Third Best Hint This country’s coat of arms is a shield with the same design as the flag, 2 antelopes and a red blue and white bird. Second Best Hint This country is a country in southern Africa with its western border along the Best Hint Atlantic Ocean. This country’s flag is blue and green with a red stripe down the middle and the symbol of a sun in the top left corner. Table 10: First example of the WIKIHINT. Question Fifth Best Hint What artist received the Presidential Medal of Freedom posthumously in 2018, in recognition of his contributions to American culture? The artist had karate as a lifelong interest, including its moves in his perfor- mances. Fourth Best Hint The artist served in the Army after he became famous. Third Best Hint Second Best Hint The artist tried to establish a career in films with pictures such as Jailhouse The artist never performed outside North America. Best Hint Rock and Fun in Acapulco. The artist is known for hits such as "Love Me Tender" and "Jailhouse Rock". Table 11: Second example of the WIKIHINT. Question What British comedy team is famous for its ’Four Yorkshiremen’ sketch, por- traying exaggerated tales of hardship with humor? Fifth Best Hint A giant cupid’s foot is repeatedly used in the show. Fourth Best Hint This British comedy troupe is formed in 1969 consisting of 6 members. 
Third Best Hint The team was awarded the AFI Star Award by the American Film Institute in 1998.
Second Best Hint The team received the BAFTA Award for Outstanding British Contribution to Cinema at the 41st British Academy Film Awards in 1988.
Best Hint The Holy Grail and Life of Brian are some of their greatest comedy films.
Table 12: Third example of the WIKIHINT.
Question What fictional pirate is known for carrying a compass that doesn't point north but rather to what the user wants most?
Fifth Best Hint One of the pirate's most famous quotes is "Now, bring me that horizon".
Fourth Best Hint The fictional pirate appeared in the video game series "Kingdom Hearts".
Third Best Hint The ship of the fictional pirate is called the Black Pearl.
Second Best Hint The fictional pirate has a blood debt to another character called Davy Jones.
Best Hint The fictional pirate is played by Johnny Depp in the film series.
Table 13: Fourth example of the WIKIHINT.
• A hint must not include the exact answer explicitly.
• A hint must be a sentence.
• A hint must be specific, not generic.
• A hint must be from the corresponding Wikipedia page.
• A hint must have a unique rank.
Table 14: A detailed criteria or standards used for verifying the generated hints during the selection process.
Figure 13: Accuracy of HINTRANK for different hint pairs in different scenarios. The element at position (r, c) represents the accuracy when comparing Hint1 at rank r to Hint2 at rank c.
synthetic_cpt
4
Tailored-LLaMA_Optimizing_Few-Shot_Learning_in_Pruned_LLaMA_Models_with_Task-Specific_Prompts.pdf
Tailors: New Music Timbre Visualizer to Entertain Music Through Imagery
(Korean title: Development of a Visualization System Emphasizing Musical Timbre: An Analysis Centered on Imagery Formation and Music Enjoyment)

Contents

Abstract
List of Tables
List of Figures
I. Introduction
II. Related Works & Background
  2.1. Timbre
  2.2. Music Visualization and Music Visual Imagery
III. Study
  3.1. Identity of Tailors
  3.2. System Design and Mapping Rule of Tailors
    3.2.1. System Design
    3.2.2. Mapping Rule for Vocal Timbre
    3.2.3. Mapping Rule for Background Timbre
  3.3. Method
    3.3.1. Participants and Procedure
    3.3.2. Materials and Metrics
IV. Results
  4.1. Did timbral music visualization through Tailors well convey the timbral features of music? (RQ1)
    4.1.1. Overall Results of the Timbre Survey
    4.1.2. Timbre Survey Results by Groups
  4.2. Did Tailors make music entertainment better through the improved music visual imagery? (RQ2)
    4.2.1. Overall Results of the Imagery Survey
    4.2.2. Overall Results of the Entertainment Survey
    4.2.3. Multiple Linear Regression Analysis
    4.2.4. Coefficients Comparison by Fisher Transformation
  4.3. Post Survey Results
    4.3.1. Rankings for the Best Timbre Expression
    4.3.2. Rankings for the Best Music Experience
    4.3.3. Rankings for the Willingness to Use Again
V. Discussion
  5.1. Possibilities of Tailors
  5.2. Limitations and the Future Work
VI. Conclusion
References
Appendix
  Appendix A. User Interface of Tailors
  Appendix B. Information of Music Used in the Experiment
  Appendix C. Visualized Output of Tailors
  Appendix D. Tables for Wilcoxon Signed-Rank Analysis by Surveys
  Appendix E. Post Survey Questionnaires
Curriculum Vitae

ABSTRACT

In this paper, I have implemented a timbre visualization system called 'Tailors.' Through an experiment with 27 MIR users, Tailors was found to be effective in conveying the timbral warmth, brightness, depth, shallowness, hardness, roughness, and sharpness of music compared to the music-only condition and a basic visualization. All scores of Tailors in the imagery and music entertainment surveys were the highest among the three music conditions. Multiple linear regression analysis between timbre and imagery and between imagery and entertainment shows significant and positive correlations. Coefficient comparisons using the Fisher transformation show that Tailors made users' music entertainment better through improved music visual imagery. The post-survey results show that Tailors ranked first for the best timbre expression, the best music experience, and the willingness to use it again. Because some users felt a burden on the eye, future work is left on a data-driven approach to the mapping rule of timbre visualization that gains consent from many users. Reducing the set of timbre features to focus on those that Tailors can express well is also discussed, along with future work on expressing timbre in a more artistic way using the sense of space.

LIST OF TABLES

Table 1. Demographic information of the participants
Table 2a. Music Timbre Survey Questionnaires
Table 2b. Music Imagery Survey Questionnaires
Table 2c. Music Entertainment Survey Questionnaires
Table 3a. Multiple Linear Regression Result of Timbre and Imagery from Tailors
Table 3b. Multiple Linear Regression Result of Imagery and Entertainment from Tailors
Table 4a. Fisher Transformation Result for Timbre → Imagery (A vs. C)
Table 4b. Fisher Transformation Result for Timbre → Imagery (B vs. C)
Table 5a. Fisher Transformation Result for Imagery → Entertain (A vs. C)
Table 5b. Fisher Transformation Result for Imagery → Entertain (B vs. C)

LIST OF FIGURES

Figure 1. Overall System Structure of Tailors
Figure 2. Timbre Visualization Mapping Rule of Tailors
Figure 3. Three music conditions (A, B, C) in the experiment
Figure 4. Boxplot for each timbral feature value after experiencing Tailors
Figure 5. Boxplot for each imagery feature value after experiencing Tailors
Figure 6. Boxplot for each entertainment feature value after experiencing Tailors
Figure 7. Ranking Comparisons for the Best Timbre Expression
Figure 8. Ranking Comparisons for the Best Music Experience
Figure 9. Ranking Comparisons for the Willingness to Use Again

I. INTRODUCTION

Music is a vital component of our lives, not only for enjoying the music itself but also for creating visual imagery and having a richer music experience. Through music visual imagery, MIR (Music Information Retrieval) users can set the direction of their music entertainment. Because visual imagery in music can increase listeners' enjoyment [37], it is essential that this imagery be formed in the best way. It can be strengthened by secondary creations of the music, like music visualizations [33], because there is a strong connection between the motor imagery in a music visualization and the auditory imagery of the music [33, 35]. This gives importance to the philosophy of music visualization: to make MIR users well entertained by the music through the visualization. Music visualizations are created primarily based on what their creators value most.

Timbre, a complex and high-level music feature, has so far been neglected in music visualization. The recent trend of music visualization has focused on low-level features such as the pitch and volume of the music. These features are simpler and relatively easier to express than high-level features like timbre. The reason timbre has been hard to represent with visuals is that there has been too little agreement on criteria for describing it. Many researchers have reduced the timbral features of music to the kinds of instruments used in the music [26, 27, 28], and this led to an exhaustion of ideas in music visualization because there were not enough visual components to express the kinds of instruments. Semantic descriptors of timbre, on the other hand, were a breakthrough in the field of music visualization. Recently, more techniques express timbre using semantic descriptors [56], which gives the advantage of describing the subjective feelings of users and extends naturally to music visualization. However, compared to the importance of timbre, it is still not widely used for music visualization, and there is little research on good mapping rules using timbral features [31]. This shows the possibilities of, and the need for, research on music visualization using timbre and its effects on music visual imagery and entertainment.

In this paper, I have developed a web-based music visualization system using the timbral features of music, called Tailors, to prove its effectiveness in timbre visualization compared to a music-only condition and a basic visualization without timbral features. Furthermore, I want to answer the two research questions below through Tailors:

RQ1. Did timbral music visualization through Tailors well convey the timbral features of music?
RQ2. Did Tailors make music entertainment better through improved music visual imagery?
To prove these research questions, I conducted the main study of 27 participants listening to twenty pieces of music by three conditions (only music, basic visualization, and timbre visualization by Tailors). Each participant did the three kinds of the survey every time they experienced the music, producing 1,620 results for the total (27 participants * 20 pieces of music * 3 conditions). In addition, a demographic and a post-survey were included in the experiment to provide more insights into the results. 1 II. RELATED WORKS & BACKGROUND is a subjective aesthetic 2. 1. Timbre Timbre, also known as tone color, can show a unique spirit that music initially has. While we listen to music as a harmony of various components, timbre is a vital component that defines the music's overall mood and sensation. Different criteria defined timbre in multiple ways in the last decades [2]. Dolan [3] described timbre as a concept of discriminator of nonidentical sounds with similar pitch and loudness that are not the same. Wallmark [4] defined timbre as both a consequence of material sound and the inner sound of a listener, judgment of minds. Siedenburg and McAdams [5] put which together these opinions about timbre and presented timbre terms in four concepts. That timbre is a 1) perceptual attribute, 2) quality and a contributor to source identity, 3) functions on different scales of detail, and 4) property of fused auditory events. Because timbre depends very strongly on the acoustic properties of the music [1], identifying each content of acoustic properties is essential to knowing each music's timbral features. There were several attempts to set up the semantic descriptors for timbral features. Disley and Howard [7] gathered forty-five listeners' adjective words related to timbre. And they advanced their results by collecting words from musicians consistently used to describe the timbre of musical instruments in their following study [8]. Porcello [9] defined a taxonomy for timbre verbalizations from sound experts and distinguished these descriptors on vocal and non-vocal timbres. Because timbral features are multidimensional [19, 22], it is essential to clarify each dimension and its meaning to understand timbre. Pearce et al. [6] collected and grouped 145 timbral attributes considering group discussion results and filtering by search term frequencies. Bismarck [11] stated that sharpness is a timbral component that distinguishes the pitch and loudness of verbal attributes. Stark [12] mentioned timbral brightness in singing voice pedagogy, highlighting it in vocal resonances, a feature related to high frequencies' spectral energy [21]. Pearce et al. [14] have built a perceptual model for hardness because in acoustic musical acoustics. its Pressnitzer et al. [16] claimed that roughness plays an important role in musical tension perception because it is proposed on a sensory basis attribute. Vassilakis [17] also argued the importance of auditory roughness in the aesthetic aspect of music. Wen [20] defined depth as a type of music with a poetic and dreamy mood and found that it relates to different music features like tempo, MFCC, etc. For the warmth of the music, it was found that it is highly associated with pitches, which is a component that can give a pleasant sound sensation [21]. Due to this multidimensionality of timbral features, the need to classify these features automatically using the feature extraction model has arisen. 
Although a lot of research organized multiple timbres of instruments in a piece of single music [26, 27, 28], there was little timbral discrimination on a single musical instrument. Loureiro et al. [24] investigated classification methods for a single sound using a clustering algorithm but with á no labels of timbre descriptors. Recently, Oliv n and Bl zquez [23] developed a multi-head attention-based model that classifies different timbres of the instruments. Although this model achieved an overall F1 value of 0.62, the model was not for organizing vocal timbres. Sha and Yang [25] build an automatic classifier on singing voice using 387 popular Chinese songs, achieving 79.84% accuracy; however, with no instrument timbres. it was undervalued importance in á 2. 2. Music Visualization and Music Visual Imagery To express mood, emotion, and other audio features, music visualizations were used to convey these sentiments with information to MIR users. Music visualizations are direct reflections of musical representations with good explanatory power [30], which can deepen our understanding of musical pieces [29]. For those who are not specialists in music, 2 relies on cross-modal association while music It shows that music visualization can cause synesthesia visualization can help them comprehend the music's feelings and expressions with various effects [32]. In addition, music visualization can provide an intuitive visual presentation of music to MIR users, which is highly related to synesthesia [33]. Synesthesia, which can be generated by music, heavily listening experience [34]. It happens because there's a strong connection between auditory and motor imagery, showing similar consequences function in the neurologically [35, 41]. interactive visual and audio process [33]. Even though visual imagery could be generated with most of the expressive features of music [36, 37, 38], timbral features are an essential dimension of developing music visual imagery rather than other musical attributes [40]. Halpern et al. [41] found that perception and visual imagery access a similar cognitive representation of timbre and that timbre imagery activated other auditory areas connected to the visual imagery. Bailes [39] experimented with vivid timbre perception and imagery for music. She found that the participants could internalize timbral features of music using the ability to discriminate timbres through the study. This shows that timbre is a crucial characteristic of the sound that can generate visual music imagery. in musical performance and imagery Consequently, users' music visual imagery, especially the one that is evoked by timbre, can be strengthened by music visualizations [36], leading them to enhance their musical experience with enjoyment [37]. According to a recent music visualization survey paper [31], it was found that timbre was timidly used in features for music visualizers despite the importance of music visual imagery and enjoyment. Smith and Williams [42] defined timbre as the most complex musical attribute to visualize because it depends on many factors. However, there were several attempts to imagine music with timbral features. Li et al. [43] employed three pairs of images containing the piano timbre's brightness, shape, and size. They have defined soft timbres as round shapes with cold colors and harsh timbres as angular shapes with warm colors. 
Giannakis [44] mapped timbral features such as sharpness, compactness, and sensory dissonance to visual textures and developed a music visualization system called Sound Mosaics. Although Sound Mosaics was comprehensible to users in the timbral aspect, it had limitations in the small sample size and low statistical significance of its experiment results. Siedenburg [45] also explored various representations of real-time timbral visualization using a music programming environment, leaving as future work the need for more creativity in music visualization.

III. STUDY

3.1. Identity of Tailors

The identity of Tailors came from the semantic descriptors of the timbral features of music and from applying them to music visualization. Beyond discriminating instruments, these descriptors can convey the specific meaning and mood of the timbral features well. As mentioned in the Related Works and Background section, several timbral visualizations already exist [44, 45], but they have not yet strengthened the music visual imagery of the user. If Tailors could support users' music visual imagery, those users could be more entertained by the music through the imagery that pops up in their minds. I wanted to define this process as a 'tailoring' of the musical piece to each user through timbre visualization, like making goods fit a given target. Through semantic descriptors as the mapping component for timbre visualization, Tailors is able to help users form music visual imagery and lead it to music entertainment.

To express and reflect the timbral features well in the music visualization, I brought the art trend of Impressionism into Tailors' visualization. Impressionism is an art style that nicely captures the instantaneous changes in the shape of natural objects according to the transformation of light [46]. Impressionist artists used color segmentation techniques to give a sense of change in form, showing multiple colors to express one object. I thought such impressionist-style visuals could represent the timbral features of the music well through color, tone, and texture. In addition, impressed by the nature-and-sound artwork by Anna Marinenko [47], Tailors took the view that music visualization should express the whole flow of the sound through nature. For this reason, I used three types of nature in Tailors: cloud, water, and ice. Additionally, for easy mapping and to make the differences in the numbers from the timbral model larger, the mapping rule in Tailors used min-max normalization. Appendix A shows the user interface of Tailors.

3.2. System Design and Mapping Rule of Tailors

3.2.1. System Design

Figure 1. Overall System Structure of Tailors

Figure 1 shows the whole system structure of Tailors. First, to deliver the timbral features well through the visualization, I separated the original music into the vocal audio and the background music using the audio source separation library demucs [48]. Then, because the visualization elements that can best express vocal and background music are different, I separated the mapping rules and the expressed timbre components in Tailors. Due to these differences in visualization components, methods, and applied timbral features for the vocal and the background, I macroscopically divided the visualization into the object (vocal audio) and the background, which shows the flow (background music).
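A minimal sketch of the separation step described above, assuming the demucs package [48] is installed and that its command-line interface accepts a two-stem vocals option (flag names can differ between demucs versions); the file paths are placeholders.

```python
import subprocess
from pathlib import Path

def separate_vocals(track: Path, out_dir: Path) -> None:
    """Split a song into a vocal stem and an accompaniment ("no_vocals") stem.

    Shells out to the demucs CLI; the --two-stems=vocals option requests exactly
    the vocal/background split that Tailors needs before feature extraction.
    """
    subprocess.run(
        ["python", "-m", "demucs", "--two-stems=vocals", "-o", str(out_dir), str(track)],
        check=True,
    )

# Example (placeholder paths):
# separate_vocals(Path("songs/track01.mp3"), Path("separated/"))
# -> separated/<model_name>/track01/vocals.wav and no_vocals.wav
```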
Then, using the timbral models from Audio Commons [49], the timbral features that best express the original sound were extracted for both the vocal audio and the background music. For the vocal audio, the timbral features of roughness [13], sharpness [11], and warmth [52] were extracted. For the background music, the timbral features of roughness [16, 17], depth [50], brightness [15], hardness [10], and warmth [51] were extracted. For the visualization of the timbral features, three.js [53], a JavaScript library, was used for the object (the vocal visualization), and Vanta.js [54] was used to render the 3D animated background.

Figure 2. Timbre Visualization Mapping Rule of Tailors

3.2.2. Mapping Rule for Vocal Timbre

As shown in Figure 2, the visualization components and the timbral mapping rule in Tailors are as follows. The object, representing the vocal audio, is a combination of small sphere objects forming one giant sphere. These small spheres were controlled to express the roughness of the vocal sound, presenting a rough texture when they come together to create the big sphere object. By changing the user's viewpoint in the visualization, Tailors intends users to feel either roughness or smoothness. The sharpness and the warmth of the vocal timbre drive the object's texture and the hue of its color, respectively. If the sharpness of the vocal is high, the texture of the object gets closer to a metal texture; otherwise it gets closer to a plain texture. Through the warmth score of the vocal, the object's color approaches a warm color (e.g., red, orange, or yellow) for high warmth and a cold color (e.g., green, blue, or violet) for low warmth.

3.2.3. Mapping Rule for Background Timbre

The background is divided into three categories based on the score for the timbral hardness of the background music. Because timbral hardness represents the strength of the music, I mapped the strong timbre group to ice, the neutral group to water, and the soft timbre group to cloud. Within these categories, each natural object expresses the roughness of the background music. In addition, the same rule as for the vocal object applies to the background color: timbral warmth drives the hue of the background color, while two additional rules apply to the background only, with timbral brightness driving the color's value and timbral depth driving its saturation. A schematic sketch of this feature-to-visual mapping is given after Table 1 below.

3.3. Method

3.3.1. Participants and Procedure

Category: N (Total 27), Percentage (%)
Gender
  Female: 16 (59.30%)
  Male: 11 (40.70%)
Age
  18-23: 4 (17.90%)
  24-29: 19 (64.20%)
  30-35: 4 (17.90%)
Interest in Music Listening
  Very Interested: 11 (40.70%)
  Somewhat Interested: 12 (44.40%)
  Neutral: 3 (11.10%)
  Not Very Interested: 0 (0%)
  Not at all Interested: 1 (3.70%)
Frequency of Music Listening
  More than 5 hours per day: 2 (7.40%)
  3 to 5 hours per day: 3 (11.10%)
  1 to 2 hours per day: 17 (63.00%)
  Less than 1 hour per day: 5 (18.50%)
Favorite Genre of Music (duplicate answers possible)
  Classic: 6 (22.20%)
  POP: 16 (59.30%)
  CCM: 3 (11.10%)
  Jazz: 2 (7.40%)
  K-POP: 17 (63.00%)
  OST (Movie, Drama): 12 (44.40%)
  Ballad: 19 (70.40%)
  R&B: 7 (25.90%)
  Hip-hop: 12 (44.40%)
  Trot: 1 (3.70%)
  Indie: 8 (29.60%)
  EDM: 2 (7.40%)
  Country: 1 (3.70%)
  Folk: 1 (3.70%)
  Korean Traditional Music: 1 (3.70%)
  Alternative Rock & Band: 1 (3.70%)
  Hymn: 1 (3.70%)
Interest in Music Visualization
  Very Interested: 1 (3.70%)
  Somewhat Interested: 1 (3.70%)
  Neutral: 0 (0.00%)
  Not Very Interested: 3 (11.10%)
  Not at all Interested: 22 (81.50%)
Table 1. Demographic information of the participants
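The sketch below illustrates the feature-to-visual mapping of Sections 3.2.2 and 3.2.3, assuming the Audio Commons timbral_models package [49] exposes a per-file extractor returning one score per feature. The function and key names, the score range, the category thresholds, and the hue interpolation are assumptions for illustration, not the exact constants used in Tailors.

```python
# Illustrative sketch only: extractor names, thresholds and colour values are
# assumptions, not the actual Tailors implementation.
import timbral_models  # Audio Commons timbral models [49]

def minmax(x: float, lo: float, hi: float) -> float:
    """Min-max normalisation used to spread raw timbre scores over [0, 1]."""
    return 0.0 if hi == lo else (x - lo) / (hi - lo)

def background_params(track_path: str, score_range=(0.0, 100.0)) -> dict:
    scores = timbral_models.timbral_extractor(track_path)  # assumed: dict feature -> score
    hard   = minmax(scores["hardness"],   *score_range)
    warm   = minmax(scores["warmth"],     *score_range)
    bright = minmax(scores["brightness"], *score_range)
    deep   = minmax(scores["depth"],      *score_range)
    rough  = minmax(scores["roughness"],  *score_range)

    # Hardness picks the type of nature used as the background (Section 3.2.3).
    if hard > 0.66:
        nature = "ice"
    elif hard > 0.33:
        nature = "water"
    else:
        nature = "cloud"

    # Warmth -> hue, brightness -> value, depth -> saturation (HSV colour).
    hue = 210.0 - 180.0 * warm  # cold blue (210 deg) toward warm orange (30 deg)
    return {
        "nature": nature,
        "roughness": rough,              # passed on to the animated background
        "color_hsv": (hue, deep, bright),
    }
```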
Figure 3. Three music conditions (A, B, C) in the experiment

27 participants who enjoy listening to music daily were recruited for the experiment. I gathered a demographic questionnaire from the users before the experiment; Table 1 shows the demographics of the participants. For the main study, every participant experienced twenty pieces of POP music under the three conditions (Figure 3) and completed the surveys after listening to each piece of music in each condition. The three conditions are A. Only Music, B. Basic Visualization, and C. Timbre Visualization (Tailors). In the main study the conditions were counterbalanced by a Latin-square design, and the twenty pieces of music were additionally shuffled for every user. Detailed information about the twenty pieces of music used in the experiment is in Appendix B. Because the experiment was web-based, every participant was given a website link for access, and each was paid ₩30,000 for participating. The overall experiment took approximately two hours per person. See Appendix C for the visualized output of Tailors.

3.3.2. Materials and Metrics

Questionnaire in the Timbre Survey (Timbre Feature)
1. I felt the power of the timbre hard. (hard)
2. I felt the power of the timbre soft. (soft)
3. I felt the timbre complicated and deep. (deep)
4. I felt the timbre simple and shallow. (shallow)
5. I felt the timbral brightness. (bright)
6. I felt the timbral darkness. (dark)
7. I felt the timbral warmness. (warm)
8. I felt the timbral coldness. (cold)
9. I felt the timbral roughness. (rough)
10. I felt the timbral smoothness. (smooth)
11. I felt the timbral sharpness. (sharp)
12. I felt the timbral bluntness. (blunt)
Table 2a. Music Timbre Survey Questionnaires

Questionnaire in the Imagery Survey (Imagery Feature)
1. I felt harmonious and balanced. (flow)
2. I felt the power that music gives me. (force)
3. I was able to become one with the music. (interior)
4. I was able to move my body to the rhythm. (movement)
5. I wanted to wander and travel around. (wandering)
Table 2b. Music Imagery Survey Questionnaires

Questionnaire in the Entertainment Survey (Entertainment Feature)
1. I felt a shiver run through my body. (stimulated)
2. I wanted to dance around. (dancing)
3. I was able to feel entertained. (entertained)
4. I was able to feel energized. (energized)
5. I was moved. (moving)
6. I was able to feel animated. (animated)
7. I got excited. (excited)
8. I was able to feel the rhythm well. (rhythm)
Table 2c. Music Entertainment Survey Questionnaires

Because the research questions focus on Tailors' effect on delivering timbral features and on creating imagery and entertainment from the music, the main study compared the timbral music visualization with a music-only condition and with a basic visualization driven by the volume (energy) of the music but without timbral features. I defined three surveys, for music timbre, imagery, and entertainment, to collect from the users; Table 2 shows the questionnaire of each survey. Table 2a presents the twelve 7-point Likert-scale questions for the Music Timbre Survey (hard, soft, deep, shallow, bright, dark, warm, cold, rough, smooth, sharp, and blunt). Table 2b shows the music imagery survey's five 7-point Likert-scale questions (flow, force, interior, movement, and wandering) from GEMMES [55], a questionnaire for music metaphors. And Table 2c shows the eight 7-point Likert-scale questions (stimulated, dancing, entertained, energized, moving, animated, excited, and rhythm) for the Music Entertainment Questionnaire [55].

IV. RESULTS

4.1. Did timbral music visualization through Tailors well convey the timbral features of music? (RQ1)

4.1.1. Overall Results of the Timbre Survey

Figure 4. Boxplot for each timbral feature value after experiencing Tailors

To figure out whether timbre visualization through Tailors was effective in conveying the timbral features of the music, I compared the three music conditions: A. Only Music, B. Basic Visualization, and C. Timbre Visualization (Tailors). First, I conducted a Kruskal-Wallis test to compare the three conditions, but no significance was found. However, as shown in Figure 4, seven of the twelve timbral features (warm, bright, deep, shallow, hard, rough, and sharp) were delivered best by the timbre visualization among the three conditions. For three features (cold, smooth, soft), the timbre visualization conveyed the timbral feature better than the basic visualization but worse than only music. For the last two features (dark, blunt), the timbre visualization did not deliver the feature best.

Appendix D1 shows the combinations that had significant differences in the Wilcoxon signed-rank tests between A. only music and C. timbre visualization, and between A. only music and B. basic visualization. In the comparison of only music versus timbre visualization, the timbre visualization conveyed the bright, warm, and rough features of music significantly better than the only music condition. Although the other timbral features showed no significant differences, the hard, deep, shallow, and sharp features were delivered better in the timbre visualization than in only music, while the dark and blunt features went the other way. In the comparison of only music versus basic visualization, the dark and soft timbral elements were conveyed significantly better in only music than in the basic visualization. The other features showed no significance, with the deep, cold, smooth, and blunt elements doing better in only music and the hard, shallow, bright, warm, rough, and sharp elements doing better in the basic visualization.

4.1.2. Timbre Survey Results by Groups

To look more closely at the overall results, I divided the cases into five groups using the original music information and the timbre survey results of the 27 users. I wanted to figure out which timbral features Tailors delivered well or poorly, so for each piece of music I compared the timbre survey responses with the music's own timbral features and their representation in Tailors. The first group contains examples of good representation and good results of Tailors, where the visualization was well made from the timbral features of the music and users received the timbre well. The second group contains examples of poor representation and poor results of Tailors, the opposite of the first group. The third group contains neutral examples, where the music's timbre was delivered but the result was neither good nor bad. The fourth group contains the counter-examples of Tailors, in which the timbre visualization has the lowest score on a feature because it is the opposite of the timbre that the music initially has. Finally, the last group contains the extraordinary examples, which show the reverse of the expected results: the user survey depended on the expression of the timbral visualization and had little to do with the original timbre of the music. I counted the number of instances for each group in each timbral feature and found the insights below.

Group of Good Results
Of the total 240 cases (20 pieces of music * 12 timbral features), I found 79 cases, 32.91% of the total, to be good results. The good results, in which the timbral features of the music are well delivered in Tailors, were concentrated in these features: deep (14), warm (12), rough (11), bright (9), soft (7), and sharp (7). Comparing this to the overall results of the timbre survey, all the features in the group of good results except timbral softness and shallowness were the features that were delivered best by the timbre visualization among the three conditions. This cross-validates the overall results of the timbre survey, showing that Tailors effectively delivered the deep, warm, rough, bright, and sharp features of the timbre in music.

Group of Bad Results
On the other hand, 31 out of the 240 cases, 12.91% of the total, were bad results, in which the timbral features of the music were not well delivered in Tailors. The number of bad results was highest for timbral darkness (5) and bluntness (6) among the twelve features. Comparing this to the overall results of the timbre survey, all the features in the group of bad results were the features that were delivered most poorly by the timbre visualization among the three conditions. This also cross-validates the overall results of the timbre survey, showing that Tailors was weak in providing the dark and blunt features of the timbre in music.

Group of Neutral Results
65 out of 240 cases, 27.08% of the total, were classified as neutral results, neither good nor bad. Timbral hardness (12), shallowness (9), sharpness (8), and smoothness (7) made up a large portion of the neutral results.

Group of Counter Results
38 out of 240 cases, 15.83% of the total, were classified as counter results, in which Tailors has the lowest score on a feature because it was the opposite of the timbre that the music initially had. Timbral softness (7), darkness (6), coldness (5), and smoothness (5) made up the largest portion of the counter results. Timbral coldness and smoothness were also the features that scored lower than the only music condition but higher than the basic visualization. And because timbral darkness in Tailors was the weakest among the three conditions, the low score of timbral darkness can be explained by there being more bright music than dark music among the 20 pieces.

Group of Extraordinary Results
27 out of 240 cases, 11.25% of the total, were classified as extraordinary results, unexpected compared to the original timbral features of the music. Timbral shallowness (5), coldness (4), sharpness (4), and roughness (4) made up the largest portion of the extraordinary results. I also found that for timbral shallowness, the number of cases with good results was only one.
This shows that although the timbral shallowness score in Tailors was the highest among the three conditions, that result may be driven by the extraordinary cases rather than by the good cases; users came to feel timbral shallowness through Tailors more than the actual music contained.

4.2. Did Tailors make music entertainment better through improved music visual imagery? (RQ2)

Figure 5. Boxplot for each imagery feature value after experiencing Tailors
Figure 6. Boxplot for each entertainment feature value after experiencing Tailors
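The survey comparisons in Sections 4.1 and 4.2 rely on a Kruskal-Wallis test across the three conditions followed by pairwise Wilcoxon signed-rank tests. A minimal sketch of that procedure is shown below; the data layout and variable names are assumptions, not the actual analysis script.

```python
# Minimal sketch of the statistical comparison for one survey item.
# Assumed layout: scores[condition] holds one 7-point Likert rating per
# participant-song pair, in the same paired order for every condition.
from itertools import combinations
from scipy.stats import kruskal, wilcoxon

def compare_conditions(scores: dict[str, list[float]], alpha: float = 0.05):
    conditions = list(scores)            # e.g. ["A_only_music", "B_basic", "C_tailors"]

    # Omnibus test across the three conditions.
    _, p_omnibus = kruskal(*scores.values())

    # Pairwise Wilcoxon signed-rank tests on the paired ratings.
    pairwise = {}
    for a, b in combinations(conditions, 2):
        stat, p = wilcoxon(scores[a], scores[b])
        pairwise[(a, b)] = {"statistic": stat, "p_value": p, "significant": p < alpha}

    return {"kruskal_p": p_omnibus, "pairwise": pairwise}
```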
– To determine effective timbral features in each music imagery and entertainment to in multiple regression equations, and preprocessed each I compared coefficients feature, survey points to remove the multicollinearity between the variables. features affected which linear IV (Timbre) DV (Imagery) coefficients std_err t_value p_value R-squared Adj. R-squared F-Statistic hard soft deep shallow bright dark warm cold rough smooth sharp flow flow flow flow flow flow flow flow flow flow flow 0.2738 0.2437 1.1235 -0.0493 0.2953 -0.1669 -0.1238 0.3272 -0.3784 0.1457 0.4082 -0.0587 0.333 -0.6351 0.3292 0.3726 0.347 0.3282 0.3205 0.2802 0.3716 0.169 0.3523 0.2765 0.5302 0.3569 -0.1762 -1.9292 1.074 1.024 0.754 0.4797 0.5215 13 0.2801 0.8698 0.7108 0.7265 0.8627 0.0742 0.301 0.3232 0.4633 0.6389 0.6102 0.615 0.615 0.615 0.615 0.615 0.615 0.615 0.615 0.615 0.615 0.615 0.286 0.286 0.286 0.286 0.286 0.286 0.286 0.286 0.286 0.286 0.286 1.866 1.866 1.866 1.866 1.866 1.866 1.866 1.866 1.866 1.866 1.866 blunt hard soft deep shallow bright dark warm cold rough smooth sharp blunt hard soft deep shallow bright dark warm cold rough smooth sharp blunt hard soft deep bright dark warm cold rough smooth sharp blunt hard soft deep flow force force force force force force force force force force force force interior interior interior interior interior interior interior interior interior interior interior interior 0.1497 0.4932 0.3034 0.4779 0.1691 0.0455 0.2049 2.8255 0.2219 0.766 0.0135 0.8276 -0.5263 0.227 -2.3181 0.0361 -0.0252 0.2833 -0.089 0.9304 0.2806 0.2311 1.2145 0.2446 -0.2563 0.2284 -1.1219 0.2808 0.1611 0.2408 0.669 0.5144 0.4237 0.2224 1.9046 0.0776 0.4832 0.2579 0.2476 0.2444 1.8738 1.0129 0.082 0.3283 0.0257 0.368 0.0698 0.9453 -0.0441 0.3423 -0.129 0.8992 0.5569 0.1784 0.2463 0.2162 3.1212 1.1396 -0.4004 0.2395 -1.6719 -0.3034 0.2988 -1.0153 0.1443 0.2437 0.592 0.0075 0.2736 0.1167 0.3272 0.5633 -0.292 0.0647 0.241 0.254 -1.2118 0.2456 0.2549 0.8025 0.7377 0.2346 3.1442 0.0072 0.667 0.272 2.4523 0.0279 0.486 0.2578 1.8847 0.0804 -0.2274 0.3881 -0.586 -0.5799 0.361 -1.6063 0.5672 0.1305 0.001 0.3109 0.5919 0.3115 movement 0.6692 0.1616 movement 0.2058 0.1958 4.1415 1.0514 movement -0.119 0.2169 -0.5487 shallow movement 0.2842 0.2707 1.05 movement 0.0078 0.2208 0.0353 0.9724 movement -0.4926 0.2182 -2.2572 0.0405 movement -0.0814 0.23 -0.3537 0.7288 movement movement movement 0.5876 0.2125 2.765 0.6651 0.2464 2.6997 0.0152 0.0173 0.1293 0.2335 0.5536 0.5886 movement -0.4375 0.3515 -1.2445 0.2338 movement -0.4756 0.327 -1.4545 wandering wandering wandering 0.1795 0.2154 0.8333 0.132 0.261 0.5058 0.6209 0.0479 0.2891 0.1657 0.8708 0.1679 0.4187 shallow wandering -0.2254 0.3608 -0.6248 0.5422 bright dark warm cold rough smooth sharp wandering -0.1087 0.2942 -0.3694 0.7173 wandering -0.3357 0.2909 -1.1541 0.2678 wandering wandering wandering wandering 0.0725 0.3066 0.2364 0.3854 0.2833 0.7479 0.3284 0.4513 0.3113 1.3605 2.2778 1.4498 0.8165 0.1952 0.039 0.1691 wandering 0.0948 0.4686 0.2023 0.8426 0.615 0.815 0.815 0.815 0.815 0.815 0.815 0.815 0.815 0.815 0.815 0.815 0.815 0.794 0.794 0.794 0.794 0.794 0.794 0.794 0.794 0.794 0.794 0.794 0.794 0.831 0.831 0.831 0.831 0.831 0.831 0.831 0.831 0.831 0.831 0.831 0.831 0.7 0.7 0.7 0.7 0.7 0.7 0.7 0.7 0.7 0.7 0.7 blunt 0.7 Table 3a. 
Multiple Linear Regression Result of Timbre and Imagery from Tailors wandering -0.2835 -0.1235 0.4358 0.781 14 0.286 0.656 0.656 0.656 0.656 0.656 0.656 0.656 0.656 0.656 0.656 0.656 0.656 0.617 0.617 0.617 0.617 0.617 0.617 0.617 0.617 0.617 0.617 0.617 0.617 0.686 0.686 0.686 0.686 0.686 0.686 0.686 0.686 0.686 0.686 0.686 0.686 0.442 0.442 0.442 0.442 0.442 0.442 0.442 0.442 0.442 0.442 0.442 0.442 1.866 5.131 5.131 5.131 5.131 5.131 5.131 5.131 5.131 5.131 5.131 5.131 5.131 4.494 4.494 4.494 4.494 4.494 4.494 4.494 4.494 4.494 4.494 4.494 4.494 5.733 5.733 5.733 5.733 5.733 5.733 5.733 5.733 5.733 5.733 5.733 5.733 2.717 2.717 2.717 2.717 2.717 2.717 2.717 2.717 2.717 2.717 2.717 2.717 (F(12, Table 3a shows the multiple linear regression analysis results between twelve timbre features and five imagery aspects in timbre visualization. Red letters in each table represent a significant positive predictor variable with a predicted value. Blue letters in each table represent a significant negative predictor variable with a predicted value. Except for the flow aspect, which had no significant timbral feature (F(12, 14)=1.866, p>0.05), I was able to observe significant effects for the last four imagery features. For the force aspect, the timbre feature of hard turned out to be a significant and positive main cause (F(12, 14)=5.131, p=0.0135). The regression coefficient of hard was 0.4779, which was found to be a slightly weak significant positive predictor of force (t(14)=2.8255, p=0.0135). On the other hand, the deep timbral feature turned out to be a negative predictor (F(12, 14)=5.131, p=0.0361), showing its coefficient of -0.5263 (t(14)=-2.3181, p=0.0361). In the interior aspect, the hard, cold, and rough timbre features turned out to be significant and positive causes for the effect (t(14)=3.1442, 14)=4.4940, p<0.05). The coefficient of cold was 0.7377 p=0.0072), and rough was 0.6670 (t(14)=2.4523, p=0.0279), which shows cold and rough feature has strong positive correlations with the interior. Also, the coefficient of hard was 0.5569, which also shows medium positive correlations with the (t(14)=3.1212, p=0.0279). The timbre feature of hard, cold, and rough had a significant positive effect on the movement aspect, too (F(12, 14)=5.7330, p<0.05). The coefficient of hard timbre was 0.6692 (t(14)=4.1415, p=0.0010), rough was 0.6651 (t(14)=2.6997, p=0.0173), and cold was 0.5876 (t(14)=2.6997, p=0.0173). This result shows that hard, rough, and cold timbral features were strong and significant positive predictors for movement imagery. Lastly, for the wandering aspect, the timbre feature of rough was found to be a strong, significant, and positive factor (F(12, 14)=2.7170, p<0.05). The coefficient of the rough feature of timbre was 0.7479, which shows a strong positive correlation between the imagery aspect of wandering. To summarize, except for the flow aspect of music visual imagery, users of timbre visualization by Tailors were positively affected by timbral features of hard, cold, and rough, which help them internalize with music (interior) and were able to move their bodies by the rhythm (movement). In feeling the force of the music (force) given to the user, a timbral feature of hard was a slightly weak positive factor. Conversely, the timbral feature of deep turned out to be a negative factor for the imagery aspect of the force. For the feeling of wandering and traveling around the world (wandering), the rough feature of the timbre was a strong positive cause. 
interior IV (Imagery) DV (Entertainment) coefficients std_err t_value p_value R-squared Adj. R-squared F-Statistic flow force interior movement wandering flow force interior movement wandering flow force interior movement wandering flow force interior stimulated stimulated stimulated stimulated stimulated dancing dancing dancing dancing dancing entertained entertained entertained entertained entertained energized energized energized -0.0453 0.2119 -0.2139 0.8327 -0.746 0.2903 -2.57 0.0178 0.8757 0.2918 3.0006 0.0068 0.4512 0.322 1.4013 0.1757 0.236 0.1928 1.224 0.2345 0.0678 0.2067 0.3282 0.746 -0.6624 0.2832 -2.3385 0.0293 0.8729 0.2848 3.0655 0.0059 0.2233 0.3142 0.7108 0.485 0.323 0.1881 1.7166 0.1008 0.0523 0.1483 0.353 0.7276 0.6329 0.2032 3.1151 0.0052 -0.2424 0.2043 -1.1869 0.2485 0.298 0.2253 1.3222 0.2003 0.2617 0.135 1.9393 0.1889 0.2252 0.8389 0.066 0.411 -0.4009 0.3085 -1.2993 0.2079 0.5632 0.3102 1.8156 0.0837 0.707 0.707 0.707 0.707 0.707 0.721 0.721 0.721 0.721 0.721 0.856 0.856 0.856 0.856 0.856 0.669 0.669 0.669 0.637 0.637 0.637 0.637 0.637 0.654 0.654 0.654 0.654 0.654 0.822 0.822 0.822 0.822 0.822 0.59 0.59 0.59 10.11 10.11 10.11 10.11 10.11 10.83 10.83 10.83 10.83 10.83 25.02 25.02 25.02 25.02 25.02 8.47 8.47 8.47 15 movement wandering flow force interior movement wandering flow force interior movement wandering flow force interior movement wandering flow force interior movement wandering energized energized moving moving moving moving moving animated animated animated animated animated excited excited excited excited excited rhythm rhythm rhythm rhythm rhythm 0.0699 0.3422 0.2041 0.8402 0.446 0.2049 2.1764 0.0411 0.1182 0.1765 0.6695 0.5104 0.4384 0.2418 1.8133 0.0841 0.0222 0.2431 0.0911 0.9282 0.2078 0.2682 0.775 0.447 0.198 0.1606 1.2327 0.2313 0.0355 0.1759 0.2018 0.842 -0.49 0.241 -2.0331 0.0549 0.6154 0.2423 2.5401 0.019 0.5422 0.2673 2.0284 0.0554 0.2023 -0.161 0.1601 0.1535 1.2641 0.22 -1.0485 0.3063 -0.3154 0.2103 -1.4996 0.1486 0.1864 0.2115 0.8813 0.3881 0.758 0.2333 3.2492 0.0038 0.4321 0.3157 0.1397 0.2145 3.0925 0.0055 1.472 0.1558 0.093 0.2939 0.3166 0.7547 0.3707 0.2954 1.2547 0.2234 0.3336 0.3259 1.0235 0.3177 -0.3046 0.1952 -1.5604 0.1336 0.669 0.669 0.796 0.796 0.796 0.796 0.796 0.798 0.798 0.798 0.798 0.798 0.846 0.846 0.846 0.846 0.846 0.699 0.699 0.699 0.699 0.699 0.59 0.59 0.748 0.748 0.748 0.748 0.748 0.75 0.75 0.75 0.75 0.75 0.809 0.809 0.809 0.809 0.809 0.628 0.628 0.628 0.628 0.628 8.47 8.47 16.43 16.43 16.43 16.43 16.43 16.57 16.57 16.57 16.57 16.57 23.06 23.06 23.06 23.06 23.06 9.768 9.768 9.768 9.768 9.768 Table 3b. Multiple Linear Regression Result of Imagery and Entertainment from Tailors Table 3b shows the multiple linear regression analysis results between five music imagery aspects and eight music entertainment features in timbre visualization. Red letters in each table represent a significant positive predictor variable with a predicted value. Blue letters in each table represent a significant negative predictor variable with a predicted value. Except for the moving (F(5, 21)=16.34, p>0.05) and rhythm (F(5, 21)=9.7680, p>0.05) aspect, which had no significant timbral feature, I was able to observe significant effects for the last six music entertainment aspects. For the stimulated aspects, it was found to be imagery feature of the interior turned out to be a significant and positive leading cause (F(5, 21)=10.11, p=0.0068). 
Also, the imagery feature of the interior turned out to be the main cause of the entertainment feature of dancing (F(5, 21)=10.83, p=0.0059), and the feature of animated (F(5, 21)=16.57, p=0.019) too. The regression coefficient of the interior was 0.8757 for the entertainment of stimulated (t(21)=3.0006, p=0.0068), 0.8729 for the entertainment of dancing (t(21)=3.0655, p=0.0059), and 0.6154 for the entertainment of animated (t(21)=2.5401, p=0.0173) respectively. For the entertaining aspect of entertainment, the imagery feature of force was found to have a strong, positive correlation (F(5, 21)=25.02, p=0.0052), showing a positive correlation with coefficients 0.6239 (t(21)=3.1151, p=0.0052). In the energized aspect of entertainment, the imagery feature of wandering was found to have a weak positive correlation (F(5, 21)=8.47, p=0.0411), showing a coefficient of 0.4460 (t(21)=2.1764, p=0.0411). For the exciting aspect of entertainment, the imagery feature of movement (F(5, 21)=23.06, p=0.0038) and wandering (F(5, 21)=23.06, p=0.0055) was found to have positive correlations. Coefficients were found to be 0.7580 and 0.4321 each, which can be interpreted there was a strong positive correlation between movement and wandering (t(21)=3.2492, p=0.0038), and a weak positive correlation between wandering and excited (t(21)=3.0925, p=0.0055). Consequently, except for the moving and rhythm aspect of entertainment, users were affected by their internalization of music, to entertain the music in the way of feeling shuddered (stimulated), willing to dance (dancing), and a sense of liveliness (animated). For entertaining the music (entertained), the musical force that the user got was a plain positive factor. The imagery of wandering around (wandering) was a positive factor for 16 energized feeling (energized). And lastly, for the exciting feeling (excited), the imagery of movement and wandering around was a positive cause. 4. 2. 4. 
Coefficients Comparison by Fisher Transformation Condition IV DV coefficients Condition IV DV coefficients p-value A Only Music hard soft deep shallow bright dark warm cold rough smooth sharp blunt hard soft deep shallow bright dark warm cold rough smooth sharp blunt hard soft deep shallow bright dark warm cold rough smooth sharp blunt hard soft deep shallow bright dark warm cold rough flow force interior movement -0.103 0.0386 0.0429 0.0745 0.9809 0.8081 0.0863 0.0392 -0.257 -0.1674 -0.3402 -0.7973 -0.1789 0.2397 -0.5186 -0.4125 0.6853 0.6534 0.1972 0.0755 0.3003 0.4922 -0.0888 -0.7227 -0.153 0.2342 -0.4938 -0.51 0.4905 0.6202 0.2809 0.0377 0.3299 0.4347 -0.0564 -0.6837 -0.1804 0.2765 -0.0577 0.0525 0.2852 -0.084 0.2389 0.1339 -0.1139 C Timbre Visualization 17 hard soft deep shallow bright dark warm cold rough smooth sharp blunt hard soft deep shallow bright dark warm cold rough smooth sharp blunt hard soft deep shallow bright dark warm cold rough smooth sharp blunt hard soft deep shallow bright dark warm cold rough flow force interior movement 0.2738 -0.0493 -0.1238 0.1457 -0.0587 -0.6351 0.3726 0.3282 0.2802 0.1690 0.2765 0.1497 0.4779 0.0455 -0.5263 -0.0252 0.2806 -0.2563 0.1611 0.4237 0.4832 0.2476 0.0257 -0.0441 0.5569 0.2463 -0.4004 -0.3034 0.1443 -0.2920 0.0647 0.7377 0.6670 0.4860 -0.2274 -0.5799 0.6692 0.2058 -0.1190 0.2842 0.0078 -0.4926 -0.0814 0.5876 0.6651 *** * *** *** * *** ** *** *** *** * ** smooth sharp blunt hard soft deep shallow bright dark warm cold rough smooth sharp blunt wandering 0.0508 0.2184 0.0104 -0.5014 0.0325 -0.0694 -0.2037 -0.0709 0.2794 0.0752 -0.3055 0.2961 0.248 0.9135 0.2217 smooth sharp blunt hard soft deep shallow bright dark warm cold rough smooth sharp blunt wandering 0.1293 -0.4375 -0.4756 0.1795 0.1320 0.0479 -0.2254 -0.1087 -0.3357 0.0725 0.3854 0.7479 0.4513 0.0948 -0.1235 Table 4a. Fisher Transformation Result for Timbre → Imagery (A vs. 
C) Condition IV DV coefficients Condition IV DV coefficients p-value B Basic Visualization flow force hard soft deep shallow bright dark warm cold rough smooth sharp blunt hard soft deep shallow bright dark warm cold rough smooth sharp blunt hard soft deep shallow bright interior dark warm cold rough 0.4495 0.105 -0.3322 0.2961 0.1752 0.1782 -0.0088 -0.1185 0.2853 -0.2745 0.325 0.2194 0.3668 0.0997 -0.5839 -0.0195 0.0446 0.0625 0.175 0.0246 0.4262 0.1198 0.2265 0.0685 0.2559 0.148 -0.4 -0.1763 0.0267 0.1858 0.3592 0.1422 0.1036 flow force hard soft deep shallow bright dark warm cold rough smooth sharp blunt hard soft deep shallow bright dark warm cold rough smooth sharp blunt hard soft deep shallow C Timbre Visualization bright interior dark warm cold rough 18 0.2738 -0.0493 -0.1238 0.1457 -0.0587 -0.6351 0.3726 0.3282 0.2802 0.1690 0.2765 0.1497 0.4779 0.0455 -0.5263 -0.0252 0.2806 -0.2563 0.1611 0.4237 0.4832 0.2476 0.0257 -0.0441 0.5569 0.2463 -0.4004 -0.3034 0.1443 -0.2920 0.0647 0.7377 0.6670 ** * ** * ** * * *** * ** ** smooth sharp blunt hard soft deep shallow bright dark warm cold rough smooth sharp blunt hard soft deep shallow bright dark warm cold rough smooth sharp blunt movement wandering 0.0917 0.2896 -0.0432 0.1978 0.1233 -0.1655 0.7099 0.4993 0.2309 0.0085 -0.1075 -0.2333 -0.6251 0.0463 -0.0823 -0.1656 -0.2036 0.0854 0.2774 0.3554 0.2487 0.3929 -0.407 -0.0543 -0.3599 0.5396 0.1753 smooth sharp blunt hard soft deep shallow bright dark warm cold rough smooth sharp blunt hard soft deep shallow bright dark warm cold rough smooth sharp blunt movement wandering 0.4860 -0.2274 -0.5799 0.6692 0.2058 -0.1190 0.2842 0.0078 -0.4926 -0.0814 0.5876 0.6651 0.1293 -0.4375 -0.4756 0.1795 0.1320 0.0479 -0.2254 -0.1087 -0.3357 0.0725 0.3854 0.7479 0.4513 0.0948 -0.1235 * * * * * ** ** *** ** * * * * ** *** ** * Table 4b. Fisher Transformation Result for Timbre → Imagery (B vs. C) → Fisher transformation was used to compare coefficients and determine if Tailors effectively conveyed timbral features, made music imagery, and made users to entertained the music among the three conditions. P-values were also evaluated to determine whether the coefficients' difference was significant. Each aestrisk notations in the table represents pvalue ranges: * p<0.05, ** p<0.01, ** *p<0.001. Table 4a represents the comparison results Imagery) from comparing between coefficients in multiple regression equations (Timbre condition A (Only Music) and condition C (Tailors). I found that timbral hardness significantly affected more to imagery features of force(r=0.47), interior(r=0.55), and movement(r=0.66) in Tailors than in only music. Also, timbral coldness significantly affected more to imagery features of the interior(r=0.73) and movement(r=0.58) in Tailors. Lastly, timbral roughness significantly affected imagery features of movement(r=0.66) and wandering(r=0.74) higher in Tailors too. This result shows that, except for imagery features of flow, timbral hardness, coldness, and roughness of Tailors were significantly higher predictors for users to make last music imageries. However, there were timbral features that did not work well as only music condition. Firstly, timbral brightness was a significantly lower predictor for flow(r=-0.05) and force(r=0.28) imagery features in Tailors than only music. Timbral darkness was also found to be a significantly lower predictor for the imagery feature of force(r=-0.25). 
Lastly, timbral sharpness was a significantly lower predictor for the imagery feature of wandering(r=0.09). This result shows that timbre visualization was weaker than the only music condition in these timbral features. Table 4b shows the comparison results between coefficients in multiple regression equations from comparing condition B (Basic Visualization) and condition C (Tailors). Timbral 19 in timbre visualization to roughness significantly affected imagery features of the interior(r=0.66), movement(r=0.66), and wandering(r=0.74) higher on Tailors than basic visualization. Timbral coldness affected significantly more in Tailors to imagery features of the interior(r=0.73) and movement(r=0.58). imagery features of Also, timbral hardness is affected more movement(r=0.66). Lastly, timbral smoothness was affected more in imagery features of wandering(r=0.45). It shows that except for the imagery feature of flow, timbral roughness, coldness, hardness, and smoothness were significantly more effective in making music imagery in timbre visualization than in basic visualization. On the contrary, imagery features of the movement were significantly less affected by timbral brightness(r=0.00), darkness(r=-0.49), sharpness(r=-0.43), and shallowness(r=0.28). Also, timbral darkness was a significantly lower timbral sharpness was a predictor significantly lower predictor for the imagery feature of wandering(r=0.09). This result shows timbre visualization was weaker than the basic visualization in these timbral features. flow(r=-0.63). Lastly, feature of imagery the for Condition IV DV coefficients Condition IV DV coefficients p-value flow force interior stimulated movement wandering flow force interior dancing movement wandering flow force 0.1962 -0.5755 0.4723 0.0656 0.7363 -0.0523 0.0730 0.3238 0.3520 0.2739 0.0456 1.0340 flow force interior stimulated movement wandering flow force interior dancing movement wandering flow force -0.0453 -0.7460 0.8757 0.4512 0.2360 0.0678 -0.6624 0.8729 0.2233 0.3230 0.0523 0.6329 interior entertained -0.2334 interior entertained -0.2424 A Only Music movement wandering flow force -0.0795 C movement 0.1800 Timbre wandering Visualization 0.2932 0.4794 flow force interior energized -0.0348 interior energized movement wandering flow force interior moving movement wandering flow force interior movement animated -0.3496 0.4829 -0.0533 0.4629 0.5392 -0.0066 0.0173 0.3358 -0.5121 0.4409 0.3145 movement wandering flow force interior moving movement wandering flow force interior movement animated 0.2980 0.2617 0.1889 -0.4009 0.5632 0.0699 0.4460 0.1182 0.4384 0.0222 0.2078 0.1980 0.0355 -0.4900 0.6154 0.5422 20 ** ** ** *** *** *** ** * wandering flow force interior excited movement wandering flow force interior rhythm movement wandering 0.4264 0.1020 -0.2929 0.4931 0.1575 0.5602 0.5216 -0.0033 0.1679 0.3603 -0.2700 wandering flow force interior excited movement wandering flow force interior rhythm movement wandering 0.2023 -0.1610 -0.3154 0.1864 0.7580 0.4321 0.3157 0.0930 0.3707 0.3336 -0.3046 Table 5a. Fisher Transformation Result for Imagery → Entertain (A vs. 
C) Condition IV DV coefficients Condition IV DV coefficients p-value flow force -0.2220 -0.2313 flow force interior stimulated 0.4779 interior stimulated movement wandering flow force 0.1413 0.5000 -0.2510 -0.4700 movement wandering flow force interior dancing 0.6344 interior dancing movement wandering flow force 0.4516 0.3631 -0.0182 0.7845 movement wandering flow force -0.0453 -0.7460 0.8757 0.4512 0.2360 0.0678 -0.6624 0.8729 0.2233 0.3230 0.0523 0.6329 ** ** ** * interior entertained -0.7951 interior entertained -0.2424 ** B Basic Visualization movement wandering flow force interior energized movement wandering flow force interior moving movement wandering flow force animated interior 0.4078 0.3732 0.0701 0.0926 0.2257 -0.0930 0.5849 0.1667 0.1753 0.4829 0.0755 0.0183 0.0616 0.1157 0.5812 C Timbre Visualization movement wandering flow force interior energized movement wandering flow force interior moving movement wandering flow 0.2980 0.2617 0.1889 -0.4009 0.5632 0.0699 0.4460 0.1182 0.4384 0.0222 0.2078 0.1980 0.0355 force animated -0.4900 interior 0.6154 21 * * * movement wandering flow force interior excited movement wandering flow force interior rhythm movement wandering -0.0621 0.2733 -0.2510 0.0949 0.1222 0.3437 0.5334 0.6573 0.1636 0.1248 0.0076 -0.1683 movement wandering flow force interior excited movement wandering flow force interior rhythm movement wandering 0.5422 0.2023 -0.1610 -0.3154 0.1864 0.7580 0.4321 0.3157 0.0930 0.3707 0.3336 -0.3046 * * Table 5b. Fisher Transformation Result for Imagery → Entertain (B vs. C) → represents results between coefficients Table 5a the comparison in multiple Entertainment) from comparing condition A (Only Music) and regression equations (Imagery condition C (Tailors). Especially, coefficients between imagery feature of interior and the entertainment features of stimulated(r=0.87), dancing(r=0.87), and energized(r=0.56) were significantly higher in Tailors than the only music. Also, being able to move the body to the rhythm (movement) has significantly affected higher to the excited feeling(r=0.75) of users in Tailors. However, the less affected stimulated(r=0.23) feeling significant. Furthermore, imagery features of force have less affected feelings for less affected the entertainment dance(r=-0.66) significantly. Finally, feature of moving(r=0.02) significantly. imagery feature of wandering has imagery has interior the features, interior element of music imagery features Table 5b shows the comparison results between coefficients in multiple regression equations from comparing condition B (Basic Visualization) and condition C (Tailors). The result showed that the imagery of becoming one with the music (interior) affected music entertainment more in timbre visualization than the basic visualization. Except for entertaining and moving influenced more music entertainment than other music in Tailors. Especially, the coefficients between interior imagery with stimulated feeling(r=0.87) and feeling to dance(r=0.87) were significantly higher in Tailors than the basic visualization. The coefficients between movement imagery with animated(r=0.54) and excited(r=0.75) feelings were significantly higher too. On the other hand, the less affected by music entertainment of stimulated(r=-0.74), energized(r=-0.40), and animated(-0.49). As same with less affected the entertainment feature of the result imagery has moving(0.02) significantly, showing that each imagery feature can affect or not affect entertainment features due to its characteristics. 
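A compact sketch of the regression-and-comparison pipeline used in Sections 4.2.3 and 4.2.4: one OLS fit per dependent aspect, followed by a Fisher r-to-z test on corresponding coefficients from two conditions. The data frames, variable names, and sample sizes are placeholders, not the actual analysis code, and the r-to-z step assumes coefficients with magnitude below 1, as reported in this thesis.

```python
# Sketch of the Section 4.2.3/4.2.4 analysis; inputs are placeholders.
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

def fit_aspect(df, predictors, target):
    """OLS regression of one imagery/entertainment aspect on its predictors."""
    X = sm.add_constant(df[predictors])
    return sm.OLS(df[target], X).fit()      # exposes .params, .pvalues, .rsquared, .fvalue

def fisher_z_compare(r1, n1, r2, n2):
    """Compare two correlation-scale coefficients (|r| < 1) via Fisher's r-to-z."""
    z1, z2 = np.arctanh(r1), np.arctanh(r2)
    se = np.sqrt(1.0 / (n1 - 3) + 1.0 / (n2 - 3))
    z = (z1 - z2) / se
    return z, 2 * (1 - norm.cdf(abs(z)))    # two-sided p-value

# Example: is the 'hard' -> 'movement' coefficient larger in condition C than A?
# model_A = fit_aspect(surveys_A, timbre_features, "movement")   # placeholders
# model_C = fit_aspect(surveys_C, timbre_features, "movement")
# z, p = fisher_z_compare(model_C.params["hard"], len(surveys_C),
#                         model_A.params["hard"], len(surveys_A))
```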
4. 3. Post Survey Results

This section reports the post-survey results (see Appendix E for the questionnaires) together with insights from the user interviews.

4. 3. 1. Rankings for the Best Timbre Expression

Figure 7. Ranking Comparisons for the Best Timbre Expression

Figure 7 shows that the timbre visualization made by Tailors was ranked as the best-expressed way of conveying timbre among the three conditions: 62.96% of participants (17 out of 27) ranked Tailors first for timbre expression, eight ranked the only-music condition first, and two ranked the basic visualization first. The opinions of participants who ranked Tailors first for timbre expression were as follows. P17 said Tailors' visual background delivered the timbral features of the music well, which he felt was also related to the mood of the music. P25 stated that she ranked Tailors first because even the tiny sounds that users might otherwise miss were expressed in the timbre visualization, which led her to enjoy the music more. P24 said that she could focus more on the visualization and the music itself because there were more dynamic movements and various colors. P2 commented that Tailors' visualization immersed her in the music immediately. On the contrary, those who ranked Tailors third were the fewest (5 out of 27); they all pointed out that the timbre visualization felt dizzying because there were so many components, distracting them from focusing on the music.

4. 3. 2. Rankings for the Best Music Experience

Figure 8. Ranking Comparisons for the Best Music Experience

Figure 8 shows that Tailors was ranked as the most enjoyable way of listening to music among the three conditions: 44.44% of participants (12 out of 27) ranked Tailors first for music experience, followed by ten for the only-music condition and five for the basic visualization. The opinions of participants who ranked Tailors first were as follows. P3 stated that the timbre visualization expressed the detailed background sounds of the music well, making her listen to the music more closely. P19 felt that the background with color clouds and sky harmonized well when the music was quiet and calm. P24 found that Tailors helped her discover something new in the same song by listening analytically. Those who ranked Tailors third for music experience were the fewest (6 out of 27). P26, who did not find a strength of Tailors for the listening experience, felt it was a pity that the visualization affected her music experience in a forcing way. P5, who ranked Tailors second and the only-music condition first, stated that music is something to listen to during rest time, and that the visualization imposes a burden of keeping an eye on it.

4. 3. 3. Rankings for the Willingness to Use Again

Figure 9. Ranking Comparisons for the Willingness to Use Again

Figure 9 shows that Tailors was ranked as the most reusable way of listening to music among the three conditions: 55.55% of participants (15 out of 27) ranked Tailors first for willingness to use again, seven ranked the basic visualization first, and five ranked the only-music condition first.
The opinions of participants who ranked the timbre visualization first for willingness to use again were as follows. For P16, Tailors' novelty was the main reason for the first rank: she picked the timbre visualization because it is a unique way to experience visualization and enjoy music. P24 also described Tailors as a fresh way to enjoy music, mentioning that Tailors could enable users to feel every different timbre of the singers. P13 emphasized the feeling of falling for the music more deeply when listening through the timbre visualization. On the other hand, those who ranked Tailors' visualization third (6 out of 27) found fault with how long the timbre visualization could comfortably be watched, due to dizziness. Some participants who ranked Tailors first also pointed out the weak durability of watching the visualization, especially the objects in front of the background. P5 noted that the circle objects in the front were too flashy, which made experiencing Tailors a little burdensome, but said she would be willing to reuse the timbre visualization if the front circle objects were rendered more clearly.

V. DISCUSSION

5. 1. Possibilities of Tailors

Through Tailors, I found that timbre visualization affects MIR users beyond merely conveying the timbral features of the music. Each participant was able to relate each piece's tone color to the timbre visualization and shape their own imagery of the music. When users heard the music, they could get information from both sides, the musical audio and the visualization. While users' experience of music is usually focused on mood and emotion, Tailors gave users the opportunity to enjoy music by listening carefully for its timbre. Participants also adopted this behavior of listening carefully to feel the timbre even though there was no instruction to do so before the experiment. By focusing not on mood or emotion but on the timbre and the music's visual imagery, users of Tailors were able to immerse themselves in the timbral features and form their own subjective judgments. From this point of view, Tailors made users think about various words related to the timbral features of the music. I was also able to find that semantic descriptors of timbre, which are closely related to users' thoughts on the music, can be derived and raised by the users of the timbre visualization. This brings up a new possibility for timbre visualization: its users could generate and brainstorm semantic descriptors of timbre together, which can then be applied differently in the expression of the visualization. There are also possibilities here: it would allow users to participate in user-centered design around the semantic descriptors of the timbre. For instance, the visual parameters of Tailors could be adjusted to each user's preference if customization options were opened. By relating semantic descriptors to the visual parameters of Tailors, each user can take part in the music visualization and have a richer music experience. Consequently, users' participation in choosing timbre descriptors and applying them in the visualization could produce broader agreement among users, making the music more entertaining through its timbre. While users interact with and fine-tune these descriptors and visualization parameters, they would also discover their own taste and feel for the timbral features.
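As a concrete illustration of the customization idea above, the sketch below shows one way user-adjustable mappings from semantic timbre descriptors to visual parameters could be represented. The descriptor names, parameter names, and default values are hypothetical placeholders, not the actual mapping rules used in Tailors.

```python
# Hypothetical sketch of a user-customizable mapping from semantic timbre
# descriptors to visual parameters; names and defaults are illustrative only.
DEFAULT_MAPPING = {
    "brightness": {"parameter": "background_lightness", "scale": 1.0},
    "roughness":  {"parameter": "particle_jitter",      "scale": 1.0},
    "warmth":     {"parameter": "color_temperature",    "scale": 1.0},
    "sharpness":  {"parameter": "edge_contrast",        "scale": 1.0},
}

def apply_user_preferences(mapping: dict, overrides: dict) -> dict:
    """Return a copy of the mapping with per-user scale overrides applied."""
    customized = {k: dict(v) for k, v in mapping.items()}
    for descriptor, scale in overrides.items():
        if descriptor in customized:
            customized[descriptor]["scale"] = scale
    return customized

def visual_value(descriptor_score: float, rule: dict) -> float:
    """Map a timbre descriptor score in [0, 1] to a visual parameter value."""
    return max(0.0, min(1.0, descriptor_score * rule["scale"]))

# Example: a user who finds the front objects too flashy could damp 'sharpness'.
# prefs = apply_user_preferences(DEFAULT_MAPPING, {"sharpness": 0.5})
```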
Users’ subjective thoughts on the musical timbre would also be linked to the data-driven approach of the Tailors, which is also related to future work. in the expression of the visualization. timbre. For 5. 2. Limitations and the Future Work Although 7 out of 12 timbral features (warm, bright, deep, shallow, hard, rough, and sharp) were well delivered in the twenty music used in the experiment, we found that Tailors didn't convey timbral darkness and bluntness well, compared to the only music and the basic visualization. In addition, through the Kruskal-Wallis Test, there was no significant difference between the comparison of the three conditions. This shows Tailors needs to be modified to compensate for the weakness (timbral darkness and bluntness) and strengthen the strength that it already has. Also, even I’ve united the genre for the POP of the twenty music, more future experiments with more participants needed with various kinds of music will help to give more insights into the result by music genre and music features. As mentioned above, participants who experienced pity music experience with Tailors pointed out the fatigue of the eye due to the dizziness. Although the participants get to experience the timbre visualization for twenty music straight during the experiment, this shows a shortage in the case of experiencing timbre visualization daily. Since it is a well-known fact that getting information in multimodal (both visually and by hearing) burdens more than basic hearing for music listeners, the way to reduce these burdens needs to be considered. The result shows that in the sustainability of the timbre visualization 26 timbral features were well expressed, and the specific features influenced the music's visual imagery and entertainment. This makes considering a way to reduce visual components to concentrate on specific four to six timbral features rather than representing all features. Future work should focus on visually expressing timbral features that can be well delivered and the features that affect music imagery and entertainment. Since the post-survey showed that MIR users could concentrate and be immersed in the music by timbre visualization rather than the only music and the basic visualization, future work should consider the data-driven approach to improve Tailors’ mapping rules. I believe this will involve a data collection process from the MIR users to determine which timbral features connect to the elements of music visualization. Furthermore, the way of applying Tailors more artistically can be discussed. I believe using a beam projector to express timbre visualization on the wall or the ground is possible. It will help to focus on the entertainment purpose to emphasize the strength of concentrating and immersing in the timbre visualization in a specific space. Users can experience timbre visualization with the sense of the space, extending its possibility to the artistic purpose of Tailors. 27 Ⅵ . CONCLUSION In this paper, I made a music visualization system based on the timbral features called Tailors. After the experiment with 27 participants with twenty music with three different conditions, I found that Tailors effectively delivered timbral warmth, brightness, depth, shallowness, hardness, roughness, and sharpness compared to the only music condition and basic visualization. Also, five features of music imagery and eight features of music entertainment were all highest in the Tailors among the three conditions. 
After the multiple linear regression analyses between timbre and imagery and between imagery and entertainment, we found significant positive relationships. This result shows that the timbre visualization by Tailors shaped users' visual imagery of the music well and in turn led to music entertainment. The post-survey results show that Tailors ranked first for the best timbre expression, the best music experience, and the willingness to use it again. While limitations remain in usability, with some users reporting eye strain, Tailors points to future work on improving its mapping rules through a data-driven approach with MIR users. Furthermore, the timbral and visual features that Tailors expresses well could be presented in a more artistic way, using spaces such as a plain wall or the ground, to emphasize its entertainment purpose.
Appendix

Appendix A. User Interface of Tailors
Appendix B. Information of Music Used In the Experiment
Appendix C. Visualized Output of Tailors
Appendix D. Tables for Wilcoxon Sign-Ranked Analysis By Surveys
Appendix E. Post Survey Questionnaires

Appendix A. User Interface of Tailors

Appendix B.
Information of Music Used In the Experiment

1. Sufjan Stevens – Mystery of Love
2. Lana Del Rey – hope is a dangerous thing for a woman like me to have - but i have it
3. Foster The People – Imagination
4. Cigarette After Sex – Apocalypse
5. Ben Folds – Red Is Blue
6. John Mayer – Rosie
7. The Eurythmics – Sweet Dreams
8. Gotye – State Of The Art
9. Demi Lovato – Cool For The Summer
10. Scarlett Johansson – The Moon Song
11. Radiohead – Burn The Witch
12. Charlie Puth – That's Hilarious
13. Ledisi – I Blame You
14. James Vincent McMorrow – Cavalier
15. Corinne Bailey Rae – Like A Star
16. Ashley Tisdale – He said She said
17. Andrew Belle – In My Veins
18. Ella Mai – She Don't
19. Norah Jones – Come Away With Me
20. Keyshia Cole – Love, I Thought You Had My Back

All twenty pieces belong to the pop genre; the detailed genres listed in the original table include Folk, Alternative & Indie, Soul, R&B, Rock, Electronic, Christian, and Jazz.

Appendix C. Visualized Output of Tailors (numbers indicate the indexes of each piece of music)

Appendix D. Tables for Wilcoxon Sign-Ranked Analysis By Surveys

D1. Wilcoxon Sign-Ranked Analysis for the Timbre Survey
Basic Visualization (B) vs. Timbre Visualization (C)
  Significant: warm (p=0.04857): B 4.1185, C 4.3333; deep (p=0.0029): B 3.3407, C 3.6333; rough (p=0.0456): B 3.9722, C 4.1593
  Not significant: hard (p=0.8909): B 4.3241, C 4.3259; soft (p=0.6503): B 4.4019, C 4.4481; cold (p=0.7356): B 2.9315, C 2.9759; shallow (p=0.8052): B 3.7537, C 3.7833; blunt (p=0.2617): B 3.187, C 3.1; sharp (p=0.0612): B 4.5056, C 4.6667; smooth (p=0.4816): B 3.2296, C 3.3037; bright (p=0.0732): B 4.1537, C 4.3093; dark (p=0.2161): B 3.1167, C 3.013
Only Music (A) vs. Timbre Visualization (C)
  Significant: bright (p=0.0024): A 3.9278, C 4.3093; dark (p=0.0007): A 3.3889, C 3.013; warm (p=0.017): A 4.0796, C 4.3333; rough (p=0.0238): A 3.8648, C 4.1593; blunt (p=0.0206): A 3.3222, C 3.1
  Not significant: hard (p=0.3936): A 4.2148, C 4.3259; soft (p=0.1460): A 4.6074, C 4.4481; deep (p=0.2311): A 3.4574, C 3.6333; shallow (p=0.7877): A 3.7574, C 3.7833; cold (p=0.8664): A 3.0259, C 2.9759; smooth (p=0.2337): A 3.4093, C 3.3037; sharp (p=0.1073): A 4.4074, C 4.6667

D2. Wilcoxon Sign-Ranked Analysis for the Imagery Survey
Basic Visualization (B) vs. Timbre Visualization (C)
  Significant: flow (p=0.0013): B 4.3907, C 4.7222; wandering (p=0.0127): B 2.8907, C 3.1926
  Not significant: force (p=0.1212): B 4.2611, C 4.4111; interior (p=0.0618): B 3.9759, C 4.1519; movement (p=0.7005): B 4.0815, C 4.1278
Only Music (A) vs. Timbre Visualization (C)
  Significant: wandering (p=0.0023): A 2.7259, C 3.1926
  Not significant: flow (p=0.1178): A 4.4926, C 4.7222; force (p=0.0614): A 4.0741, C 4.4111; interior (p=0.0997): A 3.7926, C 4.1519; movement (p=0.2692): A 3.9037, C 4.1278

D3. Wilcoxon Sign-Ranked Analysis for the Entertainment Survey
Basic Visualization (B) vs. Timbre Visualization (C)
  Significant: entertained (p=0.0349): B 3.9944, C 4.2778
  Not significant: stimulated (p=0.2745): B 3.0259, C 3.2963; dancing (p=0.8488): B 3.1148, C 3.287; energized (p=0.8191): B 3.4537, C 3.7944; moving (p=0.1506): B 4.1037, C 4.45; animated (p=0.4847): B 3.7463, C 4.2537; excited (p=0.1943): B 3.3296, C 3.7667; rhythm (p=0.1576): B 4.4648, C 4.9852
Only Music (A) vs. Timbre Visualization (C)
  Significant: entertained (p=0.0068): A 3.8204, C 4.2778; animated (p=0.0090): A 3.7463, C 4.2537; excited (p=0.0245): A 3.3296, C 3.7667; rhythm (p=0.0079): A 4.4648, C 4.9852
  Not significant: stimulated (p=0.1585): A 3.0259, C 3.2963; dancing (p=0.8717): A 3.1148, C 3.287; energized (p=0.0550): A 3.4537, C 3.7944; moving (p=0.1331): A 4.1037, C 4.45

Appendix E. Post Survey Questionnaires

Curriculum Vitae

ChungHa Lee, M.S.
Student, GIST IIT School of Integrated Technology, Soft Computing & Interaction Laboratory
E-mail: [email protected]
GitHub: https://github.com/ChungHaLee
Contact: +82-10-2667-6489

Education
M.S., Culture Technology (HCI), 2023, Gwangju Institute of Science and Technology.
B.Ed., Pedagogy, 2021, Korea National University of Education.
History Education (Double Major), 2021, Korea National University of Education.

Skills
JavaScript, Python, User Experience and Interface; a good writer, a communicator for problem-solving, a fast learner.

Research Interests
HCI, Visualization, Application of Role-Playing Games in Education, User-Centered Design

Publications
1. ChungHa Lee, YouJin Choi, Junryeol Jeon, Jin-Hyuk Hong. (2022). A Study on the Music Visualization Tool for Deaf and Hard-of-Hearing: From the Perspective of Exploration and Customization. Proceedings of the Korean Information Science Society Conference, 1474–1476.
2. YouJin Choi, Junryeol Jeon, ChungHa Lee, Yeo-Gyeong Noh, Jin-Hyuk Hong. (2022). Cross-modal Music Palette: A Music Conceptualization Tool for the Deaf and Hard of Hearing to Enjoy Music. Under Review.
3. YouJin Choi, ChungHa Lee, Jin-Hyuk Hong. (2021). Design and Development of an Emotion Annotation System for Deaf and Hard-of-Hearing. Proceedings of the Korean Information Science Society Conference, 974–976.
4. JooYeong Kim, ChungHa Lee, JuYeon Kim, Jin-Hyuk Hong. (2021). Interactive Description to Enhance Accessibility and Experience of Deaf and Hard-of-Hearing Individuals in Museums. Under Review.

Projects
1. Development of Assistive Technology of Music and Dance for the Deaf and Hard-of-Hearing to Entertain Music. (2022. 03 – Present)
2. HCI+AI Convergence for Human-Centered Physical System Design. (2022. 03 – Present)
3. Development of Intelligent Exhibition Labels with Korean Sign Language Conversion Technology for the Deaf and Hard-of-Hearing. (2021. 03 – 2022. 03)

Awards and Certificates
2018. 02 NAU (Northern Arizona University) Education Program
2020. 01 DCU (Dublin City University) English Language Training
2020. 08 Grand Prize, POSCO Artificial Intelligence & Big Data Academy

Extracurriculars
KNUE Education Donors, Team Leader of Multicultural Education
2016. 07. Education Donation Activities at Jinbo High School, Republic of Korea.
2016. 08. Multicultural Eoulim Camp at Daeso Elementary School, Republic of Korea.
2016. 12. Education Donation Activities at Jinhae Girls' High School, Republic of Korea.
2017. 08. Multicultural Eoulim Camp at Daeso Elementary School, Republic of Korea.
2018. 01. Education Donation Activities at Gaeryung Middle School, Republic of Korea.
2018. 01. Multicultural Eoulim Camp at Deoksan Elementary School, Republic of Korea.
synthetic_cpt
1
OFDM_Emitter_Identification_Method_Based_on_Data_Augmentation_and_Contrastive_Learning.pdf
Few-Shot Specific Emitter Identification via Hybrid Data Augmentation and Deep Metric Learning

Cheng Wang, Xue Fu, Yu Wang, Guan Gui, Senior Member, IEEE, Haris Gacanin, Fellow, IEEE, Hikmet Sari, Life Fellow, IEEE, and Fumiyuki Adachi, Life Fellow, IEEE

Cheng Wang, Xue Fu, Yu Wang, Guan Gui, and Hikmet Sari are with the College of Telecommunications and Information Engineering, Nanjing University of Posts and Telecommunications, Nanjing 210003, China (e-mail: [email protected], [email protected], [email protected], [email protected], [email protected]). Bamidele Adebisi is with the Department of Engineering, Faculty of Science and Engineering, Manchester Metropolitan University, Manchester M1 5GD, United Kingdom (e-mail: [email protected]). Haris Gacanin is with the Institute for Communication Technologies and Embedded Systems, RWTH Aachen University, Aachen 52062, Germany (e-mail: [email protected]). Fumiyuki Adachi is with the International Research Institute of Disaster Science (IRIDeS), Tohoku University, Sendai 980-8572, Japan (e-mail: [email protected]).

arXiv:2212.00252v1 [eess.SP] 1 Dec 2022

Abstract—Specific emitter identification (SEI) is a promising physical-layer authentication technology and one of the most important complements to upper-layer authentication. Radio frequency fingerprint (RFF)-based SEI distinguishes one emitter from another by the immutable RF characteristics of its electronic components. Owing to the powerful ability of deep learning (DL) to extract hidden features and perform classification, it can extract highly separable features from massive signal samples, thus enabling SEI. Considering the condition of limited training samples, we propose a novel few-shot SEI (FS-SEI) method based on hybrid data augmentation and deep metric learning (HDA-DML) which gets rid of the dependence on auxiliary datasets. Specifically, HDA, consisting of rotation and CutMix, is designed to increase data diversity, and DML is used to extract highly discriminative semantic features. The proposed HDA-DML-based FS-SEI method is evaluated on an open-source, large-scale, real-world automatic dependent surveillance-broadcast (ADS-B) dataset and a real-world WiFi dataset. The simulation results on the two datasets show that the proposed method achieves better identification performance and higher feature discriminability than five recent FS-SEI methods.

Index Terms—Specific emitter identification (SEI), few-shot learning (FSL), data augmentation, deep metric learning.

I. INTRODUCTION

In recent years, the rapid development of the Internet of Things (IoT) has accelerated the integration of many edge applications [1], [2]. Due to the high density of IoT devices, it is necessary to secure IoT communications, and it is therefore important to conduct specific emitter identification (SEI), which can serve as a method of identification and certification [3]. Owing to the increased availability of big data and the growth of hardware computing power, deep learning (DL) and deep neural networks (DNNs) for joint feature extraction and classification have been successfully applied in many fields [4]–[8]. There are some DL-based SEI methods designed from the perspective of models, such as convolutional neural network (CNN)-based methods [4] and recurrent neural network (RNN)-based methods [9], which achieve great performance.
In addition, there are some methods using data in the transform domain as input of DNN to further improve identification performance, such as bispectrum [10] and differential constellation trace figure (DCTF) [11]. DL-based SEI methods which are sufficient data-driven methods rely heavily on a large number of labeled samples to fully train DNN [12], while massive samples are not available in practical non-cooperative strong adversarial environments, making the identification performance of these methods decrease sharply due to insufficient training of the network. To overcome the limitations imposed by the dependence of DL-based SEI on massive samples, the study of SEI for few- shot (FS) scenarios has been considered. There are methods to study FS-SEI problem from the perspective of meta- learning [13], [14], deep metric learning (DML) [15] and data augmentation [16], which achieve good identification performance. Although the above FS-SEI methods achieve better these methods are still dependent on auxiliary datasets. These methods obtain a set of good initialization parameters from auxiliary dataset and then fine-tune the model parameters slightly on the target FS dataset. These auxiliary datasets are often extremely similar to the FS dataset, but it is often difficult to obtain such an auxiliary dataset in practical tasks. Liu et al. [22] and Cai et al. [23] considered the scene without auxiliary datasets, and they used adversarial training (AT) and virtual adversarial training (VAT) to achieve good identification performance, respectively, but the models of these two methods have lots of parameters which makes the models difficult to train. identification performance, In this paper, to overcome the dependence of DL-based SEI on plenty of training samples and get rid of reliance on auxiliary datasets, we propose a FS-SEI method based on Hybrid Data Augmentation and Deep Metric Learning (HDA- DML). Specifically, HDA is used to increase the quantity and diversity of training samples and improve the robustness of the model, while DML is used to extract high discriminative semantic features. The main contributions of this paper are summarized as follows: • We propose a HDA-based FS-SEI method, in which rotation and CutMix augment the dataset in the data preprocessing and training process, respectively, and the DNN can learn more data distribution information from the augmented dataset. • We propose a DML-based FS-SEI method, in which loss is used as the regularization term on the triplet IEEE WIRELESS COMMUNICATIONS LETTERS, VOL. XX, NO. XX, XXX 2022 2 objective function to enable efficient learning in FS scenarios and improve the discriminability between inter- class semantic features. • We conduct the experiments to validate the proposed HDA-DML-based FS-SEI method on ADS-B and WiFi dataset. The experimental results show that the proposed identification performance and method has the best feature discriminability. II. PROBLEM FORMULATION The system model of the proposed HDA-DML-based FS- SEI is shown in Fig. 1. The general three steps of system model can be described as: data collection, model training and identification. III. THE PROPOSED HDA-DML-BASED FS-SEI METHOD The framework of HDA-DML is shown in Fig. 2. 
We use rotation and CutMix to extend training dataset, extract semantic features of the augmented samples via a complex- valued CNN (CVCNN) [17] which can possess a more efficient and powerful feature extraction capability than CNN for complex signals containing coupling information, and optimize the CVCNN by using triplet loss as the regularization term on cross-entropy (CE) loss to extract separable and discriminative semantic features. Fig. 1. System model of HDA-DML-based FS-SEI. Considering a general machine learning-based SEI problem, the goal of the problem is to find a maping function to minimize the expected error approximately, which can be written as: min f ∈F εem = min f ∈F E(x,y)∈D L(f (x), y), (1) where E denotes computing average, f (·) is the function that maps sample x to its predicted category, and L(·) represents the object function that compares the predicted category f (x) with the ground-truth category y. x represents the input sample with in-phase quadrature (IQ) format, and y respresents the ground-truth category of the corresponding sample. We use D = {(xi, yi)}N i=1 to represent the dataset, in which N is the number of samples, xi ∈ X , yi ∈ Y, where X and Y represent sample space and category space, respectively. Different from the SEI problem, where the D consists of massive samples, the goal of FS-SEI problem without auxiliary dataset is to train an excellent mapping function f (·) using few samples. The problem can be described by few-shot dataset Df s = {Dtr, Dte}, where Dtr = {(xi, yi)}Ntr i=1 is training dataset and Dte = {(xi, yi)}Nte i=1 is testing dataset. there are C categories with K samples per Specifically, category for Dtr, and the total number of samples Ntr in Dtr which is formulated as Ntr = C × K is usually small. Thus, it is denoted as a “C-ways, K-shots” problem, the goal of which is to use just Ntr samples in Dtr to train an excellent mapping function f (·). Hence, it can be formulated so as to to perform the minimization below: Fig. 2. The framework of HDA-DML. A. The Data Augmentation of HDA-DML Data augmentation has wide application in DL-based SEI methods especially in FS-SEI methods, which can increase the diversity of training dataset, prevent model from over fitting and improve the robustness of the model. In this paper, two data augmentation methods, rotation and CutMix, are used to improve the diversity of training samples. These two methods are described as follows. 1) Rotation [16]: For a sample point in an original signal sample (I, Q), the augmented point (I (cid:48), Q(cid:48)) can be obtained by the following rotation transformation: (cid:20)cos α − sin α (cid:21) cos α sin α (cid:21) (cid:20) I Q (cid:20) I (cid:48) Q(cid:48) (3) = (cid:21) , where α ∈ {0, 0.5π, π, 1.5π} is the angle of rotation. Rotation can extend one original signal sample into four signal samples. 2) CutMix [18]: Assume that x and y represent the input training samples and categories, respectively. The purpose of CutMix is to generate a series of new samples ( ˜x, ˜y) by combining two original samples (xA, yA) and (xB, yB), both of which belong to the training dataset Dtr, and then we feed the generated new samples into the CVCNN. Compared with other data augmentation, optimizing the CutMix-aided CVCNN not only makes full use of all information of the samples, but also uses mixed samples and mixed labels, which can fully consider the global distribution of data, so the optimized CVCNN will have better robustness towards unknown samples in testing process. 
Specifically, the new samples ( ˜x, ˜y) can be generated through the following formulas: ˜x = M (cid:12) xA + (1 − M ) (cid:12) xB, ˜y = λyA + (1 − λCM )yB, (4) (5) min f ∈F εem = min f ∈F E(x,y)∈Dtr L(f (x), y). (2) where M denotes a binary mask, the size of which is same as original sample, and it indicates where to delete and fill in IEEE WIRELESS COMMUNICATIONS LETTERS, VOL. XX, NO. XX, XXX 2022 3 two samples, and 1 represents a binary mask filled by ones, the size of which is also same as original sample. (cid:12) represents element-wise multiplication. λCM is the combination ratio between two original samples, which is randomly selected from beta distribution Beta(1, 1). B. The Objective Function Regularized with Metric In addition to increasing the diversity of FS samples, we also impose a regularization term on the objective function to enable efficient learning in FS scenarios. DML can learn a suitable distance to optimize the performance of the classifier and is an efficient approach for learning few samples [26]. Compared with the network based only on CE loss, the network with metric loss can not only extract the separable semantic features, but also extract the discriminative semantic features [27]. In this paper, to make the distance between similar semantic features (from the same category) closer and the distance between different semantic features (from the different categories) farther, the CVCNN in HDA-DML uses a triplet loss [19] as regularization term of objective function to optimize feature extraction. The triplet loss function LT riple can be formulated as: LT riple = N (cid:88) i=1 [dap − dan + γ]+ , (6) i ) − g (xn i ) − g (xp i , dan = (cid:107)g (xa i and xn where dap = (cid:107)g (xa i )(cid:107)2 stands for the distance i and xp between xa i )(cid:107)2 represents the distance between xa i , and g(x) represents the semantic feature of sample x. xa i is a anchor sample which is randomly selected from any category, xp is a positive sample which i represents other samples of the same category as the anchor sample xa is a negative sample which represents samples of different categories from the anchor sample xa i . || · ||2 indicates Euclidean distance, γ is a margin which is an adjustable hyperparameter that can take on a positive value and [·]+ denotes positive part. i , and xn i In this paper, the triplet loss is used as the regularization term on the CE loss to optimize the CVCNN to extract the semantic features with both separability and high discriminability. The objective function can be written as follows: Ljoint = LCE + λLT riple, (7) where LCE represents the CE loss. The threshold λ is used to balance the two loss functions. Specifically, the values of γ and λ are set with reference to [15]. C. Training Prodedure Algorithm 1 Training procedure of the proposed HDA-DML- based FS-SEI method. 
Require: • lr: learning rate • θ: the parameters of network • x, x(cid:48): raw samples and rotated samples • xa, xp, xn: anchor, positive and negative samples • ˜x, ˜y: sample and label after CutMix • M : binary mask of CutMix • λCM : combination ratio of CutMix • LCE, LT riple, Ljoint: CE loss, triplet loss and joint loss • λ: threshold between CE loss and triplet loss • T : the number of training iterations • B: the number of batches in a training iteration Input: Few-shot training dataset Dtr = {(xi, yi)}Ntr i=1 Output: Predicted categories ˆy 1: Data preprocessing: • Rotation: x(cid:48) ← x • Power normalization: x(cid:48) ← x(cid:48)−x(cid:48) min max−x(cid:48) 2: Building network and randomly initializing θ 3: for t = 1 to T do 4: x(cid:48) min for b = 1 to B do ˜x(cid:48) = M (cid:12) x(cid:48) ˜y = λCM yi + (1 − λCM )yj Feeding the augmented signals { ˜x(cid:48), ˜y} into CVCNN i + (1 − M ) (cid:12) x(cid:48) j Extracting semantic features g( ˜x(cid:48)) Calculating the predicted categories f (g( ˜x(cid:48))) Computing objective loss function: Ljoint( ˜x(cid:48), ˜y) = LCE(f (g( ˜x(cid:48))), ˜y)+λLT riple(g( ˜xa(cid:48)), g( ˜xp(cid:48)), g( ˜xn(cid:48))) Updating parameters of CVCNN with backward propagation: θ ← Adam(∇θ, Ljoint, lr, θ) end for 12: 13: end for IV. EXPERIMENTAL RESULTS A. Simulation Parameters In this paper, we use two different datasets to evaluate HDA-DML-based FS-SEI method. The first dataset that was presented in [20] contains ADS-B signals collected in a real- world large-scale airspace and is suitable for SEI researches. Specifically, we randomly select 10 categories of long IQ signals, the number of sampling points is 6000 and the signal- to-noise ratio (SNR) is 30 dB. The second dataset presented in [21] contains WiFi signals collected from 16 X310 USRP devices, the SNR of which is also 30 dB and we cut them into signal samples with 6000 sampling points. 5: 6: 7: 8: 9: 10: 11: The total training procedure of HDA-DML is described in Algorithm 1. Adam optimizer is used for back propagation to update the parameters of CVCNN. In more detail, the rotation augmentation is operated in data preprocessing, while the CutMix augmentation is operated in the training process. We build five few-shot experimental scenarios to evaluate the identification performance of HDA-DML-based FS-SEI method. For both datasets, each scenario is {1, 5, 10, 15, 20} shots. TABLE I shows the details of other experimental parameters. IEEE WIRELESS COMMUNICATIONS LETTERS, VOL. XX, NO. XX, XXX 2022 4 TABLE I DETAILED SIMULATION PARAMETERS. Items Margin γ of Triplet Loss Threshold λ Optimizer Batch Size Learning Rate lr Simulation Platforms Parameters 5 0.01 Adam 16 0.01 Pytorch NVIDIA GeForce GTX 1080Ti B. Evaluation Criteria and Benchmarks In this paper, we use identification accuracy and silhouette coefficient to evaluate identification performance and semantic feature discriminability, respectively. The identification accu- racy can be expressed as the ratio between the number of correctly identified test samples and the total number of test samples. The silhouette coefficient can measure the cohesion and separation degree of the extracted semantic features, and we use it as an indicator of discriminability between features. In this paper, we compare the HDA-DML method with three latest FS-SEI methods, namely, Triplet-CVCNN [15], CRCN- AT [22] and VAT [23]. In addition, HDA-DML method is also compared with Softmax-CVCNN [24] and DA-CVCNN [16]. 
Taking fairness into consideration, without changing the core idea of these methods, we use the same training samples with IQ format, the same data preprocessing method, optimizer, learning rate and network structure. C. Identification Performance Comparison 1) HDA-DML vs. Benchmarks: Due to the instability the experimental results shown are of the sample quality, the average of the results of 100 Monte Carlo simulations. The identification performance of HDA-DML-based FS-SEI method and comparative methods in ADS-B dataset and WiFi dataset are shown in Fig. 3 and Fig. 4, respectively. It can be seen that our proposed method has a clear improvement in identification performance compared to the comparison methods for all few-shot scenarios in both datasets. This indicates that the robustness of our proposed method is better. Specifically, compared with the comparison methods, in ADS- B dataset, the identification accuracy of HDA-DML can be improved by at least 3%, and in WiFi dataset, the identification accuracy can be improved by at least 10%. 2) Visualization of Semantic Features: The silhouette coefficients of HDA-DML and the comparison methods on the two datasets in “20 shots” scenario are shown in TABLE II. As can be seen from the table, on both datasets, HDA- DML is able to achieve the best silhouette coefficient and the discriminability of semantic features is the best. In this paper, we use t-distributed stochastic neighbor embedding (t-SNE) [25] for feature compression, and the visualization figures of the compressed semantic features are shown in Fig. 5. Due to the limitation of space, we only give the figures of ADS-B dataset. It can be clearly seen that Softmax-CVCNN only uses the CE loss function, and the separability between different categories is weak; Triplet-CVCNN has smaller intra-class Fig. 3. The identification performance of HDA-DML-based FS-SEI method and benchmarks in ADS-B dataset, where “All Data” represents the identification accuracy on the test dataset after training with 2611 samples. Fig. 4. The identification performance of HDA-DML-based FS-SEI method and benchmarks in WiFi dataset, where “All Data” represents the identification accuracy on the test dataset after training with 34140 samples. loss, but is difficult inter-class distance compared with distance and larger Softmax-CVCNN because it uses the triplet the extracted semantic features of Softmax-CVCNN and Triplet- CVCNN are haphazardly distributed throughout the feature space and it to find clear boundaries between different categories of samples because these methods can not learn the data information efficiently from few samples. Since data augmentation can extend the diversity of samples, the features extracted by DA-CVCNN have better separability but still weak discriminability, the semantic features extracted by our proposed HDA-DML method have the best separability and discriminability, which are significantly better than the other compared methods. IEEE WIRELESS COMMUNICATIONS LETTERS, VOL. XX, NO. XX, XXX 2022 5 TABLE II SILHOUETTE COEFFICIENTS OF HDA-DML AND COMPARISON METHODS. Datasets HDA-DML (our proposed) ADS-B WiFi 0.46245 0.69685 CRCN-AT 0.00047 0.69077 VAT −0.01991 0.61927 Triplet-CVCNN 0.10381 0.14791 DA-CVCNN 0.044798 0.601797 Softmax-CVCNN −0.02991 0.16032 [6] N. Kato, B. Mao, F. Tang, Y. Kawamoto, and J. Liu, “Ten Challenges in Advancing Machine Learning Technologies Towards 6G,” IEEE Wireless Communications, vol. 27, no. 3, pp. 96–103, Jun. 2020. [7] S. Chang, S. Huang, R. Zhang, Z. 
Feng, and L. Liu, “Multi-Task Learning Based Deep Neural Network for Automatic Modulation Classification,” IEEE Internet of Things Journal, vol. 9, no. 3, pp. 2192– 2206, Feb. 2022. [8] Q. Zheng, P. Zhao, D. Zhang, and H. Wang, “MR-DCAE: Manifold Regularization-based Deep Convolutional Autoencoder for Unauthorized Broadcasting Identification,” International Journal of Intelligent Sys- tems, vol. 36, no. 12, pp. 7204–7238, 2021. [9] X. Wang, Y. Zhang, H. Zhang, Y. Li and X. Wei, “Radio Frequency Signal Identification Using Transfer Learning Based on LSTM,” Circuits, Systems, and Signal Processing, vol. 39, no.11, pp. 5514–5528, Apr. 2020. [10] L. Ding, S. Wang, F. Wang and W. Zhang, “Specific Emitter Iden- tification via Convolutional Neural Networks,” IEEE Communications Letters, vol. 22, no. 12, pp. 2591–2594, Dec. 2018. [11] Y. Peng, P. Liu, Y. Wang, G. Gui, B. Adebisi and H. Gacanin, “Radio Frequency Fingerprint Identification Based on Slice Integration Cooperation and Heat Constellation Trace Figure,” IEEE Wireless Communications Letters, vol. 11, no. 3, pp. 543–547, Mar. 2022. [12] Z. Li and D. Hoiem, “Learning Without Forgetting,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 40, no. 12, pp. 2935– 2947, Dec. 2018. [13] C. Xie, L. Zhang and Z. Zhong, “Few-Shot Specific Emitter Identification Based on Variational Mode Decomposition and Meta- Learning,” Wireless Communications and Mobile Computing, 2022, early access, doi: 10.1155/2022/4481416. [14] N. Yang, B. Zhang, G. Ding, Y. Wei, G. Wei, J. Wang, and D. Guo, “Specific Emitter Identification With Limited Samples: A Model- Agnostic Meta-Learning Approach,” IEEE Communications Letters, vol. 26, no. 2, pp. 345–349, Feb. 2022. [15] Y. Wang, G. Gui, Y. Lin, H.-C. Wu, C. Yuen and F. Adachi, “Few-Shot Specific Emitter Identification via Deep Metric Ensemble Learning,” IEEE Internet of Things Journal, 2022, early access, doi: 10.1109/JIOT.2022.3194967. [16] L. Huang, W. Pan, Y. Zhang, L. Qian, N. Gao and Y. Wu, “Data Augmentation for Deep Learning-Based Radio Modulation Classification,” IEEE Access, vol. 8, pp. 1498-1506, 2020. [17] Y. Wang, G. Gui, H. Gacanin, T. Ohtsuki, O. A. Dobre and H. V. Poor, “An Efficient Specific Emitter Identification Method Based on Complex- Valued Neural Networks and Network Compression,” IEEE Journal on Selected Areas in Communications, vol. 39, no. 8, pp. 2305–2317, Aug. 2021. [18] S. Yun, D. Han, S. Chun, S. J. Oh, Y. Yoo and J. Choe, “CutMix: Regularization Strategy to Train Strong Classifiers With Localizable Features,” in IEEE/CVF International Conference on Computer Vision (ICCV), 2019, pp. 6022–6031. [19] X. Dong, J. Shen, “Triplet Loss in Siamese Network for Object Tracking,” in European Conference on Computer Vision (ECCV), 2018, pp. 459–474. [20] Y. Tu, Y. Lin, H. Zha, J. Zhang, Y. Wang, G. Gui, and S. Mao, “Large Scale Real-World Radio Signal Recognition with Deep Learning,” Chinese Journal of Aeronautics, vol. 35, pp. 35–48, Sep. 2022. [21] K. Sankhe, M. Belgiovine, F. Zhou, S. Riyaz, S. Ioannidis and K. Chowdhury, “ORACLE: Optimized Radio Classification through Convolutional Neural Networks,” in IEEE Conference on Computer Communications, 2019, pp. 370-378. [22] C. Liu, X. Fu, Y. Ge, Y. Wang, Y. Lin, G. Gui and S. Hikmet, “A Robust Few-Shot SEI Method Using Class-Reconstruction and Adversarial Training,” in IEEE 95th Vehicular Technology Conference, 2022, pp. 1–5. [23] Z. Cai, W. Ma, X. Wang, H. Wang and Z. 
Feng, ”The Performance Analysis of Time Series Data Augmentation Technology for Small Sample Communication Device Recognition,” IEEE Transactions on Reliability, 2022, early access, doi: 10.1109/TR.2022.3178707. Fig. 5. Visualization of semantic features in ADS-B dataset. V. CONCLUSION We proposed an effective FS-SEI method based on HDA- DML. The proposed method considers FS-SEI without auxiliary datasets and innovatively combines two types of data augmentation, not only to achieve data expansion but also to consider the global features of the samples, while using the triplet loss as the regularization of objective function to optimize the network and extract discriminative semantic features. We validated the performance of this method on the ADS-B and WiFi dataset. Without the use of auxiliary datasets, the proposed method achieves a great improvement in identification performance and feature discriminability compared to the comparison methods. REFERENCES [1] J. Hwang, L. Nkenyereye, N. Sung, J. Kim and J. Song, “IoT Service Slicing and Task Offloading for Edge Computing,” IEEE Internet of Things Journal, vol. 8, no. 14, pp. 11526–11547, July 2021. [2] D. C. Nguyen, M. Ding, et al., “6G Internet of Things: A Comprehensive Survey,” IEEE Internet of Things Journal, vol. 9, no. 1, pp. 359–383, Jan. 2022. [3] B. Li, Z. Fei, C. Zhou and Y. Zhang, “Physical-Layer Security in Space Information Networks: A Survey,” IEEE Internet of Things Journal, vol. 7, no. 1, pp. 33–52, Jan. 2020. [4] K. Merchant, S. Revay, G. Stantchev and B. Nousain, “Deep Learning for RF Device Fingerprinting in Cognitive Communication Networks,” IEEE Journal of Selected Topics in Signal Processing, vol. 12, no. 1, pp. 160–167, Feb. 2018. [5] B. He, F. Wang, Y. Liu and S. Wang, “Specific Emitter Identification via Multiple Distorted Receivers,” in IEEE International Conference on Communications Workshops (ICC Workshops), 2019, pp. 1–6. IEEE WIRELESS COMMUNICATIONS LETTERS, VOL. XX, NO. XX, XXX 2022 6 [24] Y. Tu, Y. Lin, C. Hou and S. Mao, “Complex-Valued Networks for Automatic Modulation Classification,” IEEE Transactions on Vehicular Technology, vol. 69, no. 9, pp. 10085–10089, Sept. 2020. [25] L. Maaten and G. Hinton, “Visualizing Data Using t-SNE,” Journal of Machine Learning Research, pp. 2579-2605, 2008. [26] P. Man, C. Ding, W. Ren and G. Xu, “A Specific Emitter Identification Algorithm under Zero Sample Condition Based on Metric Learning,” Remote Sensing, vol. 13, no. 23, pp. 4919–4928, Dec. 2021. [27] Z. Zhang, Y. Li and M. Gao, “Few-Shot Learning of Signal Modulation Recognition Based on Attention Relation Network,” in European Signal Processing Conference (EUSIPCO), 2021, pp. 1372–1376.
synthetic_cpt
8
Data_Selection_for_Language_Models_via_Importance_Resampling.pdf
3 2 0 2 v o N 8 1 ] L C . s c [ 3 v 9 6 1 3 0 . 2 0 3 2 : v i X r a Data Selection for Language Models via Importance Resampling Sang Michael Xie, Shibani Santurkar, Tengyu Ma, Percy Liang Stanford University {xie, shibani, tengyuma, pliang}@cs.stanford.edu Abstract Selecting a suitable pretraining dataset is crucial for both general-domain (e.g., GPT-3) and domain-specific (e.g., Codex) language models (LMs). We formalize this problem as selecting a subset of a large raw unlabeled dataset to match a desired target distribution given unlabeled target samples. Due to the scale and dimensionality of the raw text data, existing methods use simple heuristics or require human experts to manually curate data. Instead, we extend the classic importance resampling approach used in low-dimensions for LM data selection. We propose Data Selection with Importance Resampling (DSIR), an efficient and scalable framework that estimates importance weights in a reduced feature space for tractability and selects data with importance resampling according to these weights. We instantiate the DSIR framework with hashed n-gram features for efficiency, enabling the selection of 100M documents from the full Pile dataset in 4.5 hours. To measure whether hashed n-gram features preserve the aspects of the data that are relevant to the target, we define KL reduction, a data metric that measures the proximity between the selected pretraining data and the target on some feature space. Across 8 data selection methods (including expert selection), KL reduction on hashed n-gram features highly correlates with average downstream accuracy (r = 0.82). When selecting data for continued pretraining on a specific domain, DSIR performs comparably to expert curation across 8 target distributions. When pretraining general-domain models (target is Wikipedia and books), DSIR improves over random selection and heuristic filtering baselines by 2–2.5% on the GLUE benchmark.1 1 Introduction Given a fixed compute budget, the choice of pretraining data is critical for the performance of language models (LMs) (Brown et al., 2020, Du et al., 2021, Gururangan et al., 2020, Hoffmann et al., 2022, Raffel et al., 2019). Existing works rely on heuristics to select training data. For example, GPT-3 (Brown et al., 2020) and PaLM (Chowdhery et al., 2022) filter web data for examples that are closer to formal text from Wikipedia and books as a proxy for high quality, a method which we call heuristic classification. Specifically, they train a binary classifier to discriminate formal text from web data and select web examples that have a predicted probability above a noisy threshold (Brown et al., 2020, Du et al., 2021, Gao et al., 2020). However, heuristic classification does not guarantee that the selected data is distributed like formal text. As a second example, domain-specific LMs such as Minerva (Lewkowycz et al., 2022) and Codex (Chen et al., 2021) (math and code LMs, respectively) employ domain-adaptive pretraining (DAPT) (Gururangan et al., 2020), where the model is initialized from a base LM and continues to be pretrained on a domain-specific dataset to achieve gains over the base LM on that domain. The domain-specific datasets are typically manually curated, but a framework for automating data selection could save effort and increase the amount of relevant training data. 1Code, selected data, and pretrained models are available at https://github.com/p-lambda/dsir. 
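The DSIR recipe described above — fit simple bag-of-hashed-n-gram models of the target and raw distributions, compute importance weights in that feature space, then resample — can be illustrated with a short sketch. This is a hypothetical simplification, not the released p-lambda/dsir implementation; the function names, smoothing constant, and bucket count are illustrative choices.

```python
# Illustrative sketch of DSIR-style selection with hashed n-gram features:
# estimate target/raw bucket frequencies over unigrams and bigrams, compute
# log importance weights, then resample raw examples without replacement.
import numpy as np

NUM_BUCKETS = 10_000  # illustrative; n-grams are hashed into a fixed number of buckets

def hashed_ngram_counts(text: str, num_buckets: int = NUM_BUCKETS) -> np.ndarray:
    counts = np.zeros(num_buckets)
    tokens = text.lower().split()
    ngrams = tokens + [" ".join(p) for p in zip(tokens, tokens[1:])]  # unigrams + bigrams
    for g in ngrams:
        counts[hash(g) % num_buckets] += 1
    return counts

def bucket_distribution(texts, smoothing: float = 1.0) -> np.ndarray:
    total = sum(hashed_ngram_counts(t) for t in texts) + smoothing
    return total / total.sum()

def select(raw_texts, target_texts, k: int, seed: int = 0):
    p_feat = bucket_distribution(target_texts)  # target feature distribution
    q_feat = bucket_distribution(raw_texts)     # raw feature distribution
    log_ratio = np.log(p_feat) - np.log(q_feat)
    # log importance weight of a document under the bag-of-n-grams model
    log_w = np.array([hashed_ngram_counts(t) @ log_ratio for t in raw_texts])
    probs = np.exp(log_w - log_w.max())
    probs /= probs.sum()
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(raw_texts), size=k, replace=False, p=probs)
    return [raw_texts[i] for i in idx]
```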
Figure 1: Given a large raw dataset such as The Pile (Gao et al., 2020) and a smaller target dataset (e.g., Wikipedia + books), we aim to select a subset of the raw data that is distributed like the target in some feature space. Our method, DSIR, first estimates importance weights using raw and target data in an n-gram feature space. The importance weights are used to resample a subset of the raw dataset.

In this paper, we consider the problem of data selection: given a large and diverse raw dataset (e.g., The Pile (Gao et al., 2020)) and a smaller dataset sampled from a desired target distribution, choose a subset of the raw data that is distributed similarly to the target (Figure 1). While a natural approach is to resample the raw data according to importance weights (importance resampling (Rubin, 1988)), estimating importance weights on high dimensional data such as text is often statistically intractable (Bengtsson et al., 2008, Snyder et al., 2008). Instead, our Data Selection with Importance Resampling (DSIR) framework efficiently estimates importance weights over a featurization of the raw and target distributions (Section 3). Our framework first maps the raw and target data onto some feature space and resamples a subset of raw data according to importance weights computed in this feature space. DSIR is extensible via the choice of feature space and importance estimator, which specify what aspects of the data we care about.

What is a feature space that both allows for efficient computation and preserves aspects of the data that are relevant for the target? In Section 4, we instantiate DSIR with hashed n-gram features, where n-grams are hashed onto a fixed number of buckets, for efficiency and scalability. The importance estimator is parameterized by bag-of-words generative models on the hashed n-grams, learned by simply counting the hash bucket frequencies. DSIR with hashed n-grams enables the selection of 100M documents from the full Pile dataset in 4.5 hours.

To evaluate how well hashed n-grams preserve the aspects of the data that are relevant for the target, in Section 6 we define KL reduction, a data metric that measures how much a selected dataset reduces the Kullback-Leibler (KL) divergence to the target (in some feature space) over random selection (KL(target∥random)−KL(target∥selected)). We show in Section 5 that KL reduction highly correlates with average downstream performance (Pearson r = 0.82) across 8 data selection methods, including expert selection.

We consider selecting data from The Pile (1.6B examples) for continued pretraining of domain-specific LMs and training general-domain LMs from scratch. First, we select data for continued pretraining (Gururangan et al., 2020) of domain-specific LMs in a controlled setting where the target samples are unlabeled training inputs from a downstream dataset (Section 5). We perform continued pretraining on the selected data starting from RoBERTa (Liu et al., 2019b) and evaluate by fine-tuning on the downstream dataset (whose unlabeled inputs were also used as the target for data selection). On 8 datasets from 4 domains (CS papers, biomedical papers, news, reviews), DSIR improves over RoBERTa (no continued pretraining) by 2% on average and is even comparable to continued pretraining on expert-curated data from Gururangan et al. (2020). For general-domain LMs (Section 7), the data selection target is formal, clean text from Wikipedia and books, following GPT-3 (Brown et al., 2020).
We train a masked language model (MLM) from scratch on the selected data and evaluate by fine-tuning on GLUE (Wang et al., 2019). In controlled experiments, heuristic classification performs comparably to random sampling from The Pile, possibly because The Pile is already filtered using heuristic classification. DSIR improves over both baselines by 2–2.5% on average on GLUE. We publish the selected dataset and the code for DSIR to improve future LM pretraining.

2 Setup
Given a small number of target text examples x′1, x′2, ..., x′n from a target distribution of interest p and a large raw dataset x1, x2, ..., xN drawn from distribution q, we aim to select k examples (k ≪ N) from the raw dataset that are similar to the target.

Selection via heuristic classification. As a starting point, we first define the heuristic classification method used by GPT-3/The Pile/PaLM (Brown et al., 2020, Chowdhery et al., 2022, Gao et al., 2020). In heuristic classification, we train a binary classifier f : X → [0,1] to output the probability that an input is sampled from the target distribution. The model is typically a fasttext linear classifier on n-gram feature vectors (usually unigrams and bigrams) (Joulin et al., 2017). We initialize the feature vectors from pretrained fasttext word vectors. We use the trained classifier to estimate f(xi), the predicted probability that xi is sampled from the target, for all raw examples. Example xi is selected if f(xi) > 1−βi, where βi is a sample from a Pareto distribution (typically with shape parameter α = 9 (Brown et al., 2020, Chowdhery et al., 2022)). Since each example is kept or discarded independently, to select a desired number of examples k, the process must either be repeated or α must be tuned. Heuristic classification selects examples from modes of the target distribution (high f(xi)), which could lack diversity. To combat this, noise βi is added. However, it is unclear how much noise to add and there are no guarantees on the selected data distribution.

3 Data Selection with Importance Resampling
In the DSIR framework, we consider using importance resampling (Rubin, 1988) to select examples that are distributed like the target. However, estimating importance weights on high dimensional data like text is often statistically intractable without sufficient additional structure (Bengtsson et al., 2008, Gelman and Meng, 2004, Snyder et al., 2008). Instead, we employ importance resampling on a feature space Z that provides this structure. DSIR uses a feature extractor h : X → Z to map the input x to features z = h(x). The induced raw and target feature distributions are qfeat and pfeat, respectively. The goal is to select examples with features that are approximately distributed according to the target feature distribution pfeat. Depending on the choice of feature extractor, DSIR focuses on different aspects of the input. For example, an n-gram feature extractor focuses on matching the n-gram frequencies of the selected data and the target.

Our framework consists of 3 steps:
1. Learn ˆpfeat and ˆqfeat: We learn two feature distributions ˆpfeat and ˆqfeat using held-out featurized examples from the target and raw data, respectively.
2. Compute importance weights: We compute the importance weights wi = ˆpfeat(zi)/ˆqfeat(zi) for each featurized example zi = h(xi) from the N raw examples.
3. Resample: Sample k examples without replacement from a categorical distribution with probabilities wi / ∑Nj=1 wj.
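To make the three steps concrete, here is a minimal sketch that combines them with the hashed n-gram instantiation described in Section 4. It is an illustration written for this summary rather than the authors' released implementation: the helper names, the 10,000-bucket default, Python's built-in hash, and the small smoothing constant (to avoid log 0 for empty buckets) are choices made here for brevity.

```python
import numpy as np

def hashed_ngram_features(text, m=10_000):
    """Map a document to counts over m hash buckets of its unigrams and bigrams."""
    tokens = text.split()
    ngrams = tokens + [" ".join(pair) for pair in zip(tokens, tokens[1:])]
    counts = np.zeros(m, dtype=np.int64)
    for ng in ngrams:
        counts[hash(ng) % m] += 1  # a production version would use a fixed (unsalted) hash
    return counts

def fit_bag_of_ngrams(features, smoothing=1e-8):
    """Step 1: estimate bucket probabilities gamma-hat by counting."""
    totals = features.sum(axis=0) + smoothing
    return totals / totals.sum()

def dsir_select(raw_texts, target_texts, k, m=10_000, seed=0):
    rng = np.random.default_rng(seed)
    Z_raw = np.stack([hashed_ngram_features(t, m) for t in raw_texts])
    Z_tgt = np.stack([hashed_ngram_features(t, m) for t in target_texts])

    # Step 1: fit the target and raw bag-of-ngrams models
    # (the paper fits q-hat on held-out raw examples; we reuse Z_raw for brevity).
    p_feat = fit_bag_of_ngrams(Z_tgt)
    q_feat = fit_bag_of_ngrams(Z_raw)

    # Step 2: log importance weight of each raw example, log p_feat(z) - log q_feat(z),
    # which under the bag-of-ngrams model is an inner product with the count vector.
    log_w = Z_raw @ (np.log(p_feat) - np.log(q_feat))

    # Step 3: sample k examples without replacement via the Gumbel top-k trick.
    selected = np.argsort(log_w + rng.gumbel(size=len(raw_texts)))[-k:]
    return [raw_texts[i] for i in selected]
```

Because the bag-of-ngrams model factorizes over hash buckets, each document's log importance weight reduces to a single sparse inner product, which is what lets selection scale to billions of raw examples.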
Sampling without replacement avoids choosing the same example multiple times, is more statistically efficient for importance resampling (Gelman and Meng, 2004), and can be implemented efficiently with the Gumbel top-k trick (Kim et al., 2016, Kool et al., 2019, Vieira, 2014, Xie and Ermon, 2019).

4 DSIR with Hashed N-gram Features
For efficiency and scalability, we instantiate DSIR with hashed n-gram features. Later in Section 6, we test whether hashed n-gram features preserve the information needed to select data relevant to the target.

Hashed n-gram features. We consider hashed n-gram features inspired by fasttext (Joulin et al., 2017, Weinberger et al., 2009). Specifically, for each example x we form a list of unigrams and bigrams, hash each n-gram into one of m buckets (m = 10000 in this paper), and return the counts of the hashed buckets in an m-dimensional feature vector z ∈ Nm. For example, if the text input is "Alice is eating", we form the list [Alice, is, eating, Alice is, is eating], hash each element of this list to get a list of indices [1, 3, 3, 2, 0] and return the vector of counts for each index [1, 1, 1, 2, ... ]. While the hashing introduces some noise due to collisions, we find that this is a simple and effective way to incorporate both unigram and bigram information.

Bag of hashed n-grams model. We parameterize the raw and target feature distributions pfeat and qfeat as bag-of-ngrams models. The bag-of-ngrams model has parameters γ ∈ ∆m, which is a vector of probabilities on the hash buckets that sums to 1. Under this model, the probability of a feature vector z ∈ Nm is
P(z; γ) = ∏mj=1 γ[j]^z[j],   (1)
where the bracket notation selects the corresponding index in the vector. Given some featurized examples ˜z1, ..., ˜zs sampled from a feature distribution, we estimate the parameters by counting: ˆγ = (1 / ∑si=1 1⊤˜zi) ∑sj=1 ˜zj.

Speed benchmark on The Pile. To test the scalability of the framework, we benchmark DSIR with hashed n-gram features on selecting data from the full Pile dataset (Gao et al., 2020). For this test, we do not preprocess the data other than decompressing the text files for faster I/O. We use hashed n-gram features with 10k buckets, fit the raw feature distribution with 1B hashed indices from the Pile, and fit the target feature distribution with the full target dataset (ChemProt (Kringelum et al., 2016)). DSIR selects 100M documents from the full Pile dataset in 4.5 hours on 1 CPU node with 96 cores. Almost all of the time (4.36 hours) is spent computing the importance weights on the raw dataset, while fitting the feature distributions (1 minute) and resampling (6 minutes) were much faster. Increasing the number of CPU cores can further decrease the runtime.

Table 1: F1 scores for continued pretraining from the RoBERTa checkpoint (Liu et al., 2019b) on 8 downstream datasets from 4 domains (CS, Biomed, News, and Reviews). Random selection, heuristic classification, and DSIR train on 25M selected examples from The Pile. Heuristic classification and DSIR create a different pretraining dataset for every downstream dataset. All models (including DAPT (Gururangan et al., 2020)) use the same amount of training compute and results are averaged over 5 seeds, with standard deviations in parentheses. All datasets use macro-F1 except ChemProt and RCT, which use micro-F1.
Method | ACL-ARC | Sci-ERC | ChemProt | RCT | HyperPartisan | AGNews | Helpfulness | IMDB | Avg
RoBERTa (no continued pretrain) | 66.80 (1.08) | 80.14 (2.25) | 82.31 (0.54) | 86.68 (0.14) | 88.85 (2.59) | 93.35 (0.2) | 65.08 (2.29) | 94.38 (0.13) | 82.20
Random selection | 67.51 (2.60) | 80.53 (1.65) | 83.14 (0.52) | 86.85 (0.13) | 86.42 (5.33) | 93.52 (0.15) | 68.15 (1.37) | 94.49 (0.25) | 82.58
Manual curation/DAPT (Gururangan et al., 2020) | 71.84 (4.78) | 80.42 (1.57) | 84.17 (0.50) | 87.11 (0.10) | 87.23 (3.65) | 93.61 (0.12) | 68.21 (1.07) | 95.08 (0.11) | 83.46
Heuristic classification | 69.94 (2.96) | 80.52 (0.95) | 83.35 (1.07) | 86.78 (0.17) | 85.71 (6.01) | 93.54 (0.19) | 68.50 (0.79) | 94.66 (0.22) | 82.88
Top-k Heuristic classification | 71.73 (0.21) | 80.22 (0.58) | 84.11 (0.73) | 87.08 (0.21) | 88.29 (8.28) | 93.67 (0.14) | 69.18 (0.73) | 94.90 (0.14) | 83.65
DSIR | 72.86 (2.71) | 80.44 (1.13) | 85.51 (0.46) | 87.14 (0.13) | 87.01 (4.53) | 93.62 (0.19) | 68.95 (0.81) | 94.56 (0.34) | 83.76

5 Selecting Data for Domain-Specific Continued Pretraining
In this section, we use DSIR to select domain-specific data for continued pretraining. We compare DSIR to 7 other data selection methods in this continued pretraining setting.

Setup. We select data for 8 target distributions in the setting of Gururangan et al. (2020), where we perform continued pretraining of domain-specific LMs. Here, the target is a specific downstream unlabeled data distribution and we select examples from The Pile (the raw data). For each downstream dataset, we select data for continued pretraining starting from RoBERTa (Liu et al., 2019b) (see Appendix H). Following Gururangan et al. (2020), we consider 8 downstream datasets across 4 domains: Computer Science papers (ACL-ARC (Jurgens et al., 2018), Sci-ERC (Luan et al., 2018)), Biomedicine (ChemProt (Kringelum et al., 2016), RCT (Dernoncourt and Lee, 2017)), News (AGNews (Zhang et al., 2015), HyperPartisan (Kiesel et al., 2019)), and Reviews (Helpfulness (McAuley et al., 2015), IMDB (Maas et al., 2011)).

Baselines. Beyond random selection (without replacement) and heuristic classification, we also compare against manual curation (Gururangan et al., 2020) and a top-k variant of heuristic classification. In manual curation, we simply fine-tune from domain-adaptive pretraining (DAPT) checkpoints (Gururangan et al., 2020), which are the result of continued pretraining on manually-curated data. In top-k heuristic classification, we select the top-k-scoring examples according to the binary classifier used in heuristic classification. All methods select data from The Pile except for manual curation, which uses domain-specific data sources (Gururangan et al., 2020).

We perform a controlled comparison by equalizing the amount of LM training compute for all methods, measured by the number of tokens processed during training, following the compute budget in Gururangan et al. (2020). For random selection, heuristic classification, and DSIR using n-gram features (defined in Section 4), we control the number of selected examples (25M examples with fixed token length 256) and the training protocol. We standardize the fine-tuning for all models and average all results over 5 random seeds (see Appendix H for details). All the models initialize from RoBERTa-base. Before data selection via DSIR or heuristic classification, we remove extremely short (<40 words) or repetitive documents that tend to be uninformative (Appendix J).

Automatic data selection with DSIR can replace manual curation. Table 1 shows the comparison between the data selection methods. To summarize:
• On average, DSIR improves over random selection by 1.2% and manually curated data (DAPT) by 0.3%, showing the potential to replace manual curation.
• DSIR improves over heuristic classification by 0.9% and is comparable to top-k heuristic classification. We note that top-k heuristic classification is not typically used in this setting, but we find that it may be particularly suited for domain-specific data selection, where diversity may be less important than in the general-domain setting.
• Random selection improves by 0.4% on average over no continued pretraining at all, showing that additional data generally improves the downstream F1 score. All the targeted data selection methods improve over random selection.

Table 2: F1 scores for continued pretraining from RoBERTa, testing different components of heuristic classification and DSIR methods. For heuristic classification, replacing the Pareto noisy threshold with calibration and importance resampling (DSIR (fasttext discriminative)) improves F1. Generative importance weight estimators outperform discriminative importance weight estimators for DSIR. All results average over 5 seeds, with standard deviations in parentheses.

Method | ACL-ARC | Sci-ERC | ChemProt | RCT | HyperPartisan | AGNews | Helpfulness | IMDB | Avg
Heuristic classification | 69.94 (2.96) | 80.52 (0.95) | 83.35 (1.07) | 86.78 (0.17) | 85.71 (6.01) | 93.54 (0.19) | 68.50 (0.79) | 94.66 (0.22) | 82.88
DSIR (n-gram generative) | 72.86 (2.71) | 80.44 (1.13) | 85.51 (0.46) | 87.14 (0.13) | 87.01 (4.53) | 93.62 (0.19) | 68.95 (0.81) | 94.56 (0.34) | 83.76
DSIR (fasttext discriminative) | 68.46 (7.15) | 79.00 (1.50) | 84.57 (0.65) | 87.09 (0.08) | 89.18 (4.06) | 93.54 (0.14) | 68.41 (1.51) | 94.95 (0.29) | 83.15
DSIR (n-gram discriminative) | 70.35 (2.90) | 80.21 (0.85) | 85.03 (1.18) | 87.04 (0.19) | 85.49 (8.24) | 93.74 (0.07) | 68.79 (1.22) | 94.84 (0.24) | 83.19
DSIR (unigram generative) | 69.53 (0.16) | 79.69 (1.91) | 85.24 (0.88) | 87.05 (0.10) | 90.11 (5.39) | 93.42 (0.16) | 68.55 (0.78) | 94.39 (0.33) | 83.50

Discriminative importance weight estimators underperform generative estimators. We experiment with replacing components of DSIR in Table 2. First, we consider using the binary classifier from heuristic classification (which takes pretrained fasttext word vectors as input) as the importance weight estimator in DSIR. For input xi, the classifier predicts the probability of the target, f(xi). We use this to estimate importance weights f(xi)/(1−f(xi)), then resample according to these weights. This approach (DSIR (fasttext discriminative)) improves F1 by 0.3% over heuristic classification. However, this approach still underperforms DSIR by 0.6% on average, even with regularization and calibration. We consider another discriminative version of DSIR that uses hashed n-gram features as input to a logistic regression binary classifier for importance weight estimation. This differs from heuristic classification, which initializes with pretrained fasttext feature vectors and fine-tunes the features along with the classifier. This approach (DSIR (n-gram discriminative)) underperforms DSIR by 0.7%, even with regularization and calibration. These results suggest that a generative approach is better suited (or easier to tune) for importance resampling. However, the discriminative approaches still outperform random selection by 0.6%.

Selecting with n-grams improves downstream performance over unigrams. DSIR uses both unigram and bigram information to compute hashed n-gram features. We ablate the role of bigram information in hashed n-grams by using hashed unigram features (with 10k buckets) for DSIR. In Table 2, we find that DSIR with unigram features underperforms DSIR with n-grams by 0.26%, though still achieving a comparable F1 score to manual curation.
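As a concrete counterpart to the generative sketch in Section 3, the snippet below sketches the DSIR (n-gram discriminative) style of importance weight estimation under stated assumptions: the inputs are precomputed hashed n-gram count matrices (for example, produced by the earlier sketch), a regularized logistic regression distinguishes target from raw examples, and the classifier's logit gives log f(x) − log(1 − f(x)), i.e. the log importance weight. The function names are hypothetical, and the calibration step mentioned in the text is omitted.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def discriminative_log_weights(Z_raw, Z_tgt, C=1.0, seed=0):
    """Fit a regularized classifier (target = 1, raw = 0) on hashed n-gram counts and
    return each raw example's log importance weight log f(x) - log(1 - f(x))."""
    X = np.vstack([Z_raw, Z_tgt]).astype(np.float64)
    y = np.concatenate([np.zeros(len(Z_raw)), np.ones(len(Z_tgt))])
    clf = LogisticRegression(C=C, max_iter=1000, random_state=seed).fit(X, y)
    # The decision function is the logit, which equals log f/(1 - f) and avoids
    # overflow from forming the ratio explicitly.
    return clf.decision_function(Z_raw.astype(np.float64))

def resample_without_replacement(log_w, k, seed=0):
    """Gumbel top-k resampling according to the estimated importance weights."""
    rng = np.random.default_rng(seed)
    return np.argsort(log_w + rng.gumbel(size=len(log_w)))[-k:]
```

Resampling with these weights, rather than thresholding or taking the top-k scores, is what distinguishes the discriminative DSIR variants from heuristic classification.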
Overall, selecting data with unigrams is effective, but including bigrams further improves the relevance of the selected data.

Cross-domain analysis and the effect of the choice of pretraining data. DSIR assumes knowledge of the target distribution, but what happens if the target dataset is not representative of the target distribution? To test the effect of varying the target distribution on downstream performance, we consider every pair of a pretraining dataset (selected by DSIR for a target downstream task X) and a downstream task Y. Figure 2 provides the full matrix of results. We find a 6% average drop in F1 when we choose the worst pairing for each downstream task instead of matching the pretraining and downstream data. In the worst case, the F1-score on HyperPartisan drops by 30%. Thus, the choice of target distribution can have a large effect on downstream performance.

Figure 2: F1 scores of DSIR for all pairs of pretraining data target distributions (rows) and downstream tasks (columns). The cells are colored by their per-column ranking, with better rankings (higher F1 scores) having darker colors. While using the pretraining data selected specifically for the downstream task is typically strong, choosing the worst pretraining dataset for the downstream task reduces F1 by 6% on average. All results are averaged over 5 seeds.

Pretraining data transfers better for targets within the same domain. In practice, we may have access to some target datasets in the relevant domain and hope to select pretraining data that can improve performance on other tasks in that domain. The 8 target/downstream tasks we use come from 4 domains, with 2 tasks from each domain. We define within-domain F1 as the average F1 of the pairs of pretraining and fine-tuning data from the same domain, but excluding pairs where the pretraining data is selected for the fine-tuning task. We compute this by averaging the off-diagonal elements in the 2×2 diagonal blocks of the matrix in Figure 2. We find that the within-domain F1 (82.9%) is 1.7% higher on average than the cross-domain F1 (81.2%), where the pretraining data is selected for a target from a different domain.

Figure 3: Plot of average KL reduction on the n-gram feature space, defined as how much the selected dataset reduces KL divergence to the target distribution over just random sampling from The Pile, against average downstream F1 score over the 8 continued pretraining datasets in Table 1. There is a strong correlation between KL reduction and downstream performance (Pearson r = 0.82).

6 KL Reduction on Hashed N-grams Predicts Downstream Performance
When designing a feature extractor for DSIR, how do we measure whether the features preserve the information for selecting relevant pretraining data? To answer this question, we propose a data metric, KL reduction, which measures how much data selection reduces distance to the target over random selection in a feature space. We find that KL reduction on hashed n-gram features strongly correlates with downstream performance across various data selection methods, including those that do not involve n-grams, such as manual curation.

KL reduction metric.
We define KL reduction as the average reduction in empirical KL divergence from doing data selection over random selection, over a set of target feature distributions T:
KL-reduction(p′feat; ˆqfeat, T) = (1/|T|) ∑ˆpfeat∈T [KL(ˆpfeat∥ˆqfeat) − KL(ˆpfeat∥p′feat)],   (2)
where p′feat is the empirical feature distribution of the selected data, ˆpfeat is an empirical target feature distribution, and ˆqfeat is the empirical raw feature distribution. KL reduction depends on the raw distribution ˆqfeat and the set of target distributions T as hyperparameters. In our continued pretraining setting, ˆqfeat is the feature distribution of the Pile and T consists of the feature distributions from the 8 downstream tasks from Section 5.

KL reduction on hashed n-grams predicts downstream performance. We show that when computed on the hashed n-gram feature space, KL reduction of a selected dataset highly correlates with the downstream performance of a model trained on that data. Figure 3 plots KL reduction against average downstream performance over 8 target distributions for 8 data selection methods from The Pile (Gao et al., 2020), where the distribution parameters are estimated using 100k samples from each dataset. The average downstream F1 score is highly correlated with the KL reduction (Pearson r = 0.82). This agrees with the results of Razeghi et al. (2022) for in-context learning (Brown et al., 2020) and extends the preliminary evidence from Gururangan et al. (2020), on one selection method, that better unigram overlap improves downstream performance. DSIR with hashed n-gram features achieves the highest KL reduction and the best average downstream F1. While some of the original pretraining datasets for DAPT (Gururangan et al., 2020) were not publicly available, we downloaded the public versions as an approximation. Our results suggest that hashed n-gram features preserve most of the information needed for selecting data relevant to the target. Since KL reduction highly correlates with downstream performance and can be cheaply computed without training an LM, KL reduction can be used as a sanity check for future data selection methods.

7 Selecting Data for Training General-Domain LMs
In this section, we consider selecting formal text (as a proxy for high-quality text) for training general-domain LMs from scratch. We use Wikipedia and books as the target distribution.

Baselines and setup. We compare the following methods for selecting data from the Pile: 1) Random selection, 2) Heuristic classification (GPT-3/Pile/PaLM method), and 3) DSIR. As ablations, we consider top-k variants of heuristic classification and DSIR (take the top-k examples according to importance weights instead of resampling). We use each method to select 51.2M examples, which corresponds to 4 epochs with our compute budget. For heuristic classification and DSIR, we select 96% of the examples from domains excluding Wikipedia and books. This is done to reduce the bias towards selecting data from Wikipedia and books (the target distribution). We choose the other 4% uniformly from Wikipedia and books, and did not tune these proportions (Appendix F).
We apply a quality filter for extremely short or repetitive examples before heuristic classification and DSIR selection (Appendix J). For each dataset, we perform MLM pretraining for 50k steps with a large batch size (4096) and short token length (128), following Izsak et al. (2021). All the models use the BERT-base architecture (Devlin et al., 2019). We evaluate the models on the GLUE dev set, averaged over 5 fine-tuning runs (Wang et al., 2019). Fine-tuning hyperparameters such as the number of epochs and batch size are fixed for each dataset, following reasonable defaults from the RoBERTa codebase (Liu et al., 2019b).

DSIR qualitatively selects more formal text. Figure 4 shows the beginning characters of 20 random examples selected by random selection, heuristic classification, and DSIR. The random sample contains many code examples that are not similar to text from Wikipedia and books. Heuristic classification seems slightly too diverse, which suggests that the variance of the Pareto distribution added to the classifier scores may be too high. Note that we use the setting of the Pareto shape hyperparameter used in GPT-3 (Brown et al., 2020). Qualitatively, DSIR selects the most formal text. By doing importance resampling to match the target distribution, DSIR trades off the relevance and diversity of the selected data automatically.

[Figure 4 contains three columns of raw text snippets, one per method: (a) Random selection, (b) Heuristic classification, (c) DSIR.]
Figure 4: Beginning characters of 20 random examples (each line is a different example) selected by random selection, heuristic classification, and DSIR, where the target is formal text from Wikipedia + books. Qualitatively, DSIR selects more formal text than random selection and heuristic classification.

DSIR improves GLUE performance. Table 3 shows results on the GLUE dev set. DSIR achieves 82.3% average GLUE accuracy, improving over random selection by 2% and heuristic classification by 2.5%. Heuristic classification leads to 0.5% lower accuracy than random selection from The Pile. We hypothesize this is because The Pile has already been filtered once with heuristic classification.

Table 3: Accuracies on the GLUE (Wang et al., 2019) dev set for a BERT-style masked language model (Devlin et al., 2019) trained on data selected from The Pile (Gao et al., 2020). Following RoBERTa (Liu et al., 2019b), for RTE, STS, and MRPC we fine-tune starting from the MNLI model instead of from scratch. DSIR outperforms heuristic classification (used by GPT-3 and PaLM) and random selection by over 2% on average. All results are averaged over 5 seeds and standard deviations are in parentheses.

Method | MNLI | QNLI | QQP | RTE | SST-2 | MRPC | CoLA | STS-B | Avg
Random selection | 82.63 (0.41) | 86.90 (0.28) | 89.57 (0.30) | 67.37 (1.69) | 90.05 (0.41) | 87.40 (1.08) | 49.41 (3.67) | 88.63 (0.11) | 80.25
Heuristic classification | 82.69 (0.17) | 85.95 (0.79) | 89.77 (0.32) | 68.59 (1.75) | 88.94 (0.98) | 86.03 (0.93) | 48.17 (3.19) | 88.62 (0.22) | 79.85
Top-k Heuristic classification | 83.34 (0.22) | 88.62 (0.24) | 89.89 (0.19) | 70.04 (0.99) | 91.15 (0.76) | 86.37 (1.00) | 53.02 (3.56) | 89.30 (0.11) | 81.47
DSIR | 83.07 (0.29) | 89.11 (0.14) | 89.80 (0.37) | 75.09 (2.76) | 90.48 (0.57) | 87.70 (0.68) | 54.00 (1.34) | 89.17 (0.13) | 82.30
Top-k DSIR | 83.39 (0.06) | 88.63 (0.38) | 89.94 (0.17) | 72.49 (1.29) | 91.01 (0.79) | 86.18 (1.12) | 49.90 (1.10) | 89.52 (0.21) | 81.38

Resampling outperforms top-k selection. Top-k heuristic classification and top-k DSIR have similar performance across datasets, with some tradeoffs compared to DSIR without top-k. DSIR without top-k is competitive with these variants on all datasets and achieves a 0.8–0.9% higher average over the top-k variants. All the top accuracies across datasets are achieved by DSIR or top-k DSIR.

8 Related Work
Effect of pretraining data on LMs. The pretraining data has a large effect on LM performance. Hernandez et al. (2022), Lee et al. (2022) show that deduplicating data improves LMs, and Baevski et al. (2019), Yang et al. (2019) compare using a large web corpus versus Wikipedia. Raffel et al. (2019) shows that heuristically filtered data (filtering out short and duplicated examples) improves T5 and Du et al. (2021) shows that heuristic classification improves downstream few-shot performance for GLaM. We provide extensive controlled experiments comparing the effect of data selection methods on downstream performance.

Retrieval. Yao et al. (2022) use keyword-based retrieval (BM25) to select data for semi-supervised learning. In preliminary tests, we found that out of 6.1M documents retrieved by BM25, there were only 1.8M unique documents (70% were exact duplicates). These duplicate examples can hurt performance (Hernandez et al., 2022, Lee et al., 2022). Selecting a desired number of unique documents involves oversampling and de-duplication. Instead, we consider top-k heuristic classification, which has similarities to cosine similarity-based retrieval (since heuristic classification uses an inner product score between pretrained word embeddings and a learned class vector) and avoids retrieving repeated examples.

Data selection in classical NLP. Moore-Lewis selection (Axelrod, 2017, Feng et al., 2022, Moore and Lewis, 2010) takes the top-k examples in cross-entropy difference between n-gram LMs trained on target and raw data to score examples, which could over-sample examples from the mode of the target distribution.
In Section 7, we found that top-k DSIR, which is a form of Moore-Lewis selection with hashed n-gram LMs, underperforms DSIR by 0.9% on GLUE. DSIR naturally balances diversity and relevance for use in both domain-specific and general-domain cases, since it uses importance resampling to match the target distribution. Feature-space/n-gram discrepancy measures (Jiang and Zhai, 2007, Liu et al., 2019a, Ruder and Plank, 2017) have also been used in selecting data in the domain adaptation setting. Overall, these methods do not consider importance resampling and do not address the gap between pretraining and downstream tasks: pretraining has a different objective to fine-tuning, pretraining uses unlabeled data that is not task-formatted, and the influence of pretraining data is separated from the final model by the fine-tuning step. Beyond the preliminary evidence that unigram similarity metrics are related to downstream performance in Gururangan et al. (2020), we show comprehensively and quantitatively on 8 selection methods that despite the pretrain-downstream gap, n-gram KL reduction on pretraining datasets highly correlates with downstream performance. Data selection in deep learning. Many works show the importance of data selection in the supervised or semi-supervised learning setting in vision (Bengio et al., 2009, Coleman et al., 2020, Kaushal et al., 2019, Killamsetty et al., 2021a,b,c, Mindermann et al., 2022, Mirzasoleiman et al., 2020, Paul et al., 2021, Sener and Savarese, 2018, Sorscher et al., 2022, Wang et al., 2020, Wei et al., 2015) and in language finetuning (Coleman et al., 2020, Mindermann et al., 2022). While most select image data from CIFAR or ImageNet, which have up to 1–10M examples, we consider selecting text data from The Pile, which has over 1.6B examples (of 128 whitespace-delimited words each). At this scale, previous methods become quite expensive since they typically require running a neural network forward pass to get embeddings (Killamsetty et al., 2021c, Sener and Savarese, 2018, Sorscher et al., 2022), taking gradients (Killamsetty et al., 2021a,b, Mirzasoleiman et al., 2020, Paul et al., 2021, Wang et al., 2020), or training a reference model (Mindermann et al., 2022). In contrast, we construct a simple n-gram-based selection method that easily scales to internet-scale datasets. Coleman et al. (2020) select data with high uncertainty under a smaller proxy neural model. They do not consider using a target dataset for estimating importance weights. However, using a neural model could be a complementary strategy for importance resampling. Other works (Katharopoulos and Fleuret, 2018, Loshchilov and Hutter, 2016, Schaul et al., 2015) focus on choosing a subset that approximates training with the original dataset and require selecting data online during training. We aim to select a targeted dataset (once, before training) with different properties from the raw data (restricting the data to formal text or a specific domain). Our work also differs from active learning methods (Ein-Dor et al., 2020, Settles, 2012, Tamkin et al., 2022, Wang et al., 2022, Yuan et al., 2020), which query an annotator for more labeled data. Instead, we select data for self-supervised pretraining. Importance weighting and domain adaptation. Many methods tackle the high-dimensional importance weight estimation problem (Choi et al., 2021, 2022, Rhodes et al., 2020, Sugiyama et al., 2012). 
In particular, importance weighting is classically used in domain adaptation (Shimodaira, 2000, Sugiyama et al., 2007), where unlabeled target examples are used to adapt a model trained on labeled source data, for reweighting the loss function. However, in many modern applications the source and target are often disjoint (e.g., sketches vs. natural images), causing undefined importance weights (Kumar et al., 2020, Plank et al., 2014, Shen et al., 2022). We side-step high-dimensional importance weight estimation by instead working in a reduced feature space where the support of the massive web corpus should cover the target.

9 Discussion and Limitations
Feature space for importance resampling. Finding an appropriate feature space is important for DSIR. Although we find a tight correlation between downstream performance and our data metric computed using hashed n-gram features, n-grams only capture a superficial word-level overlap. Other feature extractors, such as neural models, may produce features that better capture semantics. We consider a variant of DSIR which estimates importance weights on a neural feature space in Appendix B, and find that this variant also improves by 1–1.5% over random selection and heuristic classification on GLUE, but our preliminary version does not improve over DSIR with hashed n-gram features. However, extracting these features is much more computationally expensive (on the order of D times more FLOPs for a D-parameter neural model), and importance weight estimation on this continuous feature space may be more difficult.

Parameterization of the importance weight estimator. In principle, both generative and discriminative approaches to estimating the importance weights should work. In a discriminative approach, regularization and calibration should be used to combat overfitting and make the predicted probabilities useful for importance resampling. We find that a generative approach requires less tuning and could also be better when the number of target examples is small, as Ng and Jordan (2002) finds that Naive Bayes often performs better than logistic regression in low-sample regimes.

What is the right target distribution? When developing a domain-specific model such as Codex (Chen et al., 2021), the target dataset should be representative of the coding tasks we expect the model to be used on. However, it is unclear how exactly to collect this dataset and how much to weight each task in the target distribution. Developing better procedures for collecting the target dataset can ultimately improve the data selected by DSIR. For general-domain LMs, we follow GPT-3, the Pile, and PaLM in using formal text from Wikipedia and books as a proxy for high-quality text (Brown et al., 2020, Chowdhery et al., 2022, Du et al., 2021, Gao et al., 2020). However, this is just a heuristic. We leave the exploration of other target distributions for general-domain LMs to future work.

Broader impacts. The impact of DSIR depends on the properties of the target data. While DSIR could amplify biases present in the target examples, with the appropriate target data, DSIR can be used to collect data that improve the training efficiency, alignment, or bias of LMs (Bai et al., 2022, Korbak et al., 2023, Ouyang et al., 2022).
These benefits could reduce the environmental impact of LMs (Lacoste et al., 2019, Ligozat et al., 2021, Patterson et al., 2021, Strubell et al., 2019) and reduce their biases and risks (Abid et al., 2021, Blodgett and OConnor, 2017, Bommasani et al., 2021, Gehman et al., 2020). For example, DSIR can be used to collect more data on underrepresented subpopulations and fine-tune the model on this data to improve model fairness. 10 Conclusion We provide a cheap and scalable data selection framework based on importance resampling for improving the downstream performance of LMs. We also find a data metric, KL reduction, that strongly correlates with downstream performance and can provide a sanity check for data selection methods without training a model. Our work provides a step in understanding the choice of pretraining data for downstream transfer in LMs. 12 11 Acknowledgements We thank Neil Band, Hong Liu, Garrett Thomas, and anonymous reviewers for their feedback. This work was supported by an Open Philanthropy Project Award and NSF IIS 2211780. SMX was supported by a NDSEG Fellowship. SS is supported by an Open Philanthropy Graduate Fellowship. References Abubakar Abid, Maheen Farooqi, and James Zou. Persistent anti-muslim bias in large language models. arXiv preprint arXiv:2101.05783, 2021. Amittai Axelrod. Cynical selection of language model training data. CoRR, abs/1709.02279, 2017. URL http://arxiv.org/abs/1709.02279. Alexei Baevski, Sergey Edunov, Yinhan Liu, Luke Zettlemoyer, and Michael Auli. Cloze-driven pretraining of self-attention networks. arXiv, 2019. Yuntao Bai, Andy Jones, Kamal Ndousse, Amanda Askell, Anna Chen, Nova DasSarma, Dawn Drain, Stanislav Fort, Deep Ganguli, T. Henighan, Nicholas Joseph, Saurav Kadavath, John Kernion, Tom Conerly, S. El-Showk, Nelson Elhage, Zac Hatfield-Dodds, Danny Hernandez, Tristan Hume, Scott Johnston, S. Kravec, Liane Lovitt, Neel Nanda, Catherine Olsson, Dario Amodei, Tom B. Brown, Jack Clark, Sam McCandlish, C. Olah, Benjamin Mann, and J. Kaplan. Training a helpful and harmless assistant with reinforcement learning from human feedback. arXiv, 2022. Yoshua Bengio, Jerome Louradour, Ronan Collobert, and Jason Weston. Curriculum learning. In International Conference on Machine Learning (ICML), 2009. Thomas Bengtsson, Peter Bickel, and Bo Li. Curse-of-dimensionality revisited: Collapse of the particle filter in very large scale systems. arXiv, 2008. Steven Bird, Edward Loper, and Ewan Klein. Natural Language Processing with Python. O’Reilly Media Inc., 2009. Su Lin Blodgett and Brendan OConnor. Racial disparity in natural language processing: A case study of social media African-American English. arXiv preprint arXiv:1707.00061, 2017. Rishi Bommasani, Drew A. Hudson, Ehsan Adeli, Russ Altman, Simran Arora, Sydney von Arx, Michael S. Bernstein, Jeannette Bohg, Antoine Bosselut, Emma Brunskill, Erik Brynjolfsson, Shyamal Buch, Dallas Card, Rodrigo Castellon, Niladri Chatterji, Annie Chen, Kathleen Creel, Jared Quincy Davis, Dorottya Demszky, Chris Donahue, Moussa Doumbouya, Esin Durmus, Stefano Ermon, John Etchemendy, Kawin Ethayarajh, Li Fei-Fei, Chelsea Finn, Trevor Gale, Lauren Gillespie, Karan Goel, Noah Goodman, Shelby Grossman, Neel Guha, Tatsunori Hashimoto, Peter Henderson, John Hewitt, Daniel E. 
Ho, Jenny Hong, Kyle Hsu, Jing Huang, Thomas Icard, Saahil Jain, Dan Jurafsky, Pratyusha Kalluri, Siddharth Karamcheti, Geoff Keeling, Fereshte Khani, Omar Khattab, Pang Wei Koh, Mark Krass, Ranjay Krishna, Rohith Kuditipudi, Ananya Kumar, Faisal Ladhak, Mina Lee, Tony Lee, Jure Leskovec, Isabelle Levent, Xiang Lisa Li, Xuechen Li, Tengyu Ma, Ali Malik, Christopher D. Manning, Suvir Mirchandani, Eric Mitchell, Zanele Munyikwa, Suraj Nair, Avanika Narayan, Deepak Narayanan, Ben Newman, Allen Nie, Juan Carlos Niebles, Hamed Nilforoshan, Julian Nyarko, Giray Ogut, Laurel Orr, Isabel Papadimitriou, Joon Sung Park, Chris Piech, Eva Portelance, Christopher Potts, Aditi Raghunathan, Rob Reich, Hongyu Ren, Frieda Rong, Yusuf Roohani, Camilo Ruiz, Jack Ryan, Christopher Ré, Dorsa Sadigh, Shiori Sagawa, 13 Keshav Santhanam, Andy Shih, Krishnan Srinivasan, Alex Tamkin, Rohan Taori, Armin W. Thomas, Florian Tramèr, Rose E. Wang, William Wang, Bohan Wu, Jiajun Wu, Yuhuai Wu, Sang Michael Xie, Michihiro Yasunaga, Jiaxuan You, Matei Zaharia, Michael Zhang, Tianyi Zhang, Xikun Zhang, Yuhui Zhang, Lucia Zheng, Kaitlyn Zhou, and Percy Liang. On the opportunities and risks of foundation models. arXiv preprint arXiv:2108.07258, 2021. Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. Language models are few-shot learners. arXiv preprint arXiv:2005.14165, 2020. Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, Alex Ray, Raul Puri, Gretchen Krueger, Michael Petrov, Heidy Khlaaf, Girish Sastry, Pamela Mishkin, Brooke Chan, Scott Gray, Nick Ryder, Mikhail Pavlov, Alethea Power, Lukasz Kaiser, Mohammad Bavarian, Clemens Winter, Philippe Tillet, Felipe Petroski Such, Dave Cummings, Matthias Plappert, Fotios Chantzis, Elizabeth Barnes, Ariel Herbert-Voss, William Hebgen Guss, Alex Nichol, Alex Paino, Nikolas Tezak, Jie Tang, Igor Babuschkin, Suchir Balaji, Shantanu Jain, William Saunders, Christopher Hesse, Andrew N. Carr, Jan Leike, Josh Achiam, Vedant Misra, Evan Morikawa, Alec Radford, Matthew Knight, Miles Brundage, Mira Murati, Katie Mayer, Peter Welinder, Bob McGrew, Dario Amodei, Sam McCandlish, Ilya Sutskever, and Wojciech Zaremba. Evaluating large language models trained on code. arXiv preprint arXiv:2107.03374, 2021. Kristy Choi, Madeline Liao, and Stefano Ermon. Featurized density ratio estimation. Uncertainty in Artificial Intelligence (UAI), 2021. Kristy Choi, Chenlin Meng, Yang Song, and Stefano Ermon. Density ratio estimation via infinitesimal classification. International Conference on Artificial Intelligence and Statistics (AISTATS), 2022. Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, Parker Schuh, Kensen Shi, Sasha Tsvyashchenko, Joshua Maynez, A. Rao, Parker Barnes, Yi Tay, Noam M. Shazeer, Vinodkumar Prabhakaran, Emily Reif, Nan Du, B. Hutchinson, Reiner Pope, James Bradbury, Jacob Austin, M. 
Isard, Guy Gur-Ari, Pengcheng Yin, Toju Duke, Anselm Levskaya, S. Ghemawat, Sunipa Dev, Henryk Michalewski, Xavier García, Vedant Misra, Kevin Robinson, Liam Fedus, Denny Zhou, Daphne Ippolito, D. Luan, Hyeontaek Lim, Barret Zoph, A. Spiridonov, Ryan Sepassi, David Dohan, Shivani Agrawal, Mark Omernick, Andrew M. Dai, T. S. Pillai, Marie Pellat, Aitor Lewkowycz, E. Moreira, Rewon Child, Oleksandr Polozov, Katherine Lee, Zongwei Zhou, Xuezhi Wang, Brennan Saeta, Mark Diaz, Orhan Firat, Michele Catasta, Jason Wei, K. Meier-Hellstern, D. Eck, J. Dean, Slav Petrov, and Noah Fiedel. PaLM: Scaling language modeling with pathways. arXiv, 2022. Cody Coleman, Christopher Yeh, Stephen Mussmann, Baharan Mirzasoleiman, Peter Bailis, Percy Liang, Jure Leskovec, and Matei Zaharia. Selection via proxy: Efficient data selection for deep learning. In International Conference on Learning Representations (ICLR), 2020. Franck Dernoncourt and Ji Young Lee. Pubmed 200k rct: a dataset for sequential sentence classification in medical abstracts. IJCNLP, 2017. 14 Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. In Association for Computational Linguistics (ACL), pages 4171–4186, 2019. Nan Du, Yanping Huang, Andrew M. Dai, Simon Tong, Dmitry Lepikhin, Yuanzhong Xu, M. Krikun, Yanqi Zhou, Adams Wei Yu, Orhan Firat, Barret Zoph, Liam Fedus, Maarten Bosma, Zongwei Zhou, Tao Wang, Yu Emma Wang, Kellie Webster, Marie Pellat, Kevin Robinson, K. Meier-Hellstern, Toju Duke, Lucas Dixon, Kun Zhang, Quoc V. Le, Yonghui Wu, Zhifeng Chen, and Claire Cui. GLaM: Efficient scaling of language models with mixture-of-experts. arXiv, 2021. Liat Ein-Dor, Alon Halfon, Ariel Gera, Eyal Shnarch, Lena Dankin, Leshem Choshen, Ma- rina Danilevsky, Ranit Aharonov, Yoav Katz, and Noam Slonim. Active Learning for In Proceedings of the 2020 Conference on Empirical Methods BERT: An Empirical Study. in Natural Language Processing (EMNLP), pages 7949–7962, Online, November 2020. As- sociation for Computational Linguistics. doi: 10.18653/v1/2020.emnlp-main.638. URL https://aclanthology.org/2020.emnlp-main.638. Yukun Feng, Patrick Xia, Benjamin Van Durme, and João Sedoc. Automatic document selection for efficient encoder pretraining, 2022. URL https://arxiv.org/abs/2210.10951. Leo Gao, Stella Biderman, Sid Black, Laurence Golding, Travis Hoppe, Charles Foster, Jason Phang, Horace He, Anish Thite, Noa Nabeshima, Shawn Presser, and Connor Leahy. The pile: An 800gb dataset of diverse text for language modeling. arXiv, 2020. Samuel Gehman, Suchin Gururangan, Maarten Sap, Yejin Choi, and Noah A Smith. Real- arXiv preprint toxicityprompts: Evaluating neural toxic degeneration in language models. arXiv:2009.11462, 2020. Andrew Gelman and Xiao-Li Meng. Applied Bayesian modeling and causal inference from incomplete-data perspectives. Wiley Series in Probability and Statistics, 2004. Suchin Gururangan, Ana Marasovi´c, Swabha Swayamdipta, Kyle Lo, Iz Beltagy, Doug Downey, and Noah A Smith. Don’t stop pretraining: adapt language models to domains and tasks. arXiv preprint arXiv:2004.10964, 2020. Ruining He and Julian McAuley. Ups and downs: Modeling the visual evolution of fashion trends with one-class collaborative filtering. In World Wide Web (WWW), 2016. 
Danny Hernandez, Tom Brown, Tom Conerly, Nova DasSarma, Dawn Drain, Sheer El-Showk, Nelson Elhage, Zac Hatfield-Dodds, Tom Henighan, Tristan Hume, Scott Johnston, Ben Mann, Chris Olah, Catherine Olsson, Dario Amodei, Nicholas Joseph, Jared Kaplan, and Sam McCandlish. Scaling laws and interpretability of learning from repeated data. arXiv, 2022. Jordan Hoffmann, Sebastian Borgeaud, Arthur Mensch, Elena Buchatskaya, Trevor Cai, Eliza Rutherford, Diego de Las Casas, Lisa Anne Hendricks, Johannes Welbl, Aidan Clark, Tom Hennigan, Eric Noland, Katie Millican, George van den Driessche, Bogdan Damoc, Aurelia Guy, Simon Osindero, Karen Simonyan, Erich Elsen, Jack W. Rae, Oriol Vinyals, and Laurent Sifre. An empirical analysis of compute-optimal large language model training. In Advances in Neural Information Processing Systems (NeurIPS), 2022. Peter Izsak, Moshe Berchansky, and Omer Levy. How to train BERT with an academic budget. In Empirical Methods in Natural Language Processing (EMNLP), 2021. 15 Jing Jiang and ChengXiang Zhai. In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, pages 264–271, Prague, Czech Republic, June 2007. Association for Computational Linguistics. URL https://aclanthology.org/P07-1034. Instance weighting for domain adaptation in NLP. Armand Joulin, Edouard Grave, Piotr Bojanowski, and Tomas Mikolov. Bag of tricks for efficient text classification. European Chapter of the Association for Computational Linguistics (EACL), 2, 2017. David Jurgens, Srijan Kumar, Raine Hoover, Daniel A. McFarland, and Dan Jurafsky. Measuring the evolution of a scientific field through citation frames. Transactions of the Association for Computational Linguistics (TACL), 6, 2018. Angelos Katharopoulos and François Fleuret. Not all samples are created equal: Deep learning with importance sampling. In International Conference on Machine Learning (ICML), 2018. Vishal Kaushal, Rishabh Iyer, Suraj Kothawade, Rohan Mahadev, Khoshrav Doctor, and Ganesh Ra- makrishnan. Learning from less data: A unified data subset selection and active learning framework for computer vision. IEEE/CVF Winter Conference on Applicatios of Computer Vision (WACV), 2019. Johannes Kiesel, Maria Mestre, Rishabh Shukla, Emmanuel Vincent, Payam Adineh, David Corney, Benno Stein, and Martin Potthast. Semeval2019 task 4: Hyperpartisan news detection. SemEval, 2019. Krishnateja Killamsetty, Durga S, Ganesh Ramakrishnan, Abir De, and Rishabh Iyer. GRAD-MATCH: Gradient matching based data subset selection for efficient deep model training. In International Conference on Machine Learning (ICML), 2021a. Krishnateja Killamsetty, Durga Sivasubramanian, Ganesh Ramakrishnan, and Rishabh Iyer. Glister: Generalization based data subset selection for efficient and robust learning. In Association for the Advancement of Artificial Intelligence (AAAI), 2021b. Krishnateja Killamsetty, Xujiang Zhao, Feng Chen, and Rishabh Iyer. Retrieve: Coreset selection for efficient and robust semi-supervised learning. In Advances in Neural Information Processing Systems (NeurIPS), 2021c. Carolyn Kim, Ashish Sabharwal, and Stefano Ermon. Exact sampling with integer linear programs and random perturbations. In Association for the Advancement of Artificial Intelligence (AAAI), 2016. Wouter Kool, Herke van Hoof, and Max Welling. Stochastic beams and where to find them: The Gumbel-top-k trick for sampling sequences without replacement. In International Conference on Machine Learning (ICML), 2019. 
Tomasz Korbak, Kejian Shi, Angelica Chen, Rasika Bhalerao, Christopher L. Buckley, Jason Phang, Samuel R. Bowman, and Ethan Perez. Pretraining language models with human preferences, 2023. Jens Kringelum, Sonny Kim Kjærulff, Søren Brunak, Ole Lund, Tudor I. Oprea, and Olivier Taboureau. Chemprot-3.0: a global chemical biology diseases mapping. Database, 2016. Ananya Kumar, Tengyu Ma, and Percy Liang. Understanding self-training for gradual domain adaptation. In International Conference on Machine Learning (ICML), 2020. Alexandre Lacoste, Alexandra Luccioni, Victor Schmidt, and Thomas Dandres. Quantifying the carbon emissions of machine learning. arXiv preprint arXiv:1910.09700, 2019. 16 Katherine Lee, Daphne Ippolito, Andrew Nystrom, Chiyuan Zhang, Douglas Eck, Chris Callison- Burch, and Nicholas Carlini. Deduplicating training data makes language models better. In Association for Computational Linguistics (ACL), 2022. Aitor Lewkowycz, Anders Andreassen, David Dohan, Ethan Dyer, Henryk Michalewski, Vinay Ramasesh, Ambrose Slone, Cem Anil, Imanol Schlag, Theo Gutman-Solo, Yuhuai Wu, Behnam Neyshabur, Guy Gur-Ari, and Vedant Misra. Solving quantitative reasoning problems with language models. arXiv preprint arXiv:2206.14858, 2022. Anne-Laure Ligozat, Julien Lefèvre, Aurélie Bugeau, and Jacques Combaz. Unraveling the hidden environmental impacts of AI solutions for environment. CoRR, abs/2110.11822, 2021. URL https://arxiv.org/abs/2110.11822. Miaofeng Liu, Yan Song, Hongbin Zou, and Tong Zhang. Reinforced training data selection for domain adaptation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1957–1968, Florence, Italy, July 2019a. Association for Computational Linguistics. doi: 10.18653/v1/P19-1189. URL https://aclanthology.org/P19-1189. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692, 2019b. Kyle Lo, Lucy Lu Wang, Mark Neumann, Rodney Kinney, and Daniel S. Weld. S2orc: The semantic scholar open research corpus. In Association for Computational Linguistics (ACL), 2020. Ilya Loshchilov and Frank Hutter. Online batch selection for faster training of neural networks. In International Conference on Learning Representations Workshop (ICLR), 2016. Yi Luan, Luheng He, Mari Ostendorf, and Hannaneh Hajishirzi. Multi-task identification of entities, relations, and coreference for scientific knowledge graph construction. In Empirical Methods in Natural Language Processing (EMNLP), 2018. Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts. Learning word vectors for sentiment analysis. In Association for Computational Linguistics (ACL), 2011. Julian McAuley, Christopher Targett, Qinfeng Shi, and Anton van den Hengel. Image-based recommendations on styles and substitutes. SIGIR, 2015. Sören Mindermann, Jan Brauner, Muhammed Razzak, Mrinank Sharma, Andreas Kirsch, Winnie Xu, Benedikt Höltgen, Aidan N. Gomez, Adrien Morisot, Sebastian Farquhar, and Yarin Gal. Prioritized training on points that are learnable, worth learning, and not yet learnt. In International Conference on Machine Learning (ICML), 2022. Baharan Mirzasoleiman, Jeff Bilmes, and Jure Leskovec. Coresets for data-efficient training of machine learning models. In International Conference on Machine Learning (ICML), 2020. Robert C. Moore and William Lewis. 
In Proceedings of the ACL 2010 Conference Short Papers, pages 220–224, Uppsala, Sweden, July 2010. Association for Computational Linguistics. URL https://aclanthology.org/P10-2041. Intelligent selection of language model training data. David R. Musser. Introspective sorting and selection algorithms. Software: Practice and Experience, 27, 1999. 17 Andrew Y. Ng and Michael I. Jordan. On discriminative vs. generative classifiers: A comparison of logistic regression and naive Bayes. In Advances in Neural Information Processing Systems (NeurIPS), 2002. Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, J. Schulman, Jacob Hilton, Fraser Kelton, Luke E. Miller, Maddie Simens, Amanda Askell, P. Welinder, P. Christiano, J. Leike, and Ryan J. Lowe. Training language models to follow instructions with human feedback. arXiv, 2022. David A. Patterson, Joseph Gonzalez, Quoc V. Le, Chen Liang, Lluis-Miquel Munguia, Daniel Rothchild, David R. So, Maud Texier, and Jeff Dean. Carbon emissions and large neural network training. CoRR, abs/2104.10350, 2021. URL https://arxiv.org/abs/2104.10350. Mansheej Paul, Surya Ganguli, and Gintare Karolina Dziugaite. Deep learning on a data diet: Finding important examples early in training. In Association for the Advancement of Artificial Intelligence (AAAI), 2021. Barbara Plank, Anders Johannsen, and Anders Søgaard. Importance weighting and unsupervised In Proceedings of the 2014 Conference domain adaptation of POS taggers: a negative result. on Empirical Methods in Natural Language Processing (EMNLP), pages 968–973, Doha, Qatar, October 2014. Association for Computational Linguistics. doi: 10.3115/v1/D14-1104. URL https://aclanthology.org/D14-1104. John Platt. Probabilistic outputs for support vector machines and comparisons to regularized likelihood methods. Advances in Large Margin Classifiers, 10(3):61–74, 1999. Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. Exploring the limits of transfer learning with a unified text-to-text transformer. arXiv preprint arXiv:1910.10683, 2019. Yasaman Razeghi, Robert L. Logan IV, Matt Gardner, and Sameer Singh. Impact of pretraining term frequencies on few-shot reasoning. arXiv, 2022. Nils Reimers and Iryna Gurevych. Sentence-bert: Sentence embeddings using siamese bert-networks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, 11 2019. URL https://arxiv.org/abs/1908.10084. Benjamin Rhodes, Kai Xu, and Michael U Gutmann. Telescoping density-ratio estimation. ArXiv, abs/2006.12204, 2020. Donald B. Rubin. Using the SIR algorithm to simulate posterior distributions. Bayesian Statistics, 1988. Sebastian Ruder and Barbara Plank. Learning to select data for transfer learning with Bayesian optimization. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 372–382, Copenhagen, Denmark, September 2017. Association for Computational Linguistics. doi: 10.18653/v1/D17-1038. URL https://aclanthology.org/D17-1038. T. Schaul, J. Quan, I. Antonoglou, and D. Silver. Prioritized experience replay. In International Conference on Learning Representations (ICLR), 2015. Ozan Sener and Silvio Savarese. Active learning for convolutional neural networks: A core-set approach. In International Conference on Learning Representations (ICLR), 2018. Burr Settles. 
Active learning. Synthesis lectures on artificial intelligence and machine learning, 6, 2012. 18 Kendrick Shen, Robbie Jones, Ananya Kumar, Sang Michael Xie, Jeff Z. HaoChen, Tengyu Ma, and Percy Liang. Connect, not collapse: Explaining contrastive learning for unsupervised domain adaptation. In International Conference on Machine Learning (ICML), 2022. Hidetoshi Shimodaira. Improving predictive inference under covariate shift by weighting the log-likelihood function. Journal of Statistical Planning and Inference, 90:227–244, 2000. Chris Snyder, Thomas Bengtsson, Peter Bickel, and Jeff Anderson. Obstacles to high-dimensional particle filtering. Mathematical Advances in Data Assimilation (MADA), 2008. Ben Sorscher, Robert Geirhos, Shashank Shekhar, Surya Ganguli, and Ari S. Morcos. Beyond neural scaling laws: beating power law scaling via data pruning. arXiv, 2022. Emma Strubell, Ananya Ganesh, and Andrew McCallum. Energy and policy considerations for deep learning in NLP. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3645–3650, Florence, Italy, July 2019. Association for Computational Linguistics. doi: 10.18653/v1/P19-1355. URL https://aclanthology.org/P19-1355. Masashi Sugiyama, Matthias Krauledat, and Klaus-Robert Muller. Covariate shift adaptation by importance weighted cross validation. Journal of Machine Learning Research (JMLR), 8:985–1005, 2007. Masashi Sugiyama, Taiji Suzuki, and Takafumi Kanamori. Density ratio estimation in machine learning. 2012. Alex Tamkin, Dat Nguyen, Salil Deshpande, Jesse Mu, and Noah Goodman. Active learning helps pretrained models learn the intended task, 2022. Tim Vieira. Gumbel-max trick and weighted reservoir sampling, 2014. Alex Wang, Amapreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R Bowman. GLUE: A multi-task benchmark and analysis platform for natural language understanding. In International Conference on Learning Representations (ICLR), 2019. Xinyi Wang, Hieu Pham, Paul Michel, Antonios Anastasopoulos, Jaime Carbonell, and Graham Neubig. Optimizing data usage via differentiable rewards. In International Conference on Machine Learning (ICML), 2020. Xudong Wang, Long Lian, and Stella X Yu. Unsupervised selective labeling for more effective semi-supervised learning. In European Conference on Computer Vision, pages 427–445. Springer, 2022. Kai Wei, Rishabh Iyer, and Jeff Bilmes. Submodularity in data subset selection and active learning. In International Conference on Machine Learning (ICML), 2015. Kilian Weinberger, Anirban Dasgupta, John Langford, Alex Smola, and Josh Attenberg. Feature hash- ing for large scale multitask learning. In International Conference on Machine Learning (ICML), 2009. Sang Michael Xie and Stefano Ermon. Reparameterizable subset sampling via continuous relaxations. In International Joint Conference on Artificial Intelligence (IJCAI), 2019. Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, and Quoc V. Le. XLNet: Generalized autoregressive pretraining for language understanding. In Advances in Neural Information Processing Systems (NeurIPS), 2019. 19 Xingcheng Yao, Yanan Zheng, Xiaocong Yang, and Zhilin Yang. NLP from scratch without large-scale pretraining: A simple and efficient framework. In International Conference on Machine Learning (ICML), 2022. Michelle Yuan, Hsuan-Tien Lin, and Jordan Boyd-Graber. Cold-start active learning through In Proceedings of the 2020 Conference on Empirical Meth- self-supervised language modeling. 
ods in Natural Language Processing (EMNLP), pages 7935–7948, Online, November 2020. Association for Computational Linguistics. doi: 10.18653/v1/2020.emnlp-main.637. URL https://aclanthology.org/2020.emnlp-main.637.

Rowan Zellers, Ari Holtzman, Hannah Rashkin, Yonatan Bisk, Ali Farhadi, Franziska Roesner, and Yejin Choi. Defending against neural fake news. In Advances in Neural Information Processing Systems (NeurIPS), pages 9054–9065, 2019.

Xiang Zhang, Junbo Zhao, and Yann LeCun. Character-level convolutional networks for text classification. In Advances in Neural Information Processing Systems (NeurIPS), 2015.

A DSIR asymptotically selects from the target

We prove that DSIR selects examples with features distributed as the target as the raw dataset size goes to infinity, assuming that the importance weights are correct up to a constant factor.

Proposition 1. Assume that the importance weights w_i are proportional to the true importance weights p_feat(z_i)/q_feat(z_i). Then as the number of raw examples N goes to infinity, the procedure returns k i.i.d. samples with features distributed according to the target feature distribution p_feat.

Proof. By assumption, we have importance weights w_i that are proportional to the true importance weights, so that w_i = C p_feat(z_i)/q_feat(z_i) for the i-th source example for some constant C > 0. First suppose that k = 1. Then,

Prob. of sampling an example with feature value z
  = \frac{\sum_{i=1}^{N} \mathbf{1}[z_i = z]\, w_i}{\sum_{j=1}^{N} w_j}   (3)
  = \frac{C \sum_{i=1}^{N} \mathbf{1}[z_i = z]\, \frac{p_{\mathrm{feat}}(z_i)}{q_{\mathrm{feat}}(z_i)}}{\sum_{j=1}^{N} C\, \frac{p_{\mathrm{feat}}(z_j)}{q_{\mathrm{feat}}(z_j)}}   (4)
  = \frac{\frac{1}{N}\sum_{i=1}^{N} \mathbf{1}[z_i = z]\, \frac{p_{\mathrm{feat}}(z_i)}{q_{\mathrm{feat}}(z_i)}}{\frac{1}{N}\sum_{j=1}^{N} \frac{p_{\mathrm{feat}}(z_j)}{q_{\mathrm{feat}}(z_j)}}.   (5)

For k ≥ 1, we can similarly compute the probability of sampling the m-th example (m ∈ {1,...,k}) as:

Prob. of sampling m-th example with feature value z = \frac{\frac{1}{N-m+1}\sum_{i=1}^{N-m+1} \mathbf{1}[z_i = z]\, \frac{p_{\mathrm{feat}}(z_i)}{q_{\mathrm{feat}}(z_i)}}{\frac{1}{N-m+1}\sum_{j=1}^{N-m+1} \frac{p_{\mathrm{feat}}(z_j)}{q_{\mathrm{feat}}(z_j)}},   (6)

where for notational convenience, we re-index the raw examples after selecting each example. For each m ∈ {1,...,k}, the numerator converges to p_feat(z) as N → ∞:

\frac{1}{N-m+1}\sum_{i=1}^{N-m+1} \mathbf{1}[z_i = z]\, \frac{p_{\mathrm{feat}}(z_i)}{q_{\mathrm{feat}}(z_i)} = \frac{1}{N-m+1}\sum_{i=1}^{N-m+1} \mathbf{1}[z_i = z]\, \frac{p_{\mathrm{feat}}(z)}{q_{\mathrm{feat}}(z)} \;\to\; q_{\mathrm{feat}}(z)\,\frac{p_{\mathrm{feat}}(z)}{q_{\mathrm{feat}}(z)} = p_{\mathrm{feat}}(z)   (7)

since the raw features are sampled from q_feat (the raw feature distribution). For the same reason, the denominator converges to 1:

\frac{1}{N-m+1}\sum_{j=1}^{N-m+1} \frac{p_{\mathrm{feat}}(z_j)}{q_{\mathrm{feat}}(z_j)} \;\to\; \mathbb{E}_{q_{\mathrm{feat}}}\!\left[\frac{p_{\mathrm{feat}}(z_j)}{q_{\mathrm{feat}}(z_j)}\right] = 1.   (8)

Therefore the features of the m-th example are sampled from p_feat for all m ∈ {1,...,k}.

Intuition from a simple example. DSIR uses importance resampling to better balance the tradeoff between relevance and diversity as the samples converge to true samples from the target distribution (Proposition 1), while no such guarantee holds for top-k selection. For intuition using a simple example, consider a raw dataset of n coin flips from a biased coin, with 0.9n heads and 0.1n tails. We want to filter the raw dataset to have k = 10 flips from a fair coin (the target distribution). The importance weights are 1/(2·0.9) for the heads examples and 1/(2·0.1) for the tails examples (the tails have higher weight). If we select the top k flips according to the importance weight, we will select 10 tails, still resulting in a biased dataset. However, importance resampling balances this out, resulting in a fair dataset in expectation as n goes to infinity. We ran a simulation of the simple example with k = 10 and varying raw data sizes n to see how fast the resampled dataset converges to a fair dataset with the raw data size. For raw data sizes n ∈ {100, 200, 500}, DSIR selects a dataset with (44%, 47%, 50%) heads respectively, averaged over 1000 trials. Thus, DSIR converges quickly to the desired target distribution. In all cases, top-k selects a dataset with all tails.
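The coin-flip intuition above can be checked with a few lines of code. The following is our own minimal numpy sketch, not code from the paper: the weights 1/(2·0.9) and 1/(2·0.1), the subset size k = 10, and the raw sizes n come from the example above, while the function name and the use of numpy's weighted sampling without replacement are our assumptions.

import numpy as np

def coin_flip_selection(n=100, k=10, trials=1000, seed=0):
    rng = np.random.default_rng(seed)
    heads_frac_topk, heads_frac_resample = [], []
    for _ in range(trials):
        # Raw dataset from a biased coin: 1 = heads (prob 0.9), 0 = tails (prob 0.1).
        flips = (rng.random(n) < 0.9).astype(int)
        # Importance weights for a fair-coin target: 0.5/0.9 for heads, 0.5/0.1 for tails.
        weights = np.where(flips == 1, 0.5 / 0.9, 0.5 / 0.1)
        # Top-k selection keeps the k highest-weight flips (all tails whenever at least k tails exist).
        heads_frac_topk.append(flips[np.argsort(-weights)[:k]].mean())
        # Importance resampling: k draws without replacement, proportional to the weights.
        idx = rng.choice(n, size=k, replace=False, p=weights / weights.sum())
        heads_frac_resample.append(flips[idx].mean())
    return float(np.mean(heads_frac_topk)), float(np.mean(heads_frac_resample))

for n in (100, 200, 500):
    print(n, coin_flip_selection(n=n))

Averaged over trials, the resampled subsets should approach 50% heads as n grows, roughly matching the (44%, 47%, 50%) figures reported above, while top-k stays heavily biased toward tails.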
Table 4: Accuracies on the GLUE (Wang et al., 2019) dev set for a BERT-style masked language model (Devlin et al., 2019) trained on data selected from The Pile (Gao et al., 2020). Following RoBERTa (Liu et al., 2019b), for RTE, STS, and MRPC we fine-tune starting from the MNLI model instead of from scratch. DSIR outperforms heuristic classification (used by GPT-3 and PaLM) and random selection by over 2% on average. All results are averaged over 5 seeds; standard deviations are shown after the ± sign.

                                MNLI        QNLI        QQP         RTE         SST-2       MRPC        CoLA        STS-B       Avg
Random selection                82.63±0.41  86.90±0.28  89.57±0.30  67.37±1.69  90.05±0.41  87.40±1.08  49.41±3.67  88.63±0.11  80.25
Heuristic classification        82.69±0.17  85.95±0.79  89.77±0.32  68.59±1.75  88.94±0.98  86.03±0.93  48.17±3.19  88.62±0.22  79.85
Top-k Heuristic classification  83.34±0.22  88.62±0.24  89.89±0.19  70.04±0.99  91.15±0.76  86.37±1.00  53.02±3.56  89.30±0.11  81.47
DSIR                            83.07±0.29  89.11±0.14  89.80±0.37  75.09±2.76  90.48±0.57  87.70±0.68  54.00±1.34  89.17±0.13  82.30
Top-k DSIR                      83.39±0.06  88.63±0.38  89.94±0.17  72.49±1.29  91.01±0.79  86.18±1.12  49.90±1.10  89.52±0.21  81.38
DSIR + Neural features          83.44±0.16  88.20±0.35  89.81±0.35  70.68±2.70  90.50±1.07  87.55±0.99  52.58±1.67  88.40±0.12  81.40

B DSIR with a neural importance weight estimator

As a preliminary study, we test an instantiation of DSIR with an importance weight estimator based on neural features. For each example, we extract embeddings from a SentenceTransformer (Reimers and Gurevych, 2019) (all-MiniLM-L6-v2) with dimension 384. We fit the generative models for the source and target feature spaces with 1000- and 50-component Gaussian mixture models respectively, with diagonal covariance structure. We use this to select pretraining data for training general-domain LMs from scratch. Table 4 shows the results using this DSIR variant in the last row. On average, DSIR with neural features improves by 1–1.5%+ over random selection and heuristic classification and is on par with top-k heuristic classification and top-k DSIR, but still underperforms DSIR with n-gram features. However, we believe that many aspects of this preliminary pipeline could be improved or redesigned, and that using a neural model in the importance weight estimator is a promising direction.
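To make this neural variant concrete, here is a rough sketch of the importance weight estimator (our illustration, not the paper's code): the all-MiniLM-L6-v2 encoder and the 1000-/50-component diagonal-covariance GMMs come from the description above, while the function name and the use of sentence-transformers and scikit-learn are assumptions.

import numpy as np
from sentence_transformers import SentenceTransformer
from sklearn.mixture import GaussianMixture

def neural_log_importance_weights(raw_texts, target_texts):
    # 384-dimensional sentence embeddings as the feature space.
    encoder = SentenceTransformer("all-MiniLM-L6-v2")
    raw_feats = encoder.encode(raw_texts)
    target_feats = encoder.encode(target_texts)

    # Diagonal-covariance GMMs: 1000 components on the raw (source) side,
    # 50 on the target side, as in the preliminary study above.
    q_model = GaussianMixture(n_components=1000, covariance_type="diag").fit(raw_feats)
    p_model = GaussianMixture(n_components=50, covariance_type="diag").fit(target_feats)

    # Log importance weights log p_feat(z) - log q_feat(z) for each raw example.
    return p_model.score_samples(raw_feats) - q_model.score_samples(raw_feats)

The returned log-ratios play the same role as the hashed n-gram log-likelihood ratios and can be passed to the same resampling step.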
C Distribution of data sources for general-domain training

Figure 5 shows the distribution of data sources (ArXiv, GitHub, News, etc.) from The Pile that were selected by random selection, heuristic classification, and DSIR. Heuristic classification and DSIR aim to select formal text that is similar to text from Wikipedia or books. Note that we restricted heuristic classification and DSIR to select from data sources outside of Wikipedia and books sources (Books3, BookCorpus2, Gutenberg) for 96% of the dataset, while 2% is randomly selected from Wikipedia and the remaining 2% are selected from the 3 book sources. DSIR seems to focus mostly on selecting formal text from web data such as Pile-CC (which can still be quite varied), while the other methods select from a variety of sources.

Figure 5: Distribution of Pile data sources for datasets selected by Left: Random selection, Middle: Heuristic classification, and Right: DSIR. Heuristic classification and DSIR were restricted to select only 4% of their datasets from Wikipedia, Books3, BookCorpus2, and Gutenberg.

Figure 6: Distribution of Pile data sources selected by DSIR for different target distributions. The four columns from left to right represent 4 domains: CS papers, Biomedical text, News, and Reviews.

[Figures 5 and 6 are bar charts of Proportion (%) over Pile data domains (ArXiv, BookCorpus2, Books3, ..., Wikipedia (en), YoutubeSubtitles); the Figure 6 panels correspond to the targets ACL-ARC, SciERC, ChemProt, RCT, HyperPartisan, AGNews, Helpfulness, and IMDB.]

D Distribution of data sources for continued pretraining

Figure 6 shows the distribution of Pile data sources selected by DSIR for different target distributions. Each of the 4 columns represents a domain: CS papers, Biomedical text, News, and Reviews. The distributions of data sources for target distributions from the same domain are similar. When the target is a task from the CS domain, the distribution of data sources is the most diverse. Biomedical and news domains are particularly different; when the target is from the biomedical domain, most of the selected examples are from PubMed Abstracts and PubMed Central, and when the target is from the news domain, most of the selected examples are from web data (Pile-CC and OpenWebText2).

Figure 7: Plot of KL reduction against average downstream F1 score of DSIR for the 8 continued pretraining datasets, where each point represents a different pretraining dataset (selected for a particular target). Pretraining data selected for CS-related target distributions tend to transfer well to datasets in other domains, while pretraining data selected for reviews transfers poorly.

[Figure 7 plots average downstream F1 against average KL reduction, KL(target || random) − KL(target || selected), with one point per target: ACL-ARC, SciERC, ChemProt, RCT, HyperPartisan, AGNews, Helpfulness, IMDB.]

Table 5: Continued pretraining results on the GLUE dev set when the target distribution is formal text. DSIR improves average GLUE performance by 0.4–0.7% over all baselines. All fine-tuning results are averaged over 5 seeds; standard deviations are shown after the ± sign. Following RoBERTa (Liu et al., 2019b), for RTE, STS, and MRPC we fine-tune starting from the MNLI model instead of from scratch.

                                   MNLI        QNLI        QQP         RTE         SST-2       MRPC        CoLA        STS-B       Avg
BERT-base (no continued pretrain)  84.29±0.41  91.26±0.16  90.23±0.06  76.39±3.80  92.34±0.34  86.42±2.49  56.36±1.49  90.11±0.23  83.43
Random selection                   83.82±0.48  89.86±0.63  90.47±0.39  76.03±2.20  92.00±0.31  87.21±1.47  59.00±2.57  90.32±0.17  83.59
Heuristic classification           84.03±0.33  90.47±0.65  90.46±0.36  76.75±1.74  91.88±0.42  86.03±0.78  56.03±4.22  90.30±0.22  83.24
DSIR                               84.21±0.47  90.78±0.42  90.45±0.39  78.34±1.75  92.09±0.59  87.16±0.77  58.41±5.86  90.49±0.19  83.99

Cross-domain KL reduction. Figure 7 plots the KL reduction against average downstream F1 for all datasets selected by DSIR (one dataset for each target). KL reduction is still a strong indicator of downstream performance in this case. Data selected using CS papers tend to transfer the best to other domains, while data selected using reviews hurts performance. This also shows that transfer between domains is very asymmetric. In Figure 6, we show that the distribution of data sources selected by DSIR for CS targets is generally the most diverse, which could contribute to its strong performance on many domains. Intuitively, ACL-ARC (a dataset of NLP papers) is likely to contain a more diverse set of topics than reviews.

E Continued pretraining results when target is formal text

We also consider using the same datasets for continued pretraining, starting from the public BERT-base checkpoint. Here, all data selection methods improve over BERT-base on the GLUE dev set. Similarly to training from scratch, we find that heuristic classification slightly decreases performance compared to random selection (by 0.2% on average). DSIR improves over random selection by 0.4% and over BERT-base by 0.6%, achieving almost 84% on the GLUE dev set.
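For reference, the KL reduction metric plotted in Figure 7 can be sketched as follows (our illustration; the definition KL(target || random) − KL(target || selected) over hashed n-gram distributions follows Appendix I, while the smoothing constant and function names are assumptions):

import numpy as np

def kl_divergence(p, q, eps=1e-8):
    # KL(p || q) between two discrete hashed n-gram distributions (count vectors).
    p = p / p.sum()
    q = q / q.sum()
    return float(np.sum(p * (np.log(p + eps) - np.log(q + eps))))

def kl_reduction(target_counts, random_counts, selected_counts):
    # KL(target || random) - KL(target || selected): positive values mean the
    # selected data is closer to the target than randomly selected data.
    return (kl_divergence(target_counts, random_counts)
            - kl_divergence(target_counts, selected_counts))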
F Data selection details

Data preprocessing. We select data from The Pile (Gao et al., 2020), which comes in 30 random chunks. We reserve chunk 0 for validation purposes and only consider the last 29 chunks. We first divided the documents in The Pile into chunks of 128 "words", according to whitespace tokenization. These chunks define the examples that we do data selection on, totaling 1.7B examples. For heuristic classification and DSIR, we first apply a manual quality filter (Appendix J) and only consider the examples that pass the filter. Random selection selects from the unfiltered Pile.

Heuristic classification. We use a bigram fasttext classification model (Joulin et al., 2017), which first forms a list of unigrams and bigrams, hashes them into a predefined number of tokens (2M in this case), maps these tokens into learned feature vectors, and then learns a logistic regression model on top of averaged feature vectors across the model. We initialize the feature vectors from 300-dimensional pretrained subword fasttext vectors trained from Common Crawl. We use the fasttext hyperparameter autotuning functionality with a duration timeout of 30 minutes.

The classification model is trained on a balanced dataset of examples from The Pile validation set and examples from the target distribution (downstream unlabeled training inputs or Wikipedia/book text from The Pile validation set). We downsample the larger dataset of the two to create the balanced dataset. Each example is lowercased and stripped of newlines by first tokenizing using the NLTK word tokenizer and rejoining the words with spaces.

For noisy thresholding, we select a raw example with predicted probability ρi from the fasttext model if ρi > 1−βi, where βi is sampled from a Pareto distribution with shape parameter 9. If the number of examples that do not cross the threshold is smaller than the desired number of examples k, then we repeat this process on the examples that were not chosen and continue to add to the dataset. After we have chosen at least k examples, we take k random samples without replacement from the chosen examples. For top-k heuristic classification, we simply take the examples with the top-k predicted probabilities ρi.

Importance resampling. Our importance resampling-based methods use a bag-of-words generative model of text. We process each example by lowercasing and splitting into words using the WordPunct tokenizer from NLTK (Bird et al., 2009). Following (Joulin et al., 2017), we incorporate unigram and bigram information by hashing the unigrams and bigrams into 10k buckets, which defines a vocabulary of 10k "words" for the generative model. Both unigrams and bigrams are hashed into the same space of words. We learn two bag-of-words models, one for the target and one for The Pile, using target data (downstream unlabeled training inputs or Wikipedia/book text from The Pile validation set) and Pile validation data. The parameters of the models are learned by simply counting the word frequencies across the dataset. For unigram-based DSIR, we use the RoBERTa tokenizer (Devlin et al., 2019), which allows us to avoid hashing.
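As a concrete sketch of the hashed bag-of-words importance weights just described (our illustration, not the paper's implementation; the bucket count and tokenizer follow the text above, while Python's built-in hash and the function names are assumptions — in practice a fixed hash function would be used for reproducibility):

import numpy as np
from nltk.tokenize import WordPunctTokenizer

NUM_BUCKETS = 10_000
_tokenizer = WordPunctTokenizer()

def hashed_ngram_counts(texts):
    # Hash unigrams and bigrams into a shared space of 10k buckets and count them.
    counts = np.zeros(NUM_BUCKETS)
    for text in texts:
        words = _tokenizer.tokenize(text.lower())
        ngrams = words + [" ".join(pair) for pair in zip(words, words[1:])]
        for gram in ngrams:
            counts[hash(gram) % NUM_BUCKETS] += 1
    return counts

def log_importance_weight(example_counts, p_target, q_raw):
    # log p_feat(z) - log q_feat(z) under the two bag-of-words models; p_target and
    # q_raw are normalized (and smoothed, so no zero entries) hashed n-gram distributions.
    return float(example_counts @ (np.log(p_target) - np.log(q_raw)))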
With bigrams, this is more difficult since we must consider 500002 pairs of tokens in the RoBERTa vocabulary. Still, even in the unigram case we find that there are often tokens that are never seen in the target dataset, so we smooth the MLE parameters by mixing with the uniform distribution over tokens with a weight of 1e-5. Implementation of importance resampling. We implement importance resampling with the Gumbel top-k trick (Kim et al., 2016, Kool et al., 2019, Vieira, 2014, Xie and Ermon, 2019), which produces k samples without replacement according to the softmax distribution of the given scores. In the Gumbel top-k procedure, we add IID standard Gumbel noise gi to each log-importance weight to produce a score si = log ˆpfeat(zi) ˆqfeat(zi) +gi for each raw example. We select the examples corresponding 25 Table 6: Hyperparameters for training general-domain LMs from scratch. Architecture Max token length Batch size Learning rate Learning rate schedule Weight decay Warmup steps Total steps Optimizer Adam β1 Adam β2 Adam ϵ GPUs BERT-base 128 4096 1e-3 or 8e-4 Linear 0.01 3000 50000 AdamW 0.9 0.999 1e-8 4 Titan RTX Table 7: Hyperparameters for continued pretraining of general-domain LMs. Architecture Max token length Batch size Learning rate Learning rate schedule Weight decay Warmup steps Total steps Optimizer Adam β1 Adam β2 Adam ϵ GPUs BERT-base 512 2048 1e-4 Linear 0.01 1440 25000 AdamW 0.9 0.999 1e-8 4 Titan RTX to the top k scores. Note that producing the log-likelihood ratios and adding independent Gumbel noise to them can be trivially parallelized, and selecting top k can be done in linear time with the introselect algorithm (Musser, 1999), implemented by numpy.argpartition. Sampling data for general-domain LMs. To select a dataset that is suitable for both pretraining from scratch at token length 128 and continued pretraining with token length 512, we choose to first select 102.4M examples then concatenate every two examples to create 51.2M examples. This ensures that the examples are long enough for a max token length of 512 without much padding. We train the importance weight estimator or fasttext classifier from The Pile validation set, where the target is Wikipedia + BookCorpus2 + Gutenberg + Books3 and the raw data come from the rest of the data sources in The Pile. We first select 98.4M examples from non-Wikipedia and book data, then randomly select 2M from Wikipedia and 0.66M each from BookCorpus2, Gutenberg, and Books3. We 26 mix in some examples from Wikipedia and books to balance the distribution of sources and to reduce catastrophic forgetting in continued pretraining. After this, we concatenate every two examples. Details for ablations. We ablate top-k heuristic classification in Section 5 in two ways. First, we consider the original heuristic classification method, which takes classifier probabilities ρi = f (xi) for an example and selects the example if ρi > 1−βi where βi is a Pareto random variable. Second, we consider heuristic classification with importance resampling by first calibrating the classifier’s probabilities with Platt scaling (Platt, 1999) against a validation set, then using the calibrated probabilities ρi to compute the importance weight log ρi . Similarly to DSIR, we use the Gumbel 1−ρi top-k trick to select a subset using these importance weights. We ablate the DSIR approach by replacing the generative importance weight estimator with a discrim- inative one. 
We use the same hashing method and define the features as 10k-dimensional counts of the n-grams. We normalize each count vector to sum to 1. On top of these features, we train a logistic regres- sion classifier using the same dataset used to train the fasttext classifier in heuristic classification. We tune an L2 regularization weight based on best held-out accuracy (we further split the validation set in half to create another held out set) in the binary classification task. Similarly as above, we calibrate the probabilities using Platt scaling and use the classifier probabilities to compute the importance weight. G Training details for training general-domain LMs Pretraining from scratch. Table 6 shows the hyperparameters for training general-domain LMs from scratch. For all models except DSIR, we use learning rate 1e-3. We use 8e-4 for DSIR since we found that 1e-3 leads to divergence in the training. We use 16 accumulation steps with 4 GPUs to achieve a large batch size of 4096, following Izsak et al. (2021). Our hyperparameters result in a compute budget of 26B tokens processed (128 × 4096 × 50000). Each training run takes about 50 hours. Our pretraining implementation is adapted from Yao et al. (2022). Continued pretraining (Appendix E). Table 7 shows the hyperparameters for continued pretraining general-domain LMs. We continue pretraining from the BERT-base (Devlin et al., 2019) checkpoint. During BERT training, they process 43B tokens. We process 26B tokens during training so that the total compute after continued pretraining is 69B tokens. Each continued pretraining run takes about 60 hours. Fine-tuning on GLUE. We follow the hyperparameters used by RoBERTa (Liu et al., 2019b) for fine-tuning on GLUE (Tables 8 and 9). While RoBERTa searches over a space of hyperparameters, we just use the hyperparameters set for each task from the RoBERTa code base. The fine-tuning for RTE, MRPC, and STSB continues from the fine-tuned model for MNLI, following Liu et al. (2019b). We use the default HuggingFace code for GLUE fine-tuning. H Training details for continued pretraining of domain-specific LMs Pretraining. Table 10 shows the hyperparameters for continued pretraining domain-specific LMs. We choose the pretraining compute budget to equal the number of tokens processed in the DAPT models from Gururangan et al. (2020). For all models, we first try pretraining with learning rate 5e-4, and if training diverges, we use 1e-4. Fine-tuning. Table 11 shows the hyperparameters for fine-tuning on domain-specific datasets. We use the fine-tuning code from Gururangan et al. (2020) and follow their fine-tuning protocols. For 27 Table 8: Dataset-specific hyperparameters for fine-tuning LMs on GLUE, following best hyperparam- eters from RoBERTa (Liu et al., 2019b). MNLI RTE MRPC STSB COLA QQP SST2 QNLI Epochs Batch size Learning rate Continue from MNLI? N Y Y Y N N N N 1e-5 2e-5 1e-5 2e-5 1e-5 1e-5 1e-5 1e-5 10 10 10 10 10 10 10 10 32 16 16 16 16 32 32 32 Table 9: Shared hyperparameters for fine-tuning LMs on GLUE, following Liu et al. (2019b). Architecture Max length Weight decay Optimizer Adam β1 Adam β2 Adam ϵ Warmup ratio LR schedule Precision GPUs BERT-base 128 (from scratch) or 512 (continued pretrain) 0.1 AdamW 0.9 0.98 1e-6 0.06 Polynomial FP16 1 Titan RTX datasets from CS/Biomed/News domains, we use a max token length of 256 to match the pretraining length. For Reviews (IMDB and Helpfulness) datasets, we use a max token length of 512 since this seems to change performance significantly. 
For DAPT models (Gururangan et al., 2020), we use a max token length of 512 for all datasets, which matches their protocol. Following Gururangan et al. (2020), we choose either 3 or 10 epochs based on average validation performance over 5 seeds. Our fine-tuning implementation follows Gururangan et al. (2020). I Computing the KL reduction metric To compute the KL reduction metric for a particular dataset, we took the first 100k examples from the dataset and computed the hashed n-gram counts. Normalizing these counts gives an MLE estimate of the hashed n-gram distribution for the dataset. We use the same procedure to compute the hashed n-gram distribution parameters for The Pile (from the Pile validation set). For manual curation (DAPT), we attempted to download the datasets used in the paper (Real- News (Zellers et al., 2019), S2ORC (Lo et al., 2020), and Amazon reviews (He and McAuley, 2016)). However, Gururangan et al. (2020) uses an internal version of S2ORC that cannot be released. We approximate S2ORC for CS papers and Biomed by using the first 100k documents in the public version of S2ORC that contain ‘Computer Science’ and ‘Medicine’ as a metadata field, respectively. 28 Table 10: Hyperparameters for continued pretraining on domain-specific data. Architecture Max token length Total steps Batch size Weight decay Adam β1 Adam β2 Adam ϵ Warmup steps LR schedule Learning rate GPUs RoBERTa-base 256 12500 4096 0.01 0.9 0.999 1e-8 720 Linear 5e-4 or 1e-4 4 Titan RTX Table 11: Hyperparameters for fine-tuning on domain-specific data. Architecture Max token length Epochs Patience Batch size Weight decay Optimizer Adam β1 Adam β2 Adam ϵ Warmup ratio LR schedule GPUs RoBERTa-base 256 or 512 3 or 10 3 epochs 4096 0.1 AdamW 0.9 0.98 1e-6 0.06 Linear 1 Titan RTX For RoBERTa, we approximate the pretraining distribution by computing the hashed n-gram distribution from Wikipedia and books data in the Pile validation set. J Quality filter For heuristic classification and IS methods, we devise a few hand-crafted ways to filter out low quality data as a preprocessing step, according to • Word length: between 40 and 500 • Repeat ratio, defined as maxword # occurrences of word in example example word length : between 0.02 and 0.2 • Informativeness ratio, defined as # of non-stopwords and non-punctuation in example example word length : between 0.3 and 0.7 • Numeric ratio, defined as # of numbers in example example word length : less than 0.2 29 The words are based on the NLTK word tokenizer (Bird et al., 2009). These are difficult for a simple n-gram based importance weight estimator or classifier to use as features because it requires global context. We decide to keep vs. discard examples using some simple thresholds on the above values, decided using inspection on the Pile validation set. Below, we detail some statistics of the quality filtering procedure and provide some data examples. Statistics of quality filtering. With the above thresholds, we find that: • The length filter is the most selective — after applying the length filter, only 55% of the examples are left. • The repeat ratio filter keeps 78% of the data. • The informativeness filter keeps 72% of the data. • The numeric filter keeps 91% of the data. • Overall, when applying all the filters at the same time, 52% of the examples are kept. Thus, we are mainly filtering by length, which seems like a good proxy for quality. Kept vs. discarded examples according to quality filter. 
First, we show the beginning characters from some randomly selected kept vs. discarded examples. KEPT: all rights, the girl should be hanged for coining and thievery, and you, sir, millennia of ancient carvings, magical swords and glittering jewels and textiles. Kmax, and mean asphericity ( Q) on corneal tomography were evaluated [M]other selects a therapist who requires co-pay in\n informations about call once you are done and you don’t need info anymore DISCARDED: SUCH DAMAGE.\n################################################################# (31)\\\n + "mpls"\n 1993--1997, 48 months var value = formattedTime + ’\\t’ + type + ’\\t’ + name + ’\\t’ + eventTxt + [\n NA (\\<5 yr age) "setup": [\n ],\n 1 110.88 (108.42 to 113.34) 107.89 (105.28 to 110.50) Wye Mun no podia evitar recordar lo que su padre siempre decia: <<Nunca olvides 1.25 (-2.18 to 4.67) FILED\n 2.18 bG9hdDpsZWZ0O21hcmdpbjoycHggNXB4OyB0ZXh0LWFsaWduOmNlbnRlcjsiPjxhIGhyZWY9Imh0\n Extreme length examples. Very short examples tend to be data snippets or otherwise nonsensical: 278713.303 3771574.556 by arc centered at 280828.793 3766437.062 94a to 279188.184 3771745.314 by arc centered at 280945.177 3766474.440 to 280325.491 3771995.774 by arc centered at 281478.555 3766560.741 to Very long examples tend to be dense code, repetitive, or very technical: $ y ’ = \cap_ { h \in \mathcal { d } ( y ) \setminus g. \ { h_0 \ } } h^+ $ . the cube complex $ v = h_0^- \cap y ’ $ is called a * vertebra * . see figures \ [ fig : pentagons\ ] and \ [ vertebra\ ] . ( -4.37 , -3.17 ) rectangle ( 6.57,5.42 ) ; ( 0,0 ) - ( 0,1 ) ; ( 0,0 ) - ( 1,0 ) ; ( 1,1 ) - ( 1,0 ) ; ( 1,1 ) - ( 1,1.56 ) ; ( 0.71,1.71 ) - ( 0.85,1.71 ) ; plot\ [ domain=3.93:4.71 , variable=\ ] ( [ 1\ * 0.71\ * cos ( r ) +0\ * 0.71\ * sin ( r ) ] { } , [ 0\ * 0.71\ * cos ( r ) +1\ * 0.71\ * sin ( r ) ] { } ) ; plot\ [ domain=4.71:5.5 , variable=\ ] ( [ 1\ * 0.71\ * cos ( r ) +0\ * 0.71\ * sin ( r ) ] { } , [ 0\ * 0.71\ * cos ( r ) +1\ * 0.71\ * sin ( r ) ] { } ) ; plot\ [ domain=-0.79:0 , variable=\ ] ( [ 1\ * 0.71\ * cos ( r ) +0\ * 0.71\ * sin ( r ) ] { } , [ 0\ * 0.71\ * cos ( r ) +1\ * 0.71\ * sin ( r ) ] { } ) ; plot\ [ domain=3.142:4.71 , variable=\ ] ( [ 1\ * 0.15\ * cos ( r ) +0\ * 0.15\ * sin ( r ) ] { } , [ 0\ * 0.15\ * cos ( r ) +1\ * 0.15\ * sin ( r ) ] { } ) ; plot\ [ domain=3.93:4.71 , variable=\ ] ( [ -1\ * 0.71\ * cos ( r ) 30 +0\ * 0.71\ * sin ( r ) ] { } , [ 0\ * 0.71\ * cos ( r ) +1\ * 0.71\ * sin ( r ) ] { } ) ; plot\ [ domain=4.71:5.5 , variable=\ ] ( [ -1\ * 0.71\ * cos ( r ) +0\ * 0.71\ * sin ( r ) ] { } , [ 0\ * 0.71\ * cos ( r ) +1\ * 0.71\ * sin ( r ) ] { } ) ; plot\ [ domain=-0.79:0 , variable=\ ] ( [ -1\ * 0.71\ * cos ( r ) +0\ * 0.71\ * sin ( r ) ] { } , [ 0\ * 0.71\ * cos ( r ) +1\ * 0.71\ * sin ( r ) ] { } ) ; plot\ [ domain=3.142:4.71 , variable=\ ] ( [ -1\ * 0.15\ * cos ( r ) +0\ * 0.15\ * sin ( r ) ] { } , [ 0\ * 0.15\ * cos ( r ) +1\ * 0.15\ * sin ( r ) ] { } ) ; ( 2,0 ) - ( 2,1 ) ; ( 2,0 ) - ( 1,0 ) ; ( 1,1 ) - ( 1,1.56 ) ; ( 1.29,1.71 ) - ( 1.15,1.71 ) ; plot\ [ domain=3.93:4.71 , variable=\ ] ( [ -1\ * 0.71\ * cos ( r ) +0\ * 0.71\ * sin ( r ) ] { } , [ 0\ * 0.71\ * cos ( r ) +1\ * 0.71\ * sin ( r ) ] { } ) ; plot\ [ domain=4.71:5.5 , variable=\ ] ( [ -1\ * 0.71\ * cos ( r ) +0\ * 0.71\ * sin ( r ) ] { } , [ 0\ * 0.71\ * cos ( r ) +1\ * 0.71\ * sin ( r ) ] { } ) ; plot\ [ domain=-0.79:0 , variable=\ ] ( [ -1\ * 0.71\ * cos ( r ) +0\ * 0.71\ * sin ( r ) ] { } , [ 0\ * 0.71\ * cos ( r ) +1\ * 0.71\ * sin ( r ) ] { } ) ; plot\ [ 
domain=3.142:4.71 , variable=\ ] ( [ -1\ * 0.15\ * cos ( r ) +0\ * 0.15\ * sin ( r ) ] { } , [ 0\ * 0.15\ * cos ( r ) +1\ * 0.15\ * sin ( r ) ] { } ) ; plot\ [ domain=3.93:4.71 , variable=\ ] ( [ 1\ * 0.71\ * cos ( r ) +0\ * 0.71\ * sin ( r ) ] { } , [ 0\ * 0.71\ * cos ( r ) +1\ * 0.71\ * sin ( r ) ] { } ) ; plot\ [ domain=4.71:5.5 , variable=\ ] ( [ 1\ * 0.71\ * cos ( r ) +0\ * 0.71\ * sin ( r ) ] { } , [ 0\ * 0.71\ * cos ( r ) +1\ * 0.71\ * sin ( r ) ] { } ) ; plot\ [ domain=-0.79:0 , variable=\ ] ( [ 1\ * 0.71\ * cos ( r ) +0\ * 0.71\ * sin ( r ) ] { } , [ 0\ * 0.71\ * cos ( r ) +1\ * 0.71\ * sin ( r ) ] { } ) ; plot\ [ domain=3.142:4.71 , variable=\ ] ( [ 1\ * 0.15\ * cos ( r ) +0\ * 0.15\ * sin ( r ) ] { } , [ 0\ * 0.15\ * cos ( r ) + 1\ * 0.15\ * sin ( r ) ] { } ) ; ( 4,0 ) - ( 4,1 ) ; ( 4,0 ) - ( 3,0 ) ; ( 3,1 ) - ( 3,0 ) ; ( 3,1 ) - ( 3,1.56 ) ; ( 3.29,1.71 ) - ( 3.15,1.71 ) ; ( 2,0 ) - ( 3,0 ) ; ( 3,1 ) - ( 3,1.56 ) ; ( 2.71,1.71 ) - ( 2.85,1.71 ) ; plot\ [ domain=3.93:4.71 , variable=\ ] ( [ -1\ * 0.71\ * cos ( r ) +0\ * 0.71\ * sin ( r ) ] { } , [ 0\ * 0.71\ * cos ( r ) +1\ * 0.71\ * sin ( r ) ] { } ) ; plot\ [ domain=4.71:5.5 , variable=\ ] ( [ -1\ * 0.71\ * cos ( r ) +0\ * 0.71\ * sin ( r ) ] { } , [ 0\ * 0.71\ * cos ( r ) +1\ * 0.71\ * sin ( r ) ] { } ) ; plot\ [ domain=-0.79:0 , variable=\ ] ( [ -1\ * 0.71\ * cos ( r ) +0\ * 0.71\ * sin ( r ) ] { } , [ 0\ * 0.71\ * cos ( r ) +1\ * 0.71\ * sin ( r ) ] { } ) ; plot\ [ domain=3.142:4.71 , variable=\ ] ( [ -1\ * 0.15\ * cos ( r ) +0\ * 0.15\ * sin ( r ) ] { } , [ 0\ * 0.15\ * cos ( r ) +1\ * 0.15\ * sin ( r ) ] { } ) ; plot\ [ domain=3.93:4.71 , variable=\ ] ( [ 1\ * 0.71\ * cos ( r ) +0\ * 0.71\ * sin ( r ) ] { } , [ 0\ * 0.71\ * cos ( r ) +1\ * 0.71\ * sin ( r ) ] { } ) ; plot\ [ domain=4.71:5.5 , variable=\ ] ( [ 1\ * 0.71\ * cos ( r ) +0\ * 0.71\ * sin ( r ) ] { } , [ 0\ * 0.71\ * cos ( r ) +1\ * 0.71\ * sin ( r ) ] { } ) ; plot\ [ domain=-0.79:0 , variable=\ ] ( [ 1\ * 0.71\ * cos ( r ) +0\ * 0.71\ * sin ( r ) ] { } , [ 0\ * 0.71\ * cos ( r ) +1\ * 0.71\ * sin ( r ) ] { } ) ; plot\ [ domain=3.142:4.71 , variable=\ ] ( [ 1\ * 0.15\ * cos ( r ) + 0\ * 0.15\ * sin ( r ) ] { } , [ 0\ * 0.15\ * cos ( r ) +1\ * 0.15\ * sin ( r ) ] { } ) ; plot\ [ domain=3.93:4.71 , variable=\ ] ( [ 1\ * 0.71\ * cos ( r ) +0\ * 0.71\ * sin ( r ) ] { } , [ 0\ * 0.71\ * cos ( r ) +1\ * 0.71\ * sin ( r ) ] { } ) ; plot\ [ domain=4.71:5.5 , variable=\ ] ( [ 1\ * 0.71\ * cos ( r ) +0\ * 0.71\ * sin ( r ) ] { } , [ 0\ * 0.71\ * cos ( r ) +1\ * 0.71\ * sin ( r ) ] { } ) ; plot\ [ domain=-0.79:0 , variable=\ ] ( [ 1\ * 0.71\ * cos ( r ) +0\ * 0.71\ * sin ( r ) ] { } , [ 0\ * 0.71\ * cos ( r ) +1\ * 0.71\ * sin ( r ) ] { } ) ; plot\ [ domain=3.142:4.71 , variable=\ ] ( [ 1\ * 0.15\ * cos ( r ) +0\ * 0.15\ * sin ( r ) ] { } , [ 0\ * 0.15\ * cos ( r ) +1\ * 0.15\ * sin ( r ) ] { } ) ; plot\ [ domain=3.93:4.71 , variable=\ ] ( [ -1\ * 0.71\ * cos ( r ) +0\ * 0.71\ * sin ( r ) ] { } , [ 0\ * 0.71\ * cos ( r ) +1\ * 0.71\ * sin ( r ) ] { } ) ; plot\ [ domain=4.71:5.5 , variable=\ ] ( [ -1\ * 0.71\ * cos ( r ) +0\ * 0.71\ * sin ( r ) ] { } , [ 0\ * 0.71\ * cos ( r ) + 1\ * 0.71\ * sin ( r ) ] { } ) ; plot\ [ domain=-0.79:0 , variable=\ ] ( [ -1\ * 0.71\ * cos ( r ) +0\ * 0.71\ * sin ( r ) ] { } , [ 0\ * 0.71\ * cos ( r ) +1\ * 0.71\ * sin ( r ) ] { } ) ; plot\ [ domain=3.142:4.71 , variable=\ ] ( [ -1\ * 0.15\ * cos ( r ) +0\ * 0.15\ * sin ( r ) ] { } , [ 0\ * 0.15\ * cos ( r ) +1\ * 0.15\ * sin ( r ) ] { } ) ; ( 8,0 ) - ( 8,1 ) ; ( 8,0 ) - ( 7,0 ) ; ( 7,1 ) - ( 
7,0 ) ; ( 7,1 ) - ( 7,1.56 ) ; ( 7.29,1.71 ) - ( 7.15,1.71 ) ; ( 6,0 ) - ( 6,1 ) ; ( 6,0 ) - ( 7,0 ) ; ( 7,1 ) - ( 7,1.56 ) ; ( 6.71,1.71 ) - ( 6.85,1.71 ) ; ( 4,0 ) - ( 5,0 ) ; ( 5,1 ) - ( 5,0 ) ; ( 5,1 ) - ( 5,1.56 ) ; ( 4.71,1.71 ) - ( 4.85,1.71 ) ; ( 6,0 ) - ( 5,0 ) ; ( 5,1 ) - ( 5,1.56 ) ; ( 5.29,1.71 ) - ( 5.15,1.71 ) ; plot\ [ domain=3.93:4.71 , variable=\ ] ( [ -1\ * 0.71\ * cos ( r ) +0\ * 0.71\ * sin ( r ) ] { } , [ 0\ * 0.71\ * cos ( r ) +1\ * 0.71\ * sin ( r ) ] { } ) ; Extreme repeat ratio example. Examples with a high repeat ratio are mostly examples without much content except for one repeated token, sometimes within code: $ d_h ( x , y+\delta ) $ -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- $ \left ( -\infty , \frac { 3\delta } { 4 } +\eps\delta\right ) $ $ ( x_0 , y ) $ - -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- - -- -- -- -- -- -- -- - -- -- -- -- -- -- -- - -- -- -- -- -- -- -- -- - -- -- -- -- -- -- -- -- - -- -- -- * * i look forward to one day becoming a mother * * * * 0 * * . * * 871 * * * * 0 * * . * * 870 * * -0.130 -0.312 0.066 -0.286 Extreme informativeness ratio examples. Very low informative ratio examples also tend to be (sometimes extremely) short: | | Extremely high informativeness examples are often foreign language, since they don’t contain English stop words: maailmankaikkeuden sinnikkäimmällä valittajahahmolla umayya abu-hannalla on taas sanottavaa . niin , tiedätte kyllä mistä : suomalaisten ra-sis-mis-ta . 31 abu-hannaa haastatellaan päivän helsingin sanomissa ehkä ziljoonannen kerran tästä yhdestä ja samasta aiheesta . en ymmärrä miksi . abu-hanna on ollut tällä viikolla suomessa noutamassa global family award -palkintoa ” avarakatseisuudesta ja monikulttuurisuuden merkittävästä edistämisestä suomalaisessa yhteiskunnassa ” . avarakatseisuudesta ? ? ? ? ? en tiedä ketään , jonka katsantokanta on ( julkisen kuvan perusteella ) niin kapea kuin abu-hannan . hänen toistuvat ” suuri osa suomalaisista on rasisteja ” -puheenvuoronsa eivät myöskään synnytä minkäänlaista positiivista dialogia tähän yhteiskuntaan . niistä tykkäävät ja somessa peukuttavat ainoastaan ne ihmiset , jotka ovat itsekin sitä mieltä , että suomi on raakalaismaisten rasistien maa . muissa suomalaisissa ne herättävät kohtuuttomuudessaan ja jankkauksessaan vain ärtymystä . kuten kaikki tiedämme , abu-hanna asuu nykyään hollannissa . vielä vuosi sitten kyseinen maa näyttäytyi hänen haastatteluissaan paratiisina . mutta nyt - ja tämä ei varmasti yllätä ketään - Examples with informative ratio close to 0.5 are more standard English: the feature ( 2 ) mentioned above . the assumption of $ p $ = 1 might be unrealistic in usual ferromagnetic metals . however , if the exchange interaction between eu atoms is accomplished via $ \pi $ -bands of c $ _ { 60 } $ as discussed earlier , we can expect a large spin polarization of $ \pi $ -electrons . we can also consider the effect of magnetic polaron . 
in magnetic semiconductors such as eu chalcogenides , a carrier makes surrounding magnetic moments be polarized via exchange interaction and forms a magnetic polaron [ @ kasuya1 ] . at zero field , magnetic polarons have to move with flipping some magnetic moments which are more or less randomly oriented , and their conduction is suppressed . application of magnetic field aligns spin directions and carriers become mobile . as a result , negative magnetoresistance occurs . the negative magnetoresistance above $ t_c $ can be attributed to this 32
synthetic_cpt
1
Generative_Adversarial_Networks_for_Synthetic_Data_Generation_in_Finance_Evaluating_Statistical_Similarities_and_Quality_Assessment.pdf
FedSyn: Synthetic Data Generation using Federated Learning Monik Raj Behera1, Sudhir Upadhyay1, Suresh Shetty1, Sudha Priyadarshini1, Palka Patel1, Ker Farn Lee1 {monik.r.behera,sudhir.x.upadhyay,suresh.shetty,sudha.priyadarshini} @jpmorgan.com 1Onyx by J.P. Morgan 2 2 0 2 r p A 6 ] L M . t a t s [ 2 v 1 3 9 5 0 . 3 0 2 2 : v i X r a Abstract—As Deep Learning algorithms continue to evolve and become more sophisticated, they require massive datasets for model training and efficacy of models. Some of those data requirements can be met with the help of existing datasets within the organizations. Current Machine Learning practices can be leveraged to generate synthetic data from an existing dataset. Further, it is well established that diversity in generated synthetic data relies on (and is perhaps limited by) statistical properties of available dataset within a single organization or entity. The more diverse an existing dataset is, the more expressive and generic synthetic data can be. However, given the scarcity of underlying data, it is challenging to collate big data in one organization. The diverse, non-overlapping dataset across distinct organizations provides an opportunity for them to contribute their limited distinct data to a larger pool that can be leveraged to further synthesize. Unfortunately, this raises data privacy concerns that some institutions may not be comfortable with. This paper proposes a novel approach to generate synthetic data - FedSyn. FedSyn is a collaborative, privacy preserving approach to generate synthetic data among multiple participants in a federated and collaborative network. FedSyn creates a synthetic data generation model, which can generate synthetic data consisting of statistical distribution of almost all the par- ticipants in the network. FedSyn does not require access to the data of an individual participant, hence protecting the privacy of participant’s data. The proposed technique in this paper leverages federated machine learning and generative adversarial network (GAN) as neural network architecture for synthetic data generation. The proposed method can be extended to many machine learning problem classes in finance, health, governance, technology and many more. Index Terms—Federated Learning, Machine Learning, Gen- erative Adversarial Network, Neural Network, Synthetic Data Generation, Deep Learning I. INTRODUCTION Back in 2017, The Economist published a story titled, ”The world’s most valuable resource is no longer oil, but data.” Since then data has continued to be one of the most critical and valuable asset in industry today, where we see innovative and complex machine learning algorithms and architectures, which are dependent on a plethora of data for training, testing, mining and designing ‘data-centric’ statistical algorithms [1]. In many organizations, big data is mined with robust data engineering pipelines, which allows them to create sufficient datasets to train machine learning and deep learning models. Scarcity of data can be a big challenge to application of machine learning for organizations or entities which are not able to mine big data. Though transfer learning [2] may help, it does not cater to all the needs as they are generally designed over public data. In few cases, even organizations with abundant data may find some bias in their dataset because of various reasons, resulting in biased and non-generic machine learning models with limited variance in available data. 
GAN [3] have emerged as one of the prominent neural net- work architectures to generate synthetic data. The architecture can be modified to accommodate various data signals. Since discriminator and generator models compete with each other in an adversarial game to generate synthetic data as close as possible to original data, the quality of synthetic data is directly dependent on training data used. Generative Adversar- ial Networks trained on available data can help generate data required for training machine learning models in organizations. In financial, health care and Internet of Things industries, re- cent trends [4]–[9] have indicated an increase in collaboration among organizations to benefit from data of their peers in their respective industries. To draw a hypothetical scenario, different financial organizations have varying concentration of their business operations in different geographical distributions. This in turn results in the concentration of a specific dataset in those locations. Localized regional data would certainly be valuable to other regions that can leverage it for training their local models for developing intelligent solutions, including anomaly detection, potential payment fraudulent in case of cross border payments etc. However, considering data privacy laws and governance on data accessibility within industry it may be challenging to share such data across regions. In such cases, it may be possible to share the local models across regions that can augment other regional models. To extend it further, the proposed method in this paper can be implemented for greater research opportunities to obtain powerful and global machine learning models. A. Novel Contribution in Current Work Industries, research groups and individuals would benefit from machine learning models using collaboration, which will bridge the gap of data scarcity and data bias [10]. Though collaboration is required, data privacy must be honoured. Generating synthetic data is a strategic way to proceed, where challenges to data scarcity could be solved, but data bias would still be observed, since synthetic data generation models would be trained on existing data only. The experiments in this paper demonstrate generation of synthetic data in a privacy preserving and collaborative manner. a. Data Scarcity b. Data Bias c. Data Privacy In order to try and solve the above-mentioned challenges, FedSyn framework has been proposed in this paper. FedSyn proposes a novel method, in which federated learning, syn- thetic data generation using GAN and differential privacy are combined as a framework. For solving the challenge of data scarcity, generative adversarial networks are being used to generate synthetic data after being trained upon existing data. Federated learning [11] provides privacy of data as well as that of underlying data of participants in collaborative network, with aggregation algorithm executed on non-IID data [12]. This essentially tries to solve the problem of data bias, as aggregated models from federated server builds on learning from individual participants. To extend data privacy and anonymity guarantee even further, FedSyn implements differential privacy by adding Laplacian noise [13] to model parameters. II. RELATED WORK Synthetic data generation has been an active area of research and exploration, both for research groups and industries. 
Syn- thetic data is not only used for training machine learning, but software engineering industry also depends on synthetic data for testing, and other business use cases. With the introduction of GAN, it has become de facto method to generate synthetic data signals like image, uni-variate data, multivariate data, audio, video, etc. A. Generative Adversarial Network As discussed in [14], GANs have been quite popular in the domain of synthetic data generation. They are being actively researched in areas of data and image augmentation, generat- ing synthetic images for medical domain, data for extensive training and many more. In [15], authors have discussed the common design patterns, use cases and various architectures in GANs - convolutional GAN, conditional GAN, tabular GAN, adversarial auto-encoders. B. Federated Learning Horizontal federated learning [16] with non-IID data is the key consideration in this paper, where participants have varied data distribution and properties. A majority of the enterprise networks for federated learning are constrained. This ensures homogeneous model parameters and architecture for all the participating clients in federated learning network. In real world scenarios, where data originate from different organizational entities, covariate shift, prior probability shift, concept shift and unbalanced data size are common technical challenges [17]. C. Differential Privacy In [18], authors have discussed through numerous exper- iments, that Laplacian noise performs better over Gaussian noise by significantly reducing noise amplitude and noise power in all privacy regimes. This also provides an effective noise distribution, which does not reduce the performance of aggregated machine learning model as compared to Gaussian noise, since Gaussian distribution has higher standard devia- tion (and noise spread). In [19], authors have discussed meth- ods of adding noise to gradients of the neural network while training. This is to obfuscate generator, which provides strict differential privacy guarantee. In [20], Private Aggregation of Teacher Ensemble framework is implemented with GAN, which further guarantees tight differential privacy by diluting the effect of any individual sample. D. Federated Learning and Synthetic Data In [21], authors have showed usage of federated learning for GANs, with Gaussian noise in data during individual participant training (local training), which would protect the privacy of participants. This work has also provided theoretical guarantee of ((cid:15), δ)-differential privacy. In [22], authors have shown the usage of synthetic data to enhance the communica- tion rounds in federated learning. The participants in federated learning are sending synthetic data, generated from original data, which will be used for training on federated server. This ensures data privacy and optimizes communication overhead, which can be really effective in complex real-world scenarios. III. FEDERATED LEARNING-BASED SYNTHETIC DATA GENERATION For a given problem specification, every individual partic- ipant will be executing GAN based synthetic data generator. This will generate synthetic data, which must be deployed for usage, monitored for various possible challenges like data drift, temporal drift, distribution shift, etc. 
Since the local GAN process is independent of federated learning network, it should follow the general machine learning pipeline practiced across industry, which is comprised of data engineering, model monitoring and feedback for supervised models [23]. FedSyn is a privacy preserving method of generating syn- thetic data in a collaborative manner. The method is suitable for enterprise networks, where individual participants have ample resources to train complex neural networks. Network participant with sufficient data for deep neural networks have a better chances of contributing to federated aggregate model, as compared to participants with resource constraints. The framework is dependent on creating a collaborative and trusted network of servers, communicating model parameters in every federated learning communication round with trusted execu- tion entity over secure channels, as depicted in Figure 1. FedSyn is comprised of three core components in one round, as depicted in Figure 2. Fig. 1: In the above architecture diagram, individual nodes are running data engineering jobs with processes to serve and re-train data models. The participants are connected over a secure network, with many-to-one star topology to share model parameters. Each individual participant consists of data storage and local GAN process A. Generative Adversarial Network In the current work, GAN is used to generate synthetic data at individual participant level. As with every GAN, it consists of two separate neural network models - 1) Generator 2) Discriminator In case of GAN training, the data is prepared and fed into discriminator network for training. Further, random latent space is generated with Gaussian distribution, which is then trained further as an adversarial game with discriminator net- work to achieve an objective, where generator generates latent space which are classified as real by discriminator network. The epochs for this training can be decided empirically, as optimal number of epochs are dependent on neural network architecture and data used. The discriminator network is essentially a binary classifier to classify between real and fake images. Figure 3.a depicts the discriminator network which is used for the paper. Discrim- inator can be modified to obtain various other architectures depending on the objective and data. Generator network is a feature extractor, which generates latent space following specific distribution learnt during train- ing. Similar to discriminator network, generator network can follow various architectures depending on the use case. Figure 3.b shows the generator network used in the current work. B. Differential Privacy In [24], differential privacy is defined as a method to protect privacy of underlying data and statistical properties, which can be potentially leaked from models. The core idea is to add noise onto training data, which would be added to model parameters and learnt indirectly, since the data used for training is modified. In the current work, differential privacy is implemented by adding noise to model parameters, rather than data. Doing this adds noise to generator weight layers, which are respon- sible for generating latent space for synthetic images [25]. Adding noise to model parameters of generator adds noise to latent space in multi-dimensional hyper-sphere, effecting synthetic images itself, hence providing similar benefits as of adding noise to data itself. This approach also makes it computationally favourable for all the participants. 
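As a rough sketch of this step (ours, not the paper's code): Laplacian noise can be added directly to the trained generator's weight tensors before they are shared, for example with numpy and the Keras weight accessors. Treating λ as the scale of the Laplace distribution and the function name are our assumptions.

import numpy as np

def add_laplacian_noise_to_weights(model, mu=0.0, lam=1e-4, seed=None):
    # Perturb every weight tensor of the local generator before it leaves the
    # participant, as a simple differential-privacy-style obfuscation.
    rng = np.random.default_rng(seed)
    noisy = [w + rng.laplace(loc=mu, scale=lam, size=w.shape) for w in model.get_weights()]
    model.set_weights(noisy)
    return model

A participant would call this on its generator once local training finishes, just before serializing the weights for the aggregator.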
Laplacian noise is added to the model parameters before they are sent from participant server to aggregation server. Use of Laplacian noise is favourable over Gaussian noise, as the resultant model parameters are quantitatively less distorted compared to Gaussian noise. Even after that, privacy guarantees are better for Laplacian noise [13]. Below equation shows how the noise introduce noise to model parameters, which may mitigate data leakage to a certain extent. C. Federated Learning As described in [26], the current federated learning frame- work consists of a trusted and secure aggregator server. The federated learning rounds are orchestrated by aggregation server through time bound triggers. In every round, partici- pants share their model parameters to the aggregator server through secure and anonymous channels. Due to the nature of communication, it follows star topology [27] for network communication. Aggregator server responsible for aggregation of all the local participant’s model parameters using FedAvg algorithm [28]. A key enhancement done in FedSyn architecture for av- eraging algorithm is to add Laplacian noise during FedAvg aggregation. The enhanced algorithm is depicted in Algorithm 1. Laplacian noise is added after the weighted averaging of model parameters is completed. This further extends the required privacy of the model, which will protect against adversarial attacks and prevent participant identity leakage. Algorithm 1 FedSyn - FederatedAveraging. In the cluster there are N clients in total, each with a learning rate of η. The set containing all clients is denoted as S, the communication interval is denoted as E, and the weights of clients is denoted as P . The Laplacian noise is denoted as Lnoise(µ, λ), where µ denotes mean of Laplacian distribution and λ denotes exponential decay parameter of Laplacian distribution Central server do: Initialization: global model w0. for each global iteration t ∈ 1, ..., iteration do for all each client k ∈ S # Get clients improved model. wk t+1 ← T rainLocally(k, wt) end for # Update the global model. wt+1 ← Lnoise(µ, λ) + (cid:80)N k=0 pkwk t+1 end for TrainLocally(k, w0): for each client iteration e ∈ 1, ..., E do # Do local model training. we ← we−1 − η∇F (we−1) Fig. 2: In the above diagram, three core components are depicted - General adversarial network deep neural network at local participant level, differential privacy added at local participant level and aggregation layer, and finally federated learning over the complete network from each participant is accumulated at federated aggregated server. One can observe that noise from each participant is added together and propagated further in aggregated model parameters, and subsequent federated learning rounds. WG = N (cid:88) i wipi (cid:80)N i pi N (cid:88) + i Lipi (cid:80)N i pi (1) where WG is aggregated model parameter, N is total num- ber of participants, wi is model parameter of ith participant, pi is scaling weight of ith participant, Lipi is Laplacian noise for ith participant. To extend further, Laplacian noise is also added during aggregation for increased privacy. This enhances Equation 1 as below W (cid:48) G = N (cid:88) i wipi (cid:80)N i pi N (cid:88) + i Lipi (cid:80)N i pi + LG(cid:48) (2) end for return wE where LG(cid:48) is Laplacian noise added during aggregation of participant’s model parameters and W (cid:48) G is the enhanced model parameter after aggregation, with additive noise accumulation. 
In [19], [20], the authors have discussed and shown the privacy guarantees of using differential privacy on various layers of the neural network architecture and training. Data leakage remains a core challenge in synthetic data generation and requires continued research; our work takes its inspiration from [19], [20].

IV. EXPERIMENTS

In the current work, the performance of synthetic data generated with the FedSyn method has been captured on both the MNIST [29] and CIFAR10 [30] public datasets. Both datasets are widely used for federated learning experiments, since they contain well-sampled data elements for 10 labels. For the current work, the MNIST and CIFAR10 data are divided into 3 parts based on labels, in separate experiments. In each experiment, part 1 denotes all the images from the dataset which have labels 0, 1 and 2, with a train size of 15000. Part 2 denotes images for labels 3, 4, 5 and 6, with a train size of 20000. Part 3 denotes images for labels 7, 8 and 9, with a train size of 15000. This sampling allows for 3 different simulated participants in the federated learning network, in a non-IID manner. Given the sample sizes, the importance weight factor p_k mentioned in Algorithm 1 is 0.3, 0.4 and 0.3, respectively.

Fig. 3: GAN model's generator and discriminator neural network architectures. (a) The discriminator neural network architecture, depicted in Tensorflow Keras format, specific to this paper: the top layer is the input layer, followed by a 2D convolution layer with a LeakyReLU layer followed by a dropout layer; the subsequent layer is a 2D convolution layer with LeakyReLU with reduced kernel size, followed by a dropout layer; the next layer is a flatten layer; the final layer is a dense layer with a sigmoid activation. (b) The generator neural network architecture, depicted in Tensorflow Keras format, specific to this paper: the top layer is the input layer, followed by a dense layer and a LeakyReLU layer followed by a reshape layer; the subsequent layer is a 2D transposed convolution layer with LeakyReLU, again followed by a 2D transposed convolution layer with LeakyReLU with doubled kernel size; the final layer is a 2D convolution layer in the desired shape of the expected data, with a sigmoid activation.

For local model training of the GANs, 4-core, 16 GB memory AWS EC2 machines were used for each simulated participant and for the aggregator server. Each image from the dataset has a resolution of 28x28. For the current work, the architecture of the generator neural network used is described in Figure 3.b and the architecture of the discriminator neural network is described in Figure 3.a. The neural networks are implemented using the Tensorflow framework [31]. For training, batches of 256 have been used with binary cross entropy as the loss function [32]. The Adam optimizer [33] with a learning rate of 0.0002 and an exponential decay rate of 0.5 is used. The training accuracy for each participant is taken and the mean is computed at each epoch checkpoint (10 epochs per checkpoint). Training is done for 100 epochs. Figure 4 shows the average (over all the simulated participants, in both datasets' experiments) training accuracy of the discriminator for classifying real and fake images accurately. From the plot, it is observed that a higher number of epochs does not necessarily guarantee better accuracy of the models. This phenomenon may occur because of a higher learning rate overshooting the global minimum of the cost function [34].
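The captions of Figure 3 describe the layer sequence only qualitatively, so the Keras sketch below is one possible reading of that description for 28x28 single-channel images; the filter counts, kernel sizes, strides and the latent dimension are plausible assumptions rather than values taken from the paper, and the stated "exponential decay rate of 0.5" is read here as Adam's beta_1.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

latent_dim = 100  # assumed size of the Gaussian latent vector

def build_discriminator():
    # Conv2D + LeakyReLU + Dropout twice, then Flatten and a sigmoid Dense layer,
    # mirroring the description of Figure 3.a.
    return models.Sequential([
        tf.keras.Input(shape=(28, 28, 1)),
        layers.Conv2D(64, kernel_size=4, strides=2, padding="same"),
        layers.LeakyReLU(0.2),
        layers.Dropout(0.3),
        layers.Conv2D(64, kernel_size=3, strides=2, padding="same"),  # reduced kernel size
        layers.LeakyReLU(0.2),
        layers.Dropout(0.3),
        layers.Flatten(),
        layers.Dense(1, activation="sigmoid"),
    ])

def build_generator():
    # Dense + LeakyReLU + Reshape, two transposed convolutions, then a sigmoid
    # Conv2D in the shape of the expected data, mirroring Figure 3.b.
    return models.Sequential([
        tf.keras.Input(shape=(latent_dim,)),
        layers.Dense(7 * 7 * 128),
        layers.LeakyReLU(0.2),
        layers.Reshape((7, 7, 128)),
        layers.Conv2DTranspose(128, kernel_size=4, strides=2, padding="same"),
        layers.LeakyReLU(0.2),
        layers.Conv2DTranspose(128, kernel_size=8, strides=2, padding="same"),  # doubled kernel size
        layers.LeakyReLU(0.2),
        layers.Conv2D(1, kernel_size=7, activation="sigmoid", padding="same"),
    ])

discriminator = build_discriminator()
discriminator.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=0.0002, beta_1=0.5),
    loss="binary_crossentropy",
    metrics=["accuracy"],
)
```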
In the experiment, the models after 50 epochs have been taken as the final model for each participant, whose model parameters are sent to the aggregator server. As discussed in [35], measuring the quantitative performance of GAN-generated images is still an active research area with no standard method. Currently, to assess the quality of images generated with a GAN, one can employ problem-specific and data-specific methods, or proceed with qualitative methods involving a subjective analysis of the generated images. In Figure 5, the quality of the synthetically generated images for the MNIST data can be qualitatively observed to improve over the number of epochs during training.

Fig. 4: Average training accuracy (across both datasets and all simulated participants), in percentage, plotted against the number of epochs. The red line depicts the accuracy for classifying fake data points and the green line depicts the accuracy for classifying real data points.

After the completion of local GAN training, Laplacian noise [13] with µ = 0 and λ set to 10^0, 10^-1, 10^-2, 10^-3, 10^-4, 10^-5 and 10^-6 is added to the model parameters. The resulting model parameters from each participant are securely sent to the aggregator server for aggregation, as described in Algorithm 1. Figure 6 shows the effect of changing the λ of the Laplacian noise added for differential privacy on the quality of the images generated, for MNIST data. The synthetic images in Figure 6 are generated from the global model, with the aggregated model parameters from federated learning.

Fig. 5: Qualitative analysis of generated images for MNIST data over the number of epochs during training of the local GAN model for Participant 1: (a) after 10 epochs, (b) after 20 epochs, (c) after 30 epochs, (d) after 40 epochs, (e) after 50 epochs.

It can be observed that the quality of the images does not change significantly for λ = 10^-4, 10^-5 and 10^-6. So the higher value of λ can be chosen for the given problem, as the higher the λ, the higher the degree of privacy maintained. Though quantitative metrics for GANs are still an active area of research, the Frechet Inception Distance (FID) [35] is a metric that calculates the distance between the feature vectors of base images and of synthetically generated images. FID scores have also been used in several other works [21], [35], [36] related to synthetic data generation using federated learning. The lower the FID score, the better the quality of the synthetic images generated. In our case, the base images are the synthetic images generated by a GAN model that is trained centrally over all the MNIST and CIFAR10 data (in two separate experiments). Thus, the central GAN model forms the upper bound for the synthetic images generated with FedSyn. As can be observed in Figure 7, with increasing values of λ in the Laplacian noise used for differential privacy, the FID score also increases. This shows the deteriorating effect on the generated images of increasing the noise. The FID score can therefore be a useful metric for deciding the optimal value of λ for the Laplacian noise. After observing Figures 6 and 7, it is evident that λ = 10^-4 is an optimal choice, where the images generated are of acceptable quality and the degree of differential privacy is also sound. The results for the hyperparameter λ over both datasets are also shown in Table I.
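Since the paper relies on FID as its quantitative metric, a standard implementation of the score is sketched below; the feature matrices are assumed to be activations of a pre-trained Inception network computed elsewhere, which is the usual but not explicitly stated setup, and the function name is illustrative.

```python
import numpy as np
from scipy.linalg import sqrtm

def frechet_inception_distance(feat_base, feat_generated):
    """Both inputs are (n_samples, n_features) arrays of image features,
    typically activations of a pre-trained Inception network; smaller is better."""
    mu1, mu2 = feat_base.mean(axis=0), feat_generated.mean(axis=0)
    sigma1 = np.cov(feat_base, rowvar=False)
    sigma2 = np.cov(feat_generated, rowvar=False)
    covmean = sqrtm(sigma1 @ sigma2)
    if np.iscomplexobj(covmean):   # numerical noise can produce tiny imaginary parts
        covmean = covmean.real
    return float(np.sum((mu1 - mu2) ** 2) + np.trace(sigma1 + sigma2 - 2.0 * covmean))
```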
To draw a comparison of the quality of the images generated through FedSyn, Figure 8 shows synthetically generated images from a GAN trained over all 10 labels versus synthetic images generated from the FedSyn model, for MNIST data. As can be observed, the images generated from the global generator model, created using the aggregated model parameters from the federated learning round, consist of images from all 10 labels (handwritten digits), whereas the individual participants have been trained on a fraction of the total labels. Also, the aggregator server does not have access to all the data points, but the aggregated model parameters are capable of generating synthetic data for all the labels. This observation supports the claim, proposed in this paper, of using FedSyn on non-IID data with Laplacian noise for differential privacy on the local generator model parameters.

TABLE I: Frechet Inception Distance score of images generated from FedSyn, compared to images generated with a centrally trained GAN model.

λ of Laplacian Noise | MNIST | CIFAR10
10^-6 | 23.43 | 10.81
10^-5 | 24.88 | 11.71
10^-4 | 24.91 | 11.91
10^-3 | 25.31 | 12.58
10^-2 | 28.96 | 8.84
10^-1 | 111.63 | 84.21
10^0 | 132.45 | 97.74

Fig. 6: Qualitative analysis of synthetic images generated for MNIST data from the global model using the aggregated model parameters, over changing λ of the Laplacian noise that is added to each local model's parameters for differential privacy during federated learning: (a) λ = 10^0, (b) λ = 10^-1, (c) λ = 10^-2, (d) λ = 10^-3, (e) λ = 10^-4, (f) λ = 10^-5, (g) λ = 10^-6.

Fig. 7: The Frechet Inception Distance score plotted against the exponential decay parameter of the Laplacian noise added for differential privacy.

V. CONCLUSION

The growing need for data-intensive deep learning models across industries is the key driver for collaboration among enterprises and participants. The key challenges faced here are data privacy, data scarcity and data bias. In the current work, a privacy-preserving method, federated learning, together with synthetic data generation using a GAN, is implemented to create the FedSyn framework. FedSyn tries to solve the data privacy challenge with federated learning, where the original data is never shared for GAN training; instead, model parameters are shared after applying differential privacy with Laplacian noise. Federated learning with non-IID data among the participants of the network minimises the risk of data bias while aggregating the local model parameters. Data scarcity among certain participants is observed because of imbalanced train sizes or data availability; a federated learning framework with synthetic data generation could solve that issue for participants with fewer data resources. Through the experiments, it has been observed that the quality of the synthetic data generation is dependent on the degree of noise being added for differential privacy. For differential privacy, noise is added to the model parameters as opposed to the original data. With federated learning, the results of the experiments have shown that the aggregated models after federated learning are capable of generating synthetic data for all the labels, without accessing any data for training. This further strengthens the claim of using FedSyn for non-IID data.

VI. FUTURE WORK

The FedSyn framework is built on federated learning and GANs, with differential privacy applied to the model parameters. The compatibility of various other machine learning techniques for synthetic data generation [37]-[41] with federated learning could be worth exploring. To extend the differential privacy guarantees, the study can be extended to other techniques [24], [42] for privacy-based computing. The study can also be extended to analyse the impact of various differential privacy techniques on machine learning and deep learning, their training efficiency, computation overhead and overall model accuracy [43], [44]. The research presented in this paper can also be extended to quantify the energy consumption of federated learning models compared to centralised models, which can be a direction towards sustainable machine learning. The study can be extended to identify classes of problems where federated learning techniques can be employed and can perform better, along with a low carbon footprint. This can be further studied to identify the key parameters of federated learning for energy consumption, such as the energy consumed during communication, upload/download of model parameters, training at the edge, and aggregation [45]-[49]. Studying complex synthetic data generation methods further can even help in minimising the energy requirements of humongous big data engineering systems across data centres [50].

ACKNOWLEDGMENT

The authors would like to thank the following people for their invaluable contributions and support to this project: Tulasi D Movva, Vinay Somashekhar, Thomas Eapen, Senthil Nathan, Sean Moran, Fran Silavong, Kirk Stirling and Paula Valentine from JPMorgan Chase & Co.

REFERENCES

[1] D. Alvarez-Coello, D. Wilms, A. Bekan, and J. M. Gómez, "Towards a data-centric architecture in the automotive industry," Procedia Computer Science, vol. 181, pp. 658-663, 2021.
[2] K. Weiss, T. M. Khoshgoftaar, and D. Wang, "A survey of transfer learning," Journal of Big Data, vol. 3, no. 1, pp. 1-40, 2016.
[3] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, "Generative adversarial nets," Advances in Neural Information Processing Systems, vol. 27, 2014.
[4] J. Pang, Y. Huang, Z. Xie, J. Li, and Z. Cai, "Collaborative city digital twin for the covid-19 pandemic: A federated learning solution," Tsinghua Science and Technology, vol. 26, no. 5, pp. 759-771, 2021.
[5] N. Rieke, J. Hancox, W. Li, F. Milletari, H. R. Roth, S. Albarqouni, S. Bakas, M. N. Galtier, B. A. Landman, K. Maier-Hein et al., "The future of digital health with federated learning," NPJ Digital Medicine, vol. 3, no. 1, pp. 1-7, 2020.
[6] M. Aledhari, R. Razzak, R. M. Parizi, and F. Saeed, "Federated learning: A survey on enabling technologies, protocols, and applications," IEEE Access, vol. 8, pp. 140699-140725, 2020.
[7] Y. Kumar and R. Singla, "Federated learning systems for healthcare: perspective and recent progress," in Federated Learning Systems. Springer, 2021, pp. 141-156.
[8] G. Long, Y. Tan, J. Jiang, and C. Zhang, "Federated learning for open banking," in Federated Learning. Springer, 2020, pp. 240-254.
[9] Z. Zheng, Y. Zhou, Y. Sun, Z. Wang, B. Liu, and K. Li, "Applications of federated learning in smart cities: recent advances, taxonomy, and open challenges," Connection Science, pp. 1-28, 2021.
[10] H. Kaur, H. S. Pannu, and A. K. Malhi, "A systematic review on imbalanced data challenges in machine learning: Applications and solutions," ACM Computing Surveys (CSUR), vol. 52, no. 4, pp. 1-36, 2019.
[11] J. Konečný, H. B. McMahan, F. X. Yu, P. Richtárik, A. T. Suresh, and D.
Bacon, “Federated learning: Strategies for improving communication efficiency,” arXiv preprint arXiv:1610.05492, 2016. [12] H. Zhu, J. Xu, S. Liu, and Y. Jin, “Federated learning on non-iid data: A survey,” 2021. [13] R. Sarathy and K. Muralidhar, “Evaluating laplace noise addition to satisfy differential privacy for numeric data.” Trans. Data Priv., vol. 4, no. 1, pp. 1–17, 2011. [14] H. Emami, M. Dong, S. P. Nejad-Davarani, and C. K. Glide-Hurst, “Gen- erating synthetic cts from magnetic resonance images using generative adversarial networks,” Medical physics, vol. 45, no. 8, pp. 3627–3636, 2018. [15] A. Creswell, T. White, V. Dumoulin, K. Arulkumaran, B. Sengupta, and A. A. Bharath, “Generative adversarial networks: An overview,” IEEE Signal Processing Magazine, vol. 35, no. 1, pp. 53–65, 2018. [16] Q. Yang, Y. Liu, T. Chen, and Y. Tong, “Federated machine learning: Concept and applications,” ACM Transactions on Intelligent Systems and Technology (TIST), vol. 10, no. 2, pp. 1–19, 2019. [17] T. Li, A. K. Sahu, A. Talwalkar, and V. Smith, “Federated learning: Challenges, methods, and future directions,” IEEE Signal Processing Magazine, vol. 37, no. 3, pp. 50–60, 2020. [18] Q. Geng and P. Viswanath, “The optimal noise-adding mechanism in differential privacy,” IEEE Transactions on Information Theory, vol. 62, no. 2, pp. 925–951, 2015. [19] C. Xu, J. Ren, D. Zhang, Y. Zhang, Z. Qin, and K. Ren, “Ganobfuscator: Mitigating information leakage under gan via differential privacy,” IEEE Transactions on Information Forensics and Security, vol. 14, no. 9, pp. 2358–2371, 2019. [20] J. Jordon, J. Yoon, and M. Van Der Schaar, “Pate-gan: Generating synthetic data with differential privacy guarantees,” in International conference on learning representations, 2018. (a) Synthetic Images generated from generator of GAN model, which is centrally trained over all the labels (without federated learning) (b) Synthetic Images generated from FedSyn generator model, converged model using federated learning Fig. 8: Qualitative analysis of quality of images generated from generator of GAN model, which is trained over all the labels versus synthetic images generated from generator of FedSyn model. Implementation of federated learning framework with blockchain network is an active area of research, which can be extended with current study. This would create decentralized, trusted and secure FedSyn framework [51] with autonomous governance. DISCLAIMER This paper was prepared for information purposes by the Onyx Engineering of JPMorgan Chase & Co and its affiliates (J.P. Morgan), and is not a product of the Research Department of J.P. Morgan. J.P. Morgan makes no explicit or implied representation and warranty and accepts no liability, for the completeness, accuracy or reliability of information, or the le- gal, compliance, financial, tax or accounting effects of matters contained herein. This document is not intended as investment research or investment advice, or a recommendation, offer or solicitation for the purchase or sale of any security, financial instrument, financial product or service, or to be used in any way for evaluating the merits of participating in any transaction. sification,” in 2018 IEEE 15th international symposium on biomedical imaging (ISBI 2018). IEEE, 2018, pp. 289–293. [42] C. Dwork, A. Roth et al., “The algorithmic foundations of differential privacy.” Found. Trends Theor. Comput. Sci., vol. 9, no. 3-4, pp. 211– 407, 2014. [43] A. Friedman and A. 
Schuster, “Data mining with differential privacy,” in Proceedings of the 16th ACM SIGKDD international conference on Knowledge discovery and data mining, 2010, pp. 493–502. [44] M. Abadi, A. Chu, I. Goodfellow, H. B. McMahan, I. Mironov, K. Talwar, and L. Zhang, “Deep learning with differential privacy,” in Proceedings of the 2016 ACM SIGSAC conference on computer and communications security, 2016, pp. 308–318. [45] N. Jones, “How to stop data centres from gobbling up the world’s electricity,” Nature, vol. 561, no. 7722, pp. 163–167, 2018. [46] E. Strubell, A. Ganesh, and A. McCallum, “Energy and policy con- siderations for deep learning in nlp,” arXiv preprint arXiv:1906.02243, 2019. [47] P. Henderson, J. Hu, J. Romoff, E. Brunskill, D. Jurafsky, and J. Pineau, “Towards the systematic reporting of the energy and carbon footprints of machine learning,” Journal of Machine Learning Research, vol. 21, no. 248, pp. 1–43, 2020. [48] L. F. W. Anthony, B. Kanding, and R. Selvan, “Carbontracker: Tracking and predicting the carbon footprint of training deep learning models,” arXiv preprint arXiv:2007.03051, 2020. [49] X. Qiu, T. Parcollet, J. Fernandez-Marques, P. P. B. de Gusmao, D. J. Beutel, T. Topal, A. Mathur, and N. D. Lane, “A first look into the carbon footprint of federated learning,” arXiv preprint arXiv:2102.07627, 2021. [50] X. Jin, B. W. Wah, X. Cheng, and Y. Wang, “Significance and challenges of big data research,” Big data research, vol. 2, no. 2, pp. 59–64, 2015. [51] Y. Lu, X. Huang, Y. Dai, S. Maharjan, and Y. Zhang, “Blockchain and federated learning for privacy-preserved data sharing in industrial iot,” IEEE Transactions on Industrial Informatics, vol. 16, no. 6, pp. 4177– 4186, 2019. [21] B. Xin, W. Yang, Y. Geng, S. Chen, S. Wang, and L. Huang, “Private fl-gan: Differential privacy synthetic data generation based on federated learning,” in ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2020, pp. 2927– 2931. [22] J. Goetz and A. Tewari, “Federated learning via synthetic data,” arXiv preprint arXiv:2008.04489, 2020. [23] M. Vartak, H. Subramanyam, W.-E. Lee, S. Viswanathan, S. Husnoo, S. Madden, and M. Zaharia, “Modeldb: a system for machine learning model management,” in Proceedings of the Workshop on Human-In-the- Loop Data Analytics, 2016, pp. 1–3. [24] C. Dwork, “Differential privacy: A survey of results,” in International conference on theory and applications of models of computation. Springer, 2008, pp. 1–19. [25] A. Odena, J. Buckman, C. Olsson, T. Brown, C. Olah, C. Raffel, and I. Goodfellow, “Is generator conditioning causally related to gan per- formance?” in International conference on machine learning. PMLR, 2018, pp. 3849–3858. [26] K. Bonawitz, H. Eichner, W. Grieskamp, D. Huba, A. Ingerman, V. Ivanov, C. Kiddon, J. Koneˇcn`y, S. Mazzocchi, B. McMahan et al., “Towards federated learning at scale: System design,” Proceedings of Machine Learning and Systems, vol. 1, pp. 374–388, 2019. [27] P. W. Dowd, “Random access protocols for high-speed interprocessor communication based on an optical passive star topology,” Journal of Lightwave Technology, vol. 9, no. 6, pp. 799–808, 1991. [28] X. Li, K. Huang, W. Yang, S. Wang, and Z. Zhang, “On the convergence of fedavg on non-iid data,” arXiv preprint arXiv:1907.02189, 2019. [29] L. Deng, “The mnist database of handwritten digit images for machine learning research,” IEEE Signal Processing Magazine, vol. 29, no. 6, pp. 141–142, 2012. Irving, M. [30] A. Krizhevsky, V. 
Nair, and G. Hinton, “The cifar-10 dataset,” online: http://www. cs. toronto. edu/kriz/cifar. html, vol. 55, no. 5, 2014. [31] M. Abadi, A. Agarwal, P. Barham, E. Brevdo, Z. Chen, C. Citro, G. S. Corrado, A. Davis, J. Dean, M. Devin, S. Ghemawat, I. Goodfellow, A. Harp, G. Isard, Y. Jia, R. Jozefowicz, L. Kaiser, M. Kudlur, J. Levenberg, D. Man´e, R. Monga, S. Moore, D. Murray, C. Olah, M. Schuster, J. Shlens, B. Steiner, I. Sutskever, K. Talwar, P. Tucker, V. Vanhoucke, V. Vasudevan, F. Vi´egas, O. Vinyals, P. Warden, M. Wattenberg, M. Wicke, Y. Yu, and X. Zheng, “TensorFlow: Large-scale machine learning on heterogeneous systems,” 2015, software available from tensorflow.org. [Online]. Available: https://www.tensorflow.org/ [32] Y. Ho and S. Wookey, “The real-world-weight cross-entropy loss func- tion: Modeling the costs of mislabeling,” IEEE Access, vol. 8, pp. 4806– 4813, 2019. [33] D. P. Kingma and J. Ba, “Adam: A method for stochastic optimization,” 2017. [34] Z. Zhang and M. Sabuncu, “Generalized cross entropy loss for training deep neural networks with noisy labels,” Advances in neural information processing systems, vol. 31, 2018. [35] K. Shmelkov, C. Schmid, and K. Alahari, “How good is my gan?” in Proceedings of the European Conference on Computer Vision (ECCV), September 2018. [36] C. Hardy, E. Le Merrer, and B. Sericola, “Md-gan: Multi-discriminator generative adversarial networks for distributed datasets,” in 2019 IEEE international parallel and distributed processing symposium (IPDPS). IEEE, 2019, pp. 866–877. [37] J. Eno and C. W. Thompson, “Generating synthetic data to match data mining patterns,” IEEE Internet Computing, vol. 12, no. 3, pp. 78–82, 2008. [38] J. Drechsler and J. P. Reiter, “An empirical evaluation of easily im- plemented, nonparametric methods for generating synthetic datasets,” Computational Statistics & Data Analysis, vol. 55, no. 12, pp. 3232– 3243, 2011. [39] P. Krishnan and C. Jawahar, “Generating synthetic data for text recog- nition,” arXiv preprint arXiv:1608.04224, 2016. [40] N. Jaipuria, X. Zhang, R. Bhasin, M. Arafa, P. Chakravarty, S. Shri- vastava, S. Manglani, and V. N. Murali, “Deflating dataset bias using synthetic data augmentation,” in Proceedings of the IEEE/CVF Confer- ence on Computer Vision and Pattern Recognition Workshops, 2020, pp. 772–773. [41] M. Frid-Adar, E. Klang, M. Amitai, J. Goldberger, and H. Greenspan, “Synthetic data augmentation using gan for improved liver lesion clas-
arXiv:1905.09519v1 [cs.AI] 23 May 2019

The African Wildlife Ontology tutorial ontologies: requirements, design, and content

C. Maria Keet
Full list of author information is available at the end of the article.

Abstract
Background: Most tutorial ontologies focus on illustrating one aspect of ontology development, notably language features and automated reasoners, but ignore ontology development factors, such as emergent modelling guidelines and ontological principles. Yet, novices replicate examples from the exercises they carry out. Not providing good examples holistically causes the propagation of sub-optimal ontology development, which may negatively affect the quality of a real domain ontology.
Results: We identified 22 requirements that a good tutorial ontology should satisfy regarding subject domain, logics and reasoning, and engineering aspects. We developed a set of ontologies about African Wildlife to serve as tutorial ontologies. A majority of the requirements have been met with the set of African Wildlife Ontology tutorial ontologies, which are introduced in this paper. The African Wildlife Ontology is mature and has been used yearly in an ontology engineering course or tutorial since 2010 and is included in a recent ontology engineering textbook with relevant examples and exercises.
Conclusion: The African Wildlife Ontology provides a wide range of options concerning examples and exercises for ontology engineering well beyond illustrating only language features and automated reasoning. It assists in demonstrating tasks about ontology quality, such as alignment to a foundational ontology and satisfying competency questions, versioning, and multilingual ontologies.
Keywords: Ontology Engineering; Tutorial Ontology; African Wildlife

1 Background
The amount of educational material to learn about ontologies is increasing gradually, and there is material for different target audiences, including domain experts, applied philosophers, computer scientists, software developers, and practitioners. These materials may include a tutorial ontology to illustrate concepts and principles and may be used for exercises. There are no guidelines as to what such a tutorial ontology should be about and should look like. The two most popular tutorial ontologies are about wine and pizza, which are not ideal introductory subject domains on closer inspection (discussed below), they are limited to the OWL DL ontology language only, and are over 15 years old by now, hence, not taking into consideration the more recent insights in ontology engineering nor the OWL 2 standard with its additional features [15].

Considering subject domains in the most closely related area, conceptual modelling for relational databases, there is a small set of universes of discourse that are used in teaching throughout the plethora of teaching materials available: the video/DVD/book rentals, employees at a company, a university, and, to a lesser extent, flights and airplanes. Neither of these topics for databases lend themselves well for ontologies, for the simple reason that the two have different purposes. It does raise the question as to what would be a suitable subject domain and, more fundamentally, what it is that makes some subject domain suitable but not another, and, underlying that, what the requirements are for an ontology to be a good tutorial ontology.

In this paper, we will first analyse existing tutorial ontologies and highlight some issues (Section 1.1).
We then proceed in Section 2 to formulate a preliminary, first, list of requirements that tutorial ontologies should meet, the contents of the African Wildlife Ontology (AWO) tutorial ontologies, and how the AWO fares against the requirements. Further utility is described in Section 3, as well as a discussion. The scope of this paper is thus to introduce the AWO tutorial ontologies and to frame it in that context. Finally, we conclude the paper in Section 4.

1.1 Tutorial ontologies: examples of error propagation in teaching
There are two popular tutorial ontologies, being the Wine Ontology and the Pizza ontology, one for being linked to the OWL guide and the other for being linked to the most popular ontology development environment (Protégé). They both have various shortcomings as tutorial ontologies, however, especially concerning modelling practices or styles, which are discussed first.

The Wine ontology[1] and its related "Ontology development 101" [2] predate OWL 1 with its frames and slots. While the guide contains good suggestions, such as that "Synonyms for the same concept do not represent different classes", there are modelling issues, notably that the ontology is replete with the class-as-instance error[3] (e.g., TaylorPort as instance of Port and MalbecGrape as instance of Grape instead of as subclass of), and the sub-optimal object property naming scheme of 'hasX', such as adjacentRegion between two Regions rather than the reusable and generic adjacent. Further, it uses different desiderata in the direct subclassing of wine[4], which does make it interesting for showing classification reasoning (except the undesirable deduction that DryWine ≡ TableWine), but is not ideal from a modelling viewpoint. Further, from a tutorial viewpoint: there are many repetitions, such as very many wineries, which distract from the principles, and it lacks annotations.

The Pizza ontology tutorial was created for the Protégé user manual and OWL DL ontology language [1]. It reflects the state of the art at that time, yet much has happened over the past 15 years. For instance, there are new OWL 2 features and there are foundational ontologies that provide guidance for representing attributes (cf. Pizza's ValuePartition). Pizza's DomainConcept throws a learner straight into philosophical debates, which may not be useful to start with, and, for all practical purposes, duplicates owl:Thing. Like the Wine, it has the 'hasX' naming scheme, such as hasTopping, including the name of the class it is supposed to relate to, which is a workaround for not having qualified number restrictions (an OWL 1 artefact) and sub-optimal ontological analysis of the relation (in casu, of how the toppings really relate to the rest of the pizza) that reduces the chance of ontology reuse and alignment. Also, this propagates into students' modelling approaches[5].

[1] http://www.w3.org/TR/2003/PR-owl-guide-20031209/wine
[2] https://protege.stanford.edu/publications/ontology_development/ontology101-noy-mcguinness.html
[3] Promoted by the incorrect statement in the guideline "Individual instances are the most specific concepts represented in a knowledge base."
[4] e.g., the likes of Bordeaux and Loire (region-based) and Chardonnay and Cabernet Sauvignon (grape-based), and then there are other criteria, like DessertWine (food pairing-based grouping) and 'wine descriptor' ones (DryWine, RedWine, TableWine)
Modelling issues are compounded by the “we generally advise against do- ing [domain and range declarations]” in the tutorial documentation. When one aims to get novices to use Prot´eg´e and OWL so as not get too many errors with the automated reasoners, that might make sense, but ontologically, fewer constraints make an ontology less good because it admits more unintended models. Fi- nally, it has repetitive content to show features, which may be distracting, and, as with Wine, there is only one ‘final’ ontology, despite that multiple releases are common in practice. Other tutorial ontologies include Family History, University, and Shirt. Family History [2] is developed by the same group as Pizza and aims to teach about advanced OWL 2 features and maximise the use of inferencing. Loading it in Prot´eg´e 5.2 results in three punning errors and trying to classify it returned an OutOfMemoryError (on a MacBookPro, 2.6 GHz and 8GB of memory), which is not ideal to start a tutorial with. Concerning modelling issues, ParentOfRobert il- lustrates one can use individuals in class expressions, but just that the language allows it, does not mean it is ontologically a good idea that must be taught. It also has the ‘hasX’ semantic weakness, very few anno- tations, DomainEntity being subsumed by owl:Thing, and multiple data properties. In contrast to Pizza and Wine, all the declared instances are instances and the ontology has different versions as one goes along in the chapters. It has some subject domain aspects descend- ing into politics, which would render it unsuitable for teaching in several countries, such as stating that Sex ≡ Female (cid:116) Male (enforcing a gender binary) and that Person (cid:118) ≤ 2 hasParent.Person (biologically, but not always societally). The University ontology also focuses on illustrating OWL features and automated reasoning, rather than modelling. For instance, it has AcademicStaff with sib- ling NonAcademicStaff where a “non-X” complement class is sub-optimal, especially when there is a term for it. The representation of Student (cid:118) Person is an advanced modelling aspect that can be improved upon with a separate branch for roles played by an object. The Computer Science Ontology was based on the Uni- versity Ontology tutorial and contains artificial classes, like unions of classes (ProfessorinHCIorAI) and under- specified or incorrect individuals like AI and HCI (e.g., [5]such instances were found in ontologies developed by students in earlier instances of the author’s course on ontology engineering, such as developing a sandwich ontology with hasFilling, an electrical circuit board on- tology with hasIsolator, furniture with hasHeadboard etc. Keet Page 3 of 8 some course instance would be CS AI-sem1-2018 in- stead). The Shirt ontology is a tutorial ontology to explain the structure and organisation of the Foundational Model of Anatomy in a simpler way[6] and therefore does not have the hasX naming scheme for object prop- erties, it has no data properties and no instances. It has many annotations with explanations of the enti- ties. There are no ‘interesting’ inferences. Finally, more or less related textbooks were consid- ered [3, 4, 5, 6, 7]. Only the “Semantic Web for the working Ontologist” (2nd ed.) has sample files for the book’s many small examples[7] with two reoccurring subject domains, being English literature and prod- ucts. 1.2 Problems to address The previous section described several problems with existing tutorial ontologies. 
Notably, a recurring short- coming is that good modelling practices are mostly ignored in favour of demonstrating language features, automated reasoning, and tools. This has a negative effect on learning about ontology development, for tu- torial ontology practices are nonetheless seen by stu- dents as so-called ‘model answers’ even if it were not intended to have that function. The ontology survey does not reveal what may be the characteristics of a good tutorial ontology and, to the best of our knowledge, there is no such list of criteria for tutorial ontologies specifically (only for production level domain ontologies, such as [8, 9, 10, 11]). 1.3 Potential benefits of the African Wildlife Ontology tutorial ontologies In order to address these problems, we introduce the African Wildlife Ontology (AWO). The AWO has been developed and extended over 8 years. It meets a range of different tutorial ontology requirements, notably re- garding subject domain, language feature use and au- tomated reasoning, and its link with foundational on- tologies on the one hand and engineering on the other. It aims to take a principled approach to tutorial ontol- ogy development, which thereby not only may assist a learner, but, moreover from a scientific viewpoint, it might serve as a starting point for tutorial ontology creation or improvement more broadly, and therewith in the future contribute to an experimental analysis of tutorial ontology quality. This could benefit educa- tional material for ontology development. Also, educationally, there is some benefit to ‘reusing’ the same ontology to illustrate a range of aspects, rather than introducing many small ad hoc examples, for then later in a course, it makes it easier for the learners to see the advances they have made. This is also illustrated with offering multiple versions of the same ontology, which clearly indicate different types of increments. Finally, the AWO can be used on its own or together with the textbook “An Introduction to Ontology Engi- neering” [12], which contains examples, tasks and ex- ercises with the AWO. 2 Construction and content The construction of the AWO tutorial ontologies has gone through an iterative development process since 2010. This involved various extensions and improve- ments by design, mainly to address the increasing amount of requirements to meet, and maintenance is- sues, such as resolving link rot of an imported ontol- ogy. Rather than describing the process of the iter- ative development cycles, we present here a ‘digest’ version of it. First, a set of tutorial ontology require- ments are presented together, then a brief overview of the AWO content is described, and subsequently we turn to which of these requirements are met by the AWO. 2.1 OE Tutorial ontology requirements Tutorials on ontologies may have different foci and it is unlikely that an ontology used for a specific tutorial will meet all requirements. The ontology should meet the needs for that tutorial or course, and that should be stated clearly. As such, this list is intended to serve as a set of considerations when developing a tutorial ontology. Each item easily can take up a paragraph of explanation. We refrain from this by assuming the reader of this paper is sufficiently well-versed in ontol- ogy engineering and seeking information on tutorial ontologies. For indicative purpose, the requirements are categorised under three dimensions: the subject domain of the ontology, logics & reasoning, and engi- neering factors. Subject domain 1. 
It should be general, common sense, domain knowledge, so as to be sufficiently intuitive for non-experts to be able to understand content and add knowledge. Optionally, it may be an enjoyable subject domain to make it look more interesting and, perhaps, also uncontroversial[8] to increase [6]http://xiphoid.biostr.washington.edu/fma/shirt_ontology/ shirt_ontology_1.php [7]http://www.workingontologist.org/Examples.zip; cessed: 26-11-2018. Last ac- [8]Recent general issues include subject domains of ex- ercises that perpetuate stereotypes and simplifications, such as, but not limited to, the gender binary, who can marry whom, and boys with cars. Keet Page 4 of 8 chance of use across different settings and cul- tures. 2. The content should be not wrong ontologically, neither regarding how things are represented (e.g., no classes as instances) nor the subject domain semantics (e.g., whales are mammals, not fish). 3. It needs to be sufficiently international or cross- cultural so that experimentation with a scenario with multiple natural languages for multilingual ontologies is plausible. 4. Its contents should demonstrate diverse aspects succinctly when illustrating a point cf. being repetitive in content. 5. It needs to be sufficiently versatile to illustrate the multiple aspects in ontology development (see below), including the use of core relations such as mereology and meronymy. 6. It should permit extension to knowledge that re- quires features beyond the Description Logics- based OWL species, so as to demonstrate rep- resentation limitations and pointers to possible directions of solutions (e.g., fuzzy and temporal aspects, full first order logic). 7. The subject domain has to be plausible for a range of use case scenarios (database integration, sci- ence, NLP, and so on). Logics & Reasoning I. The ontology should be represented in a logic that has tool support for modelling and automated rea- soning. II. The ontology should be represented in a logic that has tool support for ‘debugging’ features that ‘ex- plain’ the deductions such that the tool shows at least the subset of axioms involved in a particular deduction. III. It should permit simple classification examples and easy examples for showing unsatisfiability and inconsistency, meaning as not to involve more than 2-3 axioms in the explanation, and also longer ones for an intermediate level. IV. The standard reasoning tasks should terminate fairly fast (< 5 seconds) for most basic exercises with the ontology, with the ‘standard’ reason- ing tasks being subsumption/classification, sat- isfiability, consistency, querying and instance re- trieval. V. The representation language should offer some way of importing or linking ontologies into a net- work of ontologies. VI. The language should be expressive enough to demonstrate advanced modelling features, such as irreflexivity and role composition. VII. The logic should be intuitive for the modelling examples at least at the start; e.g., if there is a need for ternaries, then that should be used, not a reification or approximation thereof. Engineering and development tasks A. At least some ontology development methods and tools should be able to use the ontology, be used for improvement of the ontology, etc. B. The ontology needs to permit short/simple com- petency questions (CQs) and may permit long and complicated CQs, which are formulated for the ontology’s content and where some can be an- swered on the ontology and others cannot. C. 
At least some of the top-level classes in the hi- erarchy should be straight-forward enough to be easily linked to a leaf category from a foundational ontology (e.g., Animal is clearly a physical object, but the ontological status of Algorithm is not im- mediately obvious). D. It should be relatable to, or usable with, or else at least amenable to the use of, ontology design patterns, be they content patterns or other types of patterns, such as the lexico-syntactic or archi- tecture ones. E. It is beneficial if there is at least one ontology sufficiently related to its contents, so that it can be used for tasks such as comparison, alignment, and ontology imports. F. It is beneficial if there are relevant related non- ontological resources that could be used for bottom-up ontology development, such as a con- ceptual model or thesaurus. G. It should be able to show ontology quality im- provements gradually, stored in different files. H. It should not violate basic ontology design princi- ples (e.g., the data properties issue on hampering reuse). While this list may turn out not to be exhaustive in the near future, it is expected to be sufficient for intro- ductory levels of ontology development tutorials and courses. 2.2 Content of the AWO – at a glance The principal content of the AWO is, in the first stage at least, ‘intuitive’ knowledge about African wildlife. This subject domain originated from an early Seman- tic Web book [4] (its Section 4.3.1, pp119-133, 1st ed.) that was restructured and extended slightly for its first, basic version; see Table 1 and Figure 1. It has descriptions of typical wildlife animals, such as Lions and Elephants, and what they eat, such as Impalas (a sort of antelopes) and Twigs and (i.e., a logical ‘or’) leaves. Basic extensions in the simple version of the on- tology include plant parts, so as to demonstrate part- hood and its transitivity, and carnivore vs. herbivore, Keet Page 5 of 8 which make it easy to illustrate disjointness, subsump- tion reasoning, and unsatisfiable classes, and carniv- orous plants to demonstrate logical consequences of declaring domain and range axioms (in casu, of the eats object property). Most elements have been anno- tated with informal descriptions, and several annota- tions link to descriptions on Wikipedia. Figure 1 The African Wildlife Ontology at a glance. The main classes and relations of the African Wildlife ontology (v1) and an illustrative selection of its subclasses. Like the aforementioned Family History ontology, there are several versions of the AWO that reflect dif- ferent stages of learning. In the case of the AWO, this is not specifically with respect to OWL language fea- tures, but one of notions of ontology quality and where one is in the learning process. For instance, version 1a contains answers to several competency questions— i.e., quality requirements that an ontology ought to meet [13]—that were formulated for Exercise 5.1 in the “Methods and methodologies” chapter of [12]. Versions 2 and 3, on the other hand, have the AWO aligned to the DOLCE and BFO foundational ontologies, re- spectively, whose differences and merits are discussed in Chapter 6 of the textbook. Their respective ver- sions with the answers to the related exercises have the name appended with an ‘a’ as well. Version 4 has some contents ‘cleaned up’, partially based on what the OOPS! 
tool [10] detected, it uses more advanced language features, and takes steps in the direction of adhering to science (e.g., type of carnivores, distin- guishing between types of roots). There are also four versions in different natural lan- guages, being in isiZulu, Afrikaans, Dutch, and Span- ish, which mainly serve the purpose of illustrating is- sues with multilingual settings of ontology use, which relates to content in Chapter 9 of the textbook. 2.3 AWO against the requirements The AWO meets most of the requirements. Concerning the subject domain, the content is general, versatile, not wrong, sufficiently international, and not repeti- tive (Items 1-4). The AWO includes the core relation of parthood for, especially, plants and their parts, with optional straightforward extensions with the partici- pation relation (e.g., animals participating in a Chas- ing event) and membership (animal collectives, such as Herd; see v4 of the AWO), therewith meeting Item 5. Representation of relevant domain knowledge beyond Description Logics-based OWL species (Item 6) could include information about temporal segregation of for- aging or commensalism, inclusion of species with dis- tinct successive phases (e.g., Caterpillar/Butterfly), and the notion of rigidity between what an object is and the role it plays (e.g., Lion playing the role of Preda- tor; see v4 of the AWO). Use case scenarios (Item 7) may be, among others, science of African wildlife, ac- tivism on endangered species, and applications such as a database integration and management system for zoos and for tourism websites. Regarding the logics and reasoning, the AWO is rep- resented in OWL [15], and thus has ample tooling sup- port for knowledge representation, reasoning, and ba- sic debugging/explanation, with ontology development environment tools such as Prot´eg´e (Items I-III). The AWO has both ‘simple’ deductions and more elaborate ones (Item III); e.g., compare Lion that is classified as a Carnivore, having one explanation involving three ax- ioms, with Warthog that is classified as an Omnivore, for which there are three justifications computed that each use, on average, five axioms. Because the AWO is small, does not make extensive use of individuals and high number restrictions, the reasoner terminates fast under all standard reasoning tasks (Item IV). OWL has the language feature to import other ontologies and it also can be used in other ontology network frameworks, notably the Distributed Ontology, Model, and Specification Language DOL [16] (Item V). While OWL contains expressive features such as role chain- ing (Item VI), it, arguably, fails on intuitiveness es- pecially for novices (Item VII). Regarding the latter, e.g., novices make errors in the use of existential and universal quantification (for as of yet unclear reasons), which is not known to be a problem when modelling the equivalent in, say, UML Class Diagrams, and there is the elaborate n-ary (with n ≥ 3) approximation is- sue. With respect to the engineering aspects, by virtue of the AWO being represented in OWL, there are tools that can process the ontology (Item A), and there- with ontology quality improvement methods can be used with the AWO. They include, e.g., the popular Prot´eg´e, and various tools for methods and quality, such as test-driven development [17] and OOPS! [10], and ontology development support activities, such as visualisation and documentation (e.g., [18, 19]). 
There are also a few competency questions that can be an- swered and that can be easily modelled to be answered, relationeatsXAnimalPlantPlantPartHerbivoreCarnivoreLionImpalaeatseatsis proper part of TwigLeafGrassTreeCarnivorousplanteatsXis-aXdisjointnessimplied relation Keet Page 6 of 8 Table 1 AWO ontologies, with their main differences. File name AfricanWildlifeOntology.xml AfricanWildlifeOntologyWeb.owl AfricanWildlifeOntology0.owl AfricanWildlifeOntology1.owl AfricanWildlifeOntology1a.owl AfricanWildlifeOntology2.owl AfricanWildlifeOntology2a.owl AfricanWildlifeOntology3.owl AfricanWildlifeOntology3a.owl AfricanWildlifeOntology4.owl AfricanWildlifeOntologyZU.owl AfricanWildlifeOntologyAF.owl AfricanWildlifeOntologyNL.owl AfricanWildlifeOntologyES.owl file the from http://www.iro.umontreal.ca/~lapalme/ift6281/OWL/ Difference This is AfricanWildlifeOntology.xml, that was based on the description in [4] AfricanWildlifeOntology.xml + changed the extension to .owl and appended the name with Web. This ontology gave at the time (in 2010) a load error in the then current version of Prot´eg´e due to the use of Collection in the definition of Herbivore AfricanWildlifeOntologyWeb.owl + that section on Collection removed AfricanWildlifeOntology0.owl + several classes and object properties were added (up to SRI DL expressiveness), more annotations, URI updated (described in Example 4.1 in [12]) AfricanWildlifeOntology1.owl + new content for a selection of the CQs in Exercise 5.1 in [12] (its CQ5, CQ8) and awo 12 of the CQ dataset [14]) AfricanWildlifeOntology1.owl + OWL-ised DOLCE (Dolce-Lite.owl) was imported and aligned AfricanWildlifeOntology2.owl + answers to the questions in Example 6.2 in [12] AfricanWildlifeOntology1.owl + BFO v1 was imported and aligned AfricanWildlifeOntology3.owl + answers to the questions in Example 6.2 in [12] AfricanWildlifeOntology1.owl + some things cleaned up (e.g., consistent naming) and added some science content, more OWL language features are used (up to SRIQ), and several educational explanations and questions for further exploration have been added in annotation fields Mostly AfricanWildlifeOntology1.owl but then in isiZulu, with IRI changed AfricanWildlifeOntology1.owl but then in Afrikaans, has some IRI issues to resolve AfricanWildlifeOntology1.owl in Dutch, with IRI changed AfricanWildlifeOntology1.owl in Spanish, same IRI but different file name as included in AWO version 1a (Item B), and there are examples and activities to link it to foundational on- tologies (AWO versions 2 and 3) with easy examples (Item C) (see below, ‘Utility and Discussion’). There are several versions demonstrating various quality im- provements (Item G), avoiding violating some basic de- sign principles like data properties and punning hacks (Item H), and touching upon some advanced engineer- ing issues with multilingual ontologies (see Table 1). Where it falls short at the novice level, is an easy way to link it to another ontology (Item E) and bottom-up development from non-ontological resources (Item F). It is possible and feasible in a mini-project assignment, however; e.g., one could use the freely available wildlife trade data[9] or relate it to the Biodiversity Informa- tion Standards[10] for application scenarios, and link it to the Envo Environment ontology [20] or take it easier on the domain knowledge with one of the avail- able tourism ontologies to create an ontology network. 
A bottom-up approach to knowledge acquisition for ontologies is demonstrated with cellfie[11] that imple- ments the M2 DSL [21] so that a modeller can add content in a spreadsheet and cellfie converts that into axioms in the ontology, as demonstrated in Example 7.1 of the textbook. Regarding ODPs (Item D), a con- tent ODP with the current contents is not immediately obvious, but other types of ODPs, such as architec- tural ones, are easy to illustrate, alike for BioTop [22] but then at the organism-level with an orchestration between foundation, top-domain, and domain-level on- tologies. [9]https://www.kaggle.com/cites/cites-wildlife-trade-database [10]http://www.tdwg.org/ [11]https://github.com/protegeproject/cellfie-plugin 3 Utility and discussion The principal utility of the AWO is to be a concrete machine-processable artefact for the related examples and exercises, which we shall turn to first, and subse- quently discuss the tutorial ontology. 3.1 Use in exercises and examples The major utility of the AWO is its use in educational activities for ontology engineering exercises and exam- ples that are described in the “An Introduction to On- tology Engineering” textbook [12]. It is not intended as a real domain ontology, but it is explicitly designed as a tutorial ontology that has a domain ontology flavour to it. Consequently, the subject domain knowledge about African Wildlife has been kept simple, yet amenable to extensions. An example of an exercise is shown in Figure 2, which fits within the broader scope of sensitising the student to the notion of quality of an ontology, with competency questions as one of the options. It also of- fers a gentle acquaintance with foundational ontologies with some OWL classes that are either easy or fun to categorise or to elicitate lively debate. For instance, impalas die in the process of being eaten by a lion, where both are subclasses of the straightforward Phys- ical Object in DOLCE [23] or Independent Continuant in BFO [5], and Death is a type of Achievement or Pro- cess boundary, respectively. The exercises of aligning AWO to DOLCE is additionally assisted by the D3 decision diagram [24]. Death/dying also provides an entry point to the alternate modelling styles of process- as-relation vs. process-as-class representation options. Another core distinction in modelling styles are data Keet Page 7 of 8 properties vs. a hierarchy of qualities, for which a use case of elephant’s weight in zoos across the world is used (Section 6.1.1 of [12]). Figure 2 Section of an exercise. Screenshot of the first part of Exercise 5.1 in [12], which lets the student experiment with requirements for the content of an ontology, trying to find that knowledge, and the task of evaluating an ontology on its quality based on its requirements. The high-level notion of a ‘good’ ontology—compared to ‘less good’, ‘bad’, and ‘worse’—has been introduced earlier in the textbook, which has to be recalled and applied here. While the emphasis in this paper is on modelling and engineering aspects, the AWO is still suitable for teach- ing about OWL language features and automated rea- soning, as noted before regarding the deductions (e.g., Lion (cid:118) Carnivore), and language features use such as transitivity and (ir)reflexivity with parthood. Straight- forward examples for demonstrating unsatisfiability are multiple inheritance of Omnivore to the disjoint Carnivore and Herbivore or to set the domain of eats to Animal resulting in an unsatisfiable CarnivorousPlant. 
Additional variants of the AWO are in progress, which zoom in on subject domains with correspond- ing exercises that are not yet covered in the intro- ductory textbook. Among others, a future ‘version 5’ may be the engineering aspects of importing, aligning, and integrating another domain ontology rather than a foundational ontology, such as a module of the En- vironment Ontology with the habitat information or a tourism ontology, with a corresponding sample an- swer file. The former option would be more suitable for ontology development in ecology, whereas the lat- ter is a more practical option in a tutorial/course for people in other disciplines. Other themes that have not been covered explicitly yet but easily can be applied to the AWO are modularisation [25] and Ontology-Based Data Access with its recent tools [26], and it could be assessed against the MIRO guidelines for reporting on ontologies [27]. 3.2 Discussion The AWO meets most of the tutorial ontology require- ments that evolved and extended over the years. The AWO goes beyond extant tutorial ontologies that over- whelmingly focus only on demonstrating language fea- tures and automated reasoning, or how to use a specific version of a specific tool. In particular, the AWO brings in ontology development aspects, such as competency questions and alignment to a foundational ontology, among others. The illustrations of gradual quality improvements— common in ontology development—go beyond the no- tion that a new version only uses more language fea- tures, as in Family History [2] and University[12]. In particular, there are improvements on aspects regard- ing, among others, content, naming, annotations, and foundational ontology alignment. Also, care has been taken in representing the knowl- edge, such as avoiding some common pitfalls like the class-as-instance and certain naming issues like ‘and’, ‘or’ or negation in a term [9]. Unlike other tutorial on- tologies, including the popular Pizza and Wine, it is richly annotated with informal descriptions, pointers to introductory domain knowledge, and questions for further exploration of a modelling topic. Tutorial ontology subject domains such as one’s fam- ily history, a university, or one’s pets are distinctly focussed on individual application scenarios that may serve database development, but do not give an ed- ucationally good flavour of typical scopes of domain ontologies. In that regard, pizzas and wines fare some- what better, which, however, have repetitive content, such as listing all ingredients of pizza topping. Con- trast this with animal wildlife, where it suffices al- ready to represent that a lion eats animals to have it classified automatically as a carnivore. The wildlife subject domain is generic rather than specific for one application scenario, and therewith less predisposed to a myopic ‘my thing only’ thinking that is preva- lent when students encounter ontologies for a first time that is in line with the UML and EER conceptual data modelling they are familiar with. Last, but not least, African wildlife is obviously relevant for South Africa, where the author and most of her students are based, and it fits with the trend to make curricula region- ally relevant. This is also reflected in an isiZulu and an Afrikaans version of the ontology and introductory aspects on term use for ontologies in a multilingual setting, as Impala and Rockdassie are not Standard English words yet they are widely accepted words in South African English. 
4 Conclusions
The paper introduced the African Wildlife Ontology tutorial ontologies, which is a set of ontologies used for a variety of ontology development examples and exercises. Considering possible desirable educational outcomes, 22 requirements were formulated that a tutorial ontology should meet. The AWO meets most of these requirements, therewith improving over its predecessors, especially regarding the notions of evolution of ontology quality and several ontology development tasks beyond getting the axioms into an OWL file, such as alignment to a foundational ontology and satisfying competency questions.

Both the 22 requirements and the AWO are relevant to the field of ontology engineering in particular, especially for enhancing course material, which, it is hoped, will result in further quality improvements of the actual ontologies that developers are building.

Competing interests
The author declares that she has no competing interests.

Funding
The author declares that she has not received project funding for this work.

Availability
The AWO is freely available under a CC-BY licence through the textbook's webpage at https://people.cs.uct.ac.za/~mkeet/OEbook/.

Acknowledgements
The author would like to thank previous ontology engineering course participants for their feedback, which assisted in refining some of the examples and exercises with the AWO.

[12]http://owl.man.ac.uk/2005/07/sssw/university.html

References
1. Rector, A., Drummond, N., Horridge, M., Rogers, J., Knublauch, H., Stevens, R., Wang, H., Wroe, C.: OWL pizzas: Practical experience of teaching OWL-DL: Common errors & common patterns. In: Proceedings of the 14th International Conference Knowledge Acquisition, Modeling and Management (EKAW'04). LNCS, vol. 3257, pp. 63–81. Springer, Whittlebury Hall, UK (2004)
2. Stevens, R., Stevens, M., Matentzoglu, N., Jupp, S.: Manchester Family History Advanced OWL Tutorial, 1.0 edn. University of Manchester, UK (2013). http://owl.cs.manchester.ac.uk/tutorials/fhkbtutorial/
3. Allemang, D., Hendler, J.: Semantic Web for the Working Ontologist, 1st edn. Morgan Kaufmann, USA (2008)
4. Antoniou, G., van Harmelen, F.: A Semantic Web Primer. MIT Press, USA (2003)
5. Arp, R., Smith, B., Spear, A.D.: Building Ontologies with Basic Formal Ontology. The MIT Press, USA (2015)
6. Hitzler, P., Krötzsch, M., Rudolph, S.: Foundations of Semantic Web Technologies, 1st edn. Chapman & Hall/CRC, USA (2009)
7. Suárez-Figueroa, M.C., Gómez-Pérez, A., Motta, E., Gangemi, A. (eds.): Ontology Engineering in a Networked World. Springer, Germany (2012)
8. Duque-Ramos, A., Fernández-Breis, J.T., Iniesta, M., Dumontier, M., Egaña Aranguren, M., Schulz, S., Aussenac-Gilles, N., Stevens, R.: Evaluation of the OQuaRE framework for ontology quality. Expert Systems with Applications 40(7), 2696–2703 (2013)
9. Keet, C.M., Suárez-Figueroa, M.C., Poveda-Villalón, M.: Pitfalls in ontologies and tips to prevent them. In: Fred, A., Dietz, J.L.G., Liu, K., Filipe, J. (eds.) Knowledge Discovery, Knowledge Engineering and Knowledge Management: IC3K 2013 Selected Papers. CCIS, vol. 454, pp. 115–131. Springer, Berlin (2015)
10. Poveda-Villalón, M., Suárez-Figueroa, M.C., Gómez-Pérez, A.: Validating ontologies with OOPS! In: ten Teije, A., et al. (eds.) 18th International Conference on Knowledge Engineering and Knowledge Management (EKAW'12). LNAI, vol. 7603, pp. 267–281. Springer, Germany (2012). Oct 8-12, Galway, Ireland
11. Schulz, S., Seddig-Raufie, D., Grewe, N., Röhl, J., Schober, D., Boeker, M., Jansen, L.: Guideline on developing good ontologies in the biomedical domain with description logics. Technical report, v1.0 (December 2012). http://www.purl.org/goodod/guideline
12. Keet, C.M.: An Introduction to Ontology Engineering. Computing, vol. 20. College Publications, UK (2018)
13. Uschold, M., Gruninger, M.: Ontologies: principles, methods and applications. Knowledge Engineering Review 11(2), 93–136 (1996)
14. Wisniewski, D., Potoniec, J., Lawrynowicz, A., Keet, C.M.: Competency questions and SPARQL-OWL queries dataset and analysis. Technical Report 1811.09529 (November 2018). https://arxiv.org/abs/1811.09529
15. Motik, B., Patel-Schneider, P.F., Parsia, B.: OWL 2 web ontology language structural specification and functional-style syntax. W3C recommendation, W3C (27 Oct. 2009). http://www.w3.org/TR/owl2-syntax/
16. Distributed Ontology, Model, and Specification Language. Object Management Group. http://www.omg.org/spec/DOL/
17. Keet, C.M., Lawrynowicz, A.: Test-driven development of ontologies. In: Sack, H., et al. (eds.) Proceedings of the 13th Extended Semantic Web Conference (ESWC'16). LNCS, vol. 9678, pp. 642–657. Springer, Berlin (2016). 29 May - 2 June, 2016, Crete, Greece
18. Garijo, D.: WIDOCO: a wizard for documenting ontologies. In: d'Amato, C., et al. (eds.) The Semantic Web – ISWC 2017. LNCS, vol. 10588, pp. 94–102. Springer, Berlin (2017)
19. Lohmann, S., Link, V., Marbach, E., Negru, S.: WebVOWL: Web-based visualization of ontologies. In: Proceedings of EKAW 2014 Satellite Events. LNAI, vol. 8982, pp. 154–158. Springer, Berlin (2015)
20. Buttigieg, P.L., Morrison, N., Smith, B., Mungall, C.J., Lewis, S.E.: The environment ontology: contextualising biological and biomedical entities. Journal of Biomedical Semantics 4, 43 (2013)
21. O'Connor, M.J., Halaschek-Wiener, C., Musen, M.A.: Mapping Master: A flexible approach for mapping spreadsheets to OWL. In: Patel-Schneider, P.F., et al. (eds.) Proceedings of the International Semantic Web Conference 2010 (ISWC'10). LNCS, vol. 6497, pp. 194–208. Springer, Berlin (2010)
22. Beisswanger, E., Schulz, S., Stenzhorn, H., Hahn, U.: BioTop: An upper domain ontology for the life sciences - a description of its current structure, contents, and interfaces to OBO ontologies. Applied Ontology 3(4), 205–212 (2008)
23. Masolo, C., Borgo, S., Gangemi, A., Guarino, N., Oltramari, A.: Ontology Library. WonderWeb Deliverable D18 (ver. 1.0, 31-12-2003). http://wonderweb.semanticweb.org (2003)
24. Keet, C.M., Khan, M.T., Ghidini, C.: Ontology authoring with FORZA. In: Proceedings of the 22nd ACM International Conference on Information & Knowledge Management (CIKM'13), pp. 569–578. ACM (2013). Oct. 27 - Nov. 1, 2013, San Francisco, USA
25. Khan, Z.C., Keet, C.M.: An empirically-based framework for ontology modularization. Applied Ontology 10(3-4), 171–195 (2015)
26. Calvanese, D., Cogrel, B., Komla-Ebri, S., Kontchakov, R., Lanti, D., Rezk, M., Rodriguez-Muro, M., Xiao, G.: Ontop: Answering SPARQL queries over relational databases. Semantic Web Journal 8(3), 471–487 (2017)
27. Matentzoglu, N., Malone, J., Mungall, C., Stevens, R.: MIRO: guidelines for minimum information for the reporting of an ontology. Journal of Biomedical Semantics 9, 6 (2018)
synthetic_cpt
1
Image_Quality_Assessment_by_Integration_of_Low-level_&_High-Level_Features_Threshold_Similarity_Index.pdf
BLIND OMNIDIRECTIONAL IMAGE QUALITY ASSESSMENT: INTEGRATING LOCAL STATISTICS AND GLOBAL SEMANTICS

Wei Zhou and Zhou Wang
Department of Electrical & Computer Engineering, University of Waterloo, Canada
Email: {wei.zhou, zhou.wang}@uwaterloo.ca

arXiv:2302.12393v1 [cs.MM] 24 Feb 2023

ABSTRACT
Omnidirectional image quality assessment (OIQA) aims to predict the perceptual quality of omnidirectional images that cover the whole 180×360° viewing range of the visual environment. Here we propose a blind/no-reference OIQA method named S2 that bridges the gap between low-level statistics and high-level semantics of omnidirectional images. Specifically, statistic and semantic features are extracted in separate paths from multiple local viewports and the hallucinated global omnidirectional image, respectively. A quality regression along with a weighting process then follows, mapping the extracted quality-aware features to a perceptual quality prediction. Experimental results demonstrate that the proposed S2 method offers highly competitive performance against state-of-the-art methods.

Index Terms— Omnidirectional image, blind image quality assessment, low-level statistics, high-level semantics

1. INTRODUCTION
The rapid recent advancement in virtual reality (VR) technologies makes it possible to create immersive multimedia quality-of-experience (QoE) for end-users. As a representative form of VR, omnidirectional content has increasingly emerged in our daily life. To evaluate and optimize the perceptual QoE of omnidirectional content, objective omnidirectional image quality assessment (OIQA) models play a critical role in the development of modern VR systems.

In the literature, objective OIQA models have emerged that follow both full-reference (FR) and no-reference (NR) frameworks. FR-OIQA models assume full access to information of the reference image and are usually direct extensions of traditional FR methods developed for regular rectangular 2D image quality assessment (IQA). For example, based upon the peak signal-to-noise ratio (PSNR), Yu et al. [1] propose the spherical PSNR (S-PSNR) algorithm, where PSNR is calculated for uniformly distributed points on a sphere instead of the projected rectangular image. In [2], the weighted-to-spherically uniform PSNR (WS-PSNR) method is presented, where a weighting map is created by considering the degree of stretching. Zakharchenko et al. [3] propose the Craster parabolic projection PSNR (CPP-PSNR) approach, which maps the reference and distorted omnidirectional images on the Craster parabolic projection followed by PSNR computation.

Fig. 1. Perceptual cues in omnidirectional image quality assessment. Existing models extract spatial information from various viewports and may obtain help from global projected maps, whereas the proposed method combines local image statistics and global semantic reconstruction.

NR-OIQA methods do not require access to the reference image and are more desirable in many application scenarios. Existing NR-OIQA approaches can generally be classified into two categories, depending on whether conventional hand-crafted or learned deep features are employed for quality prediction. Multi-frequency information and local-global naturalness are applied to develop the MFILGN model [4]. More recent models employ deep convolutional neural networks (CNNs) or graph convolution networks (GCNs).
These models demonstrate promising performance, including the multi-channel CNN for blind 360-degree image quality assessment (MC360IQA) [5], the viewport oriented graph convolution network (VGCN) [6], and its variant named adaptive hypergraph convolutional network (AHGCN) [7].

In a 360-degree viewing environment, e.g. using a head-mounted device, the observer is not able to visualize the whole omnidirectional content simultaneously, and thus an important step in the human subjective viewing experience is to establish or reconstruct a sense of the global semantics by browsing and integrating information from many viewports. During the course of image quality assessment, such global semantics are integrated with local observations on image fidelity, naturalness, and/or artifacts to produce an overall quality evaluation. Motivated by this observation, we propose a statistic and semantic oriented quality prediction framework named S2 for blind OIQA as illustrated in Fig. 1, by integrating features extracted from both low-level image statistics of multiple local viewports and high-level semantics of the hallucinated global omnidirectional image. A quality regression module is then leveraged to map the collection of the quality-sensitive features extracted from the two separate paths to an overall prediction of the subjective quality rating. Extensive experimental results demonstrate that the proposed method is superior to many state-of-the-art quality assessment models. In addition, we make some interesting observations on the relationship between semantic confidence and image distortions, as well as how the individual components affect the ultimate quality prediction performance in ablation studies.

Fig. 2. Framework of the proposed S2 method for blind OIQA.

2. PROPOSED METHOD
The overall framework of the proposed S2 method is shown in Fig. 2, which consists of a statistic path, a semantic path, and a final quality regression step.

Since a variety of viewports are browsed by the viewers, we first convert the distorted omnidirectional image (OI) to multiple viewports. Given each input distorted OI denoted by D, we exploit the non-uniform viewport sampling strategy [8, 9] and obtain N viewports V_n, n = 1, 2, ..., N.

To capture the multi-scale characteristics of the human visual system [10], we construct pyramid representations [11, 12] of multiple local viewports. Specifically, multi-level Laplacian pyramids [13] are created by iterative Gaussian filtering, down-sampling, and subtracting, resulting in Gaussian and Laplacian pyramids in the same process. For a specific viewport V_n, layers of the Gaussian pyramid are calculated as follows:

G_n^i(x, y) = \begin{cases} V_n, & i = 1 \\ \sum_{u=-2}^{2} \sum_{v=-2}^{2} k(u, v)\, G_n^{i-1}(2x + u, 2y + v), & i > 1, \end{cases}   (1)

where i is the layer index of the Gaussian pyramid, x ∈ [0, X) and y ∈ [0, Y) are the pixel position indices in which X and Y are the image dimensions, and k(u, v) denotes the generating kernel that is typically defined by the coefficients of a low pass filter such as a 2D Gaussian filter. We then interpolate each layer of the Gaussian pyramid by:

\hat{G}_n^i(x, y) = 4 \sum_{u=-2}^{2} \sum_{v=-2}^{2} k(u, v)\, G_n^i\left(\frac{u + x}{2}, \frac{v + y}{2}\right).   (2)

The residual between the current layer of the Gaussian pyramid and the interpolation result from the next layer defines the current layer of the Laplacian pyramid:

L_n^i = G_n^i - \hat{G}_n^{i+1}.   (3)

Since the computation of the i-th layer in the Laplacian pyramid requires the (i + 1)-th layer of the Gaussian pyramid, the number of layers in the Laplacian pyramid is one less than that in the Gaussian pyramid.

To extract features from the Gaussian pyramid, we compute the default uniform local binary pattern (LBP) descriptors, resulting in 59 statistics for each Gaussian layer. When a 3-layer Gaussian pyramid is employed, this leads to 177 Gaussian pyramid features denoted by f_GP. For a Laplacian pyramid, motivated by the success of natural scene statistics (NSS) in IQA research [14, 15, 16], we extract mean subtracted and contrast normalized coefficients, leading to 36 features for each layer. When a 2-layer Laplacian pyramid is employed, this results in 72 Laplacian pyramid features denoted by f_LP. The full statistic feature set f_st, one for each viewport, is obtained by concatenating the statistical features extracted from the Gaussian and Laplacian pyramids as:

f_st = [f_GP, f_LP].   (4)

We employ the VGGNet trained on the large ImageNet dataset [17] as the semantic feature extraction backbone, mainly for its simplicity and ability to capture image distortion-related representations [18]. In [19], three different structures of VGGNet have been proposed to balance between complexity and accuracy, namely fast VGG (VGG-F), medium VGG (VGG-M), and slow VGG (VGG-S). Each of them contains 5 convolutional (Conv) layers and 3 fully connected (FC) layers. The first two FC layers have 4,096 neurons, while the last one has 1,000 nodes indicating the 1,000 classes for image recognition. In our current implementation, we select the deep features from the first FC layer of VGG-M as our semantic feature set f_se:

f_se = FC_1(D).   (5)

To learn the mapping from features to quality labels, we feed the statistic features and semantic features separately to support vector regression (SVR) models [20], and denote the regressed statistic and semantic quality scores as Q_st and Q_se, respectively. The overall quality score is calculated by a weighted average:

Q_overall = w Q_st + (1 - w) Q_se,   (6)

where w is a weighting factor that determines the relative importance of the statistic and semantic feature predictors.

3. VALIDATION
3.1. Experimental Setup and Performance Comparison
We evaluate the proposed approach on the CVIQD subjective database [21], which is so far a relatively large and widely adopted database containing both omnidirectional images and their corresponding quality labels given by human subjects. It consists of 16 original images and 528 distorted images produced by three classic image or video coding technologies, namely JPEG, AVC, and HEVC. The subjective quality ratings in the form of mean opinion score (MOS) are rescaled to the range of [0, 100], for which a higher MOS represents better perceptual image quality.
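Before turning to the comparison results, a compact sketch of the statistic path and the score fusion may help make Eqs. (1)-(3) and (6) concrete. This is an illustration only, using OpenCV, scikit-image, and scikit-learn as stand-ins: the function names, the reduced set of NSS summary statistics, and the per-viewport handling below are assumptions rather than the authors' implementation.

import cv2
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVR

def pyramid_features(viewport, g_levels=3):
    # Gaussian/Laplacian pyramid statistics for one viewport (Eqs. 1-3).
    gray = cv2.cvtColor(viewport, cv2.COLOR_BGR2GRAY).astype(np.float32)
    gp = [gray]
    for _ in range(g_levels - 1):                 # Eq. (1): blur + downsample
        gp.append(cv2.pyrDown(gp[-1]))
    lp = []
    for i in range(g_levels - 1):                 # Eqs. (2)-(3): expand next layer, subtract
        up = cv2.pyrUp(gp[i + 1], dstsize=(gp[i].shape[1], gp[i].shape[0]))
        lp.append(gp[i] - up)

    feats = []
    for layer in gp:                              # 59-bin uniform LBP histogram per Gaussian layer
        lbp = local_binary_pattern(layer, P=8, R=1, method="nri_uniform")
        hist, _ = np.histogram(lbp, bins=59, range=(0, 59), density=True)
        feats.extend(hist)
    for layer in lp:                              # simplified NSS summary per Laplacian layer
        mu = cv2.GaussianBlur(layer, (7, 7), 7 / 6)
        sigma = np.sqrt(np.abs(cv2.GaussianBlur(layer * layer, (7, 7), 7 / 6) - mu * mu))
        mscn = (layer - mu) / (sigma + 1.0)       # mean subtracted, contrast normalized
        feats.extend([mscn.mean(), mscn.var(), np.abs(mscn).mean(), (mscn ** 3).mean()])
    return np.asarray(feats)

def s2_score(statistic_feats, semantic_feats, svr_st, svr_se, w=0.5):
    # Eq. (6): weighted fusion of the two regressed quality scores
    # (svr_st and svr_se are SVR models assumed to be trained on MOS labels).
    q_st = svr_st.predict(statistic_feats.reshape(1, -1))[0]
    q_se = svr_se.predict(semantic_feats.reshape(1, -1))[0]
    return w * q_st + (1.0 - w) * q_se

In the paper, 59-bin uniform LBP histograms over a 3-layer Gaussian pyramid and 36 NSS features per Laplacian layer yield the 177- and 72-dimensional parts of the per-viewport statistic vector; the sketch above compresses the NSS part to a few moments purely for brevity.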
To compare the performance of various IQA models, we take Spearman Rank-Order Correlation Coefficient (SROCC), Pearson Linear Correlation Coefficient (PLCC) and Root Mean Squared Error (RMSE) as the evaluation criteria. Before calculating the PLCC and RMSE, a 5-parameter logistic nonlinear fitting approach [22] is implemented to map the predicted quality into the subjective quality space.

Table 1. Performance comparisons of objective models.
Types | Methods | SROCC | PLCC | RMSE
FR-IQA | PSNR | 0.6239 | 0.7008 | 9.9599
FR-IQA | SSIM [23] | 0.8842 | 0.9002 | 6.0793
FR-IQA | MS-SSIM [10] | 0.8222 | 0.8521 | 7.3072
FR-IQA | FSIM [24] | 0.9152 | 0.9340 | 4.9864
FR-IQA | DeepQA [25] | 0.9292 | 0.9375 | 4.8574
FR-OIQA | S-PSNR [1] | 0.6449 | 0.7083 | 9.8564
FR-OIQA | WS-PSNR [2] | 0.6107 | 0.6729 | 10.3283
FR-OIQA | CPP-PSNR [3] | 0.6265 | 0.6871 | 10.1448
NR-IQA | BRISQUE [26] | 0.8180 | 0.8376 | 7.6271
NR-IQA | BMPRI [27] | 0.7470 | 0.7919 | 8.5258
NR-IQA | DB-CNN [28] | 0.9308 | 0.9356 | 4.9311
NR-OIQA | MFILGN [4] | 0.9670 | 0.9751 | 3.1036
NR-OIQA | MC360IQA [5] | 0.9428 | 0.9429 | 4.6506
NR-OIQA | VGCN [6] | 0.9639 | 0.9651 | 3.6573
NR-OIQA | AHGCN [7] | 0.9623 | 0.9643 | 3.6990
NR-OIQA | Proposed S2 | 0.9710 | 0.9781 | 2.8945

The database is randomly divided into 80% data for training and the remaining 20% data for cross-validation. In order to relieve the uncertainty in training/testing splitting, we repeat this random-splitting and cross-validation process 100 times and report the median performance.

The performance of the proposed algorithm is compared against state-of-the-art quality assessment models, including five FR-IQA, three FR-OIQA, three NR-IQA, and four NR-OIQA methods. The results are shown in Table 1, where we observe that for FR-IQA metrics, the PSNR-based models are inferior to more advanced approaches such as structural (SSIM, MS-SSIM, FSIM) and deep learning (DeepQA) models. Somewhat surprisingly, the FR-OIQA methods do not help further improve upon FR-IQA approaches. By contrast, the NR-OIQA models show significant superiority over NR-IQA methods. This is likely due to their specific design to capture the characteristics of omnidirectional images. Among all metrics tested, the proposed S2 method demonstrates highly competitive performance.

3.2. Semantic Confidence Versus Image Distortion
Since the proposed method contains a semantic path, it is interesting to observe the relationship between semantic confidence and image distortion. An example of distorted omnidirectional images with different JPEG, AVC and HEVC distortion levels is shown in Fig. 3, where from the first column to the third column, we observe that as the degree of distortion increases, the semantic confidence level decreases. This suggests that semantic information may be highly related to perceptual image quality. It is also interesting to see that the semantic confidence shows various sensitivities to different distortion types. In particular, the drop in semantic confidence levels is much less in the more advanced image/video coding method HEVC than in the earlier JPEG and AVC encoders.

Fig. 3. The relationship between semantic confidence and image distortions. The first, second and third rows correspond to three distortion types (JPEG, AVC and HEVC compression, respectively). The first, second and third columns correspond to increasing distortion levels (low, medium and high, respectively) for each distortion type. (Reported confidences: (a) 0.557, (b) 0.302, (c) 0.140 for JPEG; (d) 0.525, (e) 0.290, (f) 0.174 for AVC; (g) 0.537, (h) 0.392, (i) 0.298 for HEVC.)

3.3. Ablation and Parameter Sensitivity Tests
We evaluate the contributions from the statistic and semantic paths by ablation experiments, and the results are shown in Fig. 4, where GP1, GP2 and GP3, respectively, represent the cases of using the first, second, and third layers of Gaussian pyramid statistics only. GP denotes the case of using three layers of Gaussian pyramid statistics. We find that the performance increases gradually. Similarly, LP1 and LP2, respectively, correspond to the cases of using the first and second layers of Laplacian pyramid statistics only, while LP denotes the case of using 2-layer Laplacian pyramid statistics. The results show that LP produces the best performance among the three. The cases of adopting the statistic path and the semantic path only are denoted by St and Se, respectively. It is observed that either path alone can achieve promising quality prediction performance, but adopting both paths (i.e. the All case) delivers the best performance. Relatively speaking, the more dominant factor seems to be the statistic path. This may not be surprising as the statistic features come from different viewports directly visualized by human subjects while the global semantics offer complementary information for additional cues in quality assessment.

Fig. 4. Performance results of ablation experiments (SROCC, ranging from about 0.90 to 0.98, for the GP1, GP2, GP3, GP, LP1, LP2, LP, St, Se, and All configurations).

Because different parameter settings may be employed in the implementations of the proposed framework, here we test the sensitivity of our model with regard to various viewport numbers and semantic architectures. The results are reported in Table 2 and Table 3, respectively. We can see that the proposed model is insensitive to the viewport number. This allows us to reduce the number of viewports (for example, 6) to alleviate the computational complexity in real-world applications. The results also show that VGG-M outperforms the other neural network architectures in the semantic path. The possible reason may be that VGG-M achieves a preferable tradeoff between algorithm complexity and accuracy, making it a desired option for the deep semantic backbone.

Table 2. Performance comparisons for different viewport numbers in the statistic path.
Numbers | SROCC | PLCC | RMSE
6 | 0.9684 | 0.9769 | 3.0083
20 | 0.9686 | 0.9777 | 2.9626
80 | 0.9683 | 0.9771 | 2.9501

Table 3. Performance comparisons for different neural network architectures in the semantic path.
Architectures | SROCC | PLCC | RMSE
VGG-F | 0.9497 | 0.9537 | 4.2107
VGG-M | 0.9517 | 0.9576 | 4.0329
VGG-S | 0.9451 | 0.9486 | 4.4345

4. CONCLUSION
We propose a novel S2 framework for blind omnidirectional image quality assessment that integrates both local low-level statistic and global high-level semantic features. Extensive experiments show that the proposed method achieves state-of-the-art performance. Observations on the relationship between semantic confidence and image distortion, and the ablation/sensitivity tests offer additional useful insights. Under the same framework, more advanced models for statistic and semantic analysis may be employed in the future, aiming for more accurate QoE assessment models that may help drive the advancement of immersive multimedia systems.

5. REFERENCES
[1] Matt Yu, Haricharan Lakshman, and Bernd Girod, "A framework to evaluate omnidirectional video coding schemes," in IEEE International Symposium on Mixed and Augmented Reality, 2015, pp. 31–36.
[2] Yule Sun, Ang Lu, and Lu Yu, "Weighted-to-spherically-uniform quality evaluation for omnidirectional video," IEEE Signal Processing Letters, vol. 24, no. 9, pp. 1408–1412, 2017.
[3] Vladyslav Zakharchenko, Kwang Pyo Choi, and Jeong Hoon Park, "Quality metric for spherical panoramic video," in Optics and Photonics for Information Processing X. International Society for Optics and Photonics, 2016, vol. 9970, p. 99700C.
[4] Wei Zhou, Jiahua Xu, Qiuping Jiang, and Zhibo Chen, "No-reference quality assessment for 360-degree images by analysis of multifrequency information and local-global naturalness," IEEE Transactions on Circuits and Systems for Video Technology, 2021.
[5] Wei Sun, Xiongkuo Min, Guangtao Zhai, Ke Gu, Huiyu Duan, and Siwei Ma, "MC360IQA: A multi-channel CNN for blind 360-degree image quality assessment," IEEE Journal of Selected Topics in Signal Processing, vol. 14, no. 1, pp. 64–77, 2019.
[6] Jiahua Xu, Wei Zhou, and Zhibo Chen, "Blind omnidirectional image quality assessment with viewport oriented graph convolutional networks," IEEE Transactions on Circuits and Systems for Video Technology, vol. 31, no. 5, pp. 1724–1737, 2020.
[7] Jun Fu, Chen Hou, Wei Zhou, Jiahua Xu, and Zhibo Chen, "Adaptive hypergraph convolutional network for no-reference 360-degree image quality assessment," in Proceedings of the 30th ACM International Conference on Multimedia, 2022, pp. 961–969.
[8] Jiahua Xu, Ziyuan Luo, Wei Zhou, Wenyuan Zhang, and Zhibo Chen, "Quality assessment of stereoscopic 360-degree images from multi-viewports," in IEEE Picture Coding Symposium, 2019, pp. 1–5.
[9] Zhibo Chen, Jiahua Xu, Chaoyi Lin, and Wei Zhou, "Stereoscopic omnidirectional image quality assessment based on predictive coding theory," IEEE Journal of Selected Topics in Signal Processing, vol. 14, no. 1, pp. 103–117, 2020.
[10] Zhou Wang, Eero P Simoncelli, and Alan C Bovik, "Multiscale structural similarity for image quality assessment," in The Thirty-Seventh Asilomar Conference on Signals, Systems & Computers, 2003. IEEE, 2003, vol. 2, pp. 1398–1402.
[11] Zhou Wang and Eero P Simoncelli, "Reduced-reference image quality assessment using a wavelet-domain natural image statistic model," in Human Vision and Electronic Imaging X. International Society for Optics and Photonics, 2005, vol. 5666, pp. 149–159.
[12] Valero Laparra, Johannes Ballé, Alexander Berardino, and Eero P Simoncelli, "Perceptual image quality assessment using a normalized Laplacian pyramid," Electronic Imaging, vol. 2016, no. 16, pp. 1–6, 2016.
[13] Peter J Burt and Edward H Adelson, "The Laplacian pyramid as a compact image code," in Readings in Computer Vision, pp. 671–679. Elsevier, 1987.
[14] Anish Mittal, Rajiv Soundararajan, and Alan C Bovik, "Making a 'completely blind' image quality analyzer," IEEE Signal Processing Letters, vol. 20, no. 3, pp. 209–212, 2012.
[15] Yuming Fang, Kede Ma, Zhou Wang, Weisi Lin, Zhijun Fang, and Guangtao Zhai, "No-reference quality assessment of contrast-distorted images based on natural scene statistics," IEEE Signal Processing Letters, vol. 22, no. 7, pp. 838–842, 2014.
[16] Zhibo Chen, Wei Zhou, and Weiping Li, "Blind stereoscopic video quality assessment: From depth perception to overall experience," IEEE Transactions on Image Processing, vol. 27, no. 2, pp. 721–734, 2017.
[17] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei, "ImageNet: A large-scale hierarchical image database," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2009, pp. 248–255.
[18] Richard Zhang, Phillip Isola, Alexei A Efros, Eli Shechtman, and Oliver Wang, "The unreasonable effectiveness of deep features as a perceptual metric," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 586–595.
[19] Ken Chatfield, Karen Simonyan, Andrea Vedaldi, and Andrew Zisserman, "Return of the devil in the details: Delving deep into convolutional nets," arXiv preprint arXiv:1405.3531, 2014.
[20] Chih-Chung Chang and Chih-Jen Lin, "LIBSVM: a library for support vector machines," ACM Transactions on Intelligent Systems and Technology, vol. 2, no. 3, pp. 1–27, 2011.
[21] Wei Sun, Ke Gu, Siwei Ma, Wenhan Zhu, Ning Liu, and Guangtao Zhai, "A large-scale compressed 360-degree spherical image database: From subjective quality evaluation to objective model comparison," in IEEE 20th International Workshop on Multimedia Signal Processing, 2018, pp. 1–6.
[22] Video Quality Experts Group et al., "Final report from the video quality experts group on the validation of objective models of video quality assessment, phase II," VQEG, 2003.
[23] Zhou Wang, Alan C Bovik, Hamid R Sheikh, and Eero P Simoncelli, "Image quality assessment: from error visibility to structural similarity," IEEE Transactions on Image Processing, vol. 13, no. 4, pp. 600–612, 2004.
[24] Lin Zhang, Lei Zhang, Xuanqin Mou, and David Zhang, "FSIM: A feature similarity index for image quality assessment," IEEE Transactions on Image Processing, vol. 20, no. 8, pp. 2378–2386, 2011.
[25] Jongyoo Kim and Sanghoon Lee, "Deep learning of human visual sensitivity in image quality assessment framework," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp. 1676–1684.
[26] Anish Mittal, Anush Krishna Moorthy, and Alan Conrad Bovik, "No-reference image quality assessment in the spatial domain," IEEE Transactions on Image Processing, vol. 21, no. 12, pp. 4695–4708, 2012.
[27] Xiongkuo Min, Guangtao Zhai, Ke Gu, Yutao Liu, and Xiaokang Yang, "Blind image quality estimation via distortion aggravation," IEEE Transactions on Broadcasting, vol. 64, no. 2, pp. 508–517, 2018.
[28] Weixia Zhang, Kede Ma, Jia Yan, Dexiang Deng, and Zhou Wang, "Blind image quality assessment using a deep bilinear convolutional neural network," IEEE Transactions on Circuits and Systems for Video Technology, vol. 30, no. 1, pp. 36–47, 2018.

© 2023 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
synthetic_cpt
2
Detecting_Offensive_Content_in_Open-domain_Conversations_using_Two_Stage_Semi-supervision.pdf
WLV-RIT at SemEval-2021 Task 5: A Neural Transformer Framework for Detecting Toxic Spans Tharindu Ranasinghe1, Diptanu Sarkar2, Marcos Zampieri2, Alexander Ororbia2 1University of Wolverhampton,UK 2Rochester Institute of Technology, USA [email protected] Abstract In recent years, the widespread use of social media has led to an increase in the generation of toxic and offensive content on online plat- forms. In response, social media platforms have worked on developing automatic detec- tion methods and employing human modera- tors to cope with this deluge of offensive con- tent. While various state-of-the-art statistical models have been applied to detect toxic posts, there are only a few studies that focus on de- tecting the words or expressions that make a post offensive. This motivates the organization of the SemEval-2021 Task 5: Toxic Spans De- tection competition, which has provided par- ticipants with a dataset containing toxic spans annotation in English posts. In this paper, we present the WLV-RIT entry for the SemEval- 2021 Task 5. Our best performing neural trans- former model achieves an 0.68 F1-Score. Fur- thermore, we develop an open-source frame- work for multilingual detection of offensive spans, i.e., MUDES, based on neural trans- formers that detect toxic spans in texts. 1 Introduction The widespread adoption and use of social media has led to a drastic increase in the generation of abusive and profane content on the web. To counter this deluge of negative content, social media com- panies and government institutions have turned to developing and applying computational mod- els that can identify the various forms of offensive content online such as aggression (Kumar et al., 2018, 2020), cyber-bullying (Rosa et al., 2019), and hate speech (Ridenhour et al., 2020). Prior work has either designed methods for identifying conversations that are likely to go awry (Zhang WARNING: This paper contains text excerpts and words that are offensive in nature. et al., 2018; Chang et al., 2020) or detecting of- fensive content and labelling posts at the instances level – this has been the focus in the recent shared tasks like HASOC at FIRE 2019 (Mandl et al., 2019a) and FIRE 2020 (Mandl et al., 2020), Ger- mEval 2019 Task 2 (Struß et al., 2019), TRAC (Kumar et al., 2018, 2020), HatEval (Basile et al., 2019a), OffensEval at SemEval-2019 (Zampieri et al., 2019b) and SemEval-2020 (Zampieri et al., 2020). With respect to identifying offensive language in conversations, comments, and posts, noticeable progress has been made with a variety of large, annotated datasets made available in recent years (Pitenis et al., 2020; Rosenthal et al., 2020). The identification of the particular text spans that make a post offensive, however, has been mostly ne- glected (Mathew et al., 2021) as current state-of- the-art offensive language identification models flag the entire post or comment but do not actually highlight the offensive parts. The pressing need for toxic span detection models to assist human con- tent moderation, processing and flagging content in a more interpretable fashion, has motivated the organization of the SemEval-2021 Task 5: Toxic Spans Detection (Pavlopoulos et al., 2021). In this paper, we present the WLV-RIT sub- mission to the SemEval-2021 Task 5. We ex- plore several statistical learning models and re- port the performance of the best model, which is based on a neural transformer. 
Next, we gen- eralise our approach to an open-source frame- work called MUDES: Multilingual Detection of Of- fensive Spans (Ranasinghe and Zampieri, 2021a). Alongside the framework, we also release the pre- trained models as well as a user-friendly web-based User Interface (UI) based on Docker, which pro- vides the functionality of automatically identifying the offensive spans in a given input text. 1 2 0 2 y a M 7 2 ] L C . s c [ 3 v 0 3 6 4 0 . 4 0 1 2 : v i X r a 2 Related Work Datasets Over the past several years, multiple post-level, offensive language benchmark datasets have been released. In Zampieri et al. (2019a), the authors compiled an offensive language identifica- tion dataset with a three-layer hierarchical annota- tion scheme – profanity, category, and target identi- fication. Rosenthal et al. (2020) further extended the dataset using a semi-supervised model that was trained with over nine million annotated English tweets. Recently, Mathew et al. (2021) released the first benchmark dataset which covered the three primary areas of online hate-speech detection. The dataset contained a 3-class classification problem (hate-speech, offensive, or neither), a targeted com- munity, as well as the spans that make the text hate- ful or offensive. Furthermore, offensive language datasets have been annotated in other languages such as Arabic (Mubarak et al., 2017), Danish (Sig- urbergsson and Derczynski, 2020), Dutch (Tulkens et al., 2016), French (Chiril et al., 2019), Greek (Pitenis et al., 2020), Portuguese (Fortuna et al., 2019), Spanish (Basile et al., 2019b), and Turkish (C¸ ¨oltekin, 2020). Apart from the dataset released for SemEval- 2021 Task 5, HateXplain (Mathew et al., 2021) is, to the best of our knowledge, the only dataset that we could find that has been annotated at the word level. The dataset consists of 20, 000 posts from Gab and Twitter. Each data sample is annotated with one of the hate/offensive/normal labels, com- munities being targeted, and words of the text are marked by the annotators who support the label. Models In the past, trolling, aggression, and cy- berbullying identification tasks on social media data have been approached using machine and deep learning-focused models (Kumar et al., 2018). Across several studies (Malmasi and Zampieri, 2017, 2018; Waseem and Hovy, 2016) researchers have noted that n-gram based features are very useful when building reliable, automated hate- speech detection models. Statistical learning mod- els aided with natural language processing (NLP) techniques are frequently used for post-level of- fensive and hateful language detection (Davidson et al., 2017; Indurthi et al., 2019). Given the in- creased use of deep learning in NLP tasks, of- fensive language identification has seen the intro- duction of methods based on convolutional neural networks (CNNs) and Long Short-term Memory (LSTM) networks (Badjatiya et al., 2017; Gamb¨ack and Sikdar, 2017; Hettiarachchi and Ranasinghe, 2019). The most common approach has been to use a word/character embedding model such as Word2vec (Mikolov et al., 2013), GloVe (Penning- ton et al., 2014), or fastText (Mikolov et al., 2018) to embed words/tokens and then feed them to an artificial neural network (ANN) (Zampieri et al., 2019b). With the introduction of BERT (Devlin et al., 2019), neural transformer models have become popular in offensive language identification. 
In hate speech and offensive content identification in Indo-European languages, the BERT model has been shown to outperform GRU (Gated Recurrent Unit) and LSTM-based models (Ranasinghe et al., 2019). In Mandl et al. (2019b), the best perform- ing teams on the task employed BERT-based pre- trained models that identified the type of hate and target of a (text) post. The SemEval-2019 Task 6 (Zampieri et al., 2019b) presented the challenge of identifying and categorizing offensive posts on social media, which included three sub-tasks. In sub-task A: offensive language identification, Liu et al. (2019a) applied a pre-trained BERT model to achieve the highest F1 score. In Sub-task B: automatic categorization of offense types, BERT-based models also achieved competitive rankings. We noticed similar trends in SemEval-2020 Task 12 (Zampieri et al., 2020) as well. Not limited to English, transformer mod- els have yielded strong results in resource-scarce languages like Bengali (Ranasinghe and Zampieri, 2020) and Malayalam (Ranasinghe et al., 2020) along with cross-lingual transfer learning from resource-rich languages (Ranasinghe and Zampieri, 2020, 2021b). Nonetheless, despite the recent suc- cess of statistical learning in offensive language detection problems, due to the lack of finer-grained, detailed datasets, models are limited in their ability to predict word-level labels. 3 Task and Dataset In the SemEval-2021 Task 5 dataset, the sequence of words that makes a particular post or comment toxic is defined as a toxic span. The dataset for this task is extracted from posts in the Civil Com- ments Dataset that have been found to be toxic. The practice dataset has 690 instances out of which 43 instances do not contain any toxic spans. The training dataset has a total of 7, 939 instances and Post Offensive Spans Stupid hatcheries have completely fucked everything Victimitis: You are such an asshole. So is his mother. They are silver spoon parasites. You’re just silly. [0, 1, 2, 3, 4, 5, 34, 35, 36, 37, 38, 39] [28, 29, 30, 31, 32, 33, 34] [] [12, 13, 14, 15, 16] Table 1: Four comments from the dataset along with their annotations. The offensive words are displayed in red and the spans are indicated by the character position in the instance. comprises 485 instances without any toxic spans. Each instance is composed of a list of toxic spans and the post (in English). In Table 1, we present four randomly selected examples from the training dataset along with their annotations. Figure 1: The Bi-LSTM-CRF model. Green squares represent the top CRF layer. Non-offensive and offen- sive tokens are shown as 0 and 1, respectively. 4 Methodology 4.1 Lexicon-based Word Match Lexicon-based word-matching algorithms often achieve balanced results. For the lexicon, we col- lected profanity words from online resources1,2. Then, we added the toxic words present in the train- ing dataset and we run a simple word matching algorithm the trie data structure. As anticipated, the algorithm does not evaluate the toxic spans con- textually and misses censored swear words. For instance, the word f**k is missed, which is not present in the lexicon. Nonetheless, this result pro- vides as a useful baseline performance measure- ment for the task. 
4.2 Recurrent Networks: Long Short-Term Memory Long Short-term Memory (LSTM) is a recurrent neural network model that uses feedback connec- 1https://www.cs.cmu.edu/˜biglou/ resources/bad-words.txt 2https://github.com/RobertJGabriel/ Google-profanity-words tions to model temporal dependencies (past-to- present) in sequential data. Bidirectional LSTM (Bi-LSTM) is capable of learning contextual in- formation both forwards and backwards in time compared to conventional LSTMs. In this study, we used the Bi-LSTM architecture given this bi- directional ability to model temporal dependencies. Conditional random fields (CRF) (Lafferty et al., 2001) are a statistical model that are capable of incorporating context information and are highly used for sequence labeling tasks. A CRF connected to the top of the Bi-LSTM model provides a power- ful way to model relationships between consecutive outputs (across time) and provides a means to ef- ficiently utilize past and future tag information to predict the current tag. The final hybrid model is comparable to the pre- vious state-of-the-art sequence tagging Bi-LSTM- CRF model (Huang et al., 2015). Figure 1 presents the Bi-LSTM-CRF architecture we designed for this study, which has 4.2 million trainable parame- ters. We trained the model on mini-batches of 16 samples with a 0.005 learning rate for 5 epochs with a maximum sequence length of 200. 4.3 Neural Transformers Recently, pre-trained language models have been shown to be quite useful across a variety of NLP tasks, particularly those based on bidirectional neu- ral transformers such as BERT (Devlin et al., 2019; Li et al., 2019). Transformer-based models have also been shown to be highly effective in sequence classification tasks such as named entity recogni- tion (NER) (Luoma and Pyysalo, 2020). In our work, we extend the BERT model by integrating a token level classifier. The token-level classifier is a linear transformation that takes the last hidden state of the sequence as the input and produces a label for each token as its output. In this case, each token will be predicted to have one of two possible labels – toxic or not toxic. We fine-tuned the un- cased BERT transformer model with a maximum Figure 2: The two-part model architecture. Part A depicts the language model and Part B is the token classifier. (Ranasinghe and Zampieri, 2021a) sequence length of 400 with batches of size of 16. We also experimented with customising the lay- ers in between the BERT transformer and token- classification layer by adding a CRF layer between them given that it has been shown that BERT-CRF architectures often outperform BERT baselines in similar sequence labeling tasks (Huang et al., 2019; Souza et al., 2020). Therefore, we added a sequen- tial CRF layer on top of the BERT transformer and further incorporated dropout (probability of dropping a neuron was 0.2) to introduce some reg- ularization. Unfortunately, in our experiments, we found that adding a CRF layer does not signifi- cantly improve the final generalization results. Ad- ditionally, we experimented with transfer learning to identify if a further boost in model generalization was possible if we first trained a basic BERT trans- former on HateXplain (Mathew et al., 2021) and then fine-tuned it using our extended architecture as described above. However, the transfer learning process did not improve results any further. 
Development of MUDES Given the success we observed using neural transformers such as BERT, we developed a (software) framework we call MUDES (Ranasinghe and Zampieri, 2021a): Mul- tilingual Detection of Offensive Spans, an open- source framework based on transformers to detect toxic spans in texts. MUDES offers several capa- bilities in addition to the (automatic) token classi- fication we described earlier. MUDES has the fol- lowing components: a) Language Modeler: Fine– tuning transformer models using masked language modeling before performing the downstream task often leads to better results (Ranasinghe and Het- tiarachchi, 2020) and MUDES incorporates this, b) Transformer Type Variety: since there are many varieties of neural transformers, e.g., XLNet (Yang et al., 2019), RoBERTa (Liu et al., 2019b) that have been shown to outperform BERT-based architectures (Ranasinghe and Hettiarachchi, 2020; Hettiarachchi and Ranasinghe, 2020a), our soft- ware framework provides support for these archi- tectures, and, finally, c) Model Ensembling: mul- tiple MUDES models with different random seeds can be trained and the final model prediction is the majority vote from all the models, aligning with the approach taken in Hettiarachchi and Ranasinghe (2020b, 2021); Jauhiainen et al. (2021). The complete architecture of MUDES is de- picted in Figure 2. We used several popular trans- former models including BERT (Devlin et al., 2019), XLNET (Yang et al., 2019), RoBERTa (Liu et al., 2019b), SpanBERT (Joshi et al., 2020), and ALBERT (Lan et al., 2020). We compared these transformer architectures against the spaCy token classifier baseline (reported by the competition organisers) and report these results in Section 5. Since adding a CRF layer did not improve the re- sults in our models, we do not add this to MUDES. Parameter optimization involved mini-batches of 8 samples using the Adam update rule (global learn- ing rate was 2e−5 and a linear warm-up schedule over 10% of the training data was used). Models were evaluated using a validation subset that con- tained 20% of the training data. Early stopping was executed if the validation loss did not improve over 10 evaluation steps. Models were trained for 3 epochs on an Nvidia Tesla K80 GPU using only the training set provided. 5 Evaluation and Results For evaluation, we followed the same procedure that the task organisers have used to evaluate the systems. Let system Ai return a set St Ai of character off- sets for parts of a text post that have been found to be toxic. Let Gt be the character offsets of the ground truth annotations of t. We compute the F1 score of system Ai with respect to the ground truth G for post t as mentioned in Equation 1 where | ·| denotes set cardinality. P t and Rt measure the precision and recall, respectively. F t 1 (Ai, G) = 2 · P t (Ai, G) · Rt (Ai, G) P t (Ai, G) + Rt (Ai, G) (1) Model MUDES RoBERTa MUDES BERT MUDES SPANBert MUDES XLNet BERT BERT-CRF BERT HateXplain spaCy baseline Bi-LSTM-CRF Lexicon word match Trial F1 Test F1 0.6801 0.6886 0.6698 0.6771 0.6675 0.6751 0.6653 0.6722 0.6538 0.6738 0.6517 0.6643 0.6326 0.6387 0.5976 0.5976 0.5398 0.5631 0.4086 0.3378 Table 2: Results ordered by test F1 score. The Trial F1 column shows the F1 scores on the trial set and the Test F1 column shows the F1 scores for test set. 
Observe in Table 2 that all of our deep neural- based models outperformed the spaCy baseline while the lexicon-based word match algorithm pro- vided fairly good results despite it being an unsu- pervised method. Our best model is the MUDES RoBERTa model which scored 0.68 F1 score in the test set and is very compatible with the 0.70 F1 score that the best model scored in the compe- tition. Furthermore, it is clear that the additional features supported by our MUDES framework, e.g., language modeling and ensembling, improves the results over a vanilla BERT transformer. 6 Conclusion and Future Work In this paper, we presented the WLV-RIT approach for tackling the SemEval-2021 Task 5: Toxic Spans Detection. SemEval-2021 Task 5 provided partici- pants with the opportunity of testing computational models to identify token spans in toxic posts as opposed to previous related SemEval tasks such as HatEval and OffensEval that provided participants with datasets annotated at the instance level. We believe that word-level predictions are an impor- tant step towards explainable offensive language identification. We experimented with several methods includ- ing a lexicon-based word match, LSTMs, and neu- ral transformers. Our results demonstrated that transformer models offered the best generalization results and, given the success observed, we devel- oped MUDES, an open-source software framework based on neural transformers focused on detecting toxic spans in texts. With MUDES. we release two English models that performed best for this task (Ranasinghe and Zampieri, 2021a). A large model; en-large based on roberta-large which is more accurate, but has a low efficiency regarding space and time. The base model based on xlnet- base-cased; en-base is efficient, but has a compar- atively low accuracy than the en-large model. All pre-trained models are available on Hugging Face Model Hub (Wolf et al., 2020)3. We also make MUDES available as a Python package4 and set up as an open-source project5. In addition, a proto- type User Interface (UI) of MUDES has been made accessible to the general public6 based on Docker7. In terms of future work, we would like to experi- ment with multi-task (neural) architectures that can be used for offensive language identification capa- ble of carrying out predictions at both the word- level and post-level jointly. Furthermore, we would like to evaluate multi-task architectures on multi- domain and multilingual settings as well as broaden our experimental comparison to other types of re- current network models, such as the Delta-RNN (Ororbia II et al., 2017). 3Available on https://huggingface.co/mudes 4Available https://pypi.org/project/ at mudes/ 5The MUDES GitHub repository is available at https: //github.com/tharindudr/MUDES 6The UI can be accessed from http://rgcl.wlv.ac. uk/mudes/ 7Available at https://hub.docker.com/r/ tharindudr/mudes Acknowledgments We would like to thank the shared task organizers for making this interesting dataset available. We further thank the anonymous SemEval reviewers for their insightful feedback. References Pinkesh Badjatiya, Shashank Gupta, Manish Gupta, and Vasudeva Varma. 2017. Deep learning for hate speech detection in tweets. In Proceedings of WWW. Valerio Basile, Cristina Bosco, Elisabetta Fersini, Deb- ora Nozza, Viviana Patti, Francisco Manuel Rangel Pardo, Paolo Rosso, and Manuela Sanguinetti. 2019a. Semeval-2019 task 5: Multilingual detec- tion of hate speech against immigrants and women in twitter. In Proceedings of SemEval. 
synthetic_cpt
4
Gradient_Localization_Improves_Lifelong_Pretraining_of_Language_Models.pdf
Gradient Localization Improves Lifelong Pretraining of Language Models Jared Fernandez Yonatan Bisk Carnegie Mellon University {jaredfern, ybisk, strubell}@cmu.edu Emma Strubell 4 2 0 2 v o N 7 ] L C . s c [ 1 v 8 4 4 4 0 . 1 1 4 2 : v i X r a Abstract Large Language Models (LLMs) trained on web-scale text corpora have been shown to cap- ture world knowledge in their parameters. How- ever, the mechanism by which language models store different types of knowledge is poorly un- derstood. In this work, we examine two types of knowledge relating to temporally sensitive entities and demonstrate that each type is local- ized to different sets of parameters within the LLMs. We hypothesize that the lack of consid- eration of the locality of knowledge in existing continual learning methods contributes to both: the failed uptake of new information, and catas- trophic forgetting of previously learned infor- mation. We observe that sequences containing references to updated and newly mentioned en- tities exhibit larger gradient norms in a subset of layers. We demonstrate that targeting pa- rameter updates to these relevant layers can improve the performance of continually pre- training on language containing temporal drift. 1 Introduction Pretraining over diverse datasets has been shown to encode world knowledge in the parameters of large language models (LLMs) (Petroni et al., 2019; Roberts et al., 2020; Gueta et al., 2023) from mas- sive static web-scale datasets. However, these mod- els are normally trained on large static text corpora which do not reflect changes in world knowledge or language usage that occur after the initial data In practice language models are de- collection. ployed in dynamic real-world settings, and their learned knowledge becomes stale over time (Lazari- dou et al., 2021; Luu et al., 2022; Dhingra et al., 2022; Yao et al., 2022; Nylund et al., 2023; Cheang et al., 2023); the temporal degradation can be eval- uated according to intrinsic measures such as per- plexity, or extrinsic downstream performance (e.g. question answering). Incrementally training language models on streams of data has been explored as a method to Figure 1: When continually pretraining on sequences with updated and newly mentioned entities, certain layers consis- tently observe larger gradient norms. mitigate temporal performance degradation with- out incurring the heavy computational and envi- ronmental costs of retraining models on large pre- training corpora (Jang et al., 2021, 2022; Lin et al., 2022; Gururangan et al., 2020). However, naive on- line training on these data streams is known to: in- duce hallucinations in language generations (Kang et al., 2024), fail in the uptake of new information (Onoe et al., 2023; Hu et al., 2023), and catastroph- ically forget previously learned information (Zhu et al., 2020). To address these problems, recent work has ap- plied continual learning and online learning meth- ods to adapting large language models to streams of documents (Loureiro et al., 2022; Scialom et al., 2022; Jang et al., 2022). While continual learning methods have been shown to mitigate temporal per- formance degradations, the mechanisms by which neural language models store and update informa- tion are not well understood. 
In this work, we consider a real-world setting for continual language learning, that of temporal language drift, and probe the performance of lan- guage models on two types of entity relationships which exhibit temporal degradation: (1) acquisition of information about new entities, and (2) updat- ing relationships between existing entities. We hypothesize that the poor performance of existing continual learning methods on these forms of entity Newly Emerging Entities →Entity Relation Changes→Model Depth||∇||Model Depth||∇||∇(ℒ(·))StalePretrained LM∇(ℒ(·))StalePretrained LM Dataset Year Example Answer TempLAMA 2020 2021 Joe Biden holds the position of __ . Joe Biden holds the position of __ . President-elect.of the United States President of the United States Entity Cloze By Date (ECBD) 2020 2021 The Congressional Budget Office provided a score for the CARES Act on April 16, 2020 estimating it would __. increase federal deficits. On August 14, when Hurricane Grace entered the Caribbean, a tropical storm watch was issued for __. the entire coast of Haiti. Table 1: Examples from TempLAMA and ECBD probing tasks. The temporally sensitive entity is bolded. relationship shift can be in part attributed to a mis- alignment in the autoregressive language modeling pretraining objective and the optimal parameter up- dates required to acquire new information or update existing knowledge. To characterize this misalignment, we com- pare the gradient updates observed when training language models to predict knowledge intensive salient entity spans, with the gradient updates ob- served from standard continual pretraining. We observe that for the gradient updates for predicting knowledge intensive salient spans, observe high values in distinct groups of layers based on the type of entity relationship presented in the sequence (see Fig. 1). Based on these observations, we propose new methods for aligning the gradient updates dur- ing continual pretraining to better align with these layers which exhibit high gradient norms. Through empirical study, we show that the observed charac- teristic gradient patterns occur across autoregres- sive, transformer language models of various of sizes; and we demonstrate the efficacy of our pro- posed method through performance improvements on knowledge probing tasks when applied on top of existing continual learning methods in pretraining. 2 Related Work Continual Pretraining of Language Models. Continued pretraining of models on the target dis- tribution is often used to adapt a generically pre- trained language model from its source to its target setting to update factual knowledge or to adapt to new language domains (Lin et al., 2022; Jin et al., 2022; Wu et al., 2024). However, standard finetun- ing techniques can result in catastrophic forgetting of previously learned tasks and the loss of the pre- trained models generalization capabilities due to distortion of the underlying features and lack of regularization (Kumar et al., 2022). As a mitigation for forgetting, it is common to apply regularizers or constraints on the gradi- ent descent updates such as: gradient projection, example-replay, loss rescaling, or introduction of additional parameters for the target domain (Cossu et al., 2022; Saha et al., 2021; Farajtabar et al., 2020). 
While continual pretraining is commonly used in the adaptation to a sequence of domains (Gururangan et al., 2020; Yıldız et al., 2024), re- cent work is only beginning to explore its use in the adaptation to changing temporal knowledge which can often exhibit finer-grained changes (Jang et al., 2021, 2022; Nylund et al., 2023). Knowledge Localization and Model Editing. Another method to adjust the information contained within large pretrained models is knowledge edit- ing, in which specific factual relations are injected or manipulated by performing causal traces of acti- vations to identify where a model stored knowledge necessary for prediction (De Cao et al., 2021; Meng et al., 2022a,b). However, these methods exhibit high per-edit computational costs and fail to large number of edits (Gupta et al., 2024), which can be- come necessary when updating models over larger corpora or repeatedly over time. 3 Knowledge Probing Using Salient Span We probe language models using the task of salient span prediction, which has previously shown suc- cess as a pretraining objective for knowledge- intensive tasks such as closed-book question an- swering (Cole et al., 2023; Guu et al., 2020). In salient span prediction, a model is provided with a sequence and tasked with completing a masked slot corresponding to a named entity or noun phrase. Specifically, we examine language models on prob- ing tasks for temporal entity knowledge in which the masked sequence corresponds: (1) to an update or change to an existing temporally sensitive enti- ties; (2) to a mention of emerging new entities that were not previously seen during pretraining. 3.1 Probing Datasets We study these using the Dynamic TempLAMA (Dhingra et al., 2022) and the Entity Cloze By Date (Onoe et al., 2022) diagnostic datasets, respectively. Examples can be found in Table 1. Figure 2: Relative gradient norms for the salient spans in ECBD and TempLAMA for the GPT-2 Base (110M; Left-hand side), and GPT-2 Large (770M; Right-hand side), models. Norms for attention (Top) and norms for MLP (Bottom) are depicted separately. Rradient norms of salient spans are 4 to 15x larger than those of the full sequence. Dynamic TempLAMA contains cloze queries consisting of subject-object relations in which the correct answer corresponds to objects that have changed over time. Although the answer may change over time, the referenced subject in each example may have been mentioned in both the seen data (i.e. initial pretraining corpus) and unseen data (i.e. continual pretraining corpus. Thus, we use this dataset to evaluate the ability of continual learning techniques to update existing knowledge. Entity Cloze By Date contains cloze queries where the salient spans correspond to noun-phrases (ECBD-NP) referring to newly emerging entities that are not seen prior to specified cutoff dates. As the entity was not seen in initial pretraining but may have been mentioned in the subsequent con- tinual pretraining, we use the ECBD-NP dataset to evaluate the effectiveness of a continual learning method in knowledge acquisition. Additionally, we evaluate on the ECBD-Popular split in which the salient spans reference entities that exists in all splits. As the ECBD-Popular split references static information that was seen in both the intial and continual pretraining data, we use the ECBD-Popular split to evaluate catastrophic forget- ting the retention of previously learned knowledge. 
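As a concrete illustration of the cloze-style probing above, scoring the gold span with an autoregressive LM given its left context can be sketched as follows. This is a minimal sketch, not the authors' released code; the "gpt2" checkpoint is a stand-in for the domain-adapted models used in the paper.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# "gpt2" is a placeholder checkpoint; the paper uses domain-adapted GPT-2 / GPT-Neo models.
tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

def span_perplexity(context: str, span: str) -> float:
    """Perplexity of `span` given its left `context` (loss restricted to the span tokens)."""
    ctx_ids = tok(context, return_tensors="pt").input_ids
    span_ids = tok(span, return_tensors="pt").input_ids
    input_ids = torch.cat([ctx_ids, span_ids], dim=1)
    labels = input_ids.clone()
    labels[:, : ctx_ids.shape[1]] = -100  # ignore the context positions in the loss
    with torch.no_grad():
        loss = model(input_ids, labels=labels).loss  # mean NLL over the span tokens
    return torch.exp(loss).item()

print(span_perplexity("Joe Biden holds the position of", " President of the United States."))
```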
3.2 Models We examine decoder-only transformer language models of various sizes, specifically: GPT 2-Base (110M parameters) and GPT-2 Large (770M pa- rameters); with additional analysis on GPT-Neo (1.3B parameters) in Appendix 3. To evaluate the perplexity of each of these models, we provide the example context of each example up to the salient span and compute the perplexity over the salient span as in (Onoe et al., 2022, 2023). To align each model with Wikipedia-based knowledge contained in the probing tasks, we per- form domain adaptive pretraining on snapshots of Wikipedia retrieved prior to the pretraining data cutoffs for each model to prevent data contamina- tion. Speicifically, we perform initial pretraining of GPT-2 models on Wikipedia snapshots from Jan- uary 2019, and of GPT-Neo on January 2020. 3.3 Probing Model Response to Salient Spans We hypothesize that the target parameters and gra- dient updates relevant for learning the entity rela- tionship knowledge previously described in Section 3.1 differs from those observed during autogressive continual pretraining. Based on this hypothesis, we analyze the per-layer gradient norms for examples which reflect the target form of knowledge. To identify critical portions of the model, we compare the relative gradient norms for salients span prediction on the knowledge probing tasks with the gradient norms of randomly sampled au- toregressive pretraining examples. Precisely, we provide the autoregressive language model with the left context preceding the salient span and com- pute the parameter gradient with respect to the loss, averaged over each token in the target span. We then aggregate the gradients according to their re- spective transformer block’s component attention and MLP layers, and compute the L2-Norm of the gradients for each layer. We then normalize these per-layer norms with the average per-layer gradi- ents for 2000 examples from the 2019 Wikipedia snapshot over the full sequence. For the ECBD probing dataset, we examine the gradients for the salient span corresponding to the noun phrase related to the target entity, which we refer to ECBD-NP. For the TempLAMA dataset, we examine the loss gradient with respect to the object noun phrase. In Figure 2, we observe that the gradient norms for salient spans are consistently 4 to 15x higher than the gradient norms of randomly sampled pre- training examples for all layers in both GPT2-Base and Large. Additionally, we observe that the rela- tive gradient norms for these salient spans observe a distinct profile in which there is large magnitude in the early and middle layers, and that the relative gradient norms are larger in the attention layers than in the MLP layers. 048Layers6.008.0010.0012.0014.0016.0018.0020.00Relative Gradient Norms048121620242832Layers6.008.0010.0012.0014.0016.0018.0020.00Relative Gradient NormsECBD: NPTempLAMA048Layers6.008.0010.0012.0014.0016.0018.00Relative Gradient Norms048121620242832Layers8.0010.0012.0014.0016.0018.00Relative Gradient NormsECBD: NPTempLAMA 4 Gradient Localized Continual Pretraining Ideally, standard autoregressive pretraining of a lan- guage model on a changing stream of data would be sufficient to update a model to capture the relevant changes in knowledge. However, recent work has demonstrated that current methods for continual learning often suffer from both catastrophic forget- ting and a failure to uptake new knowledge even when it is explicitly contained in the training cor- pus (Hu et al., 2023; Kang et al., 2024). 
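A minimal sketch of the per-layer gradient probe described in §3.3, assuming a Hugging Face GPT-2 model in PyTorch. The parameter-name pattern is specific to GPT-2, and the normalisation against randomly sampled pretraining sequences is applied afterwards by dividing the two sets of norms.

```python
import re
from collections import defaultdict

def per_layer_grad_norms(model, input_ids, labels):
    """L2 gradient norm of the span loss, grouped by transformer block and sub-layer.

    `labels` is -100 everywhere except on the salient-span tokens, so the loss
    (and therefore the gradient) is restricted to the span.
    """
    model.zero_grad()
    model(input_ids, labels=labels).loss.backward()
    sq_norms = defaultdict(float)
    for name, p in model.named_parameters():
        if p.grad is None:
            continue
        # GPT-2 parameters are named like "transformer.h.<block>.attn.*" / "...mlp.*"
        m = re.match(r"transformer\.h\.(\d+)\.(attn|mlp)\.", name)
        if m:
            sq_norms[(int(m.group(1)), m.group(2))] += p.grad.norm().item() ** 2
    return {layer: s ** 0.5 for layer, s in sq_norms.items()}
```

Averaging these norms over a probe set and dividing by the corresponding averages for randomly sampled pretraining sequences yields the relative profiles plotted in Figure 2.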
Based on our observations from §3, we hypothesize that one cause of failed transfer is a misalignment between the gradients of the NLL objective and the desired update implied by the information content of the data observed during continual pretraining.

We propose a method to improve the acquisition of entity knowledge by amplifying updates to the layers that are relevant to the learning of salient entity spans. To identify relevant layers, we compute the relative gradient norm for each layer i as the ratio between the gradient norm ˜∇i of layer i for knowledge-intensive salient span prediction on data sampled from the validation set of the TempLAMA diagnostic dataset, and the gradient norm for autoregressive pretraining on randomly sampled data from the continual pretraining data stream:

˜∇i = ||∇i L(Mθ, (x, y)TempLAMA)|| / ||∇i L(Mθ, (x, y)PT)||   (1)

We propose two methods to improve knowledge uptake by aligning gradient updates during continual pretraining. For relevant salient spans from the TempLAMA diagnostic dataset, we construct a profile of the relative gradient norms with respect to the gradients for randomly sampled pretraining sequences. We then adjust the learning rates for layers in this profile to increase the updates to layers with large relative gradient norms. We refer to our methods as Traced Gradient Layers (TGL).

Selecting Trainable Layers for Pretraining with Relative Gradient Norm We consider a simple approach that targets continual pretraining updates to layers with high relative gradient norm, by only updating parameters whose relative gradient norm on the TempLAMA diagnostic dataset exceeds the mean relative gradient norm over all layers; we refer to this method, which freezes parameters, as TGL + FP. In the case of the GPT-2 architecture, we separate the model into its component MLP and attention layers, then compute the relative gradient norm for each layer as the ratio between the average gradient norm computed over salient spans from the TempLAMA dataset and the gradients for examples from the continual pretraining corpus. Precisely, we freeze a parameter group for layer i if

˜∇i < (1 / No. Layers) · Σ_{k ∈ Layers} ˜∇k.

Per-Layer Adaptive Learning Rates from Relative Gradient Norm Rather than using the relative gradient norm as a hard threshold to determine which layers to update, we instead consider an adaptive approach in which the learning rate of each layer scales with the magnitude of its relative gradient norm; we refer to this method as TGL + ALR. We scale the per-layer learning rate for layer i as:

ηi = η · ˜∇i / max_{k ∈ Layers} ˜∇k

5 Training and Dataset Details

To perform domain adaptive pretraining, we sample and preprocess a snapshot of Wikipedia from January 2019 using Wikiextractor. For continual pretraining, we follow the methodology of (Jang et al., 2022) to collect snapshots of Wikipedia from each of the subsequent years until 2022 and filter each corpus to contain the edits to Wikipedia made in the intervening year, consisting of new articles and sentences within existing articles that were edited between succeeding snapshots.

5.1 Baselines

We compare the performance of our proposed continual pretraining method with existing approaches from continual learning.
We consider vanilla con- tinual pretraining in which we update all param- eters; a parameter-expansion method LoRA (Hu et al., 2021), which introduces additional trainable low rank adapters to the self-attention layers; a replay-based method MixReview (He et al., 2021), which randomly mixes previously seen pretraining data alongside current data; and the regularization- based method RecAdam (Chen et al., 2020), which imposes a quadratic penalty on the norm of the parameter update. Initial domain adaptive pretraining is performed on a the complete Wikipedia snapshot for 4 epochs with a global batch size of 64, or approximately 500,000 training iterations. Models are trained using the Adam optimizer with weight decay and a linear warmup schedule over 10% of examples and a linear decay with a max learning rate of 1E-4. Evaluation Set: 2020 ECBD Pop. ECBD NP TempLAMA Pretrain Domain Pretrain Continual Pretrain + TGL with FP LoRA: 64D, Attn + TGL with FP MixReview + TGL with FP RecAdam + TGL with FP 40.99 30.90 34.79 34.13 31.94 30.28 28.70 28.24 34.78 33.56 47.44 41.39 43.97 44.20 41.40 41.05 37.34 37.77 43.92 43.41 81.92 62.99 56.72 55.19 57.21 56.32 67.64 60.05 57.34 54.75 Table 2: TGL with frozen layers improves performance of GPT2-Large (770M) during continual pretraining. Evaluation Set: 2020 ECBD Pop. ECBD NP TempLAMA Pretrain Domain Pretrain Continual Pretrain + TGL with ALR + TGL with FP MixReview + TGL with ALR + TGL with FP LoRA + TGL with ALR + TGL with FP RecAdam + TGL with ALR + TGL with FP 78.61 55.26 64.13 57.62 57.75 54.10 53.50 53.48 55.77 57.75 58.09 57.55 57.52 57.55 80.04 62.59 72.42 64.83 65.08 61.54 61.01 61.48 65.56 69.44 67.62 64.60 64.77 64.89 162.54 80.51 83.39 77.58 74.55 82.16 77.04 76.35 80.11 78.40 78.77 76.67 77.32 74.88 Evaluation Set: 2021 ECBD Pop. ECBD NP TempLAMA Pretrain Domain Pretrain Continual Pretrain + TGL with ALR + TGL with FP MixReview + TGL with ALR + TGL with FP LoRA + TGL with ALR + TGL with FP RecAdam + TGL with ALR + TGL with FP 78.61 55.26 67.18 57.91 57.83 51.96 53.42 52.81 58.07 58.06 58.39 64.42 57.72 57.69 98.47 66.16 77.70 63.45 63.55 57.69 59.60 58.31 66.89 69.17 66.31 73.34 63.53 63.60 167.23 82.60 86.34 78.85 74.88 81.88 78.75 79.17 76.78 79.03 78.19 92.26 78.39 75.21 Table 3: Traced Gradient Layers (TGL) can be applied on top of existing continual pretraining methods by ap- plying per-layer adaptive learning rates (ALR) or frozen parameters (FP) to improve performance (perplexity of the slot) of existing continual learning methods. the model During continual pretraining, is trained for one epoch on the Wikipedia edits for the subsequent year. For the MixReview method, unedited articles are added Wikipedia edits corpus at a 2:1 ratio. We train LoRA adapters with a hid- den rank of 64 dimensions. 5.2 Evaluating TGL for Continual PT To evaluate the performance of TGL+FP and TGL+AR, we incrementally train the domain- adapted language model on the subsequent set of Wikipedia revisions for the years of 2020 and 2021. We then probe the continually pretrained model after each updating on new year of Wikipedia re- visions using the corresponding temporally delin- eated split from the ECBD-NP and TempLAMA test datasets 3.1. To evaluate whether either TGL method leads to catastrophic forgetting, we also report performance on ECBD-Popular, which con- tains sequences referring to entities common in all years including entities previously seen during initial pretraining. 
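Before turning to the results, the bookkeeping behind the two TGL variants of §4 can be sketched as follows. This is a reconstruction under stated assumptions, not the released implementation: `rel_norm` maps each (block, sub-layer) group to its relative gradient norm from Eq. 1, the treatment of parameters outside the probed groups and the weight-decay value are assumptions, and the parameter-name pattern assumes a Hugging Face GPT-2 model.

```python
import re
import torch

def tgl_optimizer(model, rel_norm, base_lr=1e-4, mode="alr"):
    """Build an optimizer implementing TGL+FP (freeze below-average layers) or TGL+ALR."""
    mean_rel = sum(rel_norm.values()) / len(rel_norm)
    max_rel = max(rel_norm.values())
    param_groups = []
    for name, p in model.named_parameters():
        m = re.match(r"transformer\.h\.(\d+)\.(attn|mlp)\.", name)
        key = (int(m.group(1)), m.group(2)) if m else None
        r = rel_norm.get(key, mean_rel)  # assumption: untracked params get the mean value
        lr = base_lr
        if mode == "fp":
            # TGL+FP: train a layer only if its relative norm is at least the mean
            p.requires_grad_(r >= mean_rel)
        else:
            # TGL+ALR: eta_i = eta * rel_i / max_k rel_k
            lr = base_lr * r / max_rel
        if p.requires_grad:
            param_groups.append({"params": [p], "lr": lr})
    return torch.optim.AdamW(param_groups, weight_decay=0.01)
```

Here `rel_norm` is the profile estimated from the TempLAMA validation split as described in §4.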
In Table 3, we report the perplexities of the con- tinually pretrained model on the 2020 and 2021 test splits with the GPT-2 Base (110M) model. Relative to the domain-adapted pretrained initialization, we observe that all continual learning baselines exhibit performance tradeoffs in which performance either improves on the probe tasks for recognizing new entities (ECBD-NP) or improves on updating entity relations (TempLAMA). When applying TGL methods on top of contin- ual learning methods, we see that it is possible to avoid catastrophic forgetting as we observe de- creases in probing task perplexity relative to the continual learning baselines. In Table 2, we scale our experiments to the GPT-2 Large (770M) model and observe that the improvements from localized gradient updates extend to continual pretraining for the larger model. 6 Conclusion In this work, we conduct an analysis of the gradi- ent updates observed during knowledge intensive salient span prediction and autoregressive language modeling, and observe characteristic differences in the layer-wise norms for each objective. Based on this observation, we proposed Traced Gradient Layers (TGL) a method for identifying relevant layers to target during continual pretraining of lan- guage models. We observe that our proposed ap- proach improve language model performance on tasks probing for entity and relational knowledge; without the need for fine-grained annotations. Acknowledgements The authors would like to thank Sanket Vaibhav Mehta for helpful discussions, as well as Clara Na and Jeremiah Milbauer for manuscript feed- back. This work was supported in part by fund- ing from the National Science Foundation Gradu- ate Research Fellowship Program under Grant No. DGE2140739, and by DSO National Laboratories. Limitations and Ethical Considerations In our work, we observe that per-layer gradient norms can be utilized as an informative indicator for identifying layers to train during continual pre- training on temporally changing data. Although perplexity is a commonly used metric for evaluat- ing language models and can often be useful in mea- suring the quality of a model, it is unclear whether improvements in knowledge probe perplexity trans- fers to downstream settings. While the goal of our investigations is to miti- gate the need for environmentally and financially prohibitive pretraining by enabling the continual learning of existing models, it is possible that re- ductions in the cost of pretraining may then lead more individuals and organizations to pursue large model pretraining (i.e. Jevons Paradox). References Chi Cheang, Hou Chan, Derek Wong, Xuebo Liu, Zhao- cong Li, Yanming Sun, Shudong Liu, and Lidia Chao. 2023. Can lms generalize to future data? an empiri- cal analysis on text summarization. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 16205–16217. Sanyuan Chen, Yutai Hou, Yiming Cui, Wanxiang Che, Ting Liu, and Xiangzhan Yu. 2020. Recall and learn: Fine-tuning deep pretrained language models with less forgetting. arXiv preprint arXiv:2004.12651. Jeremy R. Cole, Aditi Chaudhary, Bhuwan Dhingra, and Partha Talukdar. 2023. Salient span masking for temporal understanding. In Proceedings of the 17th Conference of the European Chapter of the As- sociation for Computational Linguistics, pages 3052– 3060, Dubrovnik, Croatia. Association for Computa- tional Linguistics. Andrea Cossu, Tinne Tuytelaars, Antonio Carta, Lu- cia Passaro, Vincenzo Lomonaco, and Davide Bac- ciu. 2022. 
Continual pre-training mitigates for- arXiv preprint getting in language and vision. arXiv:2205.09357. Nicola De Cao, Wilker Aziz, and Ivan Titov. 2021. Edit- ing factual knowledge in language models. arXiv preprint arXiv:2104.08164. Bhuwan Dhingra, Jeremy R Cole, Julian Martin Eisen- schlos, Dan Gillick, Jacob Eisenstein, and William Cohen. 2022. Time-aware language models as tempo- ral knowledge bases. Transactions of the Association for Computational Linguistics, 10:257–273. Mehrdad Farajtabar, Navid Azizan, Alex Mott, and Ang Li. 2020. Orthogonal gradient descent for continual learning. In International Conference on Artificial Intelligence and Statistics, pages 3762–3773. PMLR. Almog Gueta, Elad Venezian, Colin Raffel, Noam Slonim, Yoav Katz, and Leshem Choshen. 2023. Knowledge is a region in weight space for fine-tuned language models. In The 2023 Conference on Empir- ical Methods in Natural Language Processing. Akshat Gupta, Anurag Rao, and Gopala Anu- manchipalli. 2024. Model editing at scale leads to gradual and catastrophic forgetting. arXiv preprint arXiv:2401.07453. Suchin Gururangan, Ana Marasovi´c, Swabha Swayamdipta, Kyle Lo, Iz Beltagy, Doug Downey, and Noah A Smith. 2020. Don’t stop pretraining: In Adapt language models to domains and tasks. Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8342–8360. Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasu- pat, and Mingwei Chang. 2020. Retrieval augmented language model pre-training. In International confer- ence on machine learning, pages 3929–3938. PMLR. Tianxing He, Jun Liu, Kyunghyun Cho, Myle Ott, Bing Liu, James Glass, and Fuchun Peng. 2021. Analyzing the forgetting problem in pretrain-finetuning of open- domain dialogue response models. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 1121–1133, Online. Association for Computational Linguistics. Edward J Hu, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, Weizhu Chen, et al. 2021. Lora: Low-rank adaptation of large lan- guage models. In International Conference on Learn- ing Representations. Nathan Hu, Eric Mitchell, Christopher D Manning, and Chelsea Finn. 2023. Meta-learning online arXiv preprint adaptation of language models. arXiv:2305.15076. Joel Jang, Seonghyeon Ye, Changho Lee, Sohee Yang, Joongbo Shin, Janghoon Han, Gyeonghun Kim, and Minjoon Seo. 2022. Temporalwiki: A lifelong bench- mark for training and evaluating ever-evolving lan- guage models. In Proceedings of the 2022 Confer- ence on Empirical Methods in Natural Language Processing, pages 6237–6250. Joel Jang, Seonghyeon Ye, Sohee Yang, Joongbo Shin, Janghoon Han, KIM Gyeonghun, Stanley Jungkyu Choi, and Minjoon Seo. 2021. Towards continual knowledge learning of language models. In Interna- tional Conference on Learning Representations. Xisen Jin, Dejiao Zhang, Henghui Zhu, Wei Xiao, Shang-Wen Li, Xiaokai Wei, Andrew Arnold, and Xiang Ren. 2022. Lifelong pretraining: Continually adapting language models to emerging corpora. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, pages 4764–4780. Yasumasa Onoe, Michael J.Q. Zhang, Shankar Padman- abhan, Greg Durrett, and Eunsol Choi. 2023. Can lms learn new entities from descriptions? challenges in propagating injected knowledge. In Annual Meet- ing of the Association for Computational Linguistics. 
Katie Kang, Eric Wallace, Claire Tomlin, Aviral Ku- mar, and Sergey Levine. 2024. Unfamiliar finetuning examples control how language models hallucinate. arXiv preprint arXiv:2403.05612. Ananya Kumar, Aditi Raghunathan, Robbie Jones, Tengyu Ma, and Percy Liang. 2022. Fine-tuning can distort pretrained features and underperform out- of-distribution. arXiv preprint arXiv:2202.10054. Angeliki Lazaridou, Adhi Kuncoro, Elena Gribovskaya, Devang Agrawal, Adam Liska, Tayfun Terzi, Mai Gimenez, Cyprien de Masson d’Autume, Tomas Ko- cisky, Sebastian Ruder, et al. 2021. Mind the gap: Assessing temporal generalization in neural language models. Advances in Neural Information Processing Systems, 34:29348–29363. Bill Yuchen Lin, Sida I Wang, Xi Lin, Robin Jia, Lin Xiao, Xiang Ren, and Scott Yih. 2022. On continual model refinement in out-of-distribution data streams. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 3128–3139. Daniel Loureiro, Francesco Barbieri, Leonardo Neves, Luis Espinosa Anke, and Jose Camacho-Collados. 2022. Timelms: Diachronic language models from twitter. arXiv preprint arXiv:2202.03829. Kelvin Luu, Daniel Khashabi, Suchin Gururangan, Kar- ishma Mandyam, and Noah A Smith. 2022. Time waits for no one! analysis and challenges of tem- In Proceedings of the 2022 poral misalignment. Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 5944–5958. Kevin Meng, David Bau, Alex Andonian, and Yonatan Belinkov. 2022a. Locating and editing factual as- sociations in gpt. Advances in Neural Information Processing Systems, 35:17359–17372. Kevin Meng, Arnab Sen Sharma, Alex Andonian, Yonatan Belinkov, and David Bau. 2022b. Mass editing memory in a transformer. arXiv preprint arXiv:2210.07229. Kai Nylund, Suchin Gururangan, and Noah A Smith. 2023. Time is encoded in the weights of finetuned language models. arXiv preprint arXiv:2312.13401. Yasumasa Onoe, Michael Zhang, Eunsol Choi, and Greg Durrett. 2022. Entity cloze by date: What lms know about unseen entities. In Findings of the Association for Computational Linguistics: NAACL 2022, pages 693–702. Fabio Petroni, Tim Rocktäschel, Sebastian Riedel, Patrick Lewis, Anton Bakhtin, Yuxiang Wu, and Alexander Miller. 2019. Language models as knowl- In Proceedings of the 2019 Confer- edge bases? ence on Empirical Methods in Natural Language Pro- cessing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2463–2473. Adam Roberts, Colin Raffel, and Noam Shazeer. 2020. How much knowledge can you pack into the pa- arXiv preprint rameters of a language model? arXiv:2002.08910. Gobinda Saha, Isha Garg, and Kaushik Roy. 2021. Gradient projection memory for continual learning. arXiv preprint arXiv:2103.09762. Thomas Scialom, Tuhin Chakrabarty, and Smaranda Muresan. 2022. Fine-tuned language models are continual learners. arXiv preprint arXiv:2205.12393. Tongtong Wu, Linhao Luo, Yuan-Fang Li, Shirui Pan, Thuy-Trang Vu, and Gholamreza Haffari. 2024. Con- tinual learning for large language models: A survey. arXiv preprint arXiv:2402.01364. Huaxiu Yao, Caroline Choi, Bochuan Cao, Yoonho Lee, Pang Wei W Koh, and Chelsea Finn. 2022. Wild- time: A benchmark of in-the-wild distribution shift over time. Advances in Neural Information Process- ing Systems, 35:10309–10324. Ça˘gatay Yıldız, Nishaanth Kanna Ravichandran, Pr- ishruit Punia, Matthias Bethge, and Beyza Ermis. 2024. 
Investigating continual pretraining in large language models: Insights and implications. arXiv preprint arXiv:2402.17400.

Chen Zhu, Ankit Singh Rawat, Manzil Zaheer, Srinadh Bhojanapalli, Daliang Li, Felix Yu, and Sanjiv Kumar. 2020. Modifying memories in transformer models. arXiv preprint arXiv:2012.00363.

A Licenses

Wikipedia data, which was used to construct TempLAMA and ECBD, the datasets we used, has a Creative Commons Attribution-ShareAlike 4.0 International License (CC BY-SA). TempLAMA is also derived from LAMA, which has a CC Attribution-NonCommercial 4.0 International License (CC BY-NC 4.0), and the script for constructing it is licensed under the Apache License, Version 2.0. Our use of the datasets is for research purposes only and aligns with the intended use.

B Dataset Details

Examples from the Dynamic TempLAMA and ECBD probing and evaluation datasets are provided in Table 1. Details on the datasets used for domain-specific and continual pretraining are provided in Table 4.

Split     Date       No. Articles   No. Tokens
Complete  Jan. 2019  7.9 Million    1.81 Billion
Edits     Jan. 2020  364,235        268 Million
Edits     Jan. 2021  419,879        311 Million
Edits     Jan. 2022  425,296        309 Million

Table 4: Statistics on the Wikipedia corpora used for domain adaptive and continual pretraining.

C Gradient Profiles for GPT-Neo (1.3B)

In addition to probing the 110M and 770M parameter GPT-2 models in Section 3, we examine the gradient characteristics of the larger GPT-Neo (1.3B parameter) model. As the GPT-Neo model was pretrained on the Pile with a data cutoff year of 2020, we conduct initial domain adaptive pretraining on a snapshot of Wikipedia from January 2020, and conduct gradient norm probes using TempLAMA and ECBD evaluation splits from 2020.

Figure 3: Relative Gradient Norms for the GPT-Neo 1.3B parameter model.

For GPT-Neo, we observe similar characteristic gradient profiles, with increases in relative gradient norm in the first and final layers for the ECBD new entity probes (ECBD-ENT), as well as an increase in relative gradient norm in the middle layers for probes of relational changes (TempLAMA) in Figure 3.
synthetic_cpt
8
Can_Open-source_LLMs_Enhance_Data_Synthesis_for_Toxic_Detection_An_Experimental_Study.pdf
3 0 0 2 b e F 6 2 ] A Q . h t a m [ 1 v 2 3 3 2 0 3 0 / h t a m : v i X r a OPEN AND CLOSED STRING FIELD THEORY INTERPRETED IN CLASSICAL ALGEBRAIC TOPOLOGY DENNIS SULLIVAN Dedicated to Graeme Segal on his 60th birthday Abstract: There is an interpretation of open string field theory in al- gebraic topology. An interpretation of closed string field theory can be deduced from this open string theory to obtain as well the interpretation of open and closed string field theory combined. The algebraic structures derived from the first string interactions are related to algebraic models discussed in work of (Atiyah-Segal), (Moore-Segal) and (Getzler and Se- gal). For example the Corollary 1 of §1 says that the homology of the space of paths in any manifold beginning and ending on any submanifold has the structure of an associative dialgebra satisfying the module or Frobenius compatibility (see appendix). Corollary 2 gives another structure. §1Open string states in M: The open string theory interpretation in topology includes a collection of linear categories ϑM one for each am- bient space M. The objects of ϑM are smooth oriented submanifolds La, Lb, Lc, ... of M. The set of morphisms ϑab between two objects La and Lb are graded chain complexes, linearly generated by smooth oriented families of paths from La to Lb. An element in ϑab is called an open string state. A path is a piecewise smooth map [0,1]→ M. 1 2 DENNIS SULLIVAN The first open string interactions are i)two endpoint restrictions: ϑab r→ ϑa′b and ϑab r→ ϑab′ where La′ is a submanifold of La and Lb′ is a submanifold of Lb. Degree r = −cod of submanifold. ii)joining or composition ϑab ⊗ ϑbc ∧ → ϑac, degree ∧ = −dim Lb iii) cutting or cocomposition ϑac ∨ → ϑab ⊗ ϑbc, degree ∨ = −cod Lb +1 Namely, i)(restriction) for an open string state in ϑab(ie. a chain in ϑab) one can intersect transversally in La the chain of beginning points in La with La′ to obtain a chain in ϑa′b. The same idea works in Lb for the endpoints of paths to construct ϑab r→ ϑab′. ii)(joining) the transversal intersection in Lb of the chain of endpoints for an open string state in ϑab with the chain of beginning points for an open string state in ϑbcis a chain labelling composible paths which after composing defines an open string state in ϑac, and the composition ϑab ⊗ ϑbc ∧ → ϑac. iii)(cutting) Now it is required that La, Lb, Lc, ... have oriented normal bundles. For example, this is true if the ambient space M is a smooth manifold. Then given an Lb and any open string state in ϑac we may transversally intersect in M the paths with Lb. The intersection chain labels cuttings of the path at Lb defining ϑac ∨ → ϑab ⊗ ϑbc. (We use Eilenberg-Zilber.) The operation ∨ refers to cutting at any time along the path whenever it crosses Lb. We can also consider the operation ∨t of cutting at a specific OPEN AND CLOSED STRING FIELD THEORY INTERPRETED IN CLASSICAL ALGEBRAIC TOPOLOGY3 time tǫ[0, 1]. All these ∨t are chain homotopic. In fact ∨ is the chain homotopy between ∨0 cutting at time zero and ∨1 cutting at time one. Remark : Actually the above operations are directly defined by the above descriptions only for states satisfying transversality conditions. To go from such a typical definition to a complete definition perturbations of the identity creating transversality must be introduced. The combinatorics of these perturbations fits neatly into Stasheff’s strong homotopy formalism [S]. 
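For reference, the three transversal operations just described, together with their degrees, can be collected in one display (a restatement of i), ii), iii) above, not additional structure):

r : ϑab → ϑa′b,  deg r = −codim(La′ ⊂ La)
∧ : ϑab ⊗ ϑbc → ϑac,  deg ∧ = −dim Lb
∨ : ϑac → ϑab ⊗ ϑbc,  deg ∨ = −codim Lb + 1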
An elegant treatment can be read in Fukaya et al [1], for the classical case of intersecting chains in a manifold. Theorem: For each ambient oriented smooth manifold M there is an open string category whose objects are smooth submanifolds La, Lb, Lc, .. and whose morphisms are chains ϑαβ on paths between objects Lα and Lβ. Only the objects La which are compact (without boundary) have iden- tity maps (which commute with the boundary operator). For transversal open string states in ϑαβ,... composition ∧ is associative, cocomposition ∨ is coassociative, and the derivation compatibility holds between ∨ and ∧(x, y) = x · y, ∨(x · y) = x · ∨y + ∨x · y(see appendix). ∧ and ∨t commute with ∂ but [∨, ∂] = ∨1 − ∨0. On the full space of open string states, associativity for ∧ and coasso- ciativity for ∨t hold up to strong homotopy in the sense of Stasheff. There are conjecturally similar strong homotopy statements for coassociativity of ∨ and the derivation or infinitesimal bialgebra compatibility between ∧ and ∨.(see appendix). 4 DENNIS SULLIVAN Corollary 1 : For each object La the homology of ϑaa is an associative algebra via the composition operation ∧ (with identity if La is compact without boundary). The operation ∨t is a coassociative coalgebra (which if non zero implies La cannot be deformed off of itself). The ∧, ∨t dialgebra satisfies the module or Frobenius compatibility (see appendix). Proof of corollary: i) The algebra statement follows from a) ∧ com- mutes with ∂ operator on open string states and so passes to homology b)homotopy associativity at the chain level implies associativity at the homology level. ii) a)The fixed time cutting operation ∨t also commutes with the ∂ opera- tor and passes to homology. b) because different times are chain homotopic we can choose them conveniently to prove the module or Frobenius com- patibility. To calculate ∨t(x · y) we can choose t in x’s time to see that we get ∨t(x) · y or in y’s time to see that we get x · ∨t(y). See the remark 2) for the rest. Sketch proof of theorem: 1) One sees the indicated identities hold for transversal chains by looking at the picture. For example, when cutting a joining of paths, the cut can happen in the first part or the second part. This yields the derivation compatibility. 2) The strong homotopy properties follow using i) manifolds are locally contractible ii) transversality can be created in manifolds by arbitrarily small pertu- bations. OPEN AND CLOSED STRING FIELD THEORY INTERPRETED IN CLASSICAL ALGEBRAIC TOPOLOGY5 Remarks: 1) The coalgebra ∨t is chain homotopic to ∨0 which may be written as a composition involving the restriction and the diagonal map- ping. Let La′ be the transversal intersection of La with itself. Then ∨0 is the composition of, first the restriction of the beginning point to La′, next the inclusion of ϑa′a into ϑaa, next the diagonal map on generat- ing chains of ϑaa, next the cartesian product on chains of the beginning point operator(thought of as a constant path) with the identity and finally Eilenberg-Zilber. A similar composition and statement hold for ∨1. 2) We can use remark 1) to define a new coalgebra structure on homology when La is deformable off itself, say to Lb. Then define ∨ : ϑaa → ϑab ⊗ϑba cutting at variable time and note that ∨0 and ∨1 are zero on the chain level. Thus ∨ commutes with ∂ and passes to homology. 
We use the obvious equivalences ϑaa ∼ ϑba ∼ ϑab to obtain: Corollary 2 : If La is deformable off of itself, the homology of open string states on La has the structure of an associative dialgebra satisfying the derivation or infinitesimal bialgebra compatibility (see appendix). Examples: i)(manifolds) La = M the ambient space. Then ϑaa is equiv- alent to the ordinary chains on M since paths in M is homotopy equivalent to M. Then the strong homotopy associativity algebra structure on ϑaa is equivalent to the intersection algebra of chains on M. The operation ∨◦ ∼ ∨t ∼ ∨1 is chain equivalent to the diagonal mapping on chains. One recovers the known fact that on passing to homology one obtains a graded commutative algebra structure C ⊗ C ∧ → C and a graded cocommuta- tive coalgebra structure C ∨ → C ⊗ C satisfying the module or Frobenius 6 DENNIS SULLIVAN compatibility ∨(x·y) = x·(∨y) = (∨x)·y where the notation refers to mul- tiplication on the left and right factors of the tensor product respectively (see appendix). Note when M is a closed oriented manifold ∧ and ∨ are related by the non degenerate intersection pairing, Poincare duality. ii)(based loop space)M is any space and La is a point in M. Then ϑaa is the chains on the based loop space of M and the algebra structure on ϑaa is the Pontryagin algebra of chains on the based loop space (the original setting of Stasheff’s work). No transversality is needed here because all paths are composible. Here one has Hopf’s celebrated compatibility with the diagonal map ∨′ on chains that ∨′ is a map of algebras. The connection of the latter with the open string theory here is a mystery (but compare [2] and remark 1) above). If M is a manifold of dim M near La and La is a point, the cocomposition ∨t is defined but is zero in homology. The operation ∨ can then be refined to a chain mapping and passes to homology (remark 2)). ϑaa obtains a coassociative coalgebra structure on homology of degree (-dim M) +1 satisfying the derivation or infinitesimal bialgebra compatibility (of the theorem) with the Pontryagin product. Here one is splitting a based loop where it passes again through a (nearby) base point. iii) (free loop space) Let M = L x L and La ⊂ M be the diagonal. Then paths in M beginning and ending on La is homeomorphic to the free loop space of L= Maps (circle, L). Then the algebra structure on ϑaa is chain homotopic to the loop product of ”String Topology”[2]. This is OPEN AND CLOSED STRING FIELD THEORY INTERPRETED IN CLASSICAL ALGEBRAIC TOPOLOGY7 a graded commutative algebra structure on the homology of the free loop space of the manifold L. The degree is zero if we grade by the negative codimension (k−dimM). The product interacts with the circle action differential △ of degree +1. The deviation of △ from being a derivation of the loop product △(x · y) − (△x) · y − x · (△y) is a Lie bracket of degree +1 which is com- patible via the Leibniz identity with the loop product (all on homology). This Lie bracket is a geometric version [2] of Gerstenhaber’s bracket in the (Hochshild) deformation complex of an associative algebra. For sim- ply connected closed manifolds L the Hochshild complex ⊕k Hom (A⊗k, A) applied to the intersection algebra A of chains on L is a model of the free loop space of L (Cohen-Jones, Tradler) which realizes the above compar- ison (Tradler). 
The Lie product on the free loop space of degree +1 is compatible via the connecting morphism M between equivariant homology and ordinary homology with a Lie bracket on the equivariant free loop space homol- ogy [2]. The latter Lie bracket generalizes to all manifolds the Goldman bracket (related to the Poisson structure on flat bundles over a surface) on the vector space generated by conjugacy classes in the fundamental group of a surface [Goldman] (see closed strings §2 below). If the coalgebra part ∨t of the Frobenius dialgebra on homology of the free loop space of L is non zero, then L is a closed manifold with non-zero Euler characteristic. Otherwise a homotopy class of non-zero vector fields on L allows a refining of the operation ∨ cutting at variable time to an 8 DENNIS SULLIVAN operation commuting with ∂ and we obtain in this case an infinitesimal bialgebra structure (appendix) on the homology of the free loop space. §2Closed string states in M (now called L): For closed string states in L we take the chains for the equivariant free loop space of L relative to the circle action rotating the domain. There are maps C→ closed string states in L M→ open string states on the diagonal in LxL ... E→ closed string states in L C→... leading to the long exact sequence relating ordinary homology and equi- variant circle homology. Here we are thinking of the free loop space of L as paths in L x L beginning and ending on the diagonal. The connecting chain map C has degree -2 and intersects with a rep- resentative of the 1st chern class of the line bundle associated to the S1 action made free by crossing with a contractible space on which S1 acts freely. The chain map M has degree +1 and is associated to adding a mark to a closed string in all ways to get a circle of free loops. The chain map E has degree zero and is associated to forgetting the mark on a loop to get a closed string. The composition EM = 0 and the composition ME is △ the differential associated to the circle action. The string product on closed string states satisfying Jacobi (at the transversal chain level) may be defined by the formula [α, β] = E(Mα ∧ Mβ) where ∧ is the open string product (the procedure in example 3 above only satisfies Jacobi up to a non trivial chain homotopy). Other indepen- dent closed string operations cn can be defined by cn(α1, α2, ..., αn)=E(Mα1∧ OPEN AND CLOSED STRING FIELD THEORY INTERPRETED IN CLASSICAL ALGEBRAIC TOPOLOGY9 Mα2 ∧ ... ∧ Mαn)(cf. [2] and [G]). These all commute with the ∂ operator and satisfy other identities transversally [2]. The collision operators cn pass to the reduced equivariant complex or reduced closed string states which is defined to be the equivariant chain complex for the S1 pair, (free loop space, constant loops). We can define a closed string cobracket s2 by the formula s2(α) = (E ⊗ E)(∨(Mα)). In the reduced complex s2 commutes with ∂ and passes to homology (but not so in the unreduced complex). Theorem: The closed string bracket c2(α, β) = E(Mα ∧ Mβ) where x ∧ y = ∧(x⊗y) and the closed string cobracket s2(α) = (E⊗E)(∨Mα) satisfy respectively jacobi, cojacobi, and Drinfeld compatibility (appendix). The term satisy means either on the level of integral homology, for transversal chains on the chain level, or conjecturally at the Stasheff level of strong homotopy. Proof : These formulae in terms of open strings are reinterpretations as in [2] of the definitions given in ”Closed string operators in topology lead- ing to Lie bialgebras and higher string algebra” [3]. 
There the identities at the transversal chain level were considered. Corollary: Homology of reduced closed string states forms a Lie bialge- bra, [3]. Remark : Independent splitting operations s3, s4, ... can be defined sim- ilarly by iterations of ∨, sn(α) = E ⊗ ... ⊗ E(... ∨ ⊗1 · ∨(Mα)). These also commute with ∂ and pass to homology in the reduced equivariant theory. A conjecture about c2, c3, ... s2, s3, ... generating genus zero closed string 10 DENNIS SULLIVAN operators and the algebraic form of this structure was proposed in [3] and is mentioned below in the summary. Also, compare [Chas] for the original questions motivating this work. Interplay between open and closed string states: Let C denote the closed string states in M, a manifold of dimension d, and let ϑ denote any of the complexes of open string states. Transversality yields an action of closed strings on open strings, C ⊗ ϑ → ϑ degree=(−d + 2) and a coaction of closed strings on open strings ϑ → C ⊗ ϑ degree=(−d + 2) In the coaction we let the open string hit itself at any two times and split the event into a closed string and an open string. In the action we let a closed string combine with an open string to yield an open string. The action is a Lie action of the Lie algebra of closed strings by deriva- tions at the transversal chain level. Both the action and the coaction have a non trivial commutator with the boundary operator on chains. §3Connection to work of (Atiyah-Segal),(Moore-Segal) and (Getzler and Segal): Dialgebras satisfying the module or Frobenius compatibility give examples of 1+1 TQFT’s in the positive boundary sense. In the commu- tative case we associate the underlying vector space to a directed circle, its tensor products to a disjoint union of directed circles and to a connected 2D oriented bordism between two non empty collections the morphism ob- tained by decomposing the bordism into pants and composing accordingly the algebra or coalgebra map. The module or Frobenius compatibility is OPEN AND CLOSED STRING FIELD THEORY INTERPRETED IN CLASSICAL ALGEBRAIC TOPOLOGY11 just what is required for the result to be independent of the choice of pants decomposition. N.B. this description differs from the usual one because we don’t have disks to close up either end of the bordism. One knows these discs at both ends would force the algebra to be finite dimensional and the algebra and coalgebra to be related by a non degenerate inner product. We refer to these generalizations of the Atiyah-Segal concepts as the positive boundary version of TQFT (a name due to Ralph Cohen). An exactly similar discussion with associative dialgebras satisfying the module or Frobenius compatibility leads to a positive boundary version of a relative TQFT using open intervals. Now the algebra and coalgebra are associated to 1/2 pants (a disc with ∂ divided into six intervals-three (1/2 seams) alternating with two (1/2 cuffs) and one (1/2 waist)). Any planar connected bordism between two nonempty collections of intervals determines a mapping between inputs and outputs. The structures we have found (including ∂ labels La, Lb, ...) for open strings using the composition ∧ and fixed time cutting ∨t satisfies this Frobenius compatibility up to chain homotopy and we can apply it at the homology level in the relative TQFT scheme just mentioned. This fits with the work of Moore-Segal [M]. As we begin to look at the chain homotopy coproduct ∨ the derivation or infinitesimal bialgebra compatibility appears. 
According to [Gan] the derivation or infinitesimal bialgebra compatibility is related to the notion of module or Frobenius compatibility via Koszul duality (see appendix). 12 DENNIS SULLIVAN Now we are entering into a third stage-the proposal of Segal (and in- dependently Getzler) enriching the earlier notion of TQFT by chain com- plexes and chain homotopies. Recall the free loop space above gives on the ordinary (chain) homology level a (strong homotopy) commutative associative product and a cocom- mutative coassociative coproduct (cutting at a fixed time) satisfying the module or Frobenius compatibility. This together with the associative Frobenius category above for open strings fits with the model [M]. In that model ordinary and equivariant levels are not distinguished. We saw that passing to the equivariant setting the product and the cutting at variable time gave a Lie bialgebra in the reduced theory. Ac- cording to [Gan] Lie dialgebras with Drinfeld compatibility are related to commutative dialgebras with Frobenius compatibility by Koszul duality (see appendix). §4 Summary: We have described the part of the interpretation of open and closed string field theory in topology associated to the basic product and coproduct (and in the equivariant setting certain implied n-variable splitting and collision operators as in [3]). The coproduct discussion has two levels involving a coproduct ∨t and an associated chain homotopy coproduct ∨. We found the open string product and the coproduct ∨t satisfied the module or Frobenius compatibility on the level of homology. In a setting where ∨0 and ∨1 were zero or even deformable to zero, ∨ emerges as or can be deformed to a coproduct commuting with ∂ and thus a coproduct OPEN AND CLOSED STRING FIELD THEORY INTERPRETED IN CLASSICAL ALGEBRAIC TOPOLOGY13 ∨ on homology of one higher degree. Then a new compatibility with the product is observed- the derivation or infinitesimal bialgebra compatibility (true transversally). Similarly for the closed string one has to consider the free loop space in both the ordinary and equivariant versions. For the open string with diag- onal boundary conditions the relevant ordinary (chains) homology of the free loop space becomes a (strong homotopy) commutative dialgebra with the module or Frobenius compatibility. Passing to the equivariant theory required for the closed string interpretation and reducing to kill ∨0 and ∨1 which makes ∨ commute with ∂, the product coproduct pair becomes a Lie dialgebra with the derivation or Drinfeld compatibility (equals Lie bialgebra). According to [Gan] the associative and commutative dialge- bras with the module or Frobenius compatibility are respectively Koszul dual to the associative and Lie dialgebras with the derivation or Drinfeld compatibility. This suggests that one of the structures will intervene in descriptions of strong homotopy versions (in the sense of Stasheff) of the dual structure (see appendix). One can go further as discussed in [3] and visualize conjecturally all the above collision and splitting operations of the closed string theory c2, c3, ..., s2, s3, ... defining on homology a structure Koszul dual to the positive boundary version of the Frobenius manifold structure described in [Manin]. 14 DENNIS SULLIVAN The above is only a partial interpretation. The full interpretation of open closed string field theory in topology involves full families of arbi- trary cutting and reconnecting operations of a string in an ambient space M. 
For closed curves some full families of these operators were labelled combinatorially by decorated even valence ribbon graphs obtained by collapsing chords in [3]. There is a serious compactness issue for the full families discussed there for realizing these in algebraic topology. The issue is a correct computation of the boundary. The problem has a parallel with renormalization in Feynman graphs. For the compactness algebraic topology issue one needs to associate operators to families of geometric graphs where various subgraphs are collapsing. When all the components of the collapsing subgraphs are trees there is no real problem as discussed in [3]. Similarly for Feynman graphs it is my understanding that if there were only tree collapses there is no problem of renormalization.
In both cases, algebraic topology transversality normal bundle and Feynman graphs, the loops in collapsing subgraphs cause the problems.
In [3] we had to deal with some simple cases of one loop subgraph collapses to treat the identities defining the Lie bialgebra (in particular Drinfeld compatibility). This led to the idea of using the Fulton-MacPherson compactification of configuration spaces to complete the discussion. There is a normal bundle issue related to transversality which requires more analysis to treat the general FM stratum. However for disjoint unions of graphs with at most one loop per component this normal bundle for transversality can be easily described as in [3].
Now we expect a Riemann surface discussion to be sufficient to complete the string field theory transversality construction. This will complete the definition of the operations for this topological interpretation of open closed string field theory. The idea is that 1) a general cutting and reconnecting operation on strings is isomorphic to the change in level that occurs when passing through a critical level of a harmonic function on a Riemann surface, and 2) geometrical ideas due to Thurston and then Penner [P] allow an analysis of the combinatorial compactifications of spaces of Riemann surfaces in terms of ribbon graphs.
Thus if the transversality cutting and reconnecting operations of the string field theory interpretations are organized by ribbon graphs, then the compactness and transversality normal bundle issues discussed in [3] can be treated for open and closed strings. This is work in progress.
Appendix (dialgebras and compatibilities): Let us call a linear space V with two maps ∧ : V ⊗ V → V and ∨ : V → V ⊗ V a dialgebra. Associative dialgebra means ∧ is associative and ∨ is coassociative. Commutative dialgebra means besides being associative, ∧ and ∨ are symmetric. Lie dialgebra means both maps are skew symmetric and that the Jacobi and co-Jacobi identities hold. In all these cases V and V ⊗ V have module structures over V and there are two kinds of compatibilities between ∧ and ∨ relative to these. We get six kinds of structures (five appear in this paper, see the table below) which are examples of definitions of algebras over dioperads [Gan]. These are structures whose generators and relations are described diagrammatically by trees.
The familiar example of a compatibility studied by Hopf, that ∨ is a map of algebras (associative or commutative case but not Lie), can only be described by a non tree diagram.
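For readers less used to these definitions, the axioms just listed can be written out explicitly. The following is a compact restatement in standard notation, with τ the flip on V ⊗ V and σ the cyclic permutation on V ⊗ V ⊗ V; it adds nothing beyond the definitions above.

\[
\begin{aligned}
&\text{associative dialgebra:} && \wedge\circ(\wedge\otimes 1) = \wedge\circ(1\otimes\wedge), \qquad (\vee\otimes 1)\circ\vee = (1\otimes\vee)\circ\vee,\\
&\text{commutative dialgebra:} && \wedge\circ\tau = \wedge, \qquad \tau\circ\vee = \vee \quad \text{(in addition to the above)},\\
&\text{Lie dialgebra:} && \wedge\circ\tau = -\wedge, \quad \tau\circ\vee = -\vee, \quad \wedge\circ(\wedge\otimes 1)\circ(1+\sigma+\sigma^{2}) = 0, \quad (1+\sigma+\sigma^{2})\circ(\vee\otimes 1)\circ\vee = 0.
\end{aligned}
\]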
The compatibilities we consider here are derivation compatibility
∨(a · b) = (∨a) · b + a · ∨(b)
and module compatibility
∨(a · b) = ∨(a) · b = a · ∨(b).
Table with names of compatibility and/or structure and/or examples:
                      | Module compatibility                                                                              | Derivation compatibility
Associative dialgebra | Frobenius compatibility ⇐ Frobenius algebra = associative algebra with non degenerate invariant inner product | infinitesimal bialgebra compatibility = infinitesimal bialgebra (see Aguilar)
Commutative dialgebra | Frobenius compatibility ⇐ commutative Frobenius algebra                                            | commutative cocommutative infinitesimal bialgebra
Lie dialgebra         | Frobenius compatibility ⇐ Lie algebra with non degenerate invariant inner product                  | Drinfeld compatibility = Lie bialgebra
Here the · refers to the algebra structure or the module structure (which means in the associative case a · (b ⊗ c) = (a · b) ⊗ c, (a ⊗ b) · c = a ⊗ (b · c), and in the Lie case a · (b ⊗ c) = −(b ⊗ c) · a = [a, b] ⊗ c + b ⊗ [a, c], where [x, y] = ∧(x ⊗ y)).
In [Gan] Koszul dual pairs are defined and there it is proved that upper left and upper right are Koszul dual pairs and that middle left and lower right are Koszul dual pairs. We suppose that the lower left and middle right are also Koszul dual pairs.
We note in passing a remark about derivation or Drinfeld compatibility and algebra or Hopf compatibility. A category of "power series" Hopf algebras U was shown to be equivalent to the category of Lie bialgebras D, where D → U was a formal quantization and U → D was a semi classical limit (Etingof-Kazhdan).
We emphasize these Koszul relations because in several important situations a strong homotopy algebraic structure of one kind is very naturally expressed by freely generated diagrams decorated with tensors labeled by the Koszul dual structure. In the above discussion all the structures that are true transversally will almost certainly lead to strong homotopy versions on the entire space of states. So these might be expressed in this graphical Koszul dual way.
References:
[1] K. Fukaya, Oh, Ohta, Ono, "Lagrangian intersection Floer theory - anomaly and obstruction" (2000). See Fukaya website.
[2] M. Chas and D. Sullivan, "String Topology", GT/9911159. Annals of Mathematics (to appear).
[3] M. Chas and D. Sullivan, "Closed string operators in topology leading to Lie bialgebras and higher string algebra", GT/0212358. Abel Bicentennial Proceedings (to appear).
[G] E. Getzler, "Operads and moduli spaces of genus zero Riemann surfaces". In: The Moduli Spaces of Curves, ed. by R. Dijkgraaf, C. Faber, G. van der Geer, Progress in Math, vol. 129, Birkhauser 1995, 199-230.
[M] Greg Moore, "Some Comments on Branes, G-Flux, and K-theory", Part II and references to Segal notes therein. International Journal of Modern Physics A. arXiv:hep-th/0012007 v1, 1 Dec. 2000.
[Gan] Wee Liang Gan, "Koszul duality for dioperads", preprint, University of Chicago 2002, QA/0201074.
[Goldman] William M. Goldman, "Invariant functions on Lie groups and Hamiltonian flows of surface group representations", Invent. Math. 85 (1986), no. 2, 263-302.
[Manin] Yuri Manin, "Frobenius Manifolds, Quantum cohomology and moduli spaces", AMS Colloquium Publications, Vol. 47.
[Cohen-Jones] "A homotopy theoretic realization of string topology", math GT/0107187.
[Tradler] "The BV Algebra on Hochschild Cohomology Induced by Infinity Inner Products", GT/0210150.
[S] James Stasheff, "H-spaces from a homotopy point of view", Lecture Notes in Mathematics 161, Springer-Verlag, Berlin (1970), ii-95.
[Chas] Moira Chas, "Combinatorial Lie bialgebras of curves on surfaces", to appear in Topology. Also arXiv GT/0105178.
[P] R. C. Penner, "The decorated Teichmuller space of punctured surfaces", Communications in Mathematical Physics 113 (1987), 299-339.
CUNY Graduate Center, 365 Fifth Avenue, New York, NY 10016
SUNY at Stony Brook, Stony Brook, NY 11794-3651
email: [email protected]
synthetic_cpt
1
VLMs_meet_UDA_Boosting_Transferability_of_Open_Vocabulary_Segmentation_with_Unsupervised_Domain_Adaptation.pdf
VLMs meet UDA: Boosting Transferability of Open Vocabulary Segmentation with Unsupervised Domain Adaptation
Roberto Alcover-Couso, Marcos Escudero-Viñolo, Juan C. SanMiguel and Jesus Bescos
Video Processing and Understanding Lab, Escuela Politécnica Superior, Universidad Autónoma de Madrid, 28049 Madrid, Spain. E-mail: [email protected], {marcos.escudero, juancarlos.sanmiguel, j.bescos}@uam.es
December 13, 2024
Abstract
Segmentation models are typically constrained by the categories defined during training. To address this, researchers have explored two independent approaches: adapting Vision-Language Models (VLMs) and leveraging synthetic data. However, VLMs often struggle with granularity, failing to disentangle fine-grained concepts, while synthetic data-based methods remain limited by the scope of available datasets. This paper proposes enhancing segmentation accuracy across diverse domains by integrating Vision-Language reasoning with key strategies for Unsupervised Domain Adaptation (UDA). First, we improve the fine-grained segmentation capabilities of VLMs through multi-scale contextual data, robust text embeddings with prompt augmentation, and layer-wise fine-tuning in our proposed Foundational-Retaining Open Vocabulary Semantic Segmentation (FROVSS) framework. Next, we incorporate these enhancements into a UDA framework by employing distillation to stabilize training and cross-domain mixed sampling to boost adaptability without compromising generalization. The resulting UDA-FROVSS framework is the first UDA approach to effectively adapt across domains without requiring shared categories.
1 Introduction
Semantic segmentation, the task of assigning a categorical label to every pixel in an image, is critical for multiple applications. However, its class diversity is constrained by the number and nature of the annotated classes learned during training. To handle this limitation, two approaches have been explored: (1) training on synthetic domains, where classes can be created at will and instances are annotated at creation, with the so-trained models later adapted to the target domain; and (2) leveraging the foundational encoded knowledge of Vision Language Models (VLMs) by adapting their outcomes to a segmentation setting.
First, Unsupervised Domain Adaptation (UDA) [1, 2, 3] aims at leveraging synthetic datasets, where simulated environments enable the automatic generation of pixel-perfect annotations at a fraction of the cost of manual labeling [4, 5, 6]. While UDA has shown remarkable success in mitigating domain shifts between tasks and data domains, it has a significant drawback: a limited ability to adapt to new categories absent from the synthetic dataset [7]. A potential solution would be to regenerate the synthetic data with the new categories, but the models would have to be re-trained, as UDA models are notoriously overspecific [8]. This limitation impairs the performance of UDA-based models in real-world applications where coping with unseen categories is crucial.
Second, recent advancements in open vocabulary semantic segmentation (OVSS) have shown that VLMs can be adapted to perform dense prediction tasks. However, these methods typically require task-
specific data for effective fine-grained adaptation to semantic segmentation (see Figure 1). Unfortunately, during this adaptation process, generally conducted by fine-tuning, the foundational capabilities of the model are catastrophically forgotten [9, 10]. The practical applicability of these models is therefore limited, as a technique enabling adaptation without reducing the generality of the prior features has yet to appear [11, 12].
Figure 1: State of the art for open vocabulary semantic segmentation underperforms when trained with small training sets. Results of CAT-Seg [13] trained on random subsets derived from three popular datasets with different amounts of images across three random seeds (maximum-minimum range depicted by the shadowed area). Performance evaluated on the COCO validation set [14].
While these challenges have been tackled separately, in this paper we show that their solutions can be mutually reinforcing. Our key insight is two-fold. First, UDA principles and techniques can dramatically reduce the data requirements for the fine-grained adaptation of VLMs by providing efficient mechanisms for employing unlabelled data and preserving the global knowledge from VLM pre-training. Second, the rich semantic understanding and open vocabulary capabilities of VLMs can help UDA methods evolve from their closed-set constraints, enabling them to handle novel categories through zero-shot recognition capabilities.
Based on these challenges, our contributions can be organized into three main areas:
• Open Vocabulary Model Enhancements: We introduce a novel decoder architecture that leverages convolutional layers to aid the transformer layers. Additionally, our fine-tuning strategy is designed to preserve the integrity of the VLM's pre-trained weights, avoiding catastrophic forgetting while enhancing pixel-level predictions (see Figure 2a).
• Textual Relationship Improvements: To improve the cross-dataset generalization ability of our model, we propose a concept-level prompt augmentation strategy. By using Large Language Models (LLMs) and providing specific instructions for annotators, we generate diverse and contextually enriched textual prompts. Our approach enhances the model's ability to recognize categories through semantic relationships across datasets (see the comparison with [13] in Figure 2).
• Synergy between UDA and Open Vocabulary: We bridge the gap between UDA and OVSS by developing a unified framework that eliminates the need for shared categories between the source and target domains, making our UDA framework the first to enable models to recognize and segment objects beyond the categories encountered during training. By combining the domain generalization capabilities of UDA with the flexibility of OVSS, we train models which are highly effective on the target dataset while preserving their generalization prowess (see Figures 2b and 2c, UDA-FROVSS).
Altogether, these contributions enable open vocabulary models to significantly benefit from the use of unlabeled images common in UDA, thus enhancing their performance and generalization across diverse datasets. As a result, the models trained using FROVSS and UDA-FROVSS demonstrate superior segmentation accuracy on the training dataset as well as enhanced transferred performance in previously unseen domains (see Figure 2).
The proposed methods are extensively evaluated across multiple semantic segmentation datasets, showcasing improvements across all analyzed benchmarks: PAS-20 (↑ 2.0% mIoU), COCO (↑ 3.2% mIoU), ADE-20 (↑ 7.9% mIoU), Pascal (↑ 17.2% mIoU) and Cityscapes (↑ 22.1% mIoU). On top of them, we also establish a new benchmark for UDA in the Synthia-to-Cityscapes setup, surpassing the previous state-of-the-art frameworks by over 8% in mIoU.
Figure 2: Visual summary of contributions. (a) Our proposed model improves SOTA models for the default setup of COCO training. (b) Urban scenes datasets improve specificity at the cost of reduced generalization (see Cityscapes and COCO performance). (c) Scene parsing datasets improve specificity at the cost of reduced generalization (see ADE and COCO performance). In Figure 2a, we showcase the benefits of FROVSS in the standard OVSS setup (trained on the COCO dataset and evaluated on multiple datasets). Figures 2b and 2c illustrate the major challenge we tackle: training with task-specific datasets (Cityscapes in 2b and ADE in 2c) drastically reduces the generalization of the model. To overcome this issue, our proposed combination of UDA and OVSS (UDA-FROVSS) presents high performance on task-specific datasets while preserving generalization across other datasets (see UDA-FROVSS in 2b and 2c). Note that these UDA models do not require labels for the task-specific dataset.
The rest of the paper is organized as follows: Section 2 reviews related work on OVSS and UDA; Section 3 details the proposed OVSS architecture, prompt definition, and UDA training framework; Section 4 covers experimental results, focusing on both generalization with a single dataset and specialization to an unlabeled set (UDA); finally, Section 5 discusses insights from the experiments and proposes future research directions for OVSS.
2 Related Work
Open Vocabulary Semantic Segmentation. Early CLIP [18] extensions to pixel-level prediction primarily involved its use as an image classifier with masked regions [19, 20, 21, 22, 23, 24]. These methods typically utilized a region proposal network to identify image segments, which were masked and classified using a static CLIP model. This process required processing each segment through the CLIP encoder, making it computationally intensive. Recent studies [25, 26, 13, 27, 11] have shifted focus to harnessing CLIP's features for more detailed, object-level semantic analysis. Initial efforts used CLIP's attention maps to create segmentation maps [25, 26], which were classified based on similarity values. However, as CLIP is trained for global image representations, these methods struggled to capture intricate image details. Alternative approaches [28, 29, 30] introduce category-agnostic region proposals as support inputs to the CLIP encoder. This process has the limitation of losing the global image context, as each support region is processed independently. In contrast, MaskCLIP [11] addressed this by modifying CLIP's final pooling layer to extract dense features directly.
To obtain accurate dense features, they fine-tuned the query and key embeddings responsible for spatial representation. Building on this, CAT-Seg [13] proposes to rely on similarity maps between these dense visual features and textual category descriptions as inputs for a decoder, thereby avoiding the direct CLIP optimization that may compromise the open vocabulary capabilities of the image backbone [11, 12]. While the CAT-Seg method is effective for transferring the open vocabulary capabilities of CLIP to the segmentation task, the added decoder and the modified layers of CLIP are domain-specific. Consequently, it underperforms in scenarios subject to semantic distribution shifts. To address this, we first propose defining robust text embeddings by incorporating synonyms and annotator instructions to refine and focus the semantic meaning of each category name within each dataset. Second, we introduce a layer-wise learning rate to enhance the fine-tuning of VLMs, mitigating catastrophic forgetting. Finally, we design a decoder that combines convolutional and transformer layers, enabling efficient adaptation of VLMs to dense prediction tasks. Together, these techniques form the foundation of our FROVSS method.
Unsupervised Domain Adaptation. Addressing the challenge of applying open vocabulary segmentation to unseen datasets, we advocate the use of UDA techniques [31, 32, 33]. UDA leverages knowledge from a source-labeled domain to train models that can effectively generalize to unlabeled target domains by coping with a covariate distribution shift. To that end, pseudo-labels [34] from a teacher model are used to guide the learning of the model being trained (i.e., the student) [35, 36, 37, 38, 7, 39]. As UDA frameworks typically lack a reliable teacher, the common choice is to define the teacher model as an exponential moving average of the student's weights. This allows integrating learnt knowledge on the fly, while mitigating concept drift by limiting the impact of pseudo-labels on the teacher model [40]. Concept drift is the phenomenon where accumulating inaccuracies, particularly false positives, misguide the model's learning trajectory [41, 42]. Among other canonical UDA strategies, cross-domain mixed sampling (image mixup) randomly overlays images from both domains to force the model to learn domain-invariant features [43, 44, 45, 46, 47]. Moreover, domain randomization introduces controlled variability in the training data, such as changes in lighting, textures, and other environmental factors; additionally, it proposes overlaying source objects on target images to further introduce variability [47, 44]. These variations ensure that the model remains adaptable even when faced with data different from the training set. We blend these UDA techniques into our VLM-based OVSS method, UDA-FROVSS, yielding large performance gains across all analyzed domains.
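To make the cost-volume idea referenced above more concrete, the following is a minimal sketch (not the authors' implementation) of how dense, patch-level image features can be compared against category text embeddings with cosine similarity; the tensor names and shapes are illustrative assumptions.

import torch
import torch.nn.functional as F

def cosine_cost_volume(visual_feats: torch.Tensor, text_embeds: torch.Tensor) -> torch.Tensor:
    """Build a similarity volume between dense visual features and category text embeddings.

    visual_feats: (H*W, D) patch-level features from a VLM image encoder.
    text_embeds:  (N, M, D) embeddings for N categories, each described by M prompts.
    Returns:      (H*W, M, N) cosine similarities, one map per prompt and category.
    """
    v = F.normalize(visual_feats, dim=-1)      # (H*W, D)
    t = F.normalize(text_embeds, dim=-1)       # (N, M, D)
    # For every spatial location i, prompt m and category n, take the dot product.
    return torch.einsum("id,nmd->imn", v, t)   # (H*W, M, N)

# Example with made-up sizes: a 24x24 patch grid, 8 categories, 5 prompts each, 512-dim features.
if __name__ == "__main__":
    cost = cosine_cost_volume(torch.randn(24 * 24, 512), torch.randn(8, 5, 512))
    print(cost.shape)  # torch.Size([576, 5, 8])

Such a volume is what a lightweight decoder can then refine into per-pixel class scores, which is the role played by the decoders discussed in this section.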
Robust Text Embeddings To improve open vo- cabulary capabilities, state-of-the-art methods [18, 48] employ multiple descriptions of the image to gen- erate a mean embedding. These mean embeddings are supposed to be more reliable descriptors of the object as the only common factor of the sentences is the text identifying the target object, therefore reduc- ing the noise introduced into the text embedding of the prompt. Differently, the state-of-the-art method for open vocabulary segmentation [13] does not em- ploy mean text embeddings and computes similarity on each prompt, thus neglecting the cascaded ben- efits of better textual representations. To overcome this drawback we employ different prompt augmen- tation techniques based on object characteristics, e.g. “A photo of a Vintage car”, and descriptions for an- notators. 3 Method In this paper, we enhance the adaptability of CLIP to interpret semantics at pixel level across novel do- mains. We initialize the training with a pre-trained VLM as it contains rich semantics learned from large- scale training. We follow [11] and modify the last layer of the image encoder to obtain dense image fea- tures which are combined with text embeddings to generate cost volume embeddings as in [13]. These cost volumes are then fed to a decoder to gener- ate segmentation maps (see Figure 3). Additionally, to improve performance we propose paraphrasing en- 4 Figure 3: Proposed decoder for open vocabulary semantic segmentation, exemplified with the category: “car”. We guide segmentation by refining the similarities between dense features extracted from the image encoder and the text features (C). semble to generate robust text embeddings, these ro- bust text embeddings can be applied during the train- ing or the testing phase to improve performance. 3.1 Open Vocabulary Semantic Seg- mentation Problem overview In open vocabulary semantic segmentation, the task is to map each pixel of an input image x to a label from a set of N potential categories P = {P1, ..., Pn, ..., PN }, where for each n- th category, Pn = {p1,n, ..., pm,n, ..., pM,n} is a set of M different text prompts pm,n (equal M for all cate- gories). This task expands upon traditional semantic segmentation by being able to cope with unseen cat- egories during inference, posing a challenge beyond conventional segmentation capabilities. Fine-Tuning of CLIP Prior works utilizing CLIP [11, 13, 9, 10, 12] highlight that conventional fine- tuning of the image encoder can degrade performance due to potential misalignment of the image and text encoders. Therefore, guided by the insight that tun- ing layers responsible for spatial interaction (e.g., at- tention layers and positional embeddings) suffices for transferring image-level representations to pixel-level [49], we freeze the MLP layers in the encoder. Ad- ditionally, we follow the hypothesis that deeper lay- ers in the image encoder encapsulate task-specific fil- ters, while shallow layers represent task-agnostic fil- ters which should be less tuned to obtain optimal performance [50, 51]. Therefore, we propose to de- crease the learning rate in each layer by a factor of β with respect of the previous layer: lrl ← lrl+1 · β, (1) where lrl is the learning rate assigned to layer l and β a training hyper-parameter. For the framework we only set the initial learning rate of the last layer of the encoder and then the learning rate is propagated to the following layers. 
Language-Guided Cost Volume and Semantic Decoding The vision-language pre-training allows for aligned vision and language embeddings; such alignment enables the reasoning of semantics of the image given a description. We employ a decoder to disentagle the relationships derived from the VLM into accurate pixel-level predictions. To do so, first we modify the CLIP image encoder as per [11], so that it generates patch-level image features by re- moving the attention pooling of the last layer of the ViT encoder [52]. For each input image x of H × W spatial resolution, the modified CLIP image encoder ΦV (x) processes image patches of size k × k to obtain sets of visual features {EV i , i ∈ [1, H ′ × W ′]}, with k and W ′ = W H ′ = H k . Regarding the text branch, the encoder ΦL(P) is kept unaltered, yielding a set 5 TextEncoder (Φ𝐿𝐿)A photo of a…A photo of a…A photo of a…A photo of a…A photo of a carImageEncoder (Φ𝑉𝑉)CAuxiliary ImageEncoderSwin (𝒮𝒮)UP (𝒰𝒰)......C’C’’Linear (𝒦𝒦)FeaturesFrozenTrainableASPP2D ConvolutionProposed decoder (𝒟𝒟)Fine-tuneD𝐸𝐸𝐿𝐿𝜀𝜀1𝒜𝒜𝑛𝑛𝐸𝐸𝑉𝑉1D Convolution...... of text features EL category of the same size as EV i . m,n for each text description and We define a cost volume embedding C ∈ R(H ′×W ′)×M ×N as the cosine similarity [53] between EV i and EL m,n: Ci,m,n = · EL EV i ∥EV i ∥∥EL m,n m,n∥ . (2) These cost volumes are fed into a decoder D, whose main objective is to refine the similarities ex- tracted from the text and image encoders. Our de- coder is composed of a spatial refinement module followed by a semantic reasoning module. Initially, each category-prompt-spatial feature C:,:,n is embed- ded into a D hidden dimension by means of a a con- volution yielding C ′ ∈ R(H ′×W ′)×D×N . To aid in the spatial reasoning, we incorporate a residual ASPP module to train long-range and multi-scale context relations across all spatial re- gions of these similarity maps C ′ :,:,n. This mod- ule incorporates these relationships yielding C ′′ ∈ R(H ′×W ′)×D×N . Visual Guidance Branch Our framework may incorporate auxiliary image feature embeddings E for spatial structure or contextual information through additional image encoders. These embeddings are concatenated with the spatially-refined cost volume features and processed by two Swin Transformer blocks S: text embeddings of each category. Empirically, we set K as a linear transformer: F ′ i,:,n = K([Fi,:,n; An]). (4) Up-Sampling Module Given that the underlying CLIP visual transformer encoder operates on a k × k times smaller feature resolution than the input, the similarity volume F ′ is up-sampled by a module U to recover the original image resolution. This module is the concatenation of bilinear up-sampling followed by a set of transposed convolutions. This process it- erates as many times required 1 to yield an output of the same resolution as the input image for each hid- den dimension and category: F ′′ ∈ R(H×W )×D×N . :,d,n = U(F ′ F ′′ :,d,n) (5) Incorporating visual guidance [54], the auxiliary feature embeddings E are concatenated with F ′ and processed by a convolutional layer to return to the hidden dimension D before the up-sampling mod- ule. These features are concatenated and processed in the same order they are extracted. Intuitively, fol- lowing the extraction order in the auxiliary decoder refines finer details of the image. 
Finally, each cate- gory’s similarity F:,n ∈ R(H×W )×N is computed for each spatial position by the sigmoid activation of a weighted combination of the hidden dimension D in F ′′, implemented as a learnable 1 × 1 convolution. F:,d,n = S([C ′′ :,d,n; E]), d ∈ [1, D]. (3) 3.2 Prompt Definition The Swin Transformer blocks combine spatial infor- mation from the auxiliary encoder with relational fea- tures from the cost volumes, enabling the model to fill gaps and refine semantic structures from the VLM based on spatially-aware context. Text Guidance Branch To reinforce the textual guidance, we incorporate a semantic reasoning mod- ule that reinforces the relationships among the tex- tual descriptions in F without further modifying the spatial ones. Through a linear kernel K, we leverage our prompt-augmented semantic anchors An (see sec- tion 3.2) obtained as a linear combination of the M We find two major drawbacks in prompt definition of the current CLIP-based segmentators. First, they do not employ mean text embeddings nor prompt augmentations to generate more reliable text embed- dings [13]. Second, they do not account for conflicting category names given by different datasets, e.g. the Cityscapes dataset [55] differentiates between the cat- egories rider and pedestrian , whereas other datasets group both under the category person, without ac- counting for the situational context of the individual. To address the identified issues, we incorporate the descriptions provided to annotators into the text 1For instance for P = 16 two iterations are required. 6 (a) Robust per-prompt embeddings (b) Prompt unification Figure 4: Prompt augmentation pipeline. prompts for each category. These descriptions of- fer additional detail on how each category is defined in the target dataset, enhancing specificity without compromising generalization to other datasets. Ad- ditionally, we use LLMs [56] to generate synonyms for each category to improve concept robustness. Dupli- cate synonyms are removed to maintain category dis- tinction. Furthermore, we use LLM-generated varia- tions to create multiple versions of each prompt (see Figure 4a). This approach enables a novel prompt- level augmentation protocol, leveraging robust text embeddings for each prompt instead of averaging across prompts. We propose to define augmentations based on: • Object characteristics augmentation. These augmentations are based on including different adjectives before the class name into the prompt, e.g. “A photo of a Vintage car”. should provide These object characteristics robustness to the concept as these variations are averaged with the concept as a common anchor. • Photometry of the image augmentation. At the end of the prompt followed by a comma we include visual characteristics of the global im- age, e.g. “A photo of a car, with High Contrast”. These characteristics may be useful to provide robustness towards style changes. • Background characteristics augmenta- tion. We include positional information into the prompt, e.g. “A photo of a car in the Country- side”. These characteristics can be useful for extrapolating from datasets captured on specific geographical points, such as Cityscapes captured 7 on Germany [55], to more diverse datasets as Mapilliary [57] captured on a global scale. n = {p′ Formally, we propose to generate A variations of each category textual description pm,n, generating an augmented set P ′ m,n,A}. Then, these are fed to the text encoder to extract text features. 
Before combining them with the visual ones in Equations 2 and 4, we calculate the mean of the augmented set of text features to obtain a more robust set of text features:
E^L_{m,n} = \frac{1}{A} \sum_{a=1}^{A} \Phi_L(p'_{m,n,a}).   (6)
To unify the responses across the different augmentations, a 1×1 convolution is trained, thereby relying on a learnable weighted combination (see Figure 4b).
3.3 Unsupervised Domain Adaptation
Our UDA-FROVSS framework, illustrated in Figure 5, integrates the improved CLIP capabilities into the UDA framework to utilize labeled images from the source domain to guide the learning on unlabeled images from the target domain. While UDA already achieves good segmentation quality on shared categories, current UDA frameworks struggle to segment target-private categories not present in the source domain. To address this, we propose to combine open vocabulary segmentation with UDA techniques into our UDA-FROVSS framework. Specifically, we adapt UDA's teacher-student training scheme and image mixup to enhance domain generalization by employing pseudo-labels generated by the teacher on the target dataset. Moreover, to preserve the open vocabulary capabilities of the final model, we propose to only update the teacher decoder, as updating the backbones leads to concept drifting, thus corrupting the open vocabulary reasoning outside the vocabulary employed during training. Therefore, the teacher preserves the open recognition at the cost of fine-grained classification, as its encoder remains unaltered.
Figure 5: Overview of UDA-FROVSS, which combines VLMs with UDA. Key components are illustrated within delineated boxes: (1) integration of a custom decoder alongside a fine-tuning strategy to effectively train the framework; (2) adaptation of UDA techniques, incorporating a teacher-student framework and image mixup for domain robustness; (3) generation of robust text embeddings for enhanced category recognition.
Problem Overview. In the context of domain adaptation, a model is trained on labeled source data and unlabeled target data. Training a model with source data typically leads to suboptimal performance when applied to target domain images. For the fine-grained adaptation of VLMs to OVSS, one must adapt the model to transfer the learned image-level representations to pixel-level ones, leading to domain specificity of the learned layers [11, 13].
Teacher-Student Framework. Our proposal employs a teacher-student framework built from two equally initialized VLMs, each composed of a decoder (Dτ and Ds for the teacher and student, respectively) and an encoder. The student model Ms learns through a cross-entropy loss on the source data, using each source image x with one-hot encoded label y:
L(x) = -\sum_{i=1}^{H \times W} y_i \log(M_s(x)_i).   (7)
Image Mix-Up. Our hypothesis is that CLIP inherently supports open vocabulary for broadly defined categories, so our focus is on training the head to discern shapes and align with CLIP's knowledge at a more detailed, fine-grained level. Therefore we follow a domain randomization protocol [58, 44, 47] to improve the shape understanding of the model.
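Before the mixing and pseudo-labeling rules are detailed in the next paragraphs, the following minimal PyTorch sketch illustrates how such a teacher-student step could be assembled (class-mix overlay, confidence-weighted target loss, EMA update of the teacher decoder). It is a simplification under assumed tensor shapes, not the authors' released code; `student` and `teacher` are assumed to map an image batch to per-pixel logits.

import torch
import torch.nn.functional as F

@torch.no_grad()
def ema_update(teacher_decoder, student_decoder, alpha=0.999):
    # Exponential moving average of the student decoder weights; only the decoder is updated.
    for t, s in zip(teacher_decoder.parameters(), student_decoder.parameters()):
        t.mul_(alpha).add_(s, alpha=1.0 - alpha)

def class_mix(src_img, src_lbl, tgt_img, tgt_pseudo, ignore_index=255):
    # Overlay roughly half of the source classes on top of the target image and merge the labels.
    classes = src_lbl.unique()
    classes = classes[classes != ignore_index]
    chosen = classes[torch.randperm(len(classes))[: max(1, len(classes) // 2)]]
    mask = torch.isin(src_lbl, chosen)
    mixed_img = torch.where(mask.unsqueeze(0), src_img, tgt_img)
    mixed_lbl = torch.where(mask, src_lbl, tgt_pseudo)
    return mixed_img, mixed_lbl

def uda_step(student, teacher, src_img, src_lbl, tgt_img, mu=0.96, gamma=0.5):
    # 1) Supervised loss on the labeled source image.
    loss_src = F.cross_entropy(student(src_img.unsqueeze(0)), src_lbl.unsqueeze(0), ignore_index=255)
    # 2) Pseudo-labels for the target image, blending teacher and student predictions.
    with torch.no_grad():
        probs = gamma * teacher(tgt_img.unsqueeze(0)).softmax(1) + \
                (1 - gamma) * student(tgt_img.unsqueeze(0)).softmax(1)
        conf, pseudo = probs.max(1)
        q = (conf > mu).float().mean()   # fraction of confident pixels, used as a loss weight
    # 3) Cross-domain mixed sample and its weighted loss.
    mixed_img, mixed_lbl = class_mix(src_img, src_lbl, tgt_img, pseudo[0])
    loss_mix = q * F.cross_entropy(student(mixed_img.unsqueeze(0)), mixed_lbl.unsqueeze(0), ignore_index=255)
    return loss_src + loss_mix

In this sketch only ema_update(teacher.decoder, student.decoder) would be called after each optimizer step, mirroring the choice of keeping the teacher encoder frozen.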
Following [47], we overlay half of the semantic instances in the source domain on top of target images to generate blended, data-augmented images x'. The blending is also performed to generate the labels y' by combining the ground-truth labels with the pseudo-labels. To cope with pseudo-label uncertainty, we employ a weighted cross-entropy loss for mix-up training:
L'(x') = -q \sum_{i=1}^{H \times W} y'_i \log(M_s(x')_i),   (8)
where q is the quality of the target image segmentation; q is computed through the confidence of the pixel-wise predictions of the teacher model by using a pixel-wise confidence threshold µ:
q = \frac{\sum_{i=1}^{H \times W} [\max(\hat{y}_{i,:}) > µ]}{H \cdot W}.   (9)
Teacher Update. Based on the observation that updating the teacher encoder improves performance on the source domain but diminishes the model's open vocabulary potential, we propose to keep the encoder of the teacher model unaltered, whereas the teacher decoder is updated through an exponential moving average (E.M.A.) of the student weights, implementing a temporal ensemble at every time step δ to stabilize predictions:
D^τ_{δ+1} ← α \cdot D^τ_δ + (1 - α) \cdot D^s_δ.   (10)
As the teacher decoder becomes increasingly unaligned with its encoder, we propose to progressively incorporate the student predictions into the pseudo-labels. Our aim is to first weight only the teacher predictions in the learning, and to seamlessly employ the student predictions once its learning stabilizes. We propose to implement this as a weighted linear combination of the teacher (Mτ) and student predictions:
\hat{y}_i = γ \cdot M_τ(x) + (1 - γ) \cdot M_s(x).   (11)
Initially, the generated pseudo-labels only take into account the teacher pseudo-label, as it has been shown that CLIP possesses somewhat reliable zero-shot segmentation capabilities [11]. To do so, we define γ as a parameter reduced at every time step:
γ_{δ+1} ← \frac{1}{δ \cdot γ_δ + 1},   (12)
where γ_0 is a training hyper-parameter. We select this update as it is a smoothed version of the teacher decoder E.M.A. update. Therefore, the more the teacher decoder changes, the less we trust its pseudo-labels, as its decoder will be increasingly unaligned with its encoder.
4 Experiments
4.1 Experimental Setup
Datasets and Evaluation. Our experiments explore seven distinct datasets for semantic segmentation, each used independently: COCO-stuff [14], Cityscapes (CS) [55], Mapillary (MAP) [57], ADE-20 [59], Pascal-Context (PC) [60], Pascal VOC [61] and Synthia [62]. Synthia is the only synthetic dataset. Cityscapes, Mapillary and Synthia are urban scenes datasets, while the others are general-purpose ones. For the Pascal VOC dataset, we present results in two formats: including the background (PASb), and focusing solely on object categories (PAS). Table 1 provides further details.
Table 1: Summary of the number of defined categories and subset sizes (in thousands) for the datasets used in the following experiments. "-" denotes the absence of a defined test set.
Dataset    | COCO | CS  | MAP | ADE-20 | PC | PAS | Synthia
Categories | 171  | 19  | 65  | 150    | 59 | 20  | 16
Train      | 118  | 3   | 18  | 20     | 5  | 1.5 | 9.5
Test       | 5    | 0.5 | 2   | 2      | 5  | 1.5 | -
As evaluation metric, we adopt the per-class mean Intersection over Union at pixel-level, mIoU = T P +F N +F P , where TP, FP and FN stand for true positives, false positives and false negatives respec- tively. T P Implementation details Our models are trained on a single GPU A40 with a per-pixel binary cross- entropy loss, batch size of 4, AdamW optimizer with a learning rate of 2 · 10−4 for the decoder and 2 · 10−6 for the CLIP image encoder, with weight decay of 10−4. The CLIP text encoder always remains frozen. Our models utilize ViT-B [52] as the CLIP image encoder (P = 16), and a Swin Trasformer [63] as the Method CS MAP ADE-20 PC COCO CAT-Seg [13] Proposed 67.3* 72.1 52.6* 53.4 46.8 53.4 62.4 67.4 45.3 48.8 Table 2: Decoder performance comparison against the best stage-of-the-art for open vo- cabulary semantic segmentation. Training and test data correspond to the same dataset. Not re- ported results are trained by us and indicated by: “*”. auxiliary image encoder. The image encoder remains frozen when initialized from pre-trained weights, if not, the learning rate is set to 2 · 10−4. All models are trained for 80k iterations. We set µ = 0.96 and β = 0.95 through initial exploration and D = 128 following [13]. 4.2 Results on Open Vocabulary Seg- mentation This subsection provides results for FROVSS without using any UDA techniques (see Section 3.1 and Fig- ure 3), following standard protocols for comparisons. Open Vocabulary Segmentation Architecture. Table 2 compiles the results of our baseline decoder and compares them to the CAT-Seg architecture. Aditionally, Table 3 presents the full performance comparison with the baseline framework CAT-Seg [13]. Across all analyzed training and validation setups, our decoder improves or performs on-par with CAT- seg. These improvements are specially notorious on dense labelled datasets such as Cityscapes and Map- illary, suggesting that our proposed encoder does in fact improve segmentation of fine details and densly populated scenes. Methods Train CS MAP ADE-20 PC PAS PASb CAT-Seg* Decoder FROVSS CS CS CS 67.3 26.7 72.1 26.8 73.5 28.8 CAT-Seg* MAP 61.5 52.6 Decoder MAP 62.1 53.4 FROVSS MAP 62.9 53.6 CAT-Seg ADE-20 41.2* 20.6* Decoder ADE-20 42.8 22.3 FROVSS ADE-20 43.0 23.3 CAT-Seg Decoder FROVSS PC PC PC 35.8* 20.4* 36.1 21.9 37.4 22.4 CAT-Seg COCO 41.7* 22.6* Decoder COCO 44.0 23.1 FROVSS COCO 44.3 24.0 26.6 27.4 28.8 27.6 28.4 28.9 46.8 53.4 53.6 23.0 29.3 29.3 27.2 28.6 31.3 45.7 73.6 65.7 48.1 78.4 70.0 48.6 78.9 70.6 50.0 80.1 70.1 50.4 85.5 73.9 51.3 85.5 74.0 46.7 85.5 70.3 56.6 93.2 75.7 56.9 93.7 76.0 62.4 87.3 79.0 67.4 93.8 78.5 68.3 94.7 79.1 57.5 93.7 78.3 61.8 94.8 78.9 67.4 95.6 79.8 Table 3: OVSS performance comparison on dif- ferent settings. “Decoder” stands for our proposed decoder results and FROVSS stands for our decoder with the proposed prompt augmentation and finetun- ing strategies. Our models demonstrate remarkable generalization capabilities even on visually different datasets. The scores evaluated on the same dataset used for training are colored in gray for clarity. Not reported results are trained by us and indicated by: “*”. Test Train CS MAP ADE-20 PC Pas-20 PAS-20b 72.1 72.0 ✓ 72.3 Trained on Cityscapes 48.1 27.4 48.3 27.6 48.6 28.8 26.8 27.2 28.1 42.8 40.3 ✓ 43.0 Trained on ADE-20 56.6 53.4 22.3 56.8 53.0 22.9 56.9 53.6 23.3 78.4 78.9 78.9 93.2 93.2 93.7 ✓ ✓ ✓ ✓ 70.0 70.3 70.6 75.7 75.9 76.0 Prompt Augmentation Table 4 showcases the ef- fectiveness of prompt augmentation described in Sec- tion 3.2. 
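As a concrete illustration of the prompt-augmentation idea evaluated in Table 4, the sketch below shows one plausible way to build averaged ("robust") text embeddings from template and adjective variations; the template list and the text_encoder interface are assumptions, not the exact prompts or code used in the paper.

import torch

TEMPLATES = ["A photo of a {} {}.", "A {} {} in the scene.", "A close-up photo of a {} {}."]
ADJECTIVES = ["", "small", "large", "vintage", "partially occluded"]

def robust_text_embedding(category: str, text_encoder) -> torch.Tensor:
    """Average the embeddings of several prompt variations of one category.

    text_encoder is assumed to map a list of strings to an (L, D) tensor of
    text features (e.g., a frozen CLIP text tower).
    """
    prompts = [t.format(adj, category).replace("  ", " ") for t in TEMPLATES for adj in ADJECTIVES]
    feats = text_encoder(prompts)                      # (len(prompts), D)
    feats = feats / feats.norm(dim=-1, keepdim=True)   # normalize before averaging
    return feats.mean(dim=0)                           # (D,) robust embedding for this category

# Usage sketch with a dummy encoder standing in for CLIP's text tower.
if __name__ == "__main__":
    dummy = lambda prompts: torch.randn(len(prompts), 512)
    print(robust_text_embedding("car", dummy).shape)   # torch.Size([512])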
This technique can be applied during either the testing phase, the training phase, or both. Our findings indicate that utilizing prompt augmentation Table 4: Performance comparison of prompt augmentation within the proposed FROVSS method. Prompt augmentation is applied during only testing, training and testing or neither. 10 Evaluation dataset Prompt Aug Obj PH BG CS MAP ADE-20 PC Pas-20 PAS-20b Trained on: Cityscapes 48.1 47.9 48.1 47.6 48.6 72.1 26.8 72.1 27.2 72.1 25.7 ✓ 72.2 27.6 ✓ 72.3 28.1 70.0 70.3 69.0 70.0 70.6 78.4 78.9 74.4 78.4 78.9 27.4 27.6 27.4 27.7 28.8 ✓ ✓ ✓ Trained on: ADE-20 56.6 56.8 56.6 56.6 56.9 42.8 22.3 42.8 22.9 42.7 22.3 ✓ 42.9 23.1 ✓ 43.0 23.3 53.4 53.0 53.4 53.3 53.6 93.2 93.5 93.2 93.3 93.7 75.7 75.9 75.7 75.8 76.0 ✓ ✓ ✓ Table 5: Performance comparison of different prompt definition strategies. Prompt augmenta- tion is applied during training and testing. Key, Obj: Object, PH: Photometry, BG: Background. exclusively during testing enhances the model’s gen- erality at the expense of reduced specificity in rela- tion to the training dataset. This is attributed to the decoder’s increased specialization with the training- specific prompts. Conversely, implementing prompt augmentation throughout both training and testing phases enhances the model’s performance in terms of both specificity and generality. Moreover, Table 5 presents a comparison of perfor- mance across datasets using the proposed text aug- mentations studied. Notably, photometry of the im- ages do not improve performance, indicating that fur- ther research is warranted, as such approaches have shown value in image classification. We exclude re- sults for photometry combinations with alternative augmentations, as they also failed to enhance perfor- mance. Figure 6 compares the TSNE representation of the textual features employed. Notably, our robust text embeddings result in distinctly separated clusters for each class while maintaining logical inter-class re- lationships. For instance, while rider and pedes- trian categories are closely grouped, rider also aligns closely with bike and moto, whereas pedestrian is po- sitioned nearer to sidewalk. Moreover, Figure 7, illus- trates qualitatively scenarios where the prompt aug- 11 (a) w/o text ensemble. (b) Text ensemble. Figure 6: Visual comparison of the text embed- dings employed. TSNE representation of the text embeddings employed to describe the 19 Cityscapes semantic categories. Fine-tuning # Parameters CS ADE-20 PC Full model Only decoder Spatial[13] Proposed 0.3B 2.3M 30M 30M 69.9 70.2 72.1 73.5 50.2 51.8 53.6 54.1 61.0 63.1 67.4 68.3 Table 6: Ablation study on the fine-tuning strategy. Reported results for the training dataset validation set. # Parameters stands for the number of parameters tuned during training. mentation allows the model to discern the true cat- egory; The fence, book and the plaything on each of the three columns column, which the model trained without the proposed textual features fails to identify them in favor of more common categories. Finetuning of CLIP Table 6 compares different training protocols across three datasets suggesting that our fine-tuning enhances performance while pre- serving efficiency. We argue that applying a lower learning rate to the model’s early layers yields im- proves performance by avoiding drastic changes that lead to misalignment’s with the frozen text encoder. On the other hand, by selectively targeting at spatial relationship-targeted layers we reduce computational costs. 
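A minimal sketch of the layer-wise learning-rate schedule discussed above (lr_l = lr_{l+1} · β, applied to the image-encoder blocks) is given below; the way the encoder blocks are listed is illustrative and would depend on the concrete CLIP implementation, while the base learning rate and weight decay follow the implementation details reported in Section 4.1.

import torch

def layerwise_param_groups(encoder_blocks, base_lr=2e-6, beta=0.95):
    """Assign a learning rate that decays by a factor beta from the last block backwards.

    encoder_blocks: ordered list of modules (shallow -> deep) of the image encoder.
    Returns a list of parameter groups usable by any torch optimizer.
    """
    groups = []
    num_blocks = len(encoder_blocks)
    for idx, block in enumerate(encoder_blocks):
        # The deepest block gets base_lr; each earlier block is scaled down by beta.
        lr = base_lr * (beta ** (num_blocks - 1 - idx))
        groups.append({"params": block.parameters(), "lr": lr})
    return groups

# Usage sketch with toy blocks standing in for transformer layers.
if __name__ == "__main__":
    blocks = [torch.nn.Linear(8, 8) for _ in range(4)]
    optim = torch.optim.AdamW(layerwise_param_groups(blocks), lr=2e-6, weight_decay=1e-4)
    print([g["lr"] for g in optim.param_groups])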
Method Venue OV CS ADE-20 PC COCO FreeSeg [65] ECLIP [25] CLIP sgr [26] MasQCLIP [66] ICCV23 ✓ CVPR23 ✓ ICLR23 ✓ arxiv23 ✓ 31.4 - - - CLOUDS [22] ZegCLIP [67] arXiv23 × 60.2 CVPR23 × - 46.1 - 30.4 30.4 - - - - 29.3 57.8 - 46.5 42.9 25.8 35.2 47.3 - 40.7 Figure 7: Qualitative comparison of model trained and evaluated with (first row) and without (second row) prompt augmentation on the ADE-20 dataset. The enhanced robust text embeddings allow the model to correctly seg- ment the fence, wall and plant (first column), book (second column) and plaything (third column). Quantitative Comparison with State-Of-The- Art CLIP-based Segmentators To validate the first core contribution of our paper (FROVSS), we present Table 7 where the performance on different datasets of several CLIP-based segmentation meth- ods is compared. Additionally, in Table 8 we com- pare the performance of our framework by compar- ing with other OVSS frameworks all trained on the COCO-stuff dataset. Please note that our model out- performs all other proposals, even the ones that em- ploy additional foundational methods like [23] which employs SAM [64]. Segmentators Figure Qualitative Comparison with State-Of-The- Art CLIP-based 8 presents a qualitative comparison of the state-of- the-art OVSS model CAT-seg [13] on the Mapillary dataset, both trained only on the COCO dataset. We have selected this dataset as Mapillary is a densely segmented dataset across multiple labels. As our ASPP and prompt augmentation techniques are tailored to improve on densely labeled areas and recognition of visually similar categories, Mapillary stands as the best testing ground. Notably, our model outperforms CAT-seg both in recognition and level of segmentation detail. Specifically, in the 12 FROVSS ✓ 73.5 53.6 68.1 48.8 Table 7: CLIP-based segmentators perfor- mance comparison in supervised setting. Our models outperforms by large margins CLIP-based segmentators across five different datasets. Each model is trained and evaluated on the dataset indi- cated. Not reported results are indicated by: “-”. Method ADE-20 PC PAS-20 Venue FreeSeg [65] SegCLIP [19] OVSeg [28] SAN [23] HIPIE [68] CAT-seg [13] CVPR24 CVPR23 ICML23 CVPR23 CVPR23 NeurIPS23 24.6 - 24.8 27.5 29.0 27.2 - 24.7 53.3 53.8 - 57.5 91.9 52.6 92.6 94.0 63.8 93.7 FROVSS 31.3 67.4 95.6 Table 8: Performance comparison of open vo- cabulary semantic segmentation frameworks. Our method significantly outperforms current alter- natives by 8% mIoU, 17% mIoU and 2% on the ADE- 20, Pascal-Context and Pascal VOC datasets respec- tively. Not reported results are indicated by: “-”. first row results we find our model presents better segmentation of the vegetation, building and frame. Additionally, our model does not misclassify the traffic sign facing backwards with a bridge. On the Second row, we find better segmentation for vegetation, building and sky. Similarly, the center of the image is densely populated and our model is capable of accurately segment each class meanwhile, CAT-seg fails to segment the traffic sign, lights and frame. To further validate our models improved segmen- tation of fine details, we present a comparison in Fig- ure 9 on the Cityscapes validation set both trained only with the training set of the Cityscapes dataset. Moreover, to validate the improved recognition of (a) Color Image (b) CAT-Seg [13] (c) Ours Figure 8: Visual performance comparison of CAT-seg and our model on the Mapillary dataset, both trained only with COCO. 
Our model presents better segmentation of the vegetation, building and traffic light. Figure 9: Visual performance comparison of CAT-seg (first row) and our model (second row) both trained and evaluated only with the Cityscapes dataset. We find that our model presents better segmentation of rider, bicycle, fence, light (first column), wall, pole, pedestrian (second col- umn). Additionally, our model does not misclassify the sign (first column) and rider and bike with side- walk (third column). Figure 10: Visual performance comparison of CAT-seg (first row) and our model (second row) on the Pascal Context validation set, both trained only with the Pascal Context training set. Notably our model does not only find more instances but also is less prone to misclassify categories. Please note that people’s faces have been intentionally blurred to preserve anonimity. Encoder Objective ADE-20 CS MAP PC PASb No guidance ViT-L [52] Swin [63] ViT-L[52] Swin [63] - - - SAM [64] Classification 49.1 50.0 49.3 54.9 53.6 36.0 16.3 42.1 73.7 38.2 20.1 44.9 74.6 38.1 19.9 44.5 73.8 50.3 29.4 60.1 80.0 43.0 23.3 56.9 76.0 Table 9: Ablation study on the shape guidance image encoder for the ADE-20 dataset. Ran- domly initialized encoders are denoted by “-” on the Objective column. our model we showcase a comparison on the Pas- cal Context validation set in Figure 10 both mod- els trained only with the training set of the Pascal Context dataset. Visual Guidance Encoder Analysis Table 9 compiles the performances resulting from employing different auxiliary image encoders. We find that even untrained features help in the segmentation as ran- domly initialized auxiliary encoders outperform not it seems that the employing guidance. However, choice of randomly initialized encoder has little im- pact. On the other hand, as expected, pre-trained vi- sual encoders yield significantly better results. More- over, similarity with the pre-training task is also an important factor for performance, obtaining the best results with the SAM [64] backbone. Note that our reported results in the state-of-the-art comparison do not employ SAM guidance for fairness. 4.3 Results on OVSS enhanced with UDA In this subsection, we explore including UDA tech- niques (see Section 3.3 and Figure 5), leveraging un- labeled color images from a given target domain for training. 13 Teacher Mixup ASPP Prompt Finetuning mIoU × ✓ ✓ ✓ ✓ ✓ × × ✓ ✓ ✓ ✓ × × × ✓ ✓ ✓ × × × × ✓ ✓ × × × × × ✓ 36.2 44.7 51.1 54.8 60.0 61.5 Table 10: Ablation study on the Synthia-to- Cityscapes setup. Model trained with labeled Syn- thia images and unlabeled Cityscapes color images. mIoU evaluated on the Cityscapes validation set. Prompt stands for our proposed prompt augmenta- tion (see Figure 4) and Finetuning stands for our pro- posed finetuning (see Equation 1) Ablation Study on the UDA Techniques. Ta- ble 10 presents the ablation results for the differ- ent components presented in the paper. We notice that Prompt Augmentation and Image MixUp signif- icantly drive performance. First, the definition of the prompts help the transfer of categories cross-domain. Second, Image MixUp acts as a data augmentation technique helping the model learn the semantic edges of objects and improving its classification accuracy. As a limitation, we find that our model is not able to distinguish between terrain (not included in the Synthia dataset) and vegetation (semantically related and highly prevalent in the Synthia dataset). 
4.4 Teacher Update In our work we decided to only update the teacher de- coder. This is mainly due to two reasons: First, CLIP has shown remarkable zero shot segmentation perfor- mance [66]. Second, we have found that early itera- tions with AdamW optimizer lead to the model over- fitting to the training dataset. In our first approx- imation, we changed the optimizer to SGD, which helped alleviating such over-fitting. However, this seemed to affect the encoded knowledge, leading to worse performing models. Therefore, we opted to only update the decoder of the teacher while main- taining the encoder frozen. This helped the teacher model preserving the encoded knowledge of the tar- 14 (a) Full E.M.A. update of the teacher (b) Proposed Teacher-Student update Figure 11: Example of the full teacher update against our UDA-FROVSS proposed teacher update. Our model remains capable of segmenting the target private train category after training. get private labels thus allowing the student to learn from those target private categories. However, such update led to a misalignment between the teacher en- coder and decoder, that we aim to explore in future work. Thereby, we opted to temporally reduce the impact of teacher predictions in the student training, by only taking into account teacher predictions for the initial iterations and progressively incorporating the student predictions in the pseudo-labels, hence defining the combination of teacher and student la- bels described in equation 6 of the paper. Table 11 and Figure 11 motivate the employment of a specific teacher-student framework to preserve recognition of unseen categories. Performance Across Diferent Real Datasets. Table 12 presents the results of our UDA-enhanced models. As expected, performance improvements seem to be highly related to the target and source similarity. Notably, Cityscapes and Mapillary are closely related. Therefore, employing Cityscapes as the source domain presents the best performance on the Mapillary dataset. These results are sum- marized and compared with CAT-seg [13] in Fig- ure 12. Notably, CAT-seg models trained with a small dataset as VOC significantly underperform Method d a o r k l a w e d i s g n i d l i u b l l a w e c n e f e l o p t h g i l n g i s n o i t a t e g e v * n i a r r e t y k s n a i r t s e d e p r e d i r r a c * k c u r t s u b * n i a r t e l c y c r o t o m e l c y c i b mean 88.1 54.1 82.9 19.8 6.9 26.0 13.3 21.7 82.0 0.0 87.1 48.9 15.4 68.0 0.4 36.8 0.0 12.5 30.1 36.2 Source Only Full E.M.A. 93.5 63.1 86.1 43.8 24.3 21.8 35.5 39.0 85.2 0.0 91.0 60.3 31.2 82.5 59.4 61.9 0.1 31.4 49.5 50.5 UDA-FROVSS 96.1 68.7 88.8 52.4 36.5 28.9 47.6 46.5 87.9 0.0 93.0 70.1 38.9 89.1 80.3 80.2 60.2 45.6 58.8 61.5 Table 11: Per-class performance on the Synthia to Cityscapes domain adptation setup. Target private categories are indicated by “*”. Notably, the full update of the teacher model leads to the forgetting of the target private category: train. Source Target CS MAP ADE-20 CS MAP ADE-20 PC PAS-20b 73.5 72.6 72.5 44.6 18.0 33.0 53.8 24.6 24.2 7.6 32.3 31.7 54.1 30.1 19.5 PC PASb 49.8 54.4 57.4 68.3 39.5 70.2 74.1 77.8 80.6 84.9 Table 12: Performance comparison of the UDA- FROVSS models. Models trained on labeled source data and unlabeled target data, evaluated on the target validation set. Supervised performance is indicated in gray in the diagonal. as they overfit and lose generality due to the lack In comparison, our models preserve and of data. 
slightly increase performance on the domain (↑ 5% mIoU) while significantly improving performance across other datasets, in some instances even dupli- cating performance (Pascal-Context ↑ 200% mIoU). Comparison with State-Of-The-Art UDA Frameworks Table 13 compares the state-of-the- art performance for UDA frameworks on the Syn- thia to Cityscapes setup. Notably, our framework significantly outperforms alternative state-of-the-art frameworks. As our framework is open vocabulary, it handles a major drawback of segmenting all 19 Cityscapes categories despite only seeing 16 during training. Notably, we outperform previous state of the art methods by over 8%. However, there is still 15 (a) CAT-Seg [13] (b) UDA-FROVSS Figure 12: Performance comparison across five datasets against the state-of-the-art CAT-Seg [13] for Open Vocabulary Semantic Segmen- tation (OVSS). UDA-FROVSS correspond to our proposal for UDA based on VLMs. The numbers ad- jacent to dataset names indicate performance when training and testing with the same dataset. The numbers inside indicate testing results when mod- els are evaluated on datasets at the opposite ends of the chords. For instance, as seen in subfigure (b), our method achieves a 32 mIoU when trained on Cityscapes and tested on ADE-20. Cityscapes(67)Mapilliary(53)ADE-20(47)Pascal(62)VOC(81)47237945352661205070657026412721Cityscapes(74)Mapilliary(53)ADE-20(54)Pascal(68)VOC(85)56298140654550703373542578227432503225 Method Venue d a o r k l a w e d i s g n i d l i u b l l a w e c n e f e l o p t h g i l n g i s n o i t a t e g e v * n i a r r e t y k s n a i r t s e d e p r e d i r r a c * k c u r t s u b * n i a r t e l c y c r o t o m e l c y c i b mean MM[69] DIGA[70] MIC[39] DCF[71] T-ITS 24 88.5 51.0 87.8 38.6 7.4 52.3 56.3 55.5 87.5 0.0 90.5 73.6 51.0 88.6 0.0 64.4 0.0 54.5 60.4 53.1 CVPR 23 88.5 49.9 90.1 51.4 6.6 55.3 64.8 62.7 88.2 0.0 93.5 78.6 51.8 89.5 0.0 62.2 0.0 61.0 65.8 55.7 CVPR 23 84.3 45.6 90.1 48.8 9.2 60.8 66.8 64.4 87.4 0.0 94.4 81.4 58.0 89.7 0.0 65.2 0.0 67.1 64.1 56.8 ACM 24 93.4 63.1 89.8 51.1 9.1 61.4 66.9 64.0 88.0 0.0 94.5 80.9 56.6 90.9 0.0 68.5 0.0 63.7 66.6 58.4 UDA-FROVSS 96.1 68.7 88.8 52.4 36.5 28.9 47.6 46.5 87.9 0.0 93.0 70.1 38.9 89.1 80.3 80.2 60.2 45.6 58.8 61.5 Table 13: State of the art performance comparison of the Synthia-to-Cityscapes UDA setting. mIoU computed across the 19 Cityscapes categories. Our framework is the only available open vocabulary framework to tackle the Synthia-to-Cityscapes UDA setting. Therefore, we are the only framework capable of segmenting Cityscapes private categories, indicated by “*”. room for improvement. Thin and small categories such as pole benefit from lookup architectures like [36, 70, 39] that process multiple detailed crops of the image to refine fine-grained details of the segmenta- tion. Additionally, we present the current state-of- the-art multimodal framework for UDA (MM [69]) whos performance is 15% worse than ours, highlight- ing the strong performance of our framework. Finally, Figures 13a, 13c and 13b qualitatively compare of the segmentation of our UDA model. No- tably, it is capable of segmenting categories which state-of-the-art frameworks are not able due to ex- tremely low representation or being absent in the source dataset. UDA Improvements Across All Datasets Fig- ure 14 illustrates the performance gains of adapt- ing the student model on the two most challenging datasets (Synthia and PAS-20) to Cityscapes, ADE- 20 and COCO datasets. 
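For completeness, the per-dataset numbers discussed here are the mean Intersection-over-Union scores defined in Section 4.1; a small self-contained sketch of how per-class IoU can be accumulated from predictions is shown below (label conventions and the ignore value are illustrative assumptions).

import numpy as np

def miou(preds, labels, num_classes, ignore_index=255):
    """Accumulate a confusion matrix over (pred, label) pairs and return per-class IoU and mIoU."""
    conf = np.zeros((num_classes, num_classes), dtype=np.int64)
    for p, l in zip(preds, labels):
        valid = l != ignore_index
        conf += np.bincount(num_classes * l[valid].ravel() + p[valid].ravel(),
                            minlength=num_classes ** 2).reshape(num_classes, num_classes)
    tp = np.diag(conf)
    fp = conf.sum(axis=0) - tp   # predicted as the class but belonging to another
    fn = conf.sum(axis=1) - tp   # belonging to the class but predicted as another
    iou = tp / np.maximum(tp + fp + fn, 1)
    return iou, float(np.nanmean(iou))

# Toy example: two 2x2 "images" with 3 classes, one ignored pixel.
if __name__ == "__main__":
    preds  = [np.array([[0, 1], [2, 2]]), np.array([[1, 1], [0, 2]])]
    labels = [np.array([[0, 1], [2, 1]]), np.array([[1, 255], [0, 2]])]
    per_class, mean_iou = miou(preds, labels, num_classes=3)
    print(per_class, mean_iou)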
Figure 13: Examples of our UDA-enhanced model in the Synthia-to-Cityscapes setup: (a) fence, (b) truck, (c) train. Results on the Cityscapes validation set for the model trained with labeled Synthia data and unlabeled Cityscapes images. The model segments the target private categories truck and train, as well as the highly underrepresented category fence.

Figure 14: Performance improvements driven by UDA on small and synthetic datasets: (a) Synthia, (b) VOC. The performance of models trained with only source data is depicted in red. Adaptation is performed independently for each analyzed dataset. UDA enhancements correlate with the similarity between the dataset and the target domain, as exemplified by the significant improvements on PASCAL when adapting with COCO and on Mapillary when adapting with Cityscapes.

Figure 15: Relative performance comparison driven by UDA on a synthetic dataset. Relative performance comparison across datasets of FROVSS (first row) and UDA-FROVSS, both employing Synthia for training, with UDA employing unlabeled images of each dataset (datasets on both axes: PAS-20b, Cityscapes, Mapillary, ADE-20, Pascal-Context, COCO). Notably, as Synthia is a synthetic urban-scene dataset, adaptation to a real general-purpose dataset (COCO) drives significantly more performance across all datasets.

5 Conclusions

In this paper we introduced the first framework for unsupervised domain adaptation in open vocabulary semantic segmentation, marking a significant integration of these two research areas. By combining UDA with open vocabulary segmentation, we remove the necessity for shared categories between source and target domains, as demonstrated by our improvement of over 8% over the state-of-the-art UDA frameworks on the Synthia-to-Cityscapes setup. In turn, the open vocabulary approach benefits from UDA's capacity to utilize large volumes of unlabeled data, enabling our models to be successfully trained with fewer than 2K annotated images. Our approach surpasses the previous state of the art for open vocabulary semantic segmentation on all analyzed benchmarks. To achieve these results, we propose a decoder for refined segmentation, a strategic fine-tuning approach to retain CLIP's original weight integrity, and enhanced text embeddings to bolster open vocabulary segmentation. Additionally, we adapted the teacher-student framework and pseudo-label protocol to effectively train VLMs. For future research, inter-dataset similarity and tuning of the textual encoder emerge as critical factors for further performance enhancements.

References

[1] Wang Y, Li Y, Elder JH, Wu R, Lu H. Class-conditional domain adaptation for semantic segmentation. Computational Visual Media, 2024: 1–18.
[2] Alcover-Couso R, SanMiguel JC, Escudero-Viñolo M, Garcia-Martin A. On exploring weakly supervised domain adaptation strategies for semantic segmentation using synthetic data. Multimedia Tools and Applications, 2023: 35879–35911.
[3] Ma H, Yang J, Huang H. Taming diffusion model for exemplar-based image translation. Computational Visual Media, 2024: 1–13.
[4] Cheng W, Shan Y. Learning layout generation for virtual worlds. Computational Visual Media, 2024: 1–16.
[5] Liang Y, Liu T, Huo Y, Wang R, Bao H. Adaptive sampling and reconstruction for gradient-domain rendering. Computational Visual Media, 2024: 1–18.
[6] Zhou W, Yuan L, Mu T. Multi3D: 3D-aware multimodal image synthesis. Computational Visual Media, 2024: 1–13.
[7] Alcover-Couso R, SanMiguel JC, Escudero-Viñolo M. Biased Class disagreement: detection of out of distribution instances by using differently biased semantic segmentation models. In Int. Conf. Comput. Vis. (ICCVW), 2023, 4580–4588.
[8] Li X, Zheng Y, Ma H, Qi Z, Meng X, Meng L. Cross-modal learning using privileged information for long-tailed image classification. Computational Visual Media, 2024: 1–12.
[9] Ding Y, Liu L, Tian C, Yang J, Ding H. Don't Stop Learning: Towards Continual Learning for the CLIP Model. ArXiv, 2022, abs/2207.09248.
[10] Yan S, Hong L, Xu H, Han J, Tuytelaars T, Li Z, He X. Generative Negative Text Replay for Continual Vision-Language Pretraining. ArXiv, 2022, abs/2210.17322.
[11] Zhou C, Loy CC, Dai B. Extract Free Dense Labels from CLIP. In Eur. Conf. Comput. Vis. (ECCV), 2022.
[12] Srinivasan T, Chang TY, Pinto-Alva L, Chochlakis G, Rostami M, Thomason J. CLiMB: A Continual Learning Benchmark for Vision-and-Language Tasks. ArXiv, 2022, abs/2206.09059, doi: 10.48550/arXiv.2206.09059.
[13] Cho S, Shin H, Hong S, An S, Lee S, Arnab A, Seo PH, Kim S. CAT-Seg: Cost Aggregation for Open-Vocabulary Semantic Segmentation. In IEEE Conf. Comput. Vis. Pattern Recog. (CVPR), 2024.
[14] Caesar H, Uijlings J, Ferrari V. COCO-Stuff: Thing and Stuff Classes in Context. In IEEE Conf. Comput. Vis. Pattern Recog. (CVPR), 2018.
[15] Xiao T, Singh M, Mintun E, Darrell T, Dollar P, Girshick R. Early Convolutions Help Transformers See Better. In Adv. Neural Inform. Process. Syst. (NeurIPS), volume 34, 2021, 30392–30400.
[16] Yuan K, Guo S, Liu Z, Zhou A, Yu F, Wu W. Incorporating Convolution Designs Into Visual Transformers. In IEEE Int. Conf. Comput. Vis. (ICCV), 2021, 579–588.
[17] Zhang X, Li Q, Quan Z, Yang W. Pyramid Geometric Consistency Learning For Semantic Segmentation. Pattern Recognition, 2023, 133: 109020, doi: https://doi.org/10.1016/j.patcog.2022.109020.
[18] Radford A, Kim JW, Hallacy C, Ramesh A, Goh G, Agarwal S, Sastry G, Askell A, Mishkin P, Clark J, Krueger G, Sutskever I. Learning Transferable Visual Models From Natural Language Supervision. In Int. Conf. Mach. Lear. (ICML), volume 139, 2021, 8748–8763.
[19] Luo H, Bao J, Wu Y, He X, Li T. SegCLIP: Patch Aggregation with Learnable Centers for Open-Vocabulary Semantic Segmentation. Int. Conf. Mach. Lear. (ICML), 2023.
[20] Ding Z, Wang J, Tu Z. Open-Vocabulary Panoptic Segmentation with MaskCLIP. Int. Conf. Comput. Vis. (ICCV), 2023.
[21] Huynh DT, Kuen J, Lin Z, Gu J, Elhamifar E. Open-Vocabulary Instance Segmentation via Robust Cross-Modal Pseudo-Labeling. IEEE Conf. Comput. Vis. Pattern Recog. (CVPR), 2021: 7010–7021.
[22] Benigmim Y, Roy S, Essid S, Kalogeiton V, Lathuilière S. Collaborating Foundation models for Domain Generalized Semantic Segmentation. arXiv:2312.09788, 2023.
[23] Xu M, Zhang Z, Wei F, Hu H, Bai X. Side Adapter Network for Open-Vocabulary Semantic Segmentation. In IEEE Conf. Comput. Vis. Pattern Recog. (CVPR), 2023.
[24] Li J, Huang Y, Wu M, Zhang B, Ji X, Zhang C.
CLIP-SP: Vision-language model with adap- tive prompting for scene parsing. Computational Visual Media, 2024: 741–752. [25] Li Y, Wang H, Duan Y, Xu H, Li X. Ex- for Con- Pre-training. ploring Visual trastive arXiv:2209.07046, 2022. Language-Image Interpretability [26] Li Y, Wang H, Duan Y, Li X. CLIP Surgery for Better Explainability with Enhancement in Open-Vocabulary Tasks. arXiv:2304.05653, 2023. [27] Hoyer L, Tan DJ, Naeem MF, Gool LV, Tombari F. SemiVL: Semi-Supervised Semantic Segmen- tation with Vision-Language Guidance. In Eur. Conf. Comput. Vis. (ECCV), 2023. [28] Liang F, Wu B, Dai X, Li K, Zhao Y, Zhang H, Zhang P, Vajda P, Marculescu D. Open- vocabulary semantic segmentation with mask- adapted clip. In IEEE Conf. Comput. Vis. Pat- tern Recog. (CVPR), 2023, 7061–7070. [29] Ding J, Xue N, Xia GS, Dai D. Decoupling Zero- Shot Semantic Segmentation. In IEEE Conf. Comput. Vis. Pattern Recog. (CVPR), 2022, 11573–11582. [30] Xu M, Zhang Z, Wei F, Lin Y, Cao Y, Hu H, Bai X. A Simple Baseline for Open Vocab- ulary Semantic Segmentation with Pre-trained Vision-language Model. Eur. Conf. Comput. Vis. (ECCV), 2022. [31] Wang S, Zhao X, Chen J. Discovering latent target subdomains for domain adaptive seman- tic segmentation via style clustering. Multimedia Tools and Applications, 2023: 3234–3243, doi: 10.1007/s11042-023-15620-6. [32] Schwonberg M, Niemeijer J, Term¨ohlen JA, sch¨afer JP, Schmidt NM, Gottschalk H, Fin- gscheidt T. Survey on Unsupervised Domain Adaptation for Semantic Segmentation for Vi- sual Perception in Automated Driving. IEEE Access, 2023, 11: 54296–54336. [33] Alcover-Couso R, SanMiguel JC, Escudero- Vi˜nolo M, Caballeira P. Per-Class Curriculum for Unsupervised Domain Adaptation in Se- mantic Segmentation. In The Visual Computer, 2023, 1–19. [34] Lee DH. Pseudo-Label : The Simple and Ef- ficient Semi-Supervised Learning Method for Deep Neural Networks. In Int. Conf. Mach. Lear. (ICMLW), 2013. [35] Hoyer L, Dai D, Van Gool L. DAFormer: Im- proving Network Architectures and Training Strategies for Domain-Adaptive Semantic Seg- mentation. In IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), 2022, 9924–9935. [36] Hoyer L, Dai D, Van Gool L. HRDA: Context- Aware High-Resolution Domain-Adaptive Se- mantic Segmentation. In IEEE Eur. Conf. Com- put. Vis. (ECCV), 2022, 372–391. [37] Wang K, Kim D, Feris R, Saenko K, Betke M. CDAC: Cross-domain Attention Consistency in Transformer for Domain Adaptive Seman- tic Segmentation. In IEEE Conf. Comput. Vis. (ICCV), 2023. [38] Kumar V, Lal R, Patil H, Chakraborty A. CoN- Mix for Source-free Single and Multi-target Do- main Adaptation. In Wint. App. Comp. Vis. (WACV), 2023, 4178–4188. [39] Hoyer L, Dai D, Wang H, Van Gool L. MIC: Masked Image Consistency for Context- Enhanced Domain Adaptation. In IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), 2023. 19 [40] Tarvainen A, Valpola H. Mean teachers are bet- ter role models: Weight-averaged consistency targets improve semi-supervised deep learning results. In Adv. Neural Inform. Process. Syst. (NeurIPS), 2017. [49] Vaswani A, Shazeer N, Parmar N, Uszkoreit J, Jones L, Gomez AN, Kaiser Lu, Polosukhin I. Attention is All you Need. In Adv. Neural Inform. Process. Syst. (NeurIPS), volume 30, 2017. [41] Guo W, Liu F, Song Y, Qin C. Research On Data Model Migration In Image Semantic Segmenta- tion Based On Deep Learning. Int. Conf. Mea- suring Technology and Mechatronics Automa- tion, 2022: 417–420. [42] Zheng Z, Yang Y. 
Rectifying Pseudo Label Learning via Uncertainty Estimation for Domain Adaptive Semantic Segmentation. Int. J. Com- put. Vis. (IJCV), 2020: 1–15. [43] Yamada S. Characterizations of semantic do- mains for randomized algorithms. Japan Journal of Applied Mathematics, 1989, 6: 111–146. [44] Tremblay J, Prakash A, Acuna D, Brophy M, Jampani V, Anil C, To T, Cameracci E, Boo- choon S, Birchfield S. Training Deep Networks with Synthetic Data: Bridging the Reality Gap by Domain Randomization. IEEE Conf. Com- put. Vis. Pattern Recog. (CVPRW), 2018: 1082– 10828. [45] Prakash A, Boochoon S, Brophy M, Acuna D, Cameracci E, State G, Shapira O, Birchfield S. Structured Domain Randomization: Bridg- ing the Reality Gap by Context-Aware Synthetic Data. IEEE Int. Conf. Rob. Aut. (ICRA), 2018: 7249–7255. [46] Valtchev SZ, Wu J. Domain randomization for neural network classification. Journal of Big Data, 2020, 8. [47] Tranheden W, Olsson V, Pinto J, Svensson L. DACS: Domain Adaptation via Cross-domain Mixed Sampling. IEEE Winter Conf. App. Comp. Vis. (WACV), 2020: 1378–1388. [48] Ma H, Li M, Yang J, Patashnik O, Lischinski D, Cohen-Or D, Huang H. CLIP-Flow: Decoding images encoded in CLIP space. Computational Visual Media, 2024: 1–12. [50] Howard J, Ruder S. Universal Language Model Fine-tuning for Text Classification. In ACL, 2018. [51] Wu H, Zheng S, Zhang J, Huang K. Fast End-to- End Trainable Guided Filter. IEEE Conf. Com- put. Vis. Pattern Recog. (CVPR), 2018: 1838– 1847. [52] Dosovitskiy A, Beyer L, Kolesnikov A, Weis- senborn D, Zhai X, Unterthiner T, Dehghani M, Minderer M, Heigold G, Gelly S, Uszkoreit J, Houlsby N. An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale. In Int. Conf. Learn. Rep. (ICLR), 2021. [53] Rocco I, Arandjelovi´c R, Sivic J. Convolu- tional neural network architecture for geometric matching. In IEEE Conf. Comput. Vis. Pattern Recog. (CVPR), 2017. [54] Hong S, Cho S, Nam J, Lin S, Kim S. Cost Ag- gregation with 4D Convolutional Swin Trans- former for Few-Shot Segmentation. In ECCV, 2022, 108–126. [55] Cordts M, Omran M, Ramos S, Rehfeld T, Enzweiler M, Benenson R, Franke U, Roth S, Schiele B. The Cityscapes Dataset for Seman- tic Urban Scene Understanding. In IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), 2016, 3212–3223. [56] Brown TB, Mann B, Ryder N, Subbiah M, Ka- plan J, Dhariwal P, Neelakantan A, Shyam P, Sastry G, Askell A, et al.. Language Models are Few-Shot Learners. Adv. Neural Inform. Pro- cess. Syst. (NeurIPS), 2020: 1877–1901. [57] Neuhold G, Ollmann T, Rota Bul`o S, Kontschieder P. The Mapillary Vistas Dataset for Semantic Understanding of Street Scenes. In 20 [66] Xu X, Xiong T, Ding Z, Tu Z. MasQCLIP for Open-Vocabulary Universal Image Segmenta- tion. In Int. Conf. Comput. Vis. (ICCV), 2023, 887–898. [67] Zhou Z, Lei Y, Zhang B, Liu L, Liu Y. ZegCLIP: Towards Adapting CLIP for Zero-Shot Seman- tic Segmentation. In IEEE Conf. Comput. Vis. Pattern Recog. (CVPR), 2023, 11175–11185. [68] Wang X, Li S, Kallidromitis K, Kato Y, Kozuka K, Darrell T. Hierarchical Open-vocabulary Uni- versal Image Segmentation. In Adv. Neural In- form. Process. Syst. (NeurIPS), 2023. [69] Liu P, Ge Y, Duan L, Li W, Luo H, Lv F. Transferring Multi-Modal Domain Knowledge to Uni-Modal Domain for Urban Scene Segmenta- tion. IEEE Transactions on Intelligent Trans- portation Systems, 2024: 11576–11589. [70] Shen F, Gurram A, Liu Z, Wang H, Knoll A. DiGA: Distil To Generalize and Then Adapt for Domain Adaptive Semantic Segmentation. In IEEE Conf. Comput. Vis. 
Pattern Recognit. (CVPR), 2023, 15866–15877. [71] Chen M, Zheng Z, Yang Y. Transferring to Real- World Layouts: A Depth-aware Framework for Scene Adaptation. In ACM Multimedia, 2024. IEEE Int. Conf. Comput. Vis. (ICCV), 2017, 5000–5009. [58] Tobin J, Fong R, Ray A, Schneider J, Zaremba W, Abbeel P. Domain randomization for trans- ferring deep neural networks from simulation to the real world. IEEE Conf. Intell. Rob. Sys. (IROS), 2017: 23–30. [59] Zhou B, Zhao H, Puig X, Xiao T, Fidler S, Bar- riuso A, Torralba A. Semantic understanding of scenes through the ade20k dataset. Int. Journal of Computer Vision, 2019, 127: 302–321. [60] Mottaghi R, Chen X, Liu X, Cho NG, Lee SW, Fidler S, Urtasun R, Yuille A. The Role of Con- text for Object Detection and Semantic Segmen- tation in the Wild. In IEEE Conf. Comput. Vis. Pattern Recog. (CVPR), 2014. [61] Everingham M, Eslami SMA, Van Gool L, Williams CKI, Winn J, Zisserman A. The Pascal Visual Object Classes Challenge: A Retrospec- tive. IJCV, 2015, 111(1): 98–136. [62] Ros G, Sellart L, Materzynska J, Vazquez D, Lopez AM. The SYNTHIA Dataset: A Large Collection of Synthetic Images for Semantic Seg- mentation of Urban Scenes. IEEE Conf. Com- put. Vis. Pattern Recog. (CVPR), 2016: 3234– 3243. [63] Liu Z, Hu H, Lin Y, Yao Z, Xie Z, Wei Y, Ning J, Cao Y, Zhang Z, Dong L, Wei F, Guo B. Swin Transformer V2: Scaling Up Capacity and Res- olution. In IEEE Conf. Comput. Vis. Pattern Recog. (CVPR), 2022. [64] Kirillov A, Mintun E, Ravi N, Mao H, Rolland C, Gustafson L, Xiao T, Whitehead S, Berg AC, Lo WY, Dollar P, Girshick R. Segment Anything. In Int. Conf. Comput. Vis. (ICCV), 2023, 4015– 4026. [65] Qin J, Wu J, Yan P, Li M, Yuxi R, Xiao X, Wang Y, Wang R, Wen S, Pan X, et al.. FreeSeg: Unified, Universal and Open-Vocabulary Image Segmentation. IEEE Conf. Comput. Vis. Pat- tern Recog. (CVPR), 2023. 21
synthetic_cpt
3
JAPAGEN_Efficient_FewZero-shot_Learning_via_Japanese_Training_Dataset_Generation_with_LLM.pdf
JAPAGEN: Efficient Few/Zero-shot Learning via Japanese Training Dataset Generation with LLM

Takuro Fujii1,2,∗ and Satoru Katsumata3
1Yokohama National University  2Nomura Research Institute, Ltd.  3Retrieva, Inc.
[email protected]  [email protected]

arXiv:2412.06738v1 [cs.CL] 9 Dec 2024

Abstract

Recently some studies have highlighted the potential of Large Language Models (LLMs) as effective generators of supervised training data, offering advantages such as enhanced inference efficiency and reduced costs associated with data collection. However, these studies have predominantly focused on English language tasks. In this paper, we address the fundamental research question: Can LLMs serve as proficient training data generators for other language tasks? Specifically, we leverage LLMs to synthesize supervised training data under few-shot and zero-shot learning scenarios across six diverse Japanese downstream tasks. Subsequently, we utilize this synthesized data to train compact models (e.g., BERT). This novel methodology is termed JAPAGEN. Our experimental findings underscore that JAPAGEN achieves robust performance in classification tasks that necessitate formal text inputs, demonstrating competitive results compared to conventional LLM prompting strategies.

1 Introduction

Large language models (LLMs) have demonstrated exceptional performance across various natural language processing (NLP) tasks, even with minimal parameter updates (Brown et al., 2020; Kojima et al., 2022). However, the rapid growth in model size, driven by scaling laws (Kaplan et al., 2020), has led to substantial demands for GPU memory and computational resources, making the operation of LLMs prohibitively expensive.

To mitigate these costs, recent studies have investigated the generation of training data using powerful LLMs, followed by training smaller models (e.g., BERT) on the synthesized supervised data (Ye et al., 2022a,b; Yu et al., 2023; Chung et al., 2023a). This approach, termed SUPERGEN (Supervision Generation Approach) based on prior work (Meng et al., 2022), has demonstrated promising results. The overview of SUPERGEN is illustrated in Figure 1. SUPERGEN has been demonstrated to outperform few-shot and zero-shot prompting and few-shot fine-tuning methods in various tasks, effectively reducing both the cost of collecting supervised data and the operational costs of trained models. However, these studies have been limited to English tasks, and thus the applicability of SUPERGEN to other language tasks remains uncertain.

∗ Work done during an internship at Retrieva, Inc., when I was a master's student. I now belong to Nomura Research Institute, Ltd.
Figure 1: Overview of SUPERGEN, using text sentiment classification as an example.

Given that powerful LLMs like GPT-4 (OpenAI, 2024) are primarily trained on English texts with limited exposure to other languages, it is crucial to investigate the effectiveness of SUPERGEN in such linguistic contexts and its suitability for different types of languages. In this paper, we implement SUPERGEN in Japanese as a case study. Japanese is a mid-resource language compared to English and has different characteristics, such as the absence of spaces between words. Therefore, we pose the research question: Do SUPERGEN methods perform effectively in Japanese? We term the application of SUPERGEN to Japanese tasks as JAPAGEN (§3).
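The second stage of the recipe just described, fine-tuning a compact Japanese encoder on the LLM-generated pairs, can be sketched with Hugging Face Transformers roughly as follows. The checkpoint name and hyperparameters are the ones the authors report later in §4.1 (the Japanese tokenizer additionally needs the fugashi/unidic-lite packages installed); the `synthetic_pairs` variable and the label map are placeholders for the output of stage one.

```python
import torch
from torch.utils.data import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

LABEL2ID = {"negative": 0, "positive": 1}   # placeholder label map

class PseudoDataset(Dataset):
    """Wraps LLM-generated (text, label) pairs for BERT fine-tuning."""
    def __init__(self, pairs, tokenizer, max_len=512):
        self.pairs, self.tok, self.max_len = pairs, tokenizer, max_len
    def __len__(self):
        return len(self.pairs)
    def __getitem__(self, i):
        text, label = self.pairs[i]
        enc = self.tok(text, truncation=True, max_length=self.max_len,
                       padding="max_length", return_tensors="pt")
        return {"input_ids": enc["input_ids"].squeeze(0),
                "attention_mask": enc["attention_mask"].squeeze(0),
                "labels": torch.tensor(LABEL2ID[label])}

tokenizer = AutoTokenizer.from_pretrained("tohoku-nlp/bert-base-japanese-v3")
model = AutoModelForSequenceClassification.from_pretrained(
    "tohoku-nlp/bert-base-japanese-v3", num_labels=len(LABEL2ID))

# synthetic_pairs: list of (text, label) produced by the LLM in stage one
train_ds = PseudoDataset(synthetic_pairs, tokenizer)
trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="japagen-bert", num_train_epochs=4,
                           per_device_train_batch_size=32, learning_rate=5e-5,
                           warmup_ratio=0.1, label_smoothing_factor=0.1),
    train_dataset=train_ds,
)
trainer.train()
```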
To address the aforementioned interests, we evaluate JAPAGEN across various Japanese tasks, including text classification, natural language inference, semantic textual similarity, and linguistic acceptability, in both few-shot and zero-shot learning settings. Furthermore, we propose a novel approach termed Knowledge-Assisted Data Generation (KADG)¹, which integrates task-specific knowledge into prompts to align generated texts more closely with gold-standard distributions and to enhance text diversity (§3.4).

¹We define the setup of KADG as zero-shot* to distinguish it from strict zero-shot methods, due to the incorporation of task knowledge.

Our experiments indicate that, in five out of six tasks, zero-shot JAPAGEN outperforms few-shot BERT fine-tuning. Moreover, JAPAGEN demonstrates superior performance on two tasks compared to few-shot PROMPTING. These experimental results suggest that JAPAGEN has the potential to surpass settings with more parameters and more annotated data. Additionally, our analysis shows that KADG enhances the fidelity of generated texts to gold-standard distributions while maintaining label accuracy, although it does not consistently improve overall task performance.

In summary, our contributions are four-fold:
1. We empirically evaluate JAPAGEN, leveraging LLMs as synthetic data generators, across various Japanese NLP tasks.
2. We demonstrate the effectiveness of JAPAGEN, particularly in classification tasks with formal text inputs.
3. We analyze the impact of dataset size on JAPAGEN, observing performance improvements with larger synthetic datasets that eventually reach saturation.
4. We propose and evaluate KADG, demonstrating its potential to refine synthetic data distributions to align with gold standards, thereby enhancing the robustness of JAPAGEN.

2 Related Work

2.1 Efficient Learning Strategies with LLMs

Large Language Models (LLMs) exhibit high performance across various tasks using few-shot or zero-shot learning paradigms. Despite their capabilities, LLMs have numerous parameters, leading to substantial operational costs. To address these challenges, several methods for more efficient utilization of LLMs have been proposed. One such method is PROMPTING, which enables LLMs to perform tasks effectively without requiring parameter updates. This is achieved by injecting prompts based on task descriptions (Brown et al., 2020; Gao et al., 2021; Le Scao and Rush, 2021; Zhang et al., 2022). A prompt consists of input text for the LLM and includes instructions to obtain the desired responses. In few-shot PROMPTING², the prompt includes a small number of text-label pairs. Compared to traditional fine-tuning, which necessitates costly updates to the LLM's parameters, PROMPTING improves data efficiency in low-data scenarios. However, PROMPTING still incurs substantial operational costs due to the extensive number of parameters involved.

2.2 Synthesis of Training Data via LLM

To reduce the operational costs of LLMs, researchers have recently explored using LLMs as training data generators, followed by fine-tuning smaller task-specific models (TAMs), such as BERT (Devlin et al., 2019), on the synthetic data. Existing approaches typically employ simple class-conditional prompts and focus on addressing the issues related to the quality of the generated data.
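For contrast with the data-generation route, the PROMPTING baseline described in Section 2.1 above queries the LLM directly at inference time. A schematic zero-/few-shot classification call might look like the following; the prompt wording, label verbalizers, and example pairs are illustrative stand-ins, not the prompts evaluated in this paper, and the OpenAI Python client is assumed.

```python
from openai import OpenAI

client = OpenAI()

FEW_SHOT_EXAMPLES = [  # one (text, label) pair per class in the few-shot setting
    ("この商品は最高でした。", "positive"),
    ("二度と買いません。", "negative"),
]

def prompt_classify(text: str, few_shot: bool = False) -> str:
    lines = ["Classify the sentiment of the Japanese review as positive or negative."]
    if few_shot:
        lines += [f"Review: {t}\nSentiment: {y}" for t, y in FEW_SHOT_EXAMPLES]
    lines.append(f"Review: {text}\nSentiment:")
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": "\n\n".join(lines)}],
        temperature=0.0, max_tokens=5,
    )
    return resp.choices[0].message.content.strip()
```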
Notable early efforts, such as SuperGen (Meng et al., 2022) and ZeroGen (Ye et al., 2022a), have explored the use of LLMs for generating training data for text classification tasks using basic class-conditional prompts. They have also incorporated additional noise-robust learning techniques (Laine and Aila, 2017; Wang et al., 2019) to mitigate the quality issues of the generated data. However, it has been reported that balancing the diversity of synthetic datasets with task performance remains challenging (Chung et al., 2023b). To date, these approaches have been primarily validated on English-language tasks. This paper investigates the effectiveness of these methods in mid-resource languages with different linguistic characteristics from English.

²Few-shot PROMPTING is referred to as In-Context Learning (Brown et al., 2020); however, in this paper, both few-shot and zero-shot PROMPTING are collectively termed PROMPTING.

3 Method: JAPAGEN

In this section, we introduce the motivation for synthetic data generation via LLMs in Japanese tasks, define the problem, and describe the methodology for generating synthetic training data for each task. The overview of generating training data via LLMs is illustrated in Figure 1.

3.1 Motivation

We define JAPAGEN as the Japanese counterpart to SUPERGEN. The rationale behind selecting Japanese stems from its status as a mid-resource language compared to English, and its different characteristics, such as the absence of spaces between words. Given that powerful LLMs are primarily trained on English texts with limited exposure to other languages including Japanese, it is plausible that they can generate high-quality pseudo training data in English. In this paper, we evaluate JAPAGEN, the Japanese version of SUPERGEN, as a case study focusing on such languages.

3.2 Problem Definition

Given the label space Y = {y_i}_{i=1}^{n}, we manually create label-descriptive prompts T(task, y_i). For prompt details used in our experiments, please refer to §A.4. We employ LLMs G_θ to generate training data for encoder models E_φ (e.g., LSTM (Hochreiter and Schmidhuber, 1997), BERT (Devlin et al., 2019)), which are subsequently fine-tuned as estimators. SUPERGEN comprises the following three stages: (1) synthesizing supervised training data using the LLM; (2) fine-tuning small models using the synthetic data; (3) testing the trained model on gold data.

3.3 Pseudo Data Generation

In this section, we describe the process of generating pseudo datasets using an LLM for classification and regression tasks. Our approach includes either a single sentence or a sentence pair as input.

Single Sentence Task. We employ an LLM to generate pseudo-supervised sentences x̃_{c,j} corresponding to a label y_c:

x̃_{c,j} ∼ Prob_LLM( · | T(task, y_c)),   (1)

where T(task, y_c) represents a prompt including the task description and the label y_c. By repeating Equation 1 M times, we obtain the pseudo dataset D̃_{y_c} = {(x̃_{c,j}, y_c)}_{j=1}^{M}. Applying this process for all labels {y_c}_{c=1}^{C}, we generate the pseudo dataset D̃ = [D̃_{y_1}, D̃_{y_2}, ..., D̃_{y_C}].
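A minimal sketch of the single-sentence generation loop in Equation 1, assuming the OpenAI Python client and the model and decoding settings listed later in §4.1. The task and label descriptions below are illustrative stand-ins for T(task, y_c); the actual Japanese prompts are only provided in the authors' repository (§A.4).

```python
from openai import OpenAI

client = OpenAI()

# Illustrative stand-ins for T(task, y_c); not the prompts used in the paper.
TASK_DESC = "Write one Japanese product review."
LABEL_DESC = {"positive": "The review must be clearly positive.",
              "negative": "The review must be clearly negative."}

def generate_pseudo_dataset(m_per_label=25_000, n_per_call=5):
    dataset = []
    for label, desc in LABEL_DESC.items():
        prompt = f"{TASK_DESC} {desc}"                 # T(task, y_c) of Eq. 1
        texts = []
        while len(texts) < m_per_label:
            resp = client.chat.completions.create(
                model="gpt-3.5-turbo-0613",            # the (now legacy) model reported in §4.1
                messages=[{"role": "user", "content": prompt}],
                n=n_per_call, max_tokens=500,          # five samples per call, as in §4.1
                temperature=1.2, top_p=1.0, frequency_penalty=0.02,
            )
            texts += [c.message.content.strip() for c in resp.choices]
        dataset += [(t, label) for t in texts[:m_per_label]]
    return dataset
```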
Sentence Pair Task. Initially, we employ an LLM to generate the first sentence x̃¹_{c,j}, analogous to Equation 1 but excluding the label y_c:

x̃¹_{c,j} ∼ Prob_LLM( · | T(task)).   (2)

In the initial phase of sentence generation, the prompt comprises solely the task description. Subsequently, to generate the second sentence x̃²_{c,j}, the prompt is augmented to include the task description, the first sentence x̃¹_{c,j}, and the label y_c:

x̃²_{c,j} ∼ Prob_LLM( · | T(task), T(task, x̃¹_{c,j}, y_c)).   (3)

By repeating Equations 2 and 3 M times, we generate the pseudo dataset D̃_{y_c} = {(x̃¹_{c,j}, x̃²_{c,j}, y_c)}_{j=1}^{M}. Applying this process for all labels {y_c}_{c=1}^{C}, we obtain the pseudo dataset D̃ = [D̃_{y_1}, D̃_{y_2}, ..., D̃_{y_C}].

3.4 Knowledge-Assisted Data Generation

The diversity of synthetic datasets significantly enhances dataset quality, a critical factor in improving task performance (Chung et al., 2023b). Previous studies attempted to diversify text generation by adjusting hyperparameters such as Top-p and temperature. However, this approach may compromise label accuracy. In this paper, we introduce Knowledge-Assisted Data Generation (KADG) to enhance dataset diversity while maintaining label correctness.

For each task, we manually create a set of task-specific words S_task, and randomly select a word d from this set. We construct a prompt based on the task description, the label y_c, and the selected task-specific word d:

d ∼ S_task,   (4)
x̃_{c,j} ∼ Prob_LLM( · | T(task, y_c, d)).   (5)

By following a process similar to Section 3.3 across all classes, we generate the synthetic dataset D̃. For the actual prompts used in our experiments, please refer to §A.4.
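A sketch of the KADG prompt construction in Equations 4 and 5; the task-word set and the prompt sentence below are invented examples, since the real S_task lists and prompt templates are only released with the authors' prompts (§A.4).

```python
import random

# Hypothetical task-specific word set S_task for a review-sentiment task;
# the actual word lists accompany the released prompts.
S_TASK = ["カメラ", "バッテリー", "配送", "デザイン", "価格"]  # camera, battery, shipping, design, price

def kadg_prompt(task_desc: str, label_desc: str) -> str:
    d = random.choice(S_TASK)                       # Eq. 4: d ~ S_task
    # Eq. 5: the prompt T(task, y_c, d) conditions generation on the sampled word
    return f"{task_desc} {label_desc} The review should mention 「{d}」."
```

Conditioning on a sampled topic word pushes the generations apart from one another without touching the decoding hyperparameters, which is exactly the trade-off §3.4 tries to avoid.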
4 Experiment

In this section, we present an overview of the benchmark datasets, the corresponding evaluation settings, the baseline methods, and the implementation details. Subsequently, we compare JAPAGEN to baseline methods in both few-shot and zero-shot settings.

4.1 Setup

Benchmarks. To evaluate JAPAGEN across various tasks, we used the following benchmarks from JGLUE (Kurihara et al., 2022): MARC-ja, JSTS, JNLI, and JCoLA. Additionally, to test across diverse domains, we also used two datasets for news topic classification (News) and SNS fact classification (COVID-19). All of these benchmarks are Japanese tasks. JSTS involves sentence similarity estimation, while the others are text classification tasks. We evaluated using Spearman's rank correlation coefficient (Spearman score) for JSTS, the Matthews correlation coefficient (MCC; Matthews, 1975) for JCoLA, and accuracy for the remaining tasks. For more detailed information such as dataset statistics and task explanations, please refer to Section A.1.

Baselines. We compared the performance of JAPAGEN with three baselines: (1) PROMPTING, a prompt-based learning framework via LLM, as introduced in Section 2.1; (2) FEW-SHOT FINE-TUNING, where BERT is fine-tuned on five gold samples per class; (3) FULLY SUPERVISED, where BERT is fine-tuned on all gold data. We evaluated the performance of JAPAGEN and PROMPTING in both few- and zero-shot settings. In the few-shot setting, we used one sample per class and incorporated them into the prompt. To distinguish between the few-shot setting of BERT fine-tuning and that of JAPAGEN and PROMPTING, we refer to the former as "few-shot B⃝" and the latter as "few-shot L⃝".

Implementation Details. We conducted our experiments using PyTorch (Paszke et al., 2019) and Hugging Face Transformers (Wolf et al., 2020). For synthetic data generation, we utilized the OpenAI model gpt-3.5-turbo-0613³. The size of the generated data was 25,000 per class. In the few-shot setting L⃝, one sample per class was randomly selected. The generation parameters were set to a maximum of 500 tokens, top-p of 1.0, temperature of 1.2, and frequency penalty of 0.02, with five pieces of data generated at a time. For JSTS, whose labels are continuous values between 0.0 and 5.0, we set six classes {0, 1, 2, 3, 4, 5}. For the fine-tuning of BERT, we used the pretrained BERT⁴ and performed our experiments on a single NVIDIA TITAN RTX 24GB GPU. The training parameters⁵ were set to a batch size of 32, 4 epochs, a label smoothing temperature of 0.1, and the AdamW optimizer with a learning rate of 5e-5, β1 of 0.9, β2 of 0.999, and a warmup ratio of 0.1. Additionally, we set the maximum token length to 512, 512, 512, 128, 512, and 384 for MARC-ja, JNLI, JSTS, JCoLA, News, and COVID-19, respectively. For each task, we measured performance over five runs with different random seeds. In the few-shot setting B⃝, we randomly selected five samples per class.

³The generated texts are used solely for study purposes, not for commercial use.
⁴tohoku-nlp/bert-base-japanese-v3
⁵We set training parameters based on (Kurihara et al., 2022).

4.2 Experimental Results

In this section, we compare JAPAGEN to the baselines. Our experimental results are shown in Table 1.

Zero-shot JAPAGEN vs. FINE-TUNING. Compared to zero-shot JAPAGEN, BERT fine-tuned on gold data uses the same model size but a larger amount of annotated data. It is well known that the zero-shot approach cannot outperform task-specific models trained on human-annotated data. In Table 1, JAPAGEN adheres to this rule, underperforming compared to fully supervised fine-tuning across all tasks. However, JAPAGEN outperforms few-shot fine-tuning on five tasks, the exception being COVID-19. Notably, in JSTS, JAPAGEN achieves a Spearman score of 57.67%, exceeding the performance of few-shot B⃝ fine-tuning. This result suggests that JAPAGEN can be effective in scenarios where the cost of data collection or annotation is high.

Zero-shot JAPAGEN vs. PROMPTING. Compared to zero-shot JAPAGEN, PROMPTING employs a significantly larger model size. In Table 1, JAPAGEN achieves performance improvements of 3.94%, 4.96%, and 17.10% over zero-shot PROMPTING on JSTS, JNLI, and News, respectively. These tasks typically involve formal text as input. Moreover, JAPAGEN also surpasses few-shot L⃝ PROMPTING on JNLI and News, suggesting that JAPAGEN has the potential to outperform settings with more parameters and more annotated data. These tasks are commonly classification tasks that involve formal text as input.

KADG and JAPAGEN. We attempt to enhance the performance of JAPAGEN by injecting task knowledge into prompts, as prompt engineering has been shown to enhance the capability of LLMs and improve the quality of generated text (Wu and Hu, 2023; Yang et al., 2023; He et al., 2022). In Table 1, KADG outperforms

Table 1 (columns: MARC-ja Acc., JSTS Spearman, JNLI Acc., JCoLA MCC, News Acc., COVID-19 Acc., Avg.):

FINE-TUNING: fine-tuning pretrained BERT under gold data.
Fully Supervised:  95.78±0.1  87.47±0.5  90.19±0.4   40.62±1.2  95.75±0.4  78.49±0.3  | 82.82
Few-Shot:          61.57±8.5  14.80±11.3 37.72±13.4  -0.85±3.5  51.98±5.3  42.24±9.4  | 37.40

PROMPTING: prompt-based LLM learning.
Zero-Shot:         94.82±0.2  68.53±0.6  41.53±1.0   24.76±1.2  40.27±1.3  62.76±0.6  | 57.66
Few-Shot:          97.38±0.2  78.50±2.0  35.86±5.3   26.00±2.9  44.82±2.9  65.44±3.4  | 61.72

JAPAGEN: fine-tuning pretrained BERT under pseudo training data generated via LLM.
Zero-Shot:         77.76±5.4  72.47±0.1  46.49±1.5   18.17±1.7  57.37±2.1  34.36±6.4  | 54.23
w/ KADG:           83.24±6.0  71.49±1.2  46.04±0.4   16.22±0.5  59.00±1.4  26.29±0.8  | 50.38
Few-Shot:          62.97±7.3  72.56±0.3  50.82±0.8   14.54±1.1  62.86±2.8  43.13±1.5  | 51.15

Table 1: Results on six Japanese tasks. Each value is an average with standard deviations over five runs.
The tasks that JAPAGEN outperforms zero-shot PROMPTING are in gray . Zero-shot JAPAGEN outperforms zero-shot PROMPTING on JSTS, JNLI, ad News. Few-shot (Only one sample per class) JAPAGEN can improve performances on JNLI and News. zero-shotJAPAGEN only on MARC-ja and News, but does not improve performance on the other four tasks. Specifically, KADG achieves a 5.48% higher score than JAPAGEN on MARC-ja. This suggests that prompt engineering may be particularly effec- tive for specific tasks. In JAPAGEN, the few-shot L⃝ setting consistently outperforms the zero-shot setting on JSTS, JNLI, News, and COVID-19. No- tably, the few-shot setting achieves improvements of 4.33%, 5.49%, and 8.77% over the zero-shot settings on JNLI, News, and COVID-19, respec- tively. Injecting task knowledge into prompts or using few-shot samples can bring generated texts closer to gold-standard texts, but it may restrict the diversity of the synthetic dataset. A detailed analysis is provided in §4.3. 4.3 Additional Analysis In this section, we analyze JAPAGEN on distribu- tion, diversity, and label correctness of synthetic and gold datasets. Then, we qualitatively evaluate synthetic data for each task. Distribution. One of the critical factors influenc- ing task performance is the alignment between the distributions of gold data and synthetic data. To observe this alignment, we compare token appear- ances within their respective datasets in a simple manner. Figure 2 represents the distribution of token frequencies within the dataset. We also quan- titatively assess the alignment using the weighted Jaccard index, based on 1,000 samples per class for distribution analysis. In the top and middle sec- tions of Figure 2, KADG achieves a higher Jaccard index compared to zero-shot JAPAGEN for MARC- ja, JSTS, JNLI, and News. Conversely, in the top and bottom sections of Figure 2, few-shot JAPA- GEN outperforms zero-shot JAPAGEN regarding the Jaccard index for JSTS, JNLI, and News. Qual- itatively, we observe a decrease in the number of words appearing only in the synthetic dataset, the blue-only part in Figure 2, with KADG and the few- shot setting. These results suggest that designing effective prompts and incorporating a few real sam- ples can help bring the synthetic data distribution closer to that of the gold standard. Diversity & Label Correctness. Synthetic datasets often exhibit limited diversity because they are generated using the same prompt input into the LLM. To assess dataset diversity, we adopt the methodology of a previous study (Holtzman et al., 2020) and use the Self-BLEU metric (Zhu et al., 2018) to compare the diversity of synthetic and gold datasets. A lower Self-BLEU score indicates higher dataset diversity. Previous studies have high- lighted a trade-off between dataset diversity and label correctness (Chung et al., 2023b; Ye et al., 2022a). Consequently, we also evaluate label cor- rectness in the synthetic dataset. To do so, we first train BERT on the gold training dataset and then measure accuracy6 on the synthetic dataset. Table 2 presents the diversity and label correctness analysis for each task. 6In JSTS, Mean Squared Error (MSE) is used for measure- ment. Figure 2: Distribution of the number of appeared tokens between gold and synthetic dataset. Top: zero-shot JAPAGEN, Middle: JAPAGEN with KADG, and Bottom: few-shot JAPAGEN. Compared to zero-shot JAPAGEN, KADG can improve alignment between gold and synthetic dataset on MARC-ja, JSTS, JNLI, and News. 
Few-shot JAPAGEN can also improve alignment on JSTS, JNLI, and COVID-19. Dataset MAR. JSTS* JNLI JCoLA DIVERSITY (%) Gold Zero-shot w/ KADG Few-shot 40.53 91.67 84.97 90.25 72.93 74.89 76.12 81.80 72.94 69.97 73.13 78.28 LABEL CORRECTNESS (%) Gold Zero-shot w/ KADG Few-shot 99.06 99.97 99.96 99.90 0.137 1.540 1.540 1.094 98.01 35.11 39.37 50.16 56.66 65.80 78.91 67.15 96.28 66.34 63.94 63.33 Figure 3: Performance transition with synthetic dataset size on zero-shot, KADG, and few-shot settings. Table 2: Diversity and label correctness of synthetic dataset. We measure the diversity by Self-BLEU. *In JSTS, label correctness is measured by MSE. As shown in the upper part of Table 2, the Self- BLEU score of the synthetic dataset of zero-shot JAPAGEN is approximately twice as high, indicat- ing less diversity compared to the gold dataset in MARC-ja. However, zero-shot JAPAGEN can syn- thesize datasets with a diversity similar to the gold dataset in JSTS, JNLI, and JCoLA. In contrast, in the lower part of Table 2, the label correctness in JSTS, JNLI, and JCoLA is not as high as in the gold dataset. Despite reports suggesting that decreasing the Self-BLEU score reduces label accuracy and degrades downstream task performance (Ye et al., 2022a), in MARC-ja, KADG improves the Self- BLEU score without compromising label correct- ness and enhances downstream performance. The few-shot setting yielded results similar to zero-shot JapaGen in diversity, but improvements in label correctness were observed in the two tasks, JSTS and JNLI. Data Scaling. We analyze the performance scal- ing with respect to data size. Figure 3 demonstrates that for most tasks, performance improves as the data size increases. However, performance tends to plateau, as the results with 5,000 samples are similar to those with 50,000 samples. Task Synthesized Text MARC-ja この商品は思っていた以上に素晴らしかったです! 購入して本当に良かったです。... (This product was even more nice than I expected! I’m really glad I bought it. ...) 商品は非常に不満でした。品質が悪い上に、配送にも遅延がありました。使ってみると... (I was extremely dissatisfied with the product. In addition to poor quality, there were delays in delivery. ...) JSTS JNLI 子供たちが講演で楽しそうに遊んでいます。 (The children are having fun playing in the park.) 講演で遊ぶ子供たちが笑顔で何かを楽しんでいます。 (The children playing in the park are smiling and enjoying something.) 幸せそうなカップルが手をつないで海辺を歩いている。 (A happy couple is walking hand in hand along the seaside.) 青い空と波が背景に広がり、夕日の光が二人を照らしている。 (With the blue sky and waves in the background, the light of the setting sun shines on the couple.) 木々が繁茂する森の中で、明るい光が差し込む風景。 (In the forest where trees grow thickly, bright light streams through the landscape.) 濃い霧がかかり、視界がほとんどない中に立つ孤独な木。 (A solitary tree stands amidst a dense fog, with almost no visibility.) 美しい夕焼け空の中、風景画の中に描かれた山々の輪郭が静かに浮かび上がっている。 (In the beautiful sunset sky, the outlines of mountains depicted in the landscape painting quietly emerge.) 夕暮れ時に描かれた風景で、美しく彩られた空の中には山々の輪郭が描かれています。 (In the landscape painted at dusk, the outlines of mountains are depicted against a beautifully colored sky.) JCoLA 食べった寿司にします。 (cid:58)(cid:58)(cid:58)(cid:58)(cid:58)(cid:58)(cid:58) 私は友達と昨日 (I will have the sushi I (cid:58)(cid:58)atte with my friends yesterday.) 昨日の夜、友達とおいしいラーメンを食べました。 (Last night, I ate delicious ramen with my friends.) COVID-19 News COVID-19の最新情報です。感染拡大を防ぐためには、手洗いやマスクの着用、人との距離... (Here is the latest information on COVID-19. To prevent the spread of infection, it is important,...) 今日は友人がCOVID-19に感染していました。心配ですが、早く回復することを... (Today, my friend tested positive for COVID-19. 
I’m worried, but I hope they recover quickly...) 新型コロナウイルスの感染が拡大する中、マスクの着用や手洗いの重要性を再認識し... (Amid the spread of the novel coronavirus, I have come to realize once again the importance...) 今日はおいしいお寿司を食べました! 旬のネタが特に美味しかったです! (Today, I had some delicious sushi! The seasonal toppings were especially tasty!) 日本の低価格航空会社PeachAviationは、ユーザーにより快適なフライト体験を提供する ための新しい取り組みを発表しました。 (Japan’s low-cost airline Peach Aviation has announced a new initiative to provide users with a more comfortable flight experience.) 日本の航空会社、エスマックスが業績好調であることが報じられました。新たな路線の 開設や購入した新型機の稼働により、利益が大幅に上昇しています。 (It has been reported that Japan’s airline, Smax, is experiencing strong performance. The opening of new routes and the operation of newly purchased aircraft have significantly increased their profits.) Label Positive Negative similarity = 1.0 Entailment Contradiction Neutral Unacceptable Acceptable General Fact Personal Fact Opinion Impression Peachy S-MAX Table 3: Synthesized data sample by zero-shot JAPAGEN for each task. 4.4 Qualitative Evaluations We observe that JAPAGEN was generally able to synthesize texts in accordance with the tasks. Be- low, we describe examples where JAPAGEN did not perform well for each task. MARC-ja. JAPAGEN tends to generate similar texts such as "この商品は良い/悪いです。(This commodity is good/bad.)". Table 2 also indicates a high Self-BLEU score for MARC-ja, implying significant similarity among the synthesized texts. As indicated by the high score of label correctness in Table 3, we observe no discrepancy between the synthesized text and the corresponding label. JSTS. While labels are continuous values, em- ploying discrete values as labels in the prompt lim- its the capability of JAPAGEN to capture detailed similarity between two sentences. For instance, the similarity between the two sentences presented in Table 3 is 1.0. However, from the perspective of native Japanese speakers, this similarity should be rated above 3.0. The label correctness score (MSE) of synthesized texts by JAPAGEN is also too high, which suggests that several labels are not correct, compared to that of gold data. JNLI. JAPAGEN exhibits difficulty distinguish- ing between "Entailment" and "Neutral". Specif- ically, text pairs for "Neutral" are frequently mis- classified as "Entailment". The label correctness score (Accuracy) of synthesized texts by JAPAGEN is also too low compared to that of the gold data. JCoLA. JCoLA is a binary classification task to predict whether a Japanese text is syntactically ac- ceptable or unacceptable. Our observation indicate that the LLM struggles with generating unaccept- able sentences. Specifically, the expression "食べ った" in Table 3 is not a syntactic error but a typo. This is because LLMs are trained to generate syn- tactically correct sentences, leading to difficulties in generating grammatically incorrect ones. COVID-19. Synthesized texts correspond to each label; however, JAPAGEN frequently gener- ates similar texts (e.g.,"手洗い" (washing hands), "マ スク" (wearing a mask)) within a label. The Self- BLEU score of synthetic texts in COVID-19 is much higher, indicating lower diversity compared to gold data presented in Table 5. News. This is a news topic classification task where topic names as labels include entity-like unique expressions. Synthetic texts frequently fail to align with these labels, particularly when the la- bels involve proper nouns or lacks common sense. 
For instance, in Table 3, "Peachy" is a category indicating news targeting women; however, it gen- erates content about the real airline "Peach (Peach Aviation)". Similarly, "S-MAX" is a category for software-related news; however, it frequently pro- duces content about fictional people or companies named ’S-MAX’ are often generated. Throughout all six tasks, while the text synthe- sized by JAPAGEN has challenges in terms of di- versity and label consistency, it was generally able to produce text that aligned with the tasks. 4.5 Overall Results In this section, we summarize §4.2, §4.3, and §4.4 related to the experimental results and analysis. The results of zero-shot JAPAGEN, comparing to few-shot fine-tuning and prompting, showed that it is particularly effective for classification tasks with formal text input. This suggests JAPAGEN has the potential to surpass scenarios with more parame- ters and more annotations. Additionally, the results from KADG and few-shot JAPAGEN indicated that incorporating task knowledge and examples into the prompts can further enhance its capabilities. On the other hand, challenges include low label cor- rectness and the difficulty in synthesizing datasets with continuous value labels such as JSTS and with the desired grammatical errors in JCoLA. 5 Conclusion To investigate the effectiveness of SUPERGEN in a mid-resource language with characteristics differ- ent from English, we evaluated SUPERGEN specif- ically for Japanese tasks, termed JAPAGEN. Our experimental results demonstrate that JAPAGEN is particularly effective for classification tasks where the input consists of formal text compared to few- shot PROMPTING. Future Work • We will examine the efficacy of prompts in synthesizing high-quality texts for specific tasks. • As the development of open LLMs is also progressing rapidly, we would like to evaluate JAPAGEN using such LLMs. Limitation • Our trained models are unavailable for com- mercial use because we used OpenAI LLM for data generation. • Although we used GPT-3.5 as a pseudo train- ing data generator, using more advanced LLM (e.g., GPT-4) might yield different results. • To examine the impact of SUPERGEN on lan- guages with distinct characteristics from En- glish and classified as mid-resource, we se- lected Japanese as a case study. Future re- search will address additional languages. Ethics Statement While PLMs have demonstrated remarkable ca- pabilities in text generation and comprehension, they also pose potential risks or harms (Bender and Koller, 2020; Bender et al., 2021), such as generating misinformation (Pagnoni et al., 2021) or amplifying harmful biases (Prabhumoye et al., 2018). Our work specifically focuses on leveraging existing PLMs to generate training data for NLU tasks, rather than on developing new PLMs or gen- eration methods. In this study, we comply with the OpenAI’s terms of use by not disclosing synthetic data and by refraining from using it for purposes other than study. Furthermore, this study did not involve any sensitive data but only used publicly available data, including MARC-ja, JSTS, JNLI, JCoLA, News, and COVID-19. References Emily M. Bender, Timnit Gebru, Angelina McMillan- Major, and Shmargaret Shmitchell. 2021. On the dangers of stochastic parrots: Can language mod- In Proceedings of the 2021 els be too big? ACM Conference on Fairness, Accountability, and Transparency, page 610–623. Emily M. Bender and Alexander Koller. 2020. Climb- ing towards NLU: On meaning, form, and under- standing in the age of data. 
In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5185–5198, On- line. Association for Computational Linguistics. Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large an- notated corpus for learning natural language infer- In Proceedings of the 2015 Conference on ence. Empirical Methods in Natural Language Processing, pages 632–642, Lisbon, Portugal. Association for Computational Linguistics. Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Ma- teusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. In Advances in Neural Information Processing Systems, vol- ume 33, pages 1877–1901. Curran Associates, Inc. Xinlei Chen, Hao Fang, Tsung-Yi Lin, Ramakr- ishna Vedantam, Saurabh Gupta, Piotr Dollar, and C. Lawrence Zitnick. 2015. Microsoft coco captions: Data collection and evaluation server. John Chung, Ece Kamar, and Saleema Amershi. 2023a. Increasing diversity while maintaining ac- curacy: Text data generation with large language models and human interventions. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 575–593, Toronto, Canada. Associ- ation for Computational Linguistics. John Chung, Ece Kamar, and Saleema Amershi. 2023b. Increasing diversity while maintaining ac- curacy: Text data generation with large language models and human interventions. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 575–593, Toronto, Canada. Associ- ation for Computational Linguistics. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- In Proceedings of the 2019 Conference standing. of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Asso- ciation for Computational Linguistics. Tianyu Gao, Adam Fisch, and Danqi Chen. 2021. Making pre-trained language models better few-shot learners. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 3816–3830, Online. Association for Computational Linguistics. Yun He, Steven Zheng, Yi Tay, Jai Gupta, Yu Du, Vamsi Aribandi, Zhe Zhao, Yaguang Li, Zhao Chen, Donald Metzler, Heng-Tze Cheng, and Ed H. Chi. 2022. Hy- perPrompt: Prompt-based task-conditioning of trans- In Proceedings of the 39th International formers. Conference on Machine Learning, volume 162 of Proceedings of Machine Learning Research, pages 8678–8690. PMLR. Sepp Hochreiter and Jürgen Schmidhuber. 1997. Neural Comput., Long short-term memory. 9(8):1735–1780. Ari Holtzman, Jan Buys, Li Du, Maxwell Forbes, and Yejin Choi. 2020. The curious case of neural text de- generation. In International Conference on Learning Representations. Jared Kaplan, Sam McCandlish, Tom Henighan, Tom B. 
Brown, Benjamin Chess, Rewon Child, Scott Gray, Alec Radford, Jeffrey Wu, and Dario Amodei. 2020. Scaling laws for neural language models. CoRR, abs/2001.08361. Phillip Keung, Yichao Lu, György Szarvas, and Noah A. Smith. 2020. The multilingual Amazon reviews corpus. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 4563–4568, Online. Association for Computational Linguistics. Takeshi Kojima, Shixiang (Shane) Gu, Machel Reid, Yu- taka Matsuo, and Yusuke Iwasawa. 2022. Large lan- guage models are zero-shot reasoners. In Advances in Neural Information Processing Systems, vol- ume 35, pages 22199–22213. Curran Associates, Inc. JGLUE: Japanese general Kentaro Kurihara, Daisuke Kawahara, and Tomohide Shibata. 2022. lan- guage understanding evaluation. In Proceedings of the Thirteenth Language Resources and Evaluation Conference, pages 2957–2966, Marseille, France. European Language Resources Association. Samuli Laine and Timo Aila. 2017. Temporal ensem- bling for semi-supervised learning. In International Conference on Learning Representations. Teven Le Scao and Alexander Rush. 2021. How many data points is a prompt worth? In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2627–2636, Online. Association for Computational Linguistics. B.W. Matthews. 1975. Comparison of the predicted and observed secondary structure of t4 phage lysozyme. Biochimica et Biophysica Acta (BBA) - Protein Structure, 405(2):442–451. Yu Meng, Jiaxin Huang, Yu Zhang, and Jiawei Han. 2022. Generating training data with language models: Towards zero-shot language understand- ing. In Advances in Neural Information Processing Systems, volume 35, pages 462–477. Curran Asso- ciates, Inc. Takashi Miyazaki and Nobuyuki Shimizu. 2016. Cross- lingual image caption generation. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1780–1790, Berlin, Germany. Associ- ation for Computational Linguistics. OpenAI. 2024. Gpt-4 technical report. Artidoro Pagnoni, Vidhisha Balachandran, and Yulia Tsvetkov. 2021. Understanding factuality in ab- stractive summarization with FRANK: A benchmark In Proceedings of the 2021 for factuality metrics. Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4812–4829, Online. Association for Computational Linguistics. Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Te- jani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. 2019. Pytorch: An imperative style, high-performance deep learn- In Advances in Neural Information ing library. Processing Systems, volume 32. Curran Associates, Inc. Shrimai Prabhumoye, Yulia Tsvetkov, Ruslan Salakhut- Style trans- dinov, and Alan W Black. 2018. fer In Proceedings through back-translation. of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 866–876, Melbourne, Australia. As- sociation for Computational Linguistics. Taiga Someya, Yushi Sugimoto, and Yohei Os- JCoLA: Japanese corpus of linguis- eki. 2024. In Proceedings of the 2024 tic acceptability. 
Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024), pages 9477–9488. Yisen Wang, Xingjun Ma, Zaiyi Chen, Yuan Luo, Jin- feng Yi, and James Bailey. 2019. Symmetric cross en- tropy for robust learning with noisy labels. In IEEE International Conference on Computer Vision. Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pier- ric Cistac, Tim Rault, Remi Louf, Morgan Funtow- icz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Trans- formers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38–45, Online. Association for Computational Linguistics. Yangjian Wu and Gang Hu. 2023. Exploring prompt en- gineering with GPT language models for document- level machine translation: Insights and findings. In Proceedings of the Eighth Conference on Machine Translation, pages 166–169, Singapore. Association for Computational Linguistics. Li Yang, Qifan Wang, Jingang Wang, Xiaojun Quan, Fuli Feng, Yu Chen, Madian Khabsa, Sinong Wang, Zenglin Xu, and Dongfang Liu. 2023. MixPAVE: Mix-prompt tuning for few-shot product attribute value extraction. In Findings of the Association for Computational Linguistics: ACL 2023, pages 9978– 9991, Toronto, Canada. Association for Computa- tional Linguistics. Jiacheng Ye, Jiahui Gao, Qintong Li, Hang Xu, Jiang- tao Feng, Zhiyong Wu, Tao Yu, and Lingpeng Kong. 2022a. ZeroGen: Efficient zero-shot learn- ing via dataset generation. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 11653–11669, Abu Dhabi, United Arab Emirates. Association for Com- putational Linguistics. Jiacheng Ye, Jiahui Gao, Zhiyong Wu, Jiangtao Feng, Tao Yu, and Lingpeng Kong. 2022b. ProGen: Progressive zero-shot dataset generation via in- context feedback. In Findings of the Association for Computational Linguistics: EMNLP 2022, pages 3671–3683, Abu Dhabi, United Arab Emirates. As- sociation for Computational Linguistics. Yue Yu, Yuchen Zhuang, Rongzhi Zhang, Yu Meng, Jiaming Shen, and Chao Zhang. 2023. ReGen: Zero- shot text classification via training data generation with progressive dense retrieval. In Findings of the Association for Computational Linguistics: ACL 2023, pages 11782–11805, Toronto, Canada. Associ- ation for Computational Linguistics. Ningyu Zhang, Luoqiu Li, Xiang Chen, Shumin Deng, Zhen Bi, Chuanqi Tan, Fei Huang, and Huajun Chen. 2022. Differentiable prompt makes pre-trained language models better few-shot learn- ers. In International Conference on Learning Representations. Yaoming Zhu, Sidi Lu, Lei Zheng, Jiaxian Guo, Weinan Zhang, Jun Wang, and Yong Yu. 2018. Texygen: A benchmarking platform for text generation models. In The 41st international ACM SIGIR conference on research & development in information retrieval. A Appendix A.1 Dataset and Task We describe the six tasks used in our experiment. The dataset statistics are presented in Table 4. MARC-ja A binary classification task to predict the sentiment of product reviews as positive or neg- ative. The dataset used for this task is derived from the Japanese subset of the Multilingual Amazon Reviews Corpus (MARC) (Keung et al., 2020). JSTS A regression task to predict the semantic similarity score between two sentences. 
The score ranges from 0 (least similar) to 5 (most similar). The data for this task are sourced from the Japanese version of the MS COCO Caption Dataset (Chen et al., 2015) and the YJ Captions Dataset (Miyazaki and Shimizu, 2016).

JNLI A three-way classification task to predict the relation between two sentences. The possible relations are {contradiction, neutral, entailment}, reflecting the categories utilized in the Stanford Natural Language Inference (SNLI) dataset (Bowman et al., 2015). The data source for this task is the same as that used for JSTS.

JCoLA A binary classification task to predict whether a Japanese text is syntactically acceptable or unacceptable. For further details, please refer to (Someya et al., 2024).

News A nine-way classification task to predict the news topic of a given news text. The news texts are sourced from Livedoor News. The possible topics are {Trend Topic News, Sports Watch, IT Life hack, Consumer Electronics, MOVIE, DOKUJOTSUSHIN, S-MAX, HOMME, Peachy}.

COVID-19 A four-way classification task to predict the factuality of tweets about COVID-19. The categories of factual information include "general fact," "personal fact," "opinion," and "impressions." The data for this task are sourced from https://www.db.info.gifu-u.ac.jp/covid-19-twitter-dataset/.

Dataset          Train     Dev.     Test
JGLUE MARC-ja    150,022   37,506   5,654
JGLUE JSTS       9,960     2,491    1,457
JGLUE JNLI       16,058    4,015    2,434
JGLUE JCoLA      4,000     1,000    865
News             4,375     625      1,475
COVID-19         4,375     625      7,547
Table 4: Dataset statistics.

A.2 Metrics

Spearman's Correlation Score This metric measures the consistency between two sets of rankings by calculating the correlation between their ranks. A score close to 1 indicates strong agreement, meaning the model's ranked outputs closely match the true ranked labels.

Matthews Correlation Coefficient (MCC) MCC measures the quality of binary classifications by considering true positives, false positives, true negatives, and false negatives in a balanced way. Its value ranges from -1 to 1, where 1 indicates perfect prediction, and -1 a complete inverse relationship.

Self-BLEU This metric calculates BLEU scores for generated text samples against other samples within the same set to measure diversity. Lower Self-BLEU indicates more diverse outputs. An illustrative sketch of these three metrics is given at the end of this appendix.

A.3 Additional Results

The diversity (Self-BLEU) and label correctness of News and COVID-19 are shown in Table 5. While the diversity of News and COVID-19 in few-shot is lower than that in zero-shot, few-shot JAPAGEN can improve the label correctness of News and COVID-19.

                      News     COVID-19
DIVERSITY (%)
  Gold                62.97    43.14
  Zero-shot           79.90    84.31
    w/ KADG           82.93    81.91
  Few-shot            79.25    83.40
LABEL CORRECTNESS (%)
  Gold                98.89    90.87
  Zero-shot           49.84    60.80
    w/ KADG           43.61    58.86
  Few-shot            57.33    64.43
Table 5: Diversity and label correctness of synthetic dataset in News and COVID-19.

A.4 Prompt for Each Task

For prompt details used in our experiments, please refer to https://github.com/retrieva/JapaGen due to the page limitation.
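The following is a minimal, illustrative sketch of how the three evaluation metrics above can be computed. It is not the authors' evaluation code (which is released in their repository), and the toy inputs are invented for demonstration.

```python
# Sketch: computing Spearman's correlation, MCC, and Self-BLEU on toy data.
# Assumes scipy, scikit-learn, and nltk are installed; the values below are made up.
from scipy.stats import spearmanr
from sklearn.metrics import matthews_corrcoef
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

# Spearman's correlation (e.g., JSTS): rank agreement between gold and predicted scores.
gold_scores = [4.2, 1.0, 3.5, 0.5]
pred_scores = [3.9, 1.2, 3.0, 0.8]
rho, _ = spearmanr(gold_scores, pred_scores)

# Matthews Correlation Coefficient (e.g., MARC-ja, JCoLA): balanced binary-classification quality.
gold_labels = [1, 0, 1, 1, 0]
pred_labels = [1, 0, 0, 1, 0]
mcc = matthews_corrcoef(gold_labels, pred_labels)

# Self-BLEU: average BLEU of each generated sample against the other samples in the
# same set; lower values indicate a more diverse synthetic dataset.
def self_bleu(samples):
    smooth = SmoothingFunction().method1
    scores = []
    for i, hyp in enumerate(samples):
        refs = [s.split() for j, s in enumerate(samples) if j != i]
        scores.append(sentence_bleu(refs, hyp.split(), smoothing_function=smooth))
    return sum(scores) / len(scores)

generated = ["this phone is great", "the phone works great", "terrible battery life"]
print(f"Spearman={rho:.3f}  MCC={mcc:.3f}  Self-BLEU={self_bleu(generated):.3f}")
```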
synthetic_cpt
3
Can_Large_Language_Models_Really_Improve_by_Self-critiquing_Their_Own_Plans.pdf
Can Large Language Models Really Improve by Self-critiquing Their Own Plans?

Karthik Valmeekam∗, School of Computing & AI, Arizona State University, Tempe. [email protected]
Matthew Marquez∗, School of Computing & AI, Arizona State University, Tempe. [email protected]
Subbarao Kambhampati, School of Computing & AI, Arizona State University, Tempe. [email protected]
∗Equal Contribution. Preprint. Under Review.

Abstract

There have been widespread claims about Large Language Models (LLMs) being able to successfully verify or self-critique their candidate solutions in reasoning problems in an iterative mode. Intrigued by those claims, in this paper we set out to investigate the verification/self-critiquing abilities of large language models in the context of planning. We evaluate a planning system that employs LLMs for both plan generation and verification. We assess the verifier LLM's performance against ground-truth verification, the impact of self-critiquing on plan generation, and the influence of varying feedback levels on system performance. Using GPT-4, a state-of-the-art LLM, for both generation and verification, our findings reveal that self-critiquing appears to diminish plan generation performance, especially when compared to systems with external, sound verifiers, and that the LLM verifiers in that system produce a notable number of false positives, compromising the system's reliability. Additionally, the nature of feedback, whether binary or detailed, showed minimal impact on plan generation. Collectively, our results cast doubt on the effectiveness of LLMs in a self-critiquing, iterative framework for planning tasks.

1 Introduction

Large Language Models have rapidly captured the attention of the AI research community with their exceptional natural language completion capabilities. Trained on web-scale language corpora, these models have demonstrated the ability to generate seemingly valuable completions across a wide range of topics. This led to a surge of interest in determining whether such models were able to perform well on reasoning tasks. Even though initial anecdotal results showed promise, systematic studies revealed their incompetence in reasoning, be it planning [12] or simple arithmetic or logic [3]. These results questioning the robustness of their reasoning abilities led researchers to explore ways to improve these systems. Of particular interest to us is the emerging research on self-critiquing, where the LLMs are used to critique their own candidate generations and iterate. The current works [15, 10, 14] exhibit considerable optimism about using LLMs to critique their own candidate generations, especially in an iterative setting where they keep refining their candidate generations. Additionally, the notion that, for reasoning, verifying correctness is computationally simpler than generation adds to the optimism. However, there are grounds to be skeptical about it, as the complexity of a reasoning task in the classical sense should be irrelevant to models like LLMs that do approximate retrieval. Intrigued by the prevailing optimism, in this paper, we set out to systematically investigate the effectiveness of using LLMs to critique their own generations in the context of planning. We look at the simplest class of planning problems, the goal-directed deterministic planning problems colloquially referred to as classical planning problems.
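To make the iterative self-critiquing setup concrete before the formal description in Section 4, the sketch below outlines a generic generate-and-critique ("backprompting") loop. It is only an illustration: generate_plan and critique_plan are hypothetical stand-ins for prompted calls to the generator and verifier LLMs, not the authors' implementation, and the iteration cap of 15 is the value the paper reports using.

```python
# Illustrative sketch of the iterative generate-and-self-critique ("backprompting") loop
# studied in this paper. `generate_plan` and `critique_plan` are hypothetical stand-ins
# for LLM calls; this is not the authors' implementation.
def backprompt_loop(problem, generate_plan, critique_plan, max_iters=15):
    feedback = None
    plan = None
    for _ in range(max_iters):
        # The generator LLM proposes a candidate plan, conditioned on any prior feedback.
        plan = generate_plan(problem, feedback)
        # The verifier (an LLM, or an external sound verifier such as VAL) critiques it.
        verdict, feedback = critique_plan(problem, plan)
        if verdict == "valid":
            return plan  # accepted by the verifier (which may itself be wrong)
    return plan  # iteration budget exhausted; return the last candidate
```

The paper's central question is what happens when critique_plan is played by an LLM rather than by a sound external verifier such as VAL.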
Our methodology employs a planning system that utilizes the same LLM for both generation and verification, which we term the LLM+LLM system in an iterative setting. Within this setting, the generator LLM continuously produces candidate plans, drawing upon feedback from the verifier LLM, until the verifier LLM either approves a candidate plan as correct or the number of iterations surpasses a predefined threshold. We present an empirical evaluation of (i) the effect of self-critiquing on the plan generation performance of the overall LLM+LLM system (ii) the performance of the verifier LLM in comparison to the ground-truth verification and finally (iii) the influence of varying feedback levels while critiquing the LLM’s generation on the overall system performance. For our study, we use GPT-4 [9] as both the generator and verifier. Our findings suggest that self-critiquing degrades the plan generation performance compared to when an external, sound verifier is utilized. This decline in performance can be directly attributed to the verifier LLM’s subpar results. The verifier LLM yields a significant number of false positives, which can severely undermine the system’s reliability. Furthermore, we explored whether the nature of feedback on invalid plans influences plan generation performance. Our results indicate that the type of feedback—whether it’s merely binary verification or combined with detailed feedback on the errors of the generated plan—doesn’t significantly impact plan generation performance. Thus, our systematic investigation offers compelling preliminary evidence to question the efficacy of LLMs as verifiers for planning tasks within an iterative, self-critiquing framework. In the rest of the paper, we first present the related work, then the required background before delving into the methodology and the evaluation. 2 Related Work There has been significant interest in investigating the reasoning capabilities of LLMs, spanning from planning [12] to logic and arithmetic [3], and even puzzles [15]. As the initial excitement from triumphant anecdotes about LLMs’ reasoning capabilities began to wane with systematic studies [12, 11, 3], researchers proposed that allowing LLMs to verify their own candidate solutions and iterate over this process could enhance their reasoning abilities [10, 7, 6, 14]. Our work systematically investigates the effect of iterative self-critiquing in the context of planning. There have also been studies that utilize multiple LLMs to generate and verify candidate solutions, either in the form of a debate [2] or through cross-examination [1]. However, these studies still rely solely on the verification/self-critiquing abilities of the LLMs, an aspect our work critically examines in the context of planning. Our results provide compelling reasons to question the use of LLMs for self-critiquing in planning. 3 Background We specifically are interested in classical planning problems that are represented within the PDDL (Planning Domain and Definition Language) framework [8]. These problem classes consist of a domain, initial state and a goal state. The domain consists of a set of predicates and a set of actions. The state-space of the planning problem is represented with some truth-assignment on the predicates. Every action in domain have a set of preconditions which determine when an action can be applied and a set of effects which determine the modifications to the state after the action is applied. 
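To make the PDDL-style notions just introduced concrete, the sketch below models states as sets of ground predicates and actions as precondition/add/delete triples, and checks a candidate action sequence step by step. The toy "move-a-ball" domain is invented for illustration; it is not the Blocksworld PDDL encoding used in the paper's experiments.

```python
# Minimal sketch of classical-planning notions: states as sets of true predicates,
# actions with preconditions and effects, and step-by-step plan validation.
from dataclasses import dataclass

@dataclass(frozen=True)
class Action:
    name: str
    preconditions: frozenset  # predicates that must hold before applying the action
    add_effects: frozenset    # predicates made true by the action
    del_effects: frozenset    # predicates made false by the action

def apply(state, action):
    if not action.preconditions <= state:
        raise ValueError(f"{action.name}: unmet preconditions {action.preconditions - state}")
    return (state - action.del_effects) | action.add_effects

def validate(initial_state, goal, plan):
    """Execute a plan from the initial state; report the first failure, if any."""
    state = frozenset(initial_state)
    for action in plan:
        try:
            state = apply(state, action)
        except ValueError as err:
            return False, str(err)  # first inexecutable action
    unmet = frozenset(goal) - state
    return (not unmet), (f"unmet goals: {unmet}" if unmet else "plan is valid")

pick = Action("pick(ball, roomA)", frozenset({"at(ball, roomA)", "handempty"}),
              frozenset({"holding(ball)"}), frozenset({"at(ball, roomA)", "handempty"}))
move = Action("move(roomA, roomB)", frozenset({"robot-at(roomA)"}),
              frozenset({"robot-at(roomB)"}), frozenset({"robot-at(roomA)"}))
drop = Action("drop(ball, roomB)", frozenset({"holding(ball)", "robot-at(roomB)"}),
              frozenset({"at(ball, roomB)", "handempty"}), frozenset({"holding(ball)"}))

init = {"at(ball, roomA)", "handempty", "robot-at(roomA)"}
goal = {"at(ball, roomB)"}
print(validate(init, goal, [pick, move, drop]))  # (True, 'plan is valid')
```

The failure messages in this sketch (first inexecutable action, unmet goal conditions) correspond to the kind of feedback an external validator such as VAL returns, which Section 5.3 compares against purely binary feedback.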
A plan here is a sequence of actions which are present in the domain that when executed in the initial state, satisfy the goal conditions. 2 Figure 1: Overall evaluation architecture 4 Methodology 4.1 The LLM+LLM planning system The LLM+LLM planning system (as shown in Figure 1) consists of a generator LLM and a verifier LLM. For a given instance, the generator LLM produces a candidate plan, while the verifier LLM determines its correctness. If the plan is found to be incorrect, the verifier provides feedback detailing the reasons for its failure. This feedback is then relayed to the generator LLM, prompting the generation of a new candidate plan. It’s worth noting that there are no constraints on the type or format of feedback the verifier LLM produces. The system ceases generation either when the verifier LLM approves the candidate plan as valid or when the number of prompting iterations exceeds a set threshold (for our experiments, this threshold is set at 15 iterations). This method is similar to the backprompting technique described in [12]. However, the main distinction lies in the type of verifier employed. In our system, both the verifier and generator are LLMs, whereas the referenced approach utilizes an external sound verifier, VAL [4]. For all our experiments, GPT-4 serves as the default LLM. 4.2 Prompt generation For the LLM+LLM Planning system described above, we utilize distinct prompts for the generator and verifier LLMs. The prompt generator (as shown in Figure 1) utilizes the PDDL domain and instance files to generate the required prompts in natural language. Our prompts are structured similarly to the natural language prompts found in [12]. For plan generation, our prompts are one-shot: we begin by presenting the domain description, followed by an example instance (along with its corresponding plan). We then present the query instance. These example instances are randomly selected from our set of instances, and this forms the input for the generator LLM. For the verifier LLM, we adopt a zero-shot approach. Here, we present the domain description, followed by the query instance and its corresponding plan. The verifier LLM is then tasked with verifying the query plan and providing feedback if necessary. As mentioned earlier, we do not restrict the type or format of the feedback for the verifier LLM. Detailed examples of the prompts given to both the generator and verifier LLMs can be found in the Appendix. 5 Evaluation and Analysis We evaluate our planning system on Blocksworld, a widely recognized common-sense planning domain in AI planning literature [5]. We generate 100 random instances for evaluation across various methods. To provide a ground-truth assessment of the final LLM plan’s correctness, we employ an external sound verifier, VAL [4]. For all experiments, GPT-4 [9] serves as the chosen LLM and was run with a temperature of 0, thereby making it deterministic. 3 5.1 Effect of self-critiquing on plan generation We assessed the impact of self-critiquing on plan generation by comparing the LLM+LLM back- prompting system with two other baselines. The first baseline is the LLM+VAL backprompting system, which mirrors the backprompting method described in [12]. In this method, the plan pro- duced by the LLM is validated by an external sound verifier, VAL. If the plan is found lacking, the generator-LLM is reprompted using feedback from VAL. The second baseline involves a generator- LLM without backprompting. 
Here, the generator LLM receives a single prompt, and the resulting plan is considered final. As illustrated in Table 1, the LLM+LLM backprompting approach slightly outperforms the non-backprompting method in terms of accuracy. However, it falls short when compared to the LLM+VAL system. It's worth noting that the marginal improvement over the generator-LLM-only method might not solely be attributed to the LLM verifier. The backprompting itself, which offers the generator LLM multiple opportunities to produce a plan, could be a contributing factor. The subpar performance of the LLM+LLM system, especially when compared to LLM+VAL, can likely be traced back to the substantial number of type-1 errors produced by the LLM verifier. It's evident that incorporating a sound verifier in the backprompting process can significantly enhance overall performance.

Plan Generation Method             Accuracy       Avg. Number of iterations
LLM+LLM w/ Backprompting (BP)      55/100 (55%)   3.48
LLM+VAL w/ BP                      88/100 (88%)   4.18
Generator LLM only w/o BP          40/100 (40%)   1.00
Table 1: Comparison between various plan generation methods on the Blocksworld domain.

5.2 Analysis on the self-critique verifier

We base our evaluation of the verifier LLM on its binary verification (i.e., determining whether the plan is valid or not) of the final plan produced by the LLM+LLM system. It's important to note that the system halts either when the verifier LLM considers the plan valid or when the number of iterations surpasses 15. We compare the LLM verifier's output with ground-truth classifications made using VAL [4], a sound verifier. To make the ground-truth determination available for each input plan, we separately evaluate that plan using VAL as well. As illustrated in Table 2, out of the 100 instances, the verifier accurately identifies 61 (or 61%). However, a deeper examination of the verifier's errors reveals a concerning number of false positives. In this context, a false positive refers to the verifier LLM deeming a generated plan valid when, in fact, it is not. Out of the 100 instances, the verifier LLM produces 54 true positives and 38 false positives (type-1 errors). This indicates that the verifier deemed 38 plans, which were actually invalid, to be valid, which can be catastrophic if such a system is deployed in scenarios where correctness is paramount.

                       Verifier LLM
Accuracy               61/100 (61%)
True Positive Rate     54/55 (98.2%)
False Positive Rate    38/45 (84.45%)
True Negative Rate     7/45 (15.55%)
False Negative Rate    1/55 (1.8%)
Table 2: Breakdown of Plan Verification results on Blocksworld domain. The denominators (in aspects other than Accuracy) are ground-truth values based on VAL.

5.3 Effect of the levels of feedback on plan generation

While the use of a sound verifier appears to enhance overall performance, we sought to further investigate the impact of varied levels of feedback on plan generation performance. We assessed the system's performance across four distinct feedback levels:

1. No Feedback: At this level, the initial plan generated by the LLM is considered to be final and no feedback is provided to the LLM.
2. Binary Feedback: This level simply indicates whether the generated plan is valid or not.
3. Inexecutable Action Feedback: If the plan is invalid and inexecutable, this feedback highlights the first inexecutable action and the unmet preconditions causing the inexecutability. If the plan is executable but fails to meet all goal conditions, the unmet goal conditions are presented. This feedback mirrors what VAL provides.
4. Open Conditions Feedback: This level treats the plan as a partial-order plan [13] and presents all the actions for which there exists at least one unmet precondition, along with the corresponding unmet preconditions. Further, it also presents the unmet goal conditions.

Table 3 showcases the LLM's performance when subjected to various levels of feedback (including one with no feedback). Interestingly, the amount of feedback provided to the LLM seems to have minimal influence on its performance improvement. As long as the binary feedback is accurate and the LLM is given ample opportunities to generate a plan, the detailed feedback on invalid plans doesn't appear to significantly enhance the LLM's performance. We have provided examples for each feedback level in the Appendix.

Levels of feedback                         Accuracy       Avg. no of steps
No feedback                                40/100 (40%)   1.00
Only binary feedback                       37/50 (74%)    5.38
Binary + First error feedback (by VAL)     43/50 (86%)    4.18
Binary + All errors feedback               43/50 (86%)    4.42
Table 3: Performance of LLM+VAL system on plan generation with varied levels of feedback.

6 Conclusion and Future Work

In this paper, we conducted a systematic investigation into the ability of Large Language Models (LLMs) to critique their own outputs, specifically within the context of classical planning problems. While recent research has been optimistic about LLMs' potential in self-critiquing, especially in iterative settings, our findings present a different perspective.

Our empirical evaluations on Blocksworld, a simple common-sense domain, highlighted the ineffectiveness of self-critiquing in LLMs in the context of planning. We showed that the verifier LLM generates a significant number of false positives, which can be detrimental to the overall system's reliability, particularly in domains where the correctness of plans is paramount. Interestingly, the nature of feedback, whether binary or detailed, did not have a pronounced impact on plan generation performance, suggesting that the core issue lies in the LLM's binary verification capabilities rather than the granularity of feedback. In the future, we plan to conduct more extensive experiments with respect to the number of instances, the number of domains, and prompting methods (such as chain-of-thought).

References
[1] Roi Cohen, May Hamri, Mor Geva, and Amir Globerson. LM vs LM: Detecting factual errors via cross examination. arXiv preprint arXiv:2305.13281, 2023.
[2] Yilun Du, Shuang Li, Antonio Torralba, Joshua B Tenenbaum, and Igor Mordatch. Improving factuality and reasoning in language models through multiagent debate. arXiv preprint arXiv:2305.14325, 2023.
[3] Nouha Dziri, Ximing Lu, Melanie Sclar, Xiang Lorraine Li, Liwei Jian, Bill Yuchen Lin, Peter West, Chandra Bhagavatula, Ronan Le Bras, Jena D Hwang, et al. Faith and fate: Limits of transformers on compositionality. arXiv preprint arXiv:2305.18654, 2023.
[4] Richard Howey, Derek Long, and Maria Fox. VAL: Automatic plan validation, continuous effects and mixed initiative planning using PDDL. In 16th IEEE International Conference on Tools with Artificial Intelligence, pages 294–301. IEEE, 2004.
[5] IPC. International planning competition, 1998.
[6] Geunwoo Kim, Pierre Baldi, and Stephen McAleer. Language models can solve computer tasks. arXiv preprint arXiv:2303.17491, 2023.
[7] Aman Madaan, Niket Tandon, Prakhar Gupta, Skyler Hallinan, Luyu Gao, Sarah Wiegreffe, Uri Alon, Nouha Dziri, Shrimai Prabhumoye, Yiming Yang, et al. Self-refine: Iterative refinement with self-feedback.
arXiv preprint arXiv:2303.17651, 2023. [8] Drew McDermott, Malik Ghallab, Adele E. Howe, Craig A. Knoblock, Ashwin Ram, Manuela M. Veloso, Daniel S. Weld, and David E. Wilkins. Pddl-the planning domain definition language. 1998. [9] OpenAI. Gpt-4 technical report, 2023. [10] Noah Shinn, Beck Labash, and Ashwin Gopinath. Reflexion: an autonomous agent with dynamic memory and self-reflection. arXiv preprint arXiv:2303.11366, 2023. [11] Tom Silver, Varun Hariprasad, Reece S Shuttleworth, Nishanth Kumar, Tomás Lozano-Pérez, and Leslie Pack Kaelbling. PDDL planning with pretrained large language models. In NeurIPS 2022 Foundation Models for Decision Making Workshop, 2022. [12] Karthik Valmeekam, Matthew Marquez, Sarath Sreedharan, and Subbarao Kambhampati. On the planning abilities of large language models–a critical investigation. arXiv preprint arXiv:2305.15771, 2023. [13] Daniel S Weld. An introduction to least commitment planning. AI magazine, 15(4):27–27, 1994. [14] Yixuan Weng, Minjun Zhu, Shizhu He, Kang Liu, and Jun Zhao. Large language models are reasoners with self-verification. arXiv preprint arXiv:2212.09561, 2022. [15] Shunyu Yao, Dian Yu, Jeffrey Zhao, Izhak Shafran, Thomas L Griffiths, Yuan Cao, and Karthik Narasimhan. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:2305.10601, 2023. 6
synthetic_cpt
1
LLM_Based_Multi-Agent_Generation_of_Semi-structured_Documents_from_Semantic_Templates_in_the_Public_Administration_Domain.pdf
A Survey of Large Language Models on Generative Graph Analytics: Query, Learning, and Applications

Wenbo Shang, Department of Computer Science, Hong Kong Baptist University, Hong Kong, China, [email protected]
Xin Huang, Department of Computer Science, Hong Kong Baptist University, Hong Kong, China, [email protected]

Abstract—A graph is a fundamental data model to represent various entities and their complex relationships in society and nature, such as social networks, transportation networks, financial networks, and biomedical systems. Recently, large language models (LLMs) have showcased a strong generalization ability to handle various NLP and multi-mode tasks to answer users' arbitrary questions and specific-domain content generation. Compared with graph learning models, LLMs enjoy superior advantages in addressing the challenges of generalizing graph tasks by eliminating the need for training graph learning models and reducing the cost of manual annotation. In this survey, we conduct a comprehensive investigation of existing LLM studies on graph data, which summarizes the relevant graph analytics tasks solved by advanced LLM models and points out the existing remaining challenges and future directions. Specifically, we study the key problems of LLM-based generative graph analytics (LLM-GGA) with three categories: LLM-based graph query processing (LLM-GQP), LLM-based graph inference and learning (LLM-GIL), and graph-LLM-based applications. LLM-GQP focuses on an integration of graph analytics techniques and LLM prompts, including graph understanding and knowledge graph (KG) based augmented retrieval, while LLM-GIL focuses on learning and reasoning over graphs, including graph learning, graph-formed reasoning and graph representation. We summarize the useful prompts incorporated into LLM to handle different graph downstream tasks. Moreover, we give a summary of LLM model evaluation, benchmark datasets/tasks, and a deep pro and cons analysis of LLM models. We also explore open problems and future directions in this exciting interdisciplinary research area of LLMs and graph analytics.

Index Terms—Graph, LLMs, GNNs, Prompt, Survey

I. INTRODUCTION

Large language models (LLMs) possess billions of parameters and have been trained on extensive corpora using training strategies like instruction tuning [1] [2] and Direct Preference Optimization (DPO) [3], enabling them to exhibit powerful reasoning and semantic representation capabilities, thereby advancing AI intelligence closer to human levels. Undoubtedly, LLMs currently serve as the foundation model for NLP tasks [4] [5] [6], showcasing strong generalization abilities to handle various NLP tasks such as question answering [7] [8], machine translation [9], code generation [10] [11], etc. LLMs have demonstrated extensive common knowledge and robust semantic comprehension abilities, fundamentally transforming existing text-processing workflows. While initially designed for text data, LLMs are increasingly being utilized for tasks
Fig. 1: Illustration of the LLM-GGA domain.
LLM-GGA do- main includes three principal components: LLM-based graph query processing (LLM-GQP), which necessitates the melding of graph analytics techniques and LLM prompts for query pro- cessing; LLM-based graph inference and learning (LLM-GIL), focusing on learning and reasoning over graphs; Graph-LLM- based applications that employ the graph-LLM framework to address non-graph tasks, such as recommendation systems. beyond language processing, aiming to leverage the robust ca- pabilities of LLMs across different tasks, showcasing superior performance. Graphs, as structured data, play a crucial role in various real- world application scenarios, including the citation networks [12], social networks [13], molecular graphs [14], web links [15], and to name a few. Various graph analytics tasks have been studied to show their usefulness, e.g., node classification, link prediction, subgraph mining, influence maximization, and so on. Their versatility and ability to capture complex rela- tionships have made graphs indispensable tools in academic research and industry platforms. Recently, one kind of graph- based learning model, graph neural network (GNN) [16] [17], has been widely studied and applied to solve challenging graph tasks. The GNN models utilize recursive message passing [18] and aggregation mechanisms [19] among nodes to derive representations of nodes, edges, or entire graphs, which have been used for various downstream tasks. This is thanks to the strong ability of GNN models to capture both graph structure and node features. However, GNNs exhibit weak generalization capabilities [20] [21] [22], requiring retraining for different graph tasks and showing limited transfer ability. LLM-GGALLM-GQPLLM-GILGraph-LLM-based applicationsGraphsLLMs+Graph QueriesLLMsAnswersLLMsGraphsGraphsGraph representationGraph learning tasksGraph reasoningKGs In other words, no universal graph foundation model could be easily generalized to handle various types of graph tasks. Therefore, whether LLMs’ powerful reasoning, semantic representation, and generalization capabilities can be applied to address graph tasks, leading to the inspiration of a graph foundation model, is the core of current efforts in leveraging existing large language models for graph-related tasks. In one word, can LLMs solve graph data tasks? More specifically, we study three detailed questions: (a) what specific graph tasks can LLMs answer? (b) How do LLMs tackle these tasks? (c) What is the effectiveness of LLM-based methods in solving these tasks compared with the existing graph-based approaches? To address the above question, this survey conducts a comprehensive study of existing relevant work on graph an- alytics and LLMs, focusing on exploring the key issue of the LLM-based generative graph analytics (LLM-GGA) field. Drawing from a thorough investigation of the LLM-GGA domain, we offer a structured and methodical analysis that delineates the field into three principal components: LLM- based graph query processing (LLM-GQP), which necessitates the melding of graph analytics techniques and LLM prompts for query processing; LLM-based graph inference and learning (LLM-GIL), focusing on learning and reasoning over graphs; and lastly, graph-LLM-based applications that employ the graph-LLM framework to address non-graph tasks, such as recommendation systems. The framework is shown in Figure 1. 
We categorize these three main components into a total of six directions to provide a guideline for researchers to conduct more in-depth studies. LLM-GQP includes graph understanding and KG-based augmented retrieval directions. LLM-GIL covers graph learning, graph-formed reasoning, and graph representation directions. The sixth direction is graph- LLM-based applications. The following section details these six directions: • Graph understanding tasks. This research direction is studying whether LLMs can solve graph algorithm prob- lems, exploring whether LLMs can comprehend graph structures to conduct graph mining and graph search. Cur- rent methods have primarily explored LLMs’ understand- ing of graph structures, such as shortest path, clustering coefficient computation [23] [24], and more complex problems like maximum flow and Hamilton path [25] [26] [27]. Two main methods are introduced: prompting and supervised fine-tuning (SFT). The prompting methods explore the LLM’s current structural understanding abil- ity through query processing. Meanwhile, SFT methods enhance LLMs’ structure understanding capability by tuning it on specific graph datasets. However, many more tasks are yet to be explored, such as the community search, keyword search, subgraph pattern mining, and other NP-hard complex graph problems [28] [29]. • Graph learning tasks. This direction explores whether LLMs can combine graph structure and attributes for learning, extracting features of nodes, edges, and graphs, and understanding the semantic information of graphs, for example, tasks like node classification, graph classi- fication, and GQL generation [30] [31] [32] [33]. There are two main pipelines: LLM-GNN pipelines and LLM pipelines. LLMs can leverage their powerful reasoning ability and vast knowledge repository to enhance GNNs and also can predict results directly. • Graph-formed reasoning. This direction explores how LLMs use graph structures to simulate human thinking during reasoning [34] [35] [36], enabling them to solve more complex reasoning problems such as algorithmic, logical, and mathematical tasks. Graph-formed reasoning involves two types of reasoning: think on the graph and verify on the graph. Think on the graph refers to LLMs deriving the final conclusion through the graph structure. Verify on the graph refers to verifying the correctness of the LLMs’ intermediate or final outputs through the graph structure. • Graph representation. This direction explores enhanc- ing graph representation with LLMs, particularly for Text Attribute Graphs (TAGs). LLMs’ strong text representa- tion capabilities allow text embeddings to capture deeper semantic nuances. However, the key challenge in this area remains how to capture and integrate graph structure into graph representation effectively [37] [38] [39]. There are three forms of graph representation: graph embed- ding, graph-enhanced text embedding, and graph-encoded prompts. Graph embedding methods transform a graph into a sequential format for LLM processing. Graph- enhanced text embedding methods integrate structure into text embedding, where the integration method can be concatenation. Graph-encoded prompts focus on the way a graph is described within prompts. • Knowledge Graph (KG) based augmented retrieval. This direction investigates the relationship between LLMs and Knowledge Graphs (KGs). With the emergence of LLMs, discussions have arisen regarding the potential replacement of KGs [40] [41] [42] [43]. 
Consequently, this paper discusses the limitations of LLMs in processing factual knowledge, evaluates strategies for improving LLM efficacy via KG-based augmented retrieval, and investigates potential avenues for future advancements in this field. • Graph-LLM-based applications. This part explores the tasks where graph-LLM-based methods can be applied for useful downstream application [44] [45] [46], such as recommendation systems, conversational understanding, and so on. We comprehensively analyze these six research directions of LLM-GGA to provide valuable definitions and highlighted methodologies. We also highlight the pros and cons of these methods and showcase future directions. To further explore the capabilities of LLMs reliably, this paper uses the prompting method to test the effectiveness of LLMs in tasks such as graph structure understanding, graph learning, and graph- formed reasoning. Details of the prompts and results obtained during testing are also provided. Additionally, we refine and compile commonly used and effective prompts for graph- related tasks, assisting researchers in conducting experiments. Furthermore, this paper also organizes and introduces the code for existing popular methods, benchmarks for LLM-GGA tasks, and evaluations measuring LLM performance in graph tasks to facilitate future research. Our contributions and the identified challenges for future research. In this paper, we provide a comprehensive survey of the state-of-the-art work on LLMs applied to graph data. We begin by delineating six critical directions in the field of LLM- GGA: graph structure understanding, graph learning, graph- formed reasoning, graph representation, KG-based augmented retrieval, and graph-LLM-based applications. This categoriza- tion clarifies the current work and offers a guideline for future research endeavors. In each direction, we propose a structured introduction and summarization using vivid examples and offer suitable specific pipelines. We analyze the advantages and limitations of current methodologies and suggest avenues for future research. Furthermore, we organize resources related to benchmarks, evaluations, and code links within the LLM- GGA domain to facilitate further investigation by researchers. Lastly, we identify the fundamental challenges in the LLM- GGA field, which are the primary obstacles to advancing LLM in solving graph tasks, including the fundamental issue of how sequential LLM handles structural graph data, the efficiency issue of large-scale graph data, and the NP-hard problems of complex graph analytics. This clarification guides the research direction for future work on LLM-GGA. Roadmaps. The organization of this paper is as follows. We the fundamental preliminaries and summarize first present the graph description language, which converts graphs into sequences before inputting them into LLMs in Section II. Then, we introduce six tasks of LLM-based graph analytics one by one. We present the graph structure understanding direction in Section III, graph learning direction in Section IV, graph-formed reasoning in Section V, graph representation in Section VI, KG-based augmented retrieval in Section VII and graph-LLM-based applications in Section VIII. In the above six directions, we clarify the tasks that LLMs can perform, discuss the methodologies, conduct a comparative analysis, and propose guidelines and principles in this direction. 
Fol- lowing this, Section IX introduces the popular datasets and new datasets for solving the above tasks and also provides metrics for evaluating LLMs or tasks in different directions. In Section X, we identify and discuss the current and upcoming challenges that LLM-GGA faces and future directions. Finally, our conclusions are presented in Section XI. II. PRELIMINARY language, which can transform the graph into sequential data as the input of LLMs. A. Graph Graph data represents complex relationships through nodes and edges, where nodes represent entities and edges represent their interconnections. This structure excels at modeling intri- cate networks such as social, biological, and transportation systems. It enables analyses like community detection and shortest path calculations, offering critical insights into the dynamics of various systems. Formally, a general graph can be represented as G = (V, E), where V and E denote the set of nodes and edges. V = {v1, v2, ..., vn} where the number of nodes is |V| and |V| = n. E = {eij} where the number of edges is |E| and eij is an edge from vi to vj. B. Graph Neural Network Graph Neural Networks (GNNs) [16] [17] are a type of deep learning model that can handle graph-structured data. The goal of these GNNs is to learn representations for each node, which are computed based on the node’s own features, the features of the edges connected to it, the representations of its neighbors, and the features of its neighboring nodes, v = AGGR(hl−1 hl v , {hl u − 1 : u ∈ Nv}; θl) (1) where hl v represents the representation of node v in the l-th layer. AGGR denotes the aggregation function that aggregates the representations of neighboring nodes from the previous layer. For the tasks that focus on individual nodes, e.g., node classification, the learned representations can be used directly to accomplish specific objectives. However, for the tasks that consider the entire graph, e.g., graph classification, a global representation can be obtained by pooling or applying other methods to the representations of all nodes. This global representation can then be used to perform the corresponding tasks. C. Large Language Models Currently, there is no precise definition for Large Language Models (LLMs). However, according to the pioneering surveys [47] [48] on LLMs, a distinction can be made between LLMs and Pre-trained Language Models (PLMs). LLMs are large language models with billion-level parameters that are pre- trained on massive amounts of data, such as Llama [5] and ChatGPT. Conversely, PLMs are pre-trained language models with million-level parameters that can be more easily fine- tuned on task-specific data. While LLMs and PLMs share similarities in their pre-training process, the former is char- acterized by its larger size and ability to generate human-like text. Thus, it is essential to consider the potential implications of using LLMs in various applications. In the subsequent section, we will initially introduce graph data, proceed to discuss GNNs as a paradigm of graph- based learning models, then introduce LLMs and distinguish LLMs and PLMs, and ultimately introduce graph description D. Graph Description Language Graphs are represented in the structured data in arbitrary shapes, while LLMs typically process sequential data, such as the text as a sequence of words. To bridge this gap, Fig. 2: Graph Structure Understanding tasks. the graph description language (GDL) transforms the graph into sequential data, which can be inputted into an LLM. 
Specifically, GDL aims to convert graphs into sequential data while retaining the structure and unique attributes of the graph. This conversion allows the graph’s information to be fed into an LLM for processing. There are several graph description languages: • Text description. Graph structure can be described using words such as ‘Node 1 is connected to Node 2’ and ‘There are three nodes connected to Node 1’. • Adjacency list. An adjacency list represents each vertex in the graph with the collection of its neighbouring vertices or edges. Node A is connected with node B and node C can be denoted as N (v) = {B, C}. • Edge list. An edge list represents the edge connections between two nodes in the graph. (A, B) indicates a connection between nodes A and B. • GML. Graph Modelling Language [49] consists of an unordered sequence of node and edge elements enclosed within ‘[·]’. • GraphML. Graph Markup Language [50] consists of XML containing a graph element and an unordered sequence of node and edge elements. • SQL. Several specialized SQL languages are designed specifically for working with graph data. These languages are also capable of serving as graph description lan- guages. Some notable examples include Cypher [51], a query language developed by Neo4j, and Gremlin [52], SPARQL [53], and GSQL [54]. They combine SQL- like syntax with graph-specific constructs and algorithms, making them suitable for complex graph analytics tasks. • Multi-modality encoding. Except for text description, graph structure can also be represented using image description and motif description. The graph can be visu- alized as an image and inputted into an LLM to process images. Alternatively, motifs such as stars, triangles, or clique patterns can represent the graph structure as input into an LLM. • Encode as a story. The graph can be encoded within a specific context, such as a friendship, co-authorship, social network, politician, or expert. For example, the connections between nodes can represent friendship re- lationships. We can assign names to the nodes, such as ‘David’ and ‘Alice’. Notably, (1) different graph description languages can yield different results of LLMs. Therefore, it is suggested to test with multiple GDLs and select the one with the best experi- mental results. (2) If needed, the LLM’s output form can be specified along with GDLs in the prompt. LLMs often generate excessive reasoning processes that may be unnecessary, so standardizing the LLM’s output can be beneficial. III. GRAPH STRUCTURE UNDERSTANDING TASKS Graph structure understanding tasks evaluate whether LLMs can comprehend graph structures. Simple tasks include the queries of neighbors, shortest paths, connectivity, the calcu- lation of graph radius, and the clustering coefficient. More complex tasks include solving maximum flow problems and performing topological sorting. These tasks need LLMs to comprehend graph structures locally and globally, as shown in Figure 2. In this section, we present 21 graph understanding tasks along with their definitions. Subsequently, we elaborate on the two main methods currently used to address graph structure understanding tasks: prompting and supervised fine- tuning LLMs. Graph Structure Understanding Tasks14320Graph Size CalculationGiven <graph>, what is the number of nodes and edges in this graph? Please answer with the number of nodes: X, number of edges: X. 14320Degree CalculationGiven <graph>, what is the degree of node 4? 
Or, like, find the node degree of node [given node] in the given graph.14320Connected Nodes SearchGiven <graph>. Is node 3 the 1-hop neighbor of node 4? List the answers after “Ans:” in the format of [Yes, No,]. 14320Edge ValidationGiven <graph>. Is there an edge between node 1 and node 2?14320Path SearchGiven <graph>. Simple path: Find a single path from node 0 to node 4 connected by edges in the given graph. Shortest path: Give the shortest path from node 0 to node 4.14320Attribute RetrievalGiven <graph>, what is the title of node 0?Abstract: Text in curve orientation, despite being one of the common…Title: Total Text A Comprehensive Dataset For Scene Text Detection And Recognition.14320Graph DensityGiven <graph>, what is the density of the given graph?14320EccentricityGiven <graph>, what is the eccentricity of the node 0?14320Pattern matchingGiven <graph>, in the given graph, the triangle must be connected by three edges, list the triangle after ”Ans:” in the format of [0-1-2]14320Topological SortingIn a directed graph with 5 nodes numbered from 0 to 4: node 0 should be visited before node 1, ... Q: Can all the nodes be visited? Give the solution.12100Bipartite Graph MatchingThere are 2 job applicants numbered from 0 to 1, and 3 jobs numbered from 0 to 2. Each applicant is interested in some of the jobs. Each job can only accept one applicant and a job applicant can be appointed for only one job. Applicant 0 is interested in job 1, ... Q: Find an assignment of jobs to applicants in such that the maximum number of applicants find the job they are interested in.14320Hamilton PathGiven <graph>, is there a path in this graph that visits every node exactly once? If yes, give the path. Note that in a path, adjacent nodes must be connected with edges. 14320Maximum FlowIn a directed graph with 5 nodes numbered from 0 to 4, and the edges are: an edge from node 0 to node 1 with capacity 10... Q: What is the maximum flow from node 0 to node 3?101559(a)(b)(c) (d) (e)(f)(g)(h)(j)(k)(l) (m)(n) 14320Graph DiameterGiven <graph>, what is the diameter of the given graph?(i) Task Prompts Graph Data Loading The structure of the [file path] molecular graph of the benzene ring contains a hexagon. Graph Size Detection Given [graph], what is the number of nodes and edges in this graph? Please answer with the number of nodes: X, number of edges: X. Degree Detection Given [graph], what is the degree of node 4? Or, find the node degree of node [given node] in the given graph. Connected Nodes Given [graph]. Is node 5 the 1-hop neighbor of node 4? List the answers after “Ans:” in the format of [Yes, No,]. Edge Detection Given [graph]. Is there an edge between node 1 and node 2? Path Simple path: Given the undirected graph with the specified nodes and edges, nodes: [0, 1, 2, 3, 4], edges: [(0, 1), (1, 4), (1, 3), (4, 3), (3, 2)], find a single path from node 1 to node 2 connected by edges in the given graph. Shortest path: Given the directed graph with the specified nodes and edges, nodes: [0, 1, 2, 3, 4], edges: [(0, 1), (1, 4), (1, 3), (4, 3), (3, 2)], give the shortest path from node 0 to node 4. Attribute Retrieval Given [graph]. What is the title of node 0? Graph Density Given [graph]. What is the density of the given graph? Eccentricity Given [graph]. What is the eccentricity of the given graph? Graph Radius Given [graph]. What is the radius of the given graph? Graph Diameter Given [graph]. What is the diameter of this graph? Graph Periphery Given [graph]. What is the periphery of this graph? 
Or What are the nodes included by the periphery of the given graph? Clustering Coefficient Computing Given [graph]. What is the clustering coefficient of [given node]? TABLE I: Prompts for Graph Structure Understanding Tasks, where [graph] is the input of the data. A. Task Introduction 1) Graph size calculation: Graph size refers to the number of nodes and edges in a graph. Given a general graph G = (V, E), the graph size detection task is to detect the |V| and |E| in G. Through this task, LLMs are expected to understand the fundamental structure of a graph accurately. Given a prompt describing the graph and asking related queries, LLMs are supposed to determine |V| and |E|, as shown in Figure 2 (a). 2) Degree calculation: The degree detection task involves determining the degree of a specific node in a graph. The neighbors of node v can be denoted as N (v) = {u|(u, v) ∈ E(v)}, where E(v) is the edge set including edges connected to v. The degree of vi is the number of its neighbors in G, which can be denotes as degG(vi) = |N (vi)|. Through this task, LLMs are expected to comprehend the context surrounding vi and identify N (vi) accurately. By inputting a prompt about vi and G, LLMs are expected to calculate the degree of the node. This task is shown in Figure 2 (b). 3) Connected nodes search: The connected nodes detection task involves finding all the nodes in NG(vi) of vi in G. Given the prompt about G, LLMs are expected to analyze the local structure of the given node vi and determine NG(vi), as shown in Figure 2 (c). 4) Edge validation: The edge detection task refers to whether there exists an edge eij or eij between vi and vi. Through this task, LLMs are expected to accurately identify the connectivity between nodes and understand the local structure of nodes. Given the prompt about the neighbors of vi to the LLMs, LLMs will likely indicate whether eij or eij exists, as shown in Figure 2 (d). 5) Path search: We consider two types of paths, including the simple path and the shortest path, as shown in Figure 2 (e). Given a graph G = {V, E}, the simple path task involves detecting whether there exists a path (vi, ..., vj) between a source node vi and a target node vj in G. In other words, it is about finding a simple path (vi, ..., vj) between vi and vj without specific requirements. This task evaluates the ability of LLMs to traverse a graph and understand its structure. Given the prompt about G to LLMs, the goal is to return a simple path from vi to vj. Given a weighted directed acyclic graph G = {V, E} with each edge e ∈ E has a non-negative weight w(e), the shortest paths task involve finding a path p = (e1, e2, . . . , en) from a source node to a target node in G such that the sum of the weights of edges w(p) = (cid:80)n i=1 w(ei) is minimized. LLMs can evaluate the length of the shortest path and identify the qualified paths. This task can be further divided into three objectives: 1. Finding the shortest path between two nodes. 2. Finding all the shortest paths for all paired nodes. 3. Finding the average length of all the shortest paths. This task assesses whether the LLM can effectively determine the shortest route between two specified nodes within the graph. 6) Attribute retrieval: The attribute retrieval task involves retrieving detailed information related to nodes, such as the Task Prompts Graph Partition In the academic collaboration network dblp, scholar #355233 is involved in [TBR] local community formed by his/her collaborators. 
Graph Searching According to the Freebase knowledge graph, the relation between entity /m/027rn and entity /m/06cx9 is [TBR]. Pattern matching Triangle: find a single triangle containing node X. Or in the given graph, the triangle must be connected by three edges, list the triangle after ”Ans:” in the format of [0-1-2]. Cliques: find all the cliques with N nodes in the given graph, list all the cliques after ”Ans:” in the format of [0-1-2] and separate the answers by a comma. Wedge Centering find a single wedge containing node X in the given graph, node X must be the center of this wedge, list the wedge after ”Ans:” in the format of [0-1-2]. Cycle Check In an undirected graph, (i,j) means that node i and node j are connected with an undirected edge. The nodes are numbered from 0 to 5, and the edges are: (3,4) (3,5) (1,0) (2,5) (2,0) Q: Is there a cycle in this graph? Topological Sort In a directed graph with 5 nodes numbered from 0 to 4: node 0 should be visited before node 4, ... Q: Can all the nodes be visited? Give the solution. Maximum Flow In a directed graph with 5 nodes numbered from 0 to 4, and the edges are: an edge from node 0 to node 1 with capacity 10... Q: What is the maximum flow from node 0 to node 3? Bipartite Graph Matching There are 2 job applicants numbered from 0 to 1, and 3 jobs numbered from 0 to 2. Each applicant is interested in some of the jobs. Each job can only accept one applicant and a job applicant can be appointed for only one job. Applicant 0 is interested in job 1, ... Q: Find an assignment of jobs to applicants in such that the maximum number of applicants find the job they are interested in. Hamilton Path Given [graph], is there a path in this graph that visits every node exactly once? If yes, give the path. Note that in a path, adjacent nodes must be connected with edges. Graph Neural Networks Given [graph]. Embeddings: node 0: [1,1], ... In a simple graph convolution layer, each node’s embedding is updated by the sum of its neighbors’ embeddings. Q: What’s the embedding of each node after one layer of simple graph convolution layer? Dynamic Graph In an undirected dynamic graph, (u, v, t) means that node u and node v are linked with an undirected edge at time t. Your task is to answer when two nodes are first connected in the dynamic graph. Two nodes are connected if there exists a path between them. Given an undirected dynamic graph with the edges [(0, 1, 0), (1, 2, 1), (0, 2, 2)]. When are node 0 and node 2 first connected? TABLE II: Prompts for Graph Structure Understanding Tasks, where [graph] is the input of the data. [TBR] means to be reasoned by LLMs. attributes of a node. For example, in a citation network, LLMs are tasked with retrieving specific attributes of a node, such as the title, abstract, or author of a paper. Given the prompt about G and detailed attribute information, LLMs are expected to retrieve the required information, as shown in Figure 2 (f). 7) Graph density: Graph density represents the ratio be- in a graph and the tween the number of edges present maximum number of edges that the graph can have. For an undirected simple graph G = {V, E}, the graph density is defined as: D = 2|E| |V|(|V| − 1) (2) For a directed simple graph, the graph density is defined as: D = |E| |V|(|V| − 1) (3) This task requires LLM to calculate the density of a given graph and assess its understanding of the entire graph, as shown in Figure 2 (g). 
8) Eccentricity: The eccentricity of a node in a graph is defined as the length of the longest shortest path starting at that node. The eccentricity of one node: this task requires LLMs to answer the eccentricity of a given node. The eccentricity of many nodes: this task requires LLMs to answer the eccentricity of a subset of nodes or all the nodes in the graph, as shown in Figure 2 (h). 9) Graph radius: Based on the eccentricity of nodes, the radius of a graph is the minimum eccentricity of any vertex in the graph. LLMs can calculate the radius of the given graph with the description of the graph. 10) Graph center: The center of a graph is the set of vertices of graph eccentricity equal to the graph radius. Based on the eccentricity task and graph radius task, LLMs should be given the graph information and asked to calculate the graph center. 11) Graph diameter: Based on the shortest path, the diam- eter of a graph is the length of the shortest path between the most distant nodes. LLMs can calculate the graph’s diameter with the given graph information, as shown in Figure 2 (i). 12) Graph periphery: Based on the graph eccentricities and graph diameter, the graph periphery is a set of vertices that have graph eccentricities equal to the graph diameter. LLMs can answer questions related to the graph periphery using the given graph information. 13) Clustering coefficient computing: The clustering coef- ficient is a measure of how connected a vertex’s neighbors are to one another. We define the edges among neighbors of vi as {ejk : vj, vk ∈ NG(vi), ejk ∈ E}. For directed graphs, the clustering coefficient is defined as: Ci = |{ejk : vj, vk ∈ NG(vi), ejk ∈ E}| |NG(vi)||NG(vi) − 1| (4) For undirected graphs, the clustering coefficient is defined as: Ci = 2|{ejk : vj, vk ∈ NG(vi), ejk ∈ E}| |NG(vi)||NG(vi) − 1| (5) LLMs can calculate the clustering coefficient as a measure of the degree to which nodes in a graph tend to cluster together. 14) Graph partition: This task is an online social network reasoning task, which is to infer the community structure of an online social network by partitioning users into different clusters based on their interaction information. Each cluster represents a social community formed by users who interact with each other frequently. LLMs partition the users of the social network based on user social interaction patterns and generate the resulting cluster assignments. 15) Graph searching: This task is a knowledge graph reasoning task, which involves inferring relationships between entities based on their information or inferring connected entities based on the information of entities and relationships. Specifically, LLM takes entities or relationships as input and searches for relevant entities or relationships to generate output. 16) Pattern matching: This task is to identify star, wedge, triangle, or clique patterns that contain a target node. The target node can be defined as the center of the pattern. Alternatively, the task can involve identifying whether these patterns exist in a given graph and determining the number of occurrences. Given a description of the LLM graph, the goal is for LLM to identify different patterns and provide the corresponding answers, as shown in Figure 2 (j). 17) Cycle validation: This task is to determine whether a graph contains a cycle. Given G = {V, E}, a cycle is a non- empty trail with a vertex sequence (v1, v2, ..., vn, v1). Given the graph information, LLM is asked to determine whether this graph has a cycle. 
18) Topological sorting: Topological sorting of a directed graph G = {V, E} refers to a linear ordering of its nodes, where each node comes before all the nodes it points to, for example, there exists a directed edge eij from vi to vj, vi comes before vj in the ordering. The resulting array of node ordering is called topological ordering. LLM is required to generate a valid topological sorting for the given directed graph, and there may be multiple valid solutions, as shown in Figure 2 (k). 19) Maximum flow: Given a capacity constraint, the max- imum flow problem involves finding the maximum flow that can be sent through pipes, channels, or other pathways in a network. Define a flow as fij from vi to vj and the capacity on edge eij as cij. Given the capability constraints, fij ≤ cij Fig. 3: Examples for Path Task with GPT3.5 - Graph Structure Understanding Tasks. Fig. 4: Examples for Maximum Flow Task with GPT3.5 - Graph Structure Understanding Tasks. Fig. 5: Examples for Bipartite Graph Matching Task with GPT3.5 - Graph Structure Understanding Tasks. Fig. 6: Promoting methods in graph structure understanding tasks. There are three categories: manual prompts, self-prompting, and API call prompts. for all eij. Meanwhile, (cid:80) fij >0 fij = (cid:80) fji>0 fji for ∀vi except for the source and the target {s, t} Given a network graph, LLM generates a path that maximizes the flow from the source to the sink, as shown in Figure 2 (l). 20) Bipartite graph matching: A bipartite graph is a type of graph where the nodes can be divided into two disjoint sets, U and V, such that there are no adjacent nodes within each set. A matching in a bipartite graph is a set of edges where no two edges share an endpoint. In a maximum matching, if any edge is added, it is no longer a matching. For a given bipartite graph, there can be multiple maximum matchings. LLM can generate a solution that finds the maximum matching, as shown in Figure 2 (m). 21) Hamilton Path: In an undirected graph, a Hamiltonian path is a path in the graph that visits each vertex exactly once. Given an undirected graph, the task is for LLM to find a valid Hamiltonian path, as shown in Figure 2 (n). B. Graph Structure Understanding Methods in The rise of LLMs has sparked researchers’ interest exploring their powerful text processing and generalization capabilities for graph reasoning. Therefore, existing efforts have introduced various benchmarks to test LLMs’ graph reasoning potential, aiming to explore their capacity to address graph-related problems. Prompting methods have emerged as the primary approach to assess LLMs’ understanding of graph structures, with some studies also focusing on fine-tuning LLMs to enhance their graph reasoning abilities. Thus, the following two main methods are introduced: prompting method and fine-tuning LLMs. 1) Prompting method: The prompting method [55] can be categorized into three main types: manual prompt, self- prompting, and API call prompt, as shown in Figure 6. Most studies utilize manual prompts, where carefully crafted prompts guide LLMs to comprehend graph structures better and understand the objectives of graph tasks, thereby leading to improved performance on graph-related tasks. Manual prompts. NLGraph [27] introduces a benchmark aiming to assess the understanding capabilities of LLMs in processing textual descriptions of graphs and translating them into conceptual spaces. 
This benchmark covers various graph reasoning tasks like connectivity, shortest path, maximum flow, and graph neural network construction, with three difficulty levels (easy, medium, hard) based on graph size and density. Meanwhile, the number of nodes n = |V| and the probability p control edge generation, allowing manipulation of graph size and density for a more reliable evaluation of LLM potential in graph comprehension. Next, to guide LLMs in solving these graph tasks, two prompt methods are proposed by NLGraph [27]: build-a-graph prompting and algorithmic prompting.
Prompt III-1: Build-a-Graph Prompting. The build-a-graph prompting method guides LLMs toward conceptual grounding by adding one sentence (shown in red in the original paper) to the prompt:
Prompt III-1: Build-a-Graph Prompting
Given <graph description>. Let's construct a graph with the nodes and edges first. Q: What is the degree of node 4?
Prompt III-2: Algorithmic Prompting. The algorithmic prompting method is designed to guide LLMs to engage in algorithmic reflection and thinking by adding the details of the algorithm (shown in red in the original paper) to the prompt:
Prompt III-2: Algorithmic Prompting
We can use a Depth-First Search (DFS) algorithm to find the shortest path between two given nodes in an undirected graph. The basic idea is to start at one of the nodes and use DFS to explore all of its adjacent nodes. At each node, you can keep track of the distance it takes to reach that node from the starting node. Once you have explored all the adjacent nodes, you can backtrack and pick the node which has the shortest distance to reach the destination node. Given <graph description>. Q: Give the shortest path from node 0 to node 4.
Compared with other advanced prompts and in-context learning techniques, the two proposed prompts perform better on graph tasks. Based on the experiments, LLMs indeed possess preliminary graph reasoning abilities. Also, the benefits of advanced prompting and in-context learning diminish in complex graph problems and may even have a negative impact. LLMs are also susceptible to false correlations, performing poorly on graph structures such as chains and cliques.
To explore whether LLMs can truly comprehend graph structures and reason over them, and meanwhile to enhance performance on LLM-GQP tasks, [26] and [24] also test LLMs using manual prompts. [26] explores the conditions under which LLMs can benefit from the inherent structural information in the data and examines two potential factors influencing LLM performance: data leakage and homogeneity. In summary, the conclusions are as follows:
• No evidence suggests that LLM's performance is significantly attributed to data leakage.
• The performance of LLMs on target nodes is positively correlated with the local homogeneity of the nodes.
[24] investigates the graph reasoning capabilities of LLMs and introduces new evaluation metrics (comprehension, correctness, fidelity, and rectification) to assess LLMs' proficiency in understanding graph structures and performing reasoning tasks. The findings reveal that LLMs can effectively understand graph structures and perform reasoning tasks. However, LLMs still face challenges in structural reasoning, particularly in multi-answer tasks, where GPT models demonstrate errors and overconfidence. In contrast, GPT-4 displays improved self-correction abilities.
Beyond static graphs, LLMs' ability to understand dynamic graph structures is also assessed. Dynamic graphs change over time, capturing temporal network evolution patterns. LLM4DyG [25] introduces the LLM4DyG benchmark, which uses prompting methods to evaluate LLMs' spatio-temporal understanding capabilities on dynamic graphs.
Prompt III-3: DST2. The newly proposed Disentangled Spatial-Temporal Thoughts (DST2) prompting technique enhances LLMs' spatial and temporal understanding of dynamic graphs. DST2 is shown below:
Prompt III-3: DST2
DyG Instruction: In an undirected dynamic graph, (u, v, t) means that node u and node v are linked with an undirected edge at time t.
Task Instruction: Your task is to answer when two nodes are first connected in the dynamic graph. Two nodes are connected if there exists a path between them.
Answer Instruction: Give the answer as an integer number at the last of your response after 'Answer:'
Exemplar: Here is an example: Question: Given an undirected dynamic graph with the edges [(0, 1, 0), (1, 2, 1), (0, 2, 2)]. When are node 0 and node 2 first connected? Answer: 1
Question: Given an undirected dynamic graph with the edges [(0, 9, 0), (1, 9, 0), (2, 5, 0), (1, 2, 1), (2, 6, 1), (3, 7, 1), (4, 5, 2), (4, 7, 2), (7, 8, 2), (0, 1, 3), (1, 6, 3), (5, 6, 3), (0, 4, 4), (3, 4, 4), (3, 6, 4), (4, 6, 4), (4, 9, 4), (6, 7, 4)]. When are node 2 and node 1 first connected?
Results show that LLMs have preliminary spatio-temporal understanding capabilities on dynamic graphs. Dynamic graph tasks become increasingly challenging with larger graph sizes and densities, while remaining insensitive to time periods and data generation mechanisms.
We provide manual prompt examples for various graph structure understanding tasks in Table I and Table II. Additionally, we test LLMs with GPT-3.5 for the path, maximum flow, and bipartite graph matching tasks using manual prompts, as shown in Figure 3, Figure 4 and Figure 5, respectively.
Self-prompting. Self-prompting refers to the process where an LLM continuously updates the initial prompt to make it easier for LLMs to understand and more beneficial for solving tasks. In other words, the LLM designs prompts based on the original prompt. GPT4Graph [23] utilizes self-prompting by continuously updating the prompt with descriptions related to the graph. Specifically, first, the graph data is converted into graph description languages, as shown in Section II-D. Then, together with queries, it is input into the prompt handler to create a prompt, which is then input into the LLM. Based on the output of the LLM, the prompt is updated and re-input into the LLM, repeating multiple rounds of updates to obtain an optimized graph description context, with operations such as context summarization and format explanation. This process can be seen as the LLM's self-updating prompt procedure, sketched below.
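The following is a minimal sketch of such a self-prompting loop, assuming a generic chat-completion interface. The function call_llm is a placeholder for whichever model API is used, and the refinement instructions are illustrative; this is not GPT4Graph's actual prompt handler.

```python
def self_prompting(graph_description: str, query: str, call_llm, rounds: int = 3) -> str:
    """Iteratively ask the LLM to rewrite the graph context so it better supports the query.

    `call_llm` is a placeholder: a function that takes a prompt string and returns the
    model's text response. The refinement instructions below are illustrative only.
    """
    context = graph_description
    for _ in range(rounds):
        refine_prompt = (
            "You are a brilliant graph master.\n"
            f"Current graph context:\n{context}\n\n"
            f"Target query: {query}\n"
            "Rewrite the graph context so that it is easier to use for answering the query "
            "(for example, summarize it and explain its format). Return only the new context."
        )
        context = call_llm(refine_prompt)          # the LLM updates its own prompt context
    final_prompt = f"{context}\n\nQuery: {query}"  # optimized context plus the original query
    return call_llm(final_prompt)
```

Each round replaces the graph description with the LLM's own rewritten version, and the final query is answered against that optimized context.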
Finally, the optimized graph description context is input along with the original input into the LLM to obtain the final result.
Prompt III-4: Self-prompting. The original input prompt is shown below:
Prompt III-4: Self-prompting
Instructor: You are a brilliant graph master that can handle anything related to graphs like retrieval, detection and classification.
Graph description language: GML, GraphML, as shown in Section II-D.
Context: Node P357 has 4 neighbors, each of which is about anomaly detection with statistical models...
Query: What is the clustering coefficient of node P357?
This paper conducts experiments on the ogbn-arxiv [56] and Aminer [57] datasets and finds that:
• The design of prompts significantly impacts the results. The choice of graph description language, the organization of input data, and the position of in-context knowledge, such as questions, statements, and examples, all affect the model's ability to understand the graph structure.
• Role prompting techniques can improve the effectiveness of LLMs by guiding the model to view the graph as roles and relationships between roles in a specific context. Providing LLMs with more semantic information leads to more accurate results.
• Examples in prompts have mixed impacts on graph structure understanding. Adding examples in prompts to guide LLMs in understanding graph structures may not necessarily improve the results; in some graph structure learning tasks, examples may introduce noise.
API call prompts. LLMs exhibit limited ability to perform precise mathematical calculations, multi-step logical reasoning, spatial topological structuring, and temporal information processing. To bridge these gaps, taking inspiration from recent models such as ChatGPT and Toolformer [58], Graph-ToolFormer [59] is proposed to equip LLMs with graph reasoning capabilities by training them over a prompt dataset that contains graph reasoning API calls annotated by ChatGPT. These graph reasoning APIs are used to call external reasoning tools. Then, the trained LLMs can solve graph tasks, from loading graph data and inferring graph attributes to graph partition tasks. The framework consists of three parts. First, it generates a prompt dataset by providing ChatGPT with a regular prompt, guiding ChatGPT to add an API call to the original prompt, and thus creating a prompt with an API call.
Prompt III-5: API call prompts
Example 1
Input: (Regular prompt) The structure of the benzene ring molecular graph of benzene ring contains a hexagon.
Output: (API call prompt) The structure of the [GL("benzenering")] molecular graph of benzene ring contains a hexagon.
Example 2
Input: (Regular prompt) What is the diameter of the binomial tree?
Output: (API call prompt) The diameter of the binomial tree is [GR(GL("gpr", "binomial tree"), "toolx:diameter") → r].
Second, fine-tune existing LLMs such as GPT-J [60] [61], LLaMA [5] [62], etc., using technologies like LoRA [63] on the generated prompt dataset. Third, utilize the fine-tuned LLM for inference to add graph reasoning API calls into statements.
Fig. 7: Supervised fine-tuning (SFT) method in graph structure understanding tasks. Prefix tuning is shown above: graph structural and textual information are combined as prefixes and input into the LLM together with instructions, as in GraphLLM [64]. Instruction tuning can also be used.
After generating API call statements, how can external graph tools be invoked? Graph reasoning query processing comes in, illustrated by the toy sketch below.
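As a toy illustration of this query processing step, the sketch below parses an annotation in the style of Prompt III-5 and dispatches it to a local graph library. The annotation pattern, the toolx:diameter tool name, and the dispatch table mirror the example above for illustration only; Graph-ToolFormer's actual query processor is not reproduced here.

```python
import re
import networkx as nx

# Hypothetical registry mapping tool names from API call annotations to local functions.
TOOLS = {
    "toolx:diameter": nx.diameter,
    "toolx:radius": nx.radius,
}

def run_api_call(statement: str, graphs: dict):
    """Parse one '[GR(GL("gpr", "<graph>"), "<tool>") -> r]' style annotation and execute it."""
    match = re.search(r'GR\(GL\("gpr",\s*"([^"]+)"\),\s*"([^"]+)"\)', statement)
    if match is None:
        return None                 # no graph-reasoning call found in the statement
    graph_name, tool_name = match.groups()
    graph = graphs[graph_name]      # GL(...): load the named graph
    return TOOLS[tool_name](graph)  # GR(...): run the requested reasoning tool

graphs = {"binomial_tree": nx.binomial_tree(4)}
statement = 'The diameter of the binomial tree is [GR(GL("gpr", "binomial_tree"), "toolx:diameter") -> r].'
print(run_api_call(statement, graphs))
```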
Graph reasoning query processing entails utilizing external graph reasoning tools based on API call statements to obtain the final answer.
2) Supervised fine-tuning (SFT) method: Beyond leveraging prompts for graph-structured tasks with LLMs, certain studies have also implemented supervised fine-tuning of LLMs, illustrated in Figure 7. GraphLLM [64] is committed to addressing the obstacles in graph reasoning by LLMs and introduces a hybrid model that inherits the capabilities of both graph learning models and LLMs, enabling LLMs to interpret and reason about graph data proficiently, utilizing the superior expressive power of graph learning models.
C. Comparisons and Discussions
In the following part, we compare the prompting and SFT methods mentioned above.
The prompting method can be divided into three categories: manual prompts, self-prompting, and API call prompts. Most current methods primarily rely on manual prompts, incorporating techniques like Chain of Thought (CoT) [65], self-consistency [66], and in-context learning [67]. To obtain better prompt representations, self-prompting methods are also widely used. However, the exclusive use of manual prompts and self-prompting offers limited enhancement to model performance, as they merely tap into the pre-existing capabilities of LLMs. Additionally, due to the limited input window of LLMs, the graph size that can be input to an LLM at once is also restricted, while graph sizes in the real world are typically large.
For the prompting method, we also propose two feasible directions to better leverage existing LLMs for handling structure understanding tasks. The first direction is breaking down complex tasks into several sub-problems. While LLMs can tackle simple graph tasks, they struggle with more challenging ones. Breaking down complex graph understanding tasks into simpler components enables LLMs to engage in multi-step reasoning processes, leading to the resolution of complex issues; approaches such as GoT [59] can help address more intricate graph tasks like generating GNN frameworks, k-truss tasks, kd-core tasks, etc. The second direction is API call prompts. Inspired by ToolFormer [58], LLMs can be trained as agents to utilize tools for graph tasks that are hard to solve. However, current API call prompt methods [59] utilize LLMs not as agents but solely to convert user queries into API command strings for processing by subsequent programs, exemplified in Prompt III-5.
However, compared to prompting methods, fine-tuning LLMs with graph data seems a better way to enhance their understanding of graph structures. There are two mainstream methods for fine-tuning LLMs: Supervised Fine-Tuning (SFT) and Reinforcement Learning with Human Feedback (RLHF) [6]. SFT helps LLMs understand prompts and generate meaningful responses. However, SFT only offers a single human-written response for each prompt, whereas RLHF provides detailed human feedback through pairwise comparison labeling. Furthermore, to address the instability issue in PPO [68] training, Reward Ranked Fine-Tuning (RAFT) [69] can also be attempted, which requires online interaction. For offline algorithms, methods like DPO [3] and Preference Ranking Optimization (PRO) [70] can also be utilized for training LLMs.
Fig. 8: Graph Learning tasks.
IV. GRAPH LEARNING TASKS
A.
Tasks Introduction Recently, LLMs have been shown to possess extensive common sense and powerful semantic understanding capa- bilities, fundamentally transforming the existing workflow for processing text. However, whether LLMs can effectively handle graph learning tasks, transferring their generalization ability from text tasks to graph learning tasks, such as node and graph classification, is still a research subject that needs exploring. These tasks require the model to learn and solve graph learning tasks, as shown in Figure 8. In this section, we present seven graph learning tasks along with their definitions. Next, we introduce graph learning methods, categorized into three types based on the role of LLMs: LLMs act as enhancers, LLMs act as predictors, and graph prompts. 1) Node classification: The node classification task requires LLM to learn based on the neighbors of a node or the attributes of a node. It involves classifying unseen nodes in a given graph, such as categorizing papers in an academic network into different research directions, as shown in Figure 8 (a). 2) Graph classification: The graph classification task re- quires LLM to classify the entire graph. LLM is given several labeled graphs and is expected to classify unseen graphs. For example, a molecule can be viewed as a graph, and LLM can predict the properties or functions of the molecule by classifying the graph, as shown in Figure 8 (b). 3) Edge classification: The edge classification task involves classifying the edges in a graph. Existing methods improve edge classification by training a learnable graph prompt and combining it with a GNN or LLM, as shown in Figure 8 (c). 4) Node generation: The node generation task refers to pro- viding requirements for an LLM to generate nodes, allowing it to generate node attributes, which are then added to the TAG to enhance it. 5) Knowledge graph question qnswering (KGQA): Knowl- edge graph organizes data into a structured format, represent- ing entities, properties, and relationships. Knowledge graph question answering (KGQA) aims to capture the most appro- priate answers by querying the knowledge graph (KG) using natural language questions. This task evaluates the ability of LLM to reason and understand the underlying graph structure to provide accurate answers, as shown in Figure 8 (d). 6) Graph query language (GQL) generation: The graph query language generation task involves generating graph Graph Learning TasksUSKGQAGiven <knowledge graph>, the director who directs Inception also direct what?InceptionNolanOppenheimerLeonardois starred byGQL GenerationGiven <graph>, the director who directs Inception also direct what? Use Cypher to answer.14320Node ClassificationGiven <graph>, which arxiv CS subcategory does paper ”paper title” with abstract ”paper abstract” belongs to? use the abbreviation to answer.Abstract: Text in curve orientation, despite being one of the common…Title: Total Text A Comprehensive Dataset For Scene Text Detection And Recognition.CCCCCOHCGiven <graph>, is this molecule active with H3C4?Graph Classification14320Node Feature ExplanationGiven <graph>, which arXiv CS sub-category does this paper belong to? Give 5 likely arXiv CS subcategories as a comma-separated list ordered from most to least likely, in the form ”cs.XX”, and provide your reasoning. 
Abstract: Text in curve orientation, despite being one of the common…Title: Total Text A Comprehensive Dataset For Scene Text Detection And Recognition.14320Edge ClassificationLearnable promptUSInceptionNolanOppenheimerLeonardois starred by(a) (b) (c) (d) (e) (f) Task KGQA Prompts Given [knowledge graph], the director who directs Inception also direct what? GQL Generation Given [graph], the director who directs Inception also direct what? Use Cypher to answer. Node Classification Which arxiv CS subcategory does paper ”paper title” with abstract ”paper abstract” belongs to? use the abbreviation to answer. Graph Classification Given [graph]. Is this molecule active with H3C4? Node Feature Explanation Abstract: Text in curve orientation, despite being one of the common text orientations in real world environment... Title: Total Text A Comprehensive Dataset For Scene Text Detection And Recognition. Question: Which arXiv CS sub-category does this paper belong to? Give 5 likely arXiv CS sub-categories as a comma-separated list ordered from most to least likely, in the form ”cs.XX”, and provide your reasoning. Edge classification learnable prompt TABLE III: Prompts for Graph Learning Tasks, where [·] is the input of the data. query languages, including GQL and Cypher, to perform op- erations on graph databases. Evaluating LLM’s ability to gen- erate GQL helps users extract information from the database, as shown in Figure 8 (e). 7) Node feature explanation: Node feature explanation task involves extracting the attributes of nodes in a text attribute graph. For example, in an academic paper network, the node attributes may include abstracts, titles, etc. LLM is expected to provide reasoning for the classification process of nodes based on their text attributes and explain the features of the nodes, as shown in Figure 8 (f). B. Graph Learning Methods LLM-GIL studies focusing on graph learning tasks can be categorized into three main groups: LLMs act as enhancers, LLMs act as predictors, and graph prompts. When LLMs act as enhancers, they leverage their advanced semantic under- standing of the text, strong reasoning capabilities, and vast knowledge repository to enhance the text attributes associated with nodes in the graph to enhance GNNs. When LLMs act as predictors, LLMs are queried or fine-tuned to predict task results. Inspired by NLP ideas, the Graph prompt aims to create a unified framework capable of solving multiple graph learning tasks. Although LLMs are not used, the concept aligns with LLM-based pipelines. In summary, integrating LLMs in graph learning tasks presents a promising avenue for advancing the field. By leveraging the strengths of LLMs as enhancers and predictors, along with the strategic use of graph prompts, researchers can explore new directions for enhanced performance and more profound insights in LLM-GIL tasks. 1) LLMs act as enhancers: LLMs act as enhancers per- tains to the LLMs-GNNs pipelines, where LLMs assume an enhancer role. Within this framework, LLMs are tasked with processing text attributes, while GNNs are responsible for handling graph structures, capitalizing on the complementary strengths of both components to address graph learning tasks effectively. LLMs bolster GNNs through three distinct mecha- nisms: encoding the graph into embeddings (as shown in Fig- Fig. 9: Encoding graph into embeddings, when LLMs act as enhancers. 
Input the node text attribute into LM/LLM to obtain text embeddings, then combine the text embeddings with the graph structure for training and learning in GNNs. ure 9), generating graph pseudo labels (as shown in Figure 10), and providing external knowledge or explanations (as shown in Figure 11). Subsequently, we will provide a comprehensive elaboration on these three enhancement strategies. Encoding graph into embeddings. LLMs possess signif- icant semantic comprehension capabilities to encode better node embeddings, as shown in Figure 9. TAPE [30] integrates LM with LLM to generate node embeddings. The process involves fine-tuning two LM models using original node text attributes and LLM explanations for node prediction. The Encoding graph into embeddings.Generating graph pseudo labels.Providing external knowledge/explanations.…Trainable LLM………Frozen LLM………Trainable LM……TextEmbeddingsNode Text AttributeGraphStructureGNN+ learning, as shown in Figure 10. However, the simultaneous training of LLM and GNN poses a significant computational challenge. To bridge this gap, GLEM [31] suggests training the GNN and LM separately in a variational Expectation- Maximization (EM) framework. In the E-step, the LM predicts both gold labels and pseudo-labels from the GNN, while in the M-step, the GNN predicts gold labels and LM-inferred pseudo labels using the embeddings and pseudo-labels provided by the LM. Moreover, due to the high cost of annotation and the necessity for GNN to learn from a substantial amount of high-quality labeled data to ensure its performance on graph tasks, leveraging the zero-shot learning capability of LLM becomes advantageous. Therefore, employing LLM for graph annotation can enhance GNN training even with limited la- beled data. LLM-GNN [72] proposes to select a candidate node set to be annotated. Subsequently, LLMs annotate the candidate node set, and post-filtering is conducted to eliminate low-quality annotations. Finally, the GNN is trained using the high-quality annotation set and utilized for prediction. LLM-GNN [72] proposes to select a candidate node set for annotation by LLMs, followed by post-filtering to remove low- quality annotations. Then, GNN is trained using high-quality annotations for prediction. Providing external knowledge/explanations. LLMs pos- sess a vast knowledge base, enabling them to provide external knowledge or explanations related to node features when en- coding them, as shown in Figure 11. The additional knowledge assists the model in better extracting and capturing node features. Graph-LLM [73] utilizes LLMs, such as ChatGPT, to explain text attributes, enhancing them and generating pseudo labels. These enhanced attributes are then fed into a trainable LLM, like Llama, to produce node feature embeddings. The combined pseudo labels and embeddings are input into a GNN, which delivers the final prediction outcomes. Similarly, TAPE [30] leverages LLMs to provide external explanations. In a citation network where each node contains text attributes like title and abstract, the text attribute of each node serves as input to an LLM. The LLM categorizes the nodes and generates multiple predictions ranked in a list with accompanying reasoning explanations. This approach aims to extract the LLM’s reasoning capabilities while integrating external knowledge to aid in understanding node text attributes and extracting node features. 2) LLMs act as predictors.: When LLMs are predictors, they are usually directly employed as standalone predictors. 
The critical aspect of integrating LLMs as predictors lies in crafting a well-designed prompt that encompasses text attributes and graph structures, enabling LLMs to compre- hend the graph structure effectively and enhance prediction accuracy. Additionally, there are other methodologies to fine- tune LLMs, such as utilizing techniques like LoRA [63] and instruction tuning, aiming to deepen the LLM’s understanding of the graph structure. Based on whether LLMs undergo parameter training, they are categorized into prompting LLMs and SFT LLMs, as shown in Figure 12. Fig. 10: Generating graph pseudo labels, when LLMs act as enhancers. Input unlabeled nodes into LLM for labeling, then use the labeled nodes with pseudo-labels as input for training the GNNs for graph learning. Fig. 11: Providing external knowledge/explanations, when LLMs act as enhancers. Two pipelines are shown above. In the first pipeline, input node text attributes into LLM for elaboration, enhancing the detail of the text attributes. In the second pipeline, input node text attributes and designed queries into LLM. LLM leverages the text attributes to answer queries and explains the reasoning process. resulting embeddings are then used as input to train a GNN model for node classification tasks. To unify graph data and graph learning tasks, OFA [32] introduces a comprehensive framework that unifies diverse graph data by describing nodes and edges using natural language and encoding varied and potentially cross-domain text attributes into feature vectors within the same embedding space. The obtained feature vec- tors are then fed into a GNN to tackle various downstream tasks effectively. Moreover, SIMTEG [71] and GLEM [31] involve training an LM with Lora and subsequently generating embeddings as text representations, then a GNN is trained on top of these text embeddings. On this basis, G-prompt [33] introduces a graph adapter to extract node features, thereby obtaining improved node representations. Generating graph pseudo labels. Many existing pipelines utilize LLMs to process text attributes as node features, then feed the embeddings produced by LLM into a GNN model for Encoding graph into embeddings.Generating graph pseudo labels.Providing external knowledge/explanations.UnlabelednodesNodeswithpseudolabelsTrainingGNNAnnotationwithLLMEncoding graph into embeddings.Generating graph pseudo labels.Providing external knowledge/explanations.Node Text AttributeLLMsEnhancedtextattributesNode Text AttributeDesignedqueriesLLMsExplanationforreasoningprocess1.2. Fig. 12: LLMs act as predictors. For prompting LLMs, input designed manual prompts into LLM, enabling it to predict nodes/links/graphs. For SFT LLMs, input instructions into the LLM to generate multiple answers. Tuning the LLM is then based on these multiple responses. Prompting LLMs. The prompting method can be divided into two categories. One type is the manual prompts, which are manually written prompts. Prompt IV-1: Manual Prompt Template with Slots. For instance, Beyond Text [74], ENG [75], and Graph Agent [76] provide a manual prompt template with slots. By filling these slots with different examples, various prompts can be constructed. For example: Prompt IV-1: Manual Prompt Template with Slots The title of one paper is <Title> and its abstract is <Abstract>. This paper is cited by the following papers: <Titlelist1>. Each of these papers belongs to one category in: <Categories>. 
You need to 1.Analyse the papers’ topic based on the given title and abstract; 2.Analyse the pattern of citation information based on their titles, and retrieve the citation information you think is important to help you determine the category of the first given paper. Now you need to combine the information from 1 and 2 to predict the category of the first given paper. You should only output one category. Compared to manual prompts, LPNL [77] generates Fig. 13: Examples for Node Classification Task with GPT4 - Graph Learning Tasks. prompts through sampling. Specifically, it conducts a two- stage sampling process on the source node and each candidate neighbor from the original candidate set to acquire anchor nodes. Prompt generation is then based on these anchor nodes. We provide manual prompt examples for various graph learning tasks in Table III. Additionally, we test LLMs with GPT 3.5 for node classification and KGQA using manual prompts, as shown in Figure 13 and Figure 14. Supervised fine-tuning (SFT) LLMs. IntructGLM [78] and GraphGPT [79] both employ SFT to train LLM for the node classification task. IntructGLM [78] utilizes a single LLM by prompting methods. The prompt includes the description of node attributes and structure through text descriptions and corresponding queries. LLMs are then tasked with answer- ing questions and determining node categories, leading to fine-tuning through supervised learning. On the other hand, GraphGPT [79] feeds graph structural information and text Prompting LLMsSupervised fine-tuning (SFT) LLMs.………………Trainable LLMInstructions: How many C-C-O triangles are in the molecule?Response: There is 1 C-C-O triangle in the molecule.Response: There is no C-C-O triangle in the molecule.Response: There is 4C-C-O triangle in the molecule.Instructiontuning………………Frozen LLMManualprompt:The title of one paper is <Title>and its abstract is <Abstract>. This paper is cited by the following papers: <Titlelist1>. Each of these papers belongs to one category in: <Categories>.You need to analyze the paper’s topic based on the given title and abstract. tuning, as shown in Figure 15. The integration of prompts is crucial in assisting downstream tasks in achieving task- specific optimal outcomes, bridging the gap between pre- trained models and the diverse array of graph tasks to enhance performance and transferability. GPPT [80] and GraphPrompt [81] aim to unify pre-training and downstream tasks in graph learning. GPPT transforms node classification tasks into edge prediction tasks and em- ploys masked edge prediction for GNN pre-training. Mean- while, GraphPrompt combines node and graph classification tasks into a subgraph similarity prediction task and utilizes graph prompt functions, introducing unified instances and task templates to enhance performance. Subsequent research, like All in One [82], further consolidates edge, node, and graph classification tasks into a single framework using multi-task prompting approaches, standardizing graph prompts similar to language prompts and enhancing initialization through meta- learning techniques for improved reliability and generality across different tasks in graph data analysis. C. Comparisons and Discussions For addressing graph learning tasks, existing methods [30] [79] [82] categorize based on the role of LLM into three types: LLMs act as enhancers (LLM-GNN pipelines), LLMs act as predictors (LLM pipelines), and graph prompts. 
In the part of graph prompts, we introduce the prompting engineering in GNNs without utilizing LLMs. Graph prompts aim to unify downstream tasks and construct a universal framework. Therefore, it is compared with LLM-GNN pipelines and LLM pipelines to provide a comprehensive overview. When LLMs act as enhancers, the most popular pipeline is the LLM-GNN pipeline. There are three categories of LLM- GNN pipelines, depending on how LLM enhances GNN: encoding the graph into embeddings, generating graph pseudo labels, and providing external knowledge/explanations. How- ever, the LLM-GNN pipelines that are currently available are not end-to-end pipelines, meaning that LLM and GNN cannot be trained together. LLM and GNN can be trained separately using frameworks like EM framework [31] or by freezing LLM and using it as an external knowledge base. Co-training LLM and GNN can lead to issues like gradient vanishing, which is a significant obstacle in current LLM-GNN pipelines due to the large number of parameters in LLM compared to GNN. To solve this problem, methods like knowledge distillation can reduce the number of LLM parameters while retaining the beneficial capabilities for downstream tasks. When LLMs act as predictors, two main methods are used: prompting LLMs and SFT LLMs. All approaches for fine-tuning LLMs can be reviewed in the ”comparisons and discussions” section of Section III. Currently, SFT and DPO are popular methods for fine-tuning LLMs. For graph prompt, the workflow involves unifying pre- training and downstream tasks, followed by prompt tuning for different downstream tasks through prompt engineering, as shown in Figure 15. Graph prompts require fewer tunable Fig. 14: Examples for KGQA with GPT3.5 - Graph Learning Tasks. Fig. 15: Graph prompt for graph learning.Graph prompt meth- ods first unify prefix and downstream tasks, then pre-train GNN on the unified tasks. The pre-trained GNN, when faced with different downstream tasks, combines with a tunable prompt through tuning prompts to handle the downstream tasks better. into LLM via embedding. Subsequently, two rounds of in- struction tuning are conducted to refine LLM and effectively address the node classification task. IntructGLM [78] em- ploys prompts to input subgraph structures into LLM, while GraphGPT [79] inputs them into LLM through embedding. 3) Graph prompt: In graph learning tasks, a wide array of tasks at the node, edge, and graph levels creates a challenge in achieving compatibility between pre-training and downstream tasks, potentially leading to negative transfer effects that can harm the performance of downstream tasks and compromise the reliability of transfer learning in graph data. Current methods aim to harmonize pre-training and downstream tasks to facilitate more effective transfer learning of graph infor- mation. Despite these efforts, it remains essential to identify task-specific differences for optimal performance. Inspired by NLP, researchers have started incorporating prompts in graph contexts to enable the reuse of pre-trained models across various downstream tasks without the need for repeated fine- PrefixtasksDownstreamtasksUnifiedtasksGNNTrainingonunifiedtasksTunablepromptPre-trainedGNNTrainingonPrefixtasksDownstreamtasksUnifying+DownstreamtasksTuningpromptfor Fig. 16: Graph-formed Reasoning Tasks. V. GRAPH-FORMED REASONING A. Tasks Introduction Graph-formed reasoning refers to combining the graph form with LLMs to obtain more accurate and reliable answers. 
LLMs have strong reasoning capabilities, and many prompting methods are proposed to enhance LLMs’ reasoning abilities, addressing algorithmic problems, mathematical issues, etc., such as chain of thought, self-consistency, in-context learning, and more. However, these methods diverge from the patterns of human thought. The human thought process is typically non-linear rather than a simple chain of continuous thoughts, like in Figure 17. Graphs can represent the thinking patterns of individuals during the thought process. Suppose LLMs can also use graph-formed reasoning for inference. In that case, they may be able to solve more complex problems, logical reasoning problems, such as algorithmic problems, and mathematical word problems, as shown in Figure 16. In this section, we present seven graph-formed reasoning tasks along with their definitions. Next, we introduce graph-formed reasoning methods involving two types of reasoning: think on the graph and verify on the graph. 1) Sorting: The problem of sorting involves arranging certain elements in a specific order. For example, sorting a list of duplicate numbers from 0 to 9 can be done using a merge-based sorting algorithm. First, the input sequence of numbers is divided into subarrays. Then, these subarrays are sorted individually and merged to form the final solution, as shown in Figure 16 (a). 2) Set operations: Set operation task mainly focuses on set intersection. Specifically, the second input set is split into subsets and the intersection of those subsets with the first input set is determined with the help of the LLM, as shown in Figure 16 (b). 3) Keyword counting: The keyword counting task aims to determine the frequency of specific keywords within a given is divided into text. The input category in the input text Fig. 17: Illustration of human logical derivation. [35] parameters compared to LLM-GNN and LLM pipelines; how- ever, they have a shallower semantic understanding of graph attributes. In LLM pipelines, LLMs need to undergo alignment tuning before they can be used for various downstream tasks. In LLM-GNN pipelines, there is a general trend of training GNNs. Combining LLM-GNN and graph prompts is possible because graph prompts are designed for GNNs through prompt engineering and can be applied to LLM-GNN pipelines. By leveraging LLM’s robust semantic representation capabilities and the lightweight fine-tuning of graph prompts, similar results can be achieved. Classical graph tasks, such as node classification on at- tributed static networks, have recently obtained the most attention. However, there is potential for more complex tasks in the future, such as predicting graph evolution on dynamic graphs. Leveraging LLM models that are suitable for handling sequential data and can process time series data, along with GNNs that are adept at capturing changes in graph structures, can help address a broader range of problems effectively. By combining the strengths of LLM and GNN, we can tackle more challenging tasks in the field of graph analysis. Graph ReasoningSortingSort the following list of numbers in ascending order. Output only the sorted list of numbers, no additional text. Input: [5, 1, 0, 1, 2, 0, 4, 8, 1, 9, 5, 1, 3, 3, 9, 7] Output: [0, 0, 1, 1, 1, 1, 2, 3, 3, 4, 5, 5, 7, 8, 9, 9]Find the intersection of two sets of numbers. Output only the set of numbers that are present in both sets, no additional text. 
Input Set 1: [13, 16, 30, 6, 21, 7, 31, 15, 11, 1, 24, 10, 9, 3, 20, 8] Input Set 2: [25, 24, 10, 4, 27, 0, 14, 12, 8, 2, 29, 20, 17, 19, 26, 23]Set OperationsCount the frequency of how many times each country is explicitly named in the input text. You can generate any intermediate lists and states, but the final output should only contain the frequency of each country that appears at least once in the following json format, prefixed with ”Output: ” (make sure to keep the same spelling for each country in the output as in the input text): {{ ”country1”: frequency1, ”country2”: frequency2, ... }}Keyword CountingKeyword: frequencyMerge the following 4 NDA documents -into a single NDA, maximizing retained information and minimizing redundancy. Output only the created NDA between the tags and , without any additional text. Here are NDAs: [four documents] Document MergingMath word problemsJanet’s ducks lay 16 eggs per day. She eats three for breakfast every morning and bakes muffins for her friends every day with four. She sells the remainder at the farmers’ market daily for $2 per fresh duck egg. How much in dollars does she make every day at the farmers’ market?Math problemsMulti-hop Question AnsweringQuestion triplets: (’Hypocrite’, directed by, $1), ($1, death date, $2) Question: When did the director of film Hypocrite (Film) die? To answer this question, we answer the following subquestions: (1) Who directed Hypocrite (Film)? (2) When did Miguel Morayta die? Logic reasoning• Premises: 1.It is not true that some giant language models do not have good performance. 2.All language models with good performance are used by some researchers. 3.If a language model is used by some researchers, it is popular. 4.If BERT is a giant language model, then GPT-3 is also a giant language model. 5.BERT is a giant language model. • Hypothesis: GPT-3 is popular. Give hypothesis label, true or false.51012048195133970011112334557899(a) (b) (c) (d) (e) (a) (f) (g) Fig. 18: Graph-formed reasoning. Two directions: think on graphs and verify on graphs. Think on the graph refers to using the graph structure to derive the final conclusion during the LLMs’ reasoning process. Verify on the graph refers to using the graph to verify the correctness of the LLMs’ intermediate and final output. multiple paragraphs, and the keywords are counted in each paragraph, with the sub-results aggregated, as shown in Figure 16 (e). 4) Document merging: Document merging is the process of generating a new document based on multiple input docu- ments that have overlapping content sections. The goal is to minimize duplication as much as possible while preserving the maximum amount of information, as shown in Figure 16 (c). 5) Math word problems: Math word problems include single- and multi-step word problems with addition, multipli- cation, subtraction, division and other math topics. LLM re- quires an understanding of text and mathematical relationships and involves a multi-step reasoning process where calculations are performed step by step to arrive at an answer ultimately, as shown in Figure 16 (d). 6) Multi-hop question qnswering: Multi-hop question an- swering requires LLM to retrieve and integrate information from multiple text passages or multi-hop graphs to answer questions. For a complex reasoning question, LLM uses a sophisticated thinking process to perform reasoning and ul- timately arrive at the correct answer, as shown in Figure 16 (f). 7) Logic reasoning: Logical reasoning is a process aimed at concluding rigorously. 
It occurs in inference or argumentation, starting from a set of premises and reasoning towards a conclusion supported by those premises. Propositional logic is the most fundamental logical system, consisting of p, q, r, and various operations, as shown in Figure 16 (g). B. Graph-formed Reasoning Methods The graph form, with its inherent structural features, not only mimics human reasoning patterns but also validates answers from LLM through the relationships between nodes and local structure. Existing work can roughly be divided into two categories: think on the graph and verify on the graph, as shown in Figure 18. Think on the graph refers to LLM thinking in the form of a graph, where each node on the graph represents a step in the thinking process or an intermediate conclusion during thinking, and the edges on the graph indicate the direction of LLM inference or the relationships between intermediate thinking steps. In this way, the LLM thinking process can be visually represented in graph form. Verify on the graph means verifying the consistency and correctness of answers by utilizing the graph’s structure. For example, if the end node of different paths is the same, the results derived from different paths should be the same. If contradictory conclusions arise, then the obtained conclusion is incorrect. 1) Think on the graph: The GoT* reasoning method [36] is proposed with a two-stage framework to enable LLM to reason on a graph for answering multiple-choice questions. Initially, the input query is converted into a graph form, and with the incorporation of graph and multimodal features, LLM generates rationale. This rationale updates the graph to a graph with rationales, which is then combined with the original input and fed into the decoder to obtain the final answer. However, GoT* allows LLM to enhance the graph us- ing multimodal information but does not reason step-by- step deduction in graph form. The Graph of Thought (GoT) [34] represents LLM’s intermediate thinking as an arbitrary graph, facilitating powerful prompting for solving algorithmic problems like sorting and keyword counts. LLM thoughts are depicted as vertices in this approach, with edges representing dependencies between them. By continuously adding LLM responses to the graph, arbitrary thoughts can be aggregated, forming a directed acyclic graph. Multiple LLMs can also be collaboratively harnessed to tackle complex mathematical challenges, extending beyond the capabilities of a single LLM. Cumulative Reasoning (CR) [35] is proposed as a more human-like reasoning process. CR utilizes three LLMs in different roles: the proposer, verifier, and reporter. The proposer suggests the next step, the verifier checks the accuracy of the steps, and the reporter decides when the reasoning process should end. Three roles of LLMs collaborate to achieve more accurate reasoning processes. ThinkongraphVerifyongraphLLM's intermediate answerLLM’sfinalanswerInputDeletedintermediate answerThinkingprocessInputLLM's intermediate conclusionVerificationVerify whether two conclusions from two paths are the same.Verifyingprocess Task Sorting Set Operations Keyword Counting Prompts <Instruction>Sort the following list of numbers in ascending order. Output only the sorted list of numbers, no additional text. </Instruction><Examples>like Input: [5, 1, 0, 1, 2, 0, 4, 8, 1, 9, 5, 1, 3, 3, 9, 7] Output: [0, 0, 1, 1, 1, 1, 2, 3, 3, 4, 5, 5, 7, 8, 9, 9]</Examples>Input: [input list] <Instruction>Find the intersection of two sets of numbers. 
Output only the set of numbers that are present in both sets, no additional text.</Instruction><Examples>like Input Set 1: [13, 16, 30, 6, 21, 7, 31, 15, 11, 1, 24, 10, 9, 3, 20, 8] Input Set 2: [25, 24, 10, 4, 27, 0, 14, 12, 8, 2, 29, 20, 17, 19, 26, 23] Output: [24, 10, 20, 8] </Examples>Input Set 1: set1 Input Set 2: set2 <Instruction>Count the frequency of how many times each country is explicitly named in the input text. You can generate any intermediate lists and states, but the final output should only contain the frequency of each country that appears at least once in the following json format, prefixed with ”Output: ” (make sure to keep the same spelling for each country in the output as in the input text): {{ ”country1”: frequency1, ”country2”: frequency2, ... }} </Instruction><Approach>To count the frequency for each country follow these steps: 1. Split the input passage into four paragraphs of similar length. 2. Count the frequency of each country in each paragraph. 3. Combine the frequencies of each country from each paragraph by adding them together. </Approach><Examples>(Omitted) </Examples>Input: input text Document Merging Merge the following 4 NDA documents <Doc1>- <Doc4>into a single NDA, maximizing retained information and minimizing redundancy. Output only the created NDA between the tags <Merged>and </Merged>, without any additional text. Here are NDAs: [four documents] Math word problems Q: Janet’s ducks lay 16 eggs per day. She eats three for breakfast every morning and bakes muffins for her friends every day with four. She sells the remainder at the farmers’ market daily for $2 per fresh duck egg. How much in dollars does she make every day at the farmers’ market? Multi-hop Question Answering Question triplets: (’Hypocrite’, directed by, $1), ($1, death date, $2) Question: When did the director of film Hypocrite (Film) die? To answer this question, we answer the following subquestions: (1) Who directed Hypocrite (Film)? The film Hypocrite was directed by Miguel Morayta. (2) When did Miguel Morayta die? Miguel Morayta died on 19 June 2013. So the answer is 19 June 2013. Logic reasoning • Premises: 1. It is not true that some giant language models do not have good performance. 2. All language models with good performance are used by some researchers. 3. If a language model is used by some researchers, it is popular. 4. If BERT is a giant language model, then GPT-3 is also a giant language model. 5. BERT is a giant language model. • Hypothesis: GPT-3 is popular. • Label: [True] TABLE IV: Prompts for Graph-formed Reasoning. 2) Verify on the graph: Verify on the graph is to validate the intermediate reasoning results of LLM to enhance its performance. The Reasoning Graph Verifier (RGV) [83] in this study assumes a logical connection between the intermediate steps of different inference paths created by LLM. This allows the multiple solutions generated by LLM for a reasoning task to be structured into a reasoning graph, aiming to improve the accuracy and reliability of the outcomes. By constructing reasoning graphs from the various solutions provided by LLM, a verifier is trained to determine the correctness of the resulting reasoning graph. During the prediction phase, RGV assesses the solutions and selects the highest-scoring one as the final answer. 
However, to determine this work trains an extra model whether the graph formed by the solutions generated by LLM is correct rather than utilizing the knowledge within the graph and the relationships between the knowledge for validation. The Graph-guided CoT [84] approach aims to improve the relevance of rationales generated by CoT during multi-step reasoning. It starts by extracting triplets from questions using LLM to build a question graph and generates intermediate sub- questions from this graph. To ensure the rationale from LLM is logical, Retrieval Augmented Generation (RAG) is used. In an open-book scenario, knowledge retrieval is based on the sub- questions, providing retrieved documents and sub-questions as input to LLMs. LLMs generate rationales for the sub- questions, creating a rationale graph. Based on the rationale graph, the study assesses whether the generated rationales aid in solving the original question. By iteratively generating intermediate rationales, the solution to the original question can be determined. Finally, we provide manual prompt examples for various graph learning tasks in Table IV. Additionally, we test LLMs with GPT-4 for sorting and logic reasoning using manual prompts, as shown in Figure 19. C. Comparisons and Discussions Graph-formed reasoning is categorized into think on the graph and verify on the graph. Think on the graph refers to using the graph structure to derive the final conclusion during the reasoning process with LLM. On the other hand, verify on the graph involves treating the intermediate or final results generated by LLM as nodes on the graph and using the graph to determine if there are contradictions between the nodes, thus verifying the correctness of the LLM output. For “think on the graph”, a common issue with existing approaches is their lack of convenience. Compared to CoT and SC, the reasoning processes in current works are complex, VI. GRAPH REPRESENTATION A. Tasks Introduction LLMs’ powerful text representation abilities empower text embeddings to capture deeper semantic nuances, which also can enhance graph representations, particularly for Text At- tributed Graphs (TAGs). When dealing with structured text data, the key challenge is integrating graph structures into text embeddings produced by LLMs to enhance their informative- ness or enable LLMs to process text embeddings with graph structures within the text space. Moreover, effectively incor- porating the graph description within the prompt is essential for LLMs, especially in closed-source models like ChatGPT, where the embedding is invisible. How the graph is encoded within the prompt influences the model’s comprehension of the graph. Thus, we summarize three types of graph repre- sentation: graph embedding, graph-enhanced text embedding, and graph-encoded prompts, as shown in Figure 20. Next, we introduce graph-formed reasoning methods corresponding to the above three types. 1) Graph embedding: Graph embedding focuses on trans- forming a graph into a specific ordered sequence, which is then fed into an LLM to learn the sequence’s embedding using their excellent semantic capturing ability and then derive the graph embedding. 2) Graph-enhanced text embedding: Graph-enhanced text embedding emphasizes incorporating structural embedding into text embedding. There are two types of embeddings: structural embedding, which captures the local structure, and text embedding, which captures the semantic meaning. 
How to combine these two types of embeddings is the core of graph- enhanced text embedding. 3) Graph-encoded prompts: Graph-encoded prompts con- centrate on how to describe a graph so that LLMs can understand it more efficiently and then input it into LLMs. For instance, in a regular graph, the graph can be placed in a story context by assuming that the relationships between the nodes are friends or colleagues. With the emergence of LLM, much work has been done on graph representation. Three goals of the graph representation direction can be identified from the above three categories: to obtain better graph embeddings as an input into GNNs, to obtain better text embeddings as an input into LLMs/LMs, and to get better prompts for graph description as an input into LLMs. B. Graph Representation Methods For the three categories of tasks mentioned above, each type of task has specific focuses, technical characteristics, and objectives. 1) Graph embedding: Text data is sequential, while graph data is structural, posing a challenge for LLMs, which excel at handling text but struggle with graphs. How do we transform graphs into sequences? Graph embedding methods use specific order sequences to represent the graph, where specific order represents graph structure. WalkLM [38] aims to enhance Fig. 19: Examples for Logic Reasoning Task with GPT4 - Graph Reasoning Tasks. requiring multiple stages of reasoning and validation. Graph of thought methods are not plug and play, which contradicts the original intent of prompts. Even though using more LLMs can simplify the reasoning and validation process, it raises the cost and barrier to entry for reasoning. Therefore, the current challenge is to find a plug-and-play, low-barrier LLM graph reasoning method that improves LLM reasoning capabilities. For “verify on the graph”, the current approaches have yet to utilize the nature of the graph structure for validation. Existing methods either retrain a model to determine correctness or use a KG for assessment without using the relationships between nodes to infer whether the conclusions within each node in the graph are correct. Therefore, for the “think on the graph,” the future direction could focus on developing a plug-and-play, low-barrier LLM graph reasoning method that enhances LLM reasoning abili- ties, a pressing issue that needs to be addressed. On the other hand, concerning the “verify on the graph” method, future research could explore how to utilize the relationships between nodes in the graph structure to verify the outputs of LLM or the reasoning process itself. Fig. 20: Graph representation. Three types of graph representation are shown: graph embedding, graph-enhanced text embedding, and graph-encoded prompts. Graph embedding methods use specific order sequences to represent the graph. Graph-enhanced text embedding emphasizes incorporating structural embedding into text embedding. Graph-encoded prompts concentrate on how to describe a graph in prompts. graph representations in TAGs by utilizing a language model. Initially, text sequences are generated on the TAG through the random walk algorithm, capturing structural features and node proximity. By incorporating text information from nodes and edges into these sequences based on the graph structure, the texturing process preserves component attributes. 
Subse- quently, these sequences are input into a masked language model for training, where each token represents a node or edge, leading to improved graph representations and enhanced downstream task efficiency. Notably, various masked language model options, including LLMs, are available. While WalkLM [38] focuses on superior graph embeddings for tasks like node classification, GraphText [37] transforms graphs into the natural language to enable LLMs to process graph data in the text domain, leveraging LLMs’ generalization capabilities for graph tasks. GraphText [37] reformulates graph reasoning as text-to-text problems, establishing text input and output spaces. GraphText first constructs grammar trees for graphs, then traverses them to generate graph text sequences, and finally maps the graph to the text space. The text input is then fed into an LLM, with the LLM results mapped to the label space, effectively enabling LLMs to handle graph tasks. 2) Graph-enhanced text embedding: Current work focuses on simply passing graph structure information to the LLM through prompts without deeply learning the graph structure, which can lead to an LLM’s insufficient understanding of complex structural relationships. DGTL [39] integrates graph information into text with LLMs for node classification tasks. It begins by inputting text into a frozen LLM to create text embeddings from the last layer. Then, a disentangled graph learning method is employed to extract various structural details and generate structure embeddings. These structure embeddings are combined with the text embeddings and fed back into the frozen LLM for node classification. The entire process is fine-tuned to optimize the disentangled graph learning for better results. While DGTL [39] concentrates on utilizing LLMs to in- tegrate text and graph structure for graph tasks, G2P2 [85] emphasizes merging graph structure with text to address text classification tasks. Textual data commonly exhibit network structures, such as hyperlinks in citation networks or purchase networks, which encapsulate meaningful semantic relation- ships that can enhance text classification performance. G2P2 [85] is proposed to tackle low-resource text clas- sification through a dual approach. Three graph interaction- based contrastive strategies are introduced during pre-training to jointly pre-train the graph-text model. In the downstream classification process, efforts are made to facilitate the joint pre-trained model in achieving low-resource classification. 3) Graph-encoded prompts: The prompting method is crucial for LLMs to solve tasks. For closed-source LLMs, the prompt serves as instructions to guide the LLM in under- standing and solving problems. Therefore, effectively encoding graphs in the prompt is vital for LLMs to comprehend graph structure and solve graph tasks. Graph encoding refers to how graphs are represented in the prompt. Talk Like A Graph [86] introduces diverse graph encoding techniques by placing the same graph in multiple contexts. This strategy highlights how a node, which may lack intrinsic meaning, can be interpreted differently based on the context; for instance, a node could represent a person named David, with edges indicating various relationships like co-authorships or friendships. When asking LLM the degree of one node, in the given contexts, that equals how many friendships David has. 
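As a concrete illustration of this kind of context-based graph encoding, the short sketch below renders the same edge list either as a plain technical description or as a friendship or co-authorship narrative; the names and templates are invented for illustration and are not taken from the benchmark.

```python
def encode_graph(edges, names, style="plain"):
    """Render the same edge list in different textual contexts (illustrative templates only)."""
    lines = []
    for u, v in edges:
        if style == "plain":
            lines.append(f"Node {u} is connected to node {v}.")
        elif style == "friendship":
            lines.append(f"{names[u]} and {names[v]} are friends.")
        elif style == "coauthor":
            lines.append(f"{names[u]} and {names[v]} wrote a paper together.")
    return " ".join(lines)

edges = [(0, 1), (0, 2), (1, 2)]
names = {0: "David", 1: "James", 2: "John"}
print(encode_graph(edges, names, style="friendship"))
# Asking "How many friends does David have?" is then the degree query for node 0 in this encoding.
```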
In contrast, Talk Like A Graph [86] primarily emphasizes text-modality graph encoding, while Which Modality Should I Use [87] employs three encoding modalities - text, image, and motif - to encode graphs. The latter method utilizes different prompt techniques to evaluate the overall connectivity of a graph, enabling LLMs to handle intricate graph structures more effectively. Specifically, the text modality encoding provides insights into subgraphs and their connections at a local level, while the motif modality encoding captures essential graph patterns like stars, triangles, and cliques, offering a balanced perspective on local and global information. Moreover, the image modality encoding delivers a broader view of nodes with limited labels, effectively utilizing the input context.

In comparing these two methods, Talk Like A Graph [86] focuses on diverse graph encoding within the text modality by constructing contexts, whereas Which Modality Should I Use [87] utilizes multiple modalities to encode graphs comprehensively, enhancing the LLMs' ability to understand graph structures.

C. Comparisons and Discussions

Graph embedding focuses on transforming a graph into a specific ordered sequence, which is then fed into an LLM to learn the sequence's embedding and derive the graph embedding. On the other hand, graph-enhanced text embedding emphasizes incorporating structural embedding into text embedding. Lastly, graph-encoded prompts concentrate on how to describe a graph and input it into an LLM.

Due to LLMs' powerful text representation capabilities, the first two methods exhibit a deep semantic understanding of graph attributes. However, they still lack suitable mechanisms for capturing structural information, which remain rudimentary and inadequate. Additionally, aligning graph structure features with text features to better represent the graph's features is a current issue that needs to be addressed.

For graph-encoded prompts, most methods build a narrative context for the graph or describe it multimodally before feeding it into an LLM. Both approaches enable the LLM to interpret the graph from various perspectives to improve performance. The critical challenge currently lies in designing diverse and easily understandable graph descriptions for LLMs, conveying the essential graph information while enhancing the LLM's comprehension of the input description.

VII. KNOWLEDGE GRAPH BASED AUGMENTED RETRIEVAL

LLMs have shown remarkable reasoning capabilities in challenging tasks, sparking debates on the potential replacement of Knowledge Graphs (KGs) in triplet form (subject, predicate, object) by LLMs. Recent LLMs are seen as viable alternatives to structured knowledge repositories such as KGs, indicating a shift towards utilizing LLMs for processing real-world factual knowledge [88] [89].

A. LLMs limitations and comparison with KGs

LLMs, while powerful, face several significant challenges:
• Hallucination is a common issue for LLMs due to a lack of domain-specific knowledge and knowledge obsolescence, leading to incorrect reasoning and reduced credibility in critical scenarios like medical diagnosis and legal judgments [88] [90] [43]. Although some LLMs can explain predictions through causal chains, they struggle to address hallucination effectively. Integrating external KGs can help mitigate these problems [41].
• Insufficient domain knowledge hampers LLM performance in specific areas, including private datasets, necessitating the integration of domain-specific knowledge graphs to enhance their ability to answer domain-specific questions [40].
• LLMs struggle with recalling facts when generating knowledge-based content, despite excelling in learning language patterns and conversing with humans [89].
• LLMs have limitations in accurately capturing and retrieving foundational knowledge, hindering their ability to access factual information effectively [42].

In contrast, KGs like Wikipedia and DBpedia are structured repositories of rich factual knowledge, providing a more explicit and reliable source of information compared to the black-box nature of LLMs, as shown in Figure 21. How do we measure the shortcomings of LLMs relative to KGs? KGLens is proposed as an effective method to evaluate the factual accuracy and identify knowledge gaps in LLMs by assessing the alignment between a KG and an LLM [91].

Fig. 21: KG-based augmented retrieval. Knowledge graphs can enhance LLMs to provide more comprehensive answers.

B. Solutions to LLMs limitations

To address the limitations of LLMs, such as hallucination and insufficient domain knowledge, integrating LLMs with KGs is a potential way to allow LLMs to learn knowledge from KGs and enhance their capabilities. The REASONING ON GRAPHS (RoG) framework [43] synergizes LLMs with KGs for faithful and interpretable reasoning. Specifically, RoG utilizes a planning-retrieval-reasoning framework where relation paths grounded by KGs are generated as faithful plans. These plans are then used to retrieve valid reasoning paths from KGs to facilitate LLMs' faithful reasoning.

Existing work has taken on the challenges posed by the four main limitations of LLMs through distinct perspectives, each offering unique solutions. Addressing the first limitation concerning hallucination issues in LLMs, the Head to Tail benchmark [88] is introduced to assess LLMs' reliability in answering factual questions and to evaluate the probability of hallucination in generating KG triples. Additionally, it explores whether factors like model size or instruction tuning can enhance LLM knowledge. Think-on-Graph (ToG) [41] partially addresses hallucination by involving the LLM agent in iteratively searching KGs, identifying promising reasoning paths, and providing likely reasoning outcomes.

The second limitation is that LLMs need domain-specific knowledge. To tackle this, GLaM [40] is developed to convert knowledge graphs into text paired with labeled questions and answers, allowing LLMs to acquire and respond to domain-specific knowledge.
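As an illustration of this KG-to-text idea, the sketch below verbalizes (subject, predicate, object) triples into statements and cloze-style question-answer pairs that can be used for prompting or fine-tuning; the templates and example triples are assumptions, not GLaM's [40] actual pipeline.

```python
def verbalize_triples(triples):
    """Turn (subject, predicate, object) triples into plain-text statements."""
    return [f"{s} {p.replace('_', ' ')} {o}." for s, p, o in triples]

def qa_pairs_from_triples(triples):
    """Build cloze-style question-answer pairs from triples (template is an assumption)."""
    return [(f"Complete the fact: {s} {p.replace('_', ' ')} ___.", o) for s, p, o in triples]

triples = [
    ("Christopher Nolan", "directed", "Inception"),
    ("Christopher Nolan", "directed", "Oppenheimer"),
]
context = " ".join(verbalize_triples(triples))
question, answer = qa_pairs_from_triples(triples)[1]
prompt = f"Context: {context}\nQuestion: {question}\nAnswer:"
print(prompt)  # `answer` serves as the gold label for supervision or evaluation
```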
Regarding the third limitation related to LLMs forgetting facts, integrating KGs with PLMs (KGPLMs) [89] is introduced to enhance the model's ability to recall facts compared to standalone LLMs. This approach emphasizes the competitive and complementary relationship between LLMs and KGs, where LLMs improve knowledge extraction accuracy, and KGs guide LLM training to enhance memory and knowledge application capabilities.

Finally, the fourth limitation pertains to LLMs' challenge in accurately retrieving and returning knowledge from KGs. KGs can enhance LLM performance by being incorporated during the pre-training and inference stages or by deepening the LLM's understanding of acquired knowledge. Graph Neural Prompting (GNP) [42] is proposed to augment pre-trained LLMs using foundational knowledge, such as retrieval-augmented generation, to facilitate effective learning from KGs. GNP [42] retrieves and encodes relevant, grounded knowledge to generate Graph Neural Prompts, embedding vectors that provide guidance and instructions for LLMs.

C. Other KG + LLMs works

1) KG tasks with LLMs: Moreover, LLMs can enhance KGs to tackle a broader array of challenges. By leveraging LLMs, KGs can be fortified to perform various KG-related tasks such as embedding, completion, construction, text generation from graphs, and question answering [90]. An illustrative example is how LLMs can support KG tasks such as knowledge graph alignment. In entity alignment tasks between different knowledge graphs, the objective is to identify pairs of entities representing the same entity. To address this, AutoAlign [92] facilitates alignment without the need for expensive manual seed creation. Specifically, AutoAlign [92] automatically identifies similarities between predicates across different KGs with the assistance of LLMs.

2) Applications of KGs + LLMs: The combination of KGs and LLMs has other applications as well. For instance, it can address tasks like multi-document question answering. Knowledge Graph Prompting (KGP) [93] is introduced to design appropriate context by building and exploring a knowledge graph. Subsequently, this context guides LLMs for answering multi-document questions.

D. Summary

In conjunction with LLMs, the future directions for KGs focus on overcoming challenges and seizing opportunities in this evolving field. Firstly, leveraging KGs for Hallucination Detection in LLMs aims to address the issue of generating inaccurate content. Secondly, utilizing KGs for Editing Knowledge in LLMs will enable the swift adaptation of internal knowledge to real-world changes. Moreover, the challenge of injecting knowledge into Black-box LLMs due to restricted access to internal structures necessitates innovative approaches. Lastly, integrating Multi-Modal LLMs with KGs can enrich the handling of diverse data types within knowledge graphs [90].

Fig. 22: Graph-LLM-based applications - Recommendation systems. This shows LLM for graph data understanding in online job recommendations [46].

VIII. GRAPH-LLM-BASED APPLICATIONS

Graph-LLM-based applications refer to frameworks that integrate graphs with LLMs. Apart from their applications in graph-related tasks, they are also utilized in various other domains, such as conversational understanding and recommendation systems, as shown in Figure 22.
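Before walking through the individual application patterns, the sketch below illustrates the step most of them share: extracting a task-relevant subgraph (here, a user's interaction neighbourhood for recommendation), serializing it, and handing it to an LLM as context. The data, hop count, and prompt wording are assumptions rather than the pipeline of any specific system such as [46] or LLMRec [44].

```python
def k_hop_edges(edges, center, hops):
    """Return the edges of the undirected subgraph within `hops` of `center`."""
    seen = {center}
    for _ in range(hops):
        reached = {w for u, v in edges if u in seen or v in seen for w in (u, v)}
        seen |= reached
    return [(u, v) for u, v in edges if u in seen and v in seen]

def recommendation_prompt(edges, user, hops=3):
    # In a user-item bipartite graph, three hops reach items chosen by similar users.
    facts = "\n".join(f"{u} interacted with {v}." for u, v in k_hop_edges(edges, user, hops))
    return (f"{facts}\nBased on these interactions, which item would you recommend "
            f"to {user} next, and why?")

interactions = [("alice", "item_1"), ("bob", "item_1"), ("bob", "item_2")]
print(recommendation_prompt(interactions, "alice"))
# The returned string is what gets sent to an LLM; its answer is parsed downstream.
```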
Common frameworks involve combining GNNs with LLMs, merging graph data with LLMs, and exploring additional innovative approaches that leverage the complementary advantages of graph structures and language models for diverse applications.

1) Conversational understanding: By combining LLMs with graph traversal, collaborative query rewriting [94] is proposed to improve the coverage of unseen interactions, addressing the flawed queries users pose in dialogue systems. Flawed queries often arise due to ambiguities or inaccuracies in automatic speech recognition and natural language understanding. When integrated with graph traversal, the LLM can effectively navigate through the graph structure to retrieve relevant information and provide more accurate responses.

2) Response forecasting: LLMs can effectively handle social networks and extract latent personas from users' profiles and historical posts. SOCIALSENSE [95] is proposed to utilize LLMs to extract information to predict the reactions of news media. By analyzing individuals' characteristics and behavior patterns within social networks, LLMs can effectively predict the impact of news releases and prevent unintended adverse outcomes.

3) Multi-domain dialogue state tracking: LLMs can learn from multi-domain dialogue history, queries, and graph prompts, enabling them to track dialogue states and generate dialogue content, as in SHEGO [96]. By incorporating information from various sources, such as previous dialogue exchanges, user queries, and relevant graph prompts, the LLM can understand the conversation's context and dynamics, allowing it to track the current dialogue state effectively and generate appropriate responses or dialogue content based on the inputs.

4) Recommendation systems: LLMs can also help address issues in recommendation systems [46], as many tasks in recommendation systems require learning graph structures, such as user-item interaction networks. LLMRec [44] aims to enhance recommendation systems by tackling data sparsity through three simple yet effective LLM-based graph-enhancement strategies.

5) Graph neural architecture search: LLMs can help address Graph Neural Architecture Search (GNAS). GNAS requires intensive human effort and rich domain knowledge to design search spaces and strategies. Leveraging powerful knowledge and reasoning capabilities, LLMs can identify suitable GNN frameworks within the search space of graph neural network frameworks. GPT4GNAS [45] integrates GPT-4 into GNAS, introducing a new set of prompts for GPT-4 to guide it towards generating graph neural structures.

IX. BENCHMARK DATASETS AND EVALUATIONS

In this section, we summarize benchmark datasets and evaluation metrics for LLMs.

A. Datasets

This paper summarizes the popular and new datasets, the LLMs employed, the performed tasks, and the links to the open-source code in the LLM-GGA area, as illustrated in Table V. Below, we introduce commonly used benchmarks and the new benchmarks proposed for the LLM-GGA field.

1) Popular datasets: A popular benchmark refers to a graph benchmark that is widely and frequently used. We have systematically categorized these popular benchmarks according to the six directions, detailing which benchmarks are used for each direction. Listed below are popular benchmarks commonly used in the six directions.
• Graph structure understanding: ogbn-arxiv [56], ogbn-products [56], Cora [100], CiteSeer [101], Aminer (DBLP) [57], MetaQA [102], Wikidata5M [103], PROTEINS [104], MUTAG [105], NCI1 [106], PTC [107], Foursquare [108].
• Graph learning: ogbn-arxiv [56], ogbn-products [56], ogb-papers110M [56], ogb-citation2 [56], Cora [100], CiteSeer [101], Amazon-items [109], PubMed [110], Reddit [111], CoraFull [112], Amazon [113], PROTEINS [104], COX2 [114], BZR [114], OAG [115]
• Graph-formed reasoning: GSM8K [116], SVAMP [117], FOLIO [118]
• Graph representation: Cora [100], CiteSeer [101], Goodreads-books [119], PubMed [110], Amazon [113], MIMIC-III [120], Freebase [121], FB15K-237 [122]
• KG-based augmented retrieval: CWQ [123], WebQSP [124], Wikidata [103]
• Graph-LLM-based applications: depending on specific applications.

2) New datasets: Existing datasets alone are not enough to explore LLMs' ability to understand graph structures and their potential to solve graph problems. As a result, many works have proposed new benchmarks to advance research in this field, as shown in Table VI.
• GPR [59] contains 37 particular connected graph instances generated by the NetworkX toolkit, including the "bull graph," "wheel graph," "lollipop graph," etc. These generated graph instances are relatively small, with about 15 nodes and 28 links on average.
• GraphTMI [87] is a graph benchmark featuring a hierarchy of graphs, associated prompts, and encoding modalities. Graph task difficulty depends on the dual criteria of (1) the count of motifs and (2) homophily in the graph, which yields a dataset of EASY, MEDIUM, and HARD graph problems.
• LLM4DyG [25] is a benchmark to evaluate whether LLMs are capable of understanding spatial-temporal information on dynamic graphs. Nine dynamic graph tasks are designed to assess LLMs' abilities considering spatial and temporal dimensions.
• GraphQA [86] comprises a set of diverse fundamental graph problems with more varied and realistic graph structures compared to previous studies in LLM research. GraphQA is designed to measure the performance of LLMs in graph data reasoning.
• NLGraph [27] is a benchmark to examine whether language models can reason with graphs and structures. NLGraph contains eight graph structure understanding tasks with varying algorithmic difficulties. Depending on network size, graph sparsity, and more, NLGraph provides easy, medium, and hard subsets for each graph reasoning task to enable difficulty scaling and fine-grained analysis.
• GraphextQA [98] is a benchmark dataset for open-domain question answering. It includes paired subgraphs used to develop and evaluate graph language models. The subgraphs are retrieved from Wikidata and contain reasoning paths from entities mentioned in the questions to the entities that the questions are asking about.
• CS-TAG [99] is a comprehensive and wide-ranging compilation of benchmark datasets for TAGs. This dataset encompasses a variety of challenging scenarios, ranging from citation networks to purchase graphs. The collection consists of eight distinct TAGs sourced from diverse domains.

We also list which directions these new benchmarks are typically used for. For graph structure understanding, GPR [59], GraphTMI [87], LLM4DyG [25], NLGraph [27], and CS-TAG [99] can be used. For graph learning, CS-TAG [99] can be used. For graph-formed reasoning, GraphextQA [98] can be used. For graph representation, GraphTMI [87], GraphQA [86], and CS-TAG [99] can be used. For KG-based augmented retrieval, GraphextQA [98] can be used.

B. Evaluations

Evaluating the results of different tasks related to LLM-GGA is also a critical issue.
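For instance, evaluating an LLM on the kind of small synthetic instances described above (in the spirit of GPR's [59] NetworkX-generated graphs) typically means generating a graph, serializing it into a prompt, and comparing the model's answer with ground truth computed by a graph library; the prompt wording and answer parsing below are assumptions.

```python
import networkx as nx

def edge_list_prompt(graph, source, target):
    edges = ", ".join(f"({u}, {v})" for u, v in graph.edges())
    return (f"Consider an undirected graph with edges {edges}. "
            f"What is the length of the shortest path from node {source} to node {target}? "
            f"Answer with a single integer.")

def check_answer(graph, source, target, llm_answer: str) -> bool:
    gold = nx.shortest_path_length(graph, source, target)
    try:
        return int(llm_answer.strip()) == gold
    except ValueError:
        return False

g = nx.lollipop_graph(4, 3)        # one of the small generated instance types mentioned above
prompt = edge_list_prompt(g, 0, 6)
print(prompt)
print(check_answer(g, 0, 6, "4"))  # the string "4" stands in for an LLM response
```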
Thus, selecting evaluation metrics to assess the results is essential to determining how well LLMs perform their understanding of graphs and how effectively models combining graphs and LLMs perform on various tasks is vital. This section summarizes the metrics of different tasks, shown as Table VII. Note that all test results related to LLMs Method InstrucGLM [78] GPT4Graph [23] LLMtoGraph [24] TABLE V: A summary of LLM-GGA methods with datasets and source links. Dataset ogbn-arxiv, Cora, PubMed LLM Flan-T5 (instruction-finetune), Llama- v1-7b (LoRA) Task Link, Node ogbn-arxiv,Aminer,Wiki,MetaQA InstructGPT-3(frozen) generated by GPTs GPT-3.5-turbo, GPT-4, Wizard-Vicuna- 13B, 30B-Lazarus-Uncensored-HF LLaMA, Palm-Cortex-001 text-ada-embedding-002, Graph-LLM [73] ogbn-arxiv, Cora, PubMed, ogbn-products TAPE [30] LLM4DyG [25] GraphGPT [79] GPPT [80] GraphPrompt [81] All in one [82] Graph-ToolFormer [59] RGV [83] LLM-GNN [72] ogbn-arxiv, Cora, PubMed, ogbn-products GPT-3.5 LLM4DyG GPT-3.5-turbo, Vicuna-7B, Vicuna-13B, Llama-2-13B, CodeLlama-2-13B ogbn-arxiv, Cora, PubMed vicuna-7B-v1.1, vicuna-7B-v1.5 Cora, Reddit, CoraFull, Amazon-CoBuy, ogbn-arxiv etc. Flickr, PROTEINS, COX2, ENZYMES, BZR Cora, CiteSeer, Reddit, Amazon, Pubmed - - - GPR, Cora, Pubmed, Citeseer, PROTEINS, MUTAG, NCI1, PTC, Twitter, Foursquare GPT-J-6B GSM8K, SVAMP, ASDiv-a CORA, CITESEER, PUBMED, WIKICS, OGBN-ARXIV, OGBN-PRODUCTS GPT-3.5-turbo GPT-3.5-turbo Which Modality should I use [87] Cora, Citeseer, Pubmed,GraphTMI GPT-4, GPT-4V WalkLM [38] GraphText [37] PubMed, MIMIC-III, Freebase, FB15K-237 PLMs Cora, Citeseer, Texas, Wisconsin, Cornell Llama-2-7B TALK LIKE A GRAPH [86] GraphQA PaLM 2-XXS, PaLM 62B Reasoning, Node, Graph Multi-hop Reasoning Node Node Graph Node Link, Node Link, Node, Graph Link, Edge, Node, Graph Q&A, Reasoning math problems Node Representation, Node Representation, Node, Link Node Node, Link Graph-guided CoT [84] 2WikiMultihopQA, MusiQue, Bamboogle Llama-2-13B,Llama-2-70B multi-hop question answering NLGraph [27] NLGraph GPT-3.5- TEXT-DAVINCI-003, TURBO, CODE-DAVINCI-002, GPT-4 Link,Node,Graph,Path,Pattern code link Collaborative Query Rewriting [94] opportunity test sets, guardrail test set WHEN AND WHY [26] CR [35] OGBN-ARXIV, CORA, PUBMED, OGBN- PRODUCT, ARXIV-2023 Dolly V2 ChatGPT Conversational Understanding - Node FOLIO, LogiQA, ProofWriter, LogicalDe- duction GPT-3.5-turbo, GPT-4, LLaMA-13B, LLaMA-65B Logic reasoning Response Forecasting code link SOCIALSENSE [95] RFPN, Twitter DGTL [39] SHEGO [96] Cora, PubMed, Books-History SGD, MultiWOZ 2.1 Graph of Thought(GoT) [34] individual data PLMs Llama-2-13B T5-small GPT3.5(frozen) ogbnarxiv, ogbn-products, ogbn-papers100M PLMs OAG OGBN-Arxiv, OGBN-Products, OGBL- Citation2 Netflix, MovieLens OGB OGBN-ARXIV, CORA OGBN-ARXIV, Instagram, Reddit T5-base PLMs gpt-3.5-turbo-16k gpt-3.5-turbo PLMs PLMs OGBN-ARXIV, CORA, PubMed GPT-3.5, GPT-4 NLGraph Cora, Amazon Gradio Cora, PubMed AQUA-RAT, ScienceQA HotpotQA, PDFTriage, Rank IIRC, 2WikiMQA, MuSiQue, GPT-4 T5-base Llama GPT-4 Llama-7B OGBN-ARXIV, CORA, PubMed, Citeseer GPT-4 Graph neural architecture search Llama2-7B, Llama2-13B Link, node, graph, path, pattern code link PLMs Representation code link GPT-4V, Next-GPT Link, node, graph, application Node multi-domain DST Graph-formed reasoning Node Link Node, link Recommendation Node generation Node, link, graph Representation Node, link Link, node, graph Graph-formed reasoning KG+LLM KG+LLM KG+LLM KG+LLM KG+LLM KG+LLM KG+LLM KG+LLM Head-to-Tail [88] 
DBpredia, Movie, Book, Academics DBLP, UMLS CWQ, WebQSP, GrailQA, QALD10-en, etc. GPT-3.5, GPT-4, Llama-2 DBpedia, Wikidata ChatGPT, Claude ConceptNet, UMLS, OpenBookQA, etc. FLAN-T5 xlarge (3B), xxlarge (11B) WebQSP, CWQ, Freebase LLaMA2-7B Wikidata GPT-3.5-turbo, GPT-4, Babbage-002, Davinci-002, Vicuna-33b-v1.3, Xwin-LM-13B-V0.2, Yi-34B-Chat Link code link code link code link code link code link - code link code link code link code link code link - code link - code link code link - - code link code link - - code link code link - code link code link - code link - - - - - code link code link - - code link code link code link code link - GLEM [31] LPNL [77] SIMTEG [71] Llmrec [44] ENG [75] OFA [32] G-prompt [33] Beyond Text [74] GPT4GNAS [45] Graphllm [64] G2P2 [85] ChatGraph [97] Graph Agent [76] GoT* [36] KGP [93] GLaM [40] ToG [41] Autoalign [92] GNP [42] RoG [43] KGLens [91]

TABLE VI: A summary of new datasets.
New Benchmark: Link
GPR [59]: https://github.com/jwzhanggy/Graph Toolformer/tree/main/data
GraphTMI [87]: To be released
LLM4DyG [25]: To be released
GraphQA [86]: To be released
NLGraph [27]: https://github.com/Arthur-Heng/NLGraph/tree/main/NLGraph
GraphextQA [98]: https://huggingface.co/datasets/drt/graphext-qa
CS-TAG [99]: https://github.com/sktsherlock/TAG-Benchmark

TABLE VII: Evaluations.
Tasks: Metrics
Graph structure understanding task: Accuracy, ROUGE, BLEU, Time cost, Comprehension, Correctness, Fidelity, Rectification Comprehension
Graph learning task: Accuracy, Macro-F1, Training Time, Tuned Parameters, GPU Occupancy, Mismatch Rate, Denial Rate, Token Limit Fraction
Graph reasoning task: Accuracy, F1-score, Precision, Recall, The Latency-Volume Trade-off, Number of errors and cost
Graph representation: Depending on downstream tasks
KG-based augmented retrieval: Accuracy, F1-score, Precision, Recall
Graph-LLM-based applications: Depending on different tasks

Note that all test results related to LLMs in this paper are conducted using GPT-3.5 turbo or GPT-4 turbo.

1) Graph structure understanding task: Several metrics are usually used in graph structure understanding tasks: accuracy, ROUGE [125], BLEU [126], time cost, comprehension, correctness, fidelity, and rectification comprehension. Accuracy, ROUGE, BLEU, and time cost are commonly used metrics. Meanwhile, comprehension, correctness, fidelity, and rectification comprehension are new metrics [24] used to evaluate the ability of LLMs to understand graphs through natural language, the accuracy of solving graph problems, and the level of confidence in the answers provided.

2) Graph learning task: For graph learning tasks, when evaluating a model, various metrics are considered to determine its effectiveness, efficiency, and computational demands. When assessing the effectiveness of a model, metrics such as accuracy, macro-F1, mismatch rate, and denial rate [87] are considered. In terms of efficiency, metrics like training time and tuned parameters are assessed. For computational costs, metrics such as GPU occupancy and token limit fraction are examined. Notably, the token limit fraction indicates the proportion of tokens used compared to the maximum allowed by the model's constraints and can be formalized as follows:

T = Number of usage tokens / Token limit constraint for the model    (6)

3) Graph reasoning task: When it comes to graph reasoning tasks, the two main factors taken into consideration are effectiveness and efficiency. Several metrics are used to assess effectiveness, including accuracy, number of errors and cost, F1-score, precision, and recall [127].
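A direct implementation of the token limit fraction in Eq. (6), together with a simple exact-match accuracy, is sketched below; the whitespace tokenizer is a placeholder for whichever tokenizer the evaluated model actually uses.

```python
def token_limit_fraction(prompt: str, token_limit: int) -> float:
    """Eq. (6): tokens used divided by the model's token limit (whitespace tokens as a stand-in)."""
    return len(prompt.split()) / token_limit

def exact_match_accuracy(predictions, references) -> float:
    """Fraction of predictions that match the reference answer after normalization."""
    hits = sum(p.strip().lower() == r.strip().lower() for p, r in zip(predictions, references))
    return hits / len(references)

print(token_limit_fraction("Is node 1 connected to node 4 in graph G?", token_limit=4096))
print(exact_match_accuracy(["Yes", "no"], ["yes", "Yes"]))  # -> 0.5
```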
On the other hand, efficiency is evaluated through metrics such as the Latency-Volume Trade-off.

4) Graph representation: The effectiveness of graph representation is typically judged based on the performance of the downstream tasks that use this graph representation.

5) Knowledge graph-based augmented retrieval: Tasks in the KG-based augmented retrieval direction typically involve question-answering tasks. Evaluation metrics commonly used include accuracy, precision, recall, F1-score, Hits@k [128], EM [129], and MSE, and for some generative tasks, human evaluation may also be utilized.

X. FUTURE DIRECTIONS

The above survey of the state-of-the-art LLM-GGA research reveals a promising and young research field. The following section discusses exciting directions for future work.

A. More Complex Graph Problems

More complex graph tasks. Can LLMs solve graph algorithm problems? Existing works on traditional graph tasks are based on fundamental graph problems such as shortest path, clustering coefficient computing, maximum flow, etc. However, can LLMs address NP problems such as community search, interactive graph problems, or even NP-hard problems, and if so, how can they tackle them? For graph learning tasks, current research primarily focuses on simple node, edge, and graph classification. Future work can focus on more complex graph learning problems, such as the diverse classification outcomes arising from isomorphic and heterogeneous graphs.

More complex graph patterns. Graphs contain various graph patterns, each with its explicit definition and unique characteristics, such as stars, triangles, cliques, butterflies, and more. Therefore, recognizing graph patterns and utilizing their characteristics to solve downstream tasks can be highly advantageous. Currently, only limited works leverage the properties of stars, triangles, and cliques to solve problems.

Furthermore, understanding graph data still remains a significant challenge for existing LLMs, limiting their ability to tackle more complex graph problems. Therefore, incorporating LLMs into the process is a promising direction for solving more complex graph problems.

B. LLM Exploration on Diverse Graphs

Most existing work mainly focuses on static graphs, while there exists a wide range of different graphs, including undirected, directed, cyclic, acyclic, isomorphic, heterogeneous, dynamic, etc. Different types of graphs have significant structural differences, such as static graphs, dynamic graphs, temporal graphs, uncertain graphs, heterogeneous graphs, etc. Specifically, unlike static graphs, dynamic graphs can be represented as ordered lists or asynchronous streams of timed events, capturing patterns of temporal network evolution, such as the addition or removal of nodes and edges. Evaluating the ability of LLMs to understand the spatio-temporal information of dynamic graphs is crucial for web applications. Evaluating whether LLMs can determine when nodes are connected, identify which nodes are connected to a given node at a specific time, and find a chronological path by combining temporal and spatial information is essential to assessing LLMs' understanding of dynamic graphs. Future work can further explore other types of graphs, such as dynamic graphs and temporal graphs, address problems like maximum flow, and predict the evolution of graphs.

Moreover, existing studies have conflicting views on the LLM graph reasoning ability, with some presenting contradictory findings. This ambiguity could be due to various factors, including dataset selection, diverse prompt engineering techniques, the range of graph reasoning tasks, and the utilization of different LLM models.

C. Better LLM-GNN Pipelines

GNNs are designed to handle structural information by continuously learning information from surrounding subgraphs through aggregation functions. On the other hand, LLMs excel in processing textual information, text reasoning, semantic understanding, and more. The challenge lies in leveraging both advantages to enable a pipeline that can effectively handle both attributed and pure graphs. If GNNs and LLMs are simply stacked, the parameter size of GNNs is notably smaller than that of LLMs. This discrepancy may result in the issue of vanishing gradients during training, as mentioned in [130], which can impede the iterative updating process of GNNs. Additionally, GNNs cannot yet fully utilize the extensive knowledge contained within LLMs, and they cannot effectively extract specific knowledge tailored for particular downstream tasks on different graphs.

D. Graph Foundation Model

LLMs are undoubtedly the foundational models of NLP. Can we draw inspiration from LLMs to train a graph foundation model? For example, can training strategies like instruction tuning and DPO be applied to tasks involving graphs? The current research has primarily introduced graph foundation models in the form of LLM-GNN pipelines and graph-aware tuning of LLMs. Future endeavors can focus on exploring graph foundation models better suited for tasks involving graphs.

E. Better Graph Prompts

Most graph prompts are currently designed based on GNNs, with only a few works focusing on LLMs. Graph prompts for LLMs have yet to be sufficiently explored.

Graph Prompt for GNNs. The typical approach uses simple concatenation, addition, or dot product operations with trainable parameters. Some existing works have considered more complex fusion methods, such as [82], which assumes the structural features of graph prompts. However, compared to the combination of prompts and pretexts, the variety of graph prompts and pre-graphs is still in the exploratory stage.

Graph-enhanced Prompts for LLMs. Relying solely on manual prompts and self-prompting has limited capability to improve model performance, as they only explore the existing abilities of LLMs. As shown in Section III-C, LLMs can be trained as agents to utilize tools for graph tasks that are hard to solve, like the API call prompts of [59]. GoT [130] is also a graph reasoning paradigm that enables LLMs to provide correct answers. Future work based on the graph reasoning paradigm can consider cost-effective approaches for GoT, such as pruning and tricks to reduce algorithm complexity. In the future, it would be beneficial to explore simpler GoT paradigms that can improve the effectiveness of LLMs.

F. Modal Alignment

Modal alignment refers to the alignment between two modalities: text and graph. The input for LLMs is typically sequential data, often text. Graph and text are two different modalities, and studying the alignment between these two modalities for LLMs involves finding a shared mapping feature space for graphs and text. The shared mapping space allows LLMs to understand graph data in much the same way as they understand textual information.

G. Explainability

GNNs are currently widely used for solving complex graph problems. However, they lack interpretability, which hinders their practical application. On the other hand, LLMs possess reasoning capabilities and have succeeded in various natural language processing tasks. The combination of LLMs and GNNs has the potential to offer a more transparent approach to solving graph problems by leveraging the reasoning abilities of LLMs. If the combination of LLMs and GNNs is interpretable, it can be utilized for various tasks, including recommendation systems, drug discovery, and fraud detection. This combination can lead to the development of more reliable and efficient decision-making systems across various domains.

H. Efficiency on Large-scale Graphs

Due to the limited input length of LLMs, the graph sizes inputted through prompts typically consist of dozens of nodes. However, for large graphs with tens of thousands of nodes and edges, how can LLMs with limited input length solve such large graphs? A larger input window is required in the case of attributed graphs, where both node and edge attributes need to be considered along with the graph structure. How do LLMs address this case? There are currently few effective methods to enable LLMs to handle such graphs.

XI. CONCLUSIONS

LLM-GGA has emerged as a promising field that has garnered significant attention from researchers. This paper introduces a comprehensive structural taxonomy based on recent research, which classifies LLM-GGA research into three main directions: LLM-GQP, LLM-GIL, and graph-LLM-based applications. LLM-GQP encompasses graph understanding and KG-based augmented retrieval, while LLM-GIL involves graph learning, graph-formed reasoning, and graph representation. The motivation, challenges, and mainstream methods of each direction are thoroughly examined.

For the six mentioned directions, a comparison of various methods in each area was conducted to explore their potential. It is observed that LLMs show preliminary capabilities in structural understanding, addressing issues like maximum flow and bipartite graph matching over small graphs. However, they are susceptible to factors such as node degree and graph density, leading to potential misjudgments in graph connectivity. Additionally, LLMs prove beneficial for graph learning tasks due to their strong semantic understanding and reasoning abilities, coupled with learning from extensive corpora, which can provide external knowledge to GNNs and aid in semantic information comprehension, learning, and reasoning. Thanks to LLMs' semantic understanding capabilities, graph representation can achieve deeper semantic embeddings. The discussion also delves into KG-based augmented retrieval to enhance LLMs' retrieval and factual knowledge-answering abilities. The paper summarizes over 40 datasets, evaluation metrics for the six directions, and source code for over 30 mainstream methods in these directions. It highlights the existing challenges in current methods and proposes future directions to guide and motivate further research in the LLM-GGA field.

REFERENCES

[1] J. Wei, M. Bosma, V. Y. Zhao, K. Guu, A. W. Yu, B. Lester, N. Du, A. M. Dai, and Q. V. Le, "Finetuned Language Models Are Zero-Shot Learners," Feb. 2022, arXiv:2109.01652 [cs]. [Online]. Available: http://arxiv.org/abs/2109.01652
[2] B. Peng, C. Li, P. He, M. Galley, and J. Gao, "Instruction Tuning with GPT-4," Apr. 2023, arXiv:2304.03277 [cs]. [Online]. Available: http://arxiv.org/abs/2304.03277 is "Direct preference optimization: Your [3] R. Rafailov, A. Sharma, E. Mitchell, C. D. Manning, S. Ermon, language and C.
Finn, in Neural secretly a reward model,” in Advances model Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023, A. Oh, T. Naumann, A. Globerson, K. Saenko, M. Hardt, and S. Levine, Eds., 2023. [Online]. Available: http://papers.nips.cc/paper files/paper/2023/hash/ a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html [4] L. Sun, Y. Huang, H. Wang, S. Wu, Q. Zhang, Y. Li, C. Gao, Y. Huang, W. Lyu, Y. Zhang, X. Li, Z. Liu, Y. Liu, Y. Wang, Z. Zhang, B. Vidgen, B. Kailkhura, C. Xiong, C. Xiao, C. Li, E. Xing, F. Huang, H. Liu, H. Ji, H. Wang, H. Zhang, H. Yao, M. Kellis, M. Zitnik, M. Jiang, M. Bansal, J. Zou, J. Pei, J. Liu, J. Gao, J. Han, J. Zhao, J. Tang, J. Wang, J. Vanschoren, J. Mitchell, K. Shu, K. Xu, K.-W. Chang, L. He, L. Huang, M. Backes, N. Z. Gong, P. S. Yu, P.-Y. Chen, Q. Gu, R. Xu, R. Ying, S. Ji, S. Jana, T. Chen, T. Liu, T. Zhou, W. Wang, X. Li, X. Zhang, X. Wang, X. Xie, X. Chen, X. Wang, Y. Liu, Y. Ye, Y. Cao, Y. Chen, and Y. Zhao, “TrustLLM: Trustworthiness in Large Language Models,” Mar. 2024, arXiv:2401.05561 [cs]. [Online]. Available: http://arxiv.org/abs/2401.05561 [5] H. Touvron, L. Martin, K. Stone, P. Albert, A. Almahairi, Y. Babaei, N. Bashlykov, S. Batra, P. Bhargava, S. Bhosale, D. Bikel, L. Blecher, C. Canton-Ferrer, M. Chen, G. Cucurull, D. Esiobu, J. Fernandes, J. Fu, W. Fu, B. Fuller, C. Gao, V. Goswami, N. Goyal, A. Hartshorn, S. Hosseini, R. Hou, H. Inan, M. Kardas, V. Kerkez, M. Khabsa, I. Kloumann, A. Korenev, P. S. Koura, M. Lachaux, T. Lavril, J. Lee, D. Liskovich, Y. Lu, Y. Mao, X. Martinet, T. Mihaylov, P. Mishra, I. Molybog, Y. Nie, A. Poulton, J. Reizenstein, R. Rungta, K. Saladi, A. Schelten, R. Silva, E. M. Smith, R. Subramanian, X. E. Tan, B. Tang, R. Taylor, A. Williams, J. X. Kuan, P. Xu, Z. Yan, I. Zarov, Y. Zhang, A. Fan, M. Kambadur, S. Narang, A. Rodriguez, R. Stojnic, S. Edunov, and T. Scialom, “Llama 2: Open foundation and fine-tuned chat models,” CoRR, vol. abs/2307.09288, 2023. [Online]. Available: https://doi.org/10.48550/arXiv.2307.09288 [6] L. Ouyang, J. Wu, X. Jiang, D. Almeida, C. L. Wainwright, P. Mishkin, C. Zhang, S. Agarwal, K. Slama, A. Ray, J. Schulman, J. Hilton, F. Kelton, L. Miller, M. Simens, A. Askell, P. Welinder, J. Leike, and R. Lowe, “Training language P. F. Christiano, models to follow instructions with human feedback,” in Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, NeurIPS 2022, New Orleans, LA, USA, November 28 - December 9, 2022, S. Koyejo, S. Mohamed, A. Agarwal, D. Belgrave, K. Cho, and A. Oh, Eds., 2022. [Online]. Available: http://papers.nips.cc/paper files/paper/2022/ hash/b1efde53be364a73914f58805a001731-Abstract-Conference.html [7] Y. Zhuang, Y. Yu, K. Wang, H. Sun, and C. Zhang, “Toolqa: tools,” in A dataset Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023, A. Oh, T. Naumann, A. Globerson, K. Saenko, M. Hardt, and S. Levine, Eds., 2023. [Online]. Available: http://papers.nips.cc/paper files/paper/ 2023/hash/9cb2a7495900f8b602cb10159246a016-Abstract-Datasets and Benchmarks.html for LLM question answering with external [8] Z. Li, S. Fan, Y. Gu, X. Li, Z. Duan, B. Dong, N. Liu, and J. 
Wang, “Flexkbqa: A flexible llm-powered framework for few-shot knowledge base question answering,” in Proceedings of the AAAI Conference on Artificial Intelligence, vol. 38, no. 17, 2024, pp. 18 608–18 616. [9] B. Zhang, B. Haddow, and A. Birch, “Prompting large language model for machine translation: A case study,” in International Conference on Machine Learning, ICML 2023, 23-29 July 2023, Honolulu, Hawaii, USA, ser. Proceedings of Machine Learning Research, A. Krause, E. Brunskill, K. Cho, B. Engelhardt, S. Sabato, and J. Scarlett, Eds., vol. 202. PMLR, 2023, pp. 41 092–41 110. [Online]. Available: https://proceedings.mlr.press/v202/zhang23m.html [10] J. Liu, C. S. Xia, Y. Wang, and L. Zhang, “Is your code generated by chatgpt really correct? rigorous evaluation of large language models for code generation,” in Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023, A. Oh, T. Naumann, A. Globerson, K. Saenko, M. Hardt, and S. Levine, Eds., 2023. [Online]. Available: http://papers.nips.cc/paper files/paper/2023/hash/ 43e9d647ccd3e4b7b5baab53f0368686-Abstract-Conference.html [11] A. Ni, S. Iyer, D. Radev, V. Stoyanov, W. Yih, S. I. Wang, and X. V. Lin, “LEVER: learning to verify language-to-code generation with execution,” in International Conference on Machine Learning, ICML 2023, 23-29 July 2023, Honolulu, Hawaii, USA, ser. Proceedings of Machine Learning Research, A. Krause, E. Brunskill, K. Cho, B. Engelhardt, S. Sabato, and J. Scarlett, Eds., vol. 202. PMLR, 2023, pp. 26 106–26 128. [Online]. Available: https://proceedings.mlr.press/v202/ni23b.html [12] P. Sen, G. Namata, M. Bilgic, L. Getoor, B. Gallagher, and T. Eliassi-Rad, “Collective Classification in Network Data,” AI Magazine, vol. 29, no. 3, pp. 93–106, Sep. 2008. [Online]. Available: https://onlinelibrary.wiley.com/doi/10.1609/aimag.v29i3.2157 [13] W. L. Hamilton, Z. Ying, and J. Leskovec, “Inductive representation Information learning on large graphs,” in Advances Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, I. Guyon, U. von Luxburg, S. Bengio, H. M. Wallach, in Neural R. Fergus, S. V. N. Vishwanathan, and R. Garnett, Eds., 2017, pp. 1024–1034. [Online]. Available: https://proceedings.neurips.cc/paper/ 2017/hash/5dd9db5e033da9c6fb5ba83c7a7ebea9-Abstract.html [14] Z. Wu, B. Ramsundar, E. N. Feinberg, J. Gomes, C. Geniesse, A. S. Pappu, K. Leswing, and V. Pande, “Moleculenet: a benchmark for molecular machine learning,” Chemical science, vol. 9, no. 2, pp. 513– 530, 2018. [15] A. Broder, R. Kumar, F. Maghoul, P. Raghavan, S. Rajagopalan, R. Stata, A. Tomkins, and J. Wiener, “Graph structure in the Web,” Computer Networks, vol. 33, no. 1-6, pp. 309–320, Jun. 2000. [Online]. Available: https://linkinghub.elsevier.com/retrieve/pii/ S1389128600000839 [16] T. N. Kipf and M. Welling, “Semi-supervised classification with graph convolutional networks,” in 5th International Conference on Learning ICLR 2017, Toulon, France, April 24-26, 2017, Representations, Conference Track Proceedings. OpenReview.net, 2017. [Online]. Available: https://openreview.net/forum?id=SJU4ayYgl [17] P. Velickovic, G. Cucurull, A. Casanova, A. Romero, P. Li`o, and Y. Bengio, “Graph attention networks,” in 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Conference Track Proceedings. 
OpenReview.net, 2018. [Online]. Available: https://openreview.net/ forum?id=rJXMpikCZ [18] J. Gilmer, S. S. Schoenholz, P. F. Riley, O. Vinyals, and G. E. Dahl, “Neural message passing for quantum chemistry,” in Proceedings of the 34th International Conference on Machine Learning, ICML 2017, Sydney, NSW, Australia, 6-11 August 2017, ser. Proceedings of Machine Learning Research, D. Precup and Y. W. Teh, Eds., vol. 70. PMLR, 2017, pp. 1263–1272. [Online]. Available: http://proceedings.mlr.press/v70/gilmer17a.html [19] Y. Hong, J. W. Lam, and B. Z. Tang, “Aggregation-induced emission: phenomenon, mechanism and applications,” Chemical communications, no. 29, pp. 4332–4353, 2009. [20] W. Cong, M. Ramezani, and M. Mahdavi, “On provable benefits of depth in training graph convolutional networks,” in Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, NeurIPS 2021, December 6-14, 2021, virtual, M. Ranzato, A. Beygelzimer, Y. N. Dauphin, P. Liang, and J. W. Vaughan, Eds., 2021, pp. 9936– 9949. [Online]. Available: https://proceedings.neurips.cc/paper/2021/ hash/524265e8b942930fbbe8a5d979d29205-Abstract.html [21] S. Fan, X. Wang, C. Shi, P. Cui, and B. Wang, “Generalizing Graph Neural Networks on Out-of-Distribution Graphs,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 46, no. 1, pp. 322–337, Jan. 2024. [Online]. Available: https://ieeexplore.ieee.org/ document/10268633/ [22] J. Liu, Z. Shen, Y. He, X. Zhang, R. Xu, H. Yu, and P. Cui, “Towards Out-Of-Distribution Generalization: A Survey,” Jul. 2023, arXiv:2108.13624 [cs]. [Online]. Available: http://arxiv.org/abs/2108. 13624 [23] J. Guo, L. Du, H. Liu, M. Zhou, X. He, and S. Han, “GPT4Graph: Can Large Language Models Understand Graph Structured Data ? An Empirical Evaluation and Benchmarking,” Jul. 2023, arXiv:2305.15066 [cs]. [Online]. Available: http://arxiv.org/abs/2305.15066 [24] C. Liu and B. Wu, “Evaluating Large Language Models on Graphs: Performance Insights and Comparative Analysis,” Sep. 2023, arXiv:2308.11224 [cs]. [Online]. Available: http://arxiv.org/abs/2308. 11224 [25] Z. Zhang, X. Wang, Z. Zhang, H. Li, Y. Qin, and W. Zhu, “LLM4DyG: Can Large Language Models Solve Spatial-Temporal Problems on Dynamic Graphs?” Mar. 2024, arXiv:2310.17110 [cs]. [Online]. Available: http://arxiv.org/abs/2310.17110 [26] J. Huang, X. Zhang, Q. Mei, and J. Ma, “Can llms effectively leverage graph structural information: When and why,” CoRR, vol. abs/2309.16595, 2023. [Online]. Available: https://doi.org/10.48550/ arXiv.2309.16595 [27] H. Wang, S. Feng, T. He, Z. Tan, X. Han, and Y. Tsvetkov, “Can Language Models Solve Graph Problems in Natural Language?” Jan. 2024, arXiv:2305.10037 [cs]. [Online]. Available: http://arxiv.org/abs/ 2305.10037 [28] Q. Dong, L. Dong, K. Xu, G. Zhou, Y. Hao, Z. Sui, and for Science: A Study on P [Online]. Available: F. Wei, “Large Language Model vs. NP,” Sep. 2023, arXiv:2309.05689 [cs]. http://arxiv.org/abs/2309.05689 [29] L. Fan, W. Hua, L. Li, H. Ling, and Y. Zhang, “NPHardEval: Dynamic Benchmark on Reasoning Ability of Large Language Models via Complexity Classes,” Feb. 2024, arXiv:2312.14890 [cs]. [Online]. Available: http://arxiv.org/abs/2312.14890 [30] X. He, X. Bresson, T. Laurent, A. Perold, Y. LeCun, and B. Hooi, “Harnessing Explanations: LLM-to-LM Interpreter for Enhanced Text-Attributed Graph Representation Learning,” Mar. 2024, arXiv:2305.19523 [cs]. [Online]. Available: http://arxiv.org/abs/2305. 19523 [31] J. Zhao, M. 
Qu, C. Li, H. Yan, Q. Liu, R. Li, X. Xie, and J. Tang, “Learning on Large-scale Text-attributed Graphs via Variational Inference,” Mar. 2023, arXiv:2210.14709 [cs]. [Online]. Available: http://arxiv.org/abs/2210.14709 [32] H. Liu, J. Feng, L. Kong, N. Liang, D. Tao, Y. Chen, and M. Zhang, “One for All: Towards Training One Graph Model for All Classification Tasks,” Dec. 2023, arXiv:2310.00149 [cs]. [Online]. Available: http://arxiv.org/abs/2310.00149 [33] X. Huang, K. Han, D. Bao, Q. Tao, Z. Zhang, Y. Yang, and Q. Zhu, “Prompt-based Node Feature Extractor for Few-shot Learning on Text-Attributed Graphs,” Sep. 2023, arXiv:2309.02848 [cs]. [Online]. Available: http://arxiv.org/abs/2309.02848 [34] M. Besta, N. Blach, A. Kubicek, R. Gerstenberger, M. Podstawski, L. Gianinazzi, J. Gajda, T. Lehmann, H. Niewiadomski, P. Nyczyk, and T. Hoefler, “Graph of Thoughts: Solving Elaborate Problems with Large Language Models,” Proceedings of the AAAI Conference on Artificial Intelligence, vol. 38, no. 16, pp. 17 682–17 690, Mar. 2024, arXiv:2308.09687 [cs]. [Online]. Available: http://arxiv.org/abs/2308. 09687 [35] Y. Zhang, J. Yang, Y. Yuan, and A. C.-C. Yao, “Cumulative Reasoning with Large Language Models,” Apr. 2024, arXiv:2308.04371 [cs]. [Online]. Available: http://arxiv.org/abs/2308.04371 [36] Y. Yao, Z. Li, and H. Zhao, “Beyond Chain-of-Thought, Effective Graph-of-Thought Reasoning in Language Models,” Mar. 2024, arXiv:2305.16582 [cs]. [Online]. Available: http://arxiv.org/abs/2305. 16582 [37] J. Zhao, L. Zhuo, Y. Shen, M. Qu, K. Liu, M. Bronstein, Z. Zhu, and J. Tang, “GraphText: Graph Reasoning in Text Space,” Oct. 2023, arXiv:2310.01089 [cs]. [Online]. Available: http://arxiv.org/abs/2310.01089 Z. Liu, Tan, [38] Y. Lv, W. Zhou, H. and C. Yang, “Walklm: A uniform language model fine-tuning framework for attributed graph embedding,” in Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, 2023, A. Oh, T. Naumann, USA, December 10 A. Globerson, K. Saenko, M. Hardt, and S. Levine, Eds., 2023. [Online]. Available: http://papers.nips.cc/paper files/paper/2023/ hash/2ac879d1865475a7abc8dfc7a9c15c27-Abstract-Conference.html [39] Y. Qin, X. Wang, Z. Zhang, and W. Zhu, “Disentangled Representation Learning with Large Language Models for Text-Attributed Graphs,” Mar. 2024, arXiv:2310.18152 [cs]. [Online]. Available: http://arxiv. org/abs/2310.18152 16, - [40] S. Dernbach, K. Agarwal, A. Zuniga, M. Henry, and S. Choudhury, “GLaM: Fine-Tuning Large Language Models for Domain Knowledge Graph Alignment via Neighborhood Partitioning and Generative Subgraph Encoding,” Apr. 2024, arXiv:2402.06764 [cs]. [Online]. Available: http://arxiv.org/abs/2402.06764 [41] J. Sun, C. Xu, L. Tang, S. Wang, C. Lin, Y. Gong, L. M. Ni, H.-Y. Shum, and J. Guo, “Think-on-Graph: Deep and Responsible Reasoning of Large Language Model on Knowledge Graph,” Mar. 2024, arXiv:2307.07697 [cs]. [Online]. Available: http://arxiv.org/abs/ 2307.07697 [42] Y. Tian, H. Song, Z. Wang, H. Wang, Z. Hu, F. Wang, N. V. Chawla, and P. Xu, “Graph Neural Prompting with Large Language [Online]. Available: Models,” Dec. 2023, arXiv:2309.15427 [cs]. http://arxiv.org/abs/2309.15427 [43] L. Luo, Y.-F. Li, G. Haffari, and S. Pan, “Reasoning on Graphs: Faithful and Interpretable Large Language Model Reasoning,” Feb. 2024, arXiv:2310.01061 [cs]. [Online]. Available: http://arxiv.org/abs/ 2310.01061 [44] W. Wei, X. Ren, J. Tang, Q. Wang, L. Su, S. Cheng, J. Wang, D. 
Yin, and C. Huang, “LLMRec: Large Language Models with Graph Augmentation for Recommendation,” Jan. 2024, arXiv:2311.00423 [cs]. [Online]. Available: http://arxiv.org/abs/2311.00423 [45] H. Wang, Y. Gao, X. Zheng, P. Zhang, H. Chen, J. Bu, and P. S. Yu, “Graph Neural Architecture Search with GPT- 4,” Mar. 2024, arXiv:2310.01436 [cs]. [Online]. Available: http: //arxiv.org/abs/2310.01436 [46] L. Wu, Z. Qiu, Z. Zheng, H. Zhu, and E. Chen, “Exploring large language model for graph data understanding in online job recommendations,” in Thirty-Eighth AAAI Conference on Artificial Intelligence, AAAI 2024, Thirty-Sixth Conference on Innovative Applications of Artificial IAAI 2024, Fourteenth Intelligence, Symposium on Educational Advances in Artificial Intelligence, EAAI 2014, February 20-27, 2024, Vancouver, Canada, M. J. Wooldridge, J. G. Dy, and S. Natarajan, Eds. AAAI Press, 2024, pp. 9178–9186. [Online]. Available: https://doi.org/10.1609/aaai.v38i8.28769 [47] W. X. Zhao, K. Zhou, J. Li, T. Tang, X. Wang, Y. Hou, Y. Min, B. Zhang, J. Zhang, Z. Dong, Y. Du, C. Yang, Y. Chen, Z. Chen, J. Jiang, R. Ren, Y. Li, X. Tang, Z. Liu, P. Liu, J.-Y. Nie, and J.-R. Wen, “A Survey of Large Language Models,” Nov. 2023, arXiv:2303.18223 [cs]. [Online]. Available: http://arxiv.org/abs/2303.18223 [48] J. Yang, H. Jin, R. Tang, X. Han, Q. Feng, H. Jiang, B. Yin, and X. Hu, “Harnessing the power of llms in practice: A survey on chatgpt and beyond,” CoRR, vol. abs/2304.13712, 2023. [Online]. Available: https://doi.org/10.48550/arXiv.2304.13712 [49] M. Himsolt, “Gml: A portable graph file format,” Technical report, Universitat Passau, Tech. Rep., 1997. [50] U. Brandes, M. Eiglsperger, J. Lerner, and C. Pich, “Graph markup lan- guage (graphml),” in Handbook on Graph Drawing and Visualization, R. Tamassia, Ed. Chapman and Hall/CRC, 2013, pp. 517–541. [51] N. Francis, A. Green, P. Guagliardo, L. Libkin, T. Lindaaker, V. Marsault, S. Plantikow, M. Rydberg, P. Selmer, and A. Taylor, “Cypher: An Evolving Query Language for Property Graphs,” in Proceedings of the 2018 International Conference on Management of Data. Houston TX USA: ACM, May 2018, pp. 1433–1445. [Online]. Available: https://dl.acm.org/doi/10.1145/3183713.3190657 [52] M. A. Rodriguez, “The Gremlin graph traversal machine and language (invited talk),” in Proceedings of the 15th Symposium on Database Programming Languages. Pittsburgh PA USA: ACM, Oct. 2015, pp. 1–10. [Online]. Available: https://dl.acm.org/doi/10.1145/2815072. 2815073 [53] J. P´erez, M. Arenas, and C. Gutierrez, “Semantics and complexity of SPARQL,” ACM Transactions on Database Systems, vol. 34, no. 3, pp. 1–45, Aug. 2009. [Online]. Available: https://dl.acm.org/doi/10. 1145/1567274.1567278 [54] B. Wang, R. Shin, X. Liu, O. Polozov, and M. Richardson, “RAT- SQL: Relation-Aware Schema Encoding and Linking for Text-to-SQL Parsers,” Aug. 2021, arXiv:1911.04942 [cs]. [Online]. Available: http://arxiv.org/abs/1911.04942 [55] P. Liu, W. Yuan, J. Fu, Z. Jiang, H. Hayashi, and G. Neubig, “Pre-train, Prompt, and Predict: A Systematic Survey of Prompting Methods in Natural Language Processing,” ACM Computing Surveys, vol. 55, no. 9, pp. 1–35, Sep. 2023. [Online]. Available: https: //dl.acm.org/doi/10.1145/3560815 [56] W. Hu, M. Fey, M. Zitnik, Y. Dong, H. Ren, B. Liu, M. Catasta, and J. 
3 2 0 2 y a M 1 3 ] G L . s c [ 1 v 8 8 5 9 1 . 5 0 3 2 : v i X r a Active causal structure learning with advice Davin Choo National University of Singapore Themis Gouleakis National University of Singapore Arnab Bhattacharyya National University of Singapore Abstract We introduce the problem of active causal structure learning with advice. In the typical well-studied setting, the learning algorithm is given the essential graph for the observational distribution and is asked to recover the underlying causal directed acyclic graph (DAG) G∗ while minimizing the number of interventions made. In our setting, we are additionally given side information about G∗ as advice, e.g. a DAG G purported to be G∗. We ask whether the learning algorithm can benefit from the advice when it is close to being correct, while still having worst-case guarantees even when the advice is arbitrarily bad. Our work is in the same space as the growing body of research on algorithms with predictions. When the advice is a DAG G, we design an adaptive search algorithm to recover G∗ whose intervention cost is at most O(max{1, log ψ}) times the cost for verifying G∗; here, ψ is a distance measure between G and G∗ that is upper bounded by the number of variables n, and is exactly 0 when G = G∗. Our approximation factor matches the state-of-the-art for the advice-less setting. 1 Introduction A causal directed acyclic graph on a set V of n variables is a Bayesian network in which the edges model direct causal effects. A causal DAG can be used to infer not only the observational distribution of V but also the result of any intervention on any subset of variables V ′ ⊆ V . In this work, we restrict ourselves to the causally sufficient setting where there are no latent confounders, no selection bias, and no missingness in data. The goal of causal structure learning is to recover the underlying DAG from data. This is an important problem with applications in multiple fields including philosophy, medicine, biology, genetics, and econometrics [Rei56, Hoo90, KWJ+04, Woo05, RW06, ES07, SC17, RHT+17, POS+18]. Unfortunately, in general, it is known that observational data can only recover the causal DAG up to an equivalence class [Pea09, SGSH00]. Hence, if one wants to avoid making parametric assumptions about the causal mechanisms, the only recourse is to obtain experimental data from interventions [EGS05, EGS06, Ebe10]. Such considerations motivate the problem of interventional design where the task is to find a set of in- terventions of optimal cost which is sufficient to recover the causal DAG. There has been a series of recent works studying this problem [HG08, HLV14, SKDV15, KDV17, LKDV18, GKS+19, SMG+20, CSB22, CS23] under various assumptions. In particular, assuming causal sufficiency, [CSB22] gave an adaptive algorithm that actively generates a sequence of interventions of bounded size, so that the total number of interventions is at most O(log n) times the optimal. Typically though, in most applications of causal structure learning, there are domain experts and prac- titioners who can provide additional “advice” about the causal relations. Indeed, there has been a long line of work studying how to incorporate expert advice into the causal graph discovery process; e.g. see [Mee95a, SSG+98, DCJ11, FNB+11, LB18, ASC20, FH20]. In this work, we study in a principled way how using purported expert advice can lead to improved algorithms for interventional design. 
Before discussing our specific contributions, let us ground the above discussion with a concrete problem of practical importance. In modern virtualized infrastructure, it is increasingly common for applications to be modularized into a large number of interdependent microservices. These microservices communicate with each other in ways that depend on the application code and on the triggering userflow. Crucially, the communication graph between microservices is often unknown to the platform provider as the application code may be private and belong to different entities. However, knowing the graph is useful for various critical platform-level tasks, 1 such as fault localization [ZPX+19], active probing [TJG+19], testing [JBT+19], and taint analysis [CLO07]. Recently, [WAJ+23] and [ICM+22] suggested viewing the microservices communication graph as a sparse causal DAG. In particular, [WAJ+23] show that arbitrary interventions can be implemented as fault injections in a staging environment, so that a causal structure learning algorithm can be deployed to generate a sequence of interventions sufficient to learn the underlying communication graph. In such a setting, it is natural to assume that the platform provider already has an approximate guess about the graph, e.g. the graph discovered in a previous run of the algorithm or the graph suggested by public metadata tagging microservice code. The research program we put forth is to design causal structure learning algorithms that can take advantage of such potentially imperfect advice1. 1.1 Our contributions In this work, we study adaptive intervention design for recovering non-parametric causal graphs with expert advice. Specifically, our contributions are as follows. • Problem Formulation. Our work connects the causal structure learning problem with the burgeoning research area of algorithms with predictions or learning-augmented algorithms [MV22] where the goal is to design algorithms that bypass worst-case behavior by taking advantage of (possibly erroneous) advice or predictions about the problem instance. Most work in this area has been restricted to online algorithms, data structure design, or optimization, as described later in Section 2.5. However, as we motivated above, expert advice is highly relevant for causal discovery, and to the best of our knowlege, ours is the first attempt to formally address the issue of imperfect advice in this context. • Adaptive Search Algorithm. We consider the setting where the advice is a DAG G purported to be the orientations of all the edges in the graph. We define a distance measure which is always bounded by n, the number of variables, and equals 0 when G = G∗. For any integer k ≥ 1, we propose an adaptive algorithm to generate a sequence of interventions of size at most k that recovers the true DAG G∗, such that the total number of interventions is O(log ψ(G, G∗)·log k) times the optimal number of interventions of size k. Thus, our approximation factor is never worse than the factor for the advice-less setting in [CSB22]. Our search algorithm also runs in polynomial time. • Verification Cost Approximation. For a given upper bound k ≥ 1, a verifying intervention set for a DAG G∗ is a set of interventions of size at most k that, together with knowledge of the Markov equivalence class of G∗, determines the orientations of all edges in G∗. 
The minimum size of a verifying intervention set for G∗, denoted νk(G∗), is clearly a lower bound for the number of interventions required to learn G∗ (regardless of the advice graph G). One of our key technical results is a structural result about ν1. We prove that for any two DAGs G and G′ within the same Markov equivalence class, we always have ν1(G) ≤ 2 · ν1(G′) and that this is tight in the worst case. Beyond an improved structural understanding of minimum verifying intervention sets, which we believe is of independent interest, this enables us to “blindly trust” the information provided by imperfect advice to some extent. Similar to prior works (e.g. [SMG+20, CSB22, CS23]), we assume causal sufficiency and faithfulness while using ideal interventions. Under these assumptions, running standard causal discovery algorithms (e.g. PC [SGSH00], GES [Chi02]) will always successfully recover the correct essential graph from data. We also assume that the given expert advice is consistent with observational essential graph. See Appendix A for a discussion about our assumptions. 1.2 Paper organization In Section 2, we intersperse preliminary notions with related work. Our main results are presented in Section 3 with the high-level technical ideas and intuition given in Section 4. Section 5 provides some empirical validation. See the appendices for full proofs, source code, and experimental details. 1Note however that the system in [WAJ+23] is not causally sufficient due to confounding user behavior and [ICM+22] does not actively perform interventions. So, the algorithm proposed in this work cannot be used directly for the microservices graph learning problem. 2 2 Preliminaries and Related Work Basic notions about graphs and causal models are defined in Appendix B. To be very brief, if G = (V, E) is a graph on |V | = n nodes/vertices where V (G), E(G), and A(G) ⊆ E(G) denote nodes, edges, and arcs of G respectively, we write u ∼ v to denote that two nodes u, v ∈ V are connected in G, and write u → v or u ← v when specifying a certain direction. The skeleton skel(G) refers to the underlying graph where all edges are made undirected. A v-structure in G refers to a collection of three distinct vertices u, v, w ∈ V such that u → v ← w and u ̸∼ w. Let G = (V, E) be fully unoriented. For vertices u, v ∈ V , subset of vertices V ′ ⊆ V and integer r ≥ 0, we define distG(u, v) as the shortest path length between u and v, and N r G(V ′) = {v ∈ V : minu∈V ′ distG(u, v) ≤ r} ⊆ V as the set of vertices that are r-hops away from V ′ in G. A directed acyclic graph (DAG) is a fully oriented graph without directed cycles. For any DAG G, we denote its Markov equivalence class (MEC) by [G] and essential graph by E(G). DAGs in the same MEC have the same skeleton and the essential graph is a partially directed graph such that an arc u → v is directed if u → v in every DAG in MEC [G], and an edge u ∼ v is undirected if there exists two DAGs G1, G2 ∈ [G] such that u → v in G1 and v → u in G2. It is known that two graphs are Markov equivalent if and only if they have the same skeleton and v-structures [VP90, AMP97] and the essential graph E(G) can be computed from G by orienting v-structures in skel(G) and applying Meek rules (see Appendix D). In a DAG G, an edge u → v is a covered edge if Pa(u) = Pa(v) \ {u}. We use C(G) ⊆ E(G) to denote the set of covered edges of G. 
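For concreteness, the following is a minimal Python sketch (using networkx) of the graph notions just defined — skel(G), v-structures, the r-hop neighborhood N^r_G(V′), and the covered edges C(G). The helper names are our own and this is only one possible way to realize these definitions, not code prescribed by this paper; the later sketches reuse these helpers.

import itertools
import networkx as nx

def skeleton(dag: nx.DiGraph) -> nx.Graph:
    """skel(G): forget all edge orientations."""
    return dag.to_undirected()

def v_structures(dag: nx.DiGraph) -> list:
    """All triples (u, v, w) with u -> v <- w and u, w non-adjacent in skel(G)."""
    skel = skeleton(dag)
    triples = []
    for v in dag.nodes:
        for u, w in itertools.combinations(list(dag.predecessors(v)), 2):
            if not skel.has_edge(u, w):
                triples.append((u, v, w))
    return triples

def r_hop_neighborhood(graph: nx.Graph, seeds, r: int) -> set:
    """N^r_G(V'): every node within distance r of some node in seeds."""
    reachable = set()
    for s in seeds:
        dists = nx.single_source_shortest_path_length(graph, s, cutoff=r)
        reachable.update(dists.keys())
    return reachable

def covered_edges(dag: nx.DiGraph) -> list:
    """C(G): arcs u -> v where Pa(u) equals Pa(v) with u removed."""
    return [
        (u, v)
        for u, v in dag.edges
        if set(dag.predecessors(u)) == set(dag.predecessors(v)) - {u}
    ]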
2.1 Ideal interventions An intervention S ⊆ V is an experiment where all variables s ∈ S is forcefully set to some value, independent of the underlying causal structure. An intervention is atomic if |S| = 1 and bounded size if |S| ≤ k for some k ≥ 1; observational data is a special case where S = ∅. The effect of interventions is formally captured by Pearl’s do- calculus [Pea09]. We call any I ⊆ 2V a intervention set: an intervention set is a set of interventions where each intervention corresponds to a subset of variables. An ideal intervention on S ⊆ V in G induces an interventional graph GS where all incoming arcs to vertices v ∈ S are removed [EGS05]. It is known that intervening on S allows us to infer the edge orientation of any edge cut by S and V \S [Ebe07, HEH13, HLV14, SKDV15, KDV17]. We now give a definition and result for graph separators. Definition 1 (α-separator and α-clique separator, Definition 19 from [CSB22]). Let A, B, C be a partition of the vertices V of a graph G = (V, E). We say that C is an α-separator if no edge joins a vertex in A with a vertex in B and |A|, |B| ≤ α · |V |. We call C is an α-clique separator if it is an α-separator and a clique. Theorem 2 ([GRE84], instantiated for unweighted graphs). Let G = (V, E) be a chordal graph with |V | ≥ 2 and p vertices in its largest clique. There exists a 1/2-clique-separator C involving at most p − 1 vertices. The clique C can be computed in O(|E|) time. For ideal interventions, an I-essential graph EI(G) of G is the essential graph representing the Markov equivalence class of graphs whose interventional graphs for each intervention is Markov equivalent to GS for any intervention S ∈ I. There are several known properties about I-essential graph properties [HB12, HB14]: Every I-essential graph is a chain graph2 with chordal3 chain components. This includes the case of I = ∅. Orientations in one chain component do not affect orientations in other components. In other words, to fully orient any essential graph E(G∗), it is necessary and sufficient to orient every chain component in E(G∗). For any intervention set I ⊆ 2V , we write R(G, I) = A(EI(G)) ⊆ E to mean the set of oriented arcs in the I-essential graph of a DAG G. For cleaner notation, we write R(G, I) for single interventions I = {I} for some I ⊆ V , and R(G, v) for single atomic interventions I = {{v}} for some v ∈ V . For any interventional set I ⊆ 2V , define GI = G[E \ R(G, I)] as the fully directed subgraph DAG induced by the unoriented arcs in EI(G), where G∅ is the graph obtained after removing all the oriented arcs in the observational essential graph due to v-structures. See Fig. 1 for an example. In the notation of R(·, ·), the following result justifies studying verification and adaptive search via ideal interventions only on DAGs without v-structures, i.e. moral DAGs (Definition 4): since R(G, I) = R(G∅, I) ˙∪ R(G, ∅), any oriented arcs in the observational graph can be removed before performing any interventions as the optimality of the solution is unaffected.4 2A partially directed graph is a chain graph if it does not contain any partially directed cycles where all directed arcs point in the same direction along the cycle. 3A chordal graph is a graph where every cycle of length at least 4 has an edge that is not part of the cycle but connects two vertices of the cycle; see [BP93] for an introduction. 4The notation A ˙∪ B denotes disjoint union of sets A and B. 3 Theorem 3 ([CS23]). 
For any DAG G = (V, E) and intervention sets A, B ⊆ 2V , R(G, A ∪ B) = R(GA, B) ˙∪ R(GB, A) ˙∪ (R(G, A) ∩ R(G, B)) Definition 4 (Moral DAG). A DAG G is called a moral DAG if it has no v-structures. So, E(G) = skel(G). 2.2 Verifying sets A verifying set I for a DAG G ∈ [G∗] is an intervention set that fully orients G from E(G∗), possibly with repeated applications of Meek rules (see Appendix D), i.e. EI(G∗) = G∗. Furthermore, if I is a verifying set for G∗, then so is I ∪ S for any additional intervention S ⊆ V . While there may be multiple verifying sets in general, we are often interested in finding one with a minimum size. Definition 5 (Minimum size verifying set). An intervention set I ⊆ 2V is called a verifying set for a DAG G∗ if EI(G∗) = G∗. I is a minimum size verifying set if EI′(G∗) ̸= G∗ for any |I ′| < |I|. For bounded size interventions, the minimum verification number νk(G) denotes the size of the minimum size verifying set for any DAG G ∈ [G∗]; we write ν1(G) for atomic interventions. That is, any revealed arc directions when performing interventions on E(G∗) respects G. [CSB22] tells us that it is necessary and sufficient to intervene on a minimum vertex cover of the covered edges C(G) in order to verify a DAG G, and that ν1(G) is efficiently computable given G since C(G) induces a forest. Theorem 6 ([CSB22]). Fix an essential graph E(G∗) and G ∈ [G∗]. An atomic intervention set I is a minimal sized verifying set for G if and only if I is a minimum vertex cover of covered edges C(G) of G. A minimal sized atomic verifying set can be computed in polynomial time since the edge-induced subgraph on C(G) is a forest. For any DAG G, we use V(G) ⊆ 2V to denote the set of all atomic verifying sets for G. That is, each atomic intervention set in V(G) is a minimum vertex cover of C(G). 2.3 Adaptive search using ideal interventions Adaptive search algorithms have been studied in earnest [HG08, HB14, SKDV15, SMG+20, CSB22, CS23] as they can use significantly less interventions than non-adaptive counterparts.5 Most recently, [CSB22] gave an efficient algorithm for computing adaptive interventions with provable approximation guarantees on general graphs. Theorem 7 ([CSB22]). Fix an unknown underlying DAG G∗. Given an essential graph E(G∗) and intervention set bound k ≥ 1, there is a deterministic polynomial time algorithm that computes an intervention set I adaptively such that EI(G∗) = G∗, and |I| has size 1. O(log(n) · ν1(G∗)) when k = 1 2. O(log(n) · log(k) · νk(G∗)) when k > 1. Meanwhile, in the context of local causal graph discovery where one is interested in only learning a subset of causal relationships, the SubsetSearch algorithm of [CS23] incurs a multiplicative overhead that scales logarithmically with the number of relevant nodes when orienting edges within a node-induced subgraph. Definition 8 (Relevant nodes). Fix a DAG G∗ = (V, E) and arbitrary subset V ′ ⊆ V . For any intervention set I ⊆ 2V and resulting interventional essential graph EI(G∗), we define the relevant nodes ρ(I, V ′) ⊆ V ′ as the set of nodes within V ′ that is adjacent to some unoriented arc within the node-induced subgraph EI(G∗)[V ′]. For an example of relevant nodes, see Fig. 1: For the subset V ′ = {A, C, D, E, F } in (II), only {A, C, D} are relevant since incident edges to E and F are all oriented. Theorem 9 ([CS23]). Fix an unknown underlying DAG G∗. 
Given an interventional essential graph EI(G∗), node-induced subgraph H with relevant nodes ρ(I, V (H)) and intervention set bound k ≥ 1, there is a determin- istic polynomial time algorithm that computes an intervention set I adaptively such that EI∪I′(G∗)[V (H)] = G∗[V (H)], and |I ′| has size 1. O(log(|ρ(I, V (H))|) · ν1(G∗)) when k = 1 2. O(log(|ρ(I, V (H))|) · log(k) · νk(G∗)) when k > 1. Note that k = 1 refers to the setting of atomic interventions and we always have 0 ≤ |ρ(I, V (H))| ≤ n. 5If the essential graph E(G∗) is a path of n nodes, then non-adaptive algorithms need Ω(n) atomic interventions to recover G∗ while O(log n) atomic interventions suffices for adaptive search. 4 C A E F B D C (I) B D C (II) A E F B D C (III) A E F B D (IV) A E F Figure 1: (I) Ground truth DAG G∗; (II) Observational essential graph E(G∗) where C → E ← D is a v-structure and Meek rules orient arcs D → F and E → F ; (III) G∅ = G[E \ R(G, ∅)] where oriented arcs in E(G∗) are removed from G∗; (IV) MPDAG ˜G ∈ [G∗] incorporating the following partial order advice (S1 = {B}, S2 = {A, D}, S3 = {C, E, F }), which can be converted to required arcs B → A and B → D. Observe that A → C is oriented by Meek R1 via B → A ∼ C, the arc A ∼ D is still unoriented, the arc B → A disagrees with G∗, and there are two possible DAGs consistent with the resulting MPDAG. 2.4 Expert advice in causal graph discovery There are three main types of information that a domain expert may provide (e.g. see the references given in Section 1): (I) Required parental arcs: X → Y (II) Forbidden parental arcs: X ̸→ Y (III) Partial order or tiered knowledge: A partition of the n variables into 1 ≤ t ≤ n sets S1, . . . , St such that variables in Si cannot come after Sj, for all i < j. In the context of orienting unoriented X ∼ Y edges in an essential graph, it suffices to consider only information of type (I): X ̸→ Y implies Y → X, and a partial order can be converted to a collection of required parental arcs.6 Maximally oriented partially directed acyclic graphs (MPDAGs), a refinement of essential graphs under additional causal information, are often used to model such expert advice and there has been a recent growing interest in understanding them better [PKM17, Per20, GP21]. MPDAGs are obtained by orienting additional arc directions in the essential graph due to background knowledge, and then applying Meek rules. See Fig. 1 for an example. 2.5 Other related work Causal Structure Learning Algorithms for causal structure learning can be grouped into three broad cat- egories, constraint-based, score-based, and Bayesian. Previous works on the first two approaches are described in Appendix C. In Bayesian methods, a prior distribution is assumed on the space of all structures, and the posterior is updated as more data come in. [Hec95] was one of the first works on learning from interventional data in this context, which spurred a series of papers (e.g. [HGC95, CY99, FK00, HMC06]). Research on active experimental design for causal structure learning with Bayesian updates was initiated by [TK00, TK01] and [Mur01]. [CBP16] and [MM13] considered a combination of Bayesian and constraint-based approaches. [ASY+19] have used active learning and Bayesian updates to help recover biological networks. 
While possibly imperfect expert advice may be used to guide the prior in the Bayesian approach, the works mentioned above do not provide rigorous guarantees about the number of interventions performed or about optimality, and so they are not directly comparable to our results here. 6For every edge X ∼ Y with X ∈ Si and Y ∈ Sj , enforce the required parental arc X → Y if and only if i < j. 5 Algorithms with predictions Learning-augmented algorithms have received significant attention since the seminal work of [LV21], where they investigated the online caching problem with predictions. Based on that model, [PSK18] proposed algorithms for the ski-rental problem as well as non-clairvoyant scheduling. Subsequently, [GP19], [WLW20], and [ADJ+20] improved the initial results for the ski-rental problem. Sev- eral works, including [Roh20, ACE+20, Wei20], improved the initial results regarding the caching problem. Scheduling problems with machine-learned advice have been extensively studied in the literature [LLMV20, BMRS20, AJS22]. There are also results for augmenting classical data structures with predictions (e.g. in- dexing [KBC+18] and Bloom filters [Mit18]), online selection and matching problems [AGKK20, DLPLV21], online TSP [BLMS+22, GLS23], and a more general framework of online primal-dual algorithms [BMS20]. In the above line of work, the extent to which the predictions are helpful in the design of the corresponding online algorithms, is quantified by the following two properties. The algorithm is called (i) α-consistent if it is α-competitive with no prediction error and (ii) β-robust if it is β-competitive with any prediction error. In the language of learning augmented algorithms or algorithms with predictions, our causal graph discovery algorithm is 1-consistent and O(log n)-robust when competing against the verification number ν1(G∗), the minimum number of interventions necessary needed to recover G∗. Note that even with arbitrarily bad advice, our algorithm uses asymptotically the same number of interventions incurred by the best-known advice-free adaptive search algorithm [CSB22]. 3 Results Our exposition here focuses on interpreting and contextualizing our main results while deferring technicalities to Section 4. We first focus on the setting where the advice is a fully oriented DAG (cid:101)G ∈ [G∗] within the Markov equivalence class [G∗] of the true underlying causal graph G∗, and explain in Appendix E how to handle the case of partial advice. Full proofs are provided in the appendix. 3.1 Structural property of verification numbers We begin by stating a structural result about verification numbers of DAGs within the same Markov equivalence class (MEC) that motivates the definition of a metric between DAGs in the same MEC our algorithmic guarantees (Theorem 14) are based upon. Theorem 10. For any DAG G∗ with MEC [G∗], we have that maxG∈[G∗] ν1(G) ≤ 2 · minG∈[G∗] ν1(G). Theorem 10 is the first known result relating the minimum and maximum verification numbers of DAGs given a fixed MEC. The next result tells us that the ratio of two is tight. Lemma 11 (Tightness of Theorem 10). There exist DAGs G1 and G2 from the same MEC with ν1(G1) = 2 · ν1(G2). Theorem 10 tells us that we can blindly intervene on any minimum verifying set (cid:101)V ∈ V( (cid:101)G) of any given advice DAG (cid:101)G while incurring only at most a constant factor of 2 more interventions than the minimum verification number ν(G∗) of the unknown ground truth DAG G∗. 
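To illustrate how one might act on this, the sketch below computes a minimum atomic verifying set of a given (advice) DAG by taking a minimum vertex cover of its covered edges, which Theorem 6 states is necessary and sufficient and which induce a forest. The greedy leaf rule used here is a standard way to obtain a minimum vertex cover on forests; the function reuses covered_edges from the earlier sketch, and the names are ours rather than the authors' code.

import networkx as nx

def minimum_verifying_set(dag: nx.DiGraph) -> set:
    """A minimum vertex cover of C(G), hence a minimum atomic verifying set
    (Theorem 6). The covered edges induce a forest, so the greedy leaf rule
    (always take the non-leaf endpoint of a leaf edge) is optimal."""
    forest = nx.Graph()
    forest.add_edges_from(covered_edges(dag))  # covered_edges: earlier sketch
    cover = set()
    while forest.number_of_edges() > 0:
        leaf = next(v for v in forest.nodes if forest.degree(v) == 1)
        neighbor = next(iter(forest.neighbors(leaf)))
        cover.add(neighbor)
        forest.remove_edges_from(list(forest.edges(neighbor)))
    return cover

Intervening on minimum_verifying_set(advice_dag) is exactly the "blind" first step discussed above: by Theorem 10, its size is at most twice ν1(G∗) even when the advice DAG is wrong.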
3.2 Adaptive search with imperfect DAG advice

Recall the definition of r-hop from Section 2. To define the quality of the advice DAG ˜G, we first define the notion of min-hop-coverage which measures how “far” a given verifying set of ˜G is from the set of covered edges of G∗.

Definition 12 (Min-hop-coverage). Fix a DAG G∗ with MEC [G∗] and consider any DAG ˜G ∈ [G∗]. For any minimum verifying set ˜V ∈ V( ˜G), we define the min-hop-coverage h(G∗, ˜V) ∈ {0, 1, 2, . . . , n} as the minimum number of hops such that both endpoints of the covered edges C(G∗) of G∗ belong in N^{h(G∗, ˜V)}_{skel(E(G∗))}( ˜V).

Using min-hop-coverage, we now define a quality measure ψ(G∗, ˜G) for DAG ˜G ∈ [G∗] as an advice for DAG G∗.

Figure 2: Consider the moral DAGs G∗ and ˜G ∈ [G∗] on n + 5 nodes, where dashed arcs represent the covered edges in each DAG. A minimum sized verifying set ˜V = {a, e, z_2} ∈ V( ˜G) of ˜G is given by the boxed vertices on the right. As N^1_{skel(G∗)}( ˜V) = {a, b, c, d, e, z_1, z_2, z_3} includes both endpoints of all covered edges of G∗, we see that h(G∗, ˜V) = 1. Intervening on ˜V = {a, e, z_2} in G∗ orients the arcs b → a ← c, c ← e → d, and z_3 → z_2 → z_1 respectively, which then triggers Meek R1 to orient c → b via e → c ∼ b and to orient z_4 → z_3 via e → c → . . . → z_4 ∼ z_3 (after a few invocations of R1), so {a, b, e, z_1, z_2, z_3} will not be relevant nodes in E_{˜V}(G∗). Meanwhile, the edge c ∼ d remains unoriented in E_{˜V}(G∗), so ρ( ˜V, N^1_{skel(G∗)}( ˜V)) = |{c, d}| = 2. One can check that ψ(G∗, ˜G) = 2 while n could be arbitrarily large. On the other hand, observe that ψ is not symmetric: in the hypothetical situation where we use G∗ as an advice for ˜G, the min-hop-coverage has to extend along the chain z_1 ∼ . . . ∼ z_n to reach {z_1, z_2}, so h( ˜G, V∗) ≈ n and ψ( ˜G, G∗) ≈ n since the entire chain remains unoriented with respect to any V∗ ∈ V(G∗).

Definition 13 (Quality measure). Fix a DAG G∗ with MEC [G∗] and consider any DAG ˜G ∈ [G∗]. We define ψ(G∗, ˜G) as follows:

ψ(G∗, ˜G) = max_{ ˜V ∈ V( ˜G)} | ρ( ˜V, N^{h(G∗, ˜V)}_{skel(E(G∗))}( ˜V) ) |

By definition, ψ(G∗, G∗) = 0 and max_{G ∈ [G∗]} ψ(G∗, G) ≤ n. In words, ψ(G∗, ˜G) only counts the relevant nodes within the min-hop-coverage neighborhood after intervening on the worst possible verifying set ˜V of ˜G. We define ψ via the worst set because any search algorithm cannot evaluate h(G∗, ˜V), since G∗ is unknown, and can only consider an arbitrary ˜V ∈ V( ˜G). See Fig. 2 for an example.

Our main result is that it is possible to design an algorithm that leverages an advice DAG ˜G ∈ [G∗] and performs interventions to fully recover an unknown underlying DAG G∗, whose performance depends on the advice quality ψ(G∗, ˜G). Our search algorithm only knows E(G∗) and ˜G ∈ [G∗] but knows neither ψ(G∗, ˜G) nor ν(G∗).

Theorem 14. Fix an essential graph E(G∗) with an unknown underlying ground truth DAG G∗.
Given an advice graph (cid:101)G ∈ [G∗] and intervention set bound k ≥ 1, there exists a deterministic polynomial time algorithm (Algorithm 1) that computes an intervention set I adaptively such that EI(G∗) = G∗, and |I| has size 1. O(max{1, log ψ(G∗, (cid:101)G)} · ν1(G∗)) when k = 1 2. O(max{1, log ψ(G∗, (cid:101)G)} · log k · νk(G∗)) when k > 1. Consider first the setting of k = 1. Observe that when the advice is perfect (i.e. (cid:101)G = G∗), we use O(ν(G∗)) interventions, i.e. a constant multiplicative factor of the minimum number of interventions necessary. Meanwhile, even with low quality advice, we still use O(log n · ν(G∗)) interventions, asymptotically matching the best known guarantees for adaptive search without advice. To the best of our knowledge, Theorem 14 is the first known result that principally employs imperfect expert advice with provable guarantees in the context of causal graph discovery via interventions. Consider now the setting of bounded size interventions where k > 1. The reason why we can obtain such a result is precisely because of our algorithmic design: we deliberately designed an algorithm that invokes SubsetSearch as a black-box subroutine. Thus, the bounded size guarantees of SubsetSearch given by Theorem 9 carries over to our setting with a slight modification of the analysis. 7 4 Techniques Here, we discuss the high-level technical ideas and intuition behind how we obtain our adaptive search algorithm with imperfect DAG advice. See the appendix for full proofs; in particular, see Appendix F for an overview of Theorem 10. For brevity, we write ψ to mean ψ(G∗, (cid:101)G) and drop the subscript skel(E(G∗)) of r-hop neighborhoods in this section. We also focus our discussion to the atomic interventions. Our adaptive search algorithm (Algorithm 1) uses SubsetSearch as a subroutine. We begin by observing that SubsetSearch(E(G∗), A) fully orients E(G∗) into G∗ if the covered edges of G∗ lie within the node-induced subgraph induced by A. Lemma 15. Fix a DAG G∗ = (V, E) and let V ′ ⊆ V be any subset of vertices. Suppose IV ′ ⊆ V is the set of nodes intervened by SubsetSearch(E(G∗), V ′). If C(G∗) ⊆ E(G∗[V ′]), then EIV ′ (G∗) = G∗. Motivated by Lemma 15, we design Algorithm 1 to repeatedly invoke SubsetSearch on node-induced subgraphs N r( (cid:101)V ), starting from an arbitrary verifying set (cid:101)V ∈ V( (cid:101)G) and for increasing values of r. For i ∈ N ∪ {0}, let us denote r(i) ∈ N ∪ {0} as the value of r in the i-th invocation of SubsetSearch, where we insist that r(0) = 0 and r(j) > r(j − 1) for any j ∈ N. Note that r = 0 simply implies that we intervene on the verifying set (cid:101)V , which only incurs O(ν1(G∗)) interventions due to Theorem 10. Then, we can appeal to Lemma 15 to conclude that E(G∗) is completely oriented into G∗ in the t-th invocation if r(t) ≥ h(G∗, (cid:101)V ). While the high-level subroutine invocation idea seems simple, one needs to invoke SubsetSearch at suitably chosen intervals in order to achieve our theoretical guarantees we promise in Theorem 14. We now explain how to do so in three successive attempts while explaining the algorithmic decisions behind each modification introduced. As a reminder, we do not know G∗ and thus do not know h(G∗, (cid:101)V ) for any verifying set (cid:101)V ∈ V( (cid:101)G) of (cid:101)G ∈ [G∗]. Naive attempt: Invoke for r = 0, 1, 2, 3, . . . 
The most straightforward attempt would be to invoke SubsetSearch repeatedly each time we increase r by 1 until the graph is fully oriented – in the worst case, t = h(G∗, ˜V). However, this may cause us to incur way too many interventions. Suppose there are n_i relevant nodes in the i-th invocation. Using Theorem 9, one can only argue that the overall number of interventions incurred is O(Σ_{i=0}^{t} log n_i · ν(G∗)). However, Σ_i log n_i could be significantly larger than log(Σ_i n_i) in general, e.g. log 2 + . . . + log 2 = (n/2) · log 2 ≫ log n. In fact, if G∗ was a path on n vertices v_1 → v_2 → . . . → v_n and ˜G ∈ [G∗] misleads us with v_1 ← v_2 ← . . . ← v_n, then this approach incurs Ω(n) interventions in total.

Tweak 1: Only invoke periodically. Since Theorem 9 provides us a logarithmic factor in the analysis, we could instead consider only invoking SubsetSearch after the number of nodes in the subgraph increases by a polynomial factor. For example, if we invoked SubsetSearch with n_i previously, then we will wait until the number of relevant nodes surpasses n_i^2 before invoking SubsetSearch again, where we define n_0 ≥ 2 for simplicity. Since log n_i ≥ 2 log n_{i−1}, we can see via an inductive argument that the number of interventions used in the final invocation will dominate the total number of interventions used so far: log n_t ≥ 2 log n_{t−1} ≥ log n_{t−1} + 2 log n_{t−2} ≥ . . . ≥ Σ_{i=0}^{t−1} log n_i. Since n_i ≤ n for any i, we can already prove that O(log n · ν1(G∗)) interventions suffice, matching the advice-free bound of Theorem 7. However, this approach and analysis does not take into account the quality of ˜G and is insufficient to relate n_t with the advice measure ψ.

Tweak 2: Also invoke one round before. Suppose the final invocation of SubsetSearch is on the r(t)-hop neighborhood while incurring O(log n_t · ν1(G∗)) interventions. This means that C(G∗) lies within N^{r(t)}( ˜V) but not within N^{r(t−1)}( ˜V). That is, N^{r(t−1)}( ˜V) ⊊ N^{h(G∗, ˜V)}( ˜V) ⊆ N^{r(t)}( ˜V). While this tells us that n_{t−1} ≤ |ρ( ˜V, N^{r(t−1)}( ˜V))| < |ρ( ˜V, N^{h(G∗, ˜V)}( ˜V))| = ψ, what we want is to conclude that n_t ∈ O(ψ). Unfortunately, even when ψ = r(t − 1) + 1, it could be the case that |ρ( ˜V, N^{h(G∗, ˜V)}( ˜V))| ≪ |N^{r(t)}( ˜V)| as the number of relevant nodes could blow up within a single hop (see Fig. 3). To control this potential blow up in the analysis, we can introduce the following technical fix: whenever we want to invoke SubsetSearch on r(i), first invoke SubsetSearch on r(i) − 1 and terminate earlier if the graph is already fully oriented into G∗.

Figure 3: Consider the ground truth DAG G∗ with unique minimum verifying set {v_2} and an advice DAG ˜G ∈ [G∗] with chosen minimum verifying set ˜V = {v_1}. So, h(G∗, ˜V) = 1 and ideally we want to argue that our algorithm uses a constant number of interventions. Without tweak 2 and n_0 = 2, an algorithm that increases the hop radius until the number of relevant nodes is squared will not invoke SubsetSearch until r = 3 because ρ( ˜V, N^1) = 1 < n_0^2 and ρ( ˜V, N^2) = 2 < n_0^2. However, ρ( ˜V, N^3) = n − 1 and we can only conclude that the algorithm uses O(log n) interventions by invoking SubsetSearch on a subgraph on n − 1 nodes.
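Before the formal pseudocode in Algorithm 1, the following Python-style sketch shows how the pieces above (start from a verifying set of the advice, the naive hop schedule, and the two tweaks) fit together in the atomic case k = 1. Here subset_search, relevant_count, and fully_oriented are assumed black boxes standing in for SubsetSearch, ρ(·, ·), and the check E_I(G∗) = G∗, while minimum_verifying_set and r_hop_neighborhood come from the earlier sketches; this illustrates the control flow only and is not the authors' implementation.

def advice_search_atomic(essential_graph, advice_dag):
    """Adaptive search with DAG advice, atomic interventions (k = 1)."""
    # Step 1: blindly trust the advice and intervene on a minimum verifying
    # set of the advice DAG; Theorem 10 bounds the cost of doing so.
    v_advice = minimum_verifying_set(advice_dag)
    interventions = {frozenset({v}) for v in v_advice}

    r, n_prev = 0, 2  # n_0 = 2
    while not fully_oriented(essential_graph, interventions):
        hood = r_hop_neighborhood(essential_graph, v_advice, r)
        # Tweak 1: only call SubsetSearch once the number of relevant
        # nodes in the current hop neighborhood has at least squared.
        if relevant_count(essential_graph, interventions, hood) >= n_prev ** 2:
            n_prev = relevant_count(essential_graph, interventions, hood)
            # Tweak 2: first search one hop earlier; stop if that suffices.
            inner = r_hop_neighborhood(essential_graph, v_advice, max(r - 1, 0))
            # subset_search returns the interventions it performed.
            interventions |= subset_search(essential_graph, interventions, inner)
            if not fully_oriented(essential_graph, interventions):
                interventions |= subset_search(essential_graph, interventions, hood)
        r += 1
    return interventions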
Putting together. Algorithm 1 presents our full algorithm, where the inequality ρ(I_i, N^r_{skel(E(G∗))}( ˜V)) ≥ n_i^2 corresponds to the first tweak while the terms C_i and C′_i correspond to the second tweak.

In Appendix H, we explain why our algorithm (Algorithm 1) is simply the classic “binary search with prediction”7 when the given essential graph E(G∗) is an undirected path. So, another way to view our result is a generalization that works on essential graphs of arbitrary moral DAGs.

7 e.g. see https://en.wikipedia.org/wiki/Learning_augmented_algorithm#Binary_search

For bounded size interventions, we rely on the following known results.

Theorem 16 (Theorem 12 of [CSB22]). Fix an essential graph E(G∗) and G ∈ [G∗]. If ν1(G) = ℓ, then νk(G) ≥ ⌈ℓ/k⌉ and there exists a polynomial time algorithm to compute a bounded size intervention set I of size |I| ≤ ⌈ℓ/k⌉ + 1.

Lemma 17 (Lemma 1 of [SKDV15]). Let (n, k, a) be parameters where k ≤ n/2. There exists a polynomial time labelling scheme that produces distinct ℓ-length labels for all elements in [n] using letters from the integer alphabet {0} ∪ [a], where ℓ = ⌈log_a n⌉. Further, in every digit (or position), any integer letter is used at most ⌈n/a⌉ times. This labelling scheme is a separating system: for any i, j ∈ [n], there exists some digit d ∈ [ℓ] where the labels of i and j differ.

Theorem 16 enables us to easily relate ν1(G) with νk(G) while Lemma 17 provides an efficient labelling scheme to partition a set of n nodes into a set S = {S_1, S_2, . . .} of bounded size sets, each S_i involving at most k nodes. By invoking Lemma 17 with a ≈ n′/k where n′ is related to ν1(G), we see that |S| ≈ (n′/k) · log k. As νk(G) ≈ ν1(G)/k, this is precisely why the bounded intervention guarantees in Theorem 7, Theorem 9 and Theorem 14 have an additional multiplicative log k factor.

Algorithm 1 Adaptive search algorithm with advice.
Input: Essential graph E(G∗), advice DAG ˜G ∈ [G∗], intervention size k ∈ N.
Output: An intervention set I such that each intervention involves at most k nodes and E_I(G∗) = G∗.
1: Let ˜V ∈ V( ˜G) be any atomic verifying set of ˜G.
2: if k = 1 then
3:   Define I_0 = ˜V as an atomic intervention set.
4: else
5:   Define k′ = min{k, | ˜V|/2}, a = ⌈| ˜V|/k′⌉ ≥ 2, and ℓ = ⌈log_a |C|⌉. Compute the labelling scheme on ˜V with (| ˜V|, k, a) via Lemma 17 and define I_0 = {S_{x,y}}_{x∈[ℓ], y∈[a]}, where S_{x,y} ⊆ ˜V is the subset of vertices whose x-th letter in the label is y.
6: end if
7: Intervene on I_0 and initialize r ← 0, i ← 0, n_0 ← 2.
8: while E_{I_i}(G∗) still has undirected edges do
9:   if ρ(I_i, N^r_{skel(E(G∗))}( ˜V)) ≥ n_i^2 then
10:    Increment i ← i + 1 and record r(i) ← r.
11:    Update n_i ← ρ(I_{i−1}, N^r_{skel(E(G∗))}( ˜V)).
12:    C_i ← SubsetSearch(E_{I_{i−1}}(G∗), N^{r−1}_{skel(E(G∗))}( ˜V), k).
13:    if E_{I_{i−1} ∪ C_i}(G∗) still has undirected edges then
14:      C′_i ← SubsetSearch(E_{I_{i−1} ∪ C_i}(G∗), N^r_{skel(E(G∗))}( ˜V), k).
15:      Update I_i ← I_{i−1} ∪ C_i ∪ C′_i.
16:    else
17:      Update I_i ← I_{i−1} ∪ C_i.
18:    end if
19:  end if
20:  Increment r ← r + 1.
21: end while
22: return I_i

5 Empirical validation

While our main contributions are theoretical, we also performed some experiments to empirically validate that our algorithm is practical, outperforms the advice-free baseline when the advice quality is good, and is at most a constant factor worse when the advice is poor.

Motivated by Theorem 3, we experimented on synthetic moral DAGs from [WBL21b]: For each undirected chordal graph, we use the uniform sampling algorithm of [WBL21b] to uniformly sample 1000 moral DAGs ˜G_1, . . . , ˜G_1000 and randomly choose one of them as G∗. Then, we give {(E(G∗), ˜G_i)}_{i∈[1000]} as input to Algorithm 1.

Fig. 4 shows one of the experimental plots; more detailed experimental setup and results are given in Appendix I. On the X-axis, we plot ψ(G∗, ˜V) = |ρ( ˜V, N^{h(G∗, ˜V)}_{skel(E(G∗))}( ˜V))|, which is a lower bound and proxy8 for ψ(G∗, ˜G). On the Y-axis, we aggregate advice DAGs based on their quality measure and also show (in dashed lines) the empirical distribution of quality measures of all DAGs within the Markov equivalence class. As expected from our theoretical analyses, we see that the number of interventions used by our advice search starts from ν1(G∗), is lower than that of the advice-free search of [CSB22] when ψ(G∗, ˜V) is low, and gradually increases as the advice quality degrades. Nonetheless, the number of interventions used always stays below the theoretical bound of O(ψ(G∗, ˜V) · ν1(G∗)); we do not plot ψ(G∗, ˜V) · ν1(G∗) since plotting it yields a “squashed” graph as the empirical counts are significantly smaller. In this specific graph instance, Fig. 4 suggests that our advice search outperforms its advice-free counterpart when given an advice DAG ˜G that is better than ∼ 40% of all possible DAGs consistent with the observational essential graph E(G∗).

8 We do not know if there is an efficient way to compute ψ(G∗, ˜G) besides the naive (possibly exponential time) enumeration over all possible minimum verifying sets.

Figure 4: Experimental plot for one of the synthetic graphs G∗, with respect to 1000 ≪ |[G∗]| ≈ 1.4 × 10^6 uniformly sampled advice DAGs ˜G from the MEC [G∗]. The solid lines indicate the number of atomic interventions used while the dotted lines indicate the empirical cumulative probability density of ˜G. The true cumulative probability density lies within the shaded area with probability at least 0.99 (see Appendix I for details).

6 Conclusion and discussion

In this work, we gave the first result that utilizes imperfect advice in the context of causal discovery. We do so in a way that the performance (i.e. the number of interventions in our case) does not degrade significantly even when the advice is inaccurate, which is consistent with the objectives of learning-augmented algorithms. Specifically, we show a smooth bound that matches the number of interventions needed for verification of the causal relationships in a graph when the advice is completely accurate and also depends logarithmically on the distance of the advice to the ground truth. This ensures robustness to “bad” advice: the number of interventions needed is asymptotically the same as in the case where no advice is available.

Our results do rely on the widely-used assumptions of sufficiency and faithfulness as well as access to ideal interventions; see Appendix A for a more detailed discussion. Since wrong causal conclusions may be drawn when these assumptions are violated by the data, it is of great interest to remove/weaken these assumptions while maintaining strong theoretical guarantees in future work.
6.1 Interesting future directions to explore Partial advice In Appendix E, we explain why having a DAG (cid:101)G as advice may not always be possible and explain how to extend our results to the setting of partial advice by considering the worst case DAG consistent with the given partial advice A. The question is whether one can design and analyze a better algorithm than a trivial max (cid:101)G∈A. For example, maybe one could pick (cid:101)G = argminG∈A maxH∈[G∗] ψ(H, G)? The motivation is as follows: If [G∗] is a disc in R2 and ψ is the Euclidean distance, then (cid:101)G should be the point within A that is closest to the center of the disc. Note that we can only optimize with respect to maxH∈[G∗] because we do not actually know G∗. It remains to be seen if such an object can be efficiently computed and whether it gives a better bound than max (cid:101)G∈A. Incorporating expert confidence The notion of “confidence level” and “correctness” of an advice are orthogonal issues – an expert can be confidently wrong. In this work, we focused on the case where the expert is fully confident but may be providing imperfect advice. It is an interesting problem to investigate how to principally handle both issues simultaneously; for example, what if the advice is not a DAG (cid:101)G ∈ [G∗] in the essential graph but a distribution over all DAGs in [G∗]? Bayesian ideas may apply here. Better analysis? Empirically, we see that the log factor is a rather loose upper bound both for blind search and advice search. Can there be a tighter analysis? [CSB22] tells us that Ω(log n · ν1(G∗)) is unavoidable when E(G∗) is a path on n vertices with ν1(G∗) = 1 but this is a special class of graphs. What if ν1(G∗) > 1? Can we give tighter bounds in other graph parameters? Furthermore, in some preliminary testing, we observed that implementing tweak 2 or ignoring it yield similar empirical performance and we wonder if there is a tighter analysis without tweak 2 that has similar guarantees. 11 Acknowledgements This research/project is supported by the National Research Foundation, Singapore under its AI Singapore Programme (AISG Award No: AISG-PhD/2021-08-013). TG and AB are supported by the National Research Foundation Fellowship for AI (Award NRF-NRFFAI-0002), an Amazon Research Award, and a Google South & Southeast Asia Research Award. Part of this work was done while the authors were visiting the Simons Institute for the Theory of Computing. We would like to thank Kirankumar Shiragur and Joy Qiping Yang for valuable feedback and discussions. References [ACE+20] Antonios Antoniadis, Christian Coester, Marek Elias, Adam Polak, and Bertrand Simon. Online Metric Algorithms with Untrusted Predictions. In International Conference on Machine Learning, pages 345–355. PMLR, 2020. [ADJ+20] Spyros Angelopoulos, Christoph D¨urr, Shendan Jin, Shahin Kamali, and Marc Renault. On- line Computation with Untrusted Advice. In 11th Innovations in Theoretical Computer Science Conference (ITCS 2020). Schloss Dagstuhl-Leibniz-Zentrum f¨ur Informatik, 2020. [AGKK20] Antonios Antoniadis, Themis Gouleakis, Pieter Kleer, and Pavel Kolev. Secretary and Online Matching Problems with Machine Learned Advice. Advances in Neural Information Processing Systems, 33:7933–7944, 2020. [AJS22] Antonios Antoniadis, Peyman Jabbarzade, and Golnoosh Shahkarami. A Novel Prediction Setup for Online Speed-Scaling. In 18th Scandinavian Symposium and Workshops on Algorithm Theory (SWAT 2022). Schloss Dagstuhl-Leibniz-Zentrum f¨ur Informatik, 2022. 
[AMP97] Steen A. Andersson, David Madigan, and Michael D. Perlman. A characterization of Markov equivalence classes for acyclic digraphs. The Annals of Statistics, 25(2):505–541, 1997. [And13] [ASC20] Holly Andersen. When to expect violations of causal faithfulness and why it matters. Philosophy of Science, 80(5):672–683, 2013. Bryan Andrews, Peter Spirtes, and Gregory F Cooper. On the Completeness of Causal Discovery in the Presence of Latent Confounding with Tiered Background Knowledge. In International Conference on Artificial Intelligence and Statistics, pages 4002–4011. PMLR, 2020. [ASY+19] Raj Agrawal, Chandler Squires, Karren Yang, Karthikeyan Shanmugam, and Caroline Uhler. In ABCD-Strategy: Budgeted Experimental Design for Targeted Causal Structure Discovery. International Conference on Artificial Intelligence and Statistics, pages 3400–3409. PMLR, 2019. [BLMS+22] Giulia Bernardini, Alexander Lindermayr, Alberto Marchetti-Spaccamela, Nicole Megow, Leen Stougie, and Michelle Sweering. A Universal Error Measure for Input Predictions Applied to Online Graph Problems. In Advances in Neural Information Processing Systems, 2022. [BMRS20] ´Etienne Bamas, Andreas Maggiori, Lars Rohwedder, and Ola Svensson. Learning Augmented Energy Minimization via Speed Scaling. Advances in Neural Information Processing Systems, 33:15350–15359, 2020. [BMS20] [BP93] [Can20] Etienne Bamas, Andreas Maggiori, and Ola Svensson. The Primal-Dual method for Learning Augmented Algorithms. Advances in Neural Information Processing Systems, 33:20083–20094, 2020. Jean R. S. Blair and Barry W. Peyton. An introduction to chordal graphs and clique trees. In Graph theory and sparse matrix computation, pages 1–29. Springer, 1993. Cl´ement L Canonne. arXiv:2002.11457, 2020. A short note on learning discrete distributions. arXiv preprint [CBP16] Hyunghoon Cho, Bonnie Berger, and Jian Peng. Reconstructing Causal Biological Networks through Active Learning. PLoS ONE, 11(3):e0150611, 2016. 12 [Chi95] [Chi02] [CLO07] David Maxwell Chickering. A Transformational Characterization of Equivalent Bayesian Network Structures. In Proceedings of the Eleventh Conference on Uncertainty in Artificial Intelligence, UAI’95, page 87–98, San Francisco, CA, USA, 1995. Morgan Kaufmann Publishers Inc. David Maxwell Chickering. Optimal Structure Identification with Greedy Search. Journal of Machine Learning Research, 3:507–554, 2002. James Clause, Wanchun Li, and Alessandro Orso. Dytan: A Generic Dynamic Taint Analysis Framework. In Proceedings of the 2007 international symposium on Software testing and analysis, pages 196–206, 2007. [CMKR12] Diego Colombo, Marloes H. Maathuis, Markus Kalisch, and Thomas S. Richardson. Learning high- dimensional directed acyclic graphs with latent and selection variables. The Annals of Statistics, pages 294–321, 2012. [CS23] [CSB22] [CY99] Davin Choo and Kirankumar Shiragur. Subset verification and search algorithms for causal DAGs. In International Conference on Artificial Intelligence and Statistics, 2023. Davin Choo, Kirankumar Shiragur, and Arnab Bhattacharyya. Verification and search algorithms for causal DAGs. Advances in Neural Information Processing Systems, 35, 2022. Gregory F. Cooper and Changwon Yoo. Causal Discovery from a Mixture of Experimental and Observational Data. In Proceedings of the Fifteenth conference on Uncertainty in artificial intel- ligence, pages 116–125, 1999. [DCJ11] Cassio P De Campos and Qiang Ji. Efficient Structure Learning of Bayesian Networks using Constraints. 
The Journal of Machine Learning Research, 12:663–689, 2011. [DLPLV21] Paul D¨utting, Silvio Lattanzi, Renato Paes Leme, and Sergei Vassilvitskii. Secretaries with Advice. In Proceedings of the 22nd ACM Conference on Economics and Computation, pages 409–429, 2021. [Ebe07] [Ebe10] [EGS05] Frederick Eberhardt. Causation and Intervention. Unpublished doctoral dissertation, Carnegie Mellon University, page 93, 2007. Frederick Eberhardt. Causal Discovery as a Game. pages 87–96. PMLR, 2010. In Causality: Objectives and Assessment, Frederick Eberhardt, Clark Glymour, and Richard Scheines. On the number of experiments suf- ficient and in the worst case necessary to identify all causal relations among N variables. In Proceedings of the Twenty-First Conference on Uncertainty in Artificial Intelligence, pages 178– 184, 2005. [EGS06] Frederick Eberhardt, Clark Glymour, and Richard Scheines. N-1 Experiments Suffice to Determine the Causal Relations Among N Variables. In Innovations in machine learning, pages 97–112. Springer, 2006. [ES07] [FH20] [FK00] Frederick Eberhardt and Richard Scheines. science, 74(5):981–995, 2007. Interventions and Causal Inference. Philosophy of Zhuangyan Fang and Yangbo He. IDA with Background Knowledge. In Conference on Uncertainty in Artificial Intelligence, pages 270–279. PMLR, 2020. Nir Friedman and Daphne Koller. Being Bayesian about Network Structure. In Proceedings of the 16th Conference on Uncertainty in Artificial Intelligence, pages 201–210, 2000. [FNB+11] M Julia Flores, Ann E Nicholson, Andrew Brunskill, Kevin B Korb, and Steven Mascaro. In- corporating expert knowledge when learning Bayesian network structure: A medical case study. Artificial intelligence in medicine, 53(3):181–204, 2011. [GKS+19] Kristjan Greenewald, Dmitriy Katz, Karthikeyan Shanmugam, Sara Magliacane, Murat Kocaoglu, Enric Boix-Adser`a, and Guy Bresler. Sample Efficient Active Learning of Causal Trees. Advances in Neural Information Processing Systems, 32, 2019. 13 [GLS23] [GP19] [GP21] Themis Gouleakis, Konstantinos Lakis, and Golnoosh Shahkarami. Learning-Augmented Algo- rithms for Online TSP on the Line. In 37th AAAI Conference on Artificial Intelligence. AAAI, 2023. Sreenivas Gollapudi and Debmalya Panigrahi. Online Algorithms for Rent-or-Buy with Expert Advice. In International Conference on Machine Learning, pages 2319–2327. PMLR, 2019. Richard Guo and Emilija Perkovic. Minimal Enumeration of All Possible Total Effects in a Markov Equivalence Class. In International Conference on Artificial Intelligence and Statistics, pages 2395–2403. PMLR, 2021. [GRE84] John R Gilbert, Donald J Rose, and Anders Edenbrandt. A separator theorem for chordal graphs. SIAM Journal on Algebraic Discrete Methods, 5(3):306–313, 1984. [HB12] [HB14] [Hec95] Alain Hauser and Peter B¨uhlmann. Characterization and Greedy Learning of Interventional Markov Equivalence Classes of Directed Acyclic Graphs. The Journal of Machine Learning Re- search, 13(1):2409–2464, 2012. Alain Hauser and Peter B¨uhlmann. Two Optimal Strategies for Active Learning of Causal Models from Interventions. International Journal of Approximate Reasoning, 55(4):926–939, 2014. David Heckerman. A Bayesian Approach to Learning Causal Networks. Eleventh conference on Uncertainty in artificial intelligence, pages 285–295, 1995. In Proceedings of the [HEH13] Antti Hyttinen, Frederick Eberhardt, and Patrik O. Hoyer. Experiment Selection for Causal Discovery. Journal of Machine Learning Research, 14:3041–3071, 2013. [HG08] Yang-Bo He and Zhi Geng. 
Active Learning of Causal Networks with Intervention Experiments and Optimal Designs. Journal of Machine Learning Research, 9:2523–2547, 2008. [HGC95] David Heckerman, Dan Geiger, and David M. Chickering. Learning Bayesian Networks: The Combination of Knowledge and Statistical Data . Machine learning, 20:197–243, 1995. [HLV14] Huining Hu, Zhentao Li, and Adrian Vetta. Randomized Experimental Design for Causal Graph Discovery. Advances in Neural Information Processing Systems, 27, 2014. [HMC06] David Heckerman, Christopher Meek, and Gregory Cooper. A Bayesian Approach to Causal Discovery. Innovations in Machine Learning, pages 1–28, 2006. [Hoo90] Kevin D Hoover. The logic of causal inference: Econometrics and the Conditional Analysis of Causation. Economics & Philosophy, 6(2):207–234, 1990. [ICM+22] Muhammad Azam Ikram, Sarthak Chakraborty, Subrata Mitra, Shiv Saini, Saurabh Bagchi, and Murat Kocaoglu. Root Cause Analysis of Failures in Microservices through Causal Discovery. In Advances in Neural Information Processing Systems, 2022. [JBT+19] Saurabh Jha, Subho Banerjee, Timothy Tsai, Siva KS Hari, Michael B Sullivan, Zbigniew T Kalbarczyk, Stephen W Keckler, and Ravishankar K Iyer. ML-based Fault Injection for Au- tonomous Vehicles: A Case for Bayesian Fault Injection. In 2019 49th annual IEEE/IFIP inter- national conference on dependable systems and networks (DSN), pages 112–124. IEEE, 2019. [KBC+18] Tim Kraska, Alex Beutel, Ed H. Chi, Jeffrey Dean, and Neoklis Polyzotis. The Case for Learned Index Structures. In Proceedings of the 2018 international conference on management of data, pages 489–504, 2018. [KDV17] Murat Kocaoglu, Alex Dimakis, and Sriram Vishwanath. Cost-Optimal Learning of Causal Graphs. In International Conference on Machine Learning, pages 1875–1884. PMLR, 2017. [KWJ+04] Ross D. King, Kenneth E. Whelan, Ffion M. Jones, Philip G. K. Reiser, Christopher H. Bryant, Stephen H. Muggleton, Douglas B. Kell, and Stephen G. Oliver. Functional genomic hypothesis generation and experimentation by a robot scientist. Nature, 427(6971):247–252, 2004. 14 [LB18] Andrew Li and Peter Beek. Bayesian Network Structure Learning with Side Constraints. International conference on probabilistic graphical models, pages 225–236. PMLR, 2018. In [LKDV18] Erik M. Lindgren, Murat Kocaoglu, Alexandros G. Dimakis, and Sriram Vishwanath. Experimen- tal Design for Cost-Aware Learning of Causal Graphs. Advances in Neural Information Processing Systems, 31, 2018. [LLMV20] Silvio Lattanzi, Thomas Lavastida, Benjamin Moseley, and Sergei Vassilvitskii. Online Scheduling via Learned Weights. In Proceedings of the Fourteenth Annual ACM-SIAM Symposium on Discrete Algorithms, pages 1859–1877. SIAM, 2020. [LV21] Thodoris Lykouris and Sergei Vassilvitskii. Competitive Caching with Machine Learned Advice. Journal of the ACM (JACM), 68(4):1–25, 2021. [Mee95a] Christopher Meek. Causal Inference and Causal Explanation with Background Knowledge. In Proceedings of the Eleventh Conference on Uncertainty in Artificial Intelligence, UAI’95, page 403–410, San Francisco, CA, USA, 1995. Morgan Kaufmann Publishers Inc. [Mee95b] Christopher Meek. Strong completeness and faithfulness in Bayesian networks. In Proceedings of the Eleventh conference on Uncertainty in artificial intelligence, pages 411–418, 1995. [Mit18] [MM13] Michael Mitzenmacher. A Model for Learned Bloom Filters, and Optimizing by Sandwiching. Advances in Neural Information Processing Systems, 31, 2018. Andr´es R Masegosa and Seraf´ın Moral. 
An interactive approach for Bayesian network learning using domain/expert knowledge. International Journal of Approximate Reasoning, 54(8):1168– 1181, 2013. [Mur01] Kevin P Murphy. Active Learning of Causal Bayes Net Structure. Technical report, UC Berkeley, 2001. [MV22] [Pea09] [Per20] [PKM17] [POS+18] Michael Mitzenmacher and Sergei Vassilvitskii. Algorithms with Predictions. Communications of the ACM, 65(7):33–35, 2022. Judea Pearl. Causality: Models, Reasoning and Inference. Cambridge University Press, USA, 2nd edition, 2009. Emilija Perkovic. Identifying causal effects in maximally oriented partially directed acyclic graphs. In Conference on Uncertainty in Artificial Intelligence, pages 530–539. PMLR, 2020. Interpreting and using CPDAGs Emilija Perkovic, Markus Kalisch, and Marloes H Maathuis. with background knowledge. In Proceedings of the 2017 Conference on Uncertainty in Artificial Intelligence (UAI2017), pages ID–120. AUAI Press, 2017. Jean-Baptiste Pingault, Paul F O’reilly, Tabea Schoeler, George B Ploubidis, Fr¨uhling Rijsdijk, and Frank Dudbridge. Using genetic data to strengthen causal inference in observational research. Nature Reviews Genetics, 19(9):566–580, 2018. [PSK18] Manish Purohit, Zoya Svitkina, and Ravi Kumar. Improving Online Algorithms via ML Predic- tions. Advances in Neural Information Processing Systems, 31, 2018. [Rei56] Hans Reichenbach. The Direction of Time, volume 65. University of California Press, 1956. [RHT+17] Maya Rotmensch, Yoni Halpern, Abdulhakim Tlimat, Steven Horng, and David Sontag. Learning a Health Knowledge Graph from Electronic Medical Records. Scientific reports, 7(1):1–11, 2017. [Roh20] Dhruv Rohatgi. Near-Optimal Bounds for Online Caching with Machine Learned Advice. In Proceedings of the Fourteenth Annual ACM-SIAM Symposium on Discrete Algorithms, pages 1834– 1845. SIAM, 2020. [RW06] Donald B Rubin and Richard P Waterman. Estimating the Causal Effects of Marketing Interven- tions Using Propensity Score Methodology. Statistical Science, pages 206–222, 2006. 15 [SC17] Yuriy Sverchkov and Mark Craven. A review of active learning approaches to experimental design for uncovering biological networks. PLoS computational biology, 13(6):e1005466, 2017. [SGSH00] Peter Spirtes, Clark N Glymour, Richard Scheines, and David Heckerman. Causation, Prediction, and Search. MIT press, 2000. [SKDV15] Karthikeyan Shanmugam, Murat Kocaoglu, Alexandros G. Dimakis, and Sriram Vishwanath. Learning Causal Graphs with Small Interventions. Advances in Neural Information Processing Systems, 28, 2015. [SMG+20] Chandler Squires, Sara Magliacane, Kristjan Greenewald, Dmitriy Katz, Murat Kocaoglu, and Karthikeyan Shanmugam. Active Structure Learning of Causal DAGs via Directed Clique Trees. Advances in Neural Information Processing Systems, 33:21500–21511, 2020. [SSG+98] Richard Scheines, Peter Spirtes, Clark Glymour, Christopher Meek, and Thomas Richardson. he TETAD Project: Constraint Based Aids to Causal Model Specification. Multivariate Behavioral Research, 33(1):65–117, 1998. [SWU21] Liam Solus, Yuhao Wang, and Caroline Uhler. Consistency guarantees for greedy permutation- based causal inference algorithms. Biometrika, 108(4):795–814, 2021. [TJG+19] Cheng Tan, Ze Jin, Chuanxiong Guo, Tianrong Zhang, Haitao Wu, Karl Deng, Dongming Bi, and Dong Xiang. NetBouncer: Active Device and Link Failure Localization in Data Center Networks. In Proceedings of the 16th USENIX Conference on Networked Systems Design and Implementation, pages 599–613, 2019. 
[TK00] [TK01] Simon Tong and Daphne Koller. Active Learning for Parameter Estimation in Bayesian Networks. Advances in Neural Information Processing Systems, 13, 2000. Simon Tong and Daphne Koller. Active Learning for Structure in Bayesian Networks. In Inter- national joint conference on artificial intelligence, volume 17, pages 863–869. Citeseer, 2001. [URBY13] Caroline Uhler, Garvesh Raskutti, Peter B¨uhlmann, and Bin Yu. Geometry of the faithfulness assumption in causal inference. The Annals of Statistics, pages 436–463, 2013. [VP90] Thomas Verma and Judea Pearl. Equivalence and Synthesis of Causal Models. In Proceedings of the Sixth Annual Conference on Uncertainty in Artificial Intelligence, UAI ’90, page 255–270, USA, 1990. Elsevier Science Inc. [WAJ+23] Qing Wang, Jesus Rios Aliaga, Saurabh Jha, Karthikeyan Shanmugam, Frank Bagehorn, Xi Yang, Robert Filepp, Naoki Abe, and Larisa Shwartz. Fault Injection based Interventional Causal Learn- ing for Distributed Applications. In Innovative Applications of Artificial Intelligence Conference, 2023. [WBL21a] Marcel Wien¨obst, Max Bannach, and Maciej Li´skiewicz. Extendability of Causal Graphical Mod- els: Algorithms and Computational Complexity. In Uncertainty in Artificial Intelligence, pages 1248–1257. PMLR, 2021. [WBL21b] Marcel Wien¨obst, Max Bannach, and Maciej Li´skiewicz. Polynomial-Time Algorithms for Count- ing and Sampling Markov Equivalent DAGs. In Proccedings of the 35th Conference on Artificial Intelligence, AAAI, 2021. [Wei20] Alexander Wei. Better and Simpler Learning-Augmented Online Caching. In Approximation, Ran- domization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2020). Schloss Dagstuhl-Leibniz-Zentrum f¨ur Informatik, 2020. [WLW20] Shufan Wang, Jian Li, and Shiqiang Wang. Online Algorithms for Multi-shop Ski Rental with Machine Learned Advice. Advances in Neural Information Processing Systems, 33:8150–8160, 2020. [Woo05] James Woodward. Making Things Happen: A Theory of Causal Explanation. Oxford University Press, 2005. 16 [WSYU17] Yuhao Wang, Liam Solus, Karren Yang, and Caroline Uhler. Permutation-based Causal Inference Algorithms with Interventions. Advances in Neural Information Processing Systems, 30, 2017. [ZPX+19] Xiang Zhou, Xin Peng, Tao Xie, Jun Sun, Chao Ji, Dewei Liu, Qilin Xiang, and Chuan He. Latent error prediction and fault localization for microservice applications by learning from system trace logs. In Proceedings of the 2019 27th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering, pages 683–694, 2019. [ZS02] Jiji Zhang and Peter Spirtes. Strong Faithfulness and Uniform Consistency in Causal Inference . In Proceedings of the Nineteenth conference on Uncertainty in Artificial Intelligence, pages 632–639, 2002. 17 A Remark about assumptions Under causal sufficiency, there are no hidden confounders (i.e. unobserved common causes to the observed variables). While causal sufficiency may not always hold, it is still a reasonable assumption to make in certain applications such as studying gene regulatory networks (e.g. see [WSYU17]). Faithfulness assumes that independencies that occur in the data do not occur due to “cancellations” in the functional relationships, but rather due to the causal graph structure. It is known [Mee95b, SGSH00] that, under many natural parameterizations and settings, the set of unfaithful parameters for any given causal DAG has zero Lebesgue measure (i.e. 
faithfulness holds; see also Section 3.2 of [ZS02] for a discussion about faithfulness). However, one should be aware that the faithfulness assumption may be violated in reality [And13, URBY13], especially in the presence of sampling errors in the finite sample regime. Ideal interventions assume hard interventions (forcefully setting a variable value) and the ability to obtain as many interventional samples as desired, ensuring that we always recover the directions of all edges cut by interventions. Without this assumption, we may fail to correctly infer some arc directions and our algorithms will only succeed with some success probability. Our assumption that the given expert advice is consistent with observational essential graph is purely for simplicity and can be removed by deciding which part of the given advice to discard so that the remaining advice is consistent. However, we feel that deciding which part of the inconsistent advice to discard will unnecessarily complicate our algorithmic contributions without providing any useful insights, and thus we made such an assumption. B Additional Preliminaries For any set A, we denote its powerset by 2A. We write {1, . . . , n} as [n] and hide absolute constant multiplicative factors in n using standard asymptotic notations O(·), Ω(·), and Θ(·). The indicator function 1predicate is 1 if the predicate is true and 0 otherwise. Throughout, we use G∗ to denote the (unknown) ground truth DAG, its Markov equivalence class by [G∗] and the corresponding essential graph by E(G∗). We write A ˙∪ B and A \ B to represent the disjoint union and set difference of two sets A and B respectively. B.1 Graph basics We consider partially oriented graphs without parallel edges. Let G = (V, E) be a graph on |V | = n nodes/vertices where V (G), E(G), and A(G) ⊆ E(G) denote nodes, edges, and arcs of G respectively. The graph G is said to be fully oriented if A(G) = E(G), fully unoriented if A(G) = ∅, and partially oriented otherwise. For any subset V ′ ⊆ V and E′ ⊆ E, we use G[V ′] and G[E′] to denote the node-induced and edge-induced subgraphs respectively. We write u ∼ v to denote that two nodes u, v ∈ V are connected in G, and write u → v or u ← v when specifying a certain direction. The skeleton skel(G) refers to the underlying graph where all edges are made undirected. A v-structure in G refers to a collection of three distinct vertices u, v, w ∈ V such that u → v ← w and u ̸∼ w. A directed cycle refers to a sequence of k ≥ 3 vertices where v1 → v2 → . . . → vk → v1. An acyclic completion / consistent extension of a partially oriented graph refers to an assignment of edge directions to the unoriented edges E(G) \ A(G) such that the resulting fully oriented graph has no directed cycles. Suppose G = (V, E) is fully unoriented. For vertices u, v ∈ V , subset of vertices V ′ ⊆ V and integer r ≥ 0, define distG(u, v) as the shortest path length between u and v, distG(V ′, v) = minu∈V ′ distG(u, v), and G(V ′) = {v ∈ V : distG(v, V ′) ≤ r} ⊆ V as the set of vertices that are r-hops away from V ′, i.e. r-hop N r neighbors of V ′. We omit the subscript G when it is clear from context. Suppose G = (V, E) is fully oriented. For any vertex v ∈ V , we write Pa(v), Anc(v), Des(v) to denote the parents, ancestors and descendants of v respectively and we write Des[v] = Des(v) ∪ {v} and Anc[v] = Anc(v) ∪ {v} to include v itself. 
We define Ch(v) ⊆ Des(v) as the set of direct children of v, that is, for any w ∈ Ch(v) there does not exists z ∈ V \ {v, w} such that z ∈ Des(v) ∩ Anc(w). Note that, Ch(v) ⊆ {w ∈ V : v → w} ⊆ Des(v). B.2 Causal graph basics A directed acyclic graph (DAG) is a fully oriented graph without directed cycles. By representing random variables by nodes, DAGs are commonly used as graphical causal models [Pea09], where the joint probability density f factorizes according to the Markov property: f (v1, . . . , vn) = (cid:81)n i=1 f (vi | pa(v)), where pa(v) denotes 18 the values taken by v’s parents. One can associate a (not necessarily unique) valid permutation / topological ordering π : V → [n] to any (partially directed) DAG such that oriented arcs (u, v) satisfy π(u) < π(v) and unoriented arcs {u, v} can be oriented as u → v without forming directed cycles when π(u) < π(v). For any DAG G, we denote its Markov equivalence class (MEC) by [G] and essential graph by E(G). DAGs in the same MEC have the same skeleton and the essential graph is a partially directed graph such that an arc u → v is directed if u → v in every DAG in MEC [G], and an edge u ∼ v is undirected if there exists two DAGs G1, G2 ∈ [G] such that u → v in G1 and v → u in G2. It is known that two graphs are Markov equivalent if and only if they have the same skeleton and v-structures [VP90, AMP97]. In fact, the essential graph E(G) can be computed from G by orienting v-structures in the skeleton skel(G) and applying Meek rules (see Appendix D). An edge u → v is a covered edge [Chi95] if Pa(u) = Pa(v) \ {u}. We use C(G) ⊆ E(G) to denote the set of covered edges of G. The following is a well-known result relating covered edges and MECs. Lemma 18 ([Chi95]). If G and G′ belong in the same MEC if and only if there exists a sequence of covered edge reversals to transform between them. C Additional Related Works on Causal Structure Learning Constraint-based algorithms, such as ours, use information about conditional independence relations to iden- tify the underlying structure. From purely observational data, the PC [SGSH00], FCI [SGSH00] and RFCI algorithms [CMKR12] have been shown to consistently recover the essential graph, assuming causal sufficiency, faithfulness, and i.i.d. samples. The problem of recovering the DAG using constraints from interventional data was first studied by [EGS06, EGS05, Ebe07]. Many recent works [HLV14, SKDV15, KDV17, LKDV18, GKS+19, SMG+20, CSB22, CS23] have followed up on these themes. Score-based methods maximize a particular score function over the space of graphs. For observational data, the GES algorithm [Chi02] uses the BIC to iteratively add edges. Extending the GES, [HB12] proposed the GIES algorithm that uses passive interventional data to orient more edges. Hybrid methods, like [SWU21] for observational and [WSYU17] for interventional data, use elements of both approaches. D Meek rules Meek rules are a set of 4 edge orientation rules that are sound and complete with respect to any given set of arcs that has a consistent DAG extension [Mee95a]. Given any edge orientation information, one can always repeatedly apply Meek rules till a unique fixed point (where no further rules trigger) to maximize the number of oriented arcs. Definition 19 (The four Meek rules [Mee95a], see Fig. 5 for an illustration). R1 Edge {a, b} ∈ E(G) \ A(G) is oriented as a → b if ∃ c ∈ V such that c → a and c ̸∼ b. R2 Edge {a, b} ∈ E(G) \ A(G) is oriented as a → b if ∃ c ∈ V such that a → c → b. 
R3 Edge {a, b} ∈ E(G) \ A(G) is oriented as a → b if ∃ c, d ∈ V such that d ∼ a ∼ c, d → b ← c, and c ̸∼ d. R4 Edge {a, b} ∈ E(G) \ A(G) is oriented as a → b if ∃ c, d ∈ V such that d ∼ a ∼ c, d → c → b, and b ̸∼ d. c a R1 b c a b c a R2 b c a b a d R3 c b a d c b d a c b R4 d a c b Figure 5: An illustration of the four Meek rules There exists an algorithm (Algorithm 2 of [WBL21a]) that runs in O(d · |E(G)|) time and computes the closure under Meek rules, where d is the degeneracy of the graph skeleton9. 9A d-degenerate graph is an undirected graph in which every subgraph has a vertex of degree at most d. Note that the degeneracy of a graph is typically smaller than the maximum degree of the graph. 19 E Imperfect partial advice via MPDAGs In the previous sections, we discuss advice that occurs in the form of a DAG (cid:101)G ∈ [G∗]. However, this may be too much to ask for in certain situations. For example: • The Markov equivalence class may be too large for an expert to traverse through and propose an advice DAG. • The expert only has opinions about a subset of a very large causal graph involving millions of nodes / edges. As discussed in Section 2.4, we can formulate such partial advice as MPDAGs. Given a MPDAG as expert advice, a natural attempt would be to sample a DAG (cid:101)G from it to use the full advice. Unfortunately, it is #P-complete even to count the number of DAGs consistent with a given MPDAG in general [WBL21b] and we are unaware of any efficient way to sample uniformly at random from it. Instead, we propose to pick an arbitrary DAG (cid:101)G as advice within the given MPDAG: pick any unoriented edge, orient arbitrarily, apply Meek rules, repeat until fully oriented. The following result follows naturally by maximizing over all possible DAGs consistent with the given partial advice. Theorem 20. Fix an essential graph E(G∗) with an unknown underlying ground truth DAG G∗. Given a set A of DAGs consistent with the given partial advice and intervention set bound k ≥ 1, there exists a deterministic polynomial time algorithm that computes an intervention set I adaptively such that EI(G∗) = G∗, and |I| has size 1. O(max{1, log max 2. O(max{1, log max when k = 1 and k > 1 respectively. (cid:101)G∈A ψ(G∗, (cid:101)G)} · ν1(G∗)) (cid:101)G∈A ψ(G∗, (cid:101)G)} · log k · νk(G∗)) F Technical Overview for Theorem 10 As discussed in Section 2, it suffices to prove Theorem 10 with respect to moral DAGs. Our strategy for proving Theorem 10 is to consider two arbitrary DAGs Gs (source) and Gt (target) in the same equivalence class and transform a verifying set for Gs into a verifying set for Gt using Lemma 18 (see Algorithm 2 for the explicit algorithm10). Instead of proving Theorem 10 by analyzing the exact sequence of covered edges produced by Algorithm 211 when transforming between the DAGs Gmin = argminG∈[G∗]ν1(G) and Gmax = argmaxG∈[G∗]ν1(G), we will prove something more general. Algorithm 2 [Chi95]: Transforms between two DAGs within the same MEC via covered edge reversals Input: Two DAGs Gs = (V, Es) and Gt = (V, Et) Output: A sequence seq of covered edge reversals that transforms Gs to Gt 1: seq ← ∅ 2: while Gs ̸= Gt do 3: Fix an arbitrary valid ordering π for Gs. Let A ← A(Gs) \ A(Gt) be the set of differing arcs. Let y ← argminy ∈ V : PaA(y)̸=∅{π(y)}. Let x ← argmaxz ∈ PaA(y){π(z)}. Add x → y to seq. Update Gs by replacing x → y with y → x. 
4: 5: 6: 7: 8: 9: end while 10: return seq ▷ [Chi95, Lemma 2]: x → y ∈ C(Gs) Observe that taking both endpoints of any maximal matching of covered edges is a valid verifying set that is at most twice the size of the minimum verifying set. This is because maximal matching is a 2-approximation to the minimum vertex cover. Motivated by this observation, our proof for Theorem 10 uses the following 10Lemma 2 of [Chi95] guarantees that x → y is a covered edge of the current Gs whenever step 9 is executed. 11The correctness of Algorithm 2 is given in [Chi95] where the key idea is to show that x → y found in this manner is a covered edge. This is proven in Lemma 2 of [Chi95]. 20 transformation argument (Lemma 23): for two DAGs G and G′ that differ only on the arc direction of a single covered edge x ∼ y, we show that given a conditional-root-greedy (CRG) maximal matching12 on the covered edges of G, we can obtain another CRG maximal matching of the same size on the covered edges of G′, after reversing x ∼ y and transforming G to G′. So, starting from Gs, we compute a CRG maximal matching, then we apply the transformation argument above on the sequence of covered edges given by Algorithm 2 until we get a CRG maximal matching of Gt of the same size. Thus, we can conclude that the minimum vertex cover sizes of Gs and Gt differ by a factor of at most two. This argument holds for any pair of DAGs (Gs, Gt) from the same MEC. We now define what is a conditional-root-greedy (CRG) maximal matching. As the set of covered edges C(G) of any DAG G induces a forest (see Theorem 6), we define the CRG maximal matching using a particular greedy process on the tree structure of C(G). The CRG maximal matching is unique with respect to a fixed valid ordering π of G and subset S. We will later consider CRG maximal matchings with S = A(Gs) ∩ A(Gt), where the arc set S remains unchanged throughout the entire transformation process. Definition 21 (Conditional-root-greedy (CRG) maximal matching). Given a DAG G = (V, E) with a valid ordering πG and a subset of edges S ⊆ E, we define the conditional-root-greedy (CRG) maximal matching MG,πG,S as the unique maximal matching on C(G) computed via Algorithm 3: greedily choose arcs x → y where the x has no incoming arcs by minimizing πG(y), conditioned on favoring arcs outside of S. Algorithm 3 Conditional-root-greedy maximal matching Input: A DAG G = (V, E), a valid ordering πG, a subset of edges S ⊆ E Output: A CRG maximal matching MG,πG,S 1: Initialize MG,πG,S ← ∅ and C ← C(G) 2: while C ̸= ∅ do 3: x ← argminz ∈ {u∈V | u→v∈C}{πG(z)} y ← argminz∈V : x→z ∈ C{πG(z) + n2 · 1x→z∈S} Add the arc x → y to MG,πG,S Remove all arcs with x or y as endpoints from C 4: 5: 6: 7: end while 8: return MG,πG,S ▷ x is a root (i.e. no incoming arcs) To prove the transformation argument (Lemma 23), we need to first understand how the status of covered edges evolve when we perform a single edge reversal. The following lemma may be of independent interest beyond this work. Lemma 22 (Covered edge status changes due to covered edge reversal). Let G∗ be a moral DAG with MEC [G∗] and consider any DAG G ∈ [G∗]. Suppose G = (V, E) has a covered edge x → y ∈ C(G) ⊆ E and we reverse x → y to y → x to obtain a new DAG G′ ∈ [G∗]. Then, all of the following statements hold: 1. y → x ∈ C(G′). Note that this is the covered edge that was reversed. 2. If an edge e does not involve x or y, then e ∈ C(G) if and only if e ∈ C(G′). 3. 
If x ∈ ChG(a) for some a ∈ V \ {x, y}, then a → x ∈ C(G) if and only if a → y ∈ C(G′). 4. If b ∈ ChG(y) and x → b ∈ E(G) for some b ∈ V \ {x, y}, then y → b ∈ C(G) if and only if x → b ∈ C(G′). Using Lemma 22, we derive our transformation argument. Lemma 23. Consider two moral DAGs G1 and G2 from the same MEC such that they differ only in one covered edge direction: x → y ∈ E(G1) and y → x ∈ E(G2). Let vertex a be the direct parent of x in G1, if it exists. Let S ⊆ E be a subset such that a → x ∈ S and x → y, y → x ̸∈ S (if a does not exist, ignore condition a → x ∈ S). Suppose πG1 is an ordering for G1 such that y = argminz : x→z∈C(G1){πG1(z) + n2 · 1x→z∈S} and denote MG1,πG1 ,S as the corresponding CRG maximal matching for C(G1). Then, there exists an explicit modification of πG1 to πG2, and MG1,πG1 ,S to a CRG maximal matching MG2,πG2 ,S for C(G2) such that |MG1,πG1 ,S| = |MG2,πG2 ,S|. 12A special type of maximal matching (see Definition 21). 21 To be precise, given πG1 , we will define πG2 in our proofs as follows: πG2(v) =  πG1(x)  πG1(u) πG1(y)  πG1(v) if v = y if v = x if v = u else (1) As discussed earlier, Theorem 10 follows by picking Gs = argmaxG∈[G∗]ν1(G) and Gt = argminG∈[G∗]ν1(G), applying Algorithm 2 to find a transformation sequence of covered edge reversals between them, and repeatedly applying Lemma 23 with the conditioning set S = A(Gs) ∩ A(Gt) to conclude that Gs and Gt have the same sized CRG maximal matchings, and thus implying that minG∈[G∗] ν1(G) = ν1(Gs) ≤ 2 · ν1(Gt) = 2·argmaxG∈[G∗]ν1(G). Note that we keep the conditioning set S unchanged throughout the entire transformation process from Gs to Gt. For an illustrated example of conditional-root-greedy (CRG) maximal matchings and how we update the permutation ordering, see Fig. 6 and Fig. 7. a 1 b 5 y 4 G1 x 2 u 3 a 1 b 5 y 2 G2 x 3 u 4 Figure 6: Consider the following simple setup of two DAGs G1 and G2 which agree on all arc directions except for x → y in G1 and y → x in G2. Dashed arcs represent the covered edges in each DAG. The numbers below each vertex indicate the πG1 and πG2 orderings respectively. In G1, u = argminz∈ChG1 (x){πG1 (z)}. Observe that Eq. (1) modifies the ordering only for {x, y, u} (in blue) while keeping the ordering of all other vertices fixed. Suppose S = A(G1) ∩ A(G2) = {a → b, a → x, a → y, a → u, x → b, x → u, y → b}. With respect to πG1 and S, The conditional-root-greedy maximal matchings (see Algorithm 3) are MG1,πG1 ,S = {a → x, y → b} and MG2,πG2 ,S = {a → y, x → b}. x 1 u 2 b 4 y 3 G1 x 2 u 3 b 4 y 1 G2 Figure 7: Consider the following simple setup of two DAGs G3 and G4 which agree on all arc directions except for x → y in G3 and y → x in G4. Dashed arcs represent the covered edges in each DAG. The numbers below each vertex indicate the πG3 and πG4 orderings respectively. Observe that C(G3) = {x → u, x → y, y → b}. If we define S = A(G3) ∩ A(G4) = {x → b, x → u, y → b}, we see that the conditional-root-greedy maximal matchings (see Algorithm 3) are MG3,πG3 ,S = {x → y} and MG4,πG4 ,S = {y → x}. Note that Algorithm 3 does not choose x → u ∈ C(G1) despite π(u) < π(y) because x → u ∈ S, so π(y) < π(u) + n2. G Deferred proofs G.1 Preliminaries Our proofs rely on some existing results which we first state and explain below. 22 Lemma 24 (Lemma 27 of [CSB22]). Fix an essential graph E(G∗) and G ∈ [G∗]. If I ⊆ 2V is a verifying set, then I separates all unoriented covered edge u ∼ v of G. Lemma 25 (Lemma 28 of [CSB22]). Fix an essential graph E(G∗) and G ∈ [G∗]. 
If I ⊆ 2V is an intervention set that separates every unoriented covered edge u ∼ v of G, then I is a verifying set. Lemma 24 tells us that we have to intervene on one of the endpoints of any covered edge in order to orient it while Lemma 25 tells us that doing so for all covered edges suffices to orient the entire causal DAG. G.2 Verification numbers of DAGs within same MEC are bounded by a factor of two We use the following simple lemma in our proof of Lemma 22. Lemma 26. For any covered edge x → y in a DAG G = (V, E), we have y ∈ ChG(x). Furthermore, each vertex only appears as an endpoint in the collection of covered edges C(G) at most once. Proof. For the first statement, suppose, for a contradiction, that y ̸∈ Ch(x). Then, there exists some z ∈ V \ {x, y} such that z ∈ Des(x)∩Anc(y). Fix an arbitrary ordering π for G and let z∗ = argmaxz∈Des(x)∩Anc(y){π(z)}. Then, we see that z∗ → y while z∗ ̸→ x since z∗ ∈ Des(x). So, x → y cannot be a covered edge. Contradiction. For the second statement, suppose, for a contradiction, that there are two covered edges u → x, v → x ∈ C(G) that ends with x. Since u → x ∈ C(G), we must have v → u. Since v → x ∈ C(G), we must have u → v. We cannot have both u → v and v → u simultaneously. Contradiction. Lemma 22 (Covered edge status changes due to covered edge reversal). Let G∗ be a moral DAG with MEC [G∗] and consider any DAG G ∈ [G∗]. Suppose G = (V, E) has a covered edge x → y ∈ C(G) ⊆ E and we reverse x → y to y → x to obtain a new DAG G′ ∈ [G∗]. Then, all of the following statements hold: 1. y → x ∈ C(G′). Note that this is the covered edge that was reversed. 2. If an edge e does not involve x or y, then e ∈ C(G) if and only if e ∈ C(G′). 3. If x ∈ ChG(a) for some a ∈ V \ {x, y}, then a → x ∈ C(G) if and only if a → y ∈ C(G′). 4. If b ∈ ChG(y) and x → b ∈ E(G) for some b ∈ V \ {x, y}, then y → b ∈ C(G) if and only if x → b ∈ C(G′). Proof. The only parental relationships that changed when we reversing x → y to y → x are PaG′(y) = PaG(y) \ {x} and PaG′(x) = PaG(x) ∪ {y}. For any other vertex u ∈ V \ {x, y}, we have PaG′(u) = PaG(u). The first two points have the same proof: as parental relationships of both endpoints are unchanged, the covered edge status is unchanged. 3. Since x → y ∈ C(G), we have a → y ∈ E(G). We prove both directions separately. Suppose a → x ∈ C(G). Then, PaG(a) = PaG(x) \ {a}. Since x → y ∈ C(G), then PaG(x) = PaG(y) \ {x}. So, we have PaG′(a) = PaG(a) = PaG(x) \ {a} = PaG(y) \ {x, a} = PaG′(y) \ {a}. Thus, a → y ∈ C(G′). Suppose a → x ̸∈ C(G). Then, one of the two cases must occur: (a) There exists some vertex u such that u → a and u ̸→ x in G. Since x → y is a covered edge, u ̸→ x implies u ̸→ y in G. Therefore, a → y ̸∈ C(G′) due to u → a. (b) There exists some vertex v such that v → x and v ̸→ a in G. There are two possibilities for v ̸→ a: v ̸∼ a or v ← a. If v ̸∼ a, then v → x ← a is a v-structure. If v ← a, then x ̸∈ Ch(a) since we have a → v → x. Both possibilities lead to contradictions. The first case implies a → y ̸∈ C(G′) while the second case cannot happen. 4. We prove both directions separately. Suppose y → b ∈ C(G). Then, PaG(y) = PaG(b) \ {y}. Since x → y ∈ C(G), then PaG(x) = PaG(y) \ {x}. So, we have PaG′(b) \ {x} = PaG(b) \ {x} = PaG(y) ∪ {y} \ {x} = PaG(x) ∪ {y} = PaG′(x). Thus, x → b ∈ C(G′). Suppose y → b ̸∈ C(G). Then, one of the two cases must occur: 23 • There exists some vertex u → y and u ̸→ b. Since x → y is a covered edge, u → y implies u → x. Therefore, x → b ̸∈ C(G′) due to u ̸→ b. 
• There exists some vertex v → b and v ̸→ y. There are two possibilities for v ̸→ y: v ̸∼ y or v ← y. If v ̸∼ y, then v → b ← y is a v-structure. If v ← y, then b ̸∈ Ch(y) since we have y → v → b. Both possibilities lead to contradictions. The first case implies x → b ̸∈ C(G′) while the second case cannot happen. Lemma 23. Consider two moral DAGs G1 and G2 from the same MEC such that they differ only in one covered edge direction: x → y ∈ E(G1) and y → x ∈ E(G2). Let vertex a be the direct parent of x in G1, if it exists. Let S ⊆ E be a subset such that a → x ∈ S and x → y, y → x ̸∈ S (if a does not exist, ignore condition a → x ∈ S). Suppose πG1 is an ordering for G1 such that y = argminz : x→z∈C(G1){πG1(z) + n2 · 1x→z∈S} and denote MG1,πG1 ,S as the corresponding CRG maximal matching for C(G1). Then, there exists an explicit modification of πG1 to πG2, and MG1,πG1 ,S to a CRG maximal matching MG2,πG2 ,S for C(G2) such that |MG1,πG1 ,S| = |MG2,πG2 ,S|. Proof. Define u = argminz∈ChG1 (x){πG1 (z)} as the lowest ordered child of x. Note that Algorithm 3 chooses x → y instead of x → u by definition of y. This implies that x → u ∈ S whenever u ̸= y. Let us define πG2 as follows:    πG1 (x) πG1 (u) πG1 (y) πG1 (v) Clearly, πG1(x) < πG1(y) and πG2(x) > πG2(y). Meanwhile, for any other two adjacent vertices v and v′, observe that πG1 (v) < πG1(v′) ⇐⇒ πG2 (v) < πG2(v′) so πG2 agrees with the arc orientations of πG1 except for x ∼ y. See Fig. 6 for an illustrated example. if v = y if v = x if v = u else πG2(v) = Define vertex b as follows: b = argminz∈V : z∈Des(x) and y→z∈C(G1){πG1(z) + n2 · 1x→z∈S} If vertex b exists, then we know that b ∈ ChG1(y) and x → b ∈ C(G2) by Lemma 26 and Lemma 22. By minimality of b, Definition 21 will choose y → b if picking a covered edge starting with y for MG1,πG1 ,S. So, we can equivalently define vertex b as follows: b = argminz∈V : z∈Des(y) and x→z∈C(G2){πG2(z) + n2 · 1x→z∈S} By choice of πG2 , Definition 21 will choose x → b if picking a covered edge starting with x for MG2,πG2 ,S. We will now construct a same-sized maximal matching MG2,πG2 ,S from MG1,πG1 ,S (Step 1), argue that it is maximal matching of C(G2) (Step 2), and that it is indeed a conditional-root-greedy matching for C(G2) with respect to πG2 and S (Step 3). There are three cases that cover all possibilities: Case 1 Vertex a exists, a → x ∈ MG1,πG1 ,S, and vertex b exists. Case 2 Vertex a exists, a → x ∈ MG1,πG1 ,S, and vertex b does not exist. Case 3 a → x ̸∈ MG1,πG1 ,S. This could be due to vertex a not existing, or a → x ̸∈ C(G1), or MG1,πG1 ,S containing a covered edge ending at a so a → x was removed from consideration. Step 1: Construction of MG2,πG2 ,S such that |MG2,πG2 ,S| = |MG1,πG1 ,S|. By Lemma 22, covered edge statuses of edges whose endpoints do not involve x or y will remain unchanged. By definition of y, we know that Definition 21 will choose x → y if picking a covered edge starting with x for MG1,πG1 ,S. Since a → x ∈ MG1,πG1 in cases 1 and 2, we know that there is no arcs of the form x → · in MG1,πG1 ,S. Since there is no arc of the form · → x in MG1,πG1 ,S in case 3, we know that x → y ∈ MG1,πG1 ,S. Case 1 Define MG2,πG2 ,S = MG1,πG1 ,S ∪ {a → y, x → b} \ {a → x, y → b}. 24 Case 2 Define MG2,πG2 ,S = MG1,πG1 ,S ∪ {a → y} \ {a → x}. Case 3 Define MG2,πG2 ,S = MG1,πG1 ,S ∪ {y → x} \ {x → y}. By construction, we see that |MG2,πG2 ,S| = |MG1,πG1 ,S|. Step 2: MG2,πG2 ,S is a maximal matching of the covered edge C(G2) of G2. 
To prove that MG2,πG2 ,S is a maximal matching of C(G2), we argue in three steps: 2(i) Edges of MG2,πG2 ,S belong to C(G2). 2(ii) MG2,πG2 ,S is a matching of C(G2). 2(iii) MG2,πG2 ,S is maximal matching of C(G2). Step 2(i): Edges of MG2,πG2 ,S belong to C(G2). By Lemma 22, covered edge statuses of edges whose endpoints do not involve x or y will remain unchanged. Since MG1,πG1 ,S is a matching, it has at most one edge e involving endpoint x and at most one edge e′ involving endpoint y (e′ could be e). Case 1 Since b exists, the edges in MG1,πG1 ,S with endpoints involving {x, y} are a → x and y → b. By Lemma 22, we know that a → y, x → b ∈ C(G2). Case 2 Since b does not exist, the only edge in MG1,πG1 ,S with endpoints involving {x, y} is a → x. By Lemma 22, we know that a → y ∈ C(G2). Case 3 Since a → x ̸∈ MG1,πG1 ,S, we have x → y ∈ MG1,πG1 ,S by minimality of y. In all cases, we see that MG2,πG2 ,S ⊆ C(G2). Step 2(ii): MG2,πG2 ,S is a matching of C(G2). It suffices to argue that there are no two edges in MG2,πG2 ,S sharing an endpoint. Since MG1,πG1 ,S is a matching, this can only happen via newly added endpoints in MG2,πG2 ,S. Case 1 The endpoints of newly added edges are exactly the endpoints of removed edges. Case 2 Since we removed a → x and added a → y, it suffices to check that there are no edges in MG1,πG1 ,S involving y. This is true since b does not exist in Case 2. Case 3 The endpoints of newly added edges are exactly the endpoints of removed edges. Therefore, we conclude that MG2,πG2 ,S is a matching of C(G2). Step 2(iii): MG2,πG2 ,S is a maximal matching of C(G2). For any u → v ∈ C(G2), we show that there is some edge in MG2,πG2 ,S with at least one of u or v is an endpoint. By Lemma 22, covered edge statuses of edges whose endpoints do not involve x or y will remain unchanged, so it suffices to consider |{u, v} ∩ {x, y}| ≥ 1. We check the following 3 scenarios corresponding to |{u, v} ∩ {x, y}| ≥ 1 below: (i) y ∈ {u, v}. The endpoints of MG2,πG2 always contains y. (ii) y ̸∈ {u, v} and x → v ∈ C(G2), for some v ∈ V \ {x, y}. Since x → v ∈ C(G2) and y → x in G2, it must be the case that y → v in G2. Since G1 and G2 agrees on all arcs except x ∼ y, we have that y → v in G1 as well. Since x → v ∈ C(G2), we know that v ∈ ChG2(x) via Lemma 26. So, we have y → v ∈ C(G1) via Lemma 22. Since the set {v : y → v ∈ C(G1)} is non-empty, vertex b exists. In both cases 1 and 3, the endpoints of MG2,πG2 includes x. (iii) y ̸∈ {u, v} and u → x ∈ C(G2), for some u ∈ V \ {x, y}. By Lemma 26, we know that x ∈ ChG2(u). Meanwhile, since y → x ∈ C(G2), we must have u → y in G2. However, this implies that x ̸∈ ChG2(u) since u → y → x exists. This is a contradiction, so this situation cannot happen. As the above argument holds for any u → v ∈ C(G2), we see that MG2,πG2 is maximal matching for C(G2). 25 Step 3: MG2,πG2 ,S is a conditional-root-greedy maximal matching. We now compare the execution of Algorithm 3 on (πG1, S) and (πG2, S). Note that S remains unchanged. We know the following: • Since πG2 (y) = πG1(x) and a → x ∈ S, if a exists and a → x is chosen by Algorithm 3 on (πG1 , S), then it means that there are no a → v arc in C(G1) such that a → v ̸∈ S. So, a → y will be chosen by Algorithm 3 on (πG2 , S) if a exists. • Since πG2 (y) = πG1 (x), x is chosen as a root by Algorithm 3 on (πG1 , S) if and only if y is chosen as a root by Algorithm 3 on (πG2, S). • By definition of b, if it exists, then y → b ∈ MG1,πG1 ,S ⇐⇒ x → b ∈ MG2,πG2 ,S. 
• By the definition of πG2, we see that Algorithm 3 makes the “same decisions” when choosing arcs rooted on V \ {a, x, y, b}. Therefore, MG2,πG2 ,S is indeed a conditional-root-greedy maximal matching for C(G2) with respect to πG2 and S. Theorem 10. For any DAG G∗ with MEC [G∗], we have that maxG∈[G∗] ν1(G) ≤ 2 · minG∈[G∗] ν1(G). Proof. Consider any two DAGs Gs, Gt ∈ [G∗]. To transform Gs = (V, Es) to Gt = (V, Et), Algorithm 2 flips covered edges one by one such that |Es \ Et| decreases in a monotonic manner. We will repeatedly apply Lemma 23 with S = A(Gs) ∩ A(Gt) on the sequence of covered edge reversals produced by Algorithm 2. Let πGs be an arbitrary ordering for Gs and we compute an initial conditional-root-greedy maximal matching for C(Gs) with respect to some ordering πGs and conditioning set S. To see why Lemma 23 applies at each step for reversing a covered edge from x → y to y → x, we need to ensure the following: 1. If x has a parent vertex a (i.e. x ∈ ChG1(a)), then a → x ∈ S. If a → x ̸∈ S, then then a → x is a covered edge that should be flipped to transform from Gs to Gt. However, this means that Algorithm 2 would pick a → x to reverse instead of picking x → y to reverse. Contradiction. 2. x → y, y → x ̸∈ S. This is satisfied by the definition of S = Es ∩ Et since reversing x → y to y → x implies that neither of them are in S. 3. y = argminz : x→z∈C(G1){πG1(z) + n2 · 1x→z∈S}. Since x → y ̸∈ S, this is equivalent to checking if y = argminz : x→z∈C(G1){πG1(z)}. This is satisfied by line 7 of Algorithm 2. 4. MG1,πG1 ,S is a conditional-root-greedy maximal matching for C(G1) with respect to some ordering πG1 and conditioning set S. This is satisfied since we always maintain a conditional-root-greedy maximal matching and S is unchanged throughout. By applying Lemma 23 with S = A(Gs) ∩ A(Gt) repeatedly on the sequence of covered edge reversals for |. produced by Algorithm 2, we see that there exists a conditional-root-greedy maximal matching MGs,πGs C(Gs) and a conditional-root-greedy maximal matching MGt,πGt | = |MGt,πGt for C(Gt) such that |MGs,πGs The claim follows since maximal matching is a 2-approximation to minimum vertex cover, and the verifi- cation number ν(G) of any DAG G is the size of the minimum vertex cover of its covered edges C(G). Lemma 11 (Tightness of Theorem 10). There exist DAGs G1 and G2 from the same MEC with ν1(G1) = 2 · ν1(G2). Proof. See Fig. 8. 26 b a c d b a c d G1 G2 Figure 8: The ratio of 2 in Theorem 10 is tight: G1 and G2 belong in the same MEC with ν(G1) = 2 and ν(G2) = 1. The dashed arcs represent the covered edges and the boxed vertices represent a minimum vertex cover of the covered edges. G.3 Adaptive search with imperfect advice Lemma 15. Fix a DAG G∗ = (V, E) and let V ′ ⊆ V be any subset of vertices. Suppose IV ′ ⊆ V is the set of nodes intervened by SubsetSearch(E(G∗), V ′). If C(G∗) ⊆ E(G∗[V ′]), then EIV ′ (G∗) = G∗. Proof. By Theorem 9, SubsetSearch fully orients edges within the node-induced subgraph induced by V ′, i.e. SubsetSearch will perform atomic interventions on IV ′ ⊆ V resulting in EIV ′ (G∗)[V ′] = G∗[V ′]. Since C(G∗) ⊆ E(G∗[V ′]) and all covered edges C(G∗) were oriented, then according to Lemma 24, it must be the case that V ∗ ⊆ IV ′ for some minimum vertex cover V ∗ of C(G∗), so we see that R(G∗, V ∗) ⊆ R(G∗, IV ′). By Lemma 25, we have R(G∗, V ∗) = A(G∗) and so SubsetSearch(E(G∗), V ′) fully orients E(G∗). 
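Lemmas 24 and 25 reduce verification to separating the covered edges, and the argument above already uses the fact that both endpoints of any maximal matching on C(G) form a verifying set of size at most twice ν1(G). The short Python sketch below illustrates these definitions on a DAG given by its parent sets; the dictionary representation and helper names are ours and are meant only as an illustration of the definitions, not as the paper's implementation.

def covered_edges(parents):
    """Covered edges of a DAG given as {vertex: set of its parents}:
    u -> v is covered iff Pa(u) = Pa(v) - {u} (Chickering 1995)."""
    return {(u, v) for v, pa in parents.items() for u in pa
            if parents[u] == pa - {u}}

def verifying_set_via_matching(parents):
    """Endpoints of a maximal matching on the covered edges: a valid atomic
    verifying set of size at most 2 * nu_1(G) (cf. Lemmas 24 and 25)."""
    matched = set()
    for u, v in sorted(covered_edges(parents)):
        if u not in matched and v not in matched:
            matched.update((u, v))  # greedily extend the matching
    return matched

# Toy example: the directed path a -> b -> c.
parents = {"a": set(), "b": {"a"}, "c": {"b"}}
print(covered_edges(parents))               # {('a', 'b')}: only the edge at the root is covered
print(verifying_set_via_matching(parents))  # {'a', 'b'}

On this toy path, only the edge incident to the root is covered, so intervening on either of its endpoints (together with Meek rule R1) already orients the whole graph, consistent with the discussion of rooted trees in Appendix H.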
We will now prove our main result (Theorem 14), which shows that the number of interventions needed is a function of the quality of the given advice. Let us first recall how we defined the quality of a given advice and restate our algorithm.

Definition 13 (Quality measure). Fix a DAG G∗ with MEC [G∗] and consider any DAG G̃ ∈ [G∗]. We define ψ(G∗, G̃) as follows:

ψ(G∗, G̃) = max_{Ṽ ∈ V(G̃)} |ρ(Ṽ, N^{h(G∗,Ṽ)}_{skel(E(G∗))}(Ṽ))|

Theorem 14. Fix an essential graph E(G∗) with an unknown underlying ground truth DAG G∗. Given an advice graph G̃ ∈ [G∗] and intervention set bound k ≥ 1, there exists a deterministic polynomial time algorithm (Algorithm 1) that computes an intervention set I adaptively such that E_I(G∗) = G∗, and |I| has size
1. O(max{1, log ψ(G∗, G̃)} · ν1(G∗)) when k = 1
2. O(max{1, log ψ(G∗, G̃)} · log k · νk(G∗)) when k > 1.

Algorithm 1 Adaptive search algorithm with advice.
Input: Essential graph E(G∗), advice DAG G̃ ∈ [G∗], intervention size k ∈ N
Output: An intervention set I such that each intervention involves at most k nodes and E_I(G∗) = G∗.
1: Let Ṽ ∈ V(G̃) be any atomic verifying set of G̃.
2: if k = 1 then
3:   Define I0 = Ṽ as an atomic intervention set.
4: else
5:   Define k′ = min{k, |Ṽ|/2}, a = ⌈|Ṽ|/k′⌉ ≥ 2, and ℓ = ⌈log_a |C|⌉. Compute labelling scheme on Ṽ with (|Ṽ|, k, a) via Lemma 17 and define I0 = {S_{x,y}}_{x∈[ℓ], y∈[a]}, where S_{x,y} ⊆ Ṽ is the subset of vertices whose xth letter in the label is y.
6: end if
7: Intervene on I0 and initialize r ← 0, i ← 0, n0 ← 2.
8: while E_{Ii}(G∗) still has undirected edges do
9:   if ρ(Ii, N^r_{skel(E(G∗))}(Ṽ)) ≥ n_i² then
10:     Increment i ← i + 1 and record r(i) ← r.
11:     Update ni ← ρ(Ii, N^r_{skel(E(G∗))}(Ṽ))
12:     Ci ← SubsetSearch(E_{Ii}(G∗), N^{r−1}_{skel(E(G∗))}(Ṽ), k)
13:     if E_{Ii−1 ∪ Ci}(G∗) still has undirected edges then
14:       C′i ← SubsetSearch(E_{Ii−1 ∪ Ci}(G∗), N^r_{skel(E(G∗))}(Ṽ), k)
15:       Update Ii ← Ii−1 ∪ Ci ∪ C′i.
16:     else
17:       Update Ii ← Ii−1 ∪ Ci.
18:     end if
19:   end if
20:   Increment r ← r + 1.
21: end while
22: return Ii

Proof. Consider Algorithm 1. Observe that n0 = 2 ensures that n0² > n0. In this proof, we will drop the subscript skel(E(G∗)) when we discuss the r-hop neighbors N^r_{skel(E(G∗))}(·). We first prove the case where k = 1 and then explain how to tweak the proof for the case of k > 1.

If Algorithm 1 terminates when i = 0, then I = I0 = Ṽ and Theorem 10 tells us that |I| ∈ O(ν1(G∗)). Now, suppose Algorithm 1 terminates with i = t, for some final round t > 0. As Algorithm 1 uses an arbitrary verifying set of G̃ in step 3, we will argue that O(max{1, log |N^{h(G∗,Ṽ)}(Ṽ)|} · ν(G∗)) interventions are used in the while-loop, for any arbitrarily chosen Ṽ ∈ V(G̃). The theorem then follows by taking a maximization over all possibilities in V(G̃). Whenever Algorithm 1 records r(i), it is the hop value such that ρ(Ii, N^{r(i)}(Ṽ)) ≥ n_i², for any 0 ≤ i < t. By construction of the algorithm, we know the following:

1. For any 0 < i ≤ t,

   ni = ρ(Ii, N^{r(i)}(Ṽ)) ≥ n²_{i−1} > ρ(Ii, N^{r(i)−1}(Ṽ))   (2)

   because r(i) − 1 did not trigger Algorithm 1 to record r(i).

2. By Theorem 9 and Eq.
(2), for any 1 ≤ i ≤ t, |Ci| ∈ O(log ρ(Ii, N r(i)−1( (cid:101)V )) · ν1(G∗)) ⊆ O(log ni−1 · ν1(G∗)) |C ′ i| ∈ O(log ρ(Ii, N r(i)( (cid:101)V )) · ν1(G∗)) ⊆ O(log ni · ν1(G∗)) (3) Note that the bound for |C ′ i| is an over-estimation (but this is okay for our analytical purposes) since some nodes previously counted for ρ(Ii, N r(i)( (cid:101)V )) may no longer be relevant in EIi ∪ Ci(G∗) after intervening on Ci. √ ni for any 0 < i ≤ t, we know that nj ≤ n1/2t−j t for any 0 ≤ j ≤ t. So, for any 0 ≤ t′ ≤ t, 3. Since ni−1 ≤ we have t′ (cid:88) i=0 log(ni) ≤ (cid:18) n1/2t′−i t′ (cid:19) = log t′ (cid:88) i=0 t′ (cid:88) i=0 log(nt′) 2t′−i ≤ 2 · log(nt′) 4. By definition of t, h(G∗, (cid:101)V ), and Lemma 15, and r(t − 1) < h(G∗, (cid:101)V ) ≤ r(t) N r(t−1)( (cid:101)V ) ⊊ N h(G∗, (cid:101)V )( (cid:101)V ) ⊆ N r(t)( (cid:101)V ) Combining Eq. (2), Eq. (3), and Eq. (4), we get t−1 (cid:88) i=1 (|Ci| + |C ′ i|) ∈ O (cid:32)(cid:32)t−1 (cid:88) i=1 (cid:33) (cid:33) log ni−1 + log ni · ν1(G∗) ⊆ O (cid:32)t−1 (cid:88) i=1 (cid:33) log ni · ν1(G∗) ⊆ O (log nt−1 · ν1(G∗)) (7) 28 (4) (5) (6) To relate |It| with |N h(G∗, (cid:101)V )( (cid:101)V )|, we consider two scenarios depending on whether the essential graph was fully oriented after intervening on Ct or C ′ t. Scenario 1: Fully oriented after intervening on Ct, i.e. EIt−1 ∪ Ct(G∗) = G∗. Then, It = Ct ˙∪ It−1 = Ct ˙∪ (Ct−1 ˙∪ C ′ t−1) ˙∪ It−2 = . . . = Ct ˙∪ t−1 (cid:91) i=1 (Ci ˙∪ C ′ i) ˙∪ (cid:101)V In this case, h(G∗, (cid:101)V ) = r(t) − 1. By definition, nt−1 ≤ |N r(t−1)( (cid:101)V )| and we have nt−1 ≤ |N r(t−1)( (cid:101)V )| < |N h(G∗, (cid:101)V )( (cid:101)V )| (8) since N r(t−1)( (cid:101)V ) ⊊ N h(G∗, (cid:101)V )( (cid:101)V ). So, |It| − | (cid:101)V | = |Ct| + t−1 (cid:88) i=1 (|Ci| + |C ′ i|) ∈ O (log nt−1 · ν1(G∗)) + O (log nt−1 · ν1(G∗)) By Eq. (3) and Eq. (7) (cid:16) ⊆ O log |N h(G∗, (cid:101)V )( (cid:101)V )| · ν1(G∗) (cid:17) Eq. (8) Scenario 2: Fully oriented after intervening on C ′ t, i.e. EIt−1 ∪ Ct ∪ C′ t (G∗) = G∗. Then, It = Ct ˙∪ C ′ t ˙∪ It−1 = . . . = Ct ˙∪ C ′ t ˙∪ t−1 (cid:91) (Ci ˙∪ C ′ i) ˙∪ (cid:101)V In this case, h(G∗, (cid:101)V ) = r(t) and N h(G∗, (cid:101)V )( (cid:101)V ) = N r(t)( (cid:101)V ). So, i=1 nt ≤ |N r(t)( (cid:101)V )| = |N h(G∗, (cid:101)V )( (cid:101)V )| (9) So, |It| − | (cid:101)V | = |Ct| + |C ′ t| + t−1 (cid:88) i=1 (|Ci| + |C ′ i|) ∈ O ((log nt−1 + nt) · ν1(G∗)) + O (log nt−1 · ν1(G∗)) By Eq. (3) and Eq. (7) (cid:16) ⊆ O log |N h(G∗, (cid:101)V )( (cid:101)V )| · ν1(G∗) (cid:17) Eq. (9) Since | (cid:101)V | ∈ O(ν1(G∗)), we can conclude |It| ∈ O (cid:16) (cid:17) ν(G∗) + log |N h(G∗, (cid:101)V )( (cid:101)V )| · ν1(G∗) ⊆ O (cid:16) max (cid:111) (cid:110) 1, log |N h(G∗, (cid:101)V )( (cid:101)V )| (cid:17) · ν1(G∗) in either scenario, as desired. The theorem then follows by taking a maximization over all (cid:101)V ∈ V( (cid:101)G). Adapting the proof for k > 1 By Theorem 16, νk(G∗) ≥ ⌈ν1(G∗)/k⌉. So, |I0| ∈ O(log k · νk(G∗)) via Lemma 17. The rest of the proof follows the same structure except that we use the bounded size guarantee of Theorem 9, which incurs an additional multiplicative log k factor. Polynomial running time By construction, the Algorithm 1 is deterministic. Furthermore, Algorithm 1 runs in polynomial time because: • Hop information and relevant nodes can be computed in polynomial time via breadth first search and maintaining suitable neighborhood information. • It is known that performing Meek rules to obtain essential graphs takes polynomial time ([WBL21a]). 
• Algorithm 1 makes at most two calls to SubsetSearch whenever the number of relevant nodes is squared. Each SubsetSearch call is known to run in polynomial time (Theorem 9). Since this happens each time the number of relevant nodes is squared, this can happen at most O(log n) times. 29 Theorem 20. Fix an essential graph E(G∗) with an unknown underlying ground truth DAG G∗. Given a set A of DAGs consistent with the given partial advice and intervention set bound k ≥ 1, there exists a deterministic polynomial time algorithm that computes an intervention set I adaptively such that EI(G∗) = G∗, and |I| has size 1. O(max{1, log max 2. O(max{1, log max when k = 1 and k > 1 respectively. (cid:101)G∈A ψ(G∗, (cid:101)G)} · ν1(G∗)) (cid:101)G∈A ψ(G∗, (cid:101)G)} · log k · νk(G∗)) Proof. Apply Theorem 14 while taking a maximization over all possible advice DAGs (cid:101)G consistent with the given partial advice. H Path essential graph In this section, we explain why our algorithm (Algorithm 1) is simply the classic “binary search with predic- tion”13 when the given essential graph E(G∗) is an undirected path on n vertices. So, another way to view our result is a generalization that works on essential graphs of arbitrary moral DAGs. When the given essential graph is a path E(G∗) on n vertices, we know that there are n possible DAGs in the Markov equivalence class where each DAG corresponds to choosing a single root node and having all edges pointing away from it. Observe that a verifying set of any DAG is then simply the root node as the set of of covered edges in any rooted tree are precisely the edges incident to the root. Therefore, given any (cid:101)G ∈ [G∗], we se that h(G∗, (cid:101)V ) measures the number of hops between the root of the advice DAG (cid:101)G and the root of the true DAG G∗. Furthermore, by Meek rule R1, whenever we intervene on a vertex u on the path, we will fully orient the “half” of the path that points away from the root while the subpath between u and the root remains unoriented (except the edge directly incident to u). So, one can see that Algorithm 1 is actually mimicking exponential search from the root of (cid:101)G towards the root of G∗. Then, once the root of G∗ lies within the r-hop neighborhood H, SubsetSearch uses O(log |V (H)|) interventions, which matches the number of queries required by binary search within a fixed interval over |V (H)| nodes. I Experiments In this section, we provide more details about our experiments. All experiments were run on a laptop with Apple M1 Pro chip and 16GB of memory. Source code imple- mentation and experimental scripts are available at https://github.com/cxjdavin/active-causal-structure-learning-with-advice. I.1 Experimental setup For experiments, we evaluated our advice algorithm on the synthetic graph instances of [WBL21b]14 on graph instances of sizes n = {16, 32, 64}. For each undirected chordal graph instance, we do the following: 1. Set m = 1000 as the number of advice DAGs that we will sample. 2. Use the uniform sampling algorithm of [WBL21b] to uniformly sample m advice DAGs (cid:101)G1, . . . , (cid:101)Gm. 3. Randomly select G∗ from one of (cid:101)G1, . . . , (cid:101)Gm. 4. For each (cid:101)G ∈ { (cid:101)G1, . . . , (cid:101)Gm}, • Compute a minimum verifying set (cid:101)V of (cid:101)G. • Define and compute ψ(G∗, (cid:101)V ) = (cid:16) (cid:12) (cid:12) (cid:12)ρ (cid:101)V , N h(G∗, (cid:101)V ) skel(E(G∗))( (cid:101)V ) (cid:17)(cid:12) (cid:12) (cid:12). 
I Experiments

In this section, we provide more details about our experiments. All experiments were run on a laptop with an Apple M1 Pro chip and 16GB of memory. Source code implementation and experimental scripts are available at https://github.com/cxjdavin/active-causal-structure-learning-with-advice.

I.1 Experimental setup

We evaluated our advice algorithm on the synthetic graph instances of [WBL21b] (see Appendix E of [WBL21b] for details about each class of synthetic graphs; instances are available at https://github.com/mwien/CliquePicking/tree/master/aaai_experiments) of sizes n ∈ {16, 32, 64}. For each undirected chordal graph instance, we do the following:

1. Set m = 1000 as the number of advice DAGs that we will sample.
2. Use the uniform sampling algorithm of [WBL21b] to uniformly sample m advice DAGs G̃1, . . . , G̃m.
3. Randomly select G∗ from one of G̃1, . . . , G̃m.
4. For each G̃ ∈ {G̃1, . . . , G̃m},
   • Compute a minimum verifying set Ṽ of G̃.
   • Define and compute ψ(G∗, Ṽ) = |ρ(Ṽ, N^{h(G∗,Ṽ)}_{skel(E(G∗))}(Ṽ))|.
   • Compute a verifying set using (E(G∗), G̃) as input to Algorithm 1.
5. Aggregate the sizes of the verifying sets used based on ψ(G∗, Ṽ) and compute the mean and standard deviations.
6. Compare against the verification number ν1(G∗) and the number of interventions used by the fully adaptive search (without advice, which we denote as “blind search” in the plots) of [CSB22].
7. Compute the empirical distribution of the quality measure amongst the m advice DAGs, then use standard sample complexity arguments for estimating distributions up to ε error in TV distance to compute a confidence interval within which the true cumulative probability density of all DAGs within the MEC lies (see, e.g., Theorem 1 of [Can20]). To be precise, it is known that for a discrete distribution P on k elements, when there are m ≥ max{k/ε², (2/ε²) · ln(2/δ)} uniform samples, the probability that the TV distance between the true distribution P and the empirical distribution P̂ is less than ε is at least 1 − δ. Since the upper bound on the domain size of the quality measure is the number of nodes n, by setting m = 1000 and δ = 0.01, we can compute ε = max{√(n/m), √((2/m) · ln(2/δ))} and conclude that the probability that the true cumulative probability density of all DAGs within the MEC lies within ε distance (clipped to be between 0 and 1) of the empirical distribution is at least 99%.

I.2 Experimental remarks

• The uniform sampling code of [WBL21b] is written in Julia and it uses a non-trivial amount of memory, which may make it unsuitable for running on a shared server with memory constraints.
• Note that ψ(G∗, Ṽ) ≤ ψ(G∗, G̃) = max_{Ṽ∈V(G̃)} |ρ(Ṽ, N^{h(G∗,Ṽ)}_{skel(E(G∗))}(Ṽ))|. We use ψ(G∗, Ṽ) as a proxy for ψ(G∗, G̃) because we do not know if there is an efficient way to compute the latter besides the naive (possibly exponential time) enumeration over all possible minimum verifying sets.
• We also experimented with an “unsafe” variant of Algorithm 1 where we ignore the second tweak of intervening one round before. In our synthetic experiments, both variants use a similar number of interventions.
• We do not plot the theoretical upper bounds O(log ψ(G∗, Ṽ) · ν1(G∗)) or O(log n · ν1(G∗)) because these values are significantly higher than the other curves and result in “squashed” (and less interesting/interpretable) plots.
• Even when ψ(G∗, Ṽ) = 0, there could be cases where [CSB22] uses more interventions than ν1(G∗). For example, consider Fig. 8 with G∗ = G2 and G̃ = G1. After intervening on Ṽ = {b, c}, the entire graph will be oriented, so ψ(G∗, Ṽ) = 0 while ν1(G∗) = 1 < 2 = |Ṽ|. Fortunately, Theorem 10 guarantees that |Ṽ| ≤ 2 · ν1(G∗).
• Note that the error bar may appear “lower” than the verification number even though all intervention sizes are at least as large as the verification number. For instance, suppose ν1(G∗) = 6 and we used (6, 6, 7) interventions on three different G̃'s, each with ψ(G∗, Ṽ) = 0. In this case, the mean is 6.333... while the standard deviation is 0.471..., so the error bar will display an interval of [5.86..., 6.80...] whose lower end is below ν1(G∗) = 6.
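As a quick illustration of the sample complexity bound used in step 7, the snippet below evaluates the TV-distance tolerance ε for the settings reported above. It is only a sketch of the stated formula; the function and variable names are ours.

```python
import math

def tv_error_tolerance(n, m=1000, delta=0.01):
    """Tolerance eps such that, with m uniform samples from a distribution on at
    most n outcomes, the empirical distribution is within eps of the true one in
    TV distance with probability at least 1 - delta."""
    return max(math.sqrt(n / m), math.sqrt((2 / m) * math.log(2 / delta)))

for n in (16, 32, 64):
    print(n, round(tv_error_tolerance(n), 3))
# n = 16 -> eps ~ 0.126, n = 32 -> eps ~ 0.179, n = 64 -> eps ~ 0.253
```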
I.3 All experimental plots

For details about the synthetic graph classes, see Appendix E of [WBL21b]. Each experimental plot is for one of the synthetic graphs G∗, with respect to 1000 ≪ |[G∗]| uniformly sampled advice DAGs G̃ from the MEC [G∗]. The solid lines indicate the number of atomic interventions used while the dotted lines indicate the empirical cumulative probability density of G̃. The true cumulative probability density lies within the shaded area with probability at least 0.99.

Figure 9: Subtree-logn synthetic graphs. Panels (a)–(c): n = 16, 32, 64.
Figure 10: Subtree-2logn synthetic graphs. Panels (a)–(c): n = 16, 32, 64.
Figure 11: Subtree-sqrtn synthetic graphs. Panels (a)–(c): n = 16, 32, 64.
Figure 12: Interval synthetic graphs. Panels (a)–(c): n = 16, 32, 64.
Figure 13: peo-2 synthetic graphs. Panels (a)–(c): n = 16, 32, 64.
Figure 14: peo-4 synthetic graphs. Panels (a)–(c): n = 16, 32, 64.
Figure 15: Thickening-3 synthetic graphs. Panels (a)–(c): n = 16, 32, 64.
Figure 16: Thickening-logn synthetic graphs. Panels (a)–(c): n = 16, 32, 64.
Figure 17: Thickening-sqrtn synthetic graphs. Panels (a)–(c): n = 16, 32, 64.
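The plot layout described above (solid intervention-count curves with error bars, a dotted empirical CDF, and a shaded confidence band of width ε) can be assembled in a few lines of matplotlib. The sketch below is purely illustrative, with made-up variable names; it is not the repository's plotting script.

```python
import numpy as np
import matplotlib.pyplot as plt

def plot_instance(psi_values, mean_interventions, std_interventions,
                  empirical_cdf, eps, verification_number):
    """All array arguments are aligned per value of the quality measure psi."""
    cdf = np.asarray(empirical_cdf)
    fig, ax1 = plt.subplots()
    ax1.errorbar(psi_values, mean_interventions, yerr=std_interventions,
                 label="advice search")                      # solid line + error bars
    ax1.axhline(verification_number, color="gray", label="verification number")
    ax2 = ax1.twinx()
    ax2.plot(psi_values, cdf, linestyle=":", label="empirical CDF")  # dotted line
    ax2.fill_between(psi_values, np.clip(cdf - eps, 0, 1),
                     np.clip(cdf + eps, 0, 1), alpha=0.3)            # shaded band
    ax1.set_xlabel("quality measure psi")
    ax1.set_ylabel("number of atomic interventions")
    ax2.set_ylabel("cumulative probability")
    fig.tight_layout()
    return fig
```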
synthetic_cpt
4
Prioritized_Training_on_Points_that_are_Learnable_Worth_Learning_and_Not_Yet_Learnt.pdf
arXiv:2206.07137v3 [cs.LG] 26 Sep 2022

Prioritized Training on Points that are Learnable, Worth Learning, and Not Yet Learnt

Sören Mindermann * 1 Jan Brauner * 1 Muhammed Razzak * 1 Mrinank Sharma * 2 Andreas Kirsch 1 Winnie Xu 3 4 Benedikt Höltgen 1 Aidan N. Gomez 3 1 Adrien Morisot 3 Sebastian Farquhar 1 Yarin Gal 1

Abstract

Training on web-scale data can take months. But most computation and time is wasted on redundant and noisy points that are already learnt or not learnable. To accelerate training, we introduce Reducible Holdout Loss Selection (RHO-LOSS), a simple but principled technique which selects approximately those points for training that most reduce the model's generalization loss. As a result, RHO-LOSS mitigates the weaknesses of existing data selection methods: techniques from the optimization literature typically select "hard" (e.g. high loss) points, but such points are often noisy (not learnable) or less task-relevant. Conversely, curriculum learning prioritizes "easy" points, but such points need not be trained on once learnt. In contrast, RHO-LOSS selects points that are learnable, worth learning, and not yet learnt. RHO-LOSS trains in far fewer steps than prior art, improves accuracy, and speeds up training on a wide range of datasets, hyperparameters, and architectures (MLPs, CNNs, and BERT). On the large web-scraped image dataset Clothing-1M, RHO-LOSS trains in 18x fewer steps and reaches 2% higher final accuracy than uniform data shuffling.

Figure 1: Speedup on large-scale classification of web-scraped data (Clothing-1M). RHO-LOSS trains all architectures with fewer gradient steps than standard uniform data selection (i.e. shuffling), helping reduce training time. Thin lines: ResNet-50, MobileNet v2, DenseNet121, Inception v3, GoogleNet, mean across seeds. Bold lines: mean across all architectures.

Code: https://github.com/OATML/RHO-Loss
*Equal contribution. 1OATML, Department of Computer Science, University of Oxford. 2Department of Statistics, University of Oxford. 3Cohere. 4University of Toronto, performed at Cohere. Correspondence to: Sören Mindermann <[email protected]>.
Proceedings of the 39th International Conference on Machine Learning, Baltimore, Maryland, USA, PMLR 162, 2022. Copyright 2022 by the author(s).

1. Introduction

State-of-the-art models such as GPT-3 (Brown et al., 2020), CLIP (Radford et al., 2021), and ViT (Dosovitskiy et al., 2021) achieve remarkable results by training on vast amounts of web-scraped data. But despite intense parallelization, training such a model takes weeks or months (Radford et al., 2021; Chowdhery et al., 2022). Even practitioners who work with smaller models face slow development cycles, due to numerous iterations of algorithm design and hyperparameter selection. As a result, the total time required for training is a core constraint in the development of such deep learning models.

If it further sped up training, practitioners with sufficient resources would use much larger batches and distribute them across many more machines (Anil et al., 2018). However, this has rapidly diminishing returns (LeCun et al., 2012), to a point where adding machines does not reduce training time (McCandlish et al., 2018; Anil et al., 2018)—see e.g. GPT-3 and PaLM (Chowdhery et al., 2022).
Additional machines can, however, still speed up training by filtering out less useful samples (Alain et al., 2015). Many web-scraped samples are noisy, i.e. their label is incorrect or inherently ambiguous. For example, the text associated with a web-scraped image is rarely an accurate description of the image. Other samples are learnt quickly and are then redundant. Redundant samples are commonly part of object classes that are over-represented in web-scraped data (Tian et al., 2021) and they can often be left out without losing performance. Given that web-scraped data is plentiful—often enough to finish training in a single epoch (Komatsuzaki, 2019; Brown et al., 2020)—one can afford to skip less useful points.

However, there is no consensus on which datapoints are the most useful. Some works, including curriculum learning, suggest prioritizing easy points with low label noise before training on all points equally (Bengio et al., 2009). While this approach may improve convergence and generalization, it lacks a mechanism to skip points that are already learnt (redundant). Other works instead suggest training on points that are hard for the model, thereby avoiding redundant points, whose loss cannot be further reduced. Online batch selection methods (Loshchilov & Hutter, 2015; Katharopoulos & Fleuret, 2018; Jiang et al., 2019; Schaul et al., 2015) do so by selecting points with high loss or high gradient norm.

We show two failure modes of prioritising hard examples. Firstly, in real-world noisy datasets, high loss examples may be mislabelled or ambiguous. Indeed, in controlled experiments, points selected by high loss or gradient norm are overwhelmingly those with noise-corrupted labels. Our results show that this failure mode degrades performance severely. More subtly, we show that some samples are hard because they are outliers—points with unusual features that are less likely to appear at test time. For the aim of reducing test loss, such points are less worth learning.

To overcome these limitations, we introduce reducible holdout loss selection (RHO-LOSS). We propose a selection function grounded in probabilistic modelling that quantifies by how much each point would reduce the generalization loss if we were to train on it, without actually training on it. We show that optimal points for reducing holdout loss are non-noisy, non-redundant, and task-relevant. To approximate optimal selection, we derive an efficient and easy to implement selection function: the reducible holdout loss.

We explore RHO-LOSS in extensive experiments on 7 datasets. We evaluate the reduction in required training steps compared to uniform sampling and state-of-the-art batch selection methods. Our evaluation includes Clothing-1M, the main large benchmark with noisy, web-scraped labels, matching our main application. RHO-LOSS reaches target accuracy in 18x fewer steps than uniform selection and achieves 2% higher final accuracy (Fig. 1). Further, RHO-LOSS consistently outperforms prior art and speeds up training across datasets, modalities, architectures, and hyperparameter choices. Explaining this, we show that methods selecting “hard” points prioritize noisy and less relevant examples.
In contrast, RHO-LOSS chooses low- noise, task-relevant, non-redundant points—points that are learnable, worth learning, and not yet learnt. 2. Background: Online Batch Selection Consider a model p(y | x; θ) with parameters θ training on data D = {(xi, yi)}n i=1 using stochastic gradient descent (SGD). At each training step t, we load a batch bt of size nb from D. In online batch selection (Loshchilov & Hutter, 2015), we uniformly pre-sample a larger batch Bt of size nB > nb. Then, we construct a smaller batch bt that consists of the top-ranking nb points in Bt ranked by a label-aware selection function S(xi, yi). We perform a gradient step to minimize a mini-batch loss L(yi, p(yi | xi; θ)) summed over i ∈ bt. The next large batch Bt+1 is then pre-sampled from D without replacement of previously sampled points (i.e. random shuffling: replacement when the next epoch starts). 3. Reducible Holdout Loss Selection Previous online batch selection methods, such as loss or gradient norm selection, aim to select points that, if we were to train on them, would minimize the training set loss. (Loshchilov & Hutter, 2015; Katharopoulos & Fleuret, 2018; Kawaguchi & Lu, 2020; Alain et al., 2015). Instead, we aim to select points that minimize the loss on a holdout set. It would be too expensive to naively train on every candidate point and evaluate the holdout loss each time. In this section, we show how to (approximately) find the points that would most reduce the holdout loss if we were to train the current model on them, without actually training on them. For simplicity, we first assume only one point (x, y) ∈ Bt is selected for training at each time step t (we discuss selection of multiple points below). p(y(cid:48) | x(cid:48); Dt) is the predictive distribution of the current model, where Dt is the sequence of data the model was trained on before training step t. i=1, written as xho and yho for brevity, is Dho = {(xho a holdout set drawn from the same data-generating distri- bution ptrue(x(cid:48), y(cid:48)) as the training set D. We aim to acquire the point (x, y) ∈ Bt that, if we were to train on it, would minimize the negative log-likelihood/cross-entropy loss on the holdout set: i )}nho i , yho arg min (x,y)∈Bt − log p(yho | xho; Dt ∪ (x, y)). (1) For a model using a point estimate of θ (such as an MLE or MAP), rather than a distribution over θ, the Prioritized Training on Points that are Learnable, Worth Learning, and Not Yet Learnt holdout loss factorises and (up to a constant factor) forms a Monte Carlo approximation of the expected loss under ptrue: Eptrue(x(cid:48),y(cid:48))[L[y(cid:48) | x(cid:48); Dt ∪ (x, y)]] ≈ 1 | xho i ; Dt ∪ (x, y)], where L[·] de- |Dho| notes the cross-entropy loss: L[y | x] := − log p(y | x). L[yho i i )∈Dho i ,yho (cid:80) (xho Deriving a tractable selection function. We now derive a tractable expression for the term in Eq. (1) that does not require us to train on each candidate point (x, y) ∈ Bt and then evaluate the loss on Dho. To make our claims precise and our assumptions transparent, we use the language of Bayesian probability theory. We treat model parameters as a random variable with prior p(θ) and infer a posterior p(θ|Dt) using the already-seen train- ing data Dt. The model has a predictive distribution p(y|x, Dt) = (cid:82) θ p(y|x, θ) p(θ|Dt)dθ. When using a point estimate of θ, the predictive distribution can be written as an integral with respect to a Dirac delta. 
Using Bayes rule and the conditional independence p(yi | xi, xj; Dt) = p(yi | xi; Dt), we can derive a tractable selection function from Eq. (1). For readability, we switch the sign of the selection function, later changing the mini- mization to a maximization. log p(yho | xho; Dt ∪ (x, y)) (Section 4.1 and Appendix D). We term L[y | x; Dho] the irreducible holdout loss (IL) since it is the remaining loss on point (x, y) ∈ D after training on the holdout set Dho; in the limit of Dho being large, it would be the lowest loss that the model can achieve without training on (x, y). Accord- ingly, we name our approximation of Eq. (2) the reducible holdout loss—the difference between the training loss and the irreducible holdout loss (IL). Our method still requires us to train a model on a holdout set, but a final approximation greatly reduces that cost. We can efficiently compute the IL with an “irreducible loss model" (IL model) that is smaller than the target model and has low accuracy (Approximation 3). We show this and explain it in Sections 4.1, 4.2, and 4.3. Counterintuitively, the reducible holdout loss can therefore be negative. Addi- tionally, one IL model can be reused for many target model runs, amortizing its cost (Section 4.2). For example, we trained all 40 seeds of 5 target architectures in Fig. 1 using a single ResNet18 IL model. Further, this model trained for 37x fewer steps than each target model (reaching only 62% accuracy). Section 5 details further possible efficiency improvements. In summary, selecting a point that minimizes the holdout loss in Eq. (1), for a model trained on Dt, can be approxi- mated with the following easy-to-compute objective: Bayes rule Reducible holdout loss selection (RHO-LOSS) = log = log p(y | x; xho, yho, Dt) p(yho | xho, x; Dt) p(y | x, xho; Dt) p(y | x; yho, xho, Dt) p(yho | xho; Dt) p(y | x; Dt) conditional independence ∝ L[y | x; Dt] − L[y | x; Dho, Dt], (2) where in the final line, we dropped terms independent of (x, y), rearranged, and applied the definition of L[·]. As exact Bayesian inference (conditioning on Dt or Dho) is intractable in neural networks (Blundell et al., 2015), we fit the models with SGD instead (Approximation 1). We study the impact of this approximation in Section 4.1. The first term, L[y | x; Dt], is then the training loss on the point (x, y) using the current model trained on Dt. The second term, L[y | x; Dho, Dt], is the loss of a model trained on Dt and Dho. Although the selection function in Eq. (2) is tractable, it is still somewhat expensive to compute, as both terms must be updated after each acquisition of a new point. However, we can approximate the second term with a model trained only on the holdout dataset, L[y | x; Dho, Dt] ≈ L[y | x; Dho] (Approximation 2). This approximation saves a lot of com- pute: it is now sufficient to compute the term once before the first epoch of training. Later on, we show that this approximation empirically does not hurt performance on any tested dataset and even has some desired properties arg max (x,y)∈Bt (cid:122) L[y | x; Dt] (cid:123)(cid:122) (cid:125) (cid:124) training loss reducible holdout loss (cid:125)(cid:124) − L[y | x; Dho] (cid:125) (cid:123)(cid:122) (cid:124) irreducible holdout loss (IL) (cid:123) (3) Although we required additional data Dho, this is not essen- tial for large (Section 4.0) nor small (Section 4.2) datasets. Understanding reducible loss. 
We now provide intu- loss selection (RHO- ition on why reducible holdout LOSS) avoids redundant, noisy, and less relevant points. i) Redundant points. We call a training point redundant when the model has already learnt it, i.e. its training loss cannot be further reduced. Since redundant points have low training loss, and the reducible loss is always less than the training loss (Eq. (3)), such points have low reducible loss and are not selected. And if the model forgets them, they ii) Noisy points. While are revisited in the next epoch. prior methods select based on high training loss (or gradient norm), not all points with high loss are informative—some may have an ambiguous or incorrect (i.e. noisy) label. The labels of such points cannot be predicted using the hold- out set (Chen et al., 2019). Such points have high IL and, consequently, low reducible loss. These noisy points are less likely to be selected compared to equivalent points with Prioritized Training on Points that are Learnable, Worth Learning, and Not Yet Learnt Algorithm 1 Reducible holdout loss selection (RHO-LOSS) 1: Input: Small model p(y | x; Dho) trained on a hold- out set Dho, batch size nb, large batch size nB > nb, learning rate η. 2: for (xi, yi) in training set do 3: IrreducibleLoss[i] ← L[yi | xi; Dho] Initialize parameters θ0 and t = 0 4: for t = 0, 1, . . . do 5: 6: Randomly select a large batch Bt of size nB. ∀i ∈ Bt, compute Loss[i], the train loss of point i given parameters θt ∀i ∈ Bt, compute RHOLOSS[i]← Loss[i]− IrreducibleLoss[i] bt ← top-nb samples in Bt in terms of RHOLOSS. gt ← mini-batch gradient on bt using parameters θt θt+1 ← θt − ηgt 7: 8: 9: 10: less noise. iii) Less relevant points. Loss-based selection has an additional pitfall. The training loss is likely higher for outliers in input space—values of x far from most of the training data, in regions with low input density under ptrue(x). Points with low ptrue(x) should not be prioritized, all else equal. Consider an ‘outlier’ (x, y) and a non-outlier (x(cid:48), y(cid:48)), with ptrue(x) < ptrue(x(cid:48)) but equal training loss L[y|x; Dt] = L[y(cid:48)|x(cid:48); Dt]. As the holdout set Dho is also drawn from ptrue, Dho will contain fewer points from the re- gion around x in input space compared to the region around x(cid:48). Thus, training on (x, y) is likely to reduce the hold- out loss (Eq. (1)) less, and so we prefer to train on the non-outlier (x(cid:48), y(cid:48)). In the specific sense described, (x, y) is thus less relevant to the holdout set. As desired, RHO-LOSS deprioritizes (x, y): since Dho contains few points from the region around x, the IL of (x, y) will be large. In short, RHO-LOSS deprioritizes points that are redundant (low training loss), noisy (high IL), or less relevant to the holdout set (high IL). That is, RHO-LOSS prioritizes points that are not yet learnt, learnable, and worth learning. We provide empirical evidence for these claims in Section 4.3. See Algorithm 1 for the implementation of RHO-LOSS. Selecting multiple points concurrently. We showed which point is optimal when selecting a single point (x, y). When selecting an entire batch bt, we select the points with the top-nb scores from the randomly pre-sampled set Bt. This is nearly optimal when assuming that each point has little effect on the score of other points, which is often used as a simplifying assumption in active learning (Kirsch et al., 2019). 
This assumption is much more reasonable in our case than in active learning because model predictions are not changed much by a single gradient step on one mini-batch. Simple parallelized selection. For large-scale neural net- work training, practitioners with sufficient resources would use many more machines if it further sped up training (Anil et al., 2018). However, as more workers are added in syn- chronous or asynchronous gradient descent, the returns di- minish to a point where adding more workers does not fur- ther improve wall clock time (Anil et al., 2018; McCandlish et al., 2018). For example, there are rapidly diminishing returns for using larger batch sizes or distributing a given batch across more workers, for multiple reasons (McCan- dlish et al., 2018; Keskar et al., 2016). The same holds for distributing the model across more workers along its width or depth dimension (Rasley et al., 2020; Shoeybi et al., 2019; Huang et al., 2019). However, we can circumvent these diminishing returns by adding a new dimension of parallelization, namely, for data selection. Since parallel forward passes do not suffer from such di- minishing returns, one can use extra workers to evaluate training losses in parallel (Alain et al., 2015). The theoreti- cal runtime speedup can be understood as follows. The cost per training step of computing the selection function on Bt is nB times as much as the cost of the forward-backward 3nb pass needed to train on bt since a forward pass requires at least 3x less computation than a forward-backward pass (Jouppi et al., 2017). One can reduce the time for the selec- tion phase almost arbitrarily by adding more workers that compute training losses using a copy of the model being trained. The limit is reached when the time for selection is dominated by the communication of parameter updates to workers. More sophisticated parallelization strategies allow reducing the time overhead even further (Section 5). To avoid assumptions about the particular strategy used, we report experiment results in terms of the required number of training epochs. 4. Experiments We evaluate our selection method on several datasets (both in controlled environments and real-world conditions) and show significant speedups compared to prior art, in the pro- cess shedding light on the properties of different selection functions. Recall that our setting assumes training time is a bottleneck but data is abundant—more than we can train on (see Bottou & LeCun (2004)). This is common e.g. for web-scraped data where state-of-the-art performance is often reached in less than half of one epoch (Komatsuzaki, 2019; Brown et al., 2020). As data is abundant, we can set aside a holdout set for training the IL model with little to no downside. For the large Clothing-1M dataset, we implement RHO-LOSS by training the IL model on 10% of the training data, while all baselines are trained on the full 100% of the training data. For the smaller datasets, we simulate abundance of data by Prioritized Training on Points that are Learnable, Worth Learning, and Not Yet Learnt reserving a holdout set and training all methods only on the remaining data. However, RHO-LOSS also works on small datasets without additional data by double-using the training set (Section 4.2). Datasets. We evaluate on 7 datasets: 1) QMNIST (Yadav & Bottou, 2019) extends MNIST (LeCun et al., 1998) with 50k extra images which we use as the holdout set. 
2) On CIFAR-10 (Krizhevsky & Hinton, 2009) we train on half of the training set and use the other half as a holdout to train the irreducible loss (IL) model. 3) CIFAR-100: same as CIFAR- 10. 4) CINIC-10 (Darlow et al., 2018) has 4.5x more images than CIFAR-100 and includes a holdout set and a test set with 90k images each. 5) Clothing-1M (Xiao et al., 2015), which contains over 1 million 256x256-resolution clothing images from 14 classes. The dataset is fully web-scraped—a key application area of our work—and is the most widely accepted benchmark for image recognition with noisy labels (Algan & Ulusoy, 2021). We use the whole training set for training and reuse 10% of it to train the IL model. We further evaluate on two NLP datasets from GLUE (Wang et al., 2018): 6) CoLA (grammatical acceptability) and 7) SST-2 (sentiment). We split their training sets as for CIFAR. Baselines. Aside from uniform sampling (without replace- ment, i.e. random shuffling), we also compare to selection functions that have achieved competitive performance in online batch selection recently: the (training) loss, as im- plemented by Kawaguchi & Lu (2020), gradient norm, and gradient norm with importance sampling (called gradient norm IS in our figures), as implemented by Katharopoulos & Fleuret (2018). We also compare to the core-set method Selection-via-Proxy (SVP) that selects data offline before training (Coleman et al., 2020). We report results using maximum entropy SVP and select with the best-performing model, ResNet18. We further compare to four baselines from active learning, shown in Appendix G as they assume labels are unobserved. Finally, we include selection using the negative IL (see Eq. 3) to test if it is sufficient to only skip noisy and less relevant but not redundant points. Models and hyperparameters. To show our method needs no tuning, we use the PyTorch default hyperparam- eters (with the AdamW optimizer (Loshchilov & Hutter, 2017)) and nb = 0.1. We test many additional hyperpa- nB rameter settings in Figs. 2 (row 5) and 8. We test various In all other fig- architectures in Figs. 1 and 2 (row 4). ures, we use a 3 layer MLP for experiments on QMNIST, a ResNet-18 adapted for small images for CIFAR-10/CIFAR- 100/CINIC-10, and a ResNet-50 for Clothing-1M. All mod- els for Clothing-1M are pre-trained on ImageNet (standard for this dataset (Algan & Ulusoy, 2021)) and the IL model is always a ResNet-18. For the NLP datasets, we use a pre- trained ALBERT v2 (Lan et al., 2019). We always use the Table 1: Spearman’s rank correlation of rankings of data points by selection functions that are increasingly less faith- ful approximations of Eq. (2), compared to the most faithful approximation. Approximations added from left to right. Mean across 3 seeds. Non- Bayesian Not converged Not updating IL model Small IL model Rank correlation 0.75 0.76 0.63 0.51 IL model checkpoint with lowest validation loss (not highest accuracy); this performs best. Details in Appendix B. Evaluation. We measure speedup in terms of the number of epochs needed to reach a given test accuracy. We measure epochs needed, rather than wall clock time, as our focus is on evaluating a new selection function, not an entire training pipeline. Wall clock time depends primarily on the hard- ware used and implementation details that are beyond our scope. Most importantly, data selection is amenable to par- allelization beyond standard data parallelism as discussed in Section 3. 4.1. 
Impact of Approximations In Section 3, we introduced a function for selecting exactly the points that most reduce the model’s loss on a holdout set. To make this selection function efficient for deep neural networks, we made several approximations. Here, we study how these approximations affect the points selected, by successively introducing one approximation after the other. Because the exact selection function (Eq. (2)) is intractable, we start with a close (and expensive) approximation as the gold standard (Approximation 0). To make Approxima- tion 0 feasible, the experiments are conducted on an easy dataset—QMNIST (with 10% uniform label noise and data duplication to mimic the properties of web-scraped data). We then successively introduce the Approximations 1, 2, and 3 described in Section 3. To assess the impact of each ap- proximation, we train a model without and with the approxi- mations, and then compute the rank correlation (Spearman’s correlation coefficient) of the selection function evaluated on each batch Bt. Across the first epoch, we present the mean of the rank correlations. Since each approximation se- lects different data, the corresponding models become more different over time; this divergence causes some of the ob- served difference in the points they select. See Appendix E for details. Approximation 0. To get as close as possible to the Bayesian inference/conditioning used in Eq. (2), we use a deep ensem- ble of 5 neural networks and train them to convergence after every time step t on the acquired dataset bt ∪ Dt (Wilson & Izmailov, 2020). Prioritized Training on Points that are Learnable, Worth Learning, and Not Yet Learnt Figure 2: The irreducible loss model can be small, trained with no holdout data, and reused across target architec- tures and hyperparameters. Here, we use clean datasets, where speedups are smallest. The x-axis shows speedup, i.e. after how many fewer epochs RHO-LOSS exceeds the highest accuracy uniform selection achieves within 100 epochs. Row 1 uses a ResNet18 as irreducible loss model. All other rows instead use a small, cheap CNN. Each dot shows an experiment with a given combination of irreducible loss model and target model (mean across 2-3 seeds for all but the last row). Approximation 1: SGD instead of Bayesian infer- ence/conditioning. Approximation 0 is a close approxi- mation of Eq. (2), but training an ensemble to convergence at every step t is far too expensive in practice. Starting from this gold-standard, we introduce two stronger approx- imations (1a and 1b) to move to standard neural network fitting with AdamW. 1a) First, we replace the ensemble with a single model, while still training to convergence at each time step. The Spearman’s coefficient between this approximation and Approximation 0 is 0.75, suggesting similar points are selected (“Non-Bayesian” in Table 1). 1b) Next, we only take one gradient step on each new batch bt. The Spearman’s coefficient, when comparing this to Approximation 0, is 0.76 (“Not Converged" in Table 1). Approximation 2. Not updating the IL model on the acquired data Dt. Second, we save compute by approximating L[y | x; Dt, Dho] with L[y | x; Dho]. The points selected are still similar to Approximation 0 (Spearman’s coefficient 0.63, “Not updating IL model” in Table 1). This approximation also performs well on other datasets (Appendix D). Approximation 3: Small IL model. 
Lastly, we use a model with 256 hidden units instead of 512 (4x fewer parameters) as the IL model and see again that similar points are selected (Spearman’s coefficient 0.51 ). We study cheaper IL models in other forms and datasets in the next section. 4.2. Cheap Irreducible Loss Models & Robustness RHO-LOSS requires training an IL model on a holdout set, which poses additional costs. Here, we show how to min- imize these costs and amortize them across many training runs of target models. The same experiments also show the robustness of RHO-LOSS across architectures and hyperpa- rameter settings. To fit our computational budget, we per- form these experiments on moderate-sized clean benchmark datasets although RHO-LOSS produces greater speedups on noisy or redundant web-scraped data (see Section 4.4). Irreducible loss models can be small and cheap. In our default setting (Fig. 2, row 1), both the target model and IL model have the same architecture (ResNet-18). In rows 2 and below, we instead used a small CNN similar to LeNet as the IL model (LeCun et al., 1989). It has 21x fewer parameters and requires 29x fewer FLOP per forward pass than the ResNet-18. The smaller IL model accelerates training as much or more than the larger model, even though its final accuracy is far lower than the target ResNet- 18 (11.5% lower on CIFAR-10, 7% on CIFAR-100, and 8.1% on CINIC-10). We examine in Section 4.3 why this useful result holds. Irreducible loss models without holdout data. Web- scraped datasets are often so large that even a small fraction of the overall data can be sufficient to train the IL model. E.g., in our experiments on Clothing-1M (Fig. 1), the hold- out set is only 10% as large as the main train set. Addi- tionally, we can train the IL model without any holdout data (Fig. 2, row 3). We split the training set D into two halves and train an IL model on each half (still using small IL models). Each model computes the IL for the half of D that it was not trained on. Training two IL models costs no additional compute since each model is trained on half as much data compared to the default settings. Irreducible loss models can be reused to train differ- ent target architectures. We find that a single small 0No speedup3x6xRHO-LOSS speedup over uniform selectionDefaultSmall irreducible loss modelNo holdout setArchitecture transferHyperparameter transferDatasetCIFAR10CIFAR100CINIC10 Prioritized Training on Points that are Learnable, Worth Learning, and Not Yet Learnt Figure 3: Properties of RHO-LOSS and other methods. RHO-LOSS prioritizes points that are non-noisy, task-relevant, and non-redundant—even when the irreducible loss (IL) model is a small CNN. In contrast, loss and gradient norm prioritize noisy and less relevant points (while also avoiding redundant points). Left. Proportion of selected points with corrupted labels. We added 10% uniform label noise, i.e., we randomly switched each point’s label with 10% probability. Middle. Proportion of selected points from low relevance classes on CIFAR100 Relevance dataset. Right. Proportion of selected points that are already classified correctly, which is a proxy for redundancy. Mean over 150 epochs of training and 2-3 seeds. CNN IL model accelerates the training of 7 target ar- chitectures (Fig. 2, row 4): VGG11 (with batchnorm), GoogleNet, Resnet34, Resnet50, Densenet121, MobileNet- v2, Inception-v3. RHO-LOSS does not accelerate training on CIFAR-10 for VGG11, which is also the architecture on which uniform training performs the worst; i.e. 
RHO-LOSS empirically does not “miss” a good architecture. Not only is RHO-LOSS robust to architectures choice, a single IL model can also be reused by many practitioners who use different architectures (as we did in Fig. 1). Irreducible loss models can be reused to train many tar- gets in a hyperparameter sweep. We find that a single small CNN accelerates the training of ResNet-18 target mod- els across a hyperparameter grid search (Fig. 2, last row). We vary the batch size (160, 320, 960), learning rate (0.0001, 0.001, 0.01), and weight decay coefficient (0.001, 0.01, 0.1). RHO-LOSS speeds up training compared to uniform on nearly all target hyperparameters. The few settings in which it doesn’t speed up training are also settings in which uni- form training performs very poorly (< 30% accuracy on CIFAR-100, < 80% on CIFAR-10). To understand this robustness, we investigate the properties of points selected by RHO-LOSS, when the target and IL model architectures are identical, and when the IL model is smaller. Explaining the robustness, we find that, in both cases, RHO-LOSS prioritizes points that are non-noisy, task- relevant, and not redundant. We also investigate the proper- ties of points selected by prior art. Noisy points. We investigate how often different methods select noisy points by uniformly corrupting the labels for 10% of points and tracking what proportion of selected points are corrupted. RHO-LOSS deprioritizes noisy points for both IL models (Fig. 3). We observe a failure mode of the widely-used loss and gradient norm selection functions: they select far more noisy points than uniform. These methods also severely drop in accuracy when the noise follows the class confusion matrix (Rolnick et al., 2017) and when we add ambiguous images (Mukhoti et al., 2021) (Appendix C). Together, this suggests that noisy points have high loss (and gradient norm), but also high IL and thus low reducible loss. Their IL is high even when the IL model is small as noisy labels cannot be predicted well using the holdout set. 4.3. Properties of RHO-LOSS & Other Selection Functions We established that RHO-LOSS can accelerate the training of various target architectures with a single IL model, even if the IL model is smaller and has considerably lower ac- curacy than the target models (Section 4.2). This suggests robustness to target-IL architecture mismatches. Relevant points. We study how often less relevant points are selected by creating the CIFAR100 Relevance dataset, in which 80% of the data comes from 20% of the classes. This mimics natural distributions of NLP and vision data where most data comes from few object classes, topics, or words (Baayen, 2001; Tian et al., 2021). Concretely, we retain all examples from 20 randomly chosen “high relevance” classes CIFAR10CIFAR100CINIC1002040Proportion of selected pointsalready classified correctly (%)Redundant PointsCIFAR10CIFAR100CINIC100102030Proportion of selected pointswith corrupted labels (%)Noisy PointsCIFAR100 Relevance02040Proportion of selected points less relevant (%)Less Relevant PointsSelection MethodReducible Loss (Ours)Reducible Loss (Ours)Small IL ModelUniform SamplingGradient NormLoss Prioritized Training on Points that are Learnable, Worth Learning, and Not Yet Learnt Table 2: Epochs required to reach a given target test accuracy (final accuracy in parentheses). Figs. 4 and 5 (Appendix) show all training curves. Some datasets have 10% uniform label noise added. Results averaged across 2-4 seeds. Best performance in bold. 
RHO-LOSS performs best in both epochs required and final accuracy. NR indicates that the target accuracy was not reached. ∗On CIFAR10/100, CoLA, and SST-2, only half of the data is used for training (Section 4.0). Dataset Target Acc Clothing-1M CIFAR10∗ CIFAR10∗ (Label Noise) CIFAR100∗ CIFAR100∗ (Label Noise) CINIC10 CINIC10 (Label Noise) SST2∗ CoLA∗ 60.0% 69.0% 80.0% 87.5% 75.0% 85.0% 40.0% 52.5% 40.0% 47.5% 70.0% 77.5% 60.0% 67.5% 82.5% 90.0% 75.0% 80.0% Number of epochs method needs to reach target accuracy ↓ (Final accuracy in parentheses) Train Loss Grad Norm Grad Norm IS 8 NR (65%) 81 129 (90%) NR NR (28%) 138 NR (42%) NR NR (4%) NR NR (36%) NR NR (16%) 8 NR (87%) 8 NR (78%) 13 NR (64%) NR NR (61%) NR NR (23%) 139 NR (42%) NR NR (4%) NR NR (50%) NR NR (16%) 2 4 (91%) 6 NR (79%) 2 9 (70%) 57 139 (89%) 57 NR (84%) 71 132 (55%) 94 142 (48%) 34 64 (82%) 22 35 (79%) 3 NR (89.7%) 16 NR (78%) SVP NR NR (55%) NR NR (55%) NR NR (48%) NR NR (18%) NR NR (14%) NR NR (39%) NR NR (39%) NR NR (66%) NR NR (62%) Irred Loss NR NR (48%) NR NR (60%) NR NR (62%) 93 NR (43%) 89 NR (43%) NR NR (60%) 30 NR (64%) 7 NR (83%) NR NR (69%) Uniform 2 30 (70%) 79 NR (87%) 62 NR (85%) 65 133 (54%) 79 116 (50%) 38 97 (80%) 24 38 (78%) 1 6 (90%) 34 NR (76%) RHO-LOSS 1 2 (72%) 39 65 (91%) 27 49 (91%) 48 77 (61%) 49 65 (60%) 27 38 (83%) 13 17 (82%) 1 3 (92%) 3 39 (80%) but only 6% of the examples from other, “low relevance” classes. Intuitively, since the high relevance classes have higher ptrue(x) and are 17x more likely to appear at test time, improving their accuracy improves the test accuracy much more than improving the accuracy of less relevant classes. els of noise and redundancy. Clothing-1M is such a dataset (Section 4.0). We also include smaller, clean benchmarks from vision (CIFAR-10, CIFAR-100, CINIC-10) and NLP (CoLA, SST-2). Finally, we study if selection functions are robust to the controlled addition of label noise. The loss and gradient norm methods select more points than uniform selection from the low relevance classes (Fig. 3). In contrast, RHO-LOSS selects somewhat fewer low relevance points, suggesting these classes have high IL. Since the less relevant classes are less abundant in the holdout set, both the small and large IL models have higher loss on them. Redundant points. To investigate whether methods se- lect redundant points, we track the percentage of selected points that are already classified correctly. This is only a proxy for redundancy; points that are classified correctly but with low confidence are not fully redundant, since their loss can be further reduced. We control for the different accuracy reached by each method by averaging only over epochs in which test accuracy is lower than the final accu- racy reached by the weakest performing method. Fig. 3 shows that all methods select fewer redundant points than uniform sampling. 4.4. Speedup Finally, we evaluate how much different selection methods speed up training. Recall that the main application area for our work is large web-scraped datasets, known for high lev- Speedup on clean data. RHO-LOSS reaches target accu- racies in fewer epochs than uniform selection on all datasets (Table 2). It also outperforms state-of-the-art methods by a clear margin in terms of speed and final accuracy. On the challenging CoLA language understanding dataset, the speedup over uniform selection exceeds 10x. In Table 3 (Ap- pendix A), we find similar speedups when using no holdout data. Speedup on noisy data. 
When adding 10% label noise, batch selection with RHO-LOSS achieves greater speedups while, as hypothesized, prior art degrades (Table 2). Notably, on noisier data, the speedup over uniform selection grows. Speedup on large web-scraped data. On Clothing-1M, loss-based and gradient norm-based selection fail to match uniform selection, suggesting they are not robust to noise. In contrast, RHO-LOSS reaches the highest accuracy that uniform selection achieves during 50 epochs in just 2 epochs and improves final accuracy (72% vs 70%). Notably, this was possible even though the IL model we used has low accuracy (62.2%) and was trained on ca. 10x less data. RHO-LOSS also used 2.7x fewer FLOPs to reach the peak Prioritized Training on Points that are Learnable, Worth Learning, and Not Yet Learnt accuracy of uniform selection, including the cost of training the IL model (which could be amortized) and despite our implementation being designed to save time, not compute. While Table 2 shows results for a Resnet-50, Fig. 1 includes additional architectures, with an average speedup of 18x. 5. Related Work Time-efficient data selection. Forward passes for selec- tion can be accelerated using low-precision hardware or parallelization. While backward passes typically require high precision, forward passes can tolerate lower precision (Jouppi et al., 2017; Jiang et al., 2019), especially as we only need the loss (not the activations which would be needed for backprop). A forward pass by default requires roughly 3x less time than a forward-backward pass but this speedup can be increased to a factor around 10x when using the low-precision cores available in modern GPUs and TPUs (Jouppi et al., 2017; Jiang et al., 2019). Further, prior work uses a set of workers that perform forward passes on Bt or on the entire dataset asynchronously while the master process trains on recently selected data (Alain et al., 2015). Compute-efficient data selection. While we limit our scope to comparing selection functions and we compute them naively, this choice is inefficient in practice. Selection can be made cheaper by reusing losses computed in previ- ous epochs (Loshchilov & Hutter, 2015; Jiang et al., 2019) or training a small model to predict them (Katharopoulos & Fleuret, 2017; Zhang et al., 2019; Coleman et al., 2020). Al- ternatively, core set methods perform selection once before training (Mirzasoleiman et al., 2020; Borsos et al., 2020), although typically with more expensive selection functions. Data selection functions. RHO-LOSS is best understood as an alternative to existing selection functions, which can be categorized by the properties of points they select and whether they use information about labels. “Hard” points are selected both by high loss (Loshchilov & Hutter, 2015; Kawaguchi & Lu, 2020; Jiang et al., 2019) and high predic- tion uncertainty (Settles, 2009; Li & Sethi, 2006; Coleman et al., 2020). However, prediction uncertainty does not re- quire labels and can thus be used for active learning. Despite this, they both suffer from the same problem: high loss and high uncertainty can be caused by noisy (in particular, am- biguous) labels. This also applies to selection of points whose labels are easily forgotten during training (Toneva et al., 2018). Noisy points are avoided by our negative IL baseline and comparable offline selection methods (Pleiss et al., 2020; Chen et al., 2019; Paul et al., 2021). 
Points that most reduce (expected) holdout loss are also selected for other purposes (Kirsch et al., 2021; Killamsetty et al., 2020; Ren et al., 2018), although using much more computation. Variance reduction methods. Online batch selection is also used to reduce the variance of the gradient estimator computed by SGD (Katharopoulos & Fleuret, 2018; 2017; Johnson & Guestrin, 2018; Alain et al., 2015), which is widely used in reinforcement learning (Schaul et al., 2015). Such methods typically use importance sampling—points with high (approximate) gradient norm are sampled with high probability but then down-weighted in the gradient calculation to de-bias the gradient estimate. Without de- biasing, methods like RHO-LOSS also create selection bias. However, bias can improve test performance, both in theory and practice (Farquhar et al., 2021; Kawaguchi & Lu, 2020). 6. Conclusion To reduce excessive training times, we introduce a theoret- ically grounded selection function that enables substantial speedups on clean data and even larger speedups on noisy and web-scraped data. By illuminating three properties of optimal selection, we hope to motivate new directions in batch selection. However, our selection function should be combined with methods in Section 5 for cheap and fast selection with maximal speedups. Ethics Statement It will be important to understand how subset selection might affect performance on data about minority groups. The selection may prioritize rare groups since majority groups are learnt more quickly, or deprioritizes rare groups since they affect the loss on holdout data less. Since such biases can also stem from the dataset itself (Mehrabi et al., 2021), it should be investigated if our method can remove data biases through the use of an un- biased holdout set. By training the irreducible loss model on unbiased data, we can implicitly specify that the model should perform well on unbiased data, even when the train- ing set contains bias. This may be useful for specifying that all groups are equally important to learn. Acknowledgements For useful feedback we thank Pascal Notin and Kelsey Do- erksen. Author Contributions Sören Mindermann, Jan Brauner, Mrinank Sharma and Muhammed Razzak designed and analysed the experiments shown in the paper. Jan Brauner implemented experiments on ALBERT, CIFAR- 10, experiments in Figure 2 and 7, Table 3 and 4, among others. Muhammed Razzak implemented experiments on Prioritized Training on Points that are Learnable, Worth Learning, and Not Yet Learnt Clothing-1M, CIFAR-100, and MNIST, among others. Mri- nank Sharma implemented experiments on CINIC-10, ex- periments in Figures 3 and 9, among others. Bottou, L. and LeCun, Y. Large scale online learning. Ad- vances in neural information processing systems, 16:217– 224, 2004. Sören Mindermann and Muhammed Razzak implemented pilot experiments. Winnie Xu and Adrien Morisot implemented early experi- ments on language models, advised by Aidan Gomez. Jan Brauner, Mrinank Sharma, Sören Mindermann, Muhammed Razzak, Benedikt Höltgen and Andreas Kirsch wrote the paper. Sören Mindermann conceived of the algorithm. Sören Mindermann, Jan Brauner, Andreas Kirsch and Yarin Gal developed the theory. Sören Mindermann led and managed the research. Yarin Gal and Sebastian Farquhar advised the research. References Alain, G., Lamb, A., Sankar, C., Courville, A., and Bengio, Y. Variance reduction in sgd by distributed importance sampling. arXiv preprint arXiv:1511.06481, 2015. Algan, G. and Ulusoy, I. 
Image classification with deep learning in the presence of noisy labels: A survey. Knowledge-Based Systems, 215:106771, 2021. Anil, R., Pereyra, G., Passos, A., Ormandi, R., Dahl, G. E., and Hinton, G. E. Large scale distributed neural net- work training through online distillation. arXiv preprint arXiv:1804.03235, 2018. Baayen, R. H. Word frequency distributions, volume 18. Springer Science & Business Media, 2001. Bengio, Y., Louradour, J., Collobert, R., and Weston, J. Curriculum learning. In Proceedings of the 26th annual international conference on machine learning, pp. 41–48, 2009. Blundell, C., Cornebise, J., Kavukcuoglu, K., and Wier- stra, D. Weight uncertainty in neural network. In Bach, F. and Blei, D. (eds.), Proceedings of the 32nd International Conference on Machine Learning, vol- ume 37 of Proceedings of Machine Learning Research, pp. 1613–1622, Lille, France, 07–09 Jul 2015. PMLR. URL https://proceedings.mlr.press/v37/ blundell15.html. Borsos, Z., Mutn`y, M., and Krause, A. Coresets via bilevel optimization for continual learning and streaming. arXiv preprint arXiv:2006.03875, 2020. Brown, T. B., Mann, B., Ryder, N., Subbiah, M., Kaplan, J., Dhariwal, P., Neelakantan, A., Shyam, P., Sastry, G., Askell, A., et al. Language models are few-shot learners. arXiv preprint arXiv:2005.14165, 2020. Chen, P., Liao, B. B., Chen, G., and Zhang, S. Understand- ing and utilizing deep neural networks trained with noisy labels. In International Conference on Machine Learning, pp. 1062–1070. PMLR, 2019. Chowdhery, A., Narang, S., Devlin, J., Bosma, M., Mishra, G., Roberts, A., Barham, P., Chung, H. W., Sutton, C., Gehrmann, S., et al. Palm: Scaling language modeling with pathways. arXiv preprint arXiv:2204.02311, 2022. Coleman, C., Yeh, C., Mussmann, S., Mirzasoleiman, B., Bailis, P., Liang, P., Leskovec, J., and Zaharia, M. Selec- tion via proxy: Efficient data selection for deep learning. International Conference on Learning Representations, 2020. Darlow, L. N., Crowley, E. J., Antoniou, A., and Storkey, A. J. Cinic-10 is not imagenet or cifar-10. arXiv preprint arXiv:1810.03505, 2018. Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., and Houlsby, N. An image is worth 16x16 words: Transformers for image recognition at scale, 2021. Farquhar, S., Gal, Y., and Rainforth, T. On statistical bias in active learning: How and when to fix it. arXiv preprint arXiv:2101.11665, 2021. Gal, Y. and Ghahramani, Z. Dropout as a bayesian approx- imation: Representing model uncertainty in deep learn- ing. In international conference on machine learning, pp. 1050–1059. PMLR, 2016. He, K., Zhang, X., Ren, S., and Sun, J. Deep residual learn- ing for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770–778, 2016. Houlsby, N., Huszár, F., Ghahramani, Z., and Lengyel, M. Bayesian active learning for classification and preference learning. arXiv preprint arXiv:1112.5745, 2011. Huang, Y., Cheng, Y., Bapna, A., Firat, O., Chen, D., Chen, M., Lee, H., Ngiam, J., Le, Q. V., Wu, Y., et al. Gpipe: Efficient training of giant neural networks using pipeline parallelism. Advances in neural information processing systems, 32:103–112, 2019. Prioritized Training on Points that are Learnable, Worth Learning, and Not Yet Learnt Ioffe, S. and Szegedy, C. Batch normalization: Accelerating deep network training by reducing internal covariate shift. 
In International conference on machine learning, pp. 448– 456. PMLR, 2015. Jiang, A. H., Wong, D. L.-K., Zhou, G., Andersen, D. G., Dean, J., Ganger, G. R., Joshi, G., Kaminksy, M., Kozuch, M., Lipton, Z. C., et al. Accelerating deep learning by focusing on the biggest losers. arXiv preprint arXiv:1910.00762, 2019. Johnson, T. B. and Guestrin, C. Training deep models faster with robust, approximate importance sampling. Ad- vances in Neural Information Processing Systems, 31: 7265–7275, 2018. Jouppi, N. P., Young, C., Patil, N., Patterson, D., Agrawal, G., Bajwa, R., Bates, S., Bhatia, S., Boden, N., Borchers, A., et al. In-datacenter performance analysis of a tensor processing unit. In Proceedings of the 44th annual inter- national symposium on computer architecture, pp. 1–12, 2017. Katharopoulos, A. and Fleuret, F. Biased importance sam- pling for deep neural network training. arXiv preprint arXiv:1706.00043, 2017. Katharopoulos, A. and Fleuret, F. Not all samples are cre- ated equal: Deep learning with importance sampling. In International conference on machine learning, pp. 2525– 2534. PMLR, 2018. Kawaguchi, K. and Lu, H. Ordered sgd: A new stochastic optimization framework for empirical risk minimization. In International Conference on Artificial Intelligence and Statistics, pp. 669–679. PMLR, 2020. Keskar, N. S., Mudigere, D., Nocedal, J., Smelyanskiy, M., and Tang, P. T. P. On large-batch training for deep learning: Generalization gap and sharp minima. arXiv preprint arXiv:1609.04836, 2016. Killamsetty, K., Sivasubramanian, D., Ramakrishnan, G., and Iyer, R. Glister: Generalization based data subset selection for efficient and robust learning. arXiv preprint arXiv:2012.10630, 2020. Kirsch, A., Van Amersfoort, J., and Gal, Y. Batchbald: Efficient and diverse batch acquisition for deep bayesian active learning. arXiv preprint arXiv:1906.08158, 2019. Kirsch, A., Rainforth, T., and Gal, Y. Active learning un- der pool set distribution shift and noisy data. CoRR, abs/2106.11719, 2021. URL https://arxiv.org/ abs/2106.11719. Komatsuzaki, A. One epoch is all you need. arXiv preprint arXiv:1906.06669, 2019. Krizhevsky, A. and Hinton, G. Learning multiple layers of features from tiny images. Master’s thesis, Department of Computer Science, University of Toronto, 2009. Lan, Z., Chen, M., Goodman, S., Gimpel, K., Sharma, P., and Soricut, R. Albert: A lite bert for self-supervised learning of language representations. arXiv preprint arXiv:1909.11942, 2019. LeCun, Y., Boser, B., Denker, J. S., Henderson, D., Howard, R. E., Hubbard, W., and Jackel, L. D. Backpropaga- tion applied to handwritten zip code recognition. Neural computation, 1(4):541–551, 1989. LeCun, Y., Bottou, L., Bengio, Y., and Haffner, P. Gradient- based learning applied to document recognition. Proceed- ings of the IEEE, 86(11):2278–2324, November 1998. LeCun, Y. A., Bottou, L., Orr, G. B., and Müller, K.-R. Efficient backprop. In Neural networks: Tricks of the trade, pp. 9–48. Springer, 2012. Li, M. and Sethi, I. K. Confidence-based active learning. IEEE transactions on pattern analysis and machine intel- ligence, 28(8):1251–1261, 2006. Loshchilov, I. and Hutter, F. Online batch selection for faster training of neural networks. arXiv preprint arXiv:1511.06343, 2015. Loshchilov, I. and Hutter, F. Decoupled weight decay regu- larization. arXiv preprint arXiv:1711.05101, 2017. McCandlish, S., Kaplan, J., Amodei, D., and Team, O. D. arXiv An empirical model of large-batch training. preprint arXiv:1812.06162, 2018. 
Mehrabi, N., Morstatter, F., Saxena, N., Lerman, K., and Galstyan, A. A survey on bias and fairness in machine learning. ACM Computing Surveys (CSUR), 54(6):1–35, 2021. Mirzasoleiman, B., Bilmes, J., and Leskovec, J. Coresets for data-efficient training of machine learning models. In International Conference on Machine Learning, pp. 6950–6960. PMLR, 2020. Mukhoti, J., Kirsch, A., van Amersfoort, J., Torr, P. H., and Gal, Y. Deterministic neural networks with appropriate in- ductive biases capture epistemic and aleatoric uncertainty. arXiv preprint arXiv:2102.11582, 2021. Osawa, K., Swaroop, S., Jain, A., Eschenhagen, R., Turner, R. E., Yokota, R., and Khan, M. E. Practical deep learning with bayesian principles. In Proceedings of the 33rd Inter- national Conference on Neural Information Processing Systems, pp. 4287–4299, 2019. Prioritized Training on Points that are Learnable, Worth Learning, and Not Yet Learnt Paul, M., Ganguli, S., and Dziugaite, G. K. Deep learning on a data diet: Finding important examples early in training. arXiv preprint arXiv:2107.07075, 2021. Wilson, A. G. and Izmailov, P. Bayesian deep learning and a probabilistic perspective of generalization. arXiv preprint arXiv:2002.08791, 2020. Xiao, T., Xia, T., Yang, Y., Huang, C., and Wang, X. Learn- ing from massive noisy labeled data for image classifica- tion. In CVPR, 2015. Yadav, C. and Bottou, L. Cold case: The lost mnist digits. In Advances in Neural Information Processing Systems 32, 2019. Yi, K. and Wu, J. Probabilistic end-to-end noise correction In Proceedings of the for learning with noisy labels. IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), June 2019. Zhang, J., Yu, H.-F., and Dhillon, I. S. Autoassist: A frame- work to accelerate training of deep neural networks. arXiv preprint arXiv:1905.03381, 2019. Phan, H. huyvnphan/pytorch_cifar10. Jan 2021. doi: 10. 5281/zenodo.4431043. Pleiss, G., Zhang, T., Elenberg, E., and Weinberger, K. Q. Identifying mislabeled data using the area under the mar- gin ranking. Advances in Neural Information Processing Systems, 33:17044–17056, 2020. Radford, A., Kim, J. W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., Krueger, G., and Sutskever, I. Learning transferable visual models from natural language supervision, 2021. Rasley, J., Rajbhandari, S., Ruwase, O., and He, Y. Deep- speed: System optimizations enable training deep learn- ing models with over 100 billion parameters. In Proceed- ings of the 26th ACM SIGKDD International Conference on Knowledge Discovery &amp; Data Mining, pp. 3505– 3506, 2020. Ren, M., Zeng, W., Yang, B., and Urtasun, R. Learning to reweight examples for robust deep learning. In Interna- tional conference on machine learning, pp. 4334–4343. PMLR, 2018. Rolnick, D., Veit, A., Belongie, S., and Shavit, N. Deep learning is robust to massive label noise. arXiv preprint arXiv:1705.10694, 2017. Schaul, T., Quan, J., Antonoglou, I., and Silver, D. Priori- tized experience replay. arXiv preprint arXiv:1511.05952, 2015. Settles, B. Active learning literature survey. 2009. Shoeybi, M., Patwary, M., Puri, R., LeGresley, P., Casper, J., and Catanzaro, B. Megatron-lm: Training multi- billion parameter language models using model paral- lelism. arXiv preprint arXiv:1909.08053, 2019. Tian, Y., Henaff, O. J., and Oord, A. v. d. Divide and contrast: Self-supervised learning from uncurated data. arXiv preprint arXiv:2105.08054, 2021. Toneva, M., Sordoni, A., Combes, R. T. 
d., Trischler, A., Bengio, Y., and Gordon, G. J. An empirical study of example forgetting during deep neural network learning. arXiv preprint arXiv:1812.05159, 2018.

Wang, A., Singh, A., Michael, J., Hill, F., Levy, O., and Bowman, S. R. Glue: A multi-task benchmark and analysis platform for natural language understanding. arXiv preprint arXiv:1804.07461, 2018.

Appendix

A. Steps required to reach a given test accuracy

Figs. 4 (vision) and 5 (NLP) show the number of steps required to reach a given test accuracy across several datasets for different selection methods. Interestingly, on CoLA (unbalanced and noisy), the uniform sampling baseline shows high variance across seeds, while RHO-LOSS works robustly across seeds.

Table 3 shows results for RHO-LOSS training without holdout data. Results are similar to Table 2. Here, we train the IL model without any holdout data. We split the training set D into two halves and train an IL model on each half. Each model computes the IL for the half of D that it was not trained on. (This is as in Fig. 2, row 3, except that previously we only used half of D and further split it into halves of the half.) Training two IL models costs no additional compute since each model is trained on half as much data compared to the default settings.

Figure 4: Vision datasets—gradient steps required to achieve a given test accuracy (lower is better). Left column: The speedup of RHO-LOSS over uniform sampling is greatest on a large-scale web-scraped dataset with noisy labels. Middle column: Speedups are still substantial on clean datasets and RHO-LOSS still achieves higher final accuracy than all prior art. Right column: Applying 10% uniform label noise to training data degrades other methods but increases the speedup of our method. A step corresponds to lines 5−10 in Algorithm 1. Lines correspond to means and shaded areas to minima and maxima across 3 random seeds. On CIFAR10/100, only half of the data is used for training (see text). (Selection methods shown: RHO-LOSS (Ours), Uniform Sampling, Irreducible Loss, Gradient Norm, Loss, SVP, Gradient Norm IS.)

Figure 5: NLP datasets—gradient steps required to achieve a given test accuracy (lower is better). Left: CoLA grammatical acceptability classification. Right: SST2 sentiment classification. A step corresponds to lines 5−10 in Algorithm 1. Lines correspond to means and shaded areas to standard deviations across 4 or more random seeds. Only half of the data is used for training (see text).

Table 3: Epochs required to reach a given target test accuracy when using no holdout data (lower is better). Final accuracy in parentheses. Results averaged across 2-3 seeds. Best performance in bold. RHO-LOSS performs best in both epochs required and final accuracy.
Dataset     Target Acc   Uniform        RHO-LOSS
CIFAR10     80%          39             17
CIFAR10     90%          177 (90.8%)    47 (92.2%)
CIFAR100    50%          47             22
CIFAR100    65%          142 (67.8%)    87 (68.1%)
CINIC10     70%          37             26
CINIC10     80%          146 (80.1%)    70 (82.1%)

B. Experiment Details

Architectures. We experiment with various architectures in Figs. 1 and 2 (row 4). In all other figures and tables, we use the following architectures: For experiments on QMNIST, we use a multi-layer perceptron with 2 hidden layers and 512 units in each hidden layer. For experiments on CIFAR-10, CIFAR-100 and CINIC-10, we use a variant of ResNet-18 (He et al., 2016). We adapted the ResNet18 to 32x32 images by modifying the architecture to remove the downsampling effect. We replaced the spatial downsampling of a strided convolution and max pooling in the original ResNet18 with a convolutional layer with 64 filters and a kernel size of 3x3. We also removed the average pooling at the end of the ResNet18. This ResNet18 variant is similar to ResNet20, just with more filters. For experiments on Clothing-1M, following the experimental set-up of Yi & Wu (2019), the target model is a ResNet-50 pre-trained on ImageNet. The irreducible loss model is a ResNet-18 with random initialisation. The multiple target architectures in Fig. 2 were adapted from (Phan, 2021). For NLP datasets, we use a pretrained ALBERT v2 (Lan et al., 2019).

Hyperparameters. Vision: All models are trained using the AdamW optimizer with default PyTorch hyperparameters (β1 = 0.9, β2 = 0.999, weight decay of 0.01, learning rate 0.001), a batch size nb = 32 (64 for CINIC-10), and nB = 320 (640 for CINIC-10), meaning we select nb/nB = 10% of points. NLP: ALBERT v2 was trained using the AdamW optimizer with a learning rate as indicated in the original paper (2 · 10−5) and weight decay of 0.02. We finetuned all weights, not just the final layer. The batch size nb was 32 and nB = 320, meaning we select nb/nB = 10% of points. We use between 2 and 10 seeds for each experiment.

Data augmentation. On CIFAR-10, CIFAR-100, and CINIC-10, we train using data augmentation (random crop and horizontal flip), both for training the IL model and in the main training runs. Remember that we only compute the irreducible losses once at the start of training, to save compute (Algorithm 1). We use the un-augmented images for this as we found that using augmented images makes little difference to performance but costs more compute.

Irreducible loss model training. The irreducible loss models are trained on holdout sets (not test sets, see dataset description in main text). For each dataset, we select the irreducible loss model checkpoint from the epoch with lowest holdout loss on D (as opposed to highest accuracy); we find that this improves performance while also saving compute, as the holdout loss typically reaches its minimum early in training.

BatchNorm. Like many deep-learning methods, RHO-LOSS interacts with BatchNorm (Ioffe & Szegedy, 2015) since the loss of a given point is affected by other points in the same batch. Important: We compute the BatchNorm statistics for selection and model update separately. For selection (lines 5-8 in Algorithm 1), the statistics are computed across the large batch Bt. For training (lines 9-10), the statistics are computed across the small batch bt.
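To make the two regimes concrete, here is a minimal PyTorch-style sketch (our own illustration under stated assumptions, not the authors' released code): the large batch Bt is scored with BatchNorm statistics taken from Bt itself, and the update then uses only the selected small batch bt. The objects model, opt, the precomputed per-point irreducible_loss vector, and n_select are assumptions introduced for illustration.

```python
import torch
import torch.nn.functional as F

def rho_loss_step(model, opt, x_big, y_big, irreducible_loss, n_select):
    """One selection + update step, highlighting the two BatchNorm regimes."""
    # Selection (Algorithm 1, lines 5-8): forward the large batch B_t so that
    # BatchNorm statistics are computed across B_t (train mode shown here;
    # eval mode, i.e. running statistics, is the alternative discussed next).
    model.train()
    with torch.no_grad():
        train_loss = F.cross_entropy(model(x_big), y_big, reduction="none")
        reducible_loss = train_loss - irreducible_loss
        idx = reducible_loss.topk(n_select).indices

    # Training (lines 9-10): forward only the selected small batch b_t,
    # so BatchNorm statistics are now computed across b_t.
    model.train()
    opt.zero_grad()
    F.cross_entropy(model(x_big[idx]), y_big[idx]).backward()
    opt.step()
    return idx
```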
These choices can affect performance a lot. For new datasets, we recommend to vary how the batchnorm statistics are computed during selection (trying both train mode and eval mode) and choose the option that works best. C. Robustness to Noise Figure 6: RHO-LOSS is robust to a variety of label noise patterns, while other selection methods degrade. A step corresponds to lines 6 − 11 in Algorithm 1. Lines correspond to means and shaded areas to minima and maxima across 3 random seeds. 50010001500Steps60708090100Test Accuracy (%)MNIST50010001500Steps60708090100Test Accuracy (%)MNIST with 10% Label Noise50010001500Steps60708090100Test Accuracy (%)MNIST with Structured Noise50010001500Steps60708090100Test Accuracy (%)Ambiguous MNISTReducible Loss (Ours)Uniform SamplingIrreducible LossGradient NormLoss Prioritized Training on Points that are Learnable, Worth Learning, and Not Yet Learnt In this set of experiments, we evaluate the performance of different selection methods under a variety of noise patterns on QMNIST (MNIST with extra holdout data) and variations thereof. We use this dataset because it has little label noise in its original form, allowing us to test the effect of adding noise. Firstly, we add uniform label noise to 10% of training points. Secondly, we add structured label noise that affects easily confused classes. We follow (Rolnick et al., 2017) and flip the labels of the four most frequently confused classes (in the confusion matrix of a trained model) with 50% probability. For example, a 2 is often confused with a 5; thus we change the label of all 2s to 5s with 50% probability. Thirdly, we leverage the natural noise distribution of MNIST by using AmbiguousMNIST (Mukhoti et al., 2021) as the training set. AmbiguousMNIST contains a training set with 60k generated ambiguous digits that have more than one plausible label. While selecting with loss and gradient norm trains accelerates training on the MNIST training set, their performance degrades on all three types of noise distributions (Figure 6). D. Irreducible Holdout Loss Approximation In this appendix section, we examine one of the key approximations made in the theory section. To arrive at Eq. (3), we used the approximation L[y | x; Dho] ≈ L[y | x; Dho, Dt]. In words, we approximated the cross-entropy loss of a model trained on the data points acquired so far Dt and the holdout dataset Dho, with the cross-entropy loss of a model trained only on the holdout set. This approximation saves a lot of compute: rather than having to recompute the term with every change of Dt, it is now sufficient to compute it once at the start of training. We have already highlighted the impact of the approximation on points selected when training on QMNIST in Section 4.1. In our main experiment setting—using neural networks trained with gradient descent—we empirically find that the approximation does not reduce speed of target model training or final target model accuracy (Table 4). This finding holds across a range of datasets (CIFAR-10, CIFAR-100, CINIC-10). Updating the irreducible loss model on Dt seems empirically not necessary. Indeed, the approximation actually has two desirable properties when used for neural networks trained with gradient descent. We will first describe why we expect these desirable properties, and then show that they indeed appear. First, let us restate both selection functions: Original selection function: arg max (x,y)∈Bt L[y | x; Dt] − L[y | x; Dho, Dt]. 
Approximated selection function: arg max(x,y)∈Bt L[y | x; Dt] − L[y | x; Dho].

Desirable property 1: The approximation prevents repeated selection of undesirable points. When using SGD instead of Bayesian updating, the original selection function can acquire undesired points repeatedly. Let's say that we acquire, for whatever reason, a noisy, redundant, or irrelevant point. We only take one gradient step each time we acquire a (batch of) point(s), meaning the training loss (first term in the selection function) will only decrease somewhat on each step. In the original selection function, the second term will also decrease somewhat, meaning that the difference between the first and second term may remain large. In the approximated selection function, the second term is constant, so the difference between the first and second term will likely decrease more than under the original selection function. Under the approximated selection function, we are thus less likely to acquire undesired points again if we have acquired them in earlier epochs.

Table 4: Number of epochs required to reach a given target test accuracy across several datasets. Results averaged across 2-3 random seeds. NR indicates that the target accuracy was not reached.

Dataset     Target acc   Approximated: L[y | x; Dt] − L[y | x; Dho]   Original: L[y | x; Dt] − L[y | x; Dho, Dt]
CIFAR10     60%          18                                           13
CIFAR10     75%          30                                           24
CIFAR10     90%          102                                          NR, but reaches 88% in 157 epochs
CIFAR100    30%          35                                           21
CIFAR100    45%          58                                           NR, but reaches 43% in 61 epochs
CIFAR100    60%          123                                          NR
CINIC10     55%          12                                           12
CINIC10     65%          19                                           21
CINIC10     75%          32                                           NR, but reaches 74% in 68 epochs

Figure 7: Desired properties of the irreducible loss model approximation. Left: The approximated selection function selects fewer corrupted points later on in training. Right: The test set accuracy of the irreducible loss model deteriorates over time if it is updated on Dt. With the approximation, the irreducible loss is not updated during target model training. Results on CIFAR-10 with 20% of data points corrupted with uniform label noise. Shaded areas represent standard deviation across three different random seeds.

Desirable property 2: The approximation prevents deterioration of the irreducible loss model over time. With both selection functions, we compute the second term of the selection function with an "irreducible loss model", which we train on a holdout set before we start target model training. In the target model training, we (greedily) acquire the points that most improve the loss of the target model (on the holdout set). We thus deliberately introduce bias into the data selection. However, this bias is tailored to the target model and may not be suitable for the irreducible loss model. As a simplifying example, consider a target model early in training, which has not yet learnt a certain class, and an irreducible loss model, which has learnt that class. Data points in that class will have high training loss, low irreducible loss, and will be acquired often. This, however, is not useful for the irreducible loss model, and might lead to decreased accuracy on data points from other classes. With the approximation, this can't happen. The described failure mode could likely also be alleviated by more sophisticated training schemes for the irreducible loss model, such as periodically mixing in data points from the holdout set. However, such training schemes would require even more compute and/or overhead.
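The contrast between the two objectives can be sketched as follows (our own illustration, assuming classifier networks target_model and il_model; not the authors' code). The approximated objective keeps the irreducible-loss model frozen, while the original objective would additionally update it on every acquired batch.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def approximated_objective(target_model, il_model, x, y):
    """L[y | x; D_t] - L[y | x; D_ho]: the IL model is frozen, so its
    per-point losses can be computed once (even cached before training)."""
    train_loss = F.cross_entropy(target_model(x), y, reduction="none")
    irreducible_loss = F.cross_entropy(il_model(x), y, reduction="none")
    return train_loss - irreducible_loss

# The original objective L[y | x; D_t] - L[y | x; D_ho, D_t] would, in addition,
# take a gradient step on il_model for every acquired batch, e.g.
#   il_opt.zero_grad()
#   F.cross_entropy(il_model(x_selected), y_selected).backward()
#   il_opt.step()
# which this appendix argues is both costlier and less robust.
```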
We find empirically that both desired properties of the approximation indeed manifest themselves. In Fig. 7, we train a target model (Resnet-18) on CIFAR-10, with 20% of the data points corrupted by uniform label noise. The approximated selection function leads to faster target model training (the approximated selection function needs 80 epochs to reach the same target model accuracy that the original selection function reaches in 100 epochs) and higher final accuracy than the original selection function (88.6% vs 86.1%). Indeed, the original selection function leads to acquiring more corrupted points, especially later in training (Fig. 7, left), and the accuracy of the irreducible loss model deteriorates over time (Fig. 7, right). We tuned the learning rate of the irreducible loss model to 0.01 times that of the target model. Without this adjustment, the results look similar but the original selection function performs worse. 020406080100120140160epoch0.000.050.100.150.200.250.30Percentage of acquired points that are corruptedapproximated objectiveoriginal objective020406080100120140160epoch0.50.60.70.80.91.0Test set accuracy of the irreducible loss modelapproximated objectiveoriginal objective Prioritized Training on Points that are Learnable, Worth Learning, and Not Yet Learnt E. Experimental Details for Assessing Impact of Approximations Dataset: QMNIST, with uniform label noise applied to 10% of the dataset. Batch size of 1000 is used. Models: Deep Ensemble contains 5 3-layer MLP’s with 512 hidden units. The weaker irreducible loss model is an MLP with 256 hidden units. Training: For Approximation 0, we use a deep ensemble for both models. The irreducible loss model is trained to convergence on Dho. Then the target model and the irreducible model are used to acquire 10% of points each batch using the selection function. They are then trained to convergence on each batch of points acquired. The irreducible loss model is trained on Dho ∪ Dt, while the target model is only trained on Dt. We train for a maximum of 5 epochs, which often is to convergence, to enable a fair comparison to further approximations. For Approximation 1a, the deep ensembles are replaced with single MLPs. The training regime remains the same. We compare the approximations over the first epoch. To compare Approximation 1b to 0, and for all further approximations, we increase the size of the dataset five-fold, by duplicating samples in QMNIST. This means for approximation 1b, we have 5x the data that we have for Approximation 1a, but with increased redundancy. We train the model in Approximation 1b by taking a single gradient step per datapoint, with the larger dataset. On the other hand, we train the model for Approximation 0 (still to convergence or 5 epochs) on the standard dataset size. By doing this, Approximation 0 and 1b have taken the equivalent number of gradient steps, at the time-steps where we are tracking the reducible loss of points selected, enabling a fair comparison between the approximations. The irreducible loss models are trained on Dho ∪ Dt in their respective set-ups. To compare Approximation 2 to Approximation 0, we compare updating the irreducible loss model with a single gradient on each set of acquired points, to not updating the irreducible loss model on Dt at all. To isolate effect of not updating, we utilise the same initial irreducible loss model. 
To compare Approximation 3, we simply train a small irreducible model (one with 256 hidden units) and follow the same training regime as Approximation 2. F. Ablation of percentage selected Our method has a hyperparameter, the percentage nb of evaluated points which are selected for training. In the experiments nB above, this parameter was set to 0.1. We have not tuned this parameter, as we aim to analyse how well our method works “out of the box". In fact, on 2/3 datasets, performance further improves with other values of this parameter. Adjusting this percentage should allow practitioners to specify their preferred tradeoff between training time and computation, where a low percentage typically corresponds to a lower training time and greater compute cost. For these experiments, we kept nb = 32 and adapt nB accordingly. The percentage nb of datapoints selected per batch has different effects across datasets as shown nB in Fig. 8. Figure 8: Varying the percent of data points selected in each training batch. Average over 3 random seeds. G. Active Learning Baselines We compare our method to typical methods used in the Active Learning (AL) literature. Note that our method is label-aware, while active learning acquires datapoints without using label information. We consider the following baselines, which select the top-k points using an acquisiton function, α(x): • Bayesian Active Learning by Disagreement (Houlsby et al., 2011) with α(x) = H[y | x, Dt] − Ep(θ|Dt) [H[y | x, θ]]. 0.02.55.07.510.0SpeedupCIFAR100CIFAR10CINIC10Percentage Selected5%10%15%20% Prioritized Training on Points that are Learnable, Worth Learning, and Not Yet Learnt • (Average) conditional entropy, α(x) = Ep(θ|Dt) [H[y | x, θ]], where the average is taken over the model parameter posterior. • (Average predictive) entropy, α(x) = H[y | x, Dt]. • Loss minus conditional entropy α(x) = L[y | x, θ] − Ep(θ|Dt) [H[y | x, θ]]. This uses the (average) conditional entropy as an estimate of how noisy datapoint x is—points with high noise are deprioritized. Compared to RHO-LOSS, it replaces the IL with the conditional entropy. This acquisition function uses the label and therefore cannot be used for active learning. We additionally compare our method to uniform sampling. We run all baselines on MNIST and CIFAR10. Note that several of these active learning baselines consider epistemic uncertainty; that is, uncertainty in predictions driven by uncertainty in the model parameters. This mandates performing (approximate) Bayesian inference. We use Monte-Carlo Dropout(Gal & Ghahramani, 2016) to perform approximate inference. For MNIST, we use an 2 hidden layer MLP with 512 hidden units per hidden layer, and a dropout probability of a 0.5. For experiments on CIFAR10, we use a small-scale CNN with dropout probability 0.05 (the dropout probability follows (Osawa et al., 2019)). Fig. 9 shows training curves for our method, uniform sampling, and the active learning baselines. Our method accelerates training across both datasets. The active learning methods accelerate training for MNIST but not for CIFAR10. This highlights that active learning methods, if naively applied to online batch selection, may not accelerate model training. Figure 9: Training curves for several active learning baselines on the MNIST and CIFAR10 datasets. 0500100015002000Steps7580859095Test Accuracy (%)MNIST Active Learning20004000Steps204060Test Accuracy (%)CIFAR Active LearningRHO-LOSS (Ours)Uniform SamplingBALDEntropyConditional EntropyLoss Minus Conditional Entropy
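For reference, the entropy-based acquisition scores above can be estimated with Monte-Carlo dropout as in the following sketch; this is our own illustration (not the authors' code) and assumes a model that contains dropout layers.

```python
import torch

@torch.no_grad()
def mc_dropout_acquisitions(model, x, n_samples=20, eps=1e-12):
    """Estimate predictive entropy, conditional entropy, and BALD scores.

    Uses n_samples stochastic forward passes with dropout kept active as an
    approximation to sampling from the parameter posterior.
    """
    model.train()  # keep dropout active at scoring time
    probs = torch.stack(
        [torch.softmax(model(x), dim=-1) for _ in range(n_samples)]
    )                                                   # (S, N, C)
    mean_p = probs.mean(dim=0)                          # approx. p(y | x, D_t)
    predictive_entropy = -(mean_p * (mean_p + eps).log()).sum(-1)         # H[y | x, D_t]
    conditional_entropy = -(probs * (probs + eps).log()).sum(-1).mean(0)  # E_theta H[y | x, theta]
    bald = predictive_entropy - conditional_entropy
    return predictive_entropy, conditional_entropy, bald
```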
Zero-Inflated Stochastic Volatility Model for Disaggregated Inflation Data with Exact Zeros

Geonhee Han∗1 and Kaoru Irie†2
1Graduate School of Arts and Sciences, Columbia University
2Faculty of Economics, The University of Tokyo

March 19, 2024

Abstract

The disaggregated time-series data for the Consumer Price Index often exhibits frequent instances of exact zero price changes, stemming from measurement errors inherent in the data collection process. However, the currently prominent stochastic volatility model of trend inflation is designed for aggregate measures of price inflation, where exact zero price changes rarely occur. We propose a zero-inflated stochastic volatility model applicable to such nonstationary real-valued multivariate time-series data with exact zeros, by a Bayesian dynamic generalized linear model that jointly specifies the dynamic zero-generating process. We also provide an efficient custom Gibbs sampler that leverages the Pólya-Gamma augmentation. Applying the model to disaggregated Japanese Consumer Price Index data, we find that the zero-inflated model provides more sensible and informative estimates of time-varying trend and volatility. Through an out-of-sample forecasting exercise, we find that the zero-inflated model provides improved point forecasts when zero-inflation is prominent, and better coverage of interval forecasts of the non-zero data by the non-zero distributional component.

Keywords: Bayesian, Zero-inflation, Stochastic volatility, Disaggregate consumer price index, Trend inflation, Forecasting

1 Introduction: Econometric Modeling of Inflation

Following the seminal work of Stock and Watson (2007), the unobserved components model with stochastic volatility [UCSV] has gained widespread popularity as a method for modeling and forecasting aggregate inflation (Faust and Wright, 2013). Several methodological studies have proposed modifications to UCSV, addressing various characteristics of aggregate inflation (e.g. Chan, 2013; Chan et al., 2013, 2016; Chan, 2017; Hwu and Kim, 2019; Zhang et al., 2020; Huber and Pfarrhofer, 2021). Recent works have also made use of disaggregated price index data, which provides a higher level of granularity over aggregated measures. Stock and Watson (2016) and Li and Koopman (2021) formulate a multivariate UCSV [MUCSV] model to estimate and forecast U.S. sectoral (trend) inflation. Eo et al. (2023) develop a two-sector MUCSV involving time-varying sectoral correlation. Other non-MUCSV approaches include Chaudhuri et al. (2015).

In the development of UCSV and MUCSV models using aggregate data, one feature of price index data that remained insignificant is price staleness at the item level. Kömm and Küsters (2015) highlight instances of frequent exact zeros in the weekly price differences of skimmed whey powder prices in Germany, attributed to censoring and lack of information. A substantial body of empirical macroeconomic literature also studies the phenomenon in the context of firms' time- and state-dependent price adjustment behaviors, and has endeavored to explain the variations in time between within-firm price adjustments (Klenow and Kryvtsov, 2008; Nakamura and Steinsson, 2008; Dixon and Grimme, 2022). Price staleness is also relevant in Consumer Price Index [CPI] data with lower-level disaggregation. We exemplify this using Japanese CPI data, and provide specific backgrounds on what gives rise to price staleness.
Despite the presence and relevance of zeros, to the best of our knowledge, there lacks (MUC)SV models designed for real-valued zero-inflated nonstationary multivariate time-series. For instance, Kömm and Küsters (2015) bases their approach on a univariate ARMA-GARCH with threshold and Markov-switch ∗[email protected][email protected] 1 induced mechanism of zeros. Barkan et al. (2023) employ a hierarchical Recurrent Neural Network to ex- ploit the hierarchical structure of and to forecast component-wise disaggregated US CPI. Powell et al. (2017) uses a combination of monthly disaggregated CPI at the item-category level and daily web-scraped prices to make component-wise forecasts of disaggregated CPI, using a variant of an autoregressive model. All of these are based on a non-(MUC)SV approach, despite the macroeconometric context where Bayesian dynamic models with SV and time-varying parameters have proven useful not only for forecasting (Prim- iceri, 2005; Nakajima, 2011) but its ability to estimate trend inflation and time-varying volatility. The traditional Bayesian approach to model censored time-series data is by (dynamic) Tobit models (Chib, 1992; Wei, 1999; Li and Zheng, 2008; Liu et al., 2023). However, the unidirectional censoring assumption in Tobit models is not suitable for CPI inflation that takes positive and negative values. Al- ternative lies in dynamic models for zero-inflated data using discrete-valued sampling distributions. This is often seen in the financial econometrics literature, upon interest in modeling zero-inflation in discrete price movements in high-frequency financial transactions data (Hausman et al., 1992; Rydberg and Shep- hard, 2003; Bien et al., 2011). Our specific case requires simultaneously handling continuous observations and zero-inflation. Accordingly, we aim to develop a zero-inflated multivariate stochastic volatility model. We do so by generalizing the (UC)SV model by explicitly specifying the exact-zero generating process alongside the (UC)SV-driven nonzero generating process, as a joint Bayesian model based on the dynamic generalized linear model [DGLM] (West et al. 1985; West and Harrison 1997, Chap. 14). In modeling the exact- zero-generating component, we incorporate the latent dynamic logistic model to allow for heterogeneous cross-sectional propensity, temporal persistence, and cross-correlation in zeros. The idea of specifying the zero-generating process by using the dynamic logistic model can be seen in cases with discrete zero- inflated observations (e.g. Berry and West, 2020; Lavine et al., 2022), while our usage is for continuous response with stochastic volatility. In posterior computation, where the conditional posterior is intractable due to binomial likelihoods with a logistic link function, we utilize the Pólya-Gamma augmentation to restructure the intractable component as conditionally Gaussian and linear (Polson et al., 2013). The augmentation enables us to construct a fast and efficient custom Gibbs sampler, where we can sample the state variables jointly (Windle et al., 2013; Glynn et al., 2019). The presented model and sampler are devised for modeling and forecasting CPI inflation data, but the flexible nature of DGLMs suggests wide applicability of our approach to a variety of real-valued multivariate time-series involving nonstationary and zero-inflation. The rest of the paper is structured as follows. In sec. [2], we introduce the disaggregated Japanese CPI data and provide backgrounds for the occurrence of zeros. In sec. 
[3], we propose the zero-inflated (UC)SV and M(UC)SV, along with a brief overview of the posterior sampling algorithm. In sec. [4], we apply our model to demonstrate the benefits of accounting for zero-inflation and possible risks of ignoring zero-inflation. sec. [4.3] presents the results of a forecasting exercise. sec. [5] concludes the paper. 2 Disaggregated Price Index Data Consumer Price Index. The price index data we use in our analysis is the Japanese CPI, made publicly available by the Statistics Bureau of Japan [SBJ] as the “2020-Base Consumer Price Index". The original CPI time series is a monthly data that spans from January 1970 to date. The data we deal with is a time series of quarterly percentage changes, spanning 214 quarters from 1970:Q1 to 2023:Q3, which was the longest possible data made available at the time of our analysis. The quarterly conversion is to preserve consistency with approaches taken by the extant literature (e.g. Stock and Watson, 2007; Chan, 2013; Eo et al., 2023). The Japanese CPI is calculated hierarchically. First, the index of the lowest hierarchy, at the time-item- location level, is calculated by averaging month-item-municipality-store specific surveyed prices obtained over different retail stores. These input prices are from the monthly “Trend Survey" of the Japanese Retail Price Survey, an official statistical survey. This is then compiled as a time-item level index as a weighted average over municipalities. Then, a higher-level minor groups index is calculated with a weighted aver- age over items. Further higher-level indices are similarly calculated sequentially, resulting in the follow- ing hierarchy of indices: the item-level, minor groups, subgroups (item-category level), 10 major groups (sectoral-level), and finally the aggregate index. Table [1] exemplifies the hierarchy by listing ten items selected from each of the 10 major groupings. We observe that the breakdown is highly granular. Fig. [1] visualizes the data for sub category indices, which consists of 49 item-category level indices. 2 Table 1: A tabulated example of ten items recorded in Japanese CPI, each from the 10 major classifications. All items All Count Sub (Item-category) Electricity Medical services Communication Tutorial fees Fruits Footwear Rent 10 Major (Sectoral) Fuel, light & water charges Medical care Transport. & comm. Education Food Clothes & footwear Housing Furniture & household utens. Durable goods Durable goods Culture & recreation Personal effects Miscellaneous 49 10 Items Electricity Medical treatment Letters Elementary school Apples Men’s shoes House rent, private Microwave ovens TV sets Suitcases 582 We note the prevalent presence of zeros in some of the item-category level indices. A natural question is: why the exact zeros? We highlight various relevant viewpoints on this matter, and motivate the desirable characteristics of our proposed model. Figure 1: Heatmap visualizing the values of the multivariate time-series. Left: original CPI. Middle: price index inflation, calculated as quarterly percentage change. Right: binary heatmap representing the dichotomy of zero (gray) and non-zero (black) price changes. Note: white blanks in the fourth and fourteenth rows are missing values. Measurement Error. The most likely reason for zeros is the combination of (1) infrequent within-firm price adjustments and (2) the mode of data collection. 
In generating each of the item-level indices, the SBJ predetermines a set of representative products, often a singleton set, subject to repeated price data collection. The mode of data collection is also precisely predetermined to capture price fluctuations in a cost-efficient manner (e.g. particular store, brand, quantity/size, unit of sales, area of production, model number, etc.). This means the realized price staleness is that of the particular item of a particular firm at a particular time and location, and not necessarily of the representing item in totality. Also, SBJ ac- knowledges that the month-item-municipality-specific index may be missing, for instance, when an item is discontinued in the surveyed municipality. In such cases, the item and weight is omitted in the calculation of higher-level indices: another instance of information scarcity. Combined with the observation that not all firms implement price adjustments on a monthly or even quarterly basis (Higo and Saita, 2007; Dixon and Grimme, 2022), the inevitable scarcity of information gives rise to possible seeming price staleness. As such, in performing model-based estimation of the latent trend and volatilities based on the observable price fluctuations, the model ought to incorporate another layer of uncertainty that reflects the discrepancy between what we observe and what we want to measure. 3 Heterogeneity/Persistence of Zero-inflation. Fig. [2] is a closer look of four selected indices from the 49 item-category level indices, revealing significant variation in the proportions of sparsity; the proportion of exact zeros ranges from sub-single-digit percentages to cases where non-zero observations are rarer. This shows that cross-sectional heterogeneity in zero inflation is an integral feature of the data that the model shall subsume. (a) Fruits (b) School textbooks & reference books (c) Electricity (d) Rent Figure 2: Time-series plot of data. The side-by-side comparison highlights significant heterogeneity in the frequency and persistence patterns in the occurrences of zeros. We also observe inter-temporal persistence in zero inflation. This happens due to various reasons. Other than the fact that not all firms implement price adjustments at a monthly basis, another such rea- son is systematic institutional restriction. Fig. [2c] succinctly exemplifies this with quarterly inflation for Japanese electricity bills. Focusing on the periods up to December 1995 (1995:Q4), which was prior to the 1995 revision of the Electricity Business Act (a policy initiative for energy liberalization in Japan), electricity prices are significantly stale and zeros are persistent. These items can only go through price revisions after some administrative process, and so observable inflation tends to occur only on an inter- mittent basis, causing persistence in zeros. Other such items include medical services [7] or school fees [11], and these are on the high-end in terms of the proportion of zeros. Conversely, items where prices are largely market-driven are less susceptible to zero-inflation, such as fuel or food. Within-component inter-temporal persistence of zeros (or lack thereof) is another desirable feature of the model. We summarize the desirable features of the zero-inflated model as follows. Following Berry and West (2020), we acknowledge the desirability to treat both the zero and nonzero data under a joint process encompassed within the probabilistic model. 
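As a rough illustration of the data construction described above, the following pandas sketch computes quarterly percentage changes from monthly item-category indices and the per-series share of exact zeros, the quantity whose heterogeneity is discussed here. The DataFrame cpi and the quarterly aggregation rule (averaging the three monthly index values) are our own assumptions for illustration, not details taken from the SBJ documentation.

```python
import pandas as pd

def quarterly_inflation_and_zero_share(cpi: pd.DataFrame):
    """cpi: monthly item-category level indices (DatetimeIndex x 49 columns).

    Returns quarterly percentage changes and, for each series, the share of
    exact-zero changes.
    """
    quarterly = cpi.resample("Q").mean()                  # assumed quarterly conversion
    inflation = 100 * quarterly.pct_change().dropna(how="all")
    zero_share = (inflation == 0).mean()                  # column-wise proportion of exact zeros
    return inflation, zero_share
```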
In modeling the former zero component, the following are crucial: heterogeneity in the propensity of zeros, temporal persistence of zeros, and possible cross-series dependency in the zero-generating mechanism. As for the latter nonzero component, we also acknowl- edge the possibility of exploitable comovements across the cross-sectional dimension in the multivariate time-series; visualization of the empirical pair-wise correlation in Fig. [3] is also affirmative of this view. Finally, the ability to estimate trends and stochastic volatilities should also remain consistent with models 4 of extant literature (e.g. Stock and Watson, 2007, 2016; Eo et al., 2023). The according aim of the follow- ing section is to present the multivariate stochastic volatility model with zero-inflation that addresses the above aspects. Figure 3: Heatmap of the empirical correlation matrix. Left: of the observations. Right: of the zero versus non-zero pseudo-observations. 3 The Model 3.1 Incorporating Time-Varying Non-Zero Probability of Zero We first propose a univariate formulation of the univariate zero-inflated UCSV [Z-UCSV] model. Let y1:T := (y1, ...., yT )T ∈ RT represent the univariate observations. Let θ1:T := (θ1, ..., θT )T, h1:T := (h1, ..., hT )T, and g1:T := (g1, ..., gT )T ∈ RT be the unobserved trend, measurement log-volatility, and trend log-volatility. The UCSV model relates the quadruple (y1:T , θ1:T , h1:T , g1:T ) by t yt = θt + ε(y) , θt = θt−1 + ε(θ) gt = gt−1 + ε(g) , ht = ht−1 + ε(h) , t t t , ε(y) t ε(θ) t ε(g) t ε(h) t | ht | gt | σ2 g | σ2 h ind∼ N1(0, exp ht) ind∼ N1(0, exp gt) ind∼ N1(0, σ2 g) ind∼ N1(0, σ2 h) (1) (2) (3) (4) where ND(µ, Σ) is used to denote the D-dimensional Gaussian distribution with mean and covariance h ∈ R>0 and the initial (µ, Σ). The index runs through t ∈ [T ] := {1, ..., T }. The variances terms σ2 states θ0, g0, h0 ∈ R are also random variables to be particularized later with priors. The time-varying conditional means θ1:T are often referred to as a latent trend component or trend- inflation (Stock and Watson, 2007; Chan, 2013; Chan et al., 2013, 2016; Stock and Watson, 2016; Li and Koopman, 2021; Eo et al., 2023), in consideration of the prior belief that the underlying process governing the permanent level of inflation evolves as a smooth unobservable process. The movement in the trend is often seen as a permanent and potentially persistent shift in the level of inflation, and the deviation of that from the observed component, that is y1:T − θ1:T , is seen as a transient and temporary stochastic shock in the neighborhood of the permanent level: also referred to as the inflation-gap (Cogley et al., 2010; Hwu and Kim, 2019). The log-volatility process in the measurement and transition equation, h1:T and g1:T , are often respectively interpreted as transitory and permanent volatility. g, σ2 From here, we introduce three additional and key latent stochastic processes: y∗ T )T ∈ RT , p1:T := (p1, ..., pT )T ∈ (0, 1)T and π1:T := (π1, ..., πT )T ∈ RT . We specify a latent dynamic logistic 1:T := (y∗ 1, ..., y∗ 5 model, a special case of DGLM (West et al. 1985; West and Harrison 1997, Chap. 14) by yt | y∗ t ind∼ (1 − pt)δy∗ t , pt t = θt + ε(y) y∗ pt = logit−1(πt), πt = πt−1 + ε(π) , t , t + ptδ0, ε(π) t | σ2 π ind∼ N1(0, σ2 π). (5) (6) (7) (8) The triple (θ1:T , g1:T , h1:T ) inherits the same dynamical specification in eq. (2), (3), and (4), but not eq. (1). δc is the point-mass distribution with the mass at c ∈ R. In eq. 
(5), note that y1:T need not equate to the almost-surely non-zero y∗ 1:T a priori; this reflects the discrepancy between what we observe and what we want to measure, described in sec. [2]. The autonomous probabilities p1:T are granted persistence via a Markovian structure p(π1:T | σ2 π, π0) = (cid:81)T t=1 p(πt | πt−1, σ2 π). The model is completed with priors on the static parameters and initial states: σ2 π ∼ IG(απ, βg), σ2 g ∼ IG(αg, βg), σ2 h ∼ IG(αh, βh), π0 ∼ N1(µπ0, σ2 π0 where IG(α, β) is an inverse Gamma distribution with shape and scale parameters (α, β). h0 ∼ N1(µh0 , σ2 h0 g0 ∼ N1(µg0, σ2 g0 θ0 ∼ N1(µθ0 , σ2 θ0 ), ), ), (9) (10) ), In summary, the observed univariate time-series y1:T relates to the unobserved collection of interest, Θ := (y∗ 1:T , θ0:T , g0:T , h0:T , π0:T , σ2 g, σ2 h, σ2 π), (11) via the joint distribution with prior hyper-parameters (µθ0, µπ0, µg0, µh0, σ2 π0 , σ2 θ0 , σ2 g0 , σ2 h0 ): p(y1:T , Θ) = p(θ0)p(g0)p(h0)p(π0)p(σ2 g)p(σ2 h)p(σ2 π) × T (cid:89) t=1 p(gt|gt−1, σ2 g)p(θt|θt−1, gt)p(ht|ht−1, σ2 h)p(y∗ t |θt, ht)p(πt|πt−1, σ2 π)P(yt|y∗ t , πt). Fig. [4] provodes a graphical representation of the Z-UCSV model. g1 g2 gt gT y∗ 1 y∗ 2 y∗ t y∗ T θ1 θ2 θt θT h1 h2 ht hT y1 y2 yt yT p1 p2 pt π1 π2 πt pT πT σ2 π Figure 4: A graphical representation of the conditional (in)dependence structure in the univariate zero- inflated UCSV [Z-UCSV] model. Note 1: Gray and white nodes respectively indicate observed and latent variables. Note 2: Certain prior parameters are omitted for brevity (e.g. static parameters and initial states on the log-volatilities). The Z-UCSV model contains the UCSV model of Stock and Watson (2007) as a special case a priori under some conditions. If we let πt → −∞ for t ∈ [T ], then it must hold in the the limiting model pt = 0 for t ∈ [T ], in which case yt = y∗ t = θt + ε(y) t . (12) 6 3.2 Posterior Inference We provide an overview of a custom Gibbs sampler to perform posterior inference. A detailed exposition is provided in the appendix. Gibbs sampling is performed in blocks as follows. (1) Sample the initial states (θ0, g0, h0, π0) given (θ1, g1, h1, π1, σ2 (2) Sample the static parameters (σ2 g, σ2 π) given (g1:T , h1:T , π1:T ). g, σ2 h, σ2 h, σ2 π). (3) Sample the latent trend θ1:T given (θ0, g1:T , y∗ (4) Sample the stochastic volatilities (g1:T , h1:T ) given (g0, h0, σ2 (5) Sample the dynamic probabilities p1:T given (π0, σ2 (6) Sample the nonzero data y∗ given (θ1:T , h1:T , y1:T ). π, y1:T ). 1:T ). 1:T g, σ2 h, θ0:T , y∗ 1:T ). In steps (1) and (2), the static parameters and initial states may be readily simulated from Gaussians or Inverse Gammas. In step (3), the full conditional posterior of the latent trend is dynamic, linear, and Gaussian; sampling is doable using any usual apparatus to sample the latent states in a linear and Gaus- sian state-space model. We make use of the forward-filtering backward-sampling [FFBS] algorithm (Carter and Kohn, 1994; Frühwirth-Schnatter, 1994). In step (4), the log-volatility processes is efficiently sam- pleable via auxiliary Gaussian mixtures (Kim et al., 1998). We make use of the ten-component mixture approximation (Omori et al., 2007). We then sample the states jointly using precision samplers (Chan and Jeliazkov, 2009) for its computational efficiency. In step (6), the posterior is augmented with nonzero data sampled from Gaussians. 
Step (5) encompasses sampling from the full conditional posterior of the dynamic pre-transformed π, y1:T ) that involves logistic transformations and Binomial likelihoods. To do so probabilities p(π1:T | π0, σ2 efficiently, first introduce auxiliary sparsity indicators γ1:T := (γ1, ..., γT )T, γt | pt ind∼ Binomial(1, pt), and replace the measurement equation in eq. (5) with yt = y∗ t tion. The likelihoods on γ1:T are also re-written via augmented likelihoods p(γt, ωt | πt) over t ∈ [T ]: I{γt = 0}, where I{·} is an indicator func- P(γt | πt) = = (exp πt)γt 1 + exp πt (cid:90) ∞ 0 = (cid:90) ∞ 0 p(γt, ωt | πt) dωt 2−1 exp{(γt − 1/2)πt} exp(cid:8)−ωtπ2 t /2(cid:9) PG(ωt | 1, 0) dωt. PG(· | 1, 0) is the probability density of the Pólya-Gamma distribution PG(1, 0) following the conventional π), the joint distribu- notation (Polson et al., 2013; Windle et al., 2013). With a prior density p(π1:T | π0, σ2 tion is locally re-expressed by p(π1:T , γ1:T | π0, σ2 π) = p(π1:T | π0, σ2 π) T (cid:89) (cid:90) ∞ t=1 0 p(γt, ωt | πt) dωt (cid:90) = RT >0 p(π1:T | π0, σ2 π)p(γ1:T , ω1:T | π1:T ) dω1:T , where ω1:T := (ω1, ..., ωT )T. The integrand in the above characterizes a conditional joint distribution on π) as its marginal, and facilitates efficient sampling from (π1:T , γ1:T , ω1:T ) that admits p(π1:T , γ1:T | π0, σ2 the target p(π1:T | π0, σ2 π) via conditioning on ω1:T , performed in two steps. First, given π1:T , sample from π, γ1:T ) ∝ p(π1:T , γ1:T | π0, σ2 Next, given ω1:T , sample from p(ω1:T | π1:T ) ∝ T (cid:89) t=1 PG(ωt | 1, πt). p(π1:T | π0, σ2 π, γ1:T , ω1:T ) ∝ p(π1:T | π0, σ2 π) (cid:123)(cid:122) (cid:125) Linear Gaussian Latent State (cid:124) 7 T (cid:89) (cid:40) exp − ωt 2 (cid:18) κt ωt − πt t=1 (cid:124) (cid:123)(cid:122) Independent Gaussian “Likelihood" (cid:19)2(cid:41) , (cid:125) where κt := γt − 1/2 for t ∈ [T ]. Since the prior was specified to be a conditionally linear and Gaussian in eq. (8), we may simply rely on, say the FFBS algorithm. We remind that other sampling strategies are applicable, but upon reparameterizations or permitting more layers of augmentation (Albert and Chib, 1993; Held and Holmes, 2006; Frühwirth-Schnatter and Frühwirth, 2007). Generic gradient-based Markov-chain Monte Carlo [MCMC] algorithms such as Hamil- tonian Monte Carlo [HMC] (Neal, 2011) are also applicable to sample the dynamic probabilities, using a HMC-within-Gibbs scheme. A reason to employ Pólya-Gamma augmentation over other schemes is that the augmentation is least computationally prohibitive. Although the efficiency gains may be inconsequential, for instance, on the original univariate UCSV model, they become increasingly appealing as the dimension of the time-series becomes greater, such as in our application with 49 time-series. 3.3 Multivariate Extension We now consider a multivariate extension with cross-sectional dependence structure. We redefine nota- tions by introducing a new index k ∈ [K] that indexes the k-th time-series, where K is fixed. Let yt,k ∈ R represent the k-th observable time-series at time t ∈ [T ], and yt := (yt,1, ..., yt,K)T ∈ RK. Similarly allo- cate the k-th latent trend, state log-volatility, and measurement log-volatility indexed at time t ∈ [T ] to θt,k, gt,k, ht,k ∈ R, and compile these as θt := (θt,1, ..., θt,K)T, gt := (gt,1, ..., gt,K)T, ht := (ht,1, ..., ht,K)T ∈ RK. In addition, let pt,k := P(yt,k = 0 | πt,k) ∈ (0, 1), parameterized by πt,k ∈ R. We write y1:t′ to query the first t′ ∈ [T ] observations. 
We specify the multivariate zero-inflated UCSV [Z-MUCSV] model as follows. + pt,kδ0, t , ht yt,k | pt,k, y∗ t,k t | θt, Σ(y) y∗ θt | θt−1, Σ(θ) t (h),1, ..., σ2 (h),1, ..., σ2 (h),K , gt t,k t (ht)), ind∼ (1 − pt,k)δy∗ ind∼ NK(θt, Σ(y) ind∼ NK(θt−1, Σ(θ) ind∼ NK(gt−1, diag(σ2 ind∼ NK(ht−1, diag(σ2 t (gt)), (h),K pt,k = logit−1(πt,k), πt | πt−1, Σ(π) ind∼ NK(πt−1, Σ(π)). gt | gt−1, σ2 ht | ht−1, σ2 (g),1, ..., σ2 (h),1, ..., σ2 (g),K)), (h),K)), (k ∈ [K]) (k ∈ [K]) We specify priors over k ∈ [K] by θ0,k ind∼ N1(µθ0,k , σ2 ), g0,k ind∼ IG(α(g),k, β(g),k), and σ2 θ0,k ind∼ N1(µg0,k , σ2 ), h0,k ), ind∼ IG(α(h),k, β(h),k). Analogous to the need not necessarily ind∼ N1(µh0,k , σ2 h0,k g0,k (g),k π0,k , is not necessarily assumed to be observed; the equality yt,k = y∗ t,k (h),k ind∼ N1(µπ0,k , σ2 π0,k univariate case, y∗ t hold. ), σ2 If the covariance matrices Σ(y) , and Σ(π) are diagonal, then the multivariate model reduces to K Z-UCSV models [Z-UCSVs]. It may however be the case that the evolution of the underlying zero-inflation probabilities, latent trend, and inflation-gap arises from some unknown dependent joint structure, which is the case in our application. To introduce and infer such dependencies, let , Σ(θ) t t where IW(ν, S) is an inverse Wishart distribution with degrees of freedom ν > K − 1 and S a K × K positive definite scale matrix. Also consider the Cholesky SV parameterizations Σ(π) ∼ IW(νπ, Sπ). (13) Σ(θ) t (gt) = L diag(exp gt,1, ..., exp gt,K)LT, Σ(y) t (ht) = C diag(exp ht,1, ..., exp ht,K)CT, where L = [ℓi,j]i,j∈[K] and C = [ci,j]i,j∈[K] are strictly lower-triangular matrices of size (K × K). The ind∼ N1(µℓi,j , σ2 strictly lower-triangular entries of the Cholesky factors are unrestricted and follow ℓi,j ) ℓi,j and ci,j ) over 1 ≤ j < i ≤ K. This parsimoniously achieves dependency while main- taining the time-varying volatility assumption. Note however that the parameterization is sensitive to the ordering of the variables, in the sense that the variation of the trend evolution on the k-th series y1:T,k is partly described by the variations of the preceding series y1:T,1:k−1. We briefly discuss the choice of our variable ordering in sec. [4.1]. ind∼ N1(µci,j , σ2 ci,j 8 The collection of unobserved parameters of interest in the multivariate case is  Θ :=  , σ2  θ0,1:K, g0,1:K, h0,1:K, π0,1:K , h1:T , C, g1:T , L  (cid:125) (cid:123)(cid:122)  (cid:124) (cid:125) (cid:123)(cid:122) (cid:124) Stochastic Initial Volatilities States (cid:124) (h),1:K, Σ(π) (g),1:K, σ2 , θ1:T (cid:124)(cid:123)(cid:122)(cid:125) (cid:125) (cid:123)(cid:122) Latent Evolution Trend (Co)variances The joint distribution of the Z-MUCSV model reads as  ,     y∗ 1:T (cid:124)(cid:123)(cid:122)(cid:125) Non-zero Observation . (cid:33) p(y1:T , Θ) = p(Σ(π))   (cid:89) 1≤j<i≤K  p(ci,j)p(ℓi,j)  (cid:32) K (cid:89) k=1 p(σ(g),k)p(σ(h),k)p(θ0,k)p(g0,k)p(h0,k)p(π0,k) T (cid:89) t=1 p(gt | gt−1, σ2 (g),1:K)p(θt | θt−1, gt, L)p(πt | πt−1, Σ(π))p(ht | ht−1, σ2 (h),1:K)p(y∗ t | θt, ht, C)P(yt | y∗ t , πt). Gibbs sampling on the Z-MUCSV model is performed in blocks as follows. A detailed exposition for Z-MUCSV is also provided in the appendix. (1) Sample the initial states (θ0, g0, h0, π0) given (θ1, g1, h1, π1, σ2 (g),1:K, σ2 (h),1:K, L, C, Σ(π)). (2) Sample the static parameters (σ2 (g),1:K, σ2 (h),1:K, Σ(π)) given (g1:T , h1:T , π1:T ). 
(3) Sample the latent trend θ1:T given (θ0, g1:T , L, y∗ 1:T ), (4) Sample the stochastic volatilities (g1:T , h1:T ) given (g0, h0, σ2 (g),1:K, σ2 (h),1:K, L, C, θ0:T , y∗ 1:T ). (5) Sample the dynamic probabilities p1:T given (π0, Σ(π), y1:T ). (6) Sample the nonzero data y∗ 1:T given (θ1:T , h1:T , C, y1:T ). (7) Sample the Cholesky factors (L, C) given (y∗ 1:T , θ0:T , h1:T , g0:T ). 4 Results 4.1 Setup We now analyze Japanese inflation using item-category level disaggregated quarterly Japanese CPI time- series data using the proposed Z-UCSVs and Z-MUCSV. We also estimate the non-zero-inflated counter- parts: UCSVs and Z-MUCSV. We also compare their forecasting performances. In all settings, the variables are ordered based on the ordering of 10 major (sectoral) Table [1]. This reflects our prior belief that price fluctuations of energy and public utility sector propagates to that of daily necessities (e.g. food or housing), then to recreational and miscellaneous goods. In both the in- and out-of-sample settings, the prior hyper- parameters are set as (µx0,k , σ2 ) = (0, 10) for x ∈ {θ, g, h, π} and (α(x),k, β(x),k) = (101, 1) for x ∈ {h, g}. Z-UCSVs and Z-MUCSV is additionally granted (α(π),k, β(π),k) = (51, 1) and (νπ, Sπ) = (2K, IK) respec- tively. We simulate R = 10000 + 2000 MCMC samples and discard the initial 2000 samples. The thinning factor is 20. x0,k 4.2 In-Sample Estimation Probability of zeros. We begin by presenting posterior estimates of the probability of zeros in Fig. [5]. A clear distinction between the UCSVs and Z-UCSVs is that the time- and component-specific probability of zeros are non-zero. The Z-UCSVs quantifies the significant heterogeneity in price staleness during periods up to versus after December 1995 (1995:Q4) as a rapidly declining time-specific probability, when the 1995 revision of the Electricity Business Act, an important policy initiative for energy liberalization in Japan, have taken place. 9 (a) Electricity (b) Rent Figure 5: Top: Time-series plot of two selected data. Bottom: posterior summary statistics of the time- varying probability of zero. Blue lines indicate the posterior mean estimate. Latent trend interpolation. Complementing the UCSV model with the time-varying probability of zeros allows for better recovery of informative fluctuations in the trend, especially when there are informative asymmetries in the non-zero portion of observed inflation. Take for instance school textbooks & reference books for study [12] as shown in Fig. [6a], where there are extremely high occurrences of zeros and exclu- sively non-negative observations. Z-UCSV models interpolate the latent trend(s) when the observations take zeros. The non-zero-inflated UCSV provides a mere dynamic summary of the central tendency, in- evitably being shrunken towards zero, exhibiting underestimation. Fig. [6b] shows similar tendency, albeit smoother, when there are cross-sectional dependencies. The reason for this difference is in their generative process. The non-zero-inflated UCSV models do not capture informative asymmetries due to the equality assumption y = y∗ in eq. (12) alongside symmetry of Gaussian measurement density about the trend. This is relaxed in Z-(M)UCSV, employing a gener- ative process where the latent trend precedes the generation of zero-nonzero dichotomy, which allows informative recovery of informative patterns. 
(a) Without Covariation (b) With Covariation Figure 6: Time series plot of school textbooks & reference books index CPI inflation and posterior summary statistics of the latent trend. Dashed lines indicate the posterior mean. Shaded regions indicate the 90% credible interval. Volatility interpolation. Posterior fluctuation of time-varying log-volatilities (g1:T , h1:T ) are useful to gauge whether the observed variation is permanent or transitory. However, given the presence of persistent measurement errors as described in sec. [2], they are not necessarily accurately reflected in the data, and the model should reflect the fact that there is little essential available information about volatility. Fig. [7] compares the time-varying volatility estimates of tobacco [48]. Z-UCSVs and Z-MUCSV estimate volatility 10 to be quite high and uncertain, and interpolate those values even during periods when only zeros are observed. UCSVs or MUCSV however yields overconfidently smaller estimates of volatility, as zeros are directly accumulated to the likelihood as Gaussian measurement-generated data. Figure 7: Time series plot of the tobacco [48] CPI inflation and posterior summary statistics of the time- varying volatility. Top: posterior mean of time-specific measurement/transitory and transition/permanent , are visualized as a blue and red shaded re- standard deviations, (exp{ht,k/2})T gion about the posterior mean of the time-varying latent trend θ1:T,k=48, respectively. Bottom: posterior evolution of the log-volatility. Dashed lines indicate the posterior mean. Shaded regions indicate the 90% credible interval. and (exp{gt,k/2})T t=1 t=1 Cross-sectional Interdependence. Fig. [8] is a heatmap of the posterior mean of the covariance Σ(π) in Z-MUCSV. There is a cluster of indices that are interdependent to each other with regards to the evolution of the zero probability process, mainly consisting of goods of daily necessities, such as food (e.g. fish & seafood [15], dairy products & eggs [17], fruits [19]), clothing (e.g. clothing [27], shirts & sweaters [28], footwear [30]), and related services (clothing services [32], eating out [25], transportation [8; 9]). Figure 8: A heatmap of the posterior statistics on the covariance matrix on the latent zero probability evolutions. Left: posterior mean. Right: A binary matrix indicating whether 0 is contained in the 75% credible interval. Fig. [9a] and [9b] similarly displays a heatmap of the posterior mean on the covariance matrix Σ(y) t (ht) of Z-MUCSV for periods 1980:Q1 and 2023:Q3. At 1980:Q1, we observe a cluster that mainly involves foods (e.g. fish & seafood [15], dairy products & eggs [17], vegetables & seaweeds [18], fruits [19]), clothing 11 (e.g. Japanese clothing [26], clothing [27], shirts & sweaters [28]), as well as public utilities (e.g. electricity [1], gas [2], other fuel & lights [3], private transportation [9], communication [10]). Cross-categorical interrelationships include clothing items versus non-clothing items (e.g. clothing [27] versus dairy products & eggs [17] and fruits [19]; shirt & sweaters [28] versus medicines & health fortification [5] and medical services [7]). In 2023:Q3, the three inter-categorical clusters are still present. 4.3 Out-of-sample Forecasting Exercise We will now provide results obtained from our recursive out-of-sample forecasting exercise. The exercise is executed as follows. 
We initially select a data window spanning 107 quarters (half the length of avail- able data) starting from 1987:Q4 to 2010:Q4 as a training set, and the following 16 quarters of data from 2011:Q1 to 2014:Q1 as the holdout set. Using the training set, forecasts for horizons h = 1, ..., 16 are sim- ulated for each of the thinned MCMC samples to obtain vector forecasts, ˆy[r] t+h,K|t)T ∈ RK, where t corresponds to that of 2010:Q4, and r indexes the r-th thinned MCMC sample. These are used to compute the point forecast by an empirical average t+h,1|t, ..., ˆy[r] t+h|t = (ˆy[r] ˆyt+h|t = (ˆyt+h,1|t, ..., ˆyt+h,K|t)T = 1 R R (cid:88) r=1 ˆy[r] t+h|t. (h ∈ [16]) Interval forecast is similarly computed by the empirical 100(α/2)-th and 100(1 − α/2)-th percentile over the simulated forecasts. Then, the next window of training samples (from 1988:Q1 to 2011:Q1) consisting of the same window size of 107 quarters are used to compute the point forecast over the next holdout set (from 2011:Q2 to 2014:Q2). We repeat this until t is such that its h = 16-step ahead forecast corresponds to the final available data, that is t + 16 = T . In the final window, the training set ranges from 1996:Q3 to 2019:Q3 and the holdout set ranges from 2019:Q4 to 2023:Q3. The quality of the obtained point forecasts for a given model M is evaluated with index- and forecast horizon-specific root mean squared error: RMSE(k, h |M) = (cid:118) (cid:117) (cid:117) (cid:116) 1 36 2019:Q3 (cid:88) t=2010:Q4 (yt+h,k − ˆyt+h,k|t)2. Fig. [10] displays the ratio of the obtained values of RMSE between Z-UCSVs versus UCSVs and Z- MUCSV versus MUCSV: without and with cross-series covariations. We observe in Fig. [10a] that point forecast performances under the absence of cross-series covariation are generally in favor of the zero- inflated specification in terms of the number of instances with improvements. The magnitude of the improvements however are quite subtle, suggesting that the two models are more or less indifferent for the majority of the indices. A notable exception is seen in tobacco [49] which was among the highest in terms of the proportion of zeros present in the data, and the zero-inflated model performs notably well. Other similar yet subtle tendencies are visible in gas [2] and medical services [7]. This result is supportive of the zero-inflated model for forecasting zero-inflated indices. In Fig. [10b], we see a rather dramatic difference, and the results are mixed. On the one hand, the zero- inflated model is inferior over the non-zero-inflated counterpart for indices with lesser occurrences of zeros, particularly those under the “foods" major category as well as clothing [27], repairs & maintenance [34], and recreational services [44]. On the other hand, the zero-inflated model is superior in infrastructural indices such as electricity [1] water & sewerage charges [4], school fees [11], and personal effects [47]. These were either moderate or high in terms of the proportion of zeros, which is expected. The former predictive loss is perhaps due to the inherent diffuseness and instability involved in the inferred covariation patterns when estimating the zero-inflated model, over the non-zero-inflated model. The latter, although incorrectly specified, is perhaps more robust with fewer layers of uncertainty quantification involved by construction. Fig. 
[11] displays the rate of empirical coverage of the 1-quarter-ahead interval forecasts at different levels of interval percentiles, where rates are computed over different time points of forecast. The 45- degree line indicates the ideal case where the β% central prediction interval covers β% of the out-of-sample data. An immediate observation is that all models tend to exhibit over-coverage. In particular, models with covariation overcovers especially for the latter indices due to diffuse posteriors of the Cholesky factor. With the zero-inflated models, there are regions in the lower end of central interval percentiles where the lines are “flat". This is because, depending on the estimated values of the dynamic probability of zeros 12 in the zero-inflated model, the interval forecasts may degenerate to a singleton set {0}, which covers the data only if the data takes zero. For instance, in medical services [7], since roughly 50% of the data are exact zeros, the “flat line" has a height of roughly 50% at low values of the interval percentiles. The “cutoff" on the x-axis also roughly coincides with 50% (with slight variations), since the zero-inflated model estimates directly estimate the probabilities dynamically. With this in mind, the Z-(M)UCSV is generally preferable over (M)UCSV, in the sense that they are generally closer to the 45-degree line in the non-degenerate region. To name a few, we observe that in indices [4; 5; 7; 11; 12], the zero-inflated model with(out) cross-series covariation is closer to the nomi- nal recovery of empirical coverage rates in the non-degenerate regions. The improvement is particularly emphasized with the latter series, where the Cholesky parameters are diffuse, such as in indices [38; 40; 43; 45-49]. We speculate that the zero-inflated specification, which specifies the cross-series covariation of zeros and nonzeros as a separate process, improves the recovery of the non-zero distributional compo- nent, over that of the non-zero-inflated model that misspecifies the covariation under the same covariance. Moreover, the observation that this becomes increasingly relevant with the latter indices further empha- sizes this view, as the non-zero-inflated model with merely the Cholesky SV parameterization inherently accumulates the misspecification. 5 Concluding Remarks and Future Research We have introduced a novel zero-inflated SV model that can estimate the time-varying probability of exact zeros, based on the flexible framework of DGLMs. We also presented an efficient posterior Gibbs sampler that leverages Pólya-Gamma augmentation to efficiently sample the time-varying probability. Applying our model to a comprehensive dataset of disaggregated Japanese CPI inflation, we empirically demonstrated that the zero-inflated model is better able to estimate and recover informative fluctuations in time-varying trends and time-varying volatility, over the traditional SV model which tends to produce overly conservative estimates. In a forecasting exercise, we observed that introducing zero-inflation leads to gains in point forecast performances particularly in cases where exact zeros are prominent. In terms of interval forecasts, the empirical coverage of the interval forecasts of the non-zero component of the distribution was superior over the non-zero-inflated model. 
A significant unanswered substantive question is the extent to which price staleness contributes to ex- plaining aggregate inflation persistence (Cogley et al., 2010; Chan et al., 2016; Hwu and Kim, 2019), and how the Z-(M)UCSV may be used to quantify this. Also, although not explored in this paper, many of the extensions of univariate (UC)SV by extant derivative literature of (UC)SV models can also be incorporated into the Z-(UC)SV model, and it may be worthwhile to explore such extensions. Some examples are deter- ministic volatility feedback mechanisms as explored in Chan (2017) or Huber and Pfarrhofer (2021) via the stochastic volatility in-mean model with time-varying parameters, or stochastic counterparts with lever- age effects. Another important methodological direction pertains to modeling and computational strategies to identify the underlying structural mechanisms driving the heterogeneity in the generation of zeros and nonzeros. Related lines of extant research are the use of dynamic latent factor models (Aguilar and West, 2000; Lopes and Carvalho, 2007; Negro and Otrok, 2008; Gruber and West, 2016; Lavine et al., 2022) to extract explicit, scalable, and interpretable covariations. Another direction is ways to integrate the latent threshold approach (Nakajima and West, 2013a,b) in Z-MUCSV components where conservative interpola- tions are useful. These refinements may enhance the understanding of how price staleness interacts with aspects of price inflation dynamics. Acknowledgments We thank Mototsugu Shintani and Jouchi Nakajima for their helpful comments on the early version of this paper. The second author’s research was partly supported by JSPS KAKENHI Grant Number 22K20132 from Japan Society for the Promotion of Science. References Aguilar, O. and West, M. (2000). Bayesian Dynamic Factor Models and Portfolio Allocation. Journal of Business and Economic Statistics, 18:338–357. 13 13 Albert, J. H. and Chib, S. (1993). Bayesian Analysis of Binary and Polychotomous Response Data. Journal of the American Statistical Association, 88(422):669–679. 8 Barkan, O., Benchimol, J., Caspi, I., Cohen, E., Hammer, A., and Koenigstein, N. (2023). Forecasting CPI inflation components with Hierarchical Recurrent Neural Networks. International Journal of Forecasting, 39(3):1145–1162. 2 Berry, L. R. and West, M. (2020). Bayesian Forecasting of Many Count-valued Time Series. Journal of Business and Economic Statistics, 38:872–887. 2, 4 Bien, K., Nolte, I., and Pohlmeier, W. (2011). An Inflated Multivariate Integer Count Hurdle Model: An Application to Bid and Ask Quote Dynamics. Journal of Applied Econometrics, 26(4):669–707. 2 Carter, C. K. and Kohn, R. (1994). On Gibbs sampling for State Space Models. Biometrika, 81(3):541–553. 7, 22, 24 Chan, J. C. (2013). Moving Average Stochastic Volatility Models with Application to Inflation Forecast. Journal of Econometrics, 176(2):162–172. 1, 2, 5 Chan, J. C. (2017). The Stochastic Volatility in Mean Model with Time-Varying Parameters: An Application to Inflation Modeling. Journal of Business and Economic Statistics, 35(1):17–28. 1, 13 Chan, J. C. and Jeliazkov, I. (2009). Efficient Simulation and Integrated Likelihood Estimation in State International Journal of Mathematical Modelling and Numerical Optimisation, 1(1– Space Models. 2):101–120. 7, 21, 24 Chan, J. C., Koop, G., and Potter, S. M. (2013). A New Model of Trend Inflation. Journal of Business and Economic Statistics, 31(1):94–106. 1, 5 Chan, J. C., Koop, G., and Potter, S. M. (2016). 
A Bounded Model of Time Variation in Trend Inflation, Nairu and the Phillips Curve. Journal of Applied Econometrics, 31(3):551–565. 1, 5, 13 Chaudhuri, K., Kim, M., and Shin, Y. (2015). Forecasting Distributions of Inflation Rates: the Func- tional Auto-Regressive Approach. Journal of the Royal Statistical Society Series A: Statistics in Society, 179(1):65–102. 1 Chib, S. (1992). Bayes Inference in the Tobit Censored Regression Model. 51(1):79–99. 2 Journal of Econometrics, Cogley, T., Primiceri, G. E., and Sargent, T. J. (2010). Economic Journal: Macroeconomics, 2(1):43–69. 5, 13 Inflation-Gap Persistence in the US. American Dixon, H. D. and Grimme, C. (2022). State-dependent or time-dependent pricing? New evidence from a monthly firm-level survey: 1980–2017. European Economic Review, 150:104319. 1, 3 Eo, Y., Uzeda, L., and Wong, B. (2023). Understanding Trend Inflation through the lens of the Goods and Services Sectors. Journal of Applied Econometrics, 38(5):751–766. 1, 2, 5 Faust, J. and Wright, J. H. (2013). Chapter 1 - Forecasting Inflation. In Elliott, G. and Timmermann, A., editors, Handbook of Economic Forecasting, volume 2 of Handbook of Economic Forecasting, pages 2–56. Elsevier. 1 Frühwirth-Schnatter, S. (1994). Data Augmentation and Dynamic Linear Models. Journal of Time Series Analysis, 15(2):183–202. 7, 22, 24 Frühwirth-Schnatter, S. and Frühwirth, R. (2007). Auxiliary Mixture Sampling with Applications to Lo- gistic Models. Computational Statistics & Data Analysis, 51(7):3509–3528. 8 Glynn, C., Tokdar, S. T., Howard, B., and Banks, D. L. (2019). Bayesian Analysis of Dynamic Linear Topic Models. Bayesian Analysis, 14(1):53–80. 2 Gruber, L. F. and West, M. (2016). GPU-accelerated Bayesian Learning and Forecasting in Simultaneous Graphical Dynamic Linear Models. Bayesian Analysis, 11:125–149. 13 14 Hausman, J. A., Lo, A. W., and MacKinlay, A. (1992). An Ordered Probit Analysis of Transaction Stock Prices. Journal of Financial Economics, 31(3):319–379. 2 Held, L. and Holmes, C. C. (2006). Bayesian Auxiliary Variable Models for Binary and Multinomial Re- gression. Bayesian Analysis, 1(1):145–168. 8 Higo, M. and Saita, Y. (2007). Price Setting in Japan: Evidence from CPI Micro Data. Bank of Japan Working Paper Series 07-E-20, Bank of Japan. 3 Huber, F. and Pfarrhofer, M. (2021). Dynamic Shrinkage in Time-Varying Parameter Stochastic Volatility in Mean Models. Journal of Applied Econometrics, 36(2):262–270. 1, 13 Hwu, S. and Kim, C. (2019). Estimating Trend Inflation Based on Unobserved Components Model: Is It Correlated with the Inflation Gap? Journal of Money, Credit and Banking, 51(8):2305–2319. 1, 5, 13 Kim, S., Shephard, N., and Chib, S. (1998). Stochastic Volatility: Likelihood Inference and Comparison with ARCH Models. The Review of Economic Studies, 65(3):361–393. 7, 21, 23 Klenow, P. J. and Kryvtsov, O. (2008). State-Dependent or Time-Dependent Pricing: Does it Matter for Recent U.S. Inflation? The Quarterly Journal of Economics, 123(3):863–904. 1 Kömm, H. and Küsters, U. (2015). Forecasting Zero-Inflated Price Changes with a Markov Switching Mixture Model for Autoregressive and Heteroscedastic Time Series. International Journal of Forecasting, 31(3):598–608. 1 Lavine, I., Cron, A. J., and West, M. (2022). Bayesian Computation in Dynamic Latent Factor Models. Journal of Computational and Graphical Statistics, 31:651–665. Published online December 30, 2021. 2, 13 Li, M. and Koopman, S. J. (2021). 
Unobserved Components with Stochastic Volatility: Simulation-based Estimation and Signal Extraction. Journal of Applied Econometrics, 36(5):614–627. 1, 5 Li, T. and Zheng, X. (2008). Semiparametric Bayesian Inference for Dynamic Tobit Panel Data Models with Unobserved Heterogeneity. Journal of Applied Econometrics, 23(6):699–728. 2 Liu, L., Moon, H. R., and Schorfheide, F. (2023). Forecasting with a Panel Tobit Model. Quantitative Economics, 14(1):117–159. 2 Lopes, H. F. and Carvalho, C. M. (2007). Factor Stochastic Volatility with Time Varying Loadings and Markov Switching Regimes. Journal of Statistical Planning and Inference, 137(10):3082–3091. Special Issue: Bayesian Inference for Stochastic Processes. 13 Nakajima, J. (2011). Time-Varying Parameter VAR Model with Stochastic Volatility: An Overview of Methodology and Empirical Applications. Monetary and Economic Studies, 29:107–142. 2 Nakajima, J. and West, M. (2013a). Bayesian Analysis of Latent Threshold Dynamic Models. Journal of Business and Economic Statistics, 31:151–164. 13 Nakajima, J. and West, M. (2013b). Bayesian Dynamic Factor Models: Latent Threshold Approach. Journal of Financial Econometrics, 11:116–153. 13 Nakamura, E. and Steinsson, J. (2008). Five Facts about Prices: A Reevaluation of Menu Cost Models. The Quarterly Journal of Economics, 123(4):1415–1464. 1 Neal, R. M. (2011). MCMC using Hamiltonian dynamics. In Brooks, S., Gelman, A., Jones, G., and Meng, X., editors, Handbook of Markov Chain Monte Carlo, chapter 5. Chapman and Hall/CRC. 8 Negro, M. D. and Otrok, C. (2008). Dynamic Factor Models with Time-Varying Parameters: Measuring Changes in International Business Cycles. Staff Reports 326, Federal Reserve Bank of New York. 13 Omori, Y., Chib, S., Shephard, N., and Nakajima, J. (2007). Stochastic Volatility with Leverage: Fast and Efficient Likelihood Inference. Journal of Econometrics, 140(2):425–449. 7, 21, 23, 24 15 Polson, N. G., Scott, J. G., and Windle, J. (2013). Bayesian Inference for Logistic Models Using Pólya–Gamma Latent Variables. Journal of the American Statistical Association, 108(504):1339–1349. 2, 7 Powell, B., Nason, G., Elliott, D., Mayhew, M., Davies, J., and Winton, J. (2017). Tracking and Modelling Prices using Web-Scraped Price Microdata: Towards Automated Daily Consumer Price Index Forecasting. Journal of the Royal Statistical Society Series A: Statistics in Society, 181(3):737–756. 2 Primiceri, G. E. (2005). Time Varying Structural Vector Autoregressions and Monetary Policy. The Review of Economic Studies, 72(3):821–852. 2 Rydberg, T. H. and Shephard, N. (2003). Dynamics of Trade-by-trade Price Movements: Decomposition and Models. Journal of Financial Econometrics, 1:2–25. 2 Stock, J. H. and Watson, M. (2016). Core Inflation and Trend Inflation. The Review of Economics and Statistics, 98(4):770–784. 1, 5 Stock, J. H. and Watson, M. W. (2007). Why has U.S. Inflation become Harder to Forecast? Journal of Money, Credit and Banking, 39(1):3–33. 1, 2, 5, 6 Wei, S. X. (1999). A Bayesian Approach to Dynamic Tobit Models. Econometric Reviews, 18(4):417–439. 2 West, M. and Harrison, P. J. (1997). Bayesian Forecasting and Dynamic Models. Springer, 2nd edition. 2, 6 West, M., Harrison, P. J., and Migon, H. S. (1985). Dynamic Generalised Linear Models and Bayesian Forecasting (with discussion). Journal of the American Statistical Association, 80:73–97. 2, 6 Windle, J., Carvalho, C. M., Scott, J. G., and Sun, L. (2013). Efficient Data Augmentation in Dynamic Models for Binary and Count Data. 
2, 7 Zhang, B., Chan, J. C., and Cross, J. L. (2020). Stochastic Volatility Models with ARMA Innovations: An Application to G7 Inflation Forecasts. International Journal of Forecasting, 36(4):1318–1328. 1 16 (a) 1980:Q1, MUCSV (b) 1980:Q1, Z-MUCSV (c) 2023:Q3, MUCSV (d) 2023:Q3, Z-MUCSV Figure 9: A heatmap of the posterior statistics on the time-varying conditional covariance matrix on the (non-zero) observations. Left: posterior mean. Right: A binary matrix indicating whether 0 is contained in the 75% credible interval. 17 Figure 10: Heatmaps of the ratio of an index- and forecast horizon-specific point forecast RMSE. Blue indicates lower RMSE for the zero-inflated model, red indicates lower RMSE for the non-zero-inflated model. (a) 49 Models without cross-series covariation (b) With cross-series covariation 18 Figure 11: Empirical coverage (y-axis) of 1-quarter-ahead β% interval forecasts at different levels of β (x-axis), for different time-series. 19 A Appendix A.1 Tables UCSVs MUCSV Models K = 49 independent UCSV models + Dependent trend evolution + Dependent observational disturbance Zero-inflated UCSVs Z-UCSVs Z-MUCSV + Dependent zero-probability evolution + Dependent trend evolution + Dependent observational disturbance Table 2: A complete list of models estimated in an in- and out-of-sample setting. A.2 Posterior simulator for Z-UCSV(s) A.2.1 Priors Note that the priors are specified by θ0 ∼ N1(µθ0, σ2 θ0 h0 ∼ N1(µh0 , σ2 h0 σ2 g ∼ IG(αg, βg), ), ) g0 ∼ N1(µg0, σ2 g0 π0 ∼ N1(µπ0, σ2 π0 ), σ2 h ∼ IG(αh, βh), ), σ2 π ∼ IG(απ, βπ), where • NK(µ, Σ) is used to represent the K-variate Gaussian distribution with mean µ and covariance Σ; • NK(x | µ, Σ) is used to represent the probability density of a K-variate Gaussian distribution with mean µ and covariance Σ, evaluated at x ∈ RK; • IG(α, β) is for an inverse Gamma distribution with shape and scale (α, β). A.2.2 Gibbs sampler Gibbs sampling procedes as follows. 1. Arbitrarily set the initial values. 2. Sample the initial states from Gaussians, p(θ0 | θ1, g1) ∝ K (cid:89) k=1 N1(θ1 | θ0, egt)N1(θ0 | µθ0, σ2 θ0 ) ∝ N1(θ0 | ˆµθ0, ˆσ2 θ0 ), p(g0 | g1, σ2 p(h0 | h1, σ2 p(π0 | π1, σ2 g) ∝ N1(g1 | g0, σ2 h) ∝ N1(h1 | h0, σ2 π) ∝ N1(π1 | π0, σ2 g)N1(g0 | µg0 , σ2 g0 h)N1(h0 | µh0, σ2 h0 π)N1(π0 | µπ0 , σ2 π0 ) ∝ N1(g0 | ˆµg0 , ˆσ2 ), g0 ) ∝ N1(h0 | ˆµh0 , ˆσ2 h0 ) ∝ N1(π0 | ˆµπ0, ˆσ2 π0 ), ), 20 where ˆσ2 θ0 = ˆσ2 g0 = ˆσ2 h0 = ˆσ2 π0 = 1 + e−g1 , 1 + 1/σ2 g , 1 + 1/σ2 h 1 + 1/σ2 π , , 1/σ2 θ0 1/σ2 g0 1/σ2 h0 1/σ2 π0 ˆµθ0 = ˆµg0 = ˆµh0 = ˆµπ0 = , , µθ0 /σ2 θ0 1/σ2 θ0 µg0/σ2 g0 1/σ2 g0 µh0/σ2 h0 1/σ2 h0 µπ0/σ2 π0 1/σ2 π0 + θ1e−g1 + e−g1 + g1/σ2 g + 1/σ2 g + h1/σ2 h + 1/σ2 h + π1/σ2 π + 1/σ2 π , . 3. Sample the variances from inverse Gammas (cid:32) σ2 g ∼ IG αg + (cid:32) σ2 h ∼ IG αh + (cid:32) σ2 π ∼ IG απ + T 2 T 2 T 2 , βg + , βh + , βπ + 1 2 1 2 1 2 (cid:33) (cid:33) , , T (cid:88) (gt − gt−1)2 t=1 T (cid:88) t=1 (gt − gt−1)2 T (cid:88) (πt − πt−1)2 (cid:33) . t=1 4. Sample measurement stochastic volatilities using auxiliary mixture sampler (Kim et al., 1998; Omori et al., 2007). Fixing t ∈ [T ], note ˆyt := y∗ t − θt = ε(y) t ind∼ N1(0, eht), Equivalently log ˆy2 t = ht + log(ε(y) t )2. This yields t := log(ε(y) ε∗ t )2 ind∼ log χ2(1), which we approximate by a Gaussian mixture by p(ε∗ t ) ≈ 10 (cid:88) n=1 N1(ϵ∗ t |mn, vn)P(st = n), where m1:10 and v1:10 are pre-determined constants of each of the Gaussian components indicated in Omori et al. (2007). 
The discrete auxiliary variables st ∈ [N ] represents the assignment of the t-th observation yt to the st-th component. The approximate scheme proceeds by first sampling the conditionally independent auxiliary variables from P(st = n | ˆyt, ht,k) ∝ N1(log ˆy2 t | ht + mn, vn)P(sk = n). Conditional on s1:T,1:K, sample the log-volatility h using the precision sampler (Chan and Jeliazkov, 2009); draw from h1:T ∼ NT ( ˆµ, ˆV ), where ˆV = ˆµ = ˆV + diag (cid:20) BTB σ2 h (cid:18) h0BTB1T σ2 h (cid:18) 1 σ2 s1 , ..., + diag (cid:19)(cid:21)−1 1 σ2 sT (cid:18) ε∗ 1 − µs1 σ2 s1 , , ..., ε∗ T − µsT σ2 sT (cid:19)(cid:19) . Here, B is a banded difference matrix of size (T × T ) with 1 on the principal diagonal elements and −1 below the 1s. 1T is a vector of ones of length T . 5. Sample trend stochastic volatilities using by replacing in step (4) the variables by ε∗ t = log(ε(θ) t )2 and ˆyt = θt − θt−1. 21 6. Sample the latent trend θ1:T by the forward-filtering backward-sampling algorithm (Carter and Kohn, 1994; Frühwirth-Schnatter, 1994). 7. Sample the dynamic probability p1:T using the Pólya-Gamma augmented forward-filtering backward- sampling algorithm, in two steps. First, sample from the conditional ωt | πt Then, define κt := γt − 1/2 and sample from ind∼ PG(1, πt). p(π1:T | π0, σ2 π, γ1:T , ω1:T ) ∝ p(π1:T | π0, σ2 π) (cid:125) (cid:123)(cid:122) Linear Gaussian Latent State (cid:124) which, equivalently, is t=1 (cid:124) T (cid:89) (cid:40) exp − ωt 2 (cid:18) κt ωt (cid:19)2(cid:41) , − πt (14) (cid:123)(cid:122) Independent Gaussian “Likelihood" (cid:125) where ˜εt is applicable. ind∼ N1(0, 1/ωt) and ε(π) t κt/γt = πt + ˜εt, ind∼ N1(0, σ2 πt = πt−1 + ε(π) t , π). The forward-filtering backward-sampling algorithm 8. Augment y∗ 1:T by simulating for t such that yt = 0. t = yt. Otherwise, set y∗ 9. Return to step 2. y∗ t ind∼ N1(θt, eht), A.3 Posterior simulator for Z-MUCSV A.3.1 Priors Note that the priors are specified by θ0,k h0,k σ2 (g),k ℓi,j where ), θ0,k ind∼ N1(µθ0,k , σ2 ind∼ N1(µh0,k , σ2 ind∼ IG(α(g),k, β(g),k), ind∼ N1(µℓi,j , σ2 ℓi,j h0,k ), ) g0,k π0,k σ2 (h),k ), g0,k ind∼ N1(µg0,k , σ2 ind∼ N1(µπ0,k , σ2 ind∼ IG(α(h),k, β(h),k), Σ(π) ∼ IW(νπ, Sπ), π0,k ), ci,j ind∼ N1(µci,j , σ2 ci,j ), (1 ≤ j < i ≤ K) • NK(µ, Σ) is used to represent the K-variate Gaussian distribution with mean µ and covariance Σ; • NK(x | µ, Σ) is used to represent the probability density of a K-variate Gaussian distribution with mean µ and covariance Σ, evaluated at x ∈ RK; • IG(α, β) is for an inverse Gamma distribution with shape and scale (α, β); • IW(ν, S) is an inverse Wishart distribution with a (K × K) positive definite scale matrix S and degrees of freedom ν > K − 1. A.3.2 Gibbs sampler Gibbs sampling procedes as follows. 1. Arbitrarily set the initial values. 22 2. Recall Σ(θ) 1 (g1) = L diag(eg1,1, ..., eg1,K )LT. 
Sample the initial states from Gaussians given by p(θ0 | θ1, g1, L) ∝ NK(θ1 | θ0, Σ(θ) 1 (g1)) K (cid:89) k=1 N1(θ0,k | µθ0,k , σ2 θ0,k ) ∝ NK(θ0 | ˆµθ0 , ˆVθ0), p(g0 | g1, σ2 (g),1:K) ∝ p(h0 | h1, σ2 (h),1:K) ∝ K (cid:89) k=1 K (cid:89) k=1 N1(g1,k | g0,k, σ2 (g),k)N1(g0,k | µg0,k , σ2 g0,k ) ∝ N1(g0,k | ˆµg0,k , ˆσ2 g0,k ), N1(h1,k | h0,k, σ2 (h),k)N1(h0,k | µh0,k , σ2 h0,k ) ∝ N1(h0,k | ˆµh0,k , ˆσ2 h0,k ), p(π0 | π1, Σ(π)) ∝ NK(π1 | π0, Σ(π)) K (cid:89) k=1 N1(π0,k | µπ0,k , σ2 π0,k ) ∝ NK(π0 | ˆµπ0 , ˆVπ0), where (cid:34) (cid:32) ˆVθ0 = diag (cid:34) 1 σ2 θ0,1 (cid:32) ˆµθ0 = ˆVθ0 diag (cid:33) , ..., 1 σ2 θ0,K (cid:35)−1 + (Σ(θ) 1 (g1))−1 , µθ0,1 σ2 θ0,1 (cid:33) , ..., µθ0,K σ2 θ0,K (cid:35) + (Σ(θ) 1 (g1))−1θ1 , ˆσ2 g0,k = 1 + 1/σ2 (g),k 1/σ2 g0,k 1 + 1/σ2 (h),k ˆσ2 h0,k = 1/σ2 h0,k (cid:32) (cid:34) ˆVπ0 = diag (cid:34) ˆµπ0 = ˆVπ0 diag 1 σ2 π0,1 (cid:32) , , ˆµg0,k = ˆµh0,k = (g),k , g0,k µg0,k /σ2 1/σ2 g0,k µh0,k /σ2 1/σ2 h0,k + g1,k/σ2 + 1/σ2 (g),k + h1,k/σ2 + 1/σ2 (h),k h0,k (h),k , (cid:33) (cid:35)−1 + (Σ(π))−1 , , ..., 1 σ2 π0,K µπ0,1 σ2 π0,1 (cid:33) , ..., µπ0,K σ2 π0,K (cid:35) + (Σ(π))−1π1 . 3. Sample the log-volatility variance terms from inverse Gammas σ2 (g),k ind∼ IG σ2 (h),k ind∼ IG (cid:32) α(g),k + (cid:32) α(h),k + T 2 T 2 , β(g),k + , β(h),k + 1 2 1 2 and dynamic probability variance from T (cid:88) t=1 T (cid:88) t=1 (gt,k − gt−1,k)2 (gt,k − gt−1,k)2 (cid:33) (cid:33) , , p(Σπ | Π1:T , π0) ∝ IW(Σπ | ν0, S0) T (cid:89) t=1 N (πt | πt−1, π0, Σπ) (cid:32) ∝ IW Σπ | ν0 + T, S0 + T (cid:88) (πt − πt−1)(πt − πt−1)T (cid:33) . t=1 4. Sample measurement stochastic volatilities using auxiliary mixture sampler (Kim et al., 1998; Omori et al., 2007). Fixing t ∈ [T ], note (ˆyt,1, ..., ˆyt,K)T := C−1(y∗ t − θt) = ε(y) t ind∼ NK(0, diag((eht,k )K k=1)), The above decomposes into K equations by ˆyt,k log(ε(y) t,k )2. This yields ind∼ N1(0, eht,k ) or equivalently log ˆy2 t,k = ht,k + t,k := log(ε(y) ε∗ t,k )2 ind∼ log χ2(1), 23 which we approximate by a Gaussian mixture by p(ε∗ t,k) ≈ 10 (cid:88) n=1 N1(ϵ∗ t,k|mn, vn)P(st,k = n), where m1:10 and v1:10 are parameters of each of the Gaussian components indicated in Omori et al. (2007). The discrete auxiliary variables st,k ∈ [N ] represents the assignment of the (t, k)-th ob- servation yt,k to the st,k-th component. The approximate scheme proceeds by first sampling the conditionally independent auxiliary variables from P(st,k = n | ˆyt,k, ht,k) ∝ N1(log ˆy2 t,k | ht,k + mn, vn)P(sk,t = n). Conditional on s1:T,1:K, sample the log-volatility h using the precision sampler (Chan and Jeliazkov, 2009); draw from h1:T,k ind∼ NT ( ˆµk, ˆVk), where (cid:34) ˆVk = BTB σ2 (h),k (cid:32) ˆµk = ˆVk + diag (cid:32) 1 σ2 s1,k , ..., (cid:33)(cid:35)−1 , 1 σ2 sT ,k (cid:32) ε∗ h0,kBTB1T σ2 (h),k + diag 1,k − µs1,k σ2 s1,k , ..., ε∗ T,k − µsT ,k σ2 sT ,k (cid:33)(cid:33) . Here, B is a banded difference matrix of size (T × T ) with 1 on the principal diagonal elements and −1 below the 1s. 5. Sample trend stochastic volatilities using by replacing in step (4) the variables by ε∗ t,k = log(ε(θ) t,k )2 and (ˆyt,1, ..., ˆyt,K)T = L−1(θt − θt−1). 6. Sample the latent trend θ by the forward-filtering backward-sampling algorithm (Carter and Kohn, 1994; Frühwirth-Schnatter, 1994). 7. Sample the dynamic probability π using the Pólya-Gamma augmented forward-filtering backward- sampling algorithm. 
The original full conditional density reads as p(π1:T | π0, Σ(π), γ1:T ) ∝ p(π1:T | π0, Σ(π)) T (cid:89) K (cid:89) t=1 k=1 (eπt,k )γt,k 1 + eπt,k , (15) which is not straightforward to simulate from. Instead, sample in the following two steps. First, sample from the conditional Then, define κt,k := γt,k − 1/2 and sample from ωt,k ind∼ PG(1, πt,k). p(π1:T | π0, Σ(π), γ1:T , ω1:T ) ∝ p(π1:T | π0, Σ(π)) (cid:125) (cid:124) (cid:123)(cid:122) Linear Gaussian Latent State T (cid:89) t=1 (cid:124) (cid:40) K (cid:89) − exp (cid:18) κt,k ωt,k ωt 2 (cid:123)(cid:122) Independent Gaussian “Likelihood" − πt,k k=1 (cid:19)2(cid:41) , (cid:125) (16) which, equivalently, is (κt,1/γt,1, ..., κt,K/γt,K)T = πt + ˜εt, πt = πt−1 + ε(π) t , where ˜εt sampling algorithm is applicable. ind∼ NK(0, diag(1/ωt,1, ..., 1/ωt,K)) and ε(π) t ind∼ NK(0, Σ(π)). The forward-filtering backward- 8. Sample the Cholesky factor L as follows. Note L−1(θt − θt−1) ∼ NK(0, diag(eg1,t, ..., egK,t)), 24 which equivalently is −(θt,2 − θt−1,2) ∼ N1(ℓ2,1(θt−1,1 − θt,1), egt,2), ... −(θt,K − θt−1,K) ∼ N1 ℓK,k(θt−1,k − θt,k), egt,K . (cid:33) (cid:32)K−1 (cid:88) k=1 Given θ0:T and g1:T , the lower triangular entries of L−1 may be sampled from the respective equa- tions sequentially. For instance, for the last equation, we may write − ˆyK = ˆXKℓK + εK, εK ∼ N (0T , diag(eg1,K , ..., egT ,K )) where it was defined ˆyK := θ1:T,K − θ0:T −1,K, ˆXK := θ1:T,1:K−1 − θ0:T −1,1:K−1, ℓK = (ℓK,1, ..., ℓK,K−1)T. Simulate ℓK | θ1:T,K, g1:T,K ∼ NK−1( ˆµK, ˆVK), where ˆVK = (diag(1/σ2 ˆµK = ˆVK[diag(µℓK,1/σ2 ℓK,1 , ..., 1/σ2 ) + ˆX T ℓK,K−1 , ..., µℓK,K−1/σ2 ℓK,1 K diag(e−g1,K , ..., e−gT ,K ) ˆXK)−1, ℓK,K−1 ) + ˆX T K diag(e−g1,K , ..., e−gT ,K ) ˆyK]. 9. Sample the Cholesky factor C by replacing in step (8), for k = 2, ..., K, the variables as ˆyk := y∗ 1:T,k − θ1:T,k, ˆXk := y∗ 1:T,1:k−1 − θ1:T,1:k−1, ck = (ck,1, ..., ck,k−1)T, ˆVk = (diag(1/σ2 ˆµk = ˆVk[diag(µck,1/σ2 ck,1 , ..., 1/σ2 ck,k−1 ) + ˆX T , ..., µck,k−1 /σ2 ck,1 k diag(e−h1,k , ..., e−hT ,k ) ˆXk)−1, ) + ˆX T k diag(e−h1,k , ..., e−hT ,k ) ˆyk], ck,k−1 and simulate ck | y∗ 1:T,k, θ1:T,k, h1:T,k ∼ Nk−1( ˆµk, ˆVk). 10. Fix t ∈ [T ]. Recall Σ(y) t (ht) = C diag(eht,1, ..., eht,K )CT. Define Kt := {k ∈ [K] | yt,k = 0}. If Kt = [K], that is yt = 0K, then fully augment y∗ t by simulating y∗ t ind∼ NK(θt, Σ(y) t (ht)). If ∅ ⊊ Kt ⊊ [K], then first re-order as (cid:20)y∗ y∗ t,Kt t,Kc t (cid:21) | θt, Σ(y) t , ht ind∼ NK (cid:32)(cid:20)θt,Kt θt,Kc t (cid:34) (cid:21) , Σ(y) Σ(y) t (ht)Kt,Kt Σ(y) t ,Kt Σ(y) t (ht)Kc t (ht)Kt,Kc t t (ht)Kc t ,Kc t (cid:35)(cid:33) , where Kc t is the complement of Kt in [K]. Augment y∗ t,Kt from the conditional y∗ t,Kt | θt, Σ(y) t , ht, where (cid:110) y∗ t,Kc t = yt (cid:111) ind∼ N|Kt|( ˆµt, ˆVt) ˆµt = θt,Kt + Σ(y) ˆVt = Σ(y) t (ht)Kt,Kc t (ht)Kt,Kt − Σ(y) t [Σ(y) t (ht)Kc ]−1(yt,Kc t ,Kc t [Σ(y) t (ht)Kc t ,Kc t ), t − θt,Kc ]−1Σ(y) t t (ht)Kc t ,Kt. t (ht)Kt,Kc t If Kt = ∅, then set y∗ t = yt. 11. Return to step 2. 25
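To make the Pólya-Gamma augmented update of step 7 concrete, the sketch below implements the univariate case used in the Z-UCSV sampler (eq. (14)): draw ω_t ~ PG(1, π_t), form Gaussian pseudo-observations κ_t/ω_t with variances 1/ω_t, and run a forward-filtering backward-sampling pass through the random-walk state equation. This is a minimal illustration rather than the authors' code: the PG draw is delegated to the third-party `polyagamma` package (an assumed dependency whose call signature may differ), variable names are hypothetical, and the scalar recursions would need to be replaced by their K-variate analogues with state innovation covariance Σ^(π) for the Z-MUCSV case of eq. (16).

```python
import numpy as np
from polyagamma import random_polyagamma  # assumed third-party dependency for PG(1, z) draws

def sample_zero_prob_path(gamma, pi_current, pi0, sig2_pi, rng):
    """One Gibbs update of the latent path pi_{1:T} for a single series (eq. (14)).

    gamma      : (T,) binary zero/non-zero indicators, giving kappa_t = gamma_t - 1/2
    pi_current : (T,) current draw of the path, used only for the PG(1, pi_t) draws
    pi0        : initial state pi_0 (conditioned on in this step)
    sig2_pi    : innovation variance of the random walk pi_t = pi_{t-1} + eps_t
    """
    T = gamma.shape[0]

    # (i) Polya-Gamma augmentation: omega_t | pi_t ~ PG(1, pi_t)
    omega = random_polyagamma(z=pi_current, random_state=rng)

    # (ii) Gaussian pseudo-observations z_t = kappa_t / omega_t with variance 1 / omega_t
    kappa = gamma - 0.5
    z, v = kappa / omega, 1.0 / omega

    # (iii) Forward Kalman filter for z_t = pi_t + N(0, v_t), pi_t = pi_{t-1} + N(0, sig2_pi)
    m, C = np.empty(T), np.empty(T)
    a, R = pi0, sig2_pi                      # one-step-ahead mean / variance at t = 1
    for t in range(T):
        K = R / (R + v[t])                   # Kalman gain
        m[t], C[t] = a + K * (z[t] - a), (1.0 - K) * R
        a, R = m[t], C[t] + sig2_pi          # predict t + 1

    # (iv) Backward sampling of pi_T, pi_{T-1}, ..., pi_1
    draw = np.empty(T)
    draw[T - 1] = rng.normal(m[T - 1], np.sqrt(C[T - 1]))
    for t in range(T - 2, -1, -1):
        B = C[t] / (C[t] + sig2_pi)
        draw[t] = rng.normal(m[t] + B * (draw[t + 1] - m[t]), np.sqrt(C[t] * (1.0 - B)))
    return draw
```

The two-block structure, first augment with ω and then run a standard linear-Gaussian smoother, is what restores conditional conjugacy for the dynamic zero probabilities.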
arXiv:2411.12580v1 [cs.CL] 19 Nov 2024

Preprint, under review.

PROCEDURAL KNOWLEDGE IN PRETRAINING DRIVES REASONING IN LARGE LANGUAGE MODELS

Laura Ruis* (AI Centre, UCL), Maximilian Mozes (Cohere), Juhan Bae (University of Toronto & Vector Institute), Siddhartha Rao Kamalakara (Cohere), Dwarak Talupuru (Cohere), Acyr Locatelli (Cohere), Robert Kirk (AI Centre, UCL), Tim Rocktäschel (AI Centre, UCL), Edward Grefenstette (AI Centre, UCL), Max Bartolo (Cohere)

ABSTRACT

The capabilities and limitations of Large Language Models (LLMs) have been sketched out in great detail in recent years, providing an intriguing yet conflicting picture. On the one hand, LLMs demonstrate a general ability to solve problems. On the other hand, they show surprising reasoning gaps when compared to humans, casting doubt on the robustness of their generalisation strategies. The sheer volume of data used in the design of LLMs has precluded us from applying the method traditionally used to measure generalisation: train-test set separation. To overcome this, we study what kind of generalisation strategies LLMs employ when performing reasoning tasks by investigating the pretraining data they rely on. For two models of different sizes (7B and 35B) and 2.5B of their pretraining tokens, we identify what documents influence the model outputs for three simple mathematical reasoning tasks and contrast this to the data that are influential for answering factual questions. We find that, while the models rely on mostly distinct sets of data for each factual question, a document often has a similar influence across different reasoning questions within the same task, indicating the presence of procedural knowledge. We further find that the answers to factual questions often show up in the most influential data. However, for reasoning questions the answers usually do not show up as highly influential, nor do the answers to the intermediate reasoning steps. When we characterise the top ranked documents for the reasoning questions qualitatively, we confirm that the influential documents often contain procedural knowledge, like demonstrating how to obtain a solution using formulae or code. Our findings indicate that the approach to reasoning the models use is unlike retrieval, and more like a generalisable strategy that synthesises procedural knowledge from documents doing a similar form of reasoning.

1 INTRODUCTION

Current advancements in artificial intelligence are characterised by the increasing scale of datasets, computational power, and model size (Kaplan et al., 2020; Hoffmann et al., 2022). While one of the manifestations of this approach, Large Language Models (LLMs), is rapidly saturating benchmarks measuring reasoning capabilities (Cobbe et al., 2021; Hendrycks et al., 2021, inter alia), the debate over whether they exhibit ‘genuine understanding’ is ongoing (as reviewed by Mitchell & Krakauer, 2023). The well-documented robust and versatile reasoning abilities (Webb et al., 2023; 2024; McLeish et al., 2024, inter alia) sharply contrast with the line of work highlighting the brittleness of LLM reasoning (Razeghi et al., 2022; McCoy et al., 2023; Ullman, 2023; Wu et al., 2024; Mahowald et al., 2024). A finding common to these works is that LLM reasoning depends on the frequency of similar problems in the training data.

*Work done while at Cohere, correspondence to [email protected]
Figure 1: A summary of our most important findings towards answering the question “how do LLMs learn to reason from pretraining data?” We rank 5 million pretraining documents according to their influence on the likelihood of completions of two models, Cohere’s Command R 7B and 35B, for 40 factual and 40 reasoning queries. We find that procedural knowledge drives influence on reasoning traces: a document’s influence on the reasoning traces of one query is strongly predictive of that document’s influence on another query with the same mathematical task, in 3 of the 4 cases. We show this on the left through arrows indicating influence, and on the right through correlations of all 5M document influences between a random sample of 10 queries per task (a plot with all queries can be found in Figure 12 in Appendix A.9.1). Further, we find that the answers to factual queries often show up in the top 0.01% of pretraining documents (see text in bottom row of documents), but not for the reasoning questions. Finally, individual documents influence reasoning traces much less strongly than factual answer generations, indicating models rely on documents less when reasoning. All documents and queries shown are redacted versions of real data, and the relations are based on documents found in the top 50 for the queries. A key reason why benchmark saturation cannot be taken at face value is the issue of data contamina- tion: benchmark data often appear in the pretraining set. Where we typically measure generalisation in machine learning by separating the test data from the training data, the trillions of tokens used in the design of current state-of-the-art models cannot reasonably be separated from benchmarks anymore. Recent works have documented the extent of the contamination issue (Brown et al., 2020; Touvron et al., 2023; Gunasekar et al., 2023; Yang et al., 2023; Deng et al., 2024), showing that many common benchmarks have a high percentage of contaminated data. Additionally, Yang et al. (2023) show that even rephrased benchmark data that elude N-gram-based detection methods can impact performance, further complicating the issue. However, it is unclear how and when state-of-the-art LLMs rely on contaminated data to perform reasoning. This raises the question: “how do LLMs learn to reason from pretraining data?” In this work, we take a complementary approach to most interpretability research by focusing on the pretraining data used by language models to generalise, rather than interpreting the model weights themselves. We investigate which data influence the model’s produced reasoning traces and how those data relate to the specific problems being addressed. Are models simply ‘retrieving’ answers from previously seen pretraining data and reassembling them, or are they employing a more robust strategy for generalisation? We use a technique from robust statistics (Hampel, 1974) adapted to large-scale Transformers (Koh & Liang, 2017; Grosse et al., 2023) to compute the influence of pretraining documents on the likelihood of prompt-completions pairs under a trained model. In the extreme case, a language model answering reasoning questions may rely heavily on retrieval from parametric knowledge influenced by a limited set of documents within its pretraining data. In this scenario, 2 Preprint, under review. specific documents containing the information to be retrieved (i.e. the reasoning traces) contribute significantly to the model’s output, while many other documents play a minimal role. 
Conversely, at the other end of the spectrum, the model may draw from a broad range of documents that are more abstractly related to the question, with each document influencing many different questions similarly, but contributing a relatively small amount to the final output. We propose generalisable reasoning should look like the latter strategy. We investigate the pretraining data (called ‘documents’) that are influential for a set of factual and reasoning questions (called ‘queries’). The reasoning questions cover three mathematical tasks; two-step arithmetic, calculating slopes, and solving linear equations. The factual questions require retrieving from parametric knowledge. We experiment with two LLMs (7B and 35B) and 2.5B of their pretraining tokens. Our findings are as follows (summarised in Figure 1): 1. Procedural knowledge in documents drives influence on reasoning traces: a docu- ment’s influence on the reasoning traces of a query is strongly predictive of that document’s influence on another query with the same mathematical task (Figure 1 and Finding 1 in Sec- tion 5.1). By contrast, this does not hold for factual queries. This indicates that documents often contribute similarly to many questions that require applying the same procedure to different numbers. The correlation is particularly strong for queries involving calculating a slope, and for that task we find procedures for a solution in code or math in the top 0.002% of ranked pretraining data multiple times for most queries (Finding 4 in Section 5.2). 2. The models rely less on individual documents for reasoning questions, and the set of documents they rely on is less specific: we find that the magnitude of influence of documents per unit of query information generated by the models is usually much lower for reasoning questions than for factual questions (Finding 2 in Section 5.1). Further, the overall magnitude of influence of the set of documents is less volatile. The former indicates that when generating reasoning traces, the models rely less on each individual document per nat of query information they generate than for factual retrieval. The latter indicates that for a random subset of 2.5B pretraining tokens, it is more up to chance whether highly influential documents are part of it for factual questions than for reasoning questions. Taken together, this indicates the models likely generalise from a more general set of documents for reasoning than for factual questions, relying on each individual document less. 3. For the factual questions, the answer often shows up as highly influential, whereas for reasoning questions it does not: we look at the top 500 (top 0.01%) influential documents for each query, and find the answer to factual questions relatively often (55% of the queries for the 7B, and 30% for the 35B), and almost never for reasoning questions, even when we do find the answers in the larger set of 2.5B tokens (Finding 3 in Section 5.2). 4. We find evidence for code being important for mathematical reasoning: code data is strongly overrepresented w.r.t. the training distribution for the top portions of the positively and negatively influential rankings for reasoning queries (Finding 5 in Section 5.2). Our findings suggest a generalisation strategy for reasoning that is unlike retrieval from the paramet- ric knowledge formed during pretraining. 
Instead, the models learn to apply procedural knowledge extracted from documents involving similar reasoning processes, either in the form of general de- scriptions of procedures, or applications of similar procedures. This indicates that we may not need to cover every possible case in the pretraining data; focusing on high-quality data demonstrating procedures across diverse reasoning tasks could be more effective. Although our findings are lim- ited to models learning from procedures within the same mathematical task, we observe that code plays a significant role for all tasks we look at. This raises an interesting question: is there a type of pretraining data — such as code — from which models, particularly larger ones, can learn about multiple tasks? Understanding the extent of procedural generalisation can inform future pretraining strategies and help determine where to concentrate data selection efforts. 2 RELATED WORK The subfield with the aim of understanding how large language models generalise is growing rapidly. This question can be approached in different ways, and many recent works interpret weights of smaller models on synthetic tasks to explain particular phenomena that we observe in language 3 Preprint, under review. models at scale such as grokking (Wang et al., 2024), in-context learning (Olsson et al., 2022; Singh et al., 2024), or superposition (Elhage et al., 2022; Bricken et al., 2023). Scaling interpretability methods to modern-sized LLMs is challenging for many reasons, of which one is computational tractability. Nonetheless, there are a few works that apply techniques from interpretability to lan- guage models at scale. Templeton et al. (2024) use sparse autoencoders to extract interpretable features from Claude 3 Sonnet, and demonstrate how to use these features to control model outputs. Grosse et al. (2023) adapt EK-FAC influence functions (George et al., 2018) to large-scale Trans- formers, and use them to understand what kind of pretraining data influence completions of models up to 50B parameters. The authors show, among many other things, that larger models rely on pre- training data that are more abstractly related to the completion than smaller models. In this work, we build on the results of Grosse et al. (2023), leaning heavily on their efforts to make influence functions tractable at scale, but focus instead on understanding reasoning specifically. 3 COMPUTING THE INFLUENCE OF A DOCUMENT ON A COMPLETION Background on influence functions. Given a pretrained model θu that parametrises a distribution over next tokens conditioned on a prompt pθu (yc | yp) (where yc = {y1, . . . , ym} is a com- pletion, yp = {y1, . . . , yn} a prompt, and u indicates the parameters are not necessarily trained to convergence), we are interested in finding data from the pretraining set D = {xi}N i=1 that in- fluence the completion. Put differently, we want to know which examples in the pretraining set ‘caused’ a completion. To this end, we use EK-FAC influence functions for large-scale transform- ers as proposed by Grosse et al. (2023). The parameters θu are typically found by performing a gradient-based iterative algorithm on an objective function and stopping based on some crite- rion. We want to know the influence of a training document xj ∈ D on the parameters θu (which can be reformulated to influence on any continuous differentiable function of θu using the chain- rule). 
We can calculate influence exactly by removing xj from the original training set, re-training the model, and comparing the resulting set of parameters (or a function thereof) to the originally trained model. This is intractable for any interesting number of documents and parameters. Influ- ence functions estimate this counterfactual by taking a Taylor expansion of the response function:1 θ⋆(ϵ) = arg minθ∈RD i̸=j L(xi, θ) + ϵL(xj, θ), where L(·) is a loss function, like the cross- entropy. The first-order Taylor approximation around ϵ = 0 of the response function is used to reason about how the optimal parameters change if you change ϵ, which changes the weight of the document we want to know the influence of. Using the implicit function theorem, influence can then be defined as follows: Iθ⋆ (x) = dθ⋆ θJ (θp, D) dϵ is the Hessian of the pretraining objective. Using the chain-rule, we estimate influence of a training document x = {x1, . . . , xk} on the completion given a prompt by approximating the following: = −H−1∇θL(x, θ⋆). Where H = ∇2 (cid:12) (cid:12) (cid:12)ϵ=0 (cid:80) 1 N If (x) = −∇θf (θu)T H−1∇θL(x, θu) (1) Since we are investigating models with billions of parameters D, the above Hessian is intractable, and we estimate it using EK-FAC estimation. For a detailed derivation, the reader is referred to Section 2 and 3 in Grosse et al. (2023). We will mention here that it involves estimating two ex- pectations Epθ [∆θ∆θT ] and Epθ [AAT ] where A denotes the activations of the model. To make this estimation tractable we make a number of simplifying assumptions across all our estimations, like independence between layers and we only take into account MLP parameters of the transformer layers (Grosse et al., 2023). A full list of approximations can be found in Appendix A.7. Adapting EK-FAC influence functions to our problem. Prior work has shown that EK-FAC in- fluence functions more accuractely estimate the counterfactual given by the response function than other types of influence functions (Grosse et al., 2023). However, besides influence on language model completions, we are also interested in influence on the accuracy of a trained language model when answering questions. We can only calculate the influence on a continuous differentiable func- tion, and to the best of our knowledge, no work has shown that influence functions also estimate effect on the underlying accuracy of text produced by next-word prediction. As a proxy for accu- racy, we take as a continuous differentiable function the cross-entropy loss function (f in Equation 1The true response function to derive influence functions for non-converged parameters is the Proximal Bregman function, but this step is left out here. The reader is referred to a derivation in Grosse et al. (2023). 4 Preprint, under review. Table 1: Example from the reasoning set that involves simple two-step arithmetic. Table 2: Example from the factual set that requires retrieving the right answer. Reasoning query set (arithmetic) Factual query set Prompt Calculate the answer: (7 - 4) * 7 Think step-by-step. Completion (by 7B) First, let’s evaluate the expression inside the parentheses: 7 - 4 = 3 Now, let’s multiply the result by 7: 3 * 7 = 21 Therefore, the answer to the expression is 21. Prompt What is the tallest mountain in the world and how tall is it? Completion (by 7B) The tallest mountain in the world is Mount Everest, which is located in the Himalayas. It is 29,029 feet tall. 1). 
In Appendix A.1 we show that the influence calculated in this way surfaces documents that have a causal effect on the accuracy of a 7B model fine-tuned to do reasoning and reading comprehension tasks. Namely, if we remove documents from the fine-tuning data according to their influence and re-train the model, the accuracy drops significantly more than if we take out the same amount of documents randomly, or the same amount of documents using gradient similarity. In parallel, we motivate the use of EK-FAC estimations of the Hessian, by showing it significantly improves over a method using only first-order information. It is only reasonably possible to loop over the pretraining data sample once, and to store more than a single query gradient in memory (which has the same memory complexity as the model itself), Grosse et al. (2023) use singular-value decomposition (SVD). Instead of SVD, we use approximate SVD with a probabilistic algorithm (Halko et al., 2011), which significantly speeds up the compu- tation of the query gradients. We justify each approximation we do in Appendix A.2.1. We approximate Equation 1 to get scores for documents from the pretraining data D that represent the influence they have on a completion yc given a prompt yp. Given the counterfactual question approximated by the response function, an influence score of 1 implies the log-probability of the sequence yc is increased by 1 (Grosse et al., 2023). To compare influence scores across different completions (and token lengths), we normalise the scores for each query by the information content of its completion yc, measured in nats. The information content of a query is defined as I(yc) = − log (pθu (yc | yp)). The influence scores induce a ranking over documents from most positively to most negatively influential, where a score can be interpreted as the increase (or decrease) in log- probability per nat of query information. The pipeline is shown in Figure 6 in the Appendix. 4 EXPERIMENTAL SETUP Query set. We collect a query set with different types of questions, of which 40 are reasoning questions and 40 factual questions. Note that it is only tractable to loop over the pretraining sample we look at once, so we need to be able to store all query gradients in memory and cannot go beyond about 80 questions. For the reasoning questions, we identify two types of mathematical reasoning each model can do robustly with zero-shot chain-of-thought (Wei et al., 2022). We do this by evaluating the models on larger sets of 100 questions for each type of reasoning, and selecting tasks where it gets at least 80% correct. This surfaces simple two-step arithmetic for the 7B model (Table 1), calculating the slope between two numbers for both models (of which two redacted examples are shown in Figure 1), and solving for x in linear equations for the 35B model (see Table 9 in Appendix A.3 for prompt-completion pairs of the linear equations task). We ensure no query ever requires outputting a fraction. To make the results between 7B and 35B more comparable, we use the same slope questions for both models. For the 40 factual questions, we make sure the model gets half right and half wrong, allowing us to identify failures of retrieving facts from parametric knowledge, and we also ensure 16 of 40 overlap between models. We calculate influence over the full completion, which includes the chain-of-thought in the reasoning case. The query sets are provided in the supplement. 5 Preprint, under review. Documents set. 
We want to compare the influence of pretraining data on reasoning by differently sized models (7B and 35B), so we select two models that are trained on the same data. The EK-FAC estimation of the Hessian only needs to be done once per model, but the other terms in Equation 1 require two forward- and backward-passes through the model per document-query pair. This means that obtaining a ranking over pretraining data for a single query has a computational complexity similar to pretraining itself. To overcome this issue, we sample a set of documents from the pre- training data that covers multiple examples from each batch seen during pretraining, giving a total of 5 million documents (approximately 2.5B tokens) distributed similary as the training distribution. We batch queries and obtain the influence scores in parallel. Each document contains 512 tokens.2 EK-FAC estimation. To estimate the Hessian for the 7B and 35B models (the expectations from Section 3), we randomly sample 100 000 documents equally spread-out through pretraining for both models. Details on how exactly we approximate the Hessian are in Appendix A.2. We note here that although this aspect of the pipeline requires estimating over 300B parameters representing second- order information, the bottleneck remains calculating document gradients. Models. We look at two models of different sizes, 7B and 35B, which are base and supervised fine-tuned versions of Cohere’s Command R series.3 We estimate the second order information and calculate document gradients using the base models, and generate completions and calculate the query gradients using the models fine-tuned with supervised instruction-tuning. The reason for choosing this setup is that the fine-tuned models are much better at instruction following. This means we are assuming the EK-FAC for the fine-tuning phase is the identity (Bae et al., 2024), and we are focusing only on the influence of the pretraining data and ignoring the fine-tuning data. 5 EXPERIMENTS AND RESULTS We compare the rankings (from most positively to most negatively influential) over pretraining data produced by influence functions for reasoning questions to the rankings for factual questions (which can only be answered by retrieving parametric knowledge). We first analyse the rankings quanti- tatively by looking at the influence of different documents per nat of generated query information (Section 5.1). We aim to elucidate how generalisable the information in the influential documents is, and how many documents the model is relying on when doing reasoning compared to retrieval. Then, in Section 5.2 we investigate how the documents relate to the queries qualitatively. 5.1 QUANTITATIVE ANALYSIS Finding 1: There is a significant positive correlation between the influence scores of documents for queries with the same underlying reasoning task, indicating that these documents are relevant for questions requiring the same procedure applied to different numbers. If models are relying on documents that contain ‘general’ knowledge that is applicable to any query with the same task (e.g. queries that require finding the slope between two points for many different points), we would expect there to be a significant correlation in the influence scores for these queries. We calculate the Pearson’s R correlation of all 5 million document scores for all query combinations (leading to 802 correlations per model). 
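As a concrete illustration of this correlation analysis, the sketch below computes the query-by-query Pearson correlation matrix from a matrix of per-document influence scores. It is a schematic stand-in rather than the paper's code: the array `scores`, its placeholder size, and the random data are hypothetical (the actual analysis uses roughly 80 queries by 5 million documents).

```python
import numpy as np

# Hypothetical influence matrix: scores[i, j] is the influence of pretraining
# document j on query i's completion, per nat of query information.
# Real scale is ~80 queries x 5,000,000 documents; a smaller placeholder is used here.
n_queries, n_docs = 80, 100_000
rng = np.random.default_rng(0)
scores = rng.standard_normal((n_queries, n_docs))   # placeholder data

# Pearson's R between every pair of queries, computed over all document scores.
# np.corrcoef treats each row as one variable, yielding all pairwise correlations
# between queries as an (n_queries x n_queries) matrix.
corr = np.corrcoef(scores)

print(corr.shape)   # (80, 80)
print(corr[0, 1])   # correlation between the first two queries
```

Per-pair significance levels can be obtained, for example, with scipy.stats.pearsonr applied to the corresponding pairs of rows.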
The results can be seen in the right panel of Figure 1 for a subsample of 10 queries per task, and all query correlations can be found in Figure 12 in Appendix A.9.1. We find a strongly significant (p-values all below 4e − 8) positive correlation between many queries of the same reasoning type, and a strongly significant absence of correlation (p-values all around 4e − 3) for most (but not all) factual queries or other combinations (e.g. reasoning queries of different types). This means that many documents have a similar influence on the same type of reasoning. Given that each type of reasoning query requires applying the same procedure to different numbers, the positive correlation indicates that the influence scores for reasoning queries pick up on procedural knowledge. The correlations are strongest for the slope queries by the 35B model, and this is also the type of reasoning the model can do most robustly compared to solving linear equations. For the model to be able to solve linear equations with an accuracy of more than 80%, we restrict the calculations to lead to positive x, whereas for the slopes questions the answers can be positive or negative. In Appendix A.9.1 we falsify the hypothesis that the correlations are 2We choose 512 tokens because qualitatively interpreting more is hard (usually spanning multiple topics). 3https://cohere.com/command 6 Preprint, under review. caused by the fact that the reasoning questions are superficially similar to each other, by using a set of control queries that are also superficially similar but do not require any reasoning and repeating the entire experiment. For the control queries we mostly do not observe a correlation. In Appendix A.9.1 we highlight examples of queries with high or low correlation for different query sets, finding that some of the correlation seems driven by formatting of reasoning steps, and most by reasoning procedure. Finding 2: When reasoning, the model on average relies on each individual document less per generated nat of information than when answering factual questions, and the total magnitude of influence is much less volatile, indicating it is generalising from a more general set of documents. The effect is more pronounced for the larger model. In Figure 2 we show the total influence for different percentiles of the positive parts of the rankings. Figure 2: The total influence per nat of query completion information for different portions of the positive ranking over documents, left for the 7B model, right for the 35B. The total influence per nat is usually lower for reasoning questions than for factual questions, and the influence per document varies more for factual questions than for reasoning questions, especially for the 35B model. The results depict the total amount of influence contained in the top-k percentile of the positively ranked documents: e.g. the 20th percentile contains 20% of the positive documents for a query, and the amount of total influence shown is the sum of all document influences up to that part of the ranking. The equivalent for the negative portions looks similar (Figure 15 in Appendix A.9.2) and the discussion below applies similarly to the negative ranking. We observe two things for both models. Firstly, the amount of total influence for most factual questions at any part of the ranking is higher than for reasoning questions. 
Secondly, there is more variation in the influence of documents at the same rank across different factual queries (and for a few factual queries the amount of influence is actually lower than for the reasoning queries, seen more clearly in Figure 20 in Appendix A.9.3). The first result means that, on average, the models rely on individual documents within our set less for generating reasoning traces than for answering factual questions. The second result indicates that for the factual questions the model relies on more ‘specific’ and infrequent documents: for a factual question it is more up to chance whether relatively highly influential documents (w.r.t. influence of documents for other factual questions) are part of the pretraining sample or not.

Influence spread. Another way to analyse the magnitude of influence is to look at the dispersion of influence across the ranking: how much of the total influence for each query is contained at the top and bottom parts of the ranking? Similarly to what Grosse et al. (2023) report, we observe that the top parts of the rankings over documents follow a power law characterised by a linear relation between rank and influence per nat in log-log space (shown in Figure 20 in Appendix A.9.3). We find that the slopes for the reasoning questions by the 35B are slightly steeper than for the factual questions, and therefore the percentage of positive influence contained in the top portions of the rankings for the 35B reasoning questions increases faster with rank than for the factual questions (shown in Figure 22 in Appendix A.9.3). For the 7B, the slopes for the reasoning questions the model gets right are on average also a bit steeper than for the factual questions, but the effect goes away when comparing slopes for all factual vs. reasoning queries. This means that the percentage of the total positive influence the top sequences cover is higher for the reasoning questions than for the factual questions for the 35B model (and similarly for the bottom sequences, see Figure 15). There is a chance this finding is caused by noise for the 35B model and we discuss this possibility more in Appendix A.9.3, where we note that for the reasoning query with the steepest power law, the top 1 document is qualitatively entirely unrelated to the prompt. If we compare the results between models, we find that the differences in magnitude and volatility are more pronounced for the 35B model across the full rankings. We look into this in Appendix A.9.2, and find that the effect remains even if we only look at queries that are the same for both models, which points to higher data efficiency for the larger model.

5.2 QUALITATIVE ANALYSIS

We perform three qualitative analyses on the top portions of the rankings for each query: we search for the answer, we characterise the documents’ relation to the reasoning queries, and we investigate which source datasets they are from (for both the top and bottom parts of the ranking, e.g. code, Wikipedia, etc.). To filter some of the noise, we divide the influence scores by the document gradient norm and re-rank them, which has empirically been found to help (Choe et al., 2024).
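As an illustration of this re-ranking step only, a minimal sketch (assuming the raw scores and per-document gradient norms for one query are available as arrays; this is not the internal implementation):

    import numpy as np

    def rerank_by_gradient_norm(scores, grad_norms, eps=1e-12):
        # scores:     shape (n_documents,), influence of each document on one query.
        # grad_norms: shape (n_documents,), L2 norm of each document's gradient.
        # Returns document indices sorted from most positively to most negatively
        # influential under the norm-adjusted scores.
        adjusted = np.asarray(scores) / (np.asarray(grad_norms) + eps)  # dampen large-gradient documents
        return np.argsort(adjusted)[::-1]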
Finding 3: The answer to the factual questions shows up relatively often in the top influential documents for the factual questions, and almost never for the reasoning questions.

To find the answer to the questions in the queries in the top documents manually, we construct keywords for each query that should be in the document if the answer is there. For example, for the factual query in Table 2, the keywords are “tallest”, “highest”, “Mount Everest”, “29029”, “8848”. For the reasoning queries, we construct many more keywords per query, but some examples for the reasoning example in Table 2 are 7 − 4, 3, 21, 3∗7, as well as replacing the operations with words like ‘minus’ and ‘times’, and different ways of representing the content in this query. For details on which keywords we use for each query, see Appendix A.4. We determine the occurrence of each of these keywords independently in the top 100 documents for each query (meaning even if just the keyword ‘7’ is present it would be a hit), resulting in many false positives. We manually look over the hits to find the answer. On top of that, we craft a prompt for Command R+ (a more capable 100B model) to find the answer in a query-document pair, and use it to find the answer in the top 500 documents for each query independent of keyword overlap (the prompt is given in Appendix A.5). Then, we manually look over the hits and keep track of documents that have the answer to a query. We verify that Command R+ finds all, and more, of the answers we have identified manually. We look for the full answer in a single document. For the reasoning queries, we also count partial answers in separate documents if they combine to the full answer. For example, if one document contains 7 − 4 = 3, and another 3 ∗ 7 = 21, we consider that an answer. Finally, we apply the keyword overlap search combined with prompting Command R+ to a subset of the broader 2.5B pretraining tokens to verify that the answers to the questions are in the entire set even if they do not show up in the top 500 documents for the queries.

Figure 3: We search for the answer in the top 500 (top 0.01%) documents, and find it relatively frequently for the factual questions. For the reasoning questions, we find the answer twice for the 7B, and never for the 35B. Both those times, the answers to the steps occur in separate documents.

The results are shown in Figure 3. For the 7B model, we find the answer in the top 500 documents for 55% of the factual queries, compared to 7.4% of the reasoning queries. For the 35B model, the answer to the factual queries shows up in the top influential documents 30% of the time, and never for the reasoning set. We expect the answer to show up less frequently for the 35B model simply because the factual questions are much more ‘niche’. For example, one of the questions the model gets correct is “In which year did the Beinecke Library open?”. Moreover, in certain cases, the answer shows up multiple times in the top 500 documents. If we count all these separately, as opposed to a binary ‘yes’ or ‘no’ per query on which the results in Figure 3 are based, answers to questions show up 30 times for the factual questions in the 7B rankings, and twice for the reasoning questions. For the 35B, the same result is 15 times for the factual questions, and never for the reasoning questions. Interestingly, the answer to the factual questions often shows up in different languages, like Spanish or Portuguese. We give two examples in Appendix A.8.2.
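As a minimal sketch of the keyword-based filtering step described above (a hypothetical helper, assuming the top-ranked documents are available as plain strings and the keywords have already been constructed per query; candidate hits are still checked manually):

    def keyword_hits(ranked_documents, keywords, top_k=500):
        # Returns (rank, matched keywords, document) triples for the top_k documents
        # that contain at least one keyword; these candidates are inspected by hand.
        hits = []
        for rank, doc in enumerate(ranked_documents[:top_k]):
            matched = [kw for kw in keywords if kw.lower() in doc.lower()]
            if matched:
                hits.append((rank, matched, doc))
        return hits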
To falsify the hypothesis that the answers to reasoning questions are not showing up because they are not present in the set of 5M documents, we repeat the above keyword search over a random subset of the 5M documents. We identify answers to reasoning steps in documents that do not show up in the top 500 documents for 13 of 20 arithmetic queries and a full answer for 1 of 20, and expect more to be there that elude the keyword search. For the slopes and linear equation queries, we find answers to 3 reasoning steps which do not show up in the top 0.01%. In Appendix A.8.1 we show some of these documents and their ranks.

Finding 4: We find that influential documents for the reasoning queries are often doing a similar form of step-by-step reasoning, e.g. also arithmetic. Further, we find that the influential documents often implement a solution to reasoning questions in code or general math.

For the slope queries (of which we have 20 which are the same for both models), many different documents surface as highly influential that show how to calculate the slope between two points in code or math. For the 7B model, documents that present procedural knowledge on how to calculate the slope in either code or math show up in the top 100 documents for 16/20 queries (38 times), and for the 35B model they show up for all queries (51 times). Altogether, we manually find 7 unique documents that implement the slope in code in the top 100 documents, and 13 that present equations for calculating the slope. The 7B model relies on 18 of these documents for its completions (meaning 18 different ones appear in the top 100 documents for all queries), and the 35B on 8. An example of a highly influential document implementing the solution in JavaScript (left) and in maths (right):

Positively influential code

    function eqOfLine(x1, y1, x2, y2) {
      if (x1 === x2) {
        // Handle a vertical line
        return `x = ${x1}`;
      } else {
        // Calculate the slope
        const m = (y2 - y1) / (x2 - x1);
        const b = y1 - m * x1;
        // Return y = mx + b
        return `y = ${m}x + ${b}`;
      }
    }

Positively influential math

    If a straight line passing through the points P(x1, y1), Q(x2, y2) is making an
    angle θ with the positive X-axis, then the slope of the straight line is:
    (A) (y2 + y1)/(x2 + x1)
    (B) θ
    (C) (y2 − y1)/(x2 − x1)
    (D) sin θ
    Solution: Correct answer: (C)

We prompt Command R+ to further characterise the top 500 documents for each query by choosing from a set of provided keywords, and find that often the documents are doing similar arithmetic on other numbers (e.g. much larger or smaller), doing similar arithmetic on similar numbers (for the slope questions), or similar algebraic operations on similar numbers (for solving linear equations). We present the detailed results and prompt for this analysis in Appendix A.8.3.

Finding 5: For factual queries, the most influential data sources include Wikipedia and trivia, while for reasoning, key sources consist of maths, StackExchange, ArXiv, and code.

We look at the type of source datasets that represent the most influential documents. Specifically, we count the top and bottom k documents with k ∈ {50, 500, 5000, 50000, 500000}, and compare the count to the pretraining distribution. We present the details in Appendix A.8.4, but mention here that code data is highly influential for reasoning. StackExchange as a source has ten times more influential data in the top portions of the rankings than expected if the influential data was randomly sampled from the pretraining distribution.
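To make this comparison concrete, a minimal sketch of how such an over-representation factor can be computed (hypothetical inputs; the source label per document and the pretraining proportions come from the actual data pipeline):

    from collections import Counter

    def source_overrepresentation(ranked_sources, pretrain_fraction, k=500):
        # ranked_sources:    list of source labels (e.g. 'StackExchange', 'Wikipedia'),
        #                    ordered from most to least influential for one query.
        # pretrain_fraction: dict mapping source label -> fraction of the pretraining data.
        # Returns, per source, its share among the top-k documents divided by the share
        # expected under random sampling from the pretraining distribution.
        top_counts = Counter(ranked_sources[:k])
        return {
            src: (count / k) / pretrain_fraction[src]
            for src, count in top_counts.items()
            if pretrain_fraction.get(src, 0) > 0
        }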
Other code sources are twice as influential as expected when drawing randomly from the pretraining distribution for k = 50 up to k = 50000. Similar patterns hold for the bottom portions of the rankings.

6 DISCUSSION, LIMITATIONS, AND FUTURE WORK

In this work, we investigate what kind of generalisation strategy two LLMs (7B and 35B respectively) employ when reasoning, and contrast it to the strategy used for a task that requires retrieving factual parametric knowledge. By creating rankings for 200 such questions over 5 million pretraining documents based on their influence on the likelihood of the completions, we conclude that the generalisation strategy for reasoning is unlike retrieval. More often than not, even if the answer is part of the set of pretraining documents we look at, it does not show up as highly influential as the answers to factual questions do. We find that instead, the positively influential documents often contain procedural knowledge on how to get to a solution. Further, the models rely less on individual documents when reasoning than when answering factual questions, and the set of documents they rely on is more general. Finally, documents often have a similar influence on reasoning queries that require applying the same procedure to different numbers. These findings can inform pretraining data selection for more robust reasoning: we likely do not need to cover every case in pretraining but can rather focus on data describing and applying procedures to diverse reasoning problems.

We find that the distribution of influence is less spread out for reasoning than for factual questions, characterised by steeper power laws. The distribution of influence over documents tells us something about the type of generalisation strategy the model is using; the more documents that contribute to each nat of query information (i.e. the more spread out the total influence), the more documents the model is relying on to produce the completion. One would perhaps expect a steeper power law for factual questions than for reasoning (meaning more of the total positive influence contained at the top parts of the ranking), but our results show evidence for the opposite. Perhaps a model needs to generalise from a broader set of documents for factual retrieval than for reasoning because it needs to see the same information more often to memorise it. This is supported by the finding that for factual questions the answer often shows up multiple times in the top 0.01% most influential data.

There are important limitations to our approach, most notably that we do not calculate influence on the entire training set, which is intractable. An alternative explanation of our results is then the opposite conclusion: the model is relying on data for reasoning that are so infrequent that a random sample of 2.5B tokens does not surface relatively highly influential samples for any of the 60 reasoning queries. This would result in the conclusion that LLMs rely on sparse and infrequent documents for reasoning. That means we are effectively looking at a set of relatively uninfluential documents for reasoning, and that perhaps the answers to reasoning traces would be highly influential when looking at the entire pretraining data.
We would argue that this is the more unlikely explanation for three reasons: (1) the qualitative analysis shows that the influential data for the reasoning questions are intuitively highly relevant, and that the answers to many reasoning traces are part of the 2.5B tokens; they are just not highly influential for reasoning, (2) the correlation of influence scores for the different reasoning tasks is highly significant, and (3) we confirm that these results do not hold for control queries that look similar to the reasoning queries superficially, but do not require step-by-step reasoning. Moreover, it seems exceedingly unlikely that the model is learning to do retrieval from such infrequent data for one of the simplest forms of mathematical reasoning, namely subtraction and multiplication on small numbers. Taken together, we argue the results indicate a generalisation strategy that relies on procedural knowledge. Regardless, the nature of interpretability research such as the work presented here is that all we can do is provide evidence, and not proof.

Another limitation is that we do not look at the supervised fine-tuning stage. The reason we only look at the pretraining data is that the fine-tuning stage is targeted at making the models more aligned and ‘instructable’, and prior work has shown that SFT serves primarily to enhance existing model capabilities (Jain et al., 2024; Kotha et al., 2024; Prakash et al., 2024). Nonetheless, an interesting direction for future work is applying the same method used here to the fine-tuning data.

This work spurs further avenues for future work. Firstly, as previously discussed, identifying data types that are similarly influential across reasoning types could provide additional insight into data selection techniques for improved reasoning. Relatedly, what properties of code data make it influential for reasoning? What kind is positively influential, and what kind negatively? Further, since we only take into account the feed-forward layers and treat the attention as fixed, an interesting avenue for future work would be to investigate how the relatively low magnitude of influence of pretraining data on feed-forward parameters for reasoning traces interacts with attention, connecting to a finding from the literature that certain forms of reasoning happen in the attention heads (Olsson et al., 2022). Finally, in this work we investigate mathematical reasoning. Future work should verify whether similar results hold for other types of reasoning, such as inductive reasoning.

With this work, we do not claim to say contamination is not an issue, or that LLM reasoning is not brittle and reliant on pretraining statistics. What we demonstrate is that, in principle, it appears possible for LLMs to produce reasoning traces using a generalisation strategy that combines information from procedurally related documents, as opposed to doing a form of retrieval. This is not to say that there are no cases of LLM reasoning where the model is in fact doing retrieval; on the contrary, models can be overfit to contaminated data if it appears often enough in the training data.

REPRODUCIBILITY STATEMENT

Although this work is based on proprietary models and pretraining data, we make the following efforts for reproducibility. We add pretraining data with answers to factual and reasoning questions to the supplement, as well as data in which procedures for calculating the slope have been identified.
For one of the models we use (the 35B model), the final-stage model (further trained after SFT) is publicly available on HuggingFace.4 We provide all queries, completions, and keywords in the supplemental material. Additionally, we verify that the influence scores generated with our internal codebase correlate with a Pearson’s R of more than 0.99 with a public implementation of EK-FAC influence functions (see Appendix A.2.2). Further, we provide details on hyperparameters for every experiment we have done at the relevant sections, as well as the prompts used to find answers to the reasoning questions and characterise the relationship between the query-document pairs (Appendix A.5 and A.6 respectively). ACKNOWLEDGEMENTS We’d like to thank Andrew Lampinen, Stephanie Chan, Akbir Khan, and Philipp Jettkant for fruit- ful discussions about the work presented here. This work was supported by the EPSRC Grant EP/S021566/1 and UCL International Scholar Award for Doctoral Training Centres. REFERENCES Viraat Aryabumi, Yixuan Su, Raymond Ma, Adrien Morisot, Ivan Zhang, Acyr Locatelli, Marzieh Fadaee, Ahmet ¨Ust¨un, and Sara Hooker. To code, or not to code? exploring impact of code in pre-training, 2024. URL https://arxiv.org/abs/2408.10914. Juhan Bae, Wu Lin, Jonathan Lorraine, and Roger Grosse. Training data attribution via approximate unrolled differentiation, 2024. URL https://arxiv.org/abs/2405.12186. Elnaz Barshan, Marc-Etienne Brunet, and Gintare Karolina Dziugaite. Relatif: Identifying ex- planatory training samples via relative influence. In Silvia Chiappa and Roberto Calandra (eds.), Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics, volume 108 of Proceedings of Machine Learning Research, pp. 1899–1909. PMLR, 26–28 Aug 2020. URL https://proceedings.mlr.press/v108/barshan20a.html. Trenton Bricken, Adly Templeton, Joshua Batson, Brian Chen, Adam Jermyn, Tom Con- erly, Nick Turner, Cem Anil, Carson Denison, Amanda Askell, Robert Lasenby, Yifan Wu, Shauna Kravec, Nicholas Schiefer, Tim Maxwell, Nicholas Joseph, Zac Hatfield-Dodds, Alex Tamkin, Karina Nguyen, Brayden McLean, Josiah E Burke, Tristan Hume, Shan Carter, Tom Henighan, and Christopher Olah. Towards monosemanticity: Decomposing language models with dictionary learning. https://transformer- circuits.pub/2023/monosemantic-features/index.html. Transformer Circuits Thread, 2023. Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhari- wal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agar- wal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. Language models are few-shot learners. In H. Larochelle, M. Ranzato, R. Hadsell, M.F. Balcan, and H. Lin (eds.), Advances in Neu- ral Information Processing Systems, volume 33, pp. 1877–1901. Curran Associates, Inc., URL https://proceedings.neurips.cc/paper_files/paper/2020/ 2020. file/1457c0d6bfcb4967418bfb8ac142f64a-Paper.pdf. Sang Keun Choe, Hwijeen Ahn, Juhan Bae, Kewen Zhao, Minsoo Kang, Youngseog Chung, Adithya Pratapa, Willie Neiswanger, Emma Strubell, Teruko Mitamura, Jeff Schneider, Eduard Hovy, Roger Grosse, and Eric Xing. What is your data worth to gpt? llm-scale data valuation with influence functions, 2024. 
URL https://arxiv.org/abs/2405.13954. 4https://huggingface.co/CohereForAI/c4ai-command-r-v01 11 Preprint, under review. Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, Parker Schuh, Kensen Shi, Sasha Tsvyashchenko, Joshua Maynez, Abhishek Rao, Parker Barnes, Yi Tay, Noam Shazeer, Vinodkumar Prabhakaran, Emily Reif, Nan Du, Ben Hutchinson, Reiner Pope, James Bradbury, Jacob Austin, Michael Isard, Guy Gur-Ari, Pengcheng Yin, Toju Duke, Anselm Lev- skaya, Sanjay Ghemawat, Sunipa Dev, Henryk Michalewski, Xavier Garcia, Vedant Misra, Kevin Robinson, Liam Fedus, Denny Zhou, Daphne Ippolito, David Luan, Hyeontaek Lim, Barret Zoph, Alexander Spiridonov, Ryan Sepassi, David Dohan, Shivani Agrawal, Mark Omernick, Andrew M. Dai, Thanumalayan Sankaranarayana Pillai, Marie Pellat, Aitor Lewkowycz, Erica Moreira, Rewon Child, Oleksandr Polozov, Katherine Lee, Zongwei Zhou, Xuezhi Wang, Bren- nan Saeta, Mark Diaz, Orhan Firat, Michele Catasta, Jason Wei, Kathy Meier-Hellstern, Douglas Eck, Jeff Dean, Slav Petrov, and Noah Fiedel. Palm: Scaling language modeling with pathways, 2022. URL https://arxiv.org/abs/2204.02311. Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, Christopher Hesse, and John Schulman. Training verifiers to solve math word problems. arXiv preprint arXiv:2110.14168, 2021. Dask Development Team. Dask: Library for dynamic task scheduling, 2016. URL http:// dask.pydata.org. Chunyuan Deng, Yilun Zhao, Xiangru Tang, Mark Gerstein, and Arman Cohan. Benchmark In NeurIPS 2023 Workshop probing: on Backdoors in Deep Learning - The Good, the Bad, and the Ugly, 2024. URL https: //openreview.net/forum?id=a34bgvner1. Investigating data leakage in large language models. Dheeru Dua, Yizhong Wang, Pradeep Dasigi, Gabriel Stanovsky, Sameer Singh, and Matt Gardner. DROP: A reading comprehension benchmark requiring discrete reasoning over paragraphs. In Proc. of NAACL, 2019. Nelson Elhage, Tristan Hume, Catherine Olsson, Nicholas Schiefer, Tom Henighan, Shauna Kravec, Zac Hatfield-Dodds, Robert Lasenby, Dawn Drain, Carol Chen, Roger Grosse, Sam McCandlish, Jared Kaplan, Dario Amodei, Martin Wattenberg, and Christopher Olah. Toy models of superposition. https://transformer- circuits.pub/2022/toy model/index.html. Transformer Circuits Thread, 2022. Thomas George, C´esar Laurent, Xavier Bouthillier, Nicolas Ballas, and Pascal Vincent. Fast approximate natural gradient descent In S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett (eds.), Ad- vances in Neural Information Processing Systems, volume 31. Curran Associates, Inc., URL https://proceedings.neurips.cc/paper_files/paper/2018/ 2018. file/48000647b315f6f00f913caa757a70b3-Paper.pdf. in a kronecker factored eigenbasis. Roger Grosse, Juhan Bae, Cem Anil, Nelson Elhage, Alex Tamkin, Amirhossein Tajdini, Benoit Steiner, Dustin Li, Esin Durmus, Ethan Perez, Evan Hubinger, Kamil˙e Lukoˇsi¯ut˙e, Karina Nguyen, Nicholas Joseph, Sam McCandlish, Jared Kaplan, and Samuel R. Bowman. Studying large lan- guage model generalization with influence functions, 2023. URL https://arxiv.org/ abs/2308.03296. 
Suriya Gunasekar, Yi Zhang, Jyoti Aneja, Caio C´esar Teodoro Mendes, Allie Del Giorno, Sivakanth Gopi, Mojan Javaheripi, Piero Kauffmann, Gustavo de Rosa, Olli Saarikivi, Adil Salim, Shital Shah, Harkirat Singh Behl, Xin Wang, S´ebastien Bubeck, Ronen Eldan, Adam Tauman Kalai, Yin Tat Lee, and Yuanzhi Li. Textbooks are all you need, 2023. URL https://arxiv.org/ abs/2306.11644. N. Halko, P. G. Martinsson, and J. A. Tropp. Finding structure with randomness: Probabilistic algorithms for constructing approximate matrix decompositions. SIAM Review, 53(2):217–288, 2011. doi: 10.1137/090771806. URL https://doi.org/10.1137/090771806. Frank R. Hampel. The influence curve and its role in robust estimation. Journal of the doi: 10.1080/01621459.1974. American Statistical Association, 69(346):383–393, 1974. 12 Preprint, under review. 10482962. URL https://www.tandfonline.com/doi/abs/10.1080/01621459. 1974.10482962. Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt. Measuring massive multitask language understanding. Proceedings of the Interna- tional Conference on Learning Representations (ICLR), 2021. Jordan Hoffmann, Sebastian Borgeaud, Arthur Mensch, Elena Buchatskaya, Trevor Cai, Eliza Rutherford, Diego de Las Casas, Lisa Anne Hendricks, Johannes Welbl, Aidan Clark, Thomas Hennigan, Eric Noland, Katherine Millican, George van den Driessche, Bogdan Damoc, Au- relia Guy, Simon Osindero, Kar´en Simonyan, Erich Elsen, Oriol Vinyals, Jack Rae, and Lau- rent Sifre. An empirical analysis of compute-optimal large language model training. In S. Koyejo, S. Mohamed, A. Agarwal, D. Belgrave, K. Cho, and A. Oh (eds.), Advances in Neural Information Processing Systems, volume 35, pp. 30016–30030. Curran Associates, Inc., URL https://proceedings.neurips.cc/paper_files/paper/2022/ 2022. file/c1e2faff6f588870935f114ebe04a3e5-Paper-Conference.pdf. Samyak Jain, Robert Kirk, Ekdeep Singh Lubana, Robert P. Dick, Hidenori Tanaka, Tim Rockt¨aschel, Edward Grefenstette, and David Krueger. Mechanistically analyzing the effects of fine-tuning on procedurally defined tasks. In The Twelfth International Conference on Learning Representations, 2024. URL https://openreview.net/forum?id=A0HKeKl4Nl. Jared Kaplan, Sam McCandlish, Tom Henighan, Tom B. Brown, Benjamin Chess, Rewon Child, Scott Gray, Alec Radford, Jeffrey Wu, and Dario Amodei. Scaling laws for neural language models, 2020. URL https://arxiv.org/abs/2001.08361. Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In International Conference on Learning Representations (ICLR), San Diega, CA, USA, 2015. Pang Wei Koh and Percy Liang. Understanding black-box predictions via influence functions. In Proceedings of the 34th International Conference on Machine Learning - Volume 70, ICML’17, pp. 1885–1894. JMLR.org, 2017. Suhas Kotha, Jacob Mitchell Springer, and Aditi Raghunathan. Understanding catastrophic forget- ting in language models via implicit inference. In The Twelfth International Conference on Learn- ing Representations, 2024. URL https://openreview.net/forum?id=VrHiF2hsrm. Guokun Lai, Qizhe Xie, Hanxiao Liu, Yiming Yang, and Eduard Hovy. RACE: Large-scale ReAding comprehension dataset from examinations. In Martha Palmer, Rebecca Hwa, and Sebastian Riedel (eds.), Proceedings of the 2017 Conference on Empirical Methods in Natural Language Process- ing, pp. 785–794, Copenhagen, Denmark, September 2017. Association for Computational Lin- guistics. doi: 10.18653/v1/D17-1082. 
URL https://aclanthology.org/D17-1082. Kyle Mahowald, Anna Ivanova, Idan Blank, Nancy Kanwisher, Joshua Tenenbaum, and Evelina Fedorenko. Dissociating language and thought in large language models. Trends in Cognitive Sciences, 28, 03 2024. doi: 10.1016/j.tics.2024.01.011. R. Thomas McCoy, Shunyu Yao, Dan Friedman, Matthew Hardy, and Thomas L. Griffiths. Embers of autoregression: Understanding large language models through the problem they are trained to solve, 2023. URL https://arxiv.org/abs/2309.13638. Sean McLeish, Arpit Bansal, Alex Stein, Neel Jain, John Kirchenbauer, Brian R. Bartoldson, Bhavya Kailkhura, Abhinav Bhatele, Jonas Geiping, Avi Schwarzschild, and Tom Goldstein. Transform- ers can do arithmetic with the right embeddings, 2024. URL https://arxiv.org/abs/ 2405.17399. Stephen Merity, Caiming Xiong, James Bradbury, and Richard Socher. Pointer sentinel mixture models, 2016. Melanie Mitchell and David C. Krakauer. The debate over understanding in ai’s large language models. Proceedings of the National Academy of Sciences, 120(13):e2215907120, 2023. doi: 10.1073/pnas.2215907120. URL https://www.pnas.org/doi/abs/10.1073/pnas. 2215907120. 13 Preprint, under review. Catherine Olsson, Nelson Elhage, Neel Nanda, Nicholas Joseph, Nova DasSarma, Tom Henighan, Ben Mann, Amanda Askell, Yuntao Bai, Anna Chen, Tom Conerly, Dawn Drain, Deep Ganguli, Zac Hatfield-Dodds, Danny Hernandez, Scott Johnston, Andy Jones, Jackson Kernion, Liane Lovitt, Kamal Ndousse, Dario Amodei, Tom Brown, Jack Clark, Jared Kaplan, Sam McCandlish, and Chris Olah. In-context learning and induction heads. Transformer Circuits Thread, 2022. https://transformer-circuits.pub/2022/in-context-learning-and-induction-heads/index.html. Nikhil Prakash, Tamar Rott Shaham, Tal Haklay, Yonatan Belinkov, and David Bau. Fine-tuning In The Twelfth International enhances existing mechanisms: A case study on entity tracking. Conference on Learning Representations, 2024. URL https://openreview.net/forum? id=8sKcAWOf2D. Garima Pruthi, Frederick Liu, Satyen Kale, and Mukund Sundararajan. Estimating training data influence by tracing gradient descent. In H. Larochelle, M. Ranzato, R. Hadsell, M.F. Balcan, and H. Lin (eds.), Advances in Neural Information Processing Systems, volume 33, pp. 19920–19930. Curran Associates, Inc., 2020. URL https://proceedings.neurips.cc/paper_ files/paper/2020/file/e6385d39ec9394f2f3a354d9d2b88eec-Paper.pdf. Yasaman Razeghi, Robert L Logan IV, Matt Gardner, and Sameer Singh. Impact of pretraining term frequencies on few-shot numerical reasoning. In Yoav Goldberg, Zornitsa Kozareva, and Yue Zhang (eds.), Findings of the Association for Computational Linguistics: EMNLP 2022, pp. 840–854, Abu Dhabi, United Arab Emirates, December 2022. Association for Computational Lin- guistics. doi: 10.18653/v1/2022.findings-emnlp.59. URL https://aclanthology.org/ 2022.findings-emnlp.59. Aaditya K Singh, Ted Moskovitz, Felix Hill, Stephanie C.Y. Chan, and Andrew M Saxe. What needs to go right for an induction head? a mechanistic study of in-context learning circuits and their formation. In Forty-first International Conference on Machine Learning, 2024. URL https: //openreview.net/forum?id=O8rrXl71D5. Adly Templeton, Tom Conerly, Jonathan Marcus, Jack Lindsey, Trenton Bricken, Brian Chen, Adam Pearce, Craig Citro, Emmanuel Ameisen, Andy Jones, Hoagy Cunningham, Nicholas L Turner, Callum McDougall, Monte MacDiarmid, C. Daniel Freeman, Theodore R. 
Sumers, Edward Rees, Joshua Batson, Adam Jermyn, Shan Carter, Chris Olah, and Tom Henighan. Trans- Scaling monosemanticity: Extracting interpretable features from claude 3 sonnet. former Circuits Thread, 2024. URL https://transformer-circuits.pub/2024/ scaling-monosemanticity/index.html. Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Niko- lay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel Kloumann, Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, and Thomas Scialom. Llama 2: Open foundation and fine-tuned chat models, 2023. URL https://arxiv.org/abs/2307.09288. Tomer Ullman. Large language models fail on trivial alterations to theory-of-mind tasks, 2023. URL https://arxiv.org/abs/2302.08399. Boshi Wang, Xiang Yue, Yu Su, and Huan Sun. Grokked transformers are implicit reasoners: A mechanistic journey to the edge of generalization, 2024. URL https://arxiv.org/abs/ 2405.15071. Taylor Webb, Keith Holyoak, and Hongjing Lu. Emergent analogical reasoning in large language models. Nature Human Behaviour, 7:1–16, 07 2023. doi: 10.1038/s41562-023-01659-w. 14 Preprint, under review. Taylor Webb, Keith J. Holyoak, and Hongjing Lu. Evidence from counterfactual tasks supports emergent analogical reasoning in large language models, 2024. URL https://arxiv.org/ abs/2404.13070. Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, brian ichter, Fei Xia, Ed H. Chi, Quoc V Le, and Denny Zhou. Chain of thought prompting elicits reasoning in large language In Alice H. Oh, Alekh Agarwal, Danielle Belgrave, and Kyunghyun Cho (eds.), Ad- models. vances in Neural Information Processing Systems, 2022. URL https://openreview.net/ forum?id=_VjQlMeSB_J. Zhaofeng Wu, Linlu Qiu, Alexis Ross, Ekin Aky¨urek, Boyuan Chen, Bailin Wang, Najoung Kim, Jacob Andreas, and Yoon Kim. Reasoning or reciting? exploring the capabilities and limita- tions of language models through counterfactual tasks. In Kevin Duh, Helena Gomez, and Steven Bethard (eds.), Proceedings of the 2024 Conference of the North American Chapter of the As- sociation for Computational Linguistics: Human Language Technologies (Volume 1: Long Pa- pers), pp. 1819–1862, Mexico City, Mexico, June 2024. Association for Computational Linguis- tics. doi: 10.18653/v1/2024.naacl-long.102. URL https://aclanthology.org/2024. naacl-long.102. Shuo Yang, Wei-Lin Chiang, Lianmin Zheng, Joseph E. Gonzalez, and Ion Stoica. Rethinking benchmark and contamination for language models with rephrased samples, 2023. URL https: //arxiv.org/abs/2311.04850. A APPENDIX Below we outline the contents of the appendix. EK-FAC influence functions. 
In Appendix A.1 we discuss the counterfactual re-training experi- ments that motivate our use of EK-FAC influence functions for estimating the effect of pretraining data on the accuracy of downstream behaviour. We describe in more detail how we use influence functions at scale in Appendix A.2, documenting how we estimate the Hessian, how we store many query gradients in memory (each having the same memory complexity as the entire model), and how we sample from the pretraining distribution. Query sets examples. Then, in Appendix A.3, we show examples of the reasoning sets that we did not show examples for in the main body of this manuscript. Finding query answers in documents and characterising document-query relations. In Appendix A.4 we discuss how we create keywords for each query in order to find the answer in the top documents, and in the sections directly after that, Appendix A.5 and A.6, we give the prompts we used to allow Command R+ to search for answers in the top 500 documents for each query, as well as characterise their relationship. Limitations. In Appendix A.7 we discuss limitations specific to influence functions. Additional qualitative results. In Appendix A.8 we provide additional qualitative results. Answer finding. We show examples of answer documents in Appendix A.8.1. Cross-lingual transfer. We give some examples of cross-lingual transfer in Appendix A.8.2. Characterise query-document relation. We give detailed results on the characterisation of the relationship between queries and the top 500 documents in Appendix A.8.3. Source-dataset analysis. We analyse which datasets the influential data comes from in Appendix A.8.4. Content analysis of relevant documents. We classify data from the source dataset code for whether it actually contains code in Appendix A.8.5. Additional quantitative results. In Appendix A.9 we provide additional quantitative results. Correlation analysis. Further results for the correlation analysis of influence scores for documents for different queries in Appendix A.9.1. Magnitude of influence. Further results for the magnitude of influence in Appendix A.9.2. Spread of influence. Further results for the spread of influence over the rankings in Appendix A.9.3. 15 Preprint, under review. A.1 COUNTERFACTUAL RE-TRAINING EXPERIMENTS WITH INFLUENCE FUNCTIONS We use EK-FAC influence functions to approximate the counterfactual question: which documents from pretraining have a causal effect on the completions of a trained model. However, we are also interested in the causal effect on the accuracy of the completions. In this section, we aim to motivate two aspects of this choice; the fact that influence functions are designed to estimate the effect on continuous differentiable functions, like the log-likelihood, and not on the accuracy. Secondly, we motivate the need for estimating the second-order information of the pretraining objective using EK-FAC, which is very computationally expensive. We present four different experiments in this section, which show that indeed the influence of documents as determined by influence functions also estimate the effect on downstream task accuracy, as well as the benefits from estimating second order information over simply using first-order gradient information. The pipeline for each of these experiments is similar; we take a pretrained model, we fine-tune it on some dataset, and evaluate it on 50 validation examples with a metric (perplexity or accuracy). 
We then use the fine-tuned weights to calculate the influence of the documents in the dataset used for fine-tuning on the set of 50 validation questions with two methods: EK-FAC influence functions and TracIn (Pruthi et al., 2020). Subsequently, we use those two methods to remove the k most positively influential documents from the fine-tuning dataset, as well as randomly selecting k documents as a baseline, and fine-tune the original pretrained model five times (with different seeds) on each new fine-tuning dataset created (for different values for k). We then calculate the perplexity or accuracy on the validation questions used to calculate the influence, and see how it changed. The more it changed, the more the documents indeed influence the relevant metric (i.e. perplexity or accuracy). Note that for n different values for k, this requires fine-tuning 3 ∗ 5 ∗ n models: five times for each of the three methods of removing documents from the training set. We start by motivating the use of EK-FAC influence functions over simple similarity information between document and query gradients. In our setup, where we only have access to the final check- point of pretraining, a dot-product between the query and document gradient effectively boils down to a method for estimating influence of documents on queries called TracIn (Pruthi et al., 2020). With access to multiple checkpoints, TracIn uses gradient information from all of them, account- ing for the learning rate used at that point in training. However, we only use the final checkpoint and hence taking into account learning rate only changes scores by a constant. We take GPT-2- small (124M) from HuggingFace,5 and fine-tune it for three epochs with next-word prediction on Wikitext-2 (Merity et al., 2016). We use Adam optimizer (Kingma & Ba, 2015) with default param- eters (b1 0.9, b2 0.999, eps 1e-8, additive weight decay 0.01). The results can be found in Figure 4 and Table 3, showing that removing documents using EK-FAC influence functions has a signifi- cantly larger effect on downstream perplexity for all values of k. We do the exact same experiment but instead remove the most negatively influential documents, and see that instead the perplexity decreases significantly more for EK-FAC influence functions (Figure 4 and Table 4). Table 3: Wikitext remove top influential k → 50 100 150 200 250 300 Random TracIn IF (ours) 22.09 ± 0.02 22.16 ± 0.02⋆⋆ 22.49 ± 0.02⋆⋆ 22.12 ± 0.02 22.22 ± 0.02⋆⋆ 22.66 ± 0.02⋆⋆ 22.10 ± 0.02 22.25 ± 0.01⋆⋆ 22.73 ± 0.02⋆⋆ 22.20 ± 0.06 22.35 ± 0.03⋆⋆ 22.88 ± 0.01⋆⋆ 22.19 ± 0.05 22.42 ± 0.01⋆⋆ 22.97 ± 0.02⋆⋆ 22.15 ± 0.05 22.45 ± 0.02⋆⋆ 23.05 ± 0.05⋆⋆ Table 4: Wikitext remove bottom influential k → 50 100 150 200 250 300 Random TracIn IF (ours) 27.40 ± 0.08 26.73 ± 0.04⋆⋆ 25.96 ± 0.04⋆⋆ 26.24 ± 0.10 25.48 ± 0.05⋆⋆ 24.78 ± 0.05⋆⋆ 25.62 ± 0.15 24.86 ± 0.02⋆⋆ 23.95 ± 0.03⋆⋆ 25.22 ± 0.10 24.36 ± 0.04⋆⋆ 23.52 ± 0.03⋆⋆ 25.04 ± 0.12 24.16 ± 0.05⋆⋆ 23.46 ± 0.03⋆⋆ 24.85 ± 0.10 23.94 ± 0.03⋆⋆ 23.32 ± 0.04⋆⋆ Next, we turn to motivating the use of EK-FAC influence functions in estimating the effect of docu- ments on downstream accuracy of model generations. To this end, we look at two different datasets: 5https://huggingface.co/ 16 Preprint, under review. (a) (b) Figure 4: (a) Counterfactual retraining experiments on Wikitext-2. We finetuned GPT-2 (124M) on Wikitext-2 and use three different methods to remove training examples from the training set: randomly, TracIn, and Influence Functions (IF). 
For each number of samples removed we finetune the base model five times with different training data ordering, the variance over these runs is repre- sented by the error bars. Each point on the plot is the average perplexity achieved by the five models after fine-tuning on the augmented dataset. We find that influence functions can find examples that impact the perplexity significantly more than baselines. (b) We repeat the same experiment as in (a), but retain top influential queries instead (removing most negatively influential). DROP (Dua et al., 2019) and RACE (Lai et al., 2017). DROP is a reading comprehension dataset re- quiring different skills like subtraction, addition, coreference resolution, counting, and other skills. The model needs to generate an answer that often consists of one or a few words. We allow the fine-tuned models to generate answers to the questions freely, and evaluate based on exact match. In this experiment, we use a 7B model. We randomly select a subset of 8000 examples for fine-tuning, and use the procedure described above to perform counterfactual experiments. We use Adam op- timizer again, with the same hyperparameters as for the above experiment: b1 0.9, b2 0.999, eps 1e-8, additive weight decay 0.01, but only train for one epoch. The results can be found in the left panel of Figure 5 as well as in Table 5. We find that EK-FAC influence functions are succesful in selecting data points that impact downstream accuracy, much more so than randomly removing the same amount of training data. For most k (all but k = 1000), EK-FAC influence functions also have a significantly stronger effect on accuracy than TracIn, but the difference is less large. We apply the exact same procedure to the RACE dataset, except now we keep 10k examples (empirically found to lead to the least overfitting when fine-tuning). Further, RACE is a multiple-choice dataset, so we allow the model to generate a single token indicating the choice, and calculate the accuracy. The results can be seen in Figure 5 and Table 6. Again, the finding is similar; EK-FAC influence func- tions surface documents that have a stronger effect on accuracy than TracIn for all but one value of k, and for all values of k than randomly removing documents. There is a large variance in the results for all methods though, which we attribute to the fact that the model sometimes seems to overfit to the fine-tuning data. Further, the reason why the difference between TracIn and EK-FAC influence functions is much larger in the perplexity experiments than in the accuracy experiments could be attributed to the fact that we only fine-tune for one epoch in the accuracy experiments (as more cause overfitting). EK-FAC influence functions differ from TracIn in that they estimate second order information, which becomes more important with more training steps. An interesting avenue for future work is to do counterfactual re-training experiments like these on a subset of pretraining data for a 7B model, but this is incredibly computationally expensive. Although the results of the experiments in this section are an encouraging sign for using EK-FAC influence functions in estimating causal effect of data on accuracy, it is important to note that they are limited in several ways. Accuracy is a discrete metric and it is a prior unclear how many documents need to be removed to flip its value. 
However, the influence functions we use estimate effect of removing a single document, and removing multiple documents can have additional effects that are unaccounted for. This makes removing multiple documents a cruder way to empirically show impact of influence functions on accuracy, but at the same time it is unavoidable. Therefore, any significant 17 Preprint, under review. Table 5: Counterfactual re-training accuracies on DROP (free generation of answers). We use three different methods (random, TracIn, influence functions) to remove k datapoints, and re-train a model on the resulting dataset. Each number is the mean over five re-training runs with different data ordering. ⋆ indicates significantly lower than random with a p-value below 0.1 and ⋆⋆ with a p- value below 0.05. The underlined means are the lowest. k → 500 1000 1500 2000 Random 0.61 ± 0.05 0.55 ± 0.03⋆ TracIn 0.51 ± 0.03⋆⋆ IF (ours) 0.60 ± 0.03 0.49 ± 0.02⋆⋆ 0.50 ± 0.04⋆⋆ 0.56 ± 0.05 0.44 ± 0.04⋆⋆ 0.40 ± 0.05⋆⋆ 0.57 ± 0.06 0.43 ± 0.06⋆⋆ 0.38 ± 0.05⋆⋆ (a) Counterfactual retraining experiments on read- ing comprehension questions. We finetuned Cohere Command 2 (7B) on a subset of the DROP training set (8k examples) and use three different methods to remove training examples from the training set: ran- domly, TracIn, and Influence Functions (IF). For each number of samples removed we finetune the base model five times with different training data order- ing, the variance over these runs is represented by the error bars. Each point in the plot is the average accu- racy achieved by the five models after fine-tuning on the augmented dataset. We find that influence func- tions can find examples that impact the accuracy sig- nificantly more than baselines, although only slightly more than TracIn. retraining experiments (b) Counterfactual on multiple-choice reasoning data. We finetuned Cohere Command 2 (7B) on a subset of the RACE training set (10k examples) and use three different methods to remove training examples from the training set: randomly, TracIn, and Influence Functions (IF). For each number of samples removed we finetune the base model five times with different training data ordering, the variance over these runs is represented by the error bars. Each point in the plot is the average accuracy achieved by the five models after fine-tuning on the augmented dataset. We find that influence functions can find examples that impact the accuracy significantly more than baselines, although there is some variance in the results. Figure 5: Counterfactual retraining experiments on reading comprehension benchmark DROP (a) and the multiple-choice reasoning dataset RACE (b). causal effect on accuracy over other methods is a good signal, but the absence of a significant effect does not necessarily mean EK-FAC influence functions do not properly do what they are designed to do. 18 Preprint, under review. Table 6: Counterfactual re-training accuracies on RACE (multiple-choice). We use three different methods (random, TracIn, influence functions) to remove k datapoints, and re-train a model on the resulting dataset. Each number is the mean over five re-training runs with different data ordering. ⋆ indicates significantly lower than random with a p-value below 0.1 and ⋆⋆ with a p-value below 0.05. The underlined means are the lowest. 
k → 1000 1500 2000 2500 Random 0.85 ± 0.04 0.84 ± 0.01 TracIn 0.80 ± 0.04⋆ IF (ours) 0.83 ± 0.03 0.78 ± 0.03⋆⋆ 0.76 ± 0.05⋆⋆ 0.82 ± 0.04 0.80 ± 0.03 0.74 ± 0.04⋆⋆ 0.81 ± 0.04 0.79 ± 0.04 0.74 ± 0.05⋆ 19 Preprint, under review. A.2 EK-FAC INFLUENCE FUNCTIONS The code we use for EK-FAC influence functions at scale is a part of larger internal infrastructure, and hence cannot be released publicly. However, we base our code on the public GitHub repository https://github.com/pomonam/kronfluence. We implement estimation of the Hessian in the same way as in that codebase, except for a few changes to make it tractable, which we discuss in more detail below. Further, we compare the results produced by our implementation with the results using the public implementation. We do this by fine-tuning GPT-2 (124M) on Wikitext-2 using internal infrastructure, and calculating influence scores with both code bases. We find that the results correlate very strongly (with a Pearson’s R of more than 0.99, see A.2.2 below for more details). Here, we provide details of the design choices and hyperparameters used in our implementa- tion, as well as the additional approximations to make EK-FAC estimation and influence calculation tractable at scale. Query-batching and approximation As mentioned in the main text, we approximate query gradi- ents using approximate SVD (Halko et al., 2011). We use the default parameters for this algorithm, which can be found in the Dask documentation (Dask Development Team, 2016). Sampling from the Pretraining Data. It is intractable to calculate influence for the entire pretrain- ing data, so we sample a set of 5 million documents. To this end, we loop over the training data as seen by the models in order, and randomly sample 6 examples from each batch. This ensures that the pretraining sample we use is both similar to the pretraining distribution in terms of what kind of data the model sees, as well as when it has encountered the data during pretraining. Estimating EK-FAC. To estimate the EK-FAC matrices, we sample 100 000 documents from pre- training in the same manner as described above. We use the same samples to estimate the EK-FAC for the 7B as for the 35B. For both models, we use a damping factor of 0.1 (see Grosse et al. (2023) for details on what the damping factor is). Further, part of estimating the EK-FAC is an eigende- composition on the EK-FAC matrices. We use the same approximation as empirically motivated in (Grosse et al., 2023), namely block-diagonal approximation. For the 7B, we use 2 blocks, and for the 35B, we use 4. The block-diagonal approximation is not part of the public codebase, but simply amounts to dividing the matrices in n blocks (where n is 2 and 4 in our case), zero-ing out the remaining entries, and taking the eigendecomposition of each block individually. After, these blocks are patched back together again into the original size matrix, which will be further processed as in the public codebase. A.2.1 JUSTIFYING APPROXIMATIONS In this section, we justify the additional approximations we do on top of those mentioned in Grosse et al. (2023) by reporting the correlation with the full implementation for a smaller model (124M parameters). Applying EK-FAC influence functions to models with billions of parameters requires estimating a multiple of the model parameters. E.g., for the 7B model we estimate around 70B EK-FAC parameters, and for the 35B model we estimate around 320B parameters. 
Further, to calculate the influence scores for a set of 5 million documents we have to calculate the gradient for 100 queries × 5 million documents, each of which has the same size as all feed-forward layers in the model itself. We can only afford to loop over the 5 million documents and calculate their gradients once, so we need to batch the query gradients in memory. This is impossible for the full gradients and we use SVD to store low-rank approximations instead, like in Grosse et al. (2023). Details on the experiment. To compare results of using EK-FAC influence functions with different approximations, we use the same fine-tuned model from Section A.1 to calculate influence scores for the 4656 training examples (i.e. documents) on the first 32 validation examples (i.e. queries) of the Wikitext-2 dataset. We repeat this with different types of approximations applied; full SVD on the query gradients, approximate SVD (Dask Development Team, 2016) on the query gradients, and a block-diagonal approximation of the EK-FAC matrices before the eigendecomposition (described in Appendix A of Grosse et al. (2023)) with 2 and 4 blocks. For each level of approximation applied, this gives us 32 vectors with 4656 scores (one for each query-document pair), and we compare these to the full implementation without SVD and block diagonal approximations using Pearson’s R correlation. The correlations reported are the average over all 32 queries, but in the supplement we provide the correlations for each query for all experiments done below. 20 Preprint, under review. In Table 7 we report the correlations of increasingly more approximations w.r.t. a full implementa- tion. Note that the full implementation also uses approximations, but those are all justified in Grosse et al. (2023). Here, for completeness, we additionally justify the approximations we use that are different, namely approximate SVD instead of full SVD, and a block-diagonal approximation with 4 blocks instead of 2. From Table 7, we can see that the approximate SVD algorithm has a neglible effect on the scores, whereas the block-diagonal approximation has a small effect on the scores. Approximations SVD Approximate SVD Approximate SVD + block diagonal EK-FAC (2 blocks) Approximate SVD + block diagonal EK-FAC (4 blocks) Pearson R 0.96 ± 0.01 0.96 ± 0.01 0.95 ± 0.00 0.93 ± 0.00 Table 7: Score correlations of using increasingly more approximations with a full implementation. A.2.2 FULL IMPLEMENTATION We also compare the full implementation scores of our own influence functions implementation with the scores calculated for the same model and dataset with the public implementation at https://github.com/pomonam/kronfluence, and confirm the average score correlation between queries is 0.993 (± 0.003). We add a direct score comparison of both methods for the top 3 documents for each of the 32 queries to the supplemental material. Specifically, for each query we log the top 3 documents as determined by our internal implementation as well as the external imple- mentation, showing that they are almost always the same documents, and logging the score given to that document by each implementation (the supplemental file also contains the score correlation for each query separately). The average number of documents that appear in both top 50’s determined by the internal and external implementation is 46.7. 
The reason for using an internal implementa- tion nonetheless is that the public implementation is not optimised for usage on large-scale models, and cannot be used for models above about 1B parameters. We used the internal pretraining library for implementing influence functions, because part of the infrastructure used for pretraining large models could be re-used. 21 Preprint, under review. Figure 6: The pipeline for creating rankings of the most influential pretraining documents for a question-completion pair (query) using influence functions. The documents at the top of the ranking influence the likelihood of the completion positively, and the bottom negatively. We create rankings for a set of 40 reasoning and 40 factual queries over 5 million pretraining documents (2.5B tokens) for two models of different sizes (Cohere’s Command R series, 7B and 35B). 22 Preprint, under review. A.3 QUERY SETS Reasoning query sets. We show an example of the other two types of reasoning present in the reasoning query sets in Table 8 and 9. The former requires calculating the slope of a line going through two given points (used for both the 7B and 35B model) and the latter is about solving for x in a linear equation (only used for the 35B model). Control query sets. We design two control sets with 10 questions each for both the 7B and 35B model. These query sets resemble reasoning and factual questions, but do not require actual rea- soning or factual retrieval. The reason for constructing such query sets is to compare the results for the factual / reasoning query sets to the corresponding control sets, and comment on whether the patterns we are seeing also show up for queries that do not require reasoning / retrieval, which further helps interpretations of the results. Two factual control examples can be seen in Table 10 and 11, for the 7B and 35B model respectively. They are designed to match the first factual query superficially (e.g. the same numbers), but they are instead reading comprehension questions about fictional entities. This means all the model needs to do to answer the question is retrieve from context (not parameters). Two of the reasoning control examples for the 7B and 35B respectively can be seen in Table 12 and 13. Again, these resemble reasoning queries in number and style, but do not require any actual reasoning as the answer is given in the context. Table 8: Example from the reasoning set that involves calculating the slope of a line going through two points. We have 20 examples like these on different points. The completion is by the 7B model, but we use the same questions for the 35B model. Reasoning query set (slopes) Prompt What is the slope of the line passing through the points (93, 28) and (74, 47)? Think step-by-step. Completion by 7B To find the slope of the line passing through the points (93, 28) and (74, 47), we can use the following steps: 1. Identify the coordinates of the two points: - Point A: (93, 28) - Point B: (74, 47) 2. Calculate the difference in x-coordinates between the two points: - Difference in x-coordinates: x2 - x1 = 74 - 93 = -19 3. Calculate the difference in y-coordinates between the two points: - Difference in y-coordinates: y2 - y1 = 47 - 28 = 19 4. Determine the ratio of the difference in y-coordinates to the difference in x-coordinates: - Ratio: 19 / -19 = 1 / -1 5. Multiply the ratio by -1 to reverse the direction of the line: - Slope: -1 / 1 = -1 Therefore, the slope of the line passing through the points (93, 28) and (74, 47) is -1. 
23 Preprint, under review. Table 9: Example from the reasoning set that requires solving for x in a linear equation. We have twenty other examples like these on different numbers, only for the 35B model. The completion is by the 35B model. Reasoning query set (linear) Prompt Solve for x: 5x + 21 = 91 Think step-by-step. Completion by 35B Here are the steps to solve for x: 1. Subtract 21 from both sides of the equation: 5x + 21 - 21 = 91 - 21 5x = 70 2. Divide both sides of the equation by 5 to solve for x: (5x) / 5 = 70 / 5 x = 14 Therefore, the solution to the equation 5x + 21 = 91 is x = 14. Table 10: Example from the factual control set that resembles the question about the tallest mountain in the world from the 7B factual query set (Table 2), but does not require any factual recall. The completion is by the 7B model. Factual control query set Prompt The planet Zog has multiple mountains, of which the tallest one is called Wirtu which is 29,029 feet tall. What is the largest mountain on the planet Zog called and how tall is it? Completion by 7B The largest mountain on the planet Zog is called Wirtu and it is 29,029 feet tall. Table 11: Example from the factual control set that resembles the first factual query for the 35B model superficially, but does not require any factual recall. The completion is by the 35B model. Factual control query set Prompt In a fictional planetary system, there is a supernova remnant and pulsar wind nebula in the centre of a constellation of stars called Minotaur called the Goat Nebula. What is the supernova remnant and pulsar wind nebula in the constellation Minotaur called? Completion by 35B In the fictional planetary system, the supernova remnant and pulsar wind nebula in the centre of the constellation Minotaur is called the “Goat Nebula”. 24 Preprint, under review. Table 12: Example from the reasoning control set that resembles the slope queries superficially, but does not require any reasoning. The completion is by the 7B model. Reasoning control query set Prompt The slope of the line is -22. What is the slope of the line? Think step-by-step. Completion by 7B Given that the slope of the line is -22. To find the slope of the line, we need to find the change in y-coordinates divided by the change in x-coordinates. The slope of the line is -22. Therefore, the slope of the line is -22. Table 13: Example from the reasoning control set that resembles the slope queries superficially, but does not require any reasoning. The completion is by the 35B model. Reasoning control query set Prompt This line has a slope 6. What is the slope of the aforementioned line? Think step-by-step. Completion by 35B The slope of a line is a measure of the steepness of the line. It is defined as the ratio of the change in the y-coordinate to the change in the x-coordinate. In this case, the line has a slope of 6. This means that for every 1 unit increase in the x-coordinate, the y-coordinate will increase by 6 units. It’s important to note that the slope of a line is not limited to positive values and can be negative or zero depending on the orientation of the line. 25 Preprint, under review. A.4 QUERY KEYWORDS FOR FINDING THE ANSWER We provide all keywords used to search for the answers for both the 7B and 35B in the supplementary material to this manuscript, and describe here the procedure for generating keywords, as well as giving some examples. For the factual questions, we generated keywords manually for each query, and these can be found in the supplemental material. 
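The search over the pretraining documents described in the rest of this section amounts to flagging every document that contains at least one keyword (which over-triggers by design) and then checking the hits by hand. A minimal sketch with hypothetical names:

```python
def find_hits(documents: dict[str, str], keywords: list[str]) -> list[str]:
    """Return the ids of documents containing at least one keyword (case-insensitive).

    The match is deliberately loose (a keyword like 'small' alone triggers a hit),
    so every hit is still reviewed manually afterwards.
    """
    lowered = [kw.lower() for kw in keywords]
    return [
        doc_id
        for doc_id, text in documents.items()
        if any(kw in text.lower() for kw in lowered)
    ]

# Hypothetical usage:
hits = find_hits(
    {"doc-0": "The bumblebee bat is often cited as the smallest mammal.",
     "doc-1": "A document about something unrelated."},
    ["bumblebee bat", "smallest mammal", "small"],
)
assert hits == ["doc-0"]
```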
For example, for the question “What is the world’s smallest mammal by body length?” (answer: bumblebee bat), we have the following keywords: bumblebee bat; bumblebee; bumble; bee; bat; smallest mammal; body length; mammal; smallest; small. This results in many false positives, e.g. if only the word ‘small’ occurs, which we all check manually for the answer. Based on the type of reasoning question, we programatically create keywords for each question. For example, for the question in Table 9, the keywords are: [’14’, ’x = 14’, ’5x + 21’, ’91’, ’5x + 21 = 91’, ’21’, ’5’, ’91 - 21’, ’91 - 21 = 70’, ’(91 - 21) / 5’, ’70 / 5’, ’70 / 5 = 14’, ’70’, ’x=14’, ’5x+21’, ’5x+21=91’, ’91-21’, ’91-21=70’, ’(91-21)/5’, ’70/5’, ’70/5=14’, ’(91 - 21) divided by 5’, ’(91-21) divided by 5’, ’(91 minus 21) divided by 5’, ’(91 min 21) divided by 5’, ’70 divided by 5’, ’70 divided by 5 = 14’, ’70 divided by 5 is 14’, ’70 / 5 is 14’, ’70/5 is 14’, ’91 - 21 is 70’, ’91-21 is 70’, ’91 minus 21 is 70’, ’91 min 21 is 70’, ’70 divided by 5 equals 14’, ’70 / 5 equals 14’, ’70/5 equals 14’, ’91 - 21 equals 70’, ’91-21 equals 70’, ’91 minus 21 equals 70’, ’91 min 21 equals 70’, ’5x plus 21’, ’5x plus 21 = 91’, ’5x plus 21 is 91’, ’5x + 21 is 91’, ’91 minus 21’, ’91 min 21’, ’91 minus 21 = 70’, ’91 min 21 = 70’, ’(91 minus 21) / 5’, ’(91 min 21) / 5’] Note that, because the individual numbers ‘14’, ‘5’, ‘91’, and ‘70’ are part of the keywords, each document that contains one of these numbers becomes a hit, and we go over all hits manually. 26 Preprint, under review. A.5 PROMPTS GIVEN TO COMMAND R+ FOR FINDING THE ANSWER We use multiple prompts for each different type of reasoning question to allow Command R+ to find the answer in the top 500 influential documents; prompts to find the answer to the intermediate reasoning steps, and a prompt for finding the answer to the full question. We provide an example of each below. Preamble: You are a brilliant AI assistant that is excellent at arithmetic designed to help users with data analysis. You will be given an arithmetic query and a document, and your task is to determine whether the answer to the question is in the document. Prompt for the first step to a two-step arithmetic question Question: 4 + 2 Answer: 4 + 2 = 6 What also counts as an answer: - The calculation is written out in words, or part of a story. - The order of operations are changed. E.g. 2 + 4 = 6. - Different symbol used for sum/subtract sign. E.g. plus/minus. - The calculation is part of another larger calculation. E.g. (4 + 2) * 9 = 6 * 9 or (4 + 2)/12 = 6/12. - Different formatting. E.g. (4) + (2) = (6). - The calculation is a part of an algebraic formulation. E.g. 4X + 2X = 6X. What does not count as an answer: - Other numbers are being summed/subtracted. E.g. 5 + 2. - Numbers are taken to the other side of the equals sign. E.g. 6 - 2 = 4. Document: <document > Is the answer given in the document? Answer with yes or no. If you answer with yes, indicate where the answer is by copying the part of the document in which the answer occurs, ending with an explanation of why that passage contains the answer. Think step-by-step and carefully consider all the different ways in which such an answer might be given. Prompt for the second step to a two-step arithmetic question Question: 6 * 15 Answer: 90 What also counts as an answer: - The calculation is written out in words, or part of a story. - The order of operations are changed. E.g. 15 * 6 = 90. - Different symbol used for the multiplier sign. E.g. x or times. 
- The calculation is part of another larger calculation. E.g. (6 * 15) * 9 = 90 * 9 or (6 * 15)/12 = 90/12. - Different formatting. E.g. (6) * (15) = (90). - The calculation is a part of an algebraic formulation. E.g. 6X * 15X = 90X. What does not count as an answer: - Other numbers are being multiplied. E.g. 7 * 15. - Numbers are taken to the other side of the equals sign. E.g. 6 = 90/15. Document: <document > Is the answer given in the document? Answer with yes or no. If you answer with yes, indicate where the answer is by copying the part of the document in which the answer occurs, ending with an explanation of why that passage contains the answer. Think step-by-step and carefully consider all the different ways in which such an answer might be given. 27 Preprint, under review. Prompt for step 1 (and 2 is similar) to answer a slope question Question: 74 - 73 Answer: 74 - 73 = 1 What also counts as an answer: - The calculation is written out in words, or part of a story. - The calculation is written in terms of a difference or change. E.g. the difference (or change) between 73 and 74 is 1. - The order of operations are changed. E.g. 73 - 74 = -1. - Different symbol used for the minus sign. E.g. subtracted from. - The calculation is part of another larger calculation. E.g. (74 - 73) * 9 = 1 * 9 or (74 - 73)/12 = 1/12. - Different formatting. E.g. (74) - (73) = (1). - The calculation is a part of an algebraic formulation. E.g. 74X - 73X = 1X. What does not count as an answer: - Other numbers are being subtracted. E.g. 75 - 73. - Numbers are taken to the other side of the equals sign. E.g. 74 = 1 + 73. Document: <document > Is the answer given in the document? Answer with yes or no. If you answer with yes, indicate where the answer is by copying the part of the document in which the answer occurs, ending with an explanation of why that passage contains the answer. Think step-by-step and carefully consider all the different ways in which such an answer might be given. Prompt for step 3 to answer a slope question Question: 74 / 1 Answer: 74 / 1 = 74 What also counts as an answer: - The calculation is written out in words, or part of a story. - The signs on the LHS are flipped. E.g. -74 / -1 = 74. - Different symbol used for the division sign. E.g. divided by. - The calculation is part of another larger calculation. E.g. (74 / 1) * 9 = 74 * 9 or (74 / 1)/12 = 74/12. - Different formatting. E.g. (74) / (1) = (74). - The calculation is a part of an algebraic formulation. E.g. 74X / 1 = 74X. What does not count as an answer: - Other numbers are being divided. E.g. 75 / 1. - Numbers are taken to the other side of the equals sign. E.g. 74 = 74 * 1. Document: <document > Is the answer given in the document? Answer with yes or no. If you answer with yes, indicate where the answer is by copying the part of the document in which the answer occurs, ending with an explanation of why that passage contains the answer. Think step-by-step and carefully consider all the different ways in which such an answer might be given. 28 Preprint, under review. Prompt for step 1 to answer a linear question Question: 32 - 16 Answer: 16 What also counts as an answer: - The calculation is written out in words, or part of a story. - The calculation is written in terms of a difference or change. E.g. the difference (or change) between 32 and 16 is 16. - The order of operations are changed. E.g. -16 + 32 = 16. - Different representation used for the minus sign. E.g. ’subtracted from’. 
- The calculation is part of another larger calculation. E.g. (32 - 16) * 9 = 16 * 9 or (32 - 16)/12 = 16/12. - Different formatting. E.g. (32) - (16) = (16). - The calculation is a part of an algebraic formulation. E.g. 32X - 16X = 16X. What does not count as an answer: - Other numbers are being subtracted. E.g. 33 - 16. - Numbers are taken to the other side of the equals sign. E.g. 32 = 16 + 16. Document: <document > Is the answer given in the document? Answer with yes or no. If you answer with yes, indicate where the answer is by copying the part of the document in which the answer occurs, ending with an explanation of why that passage contains the answer. Think step-by-step and carefully consider all the different ways in which such an answer might be given. Prompt for step 2 to answer a linear question Question: 16 / 8 Answer: 16 / 8 = 2 What also counts as an answer: - The calculation is written out in words, or part of a story. - The calculation is written in terms of a ratio. E.g. the ratio between 16 and 8 is 2. - Different representation used for the division sign. E.g. ’divided by’. - The calculation is part of another larger calculation. E.g. (16 / 8) * 9 = 2 * 9 or (16 / 8)/12 = 2/12. - Different formatting. E.g. (16) / (8) = (2). - The calculation is a part of an algebraic formulation. E.g. 32X / 16X = 2X. What does not count as an answer: - Other numbers are being divided. E.g. 17 / 8. - Numbers are taken to the other side of the equals sign. E.g. 16 = 2 * 16. Document: <document > Is the answer given in the document? Answer with yes or no. If you answer with yes, indicate where the answer is by copying the part of the document in which the answer occurs, ending with an explanation of why that passage contains the answer. Think step-by-step and carefully consider all the different ways in which such an answer might be given. 29 Preprint, under review. Prompt for the full answer to a linear question Question: 8x + 16 = 32 Answer: 2 What also counts as an answer: - The calculation is written out in words, or part of a story. - The calculation is written in terms of a ratio. E.g. the ratio between 16 and 8 is 2. - Different representation used for the plus sign or the equals sign. E.g. ’added to’ and ’equals’. - A different variable than X is used. E.g. ’t’: 8t + 16 = 32’. - The calculation is part of another larger calculation. E.g. (8x + 16 = 32) * 9 = 2 * 9 or (8x + 16 = 32)/12 = 2/12. - The solution is written out in steps below each other. E.g.: 8x + 16 = 32 8x = 2 x = 0. - The calculation is a part of an algebraic formulation. E.g.: 5 * (8x + 16) = 5 * 32 5 * x = 5 * 2. What does not count as an answer: - Other numbers are being used. E.g. 9x + 16 = 32. Document: <document > Is the answer given in the document? Answer with yes or no. If you answer with yes, indicate where the answer is by copying the part of the document in which the answer occurs, ending with an explanation of why that passage contains the answer. Think step-by-step and carefully consider all the different ways in which such an answer might be given. 30 Preprint, under review. A.6 PROMPTS GIVEN TO COMMAND R+ FOR CHARACTERISING THE RELATIONSHIP BETWEEN THE QUERY AND THE DOCUMENT We combine all reasoning queries in pairs with their top 500 most influential documents, and prompt Command R+ to characterise the relationship. For all types of reasoning, we use the same preamble: You are a brilliant AI assistant that is excellent at arithmetic designed to help users with data analysis. 
You will be given an arithmetic query and a document, and your task is to characterise the document by choosing keywords from a given set that best describe how the document relates to the question. For each type of reasoning, we craft a prompt that allows Command R+ to choose multiple keywords for each query-document pair in the top 500 documents. We provide each below. Prompt for arithmetic questions Start of Query: <query> End of Query Start of Document <document> End of Document How is the document related to the query? Choose from the following keywords: Similar arithmetic operations on similar numbers (e.g. the numbers are similar in magnitude or the numbers are the same) Similar arithmetic operations (on other types of numbers, e.g. much larger or smaller) Reasoning traces (multiple reasoning steps are explicitly given in the document explaining how one gets to an answer) Other types of maths Code that contains arithmetic Code that concerns other types of math Code that concerns no math/arithmetic Text about math/arithmetic (no other relation to the query than that the text is about math, text does not perform math/arithmetic) Superficial similarities (there is no real relation, but loosely related topics occur, like the text contains words related to other parts of math, like algebra) Similar formatting (question/answer pair about other topics than math) Similar formatting (other) Other (pick own keyword) Explain your answer for each keyword by quoting from the query and document and describing why they are similar. Keep in mind that the document might be in another language than English. If you pick any of the code keywords, add the programming languages in brackets (e.g. ‘Code that contains arithmetic (Python, LaTeX)’). If the relation between the query and the document is not described by any of the given keywords, choose ‘other’ and pick your own keyword that describes the document. Otherwise, if the query is not related to the document, state ‘no relation’ and describe why. Give your answer in the form of a semicolon-separated list of keywords, and add an explanation below separated by newlines Give your answer in the form of a semicolon-separated list of keywords, and add an explanation below separated by newlines (e.g. ‘keyword 1; keyword 2; keyword 3 (Python) [explanation]’). 31 Preprint, under review. Prompt for slope questions Start of Query: <query> End of Query Start of Document <document> End of Document How is the document related to the query? Choose from the following keywords: Similar arithmetic operations on similar numbers (e.g. the numbers are similar in magnitude or the numbers are the same) Similar arithmetic operations (on other types of numbers, e.g. 
much larger or smaller) Reasoning traces (multiple reasoning steps are explicitly given in the document explaining how one gets to an answer) Other types of maths Code that contains arithmetic Code that calculates the slope between two numbers Math that calculates the slope between two numbers Code that calculates the slope of an equation Math that calculates the slope of an equation Code that concerns other types of math Code that concerns no math/arithmetic Text about math/arithmetic (no other relation to the query than that the text is about math, text does not perform math/arithmetic) Superficial similarities (there is no real relation, but loosely related topics occur, like the text contains words related to other parts of math, like algebra) Similar formatting (question/answer pair about other topics than math) Similar formatting (other) Other (pick own keyword) Explain your answer for each keyword by quoting from the query and document and describing why they are similar. Keep in mind that the document might be in another language than English. If you pick any of the code keywords, add the programming languages in brackets (e.g. ‘Code that contains arithmetic (Python, LaTeX)’). If the relation between the query and the document is not described by any of the given keywords, choose ‘other’ and pick your own keyword that describes the document. Otherwise, if the query is not related to the document, state ‘no relation’ and describe why. Give your answer in the form of a semicolon-separated list of keywords, and add an explanation below separated by newlines (e.g. ‘keyword 1; keyword 2; keyword 3 (Python) [explanation]’). 32 Preprint, under review. Prompt for linear questions Start of Query: <query> End of Query Start of Document <document> End of Document How is the document related to the query? Choose from the following keywords: Code that solves a linear equation for a variable (of the form ax + b = c or ax - b = c) Code that solves a linear equation with multiple variables for one or both variables (e.g. ax + by = c) Code that solves a linear equation of another form than ax + b = c or ax - b = c Math that solves a linear equation for a variable (of the form ax + b = c or ax - b = c) Math that solves an equation with multiple variables for one or both variables (e.g. ax + by = c) Math that contains linear equations of another form than ax + b = c or ax - b = c Math that contains linear equations but they are not solved (of the form ax + b = c or ax - b = c) Math that contains linear equations but they are not solved (of another form than ax + b = c or ax - b = c) Similar algebraic operations on similar numbers (e.g. the numbers are similar in magnitude or the numbers are the same) Similar algebraic operations (on other types of numbers, e.g. 
much larger or smaller) Other forms of algebra Arithmetic operations Other types of maths Code that contains arithmetic Code that concerns other types of math Code that concerns no math/algebra Text about math/algebra (no other relation to the query than that the text is about math, text does not perform math/algebra) Reasoning traces (multiple reasoning steps are explicitly given in the document explaining how one gets to an answer) Superficial similarities (there is no real relation, but loosely related topics occur, like the text contains words related to other parts of math, like arithmetic) Similar formatting (question/answer pair about other topics than math) Similar formatting (other) Other (pick own keyword) Explain your answer for each keyword by quoting from the query and document and describing why they are similar. Keep in mind that the document might be in another language than English. If you pick any of the code keywords, add the programming languages in brackets (e.g. ‘Code that contains arithmetic (Python, LaTeX)’) If the relation between the query and the document is not described by any of the given keywords, choose ‘other’ and pick your own keyword that describes the document. Otherwise, if the query is not related to the document, state ‘no relation’ and describe why. Give your answer in the form of a semicolon-separated list of keywords, and add an explanation below separated by newlines (e.g. ‘keyword 1; keyword 2; keyword 3 (Python) [explanation]’). If you pick a keyword about solving a linear equation, add the linear equation in the explanation. 33 Preprint, under review. A.7 FURTHER DISCUSSION OF LIMITATIONS More broadly, our work suffers from the same limitations any work does that uses EK-FAC in- fluence functions; we do many approximations to estimate the counterfactual and only take into account MLP parameters. This latter decision is because EK-FAC influence functions are not prop- erly defined for the attention layers (Grosse et al., 2023), although we do look at the dense layers used within them. We list the assumptions and approximations here: • First-order Taylor approximation to the PBRF. • Assume different layers of MLPs are independent, making the Gauss-Newton Hessian block-diagonal. • Assume activations are independent of pre-activation pseudo-gradients. • Estimate the approximation to the Fisher Information Matrix or equivalently the Gauss- Newton Hessian by sampling from the empirical data distribution / model output distribu- tion, because it’s an expectation over that distribution (MC estimation). • Block-diagonal approximation of the eigenvector matrices within each layer. • Low-rank approximation of query gradients. • Assume EK-FAC for SFT stage is identity (Bae et al., 2024). All these approximations are verified and justified in Grosse et al. (2023) and (Bae et al., 2024), and the reader is referred there for a more in-depth analysis. Our empirical results showing that nonetheless influence functions surface documents that are causally related to accuracy in Appendix A.1 should alleviate some of these concerns, but not all. 34 Preprint, under review. A.8 ADDITIONAL RESULTS FOR THE QUALITATIVE ANALYSIS A.8.1 DETAILS ON ANSWERS TO QUESTIONS IN PRETRAINING DATA In the main text, we find the answer to factual questions relatively often compared to the answer to reasoning questions. 
In this section, we comment on the possibility that the answer to reasoning questions are simply not part of the pretraining sample of 5 million documents we look at, as well as present examples of documents with answers to queries. Recall that all reasoning tasks require multiple steps, and the model outputs reasoning traces to get to the final answer. This means that if the model is retrieving the answers, it should retrieve answers to all the reasoning steps. On top of the search in the main paper in Section 5.2, we search for answers to the reasoning steps and factual questions in a random subset of the 5M pretraining documents. For the 7B reasoning questions, we find 43 documents containing answers to reasoning steps, of which only 9 show up in the top 0.02% of the data. Of these 9, 4 documents together contain the 2 answers found for the 7B arithmetic queries in the main text. The remaining 5 are answers to single reasoning steps that do not combine to a full answer. By contrast, we find the full answer to factual questions in 73 documents, of which 35 show up in the top 0.02% of the data. For the 35B, we find 7 documents with answers to reasoning steps, of which 4 show up in the top 0.02% (none combining to a full answer). For the factual questions, we find 17 documents with answers, of which 15 show up in the top 0.02%. In terms of full answers showing up in the top 0.02%, we find one additional full answer on top of the ones we found in the main text for the 7B reasoning questions, spread over two documents with rank 896542 and 4997351 of 5 million respectively (i.e. highly un- or negatively influential). For the 35B we do not find full answers to reasoning queries at all. We provide many documents with answers to factual and reasoning queries found in the top 0.02% in the supplemental material as well as one example per reasoning step answer we find (e.g. if we find the answer to 6 − 4 = 2 four times, we show one example in the supplement). We highlight here some examples from the larger pretraining sample for illustration. Examples of pretraining data with answers. For factual questions, it happens relatively frequently that the answer to the question shows up as highly influential in multiple documents of the top 10 documents. For example, for the factual question in Table 2 (“What is the tallest mountain in the world and how tall is it?”), the answer shows up at ranks 1, 4, 6, and 7. The document at rank 1 (the most positively influential document), is the following, which has the same question and the answer (question 5 below, underlined): 35 Preprint, under review. Factual query What is the tallest mountain in the world and how tall is it? Completion by 7B The tallest mountain in the world is Mount Everest, which is located between Nepal and Tibet. The height of Mount Everest is 8,849 meters (29,029 feet) above sea level. Rank 1 document Write some general trivia questions and answers. Here are some general trivia questions and answers: 1. What is the longest word in the English language? Answer: The longest word in the English language is ”pneumonoultramicroscopic- silicovolcanoconiosis”, which is a type of lung disease. 2. How many bones are in the human body? Answer: There are 206 bones in the human body. 3. What is the largest ocean in the world? Answer: The largest ocean in the world is the Pacific Ocean, which covers approxi- mately one-third of the Earth’s surface. 4. What is the longest river in the world? 
Answer: The longest river in the world is the Nile River, which is 6,650 kilometers (4,130 miles) long. 5. What is the highest mountain in the world? Answer: The highest mountain in the world is Mount Everest, which is 8,848 meters (29,029 feet) high. 6. What is the most common element in the human body? Answer: The most common element in the human body is oxygen, which makes up approximately 65% of the human body’s mass. ... redacted 36 Preprint, under review. Another document has the answer to part of an arithmetic query for the 7B (“Calculate the answer: (5 - 3) * 12. Think step-by-step.”), namely 5 - 3 = 2 (underlined below, note that one needs to understand rules to writing arithmetic to figure the answer out): Reasoning query (arithmetic) Calculate the answer: (5 - 3) * 12 Think step-by-step. Completion by 7B First, let’s evaluate the expression inside the parentheses: 5 - 3 = 2 Now, let’s multiply the result by 12: 2 * 12 = 24 Therefore, the answer is 24. Rank 2 document shown in numbers 7, 9 and 10. PEMDAS Rules Practice 1 Solutions 1. Solution: Multiply: 2 x 3 + 4 x 6 = 6 + 24 Add: 6 + 24 = 30 Answer: 30 2. Solution: Multiply: Divide: Add: Answer: 0 3. Solution: Parenthesis: 4 + (6 – 2)2 + 1 = 4 + 42 + 1 Exponent: 4 + 42 + 1 = 4 + 16 + 1 Add: 4 + 16 + 1 = 21 Answer: 21 4. Solution: Parenthesis: 8(6 – 2) ÷ 2(5 – 3) = 8(4) ÷ 2(2) Multiply: 8(4) ÷ 2(2) = 32 ÷ 2(2)* Divide: 32 ÷ 2(2)= 16(2) Multiply: 16(2) = 32 Answer: 32 *This is the case mentioned in the PEMDAS Rules that when multiplication and divi- sion are performed consecutively (without any other operations or grouping symbols in between), the perform the operations from the left hand side to the right hand side. ... redacted Interestingly, this document shows up in the top 10 documents for 11 of 20 arithmetic queries. By contrast, the factual answer document shown before shows up in the top 10 for 4 of 40 queries (we 37 Preprint, under review. have another query that asks for the largest ocean in the world, for which this document also has the answer). To show that answers to more “niche” questions also show up, consider this document that contains the answer to the question “What is the common name for the larva of a housefly?” (answer: maggot, underlined below): Factual query What is the common name for the larva of a housefly? Completion by 35B The common name for the larva of a housefly is a maggot. Rank 6 document Common House Fly Flies are easily recognized and are universally annoying. To say they are unap- pealing is an understatement. Flies enjoy spending time on decaying organic filth, including rotting meat and feces. They then manage to enter your home and land on your kitchen counters and any uncovered food. House flies do more than just buzz around and steal your food. They can transfer over 65 diseases including cholera, dysentery, poliomyelitis, and tuberculosis. Flies accumulate nasty pathogens when they rest on sewers, trash, feces, and any other moist or decaying material. Flies lay their eggs on food (fruit in your fruit bowl), other animals, and even rotting flesh. When the eggs hatch, the tiny rice-like larvae are called maggots. Few other bugs provoke nausea and an overall disgust like tiny, squirming maggots. There are a number of ways to eliminate flies and their larvae from your home. The most com- mon practices include using sticky fly traps, ultra-violet lights, and baited traps. Fly swatters are effective when trying to get rid of the lone fly. 
A timely smack from a dish towel can also be employed. There are simple daily practices that will help keep flies from taking over your house. Store food in sealed containers, wipe crumbs off the counters, and don’t leave food out. Grass clippings, garbage, and decaying matter will also attract flies, so get rid of these items immediately. Keep your trash cans tightly covered and clean them on a man-made surface where the run-off can’t seep into the soil. Whenever possible, use door and window screens. This will help keep flies from entering your home. How- ever, flies can find holes and gaps so remember to check your screens occasionally and ensure they are impenetrable. Flies can also find openings on pipes or conduits that lead inside. As a means of prevention, caulk those areas that channel into your home. At ZipZap Termite & Pest Control, we know how pesky the persistent buzzing of the house fly can be. With kids running in and out, it is near impossible to keep flies from getting in. Backyard parties are hard to enjoy when you’ve got house flies buzzing around and landing on your food. If you feel you have an overwhelming number of flies This document has rank 6 for the relevant query, and never shows up in the top 10 for other queries. Below, we show a document containing the answer to the reasoning step 5 + 4 = 9, required for one of the arithmetic queries for the 7B model (“Calculate the answer: (5 + 4) * 2. Think step-by-step.”), which does not show up in the top 0.02%. 38 Preprint, under review. Reasoning query (arithmetic) Calculate the answer: (5 + 4) * 2 Think step-by-step. Completion by 7B First, let’s evaluate the expression inside the parentheses: 5 + 4 = 9 Now, let’s move on to the next step: 9 * 2 = 18 Therefore, the answer to the given expression is 18. Rank 2140 document For example: n=1 = 1+2=3 n=2 = 1+4=5, 2x3=6 n=3 = 4+5=9, 1+7=8, 2x3=6 The question is, does a solution exist for every n? I tried writing a basic program and it becomes too slow after n = 14. Here are the solutions I have so far: 1 [‘1+2=3’] 2 [‘2*3=6’, ‘1+4=5’] 3 [‘4+5=9’, ‘1+7=8’, ‘2*3=6’] 4 [‘3+6=9’, ‘1+10=11’, ‘4+8=12’, ‘2+5=7’] 5 [‘2+8=10’, ‘3+6=9’, ‘1+13=14’, ‘5+7=12’, ‘11+4=15’] 6 [‘3*5=15’, ‘2+8=10’, ‘4+14=18’, ‘6+11=17’, ‘7+9=16’, ‘1+12=13’] 7 [‘6+12=18’, ‘3*5=15’, ‘7+10=17’, ‘1+20=21’, ‘4+9=13’, ‘2+14=16’, ‘8+11=19’] 8 [‘8+14=22’, ‘11+13=24’, ‘4+5=9’, ‘3+20=23’] 9 [‘6+19=25’, ‘9+15=24’, ‘5+16=21’, ‘11+12=23’] 10 [‘6+19=25’, ’ ‘4+13=17’, ‘2+18=20’, ‘7+10=17’, ‘2+19=21’, ‘8+14=22’, ‘6+12=18’, ‘1+15=16’, ‘1+26=27’, ‘3+7=10’, This document has rank 2140 for the relevant query. 39 Preprint, under review. A.8.2 CROSS-LINGUAL TRANSFER Additional finding: The answer to the factual question sometimes shows up in non-English lan- guages. Interestingly, we observe some crosslingual transfer for the factual questions. For example, for the question about the tallest mountain in the world (Table 2), the answer shows up in Portuguese: A americana Samantha Larson, de 19 anos, se tornou nesta sexta-feira a mulher es- trangeira mais jovem a conquistar o Monte Everest, segundo nota oficial divulgada pelo Minist´erio de Turismo do Nepal. A montanha, de 8.848m, ´e a mais alta do mundo e se encontra na fronteira entre o Nepal e Tibet. Which translates to: American Samantha Larson, 19, became the youngest foreign woman to conquer Mount Everest on Friday, according to an official statement released by Nepal’s Ministry of Tourism. 
The 8,848m mountain is the highest in the world and is located on the border between Nepal and Tibet. We observe more crosslingual transfer for questions, for example for the question “What is the capital of Belgium?” the answer shows in up in French and Spanish. We show the French document here: Le Premier ministre belge Yves Leterme a assur´e ce mercredi qu’il resterait en place et m`enerait `a bien la r´eforme institutionnelle entre les r´egions, malgr´e les profondes divi- sions entre Flamands et Wallons qui menacent l’unit´e du pays. ... Les francophones redoutent pour leur part une r´eduction des budgets accord´es `a la Wallonie, r´egion la plus pauvre du pays, et `a la capitale bilingue, Bruxelles. Ils esti- ment ´egalement que les r´egions se sont vu transf´erer depuis les ann´ees 1980 assez de comp´etences f´ed´erales, et soupc¸onnent les n´eerlandophones de chercher `a faire s´ecession de la Belgique afin de pouvoir d´eclarer l’ind´ependance de la Flandre. Which translates to: Belgian Prime Minister Yves Leterme assured on Wednesday that he would stay in office and carry out the institutional reform between the regions, despite the deep divisions be- tween Flemish and Walloons that threaten the unity of the country. ... The French speakers, for their part, fear a reduction in the budgets granted to Wallonia, the poorest region of the country, and to the bilingual capital, Brussels. They also believe that the regions have been transferred enough federal powers since the 1980s, and suspect that the Dutch-speaking countries are seeking to secede from Belgium in order to be able to declare the independence of Flanders. Note that both these quotes are snippets from otherwise larger documents. We did not translate all documents and hence only found cases of crosslingual transfer if there happened to be keyword overlap. We show a few here, but have found the answer to factual questions through keyword overlap with non-English documents 8 times for the 7B model and 4 times for the 35B model. Note that because this is only based on circumstantial keyword overlap, we likely missed most cases of cross-lingual transfer, and therefore cannot assign any meaning to the fact that it happened less for the 35B than the 7B. It would be interesting to focus on cross-lingual transfer in future work. 40 Preprint, under review. A.8.3 CHARACTERISE RELATION TOP DOCUMENTS TO QUERY Finding 4: why documents are influential for reasoning. We prompt Command R+ to characterise the relationship between the top 500 documents and each query (see prompts in Appendix A.6). We add ‘reasoning traces’ as a potential keyword in the prompt, but after inspecting the results we find the model uses that keyword for almost any document, and we remove those results. We report the raw counts of each keyword occurring in the tables below. Arithmetic (7B) Other types of maths Similar arithmetic operations on other numbers (e.g. much larger/smaller) Code that contains arithmetic Text about math/arithmetic Code that concerns other types of math Similar arithmetic operations on similar numbers Similar formatting Superficial similarities Code that concerns no math/arithmetic Count 5765 4691 4038 3202 2554 2246 2223 1391 277 Table 14: Raw counts of the amount of times Command R+ assigns a certain keyword to a query- document pair to characterise its relation, for the arithmetic (7B) queries. 
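The raw counts in Tables 14-17 can be obtained by taking the first line of each Command R+ response, which the prompts in Appendix A.6 ask to be a semicolon-separated keyword list, and tallying the keywords. The parsing below is a sketch of how such responses could be aggregated, not a description of the exact pipeline:

```python
from collections import Counter

def parse_keyword_line(response: str) -> list[str]:
    """Split the first line of a response into keywords.

    Code keywords may carry a language annotation, e.g.
    'Code that contains arithmetic (Python, LaTeX)'; we keep the text as-is here
    and leave any normalisation or per-language tallying to a later step.
    """
    first_line = response.strip().splitlines()[0]
    return [part.strip() for part in first_line.split(";") if part.strip()]

counts = Counter()
example = ("Code that contains arithmetic (Python); Similar formatting (other)\n"
           "Both the query and the document perform sums over small integers.")
counts.update(parse_keyword_line(example))
# counts now maps each keyword string to how often it was assigned.
```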
Slopes (7B) Other types of maths Similar arithmetic operations on similar numbers Code that contains arithmetic Similar formatting Text that explains in words how to calculate the slope of an equation Code that concerns other types of math Text about math/arithmetic Text that explains in words how to calculate the slope between two numbers Math that calculates the slope of an equation Math that calculates the slope between two numbers Superficial similarities Text that mentions the slope but does not explain how to calculate it Code that calculates the slope between two numbers Code that calculates the slope of an equation Code that concerns no math/arithmetic Other Count 10787 7312 5035 4675 3911 3577 3323 2959 2921 2490 2222 1677 1633 1110 263 15 Table 15: Raw counts of the amount of times Command R+ assigns a certain keyword to a query- document pair to characterise its relation, for the slopes (7B) queries. 41 Preprint, under review. Slopes (35B) Other types of maths Similar arithmetic operations on similar numbers Code that contains arithmetic Similar formatting Text that explains in words how to calculate the slope of an equation Text about math/arithmetic Math that calculates the slope of an equation Math that calculates the slope between two numbers Code that concerns other types of math Text that explains in words how to calculate the slope between two numbers Superficial similarities Text that mentions the slope but does not explain how to calculate it Code that calculates the slope between two numbers Code that calculates the slope of an equation Code that concerns no math/arithmetic Other Similar arithmetic operations on other numbers (e.g. much larger/smaller) Count 11104 8340 4617 4141 3869 3845 3745 3533 3192 2747 2291 1936 1150 865 121 12 1 Table 16: Raw counts of the amount of times Command R+ assigns a certain keyword to a query- document pair to characterise its relation, for the slopes (35B) queries. Linear (35B) Math that contains linear equations but they are not solved Similar algebraic operations on similar numbers Similar formatting Math that solves a linear equation for a variable Other forms of algebra Arithmetic operations Code that contains arithmetic Other types of maths Text about math/algebra Code that solves a linear equation of another form than ax + b = c or ax - b = c Superficial similarities Code that concerns other types of math Code that concerns no math/algebra Code that solves a linear equation for a variable Math that solves an equation with multiple variables for one or both variables Math that contains linear equations of another form than ax + b = c or ax - b = c Code that solves a linear equation with multiple variables for one or both variables Other Count 13434 10717 5533 2415 2234 2057 1417 1390 1146 1109 1105 949 560 475 172 156 110 1 Table 17: Raw counts of the amount of times Command R+ assigns a certain keyword to a query- document pair to characterise its relation, for the linear (35B) queries. 42 Preprint, under review. Figure 7: For the reasoning and factual sets, we compare the amount of documents from a certain source dataset that show up in the top portions of the rankings to the amount you would expect to show up if you randomly sample from the pretraining distribution (indicated by ‘Training distribu- tion’ in the figure). The top two plots are for the 7B, and the bottom for the 35B. 
We find that data from Wikipedia and Math & Trivia are important for the factual questions for both models; for the reasoning questions, Math & Trivia, StackExchange, Code, and ArXiv data are important. In all cases, the multipliers tend to the training distribution for higher k.

A.8.4 SOURCE DATASET ANALYSIS

Finding 5: code is heavily overrepresented for reasoning, both for the top and bottom portions of the ranking. For each source dataset, we report the multiplier w.r.t. the training distribution. This means that if the top k documents are randomly sampled from pretraining, the multipliers will be one, whereas if they are above or below one, that source dataset is either over- or underrepresented in the most influential documents. The full results are presented in Figure 7, and we discuss the most interesting deviations from the pretraining distribution here. For the factual questions, the most overrepresented source datasets for both the 7B and 35B are Math & Trivia (multipliers of 27 and 16 for k = 50 respectively) and Wikipedia (multipliers of 5 and 6 respectively). For the reasoning questions, the most overrepresented datasets are StackExchange and Math & Trivia (with multipliers of 50 and 24 for the 7B, and 62 and 21 for the 35B). Interestingly, for both the 7B and the 35B, code data is important for the influential documents. Besides StackExchange, for the medium-influential portion of the rankings (between k = 5000 and k = 50000), more code data becomes influential (with multipliers around 2, compared to 0.5 for the factual questions at that same part of the ranking). This is conventional wisdom among practitioners (most LLM designers now use some percentage of code data in pretraining, e.g. Touvron et al. (2023)), and recent work has empirically found code to be important for reasoning performance (Aryabumi et al., 2024). However, the question of why code data is important for reasoning is still open. Below, in Appendix A.8.5, we further confirm that code is important for reasoning by not only relying on the fact that these documents come from a code dataset, but actually classifying their contents. In Figure 8 we present the same plot for the bottom portion of the ranking, showing the findings are similar. Further, in Figures 9 and 10 we show the same results for the top and bottom portions of the rankings for the control queries, respectively. Again, the results look similar (code and StackExchange are also overrepresented for the reasoning control queries), but arXiv is less overrepresented for reasoning control and wiki is less overrepresented for factual control answering.

Figure 8: For the reasoning and factual sets, we compare the amount of documents from a certain source dataset that show up in the bottom portions of the rankings to the amount you would expect to show up if you randomly sample from the pretraining distribution (indicated by ‘Training distribution’ in the figure). The top two plots are for the 7B, and the bottom for the 35B. We find the patterns are almost identical to those shown for the top portions of the ranking: data from Wikipedia and Math & Trivia are important for the factual questions for both models; for the reasoning questions, Math & Trivia, StackExchange, Code, and ArXiv data are important. In all cases, the multipliers tend to the training distribution for higher k.
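A sketch of how the multipliers shown in Figures 7-10 can be computed, given the source dataset of each top-k document and the source fractions of the pretraining sample; the names and numbers below are illustrative, not the actual pipeline or data mix:

```python
from collections import Counter

def source_multipliers(top_k_sources: list[str],
                       pretraining_fractions: dict[str, float]) -> dict[str, float]:
    """Multiplier of each source dataset among the top-k documents w.r.t. pretraining.

    A multiplier of 1 means the source shows up as often as expected under random
    sampling from the pretraining distribution; above 1 means it is overrepresented
    among the influential documents, below 1 underrepresented.
    """
    counts = Counter(top_k_sources)
    k = len(top_k_sources)
    return {
        source: (counts[source] / k) / fraction
        for source, fraction in pretraining_fractions.items()
        if fraction > 0
    }

# Illustrative numbers only (not the real data mix):
print(source_multipliers(
    ["code"] * 30 + ["wiki"] * 5 + ["web"] * 15,
    {"code": 0.15, "wiki": 0.05, "web": 0.80},
))
# {'code': 4.0, 'wiki': 2.0, 'web': 0.375}
```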
Figure 9: For the query control sets, we also compare the amount of documents from a certain source dataset that show up in the top portions of the rankings to the amount you would expect to show up if you randomly sample from the pretraining distribution (indicated by ‘Training distribution’ in the figure). The top two plots are for the 7B, and the bottom for the 35B. We find that code is still overrepresented, but arXiv as a source is less overrepresented for the top portions of the reasoning control set than for the reasoning set.

Figure 10: For the query control sets, we also compare the amount of documents from a certain source dataset that show up in the bottom portions of the rankings to the amount you would expect to show up if you randomly sample from the pretraining distribution (indicated by ‘Training distribution’ in the figure). The top two plots are for the 7B, and the bottom for the 35B. We find that it again looks similar to the source distribution for the top of the rankings for the query control sets.

A.8.5 CONTENT ANALYSIS OF RELEVANT DOCUMENTS

We provide further insights into the characteristics of influential documents for reasoning queries. To do so, we compute capability categories of the n = 500 most frequently occurring documents among the k = 5000 most (top) or least (bottom) influential documents for the reasoning queries (for the 7B model), and compare these to a randomly sampled set of 500 documents (we repeat the sampling process three times and provide mean and standard deviation scores on the detected capabilities). Results are shown in Figure 11. We can see that the “code” category represents the vast majority of most and least influential documents, whereas for the random subsets the fraction of code-related documents is relatively small. This provides further evidence that code-related documents strongly influence model performance on reasoning tasks.

Figure 11: Comparison of capability categories identified for the most and least influential documents for the reasoning queries, as well as for a random subset of sampled documents. We repeat the random sampling three times and report mean scores with standard deviations indicated.

A.9 ADDITIONAL RESULTS FOR THE QUANTITATIVE ANALYSIS

A.9.1 CORRELATION ANALYSIS

Figure 12: The correlation between the influence scores of all 5 million documents for pairs of queries. All queries are on the x- and y-axis, with the first 40 belonging to the factual set, the next 40 to the reasoning set (arithmetic and slopes for the 7B, and linear and slopes for the 35B), the following 10 to the factual control set, and the last 10 to the reasoning control set. The take-away is that there is only a significant correlation between queries of the same reasoning type, most strongly so for the 35B slopes queries.

Figure 13: The average correlations between the influences of all documents for queries, grouped by query type. We leave out any query combinations where the correlation is not significant and any combination where the query on the x- and y-axis is the same query. We again observe that there is only a correlation of influence for queries of the same reasoning type.

Additional results finding 1 (correlation between reasoning queries of the same type).
In the main text, we find that there is a correlation between the influence scores of the documents for different queries that underlie the same type of reasoning question (e.g. questions that all require calculating the slope, but for different numbers). One other explanation for this result could be the fact that all these queries are superficially more similar to each other than the factual questions are, and that this is the reason the influences correlate. To test this hypothesis, we use the 10 control queries for both the factual questions and the reasoning questions that are superficially similar, but do not require factual retrieval or reasoning to get to the solution (see Appendix A.3 for examples from each set). Figure 12 shows all query-query correlations for the 100 queries we look at, ordered as follows: 40 factual queries, 40 reasoning (of which 20 are arithmetic and 20 slopes questions for the 7B, and 20 linear equation and 20 slopes questions for the 35B), 10 factual control, and 10 reasoning control queries. We see that there is only a significant correlation between queries of the same reasoning type, which we summarise in Figure 13 by grouping queries of the same type and averaging the correlations. To get a sense of what the control questions look like, we show a side-by-side comparison of a slopes query (top) and the control query that is meant to resemble it (bottom) in Table 18 below.

Table 18: Two query examples, of which one is from the reasoning set (slopes) and the other from the reasoning control set that resembles the slope queries superficially, but does not require any reasoning. The completions are by the 7B model.

Reasoning and Reasoning Control Query Set (7B)
Slopes (original prompt)
What is the slope of the line passing through the points (79, 47) and (80, 25)? Think step-by-step.
Completion
To find the slope of the line passing through the points (79, 47) and (80, 25), we can use the following steps: 1. Identify the coordinates of the two points: - Point A: (79, 47) - Point B: (80, 25) 2. Calculate the difference in x-coordinates between the two points: - Point A x-coordinate: 79 - Point B x-coordinate: 80 ... redacted completion.
Slopes (control prompt)
The slope of the line is -22. What is the slope of the line? Think step-by-step.
Completion
Given that the slope of the line is -22. To find the slope of the line, we need to find the change in y-coordinates divided by the change in x-coordinates. The slope of the line is -22. Therefore, the slope of the line is -22.

The influences for this control query correlate with the influences for the slope questions on average with a Pearson's R of 0.05, which is much smaller than the 0.32 average correlation between the influences found for the different slope reasoning questions by the 7B model. Below, we perform a more detailed qualitative analysis of the query combinations and what drives their correlations, but first we discuss the quantitative result. As mentioned, we have 10 factual and 10 reasoning control questions for both models, and show the full correlation matrices below in Figure 12 (per query) and Figure 13 (averaged per group). We observe that the correlations between queries from the control sets and other query sets for the 35B are always between 0.05 and 0.10, which indicates that there can be a score correlation of at least 0.10 for things other than genuine reasoning (e.g. formatting, or topic).
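The query-query correlations discussed here can be computed directly from the per-query score vectors. A minimal sketch, assuming the scores are stored as a (num_queries × num_documents) array:

```python
import numpy as np

def query_correlation_matrix(scores: np.ndarray) -> np.ndarray:
    """Pearson correlations between the influence-score vectors of all query pairs.

    scores: array of shape (num_queries, num_documents), e.g. 100 x 5,000,000,
    with one row of document influence scores per query. Each row is treated as
    a variable, so the result is a (num_queries x num_queries) matrix as in Figure 12.
    """
    return np.corrcoef(scores)

# Tiny toy example standing in for the real score matrix:
rng = np.random.default_rng(0)
corr = query_correlation_matrix(rng.normal(size=(4, 10)))
# Averaging the off-diagonal entries within query groups gives the values in Figure 13.
```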
Further, the within- group correlations of the reasoning control set sometimes go as high as 0.38 (although the average 49 Preprint, under review. is 0.06 for the 7B and 0.10 for the 35B). For comparison, the average linear-linear score correlation for the 35B is 0.16, and not many of the correlations that make up this average are higher than the correlations in the reasoning control sets. To get a sense of how different the correlations are in magnitude between the reasoning questions and the control questions, we calculate the highest correlation of a query from a specific reasoning type with any other query that does not concern reasoning, and count the amount of reasoning query-query combinations for which the correlation is higher. For example, the maximum correlation we find between any slope question for the 35B and any other query that is not a slope question is 0.30 Pearson’s R. If we discard all slope query combinations that are below 0.30 we are left with 138 of 190 significant combinations that are higher, ranging up to 0.96 Pearson’s R (note that each reasoning group has 20 queries, and all combinations are 20 ∗ 19/2 = 190). For the linear equation queries by contrast, there are only 34 of 190 query- query combinations within this group that have a correlation higher than the highest correlation with the control queries, ranging up to 0.95 Pearson’s R. For the 7B, 84 of 190 arithmetic query combinations have a higher correlation than the control correlations, ranging up to 0.96 Pearson’s R, and 120 of 190 slopes query combinations, ranging up to 0.88. We therefore conclude that the correlations between the queries for the linear equations can mainly be explained by other, more superficial things than procedural knowledge, and connect this finding to the fact that the model is less robustly able to solve linear equations. The within-group correlations of the factual set are much lower, and for the 7B we only find 5 of 780 correlations that are higher than the maximum correlation of a factual query with another query group, ranging to 0.63 Pearson’s R (we show the queries with the highest correlation below). For the 35B, we find no correlations for factual queries higher than the maximum correlation with another group. We release all 100 ∗ 100 query-query correlations in the supplemental material (on which Figure 12 is based), and we highlight a few examples here to get a sense of what is driving higher or lower correlations. We mostly do not find a correlation between the influence scores for different factual questions (on average the correlation is 0.06 for the 7B and 0.03 for the 35B), but we show the highest correlation found between two factual queries below in Table 19. For this particular example, the correlation seems driven by the fact that they are both “what”-questions with very short completions using quotation marks. By contrast, a similar style query-completion pair for the 35B shows almost no correlation (Table 20). For the factual control set (the average correlation is 0.04 for the 7B and 0.08 for the 35B, but we also show a factual control correlation below for completeness. Table 11 shows a factual query and a matched factual control example for the 35B model. The correlation between this control query and the factual query it matches is -0.03. Table 19: Two query examples for which the Pearson’s R correlation of their document influences is 0.63, both from the factual set of the 7B model. 
We observe that the main similarity between these queries is that they both concern short “what” questions, and the completions both are short answers with quotes used. Factual Query Set (7B) Factual prompt 1 What is the symbol for gold on the periodic table? Completion to factual prompt 1 The symbol for gold on the periodic table is “Au”. Factual prompt 2 What is the common name for the larva of a housefly? Completion to factual prompt 2 The common name for the larva of a housefly is “maggot” See Table 22 below for a representative high correlation example for the 35B slopes questions (Pear- son’s R of 0.89): We see that the numbers for the questions are different, but the steps to get to the answer are the same and formatted similarly. We generally observe that a combination of different numbers used in the same procedure and similarly formatted reasoning steps results in higher correlation. Another 50 Preprint, under review. Table 20: Two query examples for which the Pearson’s R correlation of their document influences is 0.08, both from the factual set of the 35B model. We observe that the main similarity between these queries is that they both concern short “what” questions, and the completions both are relatively short answers with quotes used. Factual Query Set (35B) Factual prompt 1 What is the name of the main antagonist in the Shakespeare play “Othello”? Completion to factual prompt 1 The main antagonist in Shakespeare’s play “Othello” is Iago, Othello’s standard-bearer. Iago is a manipulative and deceitful character who orchestrates the tragedy in the play. Factual prompt 2 What is the name of the smallest member of the camel family? Completion to factual prompt 2 The smallest member of the camel family is called a “dromedary”. Dromedaries are easily recognizable by their single hump. They are native to North Africa and the Middle East. Table 21: Two query examples for which the Pearson’s R correlation of their document influences is -0.03, one from the factual set (top), and one from the factual control set (bottom). We observe that although the formatting is similar, and there are some overlapping concepts, there is no correlation between the document influences. Factual and Control Query Set (35B) Factual prompt What is the supernova remnant and pulsar wind nebula in the constellation of Taurus called? Completion to factual prompt The supernova remnant and pulsar wind nebula in the constellation Taurus is called the Crab Nebula. Factual control prompt In a fictional planetary system, there is a supernova remnant and pulsar wind nebula in the centre of a constellation of stars called Minotaur called the Goat Nebula. What is the supernova remnant and pulsar wind nebula in the constellation Minotaur called? Completion to factual control prompt In the fictional planetary system, the supernova remnant and pulsar wind nebula in the centre of the constellation Minotaur is called the “Goat Nebula”. example is shown in Table 23 below for a query combination that gets a lower correlation (0.55), but has the same answer (-22). The relatively lower correlation here seems driven by the different formatting of the completion (less new lines). This indicates that part of the correlations are driven by superficial similarities such as completion formatting. For comparison, Table 24 shows a pair of control queries for which the prompt and completion are also very similarly formatted, and the correlation is 0.38 (which is the highest correlation we find using a control query). 
However, superficial formatting cannot be the full picture, as we show in Table 25, where the cor- relation is 0.35 but the completions are very dissimilar. For this particular query, it seems that it’s not the positive scores that explain the correlation (Pearson’s R of about 0.2), but rather the negative scores (Pearson’s R of 0.44). Perhaps some of the correlation can be explained by documents that are generally negatively influential for the types of completions the models generate here. Finally, in Table 26 and 27 we show two relatively high correlation examples for the arithmetic queries, again indicating that both underlying operations as well as formatting drive correlations. 51 Preprint, under review. Table 22: Two query examples for which the Pearson’s R correlation of their document influences is 0.89, both from the reasoning query set concerning calculating slopes. We observe that, although the reasoning is applied to different numbers, the steps follow an identical pattern. In fact, the only difference between the two queries, including completions, is the numbers used. Reasoning Query Set (35B) Slopes (prompt 1) What is the slope of the line passing through the points (94, 62) and (95, 19)? Think step-by-step. Completion to prompt 1 The slope of a line passing through two points is calculated using the formula: Slope = (change in y) / (change in x) In this case, we have the points (94, 62) and (95, 19). Substituting the values into the formula, we have: Slope = (19 - 62) / (95 - 94) Slope = (-43) / (1) Slope = -43 Therefore, the slope of the line passing through the points (94, 62) and (95, 19) is -43. Slopes (prompt 2) What is the slope of the line passing through the points (90, 20) and (94, 64)? Think step-by-step. Completion to prompt 2 The slope of a line passing through two points is calculated using the formula: Slope = (change in y) / (change in x) In this case, we have the points (90, 20) and (94, 64). Substituting the values into the formula, we have: Slope = (64 - 20) / (94 - 90) Slope = (44 / 4) Slope = 11 Therefore, the slope of the line passing through the points (90, 20) and (94, 64) is 11 Taken together, it seems like correlations can be driven by underlying procedures, formatting of the completion, and other more general things (like “what”-questions in Table 19 and 25). We generally find the highest correlations when procedures and formatting of completions coincide (of which two examples are given in Table 22 and 27). The magnitude of these correlations indicate that almost all of the influence of the 5 million documents in similar for such queries. One interesting possibility is that the query information surrounding the actual numbers generated (which do not seem to drive correlation much at all) is determined by the attention layers (which, besides the dense parameters contained in them, we ignore in this work), connecting potentially to literature attributing reasoning operations to attention heads. An interesting avenue for future work would be investigating this further. 52 Preprint, under review. Table 23: Two query examples for which the Pearson’s R correlation of their document influences is 0.55, both from the reasoning query set concerning calculating the slope. We observe that the second completion is structured very similarly to the first, except it has less new lines. Reasoning Query Set (7B) Slopes (prompt 1) What is the slope of the line passing through the points (62, 31) and (61, 53)? Think step-by-step. 
Completion The slope of a line passing through two points is calculated using the formula: Slope = (change in y) / (change in x) In this case, we have the points (62, 31) and (61, 53). Substituting the values into the formula, we have: Slope = (31 - 53) / (62 - 61) Slope = (-22) / (1) Slope = -22 Therefore, the slope of the line passing through the points (62, 31) and (61, 53) is -22 Slopes (prompt 2) What is the slope of the line passing through the points (79, 47) and (80, 25)? Think step-by-step. Completion The slope of a line passing through two points is calculated using the formula: Slope = (change in y) / (change in x) In this case, we have the points (79, 47) and (80, 25). Substituting the values into the formula, we have: Slope = (25 - 47) / (80 - 79) Slope = (-22) / (1) Slope = -22 Therefore, the slope of the line passing through the points (79, 47) and (80, 25) is -22. 7B vs 35B An additional finding that is not central to the research question in this work, but is nonetheless interesting, is that there is almost no correlation between the influence scores of the two different models. We have 36 queries that share the same prompt for the 7B and 35B (16 factual questions, and 20 slopes reasoning questions) and we can calculate the Pearson’s R of the queries with matched prompts (i.e. 36 combinations). The average correlation of influence scores is 0.02 Pearson’s R (if we only look at the slopes questions the average correlation is 0.03). The maximum correlation we find is 0.19, for the question “What is the capital of Belgium?”, which we know from above is not a comparatively high score correlation. Interestingly, for this query, both models produced the exact same completion, and still the correlation is comparatively low. All other query combinations correlate with a Pearson’s R below 0.11. This connects to a finding from Grosse et al. (2023) (larger models rely on data that is more abstractly related to the prompt): the 35B model relies on very different pretraining data than the 7B, and the same pretraining documents influence completions for the same prompt very differently. 53 Preprint, under review. Table 24: Two query examples for which the Pearson’s R correlation of their document influences is 0.38, both from the reasoning control set for the 7B model. We observe that the formatting is very similar, but the correlation is still lower than for the reasoning queries above. Reasoning Control Query Set (7B) Reasoning Control (prompt 1) Find the answer: There are 21 kids in the bus. How many kids are in the bus? Think step-by-step. Completion There are 21 kids in the bus. Reasoning Control (prompt 2) Find the answer: There are 132 marbles in a jar. How many marbles are in the jar? Think step-by-step. Completion There are 132 marbles in the jar. Table 25: Two query examples for which the Pearson’s R correlation of their document influences is 0.35, both from the reasoning control set for the 35B model. We observe that their completions are very different, indicating that it can not only be superficial similarities that drive correlations. Reasoning Control Query Set (35B) Reasoning control (prompt 1) This line has a slope 6. What is the slope of the aforementioned line? Think step-by-step. Completion The slope of a line is a measure of the steepness of the line. It is defined as the ratio of the change in the y-coordinate to the change in the x-coordinate. In this case, the line has a slope of 6. 
This means that for every 1 unit increase in the x-coordinate, the y-coordinate will increase by 6 units. It’s important to note that the slope of a line is not limited to positive values and can be negative or zero depending on the orientation of the line Reasoning control (prompt 2) Find the answer: there are two bananas on a tree, how many bananas are on the tree? Think step-by-step. Completion There are two bananas on the tree. 54 Preprint, under review. Table 26: Two query examples for which the Pearson’s R correlation of their document influences is 0.49, both from the reasoning query set concerning two-step arithmetic. We observe that one of the operations required is overlapping (+ 6), and the completion is somewhat similarly formatted but not fully. Reasoning Query Set (7B) Arithmetic (prompt 1) Calculate the answer: (5 + 6) * 10 Think step-by-step. Completion First, let’s evaluate the expression inside the parentheses: 5 + 6 = 11 Now, let’s multiply the result by 10: 11 * 10 = 110 Therefore, the answer is 110 Arithmetic (prompt 2) Calculate the answer: (6 + 10) * 6 Think step-by-step. Completion First, let’s evaluate the expression inside the parentheses: 6 + 10 = 16 Now, let’s calculate the result of multiplying the expression inside the parentheses by 6: 16 * 6 = 96 Therefore, the answer to the expression is 96 55 Preprint, under review. Table 27: Two query examples for which the Pearson’s R correlation of their document influences is 0.94, both from the reasoning query set concerning two-step arithmetic. We observe that not only is the completion very similarly formatted, but one of the operations overlaps (multiplying something by 2). Reasoning Query Set (7B) Arithmetic (prompt 1) Calculate the answer: (5 - 3) * 12 Think step-by-step. Completion First, let’s evaluate the expression inside the parentheses: 5 - 3 = 2 Now, let’s multiply the result by 12: 2 * 12 = 24 Therefore, the answer is 24 Arithmetic (prompt 2) Calculate the answer: (4 - 2) * 15 Think step-by-step. Completion First, let’s evaluate the expression inside the parentheses: 4 - 2 = 2 Now, let’s multiply the result by 15: 2 * 15 = 30 Therefore, the answer is 30 56 Preprint, under review. A.9.2 MAGNITUDE OF INFLUENCE Additional results finding 2 (magnitude of influence is much lower and less volatile for reason- ing questions). In the main paper, we find that the influence of documents at the same rank for factual questions is much more volatile than for reasoning questions. We mention that one explanation for this might be that the queries for the 35B model are much more niche, and therefore the relevant documents much more infrequent. To test this hypothesis, we plot the same results for only the overlapping queries (those that are part of both query sets for the 7B and 35B) in Figure 14. We find that the magnitude and variance is still larger for the 35B model than for the 7B model, indicating that the influence of influential documents for the factual and reasoning questions by the 35B can be much larger than for the 7B model. Further, in Figure 15 we show that the results look similar for the negative portions of the ranking (where we flip the influence scores from negative to positive). Figure 14: The total influence per nat of query completion information for different portions of the positive ranking over documents, left for the 7B model, right for the 35B. In this case, we only plot queries that are present in the query sets for both models. 
This means the prompt is the same, but the completion is be different. The pattern is very similar as the observed pattern for the top of the ranking. Figure 15: The total influence per nat of query completion information for different portions of the negative ranking over documents, left for the 7B model, right for the 35B. We again only plot queries that are present in the query sets for both models. In this case, the k-th percentile contains the top k % of most negatively influential documents. The pattern is very similar as the observed pattern for the top of the ranking. Finally, in Figure 16 and Figure 17 we plot the same metric for all queries for the top and bot- tom parts of the rankings respectively, now including the 10 control set queries of the factual and 57 Preprint, under review. Figure 16: The total influence per nat of query completion information for different portions of the positive ranking over documents, left for the 7B model, right for the 35B. We plot all queries, including the query control sets for both factual and reasoning, which contain 10 queries each. Figure 17: The total influence per nat of query completion information for different portions of the negative ranking over documents, left for the 7B model, right for the 35B. We plot all queries, including the query control sets for both factual and reasoning, which contain 10 queries each. reasoning control set. As shown in Appendix A.3, we use 10 control queries for each set to investi- gate whether results hold similarly for queries that superficially look similar as the factual/reasoning questions, but that do not require factual retrieval or reasoning respectively. We observe that the control sets both show much higher variance and magnitude than the reasoning queries as well, for the positive and negative portions of the ranking. For completeness, we show the same result with the number of documents on the x-axis instead of percentiles in Figure 18 and Figure 19, to show that the results are similar if we take into account that the 20-th percentile of documents for each query contains a different amount of documents k. 58 Preprint, under review. Figure 18: The total influence per nat of query completion information for different number of documents k of the positive ranking, left for the 7B model, right for the 35B. We plot all queries, including the query control sets for both factual and reasoning, which contain 10 queries each. Figure 19: The total influence per nat of query completion information for different number of documents k of the negative ranking, left for the 7B model, right for the 35B. We plot all queries, including the query control sets for both factual and reasoning, which contain 10 queries each. 59 Preprint, under review. A.9.3 INFLUENCE SPREAD: POWER LAWS Figure 20: The ranked influence scores per query nat for each query shown separately in log-log space. We observe; the results follow power laws (linear in log-log space), everything is shifted up for the 35B model (right), generally the scores for the reasoning documents are lower for the 7B model, and for the 35B model there is less variance in magnitude of influence for reasoning queries than for factual queries, and more often than not the influence scores are lower than for factual questions. Figure 21: The ranked influence scores per query nat for each query shown separately in log-log space again, but now also showing the control queries. 
We observe that also for the control queries the influence is much more volatile than for reasoning questions, and on average the magnitude is higher. In this section, we look at the power laws induced by the top portions of the rankings. We can fit linear functions to the rankings in log-log space, and analyse the slopes to comment on the sparsity of the rankings (i.e. how many documents do models rely on for a completion). Specifically, we perform linear regression on the log-log top 500 rankings of each query, and report the slopes in Table 28. After qualitatively inspecting the queries for the 35B model with the steepest slope, we believe an explanation for this result may be ‘noise’ in the influence scores. For example, the query with the steepest slope (α = −0.45) has as the most influential document a document that is seemingly entirely unrelated to the query. Namely, the query asks the question “What is the slope of the line passing through the points (41, 23) and (18, 92)? Think step-by-step.”, and the top influential 60 Preprint, under review. Table 28: Slopes of the fitted functions to the top 500 documents in the influence rankings in log-log space, separated by query set and whether the model gets the question right or wrong. ⋆ indicates the significance of an independent T-test performed between the slopes of the factual vs. reasoning queries, where ⋆ indicates a p-value below 0.1 and ⋆⋆ below 0.05. 7B (Incorrect) 7B (Correct) Reasoning (α) −0.36 ± 0.03⋆ −0.33 ± 0.02 −0.34 ± 0.04 −0.34 ± 0.03 Factual (α) 35B (Correct) 35B (Incorrect) −0.36 ± 0.04⋆⋆ −0.38 ± 0.04⋆ −0.34 ± 0.04 −0.32 ± 0.05 document is a snippet about the lunar eclipses and when and where they can be viewed which does not have high N-gram overlap with the query either: December 8, 1946 — Total Lunar Eclipse — Rawaki, Phoenix Islands, Kiribati Max view in Rawaki Sunday, December 8, 1946 at 5:01 AM Global Type: Total Lunar Eclipse Rawaki: Partial Lunar Eclipse Began: Sun, Dec 8, 1946 at 3:13 AM Maximum: Sun, Dec 8, 1946 at 5:01 AM Ended: Sun, Dec 8, 1946 at 8:22 AM Duration: 5 hours, 10 minutes December 8, 1946 — Total Lunar Eclipse — Rawaki You are using an outdated browser, to view the animation please update or switch to a modern browser. Alternatively you can view the old animation by clicking here. Animation: How the Partial Lunar Eclipse Looked The total phase of this lunar eclipse was not visible in Rawaki, but it could be observed there as a partial lunar eclipse. More about the December 8, 1946 — Total Lunar Eclipse Phases and local times of this eclipse Eclipses visible in Rawaki All eclipses worldwide, from 1900 to 2100 This is the only query for which we observe an unrelated top 1 document, but for the 35B model we qualitatively observed seemingly irrelevant documents in the rankings more often (in the 7B we did not observe this). This connects to a finding from literature that for large models influence functions sometimes surface documents with high gradient norms that are unrelated to the query (Barshan et al., 2020; Grosse et al., 2023; Choe et al., 2024). As Grosse et al. (2023) note, it is currently unclear whether this is true noise, or whether these are genuinely influential for the completions. Regardless, it seems like noise cannot easily explain the difference between the factual and slopes queries, as one would expect noise to show up equally everywhere. 
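To make the fitting procedure behind Table 28 concrete, a minimal sketch is given below. It is our own reading of "linear regression on the log-log top 500 rankings" and assumes one array of per-document influence scores per query; it is not the analysis code used for the reported slopes.

```python
import numpy as np

def power_law_slope(influence_scores, k=500):
    """Fit log10(score) = alpha * log10(rank) + c over the top-k
    most positively influential documents and return the slope alpha."""
    top = np.sort(influence_scores)[::-1][:k]   # descending positive ranking
    top = top[top > 0]                          # keep positive scores only
    ranks = np.arange(1, top.size + 1)
    alpha, _ = np.polyfit(np.log10(ranks), np.log10(top), deg=1)
    return alpha

# Toy usage: heavy-tailed stand-in scores (not real influence data).
rng = np.random.default_rng(0)
scores = rng.pareto(a=2.0, size=5000)
print(power_law_slope(scores))
```

A more negative alpha indicates that the influence is concentrated in fewer documents at the top of the ranking.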
Another way to visualise this result is to plot the percentage of total influence contained in different parts of the top ranking, which we do in Figure 22 below. The results in this plot show that for the top-k percentile of most positively influential documents, the total percentage of positive influence is much higher than k (e.g. 20% of the total positive influence is contained in the top 5% of documents). Here, it is clear that on average, for the 35B model the total amount of influence contained in the top-k percentile increases faster for reasoning questions than for factual questions, indicating that a larger portion of the total positive influence is contained in the top portions of the rankings. In Figure 23 we show that the same result holds if we include the control queries. As Grosse et al. (2023) note, it is not clear whether this is a sensible result to show, because for each query we are dividing the total influence at each k by the sum of positive influence for that query (perhaps a large part of the positive influence gets cancelled out by negative influence), but we show the result here nonetheless for completeness. We know from the absolute results of the total influence at different portions of the ranking that each percentage of total influence at the top-k percentile corresponds to a much lower value in absolute terms for reasoning than for the factual questions. If the relative result does not turn out to be noise, then a higher percentage of the total influence is contained in the top portions of the rankings for reasoning questions than for factual questions. Taken together with the fact that the absolute influence is often much higher for factual questions, this indicates that the model relies on more highly influential documents for factual retrieval than for reasoning. This could indicate that there are more highly relevant factual documents further down the ranking, which makes sense given that the pretraining distribution is dominated by web sources and news, which are more likely to contain relevant information for factual question answering than for reasoning. Further, it connects to the finding from the literature that models need to see examples often before text gets memorised (Chowdhery et al., 2022).

Figure 22: The percentage of total influence per nat of query completion information for different portions of the positive ranking over documents, left for the 7B model, right for the 35B. We plot only non-control queries.

Figure 23: The percentage of total influence per nat of query completion information for different portions of the positive ranking over documents, left for the 7B model, right for the 35B. We plot all queries, including the query control sets for both factual and reasoning, which contain 10 queries each.

Again, the picture looks similar for the negative portions of the ranking, shown for completeness below in Figure 24 and 25.

Figure 24: The percentage of total influence per nat of query completion information for different portions of the negative ranking over documents, left for the 7B model, right for the 35B. We plot only non-control queries.

Figure 25: The percentage of total influence per nat of query completion information for different portions of the negative ranking over documents, left for the 7B model, right for the 35B. We plot all queries, including the query control sets for both factual and reasoning, which contain 10 queries each.
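For reference, the percentage plotted in Figures 22 to 25 can be computed per query as sketched below. This is our own minimal reading of the described metric (note that the per-nat normalisation by query completion information cancels in the ratio); array and function names are ours.

```python
import numpy as np

def pct_of_total_influence(scores, percentiles=(0.1, 1, 5, 10, 20, 50)):
    """Fraction of a query's total positive influence contained in the
    top-k% most positively influential documents."""
    pos = np.sort(scores[scores > 0])[::-1]   # descending positive scores
    total = pos.sum()
    out = {}
    for k in percentiles:
        n_top = max(1, int(round(pos.size * k / 100)))
        out[k] = pos[:n_top].sum() / total
    return out

# For the negative portion of the ranking, flip the sign of the negative
# scores first, e.g. pct_of_total_influence(-scores[scores < 0]).
```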
hep-th/0107226 NSF-ITP-01-74 1 0 0 2 l u J 6 2 1 v 6 2 2 7 0 1 0 / h t - p e h : v i X r a Non-Linear / Non-Commutative Non-Abelian Monopoles Koji Hashimoto∗ Institute for Theoretical Physics, University of California, Santa Barbara, CA 93106-4030 Abstract Using recently proposed non-linearly realized supersymmetry in non-Abelian 2), we derive the non-linear BPS equations in gauge theory corrected to O(α′ the background B-field for the U(2) monopoles and instantons. We show that these non-Abelian non-linear BPS equations coincide with the non-commutative anti-self-dual equations via the Seiberg-Witten map. ∗[email protected] It has been known that there are α′ corrections to the super Yang-Mills theory as a low energy effective action of superstring theories [1][2]. The low energy effective theories have been a very strong tool for analyzing the full string theory to find dualities and non- perturbative properties. However, the entire structure of the α′ corrections is still beyond our reach, although much elaborated work has been devoted to this subject [3]–[19]. To be concrete, a stack of parallel Dp-branes has a low energy effective description which is the p+1-dimensional super Yang-Mills theory accompanied with the α′ corrections, but even in the slowly-varying field approximation the complete form of the effective action has not been obtained yet. To fix this problem, recently there appeared several attempts to constrain the action by supersymmetries [17], or the equivalence [20] to non-commutative theories [10][11]. 2). Especially the paper [17] fixed all the ambiguity of the ordering and coefficients up to O(α′ In this paper, we give an evidence supporting both of these arguments of supersymmetries and non-commutative geometries, by analyzing the BPS equations. Solitons and instantons as solutions of the BPS equations in the low energy effective theory of D-branes have brane interpretations. For example, a BPS monopole in U(2) Yang-Mills-Higgs theory corresponds to a D-string suspended between two parallel D3-branes. We consider these brane configu- rations in the background B-field, and explicitly construct U(2) non-linear BPS equations for the monopoles and the instantons. For the construction we need an explicit form of the linearly/non-linearly realized supersymmetry transformations in the effective theory which was obtained in [3] and [17]. According to the equivalence observed in [20], these equations should be equivalent with the U(2) non-commutative BPS equations [21]–[26]. In this paper we shall explicitly show this equivalence∗. This fact is a supporting evidence of the super- symmetry transformation in the effective action determined in [17]. Then we shall proceed to obtain the explicit solutions to these equations and discuss the brane interpretation of them. The low energy effective action of open superstring theory with U(N) Chan-Paton factor is given by the super Yang-Mills action corrected by α′ [1][2]: L = str 1 4 − (cid:20) (Fij)2 + 1 2 2 π2α′ FijFjkFklFli − (cid:18) 1 4 (Fij)2(Fkl)2 (cid:19)(cid:21) + (fermions) + O(α′ 3). (1) The recent argument [17][18] on the ordering of the gauge fields and the fermions shows that 2 all the terms can be arranged by the symmetrized trace (str), which up to the order of α′ is compatible with the string scattering amplitudes and also the supersymmetries. We use the action in the Euclidean four-dimensional space to treat the anti-self-duality equation for both the monopoles and instantons simultaneously. 
This action is obtained via dimensional reduction with Aµ = 0 (µ = 0, 5, 6, 7, 8, 9). The normalization for the gauge symmetry ∗In the Abelian case, this equivalence was shown in [27]. 1 generators is given by tr[T AT B] = δAB, which follows the convention of [18]. The action (1) has a linearly realized supersymmetry for the gaugino [3] δǫχA = 1 2 ΓijF A ij ǫ − 1 8 π2α′ 2str(T AT BT CT D) ij F C F B h ji F D kl Γkl − 4F B ij F C jkF D kl Γil ǫ, i (2) which includes the α′ corrections to the first nontrivial order. The recent paper [17] shows that this system has another supersymmetry, non-linearly realized supersymmetry, as is expected from the fact that the action (1) describes a stuck of N D-branes which breaks half of the bulk supersymmetries. This non-linearly realized supersymmetry is given by δηχA = ηA − 1 2 π2(α′)2str(T AT BT CT D) 1 2 (cid:20) F B ij F C ij + 1 4 F B ij F C klΓijkl(cid:21) ηD, (3) where the transformation parameter η has its value only for a U(1) subgroup of U(N) [17]. We have already neglected the fermions in the right hand sides of (2) and (3). In order to compare our results with the previous literatures [23]–[25][27] we will consider σa for a = . Therefore especially the symmetrized trace of the four generators only the gauge group U(2). The normalized generators are defined as T a = 1 √2 1, 2, 3 and T 4 = 1 √2 appearing in the above supersymmetry transformations (2) and (3) is given by str(T AT AT AT A) = str(T aT aT 4T 4) = 1 2 , str(T aT aT bT b) = 1 6 (a 6= b), (4) where the upper case A runs all the generators of U(2): A = 1, 2, 3, 4. We turn on the background B-field which induces the non-commutativity on the world- ij +2Bij, volume of the D-branes. This B-field is appearing in the action (1) as F 4 due to the bulk gauge invariance of the B-field. ij → F 4 ij = F 4 For simplicity, we put πα′ = 1, which can be restored on the dimensional ground anytime. The action (1) and its symmetries (2) (3) are obtained in string theory in the approximation F ≪ 1 and the slowly-varying field approximation. We keep this in mind, and in the following we shall obtain the non-linearly-modified BPS equations, perturbatively in small B. The basic BPS equations around whose solutions we expand the fields are the anti-self-duality equations ij + ∗F (0)a F (0)a ij = 0, F (0)4 ij = 0, (5) ij = F (0)A where we have expanded the fields as F A ij + O(B), and the Hodge ∗ is defined as ∗Fij ≡ ǫijklFkl/2. These equations are obtained by considering the lowest order in α′ in (2) by requiring a half of the linearly-realized supersymmetries are preserved. The transformation parameters of the preserved supersymemtries then obey the chirality condition (1 + Γ5)ǫ = 0 2 (6) where Γ5 = Γ1234. In the following, we assume that this chirality condition for ǫ persists also to the higher order in α′ and even with the inclusion of B. This assumption will be checked by the explicit existence of the solutions. Along the argument given in [20][27]–[31], first we consider a combination of the two supersymmetries (2) and (3) which remains unbroken at the spatial infinity where F = 0. The vanishing of F gives δǫχA = BijΓijǫ + O(B3), δηχ4 = η4 + O(B2). Thus (δǫ + δη)χ4 = 0 at the infinity is equivalent with η4 = −BijΓijǫ + O(B3). 
(7) (8) Using this relation between two supersymmetry transformations, the vanishing of the super- symmetry transform of the gaugino in all the four-dimensional space leads to BPS conditions 1 2 1 2 F a ijΓijǫ = 0, F 4 ijΓijǫ − BijΓijǫ − − 1 4 1 8 str(T 4T BT CT 4) F B ij F C ij + F B ij F C (cid:20) 1 2 F B h 1 4 klΓijkl(cid:21) ij F C str(T 4T BT CT D) ij F C ji F D kl Γkl − 4F B jkF D kl Γil ǫ = 0. (10) i (9) BijΓijǫ The first one (9) gives usual anti-self-duality equation† without any correction of B. In the analysis up to this order, only the U(1) part of the gauge field obtains the first nontrivial correction of B as F 4 ij = O(B). Let us calculate the third and the fourth terms in (10). Keeping in mind that we neglect the terms of the higher order, the third term can be arranged as − 1 4 h (F (0)B ij )2 i BklΓklǫ + O(B2), (11) where we have used the anti-self-duality of F (5) and the chirality of ǫ (6). After a straight- forward calculation, the fourth term in (10) can be evaluated in the same manner and turns out to be the same as (11)‡. The term (F (0))3 is negligible because it is of the higher order. These evaluation simplifies the BPS condition (10) to ijΓijǫ − (F (0)A F 4 kl )2BijΓijǫ = 0. (12) †The α′ corrections in the linearly-realized transformation (2) are actually factored-out when the lowest order relations (5) are substituted. ‡These calculations are easily performed with the use of the block-diagonal form of the matrix B which is obtained by the space rotation without losing generality. 3 Decomposing this condition into the components, we obtain the non-linear BPS equations ij + ∗F 4 F 4 ij − π2α′ 2(Bij + ∗Bij)(F (0)A kl )2 = 0, (13) where we have restored the dimensionality. The important is to check whether the equations (13) are equivalent with the non- commutative BPS equations via the Seiberg-Witten map [20]. The non-commutative U(2) monopoles/instantons [21]–[26] satisfy the following BPS equations ij + ∗ ˆF A ˆF A kl = 0, (14) where fields with the hat indicate the ones in the non-commutative space. Substituting the Seiberg-Witten map [20] ˆFij = Fij + 1 2 θkl (cid:18) 2{Fik, Fjk} − {Ak, (Dl + ∂l)Fij} + O(θ2) (cid:19) (15) into the above non-commutative BPS equation (14) and noting that the last gauge-variant terms in (15) vanish with the use of the lowest level anti-self-duality (5), then we obtain ij + ∗F 4 F 4 ij + 1 4 (θij + ∗θij)(F (0) kl )2 = 0. Now we can use the relation [20, 27] θij = −(2πα′)2Bij (16) (17) which has been deduced from the worldsheet propagator for an open string in the approxima- tion α′B ≪ 1, then we can see the equivalence between the non-commutative BPS equations (14) and the non-linear BPS equations (13). Let us consider the specific brane configurations. (1) U(2) non-commutative monopole. In this case we perform the dimensional reduction further down to the three-dimensional space and regard the fourth gauge field A4 as a scalar field Φ. We turn on only one component of the B-field, B12 6= 0. Since we have a solution to the U(2) non-commutative BPS equation for a monopole [23][24][26], and we know the Seiberg-Witten transform of that solution to an appropreate order in α′ [27], then from the above equivalence, that transform is actually a corresponding solution to the non-linear BPS equation (13). 
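As an aside on conventions: all of the (anti-)self-duality statements above rely on the Euclidean Hodge dual ∗Fij = ǫijklFkl/2 and on splitting an arbitrary field strength into self-dual and anti-self-dual parts. The following numerical sketch, which is our own illustration and is independent of the specific monopole solution discussed here, checks that decomposition.

```python
import numpy as np
from itertools import permutations

# Levi-Civita symbol in four Euclidean dimensions.
eps = np.zeros((4, 4, 4, 4))
for p in permutations(range(4)):
    eps[p] = int(round(np.linalg.det(np.eye(4)[list(p)])))  # sign of permutation

def hodge(F):
    """(*F)_ij = 1/2 eps_ijkl F_kl."""
    return 0.5 * np.einsum('ijkl,kl->ij', eps, F)

# Random antisymmetric "field strength".
rng = np.random.default_rng(1)
A = rng.normal(size=(4, 4))
F = A - A.T

F_plus = 0.5 * (F + hodge(F))    # self-dual part:      *F_plus  = +F_plus
F_minus = 0.5 * (F - hodge(F))   # anti-self-dual part: *F_minus = -F_minus
assert np.allclose(hodge(F_plus), F_plus)
assert np.allclose(hodge(F_minus), -F_minus)
```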
After diagonalization of the scalar field, the eigenvalues exhibits the configuration in which the single D-string suspended between the two parallel D3-branes is tilted [21] so that they preserve 1/4 supersymmetries in the bulk with the B-field, as shown in [27]. 4 (2) U(2) instanton. It is known that the small instanton singularity of the anti-self-dual instanton moduli space is resolved if we introduce self-dual background θ [20, 32]. However, this resolution does not occur in the case of anti-self-dual θ. This fact may be observed from the non-linear BPS equations and their solutions. First let us analyze the anti-self-dual B-field (note the relation (17)) B12 + B34 = 0. Since the equation BijΓijǫ = 0 (18) holds for ǫ which is involved with the preserved supersymmetries for the anti-self-dual gauge field configuration, the whole η terms vanish. Thus the linear BPS equation is not corrected, and so the configuration is not affected by the B-field: F + ∗F = 0. (19) This is consistent with the observation that the linear BPS equation F + ∗F = 0 may solve fully α′-corrected non-Abelian effective theory, as it is true in the case of Abelian theory [33]. Since now the self-duality is the same as the B-field orientation, we can subtract the B-field from the both sides of the above equation and then obtain (19). This result may be related to the observation in [34] that for the large instanton radius the commutative description of the non-commutative U(2) instanton [25] does not seem to have θ dependence§. From the non-commutative side, we substitute the Seiberg-Witten map to the non-commutative BPS equation (14), but then the order θ terms cancel with each other and we found the usual anti-self-dual equation (19). On the other hand, for the self-dual B-field background B12 = B34, there exists a correc- tion, which is expected from the resolution of the small instanton singularity. One can solve the non-linear BPS equation (13) using the general ansatz [20] in this background for a radial function h(r). Substituting the lowest order solution A4 i = Bijxjh(r) F (0)a ij = 4ρ2 (r2 + ρ2)2 ηaij, we obtain a differential equation for h(r) and the solution is h(r) = 16π2 ρ4(3r2 + ρ2) r4(r2 + ρ2)3 . (20) (21) (22) §For the small value of ρ the gauge fields is not slowly-varying, the D-instanton charge distribution is corrected due to the derivative corrections to the Wess-Zumino term [35], thus we may not see any relation with [34]. 5 This is the first nontrivial correction to the anti-self-dual instanton. Since in this case the small instanton singularity must be resolved, we might be able to see it by computing the instanton charge distribution with this correction, but it turns out to be very small as ∼ B2ρ8/r16 compared to the original instanton density ∼ ρ4/r8. Therefore unfortunately we cannot see the change of the instanton radius caused by the introduction of the B-field. Acknowledgments: The author would like to thank T. Hirayama and W. Taylor for useful comments. This research was supported in part by Japan Society for the Promotion of Science under the Postdoctoral Research Program (# 02482), and the National Science Foundation under Grant No. PHY99-07949. References [1] D. J. Gross and E. Witten, “Superstring Modification of Einstein Equations”, Nucl. Phys. B277 (1986) 1. [2] A. A. Tseytlin, “Vector Field Effective Action in the Open Superstring Theory”, Nucl. Phys. B276 (1986) 391; Erratum – ibid. B291 (1987) 879. [3] E. Bergshoeff, M. Rakowski and E. 
Sezgin, “Higher Derivative Super Yang-Mills theo- ries”, Phys. Lett. B185 (1987) 371. [4] Y. Kitazawa, “Effective Lagrangian for Open Superstring from Five Point Function”, Nucl. Phys. B289 (1987) 599. [5] A. A. Tseytlin, “On non-abelian generalization of Born-Infeld action in string theory”, Nucl. Phys. B501 (1997) 41, hep-th/9701125. [6] A. Hashimoto and W. Taylor IV, “Fluctuation Spectra of Tilted and Intersecting D- branes from the Born-Infeld Action”, Nucl. Phys. B503 (1997) 193, hep-th/9703217. [7] S. Gonorazky, F. A. Schaposnik and G. Silva, “Supersymmetric Non-Abelian Born- Infeld Theory”, Phys. Lett. B449 (1999) 187, hep-th/9812094. [8] F. Denef, A. Sevrin and J. Troost, “Non-Abelian Born-Infeld versus String Theory”, Nucl. Phys. B581 (2000) 135, hep-th/0002180. [9] S. V. Ketov, “N = 1 and N = 2 supersymmetric non-abelian Born-Infeld actions from superspace”, Phys. Lett. B491 (2000) 207, hep-th/0005265. 6 [10] L. Cornalba, “On the General Structure of the Non-Abelian Born-Infeld Action”, hep-th/0006018. [11] S. Terashima, “The Non-Abelian Born-Infeld Action and Noncommutative gauge the- ory”, JHEP 0007 (2000) 033, hep-th/0006058. [12] A. Refolli, N. Terzi and D. Zanon, “Non abelian N=2 supersymmetric Born Infeld action”, Phys. Lett. B486 (2000) 337, hep-th/0006067. [13] E. A. Bergshoeff, M. de Roo and A. Sevrin, “Non-abelian Born-Infeld and kappa- symmetry”, hep-th/0011018. [14] A. Sevrin, J. Troost and W. Troost, “The non-abelian Born-Infeld action at order F 6”, Nucl. Phys. B603 (2001) 389, hep-th/0101192. [15] M. Cederwall, B. E. W. Nilsson and D. Tsimpis, “The structure of maximally supersym- metric Yang-Mills theory: constraining higher-order corrections”, JHEP 0106 (2001) 034, hep-th/0102009. [16] A. Refolli, A. Santambrogio, N. Terzi and D. Zanon, “F 5 contributions to the non- abelian Born Infeld action from a supersymmetric Yang-Mills five-point function”, hep-th/0105277. [17] M. Cederwall, B. E. W. Nilsson and D. Tsimpis, “D = 10 Super-Yang-Mills at O(α′ 2)”, hep-th/0104236. [18] E. A. Bergshoeff, A. Bilal, M. de Roo and A. Sevrin, “Supersymmetric non-abelian Born-Infeld revisited”, hep-th/0105274. [19] A. Bilal, “Higher-Derivative Corrections to the Non-Abelian Born-Infeld Action”, hep-th/0106062. [20] N. Seiberg and E. Witten, “String theory and noncommutative geometry”, JHEP 9909 (032) 1999, hep-th/9908142. [21] A. Hashimoto and K. Hashimoto, “Monopoles and Dyons in Non-Commutative Geom- etry”, JHEP 9911 (1999) 005, hep-th/9909202. [22] D. Bak, “Deformed Nahm Equation and a Noncommutative BPS Monopole”, Phys. Lett. B471 (1999) 149, hep-th/9910135. [23] K. Hashimoto, H. Hata and S. Moriyama, “Brane Configuration from Monopole Solution in Non-Commutative Super Yang-Mills Theory”, JHEP 9912 (1999) 021, hep-th/9910196. 7 [24] S. Goto and H. Hata, “Noncommutative Monopole at the Second Order in θ”, Phys. Rev. D62 (2000) 085022, hep-th/0005101. [25] K. Furuuchi, “Dp-D(p+4) in Noncommutative Yang-Mills”, JHEP 0103 (2001) 033, hep-th/0010119. [26] D. J. Gross and N. Nekrasov, “Solitons in Noncommutative Gauge Theory”, JHEP 0103 (2001) 044, hep-th/0010090. [27] K. Hashimoto and T. Hirayama, “Branes and BPS Configurations of Non-Commutative /Commutative Gauge Theories”, Nucl. Phys. B587 (2000) 207, hep-th/0002090. [28] M. Marino, R. Minasian, G. Moore and A. Strominger, “Nonlinear Instantons from Supersymmetric p-Branes”, JHEP 0001 (2000) 005, hep-th/9911206. [29] S. Terashima, “Instantons in the U(1) Born-Infeld Theory and Noncommutative Gauge Theory”, Phys. Lett. 
B477 (2000) 292, hep-th/9911245. [30] S. Moriyama, “Noncommutative Monopole from Nonlinear Monopole”, Phys. Lett. B485 (2000) 278, hep-th/0003231. [31] S. Moriyama, “Noncommutative/Nonlinear BPS Equations without Zero Slope Limit”, JHEP 0008 (2000) 014, hep-th/0006056. [32] N. Nekrasov and A. Schwarz, “Instantons on noncommutative R4, and (2,0) superconfor- mal six dimensional theory”, Commun. Math. Phys. 198 (1998) 689, hep-th/9802068. [33] L. Thorlacius, “Born-Infeld String as a Boundary Conformal Field Theory”, Phys. Rev. Lett. 80 (1998) 1588, hep-th/9710181. [34] K. Hashimoto and H. Ooguri, “Seiberg-Witten Transforms of Noncommutative Soli- tons”, hep-th/0105311, to be published in Phys. Rev. D. [35] N. Wyllard, “Derivative corrections to D-brane actions with constant background fields”, Nucl. Phys. B598 (2001) 247, hep-th/0008125. 8
9 1 0 2 g u A 2 ] V C . s c [ 1 v 0 2 7 0 0 . 8 0 9 1 : v i X r a L2G Auto-encoder: Understanding Point Clouds by Local-to-Global Reconstruction with Hierarchical Self-Attention Xinhai Liu School of Software, Tsinghua University & Beijing National Research Center for Information Science and Technology (BNRist) Beijing, China [email protected] Zhizhong Han Department of Computer Science, University of Maryland College Park, USA [email protected] Xin Wen School of Software, Tsinghua University & Beijing National Research Center for Information Science and Technology (BNRist) Beijing, China [email protected] Yu-Shen Liu∗ School of Software, Tsinghua University & Beijing National Research Center for Information Science and Technology (BNRist) Beijing, China [email protected] Matthias Zwicker Department of Computer Science, University of Maryland College Park, USA [email protected] ABSTRACT Auto-encoder is an important architecture to understand point clouds in an encoding and decoding procedure of self reconstruc- tion. Current auto-encoder mainly focuses on the learning of global structure by global shape reconstruction, while ignoring the learn- ing of local structures. To resolve this issue, we propose Local-to- Global auto-encoder (L2G-AE) to simultaneously learn the local and global structure of point clouds by local to global reconstruction. Specifically, L2G-AE employs an encoder to encode the geome- try information of multiple scales in a local region at the same time. In addition, we introduce a novel hierarchical self-attention mechanism to highlight the important points, scales and regions at different levels in the information aggregation of the encoder. Simul- taneously, L2G-AE employs a recurrent neural network (RNN) as decoder to reconstruct a sequence of scales in a local region, based on which the global point cloud is incrementally reconstructed. Our outperforming results in shape classification, retrieval and upsam- pling show that L2G-AE can understand point clouds better than state-of-the-art methods. CCS CONCEPTS • Computing methodologies → Computer vision; Shape rep- resentations; • Information systems → Information retrieval. ∗Corresponding author. This work was supported by National Key R&D Program of China (2018YFB0505400) and NSF (award 1813583). Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]. MM ’19, October 21–25, 2019, Nice, France © 2019 Association for Computing Machinery. ACM ISBN 978-1-4503-6889-6/19/10. . . $15.00 https://doi.org/10.1145/3343031.3350960 KEYWORDS point clouds; auto-encoder; unsupervised learning; hierarchical attention; interpolation layer; recurrent neural network ACM Reference Format: Xinhai Liu, Zhizhong Han, Xin Wen, Yu-Shen Liu, and Matthias Zwicker. 2019. L2G Auto-encoder: Understanding Point Clouds by Local-to-Global Reconstruction with Hierarchical Self-Attention. In Proceedings of the 27th ACM International Conference on Multimedia (MM’19), Oct. 21–25, 2019, Nice, France. ACM, New York, NY, USA, 9 pages. 
https://doi.org/10.1145/3343031. 3350960 1 INTRODUCTION In recent years, point clouds have attracted increasing attention due to the popularity of various depth sensors in different applications. Not only the traditional methods, deep neural networks have also been applied to point cloud analysis and understanding. However, it remains a challenge to directly learn from point clouds. Different from 2D images, point cloud is an irregular 3D data which makes it difficult to directly use traditional deep learning framework, e.g., traditional convolution neural network (CNN). The traditional CNN usually requires some fixed spatial distribution around each pixel so as to facilitate the convolution. One way to alleviate the problem is to voxelize a point cloud into voxels and then apply 3D Cov- Nets. However, because of the sparsity of point clouds, it leads to resolution-loss and explosive computation complexity, which sacrifices the representation accuracy. To address above challenges, PointNet [28] has been proposed to directly learn shape representations from raw point sets. Along with the availability of directly learning from point clouds by deep learning models, auto-encoder (AE) has become an vital architecture of the involved neural networks. Current AE focuses on the learning of the global structure of point clouds in the encoding and decoding procedure. However, current AE structure is still limited by learning the local structure of point clouds, which tends to be an important piece of information for point cloud understanding. Figure 1: Illustration of our local to global auto-encoder architecture. In the encoder, multi-scale areas is established in each local region around the sampled centroids in (a). And a hierarchical feature abstraction is employed to abstract the global feature of point clouds with self-attention in (b). The learned global feature is applied to shape classfication and retrieval applications. In the decoder, local areas and the global point cloud are reconstructed by hierarchical feature decoding with the interpolation layer, the RNN layer and the FC layer in (c)(d). To simultaneously learn global and local structure of point clouds, we propose a novel auto-encoder called Local-to-Global auto-encoder (L2G-AE). Different from traditional auto-encoder, L2G-AE lever- ages a local region reconstruction to learn the local structure of a point cloud, based on which the global shape is incrementally reconstructed for the learning of the global structure. Specifically, the encoder of L2G-AE can hierarchically encode the information at point, scale and region levels, where a novel hierarchical self- attention is introduced to highlight the important elements in each level. The encoder further aggregates all the information extracted from the point cloud into a global feature. In addition, L2G-AE employs a RNN-based decoder to decode the learned global fea- ture into a sequence of scales in each local region. And based on scale features, the global point cloud is incrementally reconstructed. L2G-AE leverages this local to global reconstruction to facilitate the point cloud understanding, which finally enables local and global reconstruction losses to train L2G-AE. Our key contributions are summarized as follows. • We propose L2G-AE to enable the learning of global and local structures of point clouds in an auto-encoder architecture, where the local structure is very important in learning highly discriminative representations of point clouds. 
• We propose hierarchical self-attention to highlight important elements in point, scale and region levels by learning the correlations among the elements in the same level. • We introduce RNN as decoding layer in an auto-encoder architecture to employ more detailed self supervision, where the RNN takes the advantage of the ordered multi-scale areas in each local region. 2 RELATED WORK Point clouds is a fundamental type of 3D data format which is very close to the raw data of various 3D sensors. Recently, applica- tions of learning directly on point clouds have received extensive attention, including shape completion [33], autonomous driving [27], 3D object detection [32, 39, 47], recognition and classification [5, 23, 24, 28, 29, 31, 35, 37, 38, 42], scene labeling [22], upsampling [41, 44], dense labeling and segmentation [34] , etc. Due to the irregular property of point cloud and the inspiring performances of 2D CNNs on large-scale image repositories such as ImageNet [4], it is intuitive to rasterize point clouds into 3D voxels and then apply 3D CNNs. Some studies [7, 27, 47] represent each voxel with a binary value which indicates the occupation of this location in space. The main problem of voxel-based methods is the fast growth of neural network size and computation complexity with the increasing of spatial resolution. To alleviate this problem, some improvements [25] have been proposed to explore the data sparsity of point clouds. However, when dealing with point clouds with huge number of points, the complexity of the neural network is still unacceptable. Recently, deep neural networks work quite effectively on the raw 3D point clouds. Different from learning from readered views [6, 12–15, 17] 2D meshes [8] or 3D voxels [9–11], PointNet [28] is the pioneer study which directly learns the representation for point clouds by computing features for each point individually and aggregating these features with max-pool operation. To capture the contextual information of local patterns inside point clouds, PointNet++ [29] uses sampling and grouping operations to extract features from point clusters hierarchically. Similarly, several recent studies [21, 30] explores indexing structures, which divides the input point cloud into leaves, and then aggregates node features from leaves to the root. Inspired by the convolution operation, recent methods [24, 35, 38] investigate well-designed CNN-like operations to aggregate points in local regions by building local connections with k-neareat-neighbors (kNN). Capturing the context information inside local regions is very important for the discriminative ability of the learned point cloud representations. KC-Net [31] employs a kernel correlation layer and a graph pooling layer to capture the local patterns of point clouds. ShapeContextNet [37] extends 2D Shape Context [2] to the 3D, which divides a local region into small bins and aggregates the bin features. Point2Seqeuce [26] employs an attention-based sampling&searchingself-attentionself-attentionself-attention1024 !"#$!%&$'()&interpolationRNNFCFC(a) multi-scale establishment(b) hierarchical feature abstractionskip link*+,-./35/,-./36789:/3.;<=8+,/(c) hierarchical feature decoding(d) local to global reconstruction is always the farthest one from the rest points {pi1 , pi2 , · · · , pi j−1 }. Compared to other sampling method, such as random sampling, FPS can achieve a better coverage of the entire point cloud with the given same number of centroids. 
As shown in Figure 2, around each sampled centroid, T different scale local areas are established con- tinuously by kNN searching with {K1, K2, · · · , KT } nearest points, respectively. An alternative searching method is ball query [29] which selects all points with a radius around the centroid. However, it is difficult for ball query to ensure the information inside local regions, which is sensitive to the sparsity of the input point clouds. Figure 3: Self-attention module. The input of this module is a D1 × D2 feature map and the output is another D1 × (D2 +C) feature map, where C is a parameter. 3.2 Hierarchical Self-attention In current work of learning on point clouds, Multi-Layer-Perceptron (MLP) layer is widely applied to integrate multiple features. Tradi- tional MLP layer first abstracts each feature into higher dimension individually and then aggregates these features by a concise max pooling operation. However, these two simple operations can hardly encode the correlation between feature vectors in the feature space. Inspired by the self-attention machanism in [45], the attention machanism is suitable for improving the traditional MLP by learn- ing the correlation between features. In this work, we propose a self-attention module to make up the defects of the MLP layer with an attention mechanism. Here, self-attention refers to learn the correlation among features in the same level. Different from the raw self-attention, we enforce a hierarchical feature extraction architecture with hierarchical self-attention in the encoder. There are three different levels inside the encoder, including point level, scale level, and region level. At each level, we introduce a self-attention module to learn self-attention weights by mining the correlations among the corresponding feature elements. Consequently, three self-attention modules are designed to prop- agate features from the lower level to the higher level. Supposed the input of the self-attention module is a feature map x ∈ RD1×D2 , where D1, D2 are the dimensions of the feature map. Therefore, D1, D2 are equal to Kt , 3 in the point level, equal to T , D in the scale level and equal to M, D in the region level, respectively. As depicted in Figure 3, the feature map x is first transformed into two feature spaces f and д to calculate the attention below, Figure 2: A multi-scale example inside a local region of an airplane point cloud, where there are four scales ar- eas [A1, A2, A3, A4] with different colors around the centroid point (red). sequence to sequence architecture to encode the multi-scale area features inside local regions. In order to alleviate the dependence on the labeled data, some studies have performed unsupervised learning for point clouds. FoldingNet [40] proposes a folding operation to deform a canonical 2D grid onto the suface of a point cloud. 3D-PointCapsNet [46] em- ploys a dynamic routing scheme in the reconstruction of input point clouds. However, it is difficult for these methods to capture the local patterns of point clouds. Similar to FoldingNet, PPF-FoldNet [3] also learns local descriptors on point cloud with a folding operation. LGAN [1] proposes an auto-encoder based on PointNet and extends the decoder module to the point cloud generation application with GAN. In this work, we propose a novel auto-encoder architecture to learn representations for point clouds. On the encoder side, an hierarchical self-attention mechanism is applied to embedding the correlation among features in each level. 
And on the decoder side, an interpolation layer and a RNN decoding layer are engaged to reconstruct multi-scale areas inside local regions. After building local areas, the global point cloud is generated by a fully-connected (FC) layer which acts as a down sampling function. 3 METHOD Now we introduce the L2G-AE in detail, where the structure is illus- trated in Figure 1. The input of the encoder is an unordered point set P = {p1, p2, · · · , pN } with N (N = 1024) points. Each point in the point set is composed of a 3D coordinate (x, y, z). L2G-AE first establishes multi-scale areas At (t ∈ [1,T ]) in each local region around the sampled points. Then, a hierarchical feature abstraction is enforced to obtain the global features of input point clouds with self-attentions. In the decoder, we simultaneously reconstruct local scale areas and global point clouds by hierarchical feature decoding. ′ The output of L2G-AE is the reconstructed local areas A t and the reconstructed P with same number of points to P. ′ 3.1 Multi-scale Establishment To capture fine-grained local patterns of point clouds, we first es- tablish multi-scale areas in each local region, which is similar to PointNet++ [29] and Point2Sequence [26]. Firstly, a subset {pi1 , pi2 , · · · , piM } of the input points is selected as the centroid of local re- gions by iterative farthest point sampling (FPS). The latest point pi j A1A2A3A4centroid×1x1 convtranspose×attention map+MLPinput feature mapsoftmax (!)"(!)#(!)$%×$&skip link$%×'$%×'$%×'concatenate$%×($&+')! where f (x) = Wf x, д(x) = Wдx, βj,i = exp(si j ) i=1 exp(si j ) (cid:205)D1 , where si j = f (xi )T д(x j), (1) and βj,i evaluates the attention degree which the model pays to the ith location when synthesizing the jth feature vector. Then the ) ∈ RD1×D2 , where attention result is r = (r1, r2, · · · , r j, · · · , rD1 r j = D1(cid:213) i=1 βj,ih(xi ), where h(xi ) = Whxi . (2) In above formulation, Wf ,Wд,Wh ∈ RD2×C are learned weight matrices, which are implemented as 1 × 1 convolutions. We use C = M/8 in the experiments. In addition, inspired by the skip link operation in ResNet[18] and DenseNet [20], we further concatenate the result of the atten- tion mechanism with the input feature matrix. Therefore, the final output of the self-attention module is given by oi = xi ⊕ ri , (3) where ⊕ is the concatenation operation. This allows the network to rely on the cues among the feature vectors. To aggregate the features with correlation information, a MLP layer and a max pooling operation are employed to integrate the multiple features. In particular, the first self-attention module ag- gregates the points in a scale to a D-dimensional feature vector. The second one encodes the multi-scale features in a region into a D-dimensional feature. The final one integrates features of all local regions on a point cloud into a 1024-dimensional global feature. Therefore, the encoder hierarchically abstracts point features from the levels of point, scale and region to a global representation of the input point cloud. 3.3 Interpolation Layer The target of the decoder is to generate the points of the local areas and entire points. Previous approaches [1, 3, 40] usually use simple fully-connected (FC) layers or MLP layers to build the decoder. However, the expressive ability of the decoder is largely limited without considering the relationship among features. In this work, we propose a progressive decoding way which can be regarded as a reverse process of the encoding. 
The first step is to generate local region features from the global feature. To propagate the global feature д to region features, a simple interpolation operation is first engaged in the decoder. The local region feature li is calculated by li = c (pi − p0)2 д, i ∈ [1, M], (4) where c (c = 10−10) is a constant. Here, p0 = (0, 0, 0) is the centroid of the input point cloud after the normalization processing. And pi is the centroid point of the corresponding local region. By the simple interpolation operation, the spatial distribution information of local region can be integrated to facilitate the feature decoding. The interpolated local region features are then concatenated with skip linked local region features from the encoder. The concatenated features are passed through another MLP layer into a M × D feature matrix. Figure 4: The decoding process of the RNN layer. 3.4 RNN Layer Given the feature of local regions, we want to decode the scale level features. Due to the multi-scale setting, the features of different scales in a local region can be regarded as a feature sequence with length T . As we all know that recurrent neural network [19] has shown excellent performances in processing sequential data. Thus, a RNN decoding layer is employed to generate the multi-scale area features. The decoding process is shown in Figure 4. We first replicate the local region feature li for T times, and the replicated local region features are feed into the RNN layer by ht = f (ht −1, l t i ), t ∈ [1,T ], (5) where f is a non-linear activation function and t is the index of RNN step. Therefore, the predicted t th area feature at can be calculated by at = Wθ ht . (6) Here, Wd is a learnable weight matrix. To generate the points inside each local area, several FC layers are adopted to reconstruct the ′ points. The local area A t is reconstructed by ′ A t = Wθt at + bθt , (7) where Wθt , bθt are weights of the FC layer. Based on the recon- structed local areas, another FC layer is applied to incrementally reconstruct the entire point cloud. All reconstructed areas are con- catenated and then passed through the FC layer by T ] + b. Here, ⊕ represents the concatenation operation. 2 ⊕ · · · ⊕ A P = W [A 1 ⊕ A ′ ′ ′ (8) 3.5 Loss Function We propose a new loss function to train the network in an end-to- end fashion. There are two parts in the loss function, local scale reconstruction and global point cloud reconstruction, respectively. As mentioned earlier, we should encourage accurate reconstruction of local areas and the global point cloud at the same time. Suppose At is the t th scale area in the multi-scale establishment subsection, ′ then, the local reconstruction error for A t is measured by the well- known Chamfer distance, Llocal = dCH (At , A ′ t ) = T (cid:213) ( 1 |At | t =1 + 1 ′ t | |A (cid:213) pi ∈At (cid:213) p ′ ′ i ∈A t ∥pi − p ′ i ∥2 min ′ ′ i ∈A p t ∥pi − p ′ i ∥2), min pi ∈At (9) RNN Chamfer lossRNN RNN RNN (cid:127)ilocal region feature:Llocal+++ Similarly, let the input point set be P and the reconstructed point set be P . The global reconstruction error can be denoted by ′ Lдlobal = dCH (P, P ′ ) = 1 |P | + 1 |P ′ | (cid:213) pi ∈P (cid:213) ′ p ′ i ∈P ∥pi − p ′ i ∥2 min ′ ′ i ∈P p ∥pi − p ′ i ∥2. min pi ∈P (10) Altogether, the network is trained end-to-end by minimizing the following joint loss function L = Llocal where γ (γ = 1) is the proportion of two part errors. 
4 EXPERIMENTS
In this section, we first investigate how some key parameters affect the performance of L2G-AE in the shape classification task on ModelNet10 [36]. Then, an ablation study is conducted to show the effectiveness of each module in L2G-AE. Finally, we further evaluate the performance of L2G-AE in multiple applications, including 3D shape classification, 3D shape retrieval and point cloud upsampling.

4.1 Network Configuration
In L2G-AE, we first sample M = 256 points as the centroids of local regions by FPS. Then, around each centroid, a kNN search selects T = 4 scale areas with [K1 = 16, K2 = 32, K3 = 64, K4 = 128] points inside each area. In the multi-level feature propagation process, we initialize the feature dimension C = M/8 = 32 and D = 256. The encoder learns a 1024-dimensional global feature for the input point cloud through hierarchical feature extraction. Similarly, the decoder hierarchically reconstructs local scales and the global point cloud. In the RNN decoding layer, we adopt LSTM as the default RNN cell with hidden state dimension h = D = 256. In the experiments, we train our network on an NVIDIA GTX 1080Ti GPU using the Adam optimizer with an initial learning rate of 0.0001 and a batch size of 8. The learning rate is decayed by a factor of 0.3 every 20 epochs.

4.2 Parameters
All experiments on parameter comparison are evaluated under ModelNet10. ModelNet10 contains 4,899 CAD models from 10 categories and is split into 3,991 models for training and 908 for testing. For each model, we adopt 1,024 points which are uniformly sampled from the mesh faces and normalized into a unit ball before being fed into the network. During the training process, the loss function keeps decreasing and stabilizes around the 180th epoch. To acquire the accuracies on ModelNet10, we train a linear SVM on the global features obtained by the auto-encoder. Specifically, the OneVsRest strategy is adopted with a linear SVM as the base classifier.

We first explore the number of sampled points M, which determines the distribution of local regions inside point clouds. In this experiment, we keep the network settings as depicted in the network configuration and vary the number of sampled points M from 128 to 320. The results are shown in Table 1, where the instance accuracies on the ModelNet10 benchmark have a tendency to rise first and then fall.

Table 1: The effects of the number of sampled points M under ModelNet10.
M        128    192    256    320
Acc (%)  93.83  94.38  95.37  93.94

This comparison implies that L2G-AE can effectively extract the contextual information in point clouds by multi-level feature propagation, and M = 256 is an optimal choice which can well cover the input point clouds without excessive redundancy. To inspect the reconstructed results intuitively, Figure 5 shows the reconstructed point clouds with different numbers of sampled points. According to the Chamfer distances, L2G-AE can also reconstruct the input point cloud well as the number of sampled points varies.

Figure 5: The reconstructed results with different sampled points, where CD represents the Chamfer distance between the ground-truth and the reconstructed point cloud (M = 128: CD = 0.003529; M = 192: CD = 0.003376; M = 256: CD = 0.003118; M = 320: CD = 0.003510).

Keeping the sampled points M = 384, we investigate the key parameter C, the feature dimension inside the self-attention modules. To unify this parameter, we keep the same dimension C at the different semantic levels. We change the default C = 32 to 16 and 64, respectively. As shown in Table 2, L2G-AE achieves the best performance when the feature dimension C is 32.
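The transfer protocol described above (a linear SVM with a one-vs-rest strategy trained on the frozen 1024-dimensional global features) can be sketched as follows; the SVM hyperparameters are not specified in the text, so scikit-learn defaults are assumed.

```python
import numpy as np
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import LinearSVC

def svm_accuracy(train_feats, train_labels, test_feats, test_labels):
    # Fit one linear SVM per class on the auto-encoder's global features
    # and report instance accuracy on the test split.
    clf = OneVsRestClassifier(LinearSVC())
    clf.fit(train_feats, train_labels)
    return float(np.mean(clf.predict(test_feats) == test_labels))
```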
Finally, we show the effects of the feature dimension D of local areas and of the global feature dimension D_global. These dimensions are varied as shown in Table 3 and Table 4. Neither the largest nor the smallest setting is the best: L2G-AE obtains better performances when D and D_global are set to 256 and 1024, respectively. There is a trade-off between the network complexity and the expressive ability of L2G-AE.

Table 2: The effects of the feature dimension C of the self-attention module under ModelNet10.
C        16     32     64
Acc (%)  93.94  95.37  94.16

Table 3: The effects of the local feature dimension D on ModelNet10.
D        128    256    512
Acc (%)  93.72  95.37  93.28

Table 4: The effects of the global feature dimension D_global under ModelNet10.
D_global  512    1024   2048
Acc (%)   94.16  95.37  93.94

4.3 Ablation Study
To quantitatively evaluate the effect of the self-attention module, we show the performances of L2G-AE under five settings: with the point-level self-attention module only (PL), with the area-level self-attention module only (AL), with the region-level self-attention module only (RL), with all self-attention modules removed (NSA), and with all self-attention modules (ASA). As shown in Table 5, the self-attention module is effective in learning highly discriminative representations of point clouds by capturing the correlation among feature vectors. The results with only one self-attention module outperform the results without any self-attention module, and we achieve the best performance when the three self-attention modules work together. The performance of the self-attentions is affected by the discriminative ability of the features. At the area level, the features of the areas in the same region are similar, since there are only four areas, which makes the self-attention at the area level contribute the least among all three self-attentions. In contrast, at the point level and the region level, the features of points or regions change a lot, so these self-attentions contribute more. From our observation, the results of PL and RL are coincidentally equal in the experiments.

Table 5: The effects of the self-attention module on ModelNet10.
Metric   PL     AL     RL     NSA    ASA
Acc (%)  94.16  94.05  94.16  93.72  95.37

After exploring the self-attention module, we also discuss the contributions of the two loss functions L_local and L_global. In Table 6, the results with the local loss only (Local), the global loss only (Global) and the two losses together (Local+Global) are listed. The local loss function is very important in capturing local patterns of point clouds, and the two loss functions together can further enhance the classification performance of our neural network. In addition, Figure 6 shows the reconstruction results of L2G-AE with only the local loss and only the global loss, respectively. From the reconstructed point clouds, L2G-AE can reconstruct the input point cloud with only part of the joint loss function. In particular, the local reconstruction result in Figure 6 is a dense point cloud.

Table 6: The effects of the two loss functions L_local and L_global on ModelNet10.
Metric   Local  Global  Local+Global
Acc (%)  92.84  94.71   95.37

Figure 6: The reconstruction results of L2G-AE with only the local loss and only the global loss.

4.4 Classification
In this subsection, we evaluate the performance of L2G-AE under the ModelNet10 and ModelNet40 benchmarks, where ModelNet40 contains 12,311 CAD models which are split into 9,843 for training and 2,468 for testing. Table 7 compares L2G-AE with state-of-the-art methods in the shape classification task on ModelNet10 and ModelNet40. The compared methods include PointNet [28], PointNet++ [29], ShapeContextNet [37], KD-Net [21], KC-Net [31], PointCNN [24], DGCNN [35], SO-Net [23], Point2Sequence [26], MAP-VAE [16], LGAN [1] and FoldingNet [40].

Table 7: The comparison of classification accuracy (%) under ModelNet10 and ModelNet40.
Methods            Supervised  MN40   MN10
PointNet           Yes         89.20  -
PointNet++         Yes         90.70  -
ShapeContextNet    Yes         90.00  -
Kd-Net             Yes         91.80  94.00
KC-Net             Yes         91.00  94.4
PointCNN           Yes         92.20  -
DGCNN              Yes         92.20  -
SO-Net             Yes         90.90  94.1
Point2Sequence     Yes         92.60  95.30
MAP-VAE            No          90.15  94.82
LGAN               No          85.70  95.30
LGAN(MN40)         No          87.27  92.18
FoldingNet         No          88.40  94.40
FoldingNet(MN40)   No          84.36  91.85
Our                No          90.64  95.37

L2G-AE significantly outperforms all the unsupervised competitors under ModelNet10 and ModelNet40. In particular, L2G-AE achieves an accuracy of 95.37%, which is even higher than that of the supervised methods under ModelNet10. LGAN [1] and FoldingNet [40] also show good performances under ModelNet10 and ModelNet40, but this is because these methods are trained on a version of ShapeNet55 that contains more than 57,000 3D shapes. However, this version of the ShapeNet55 dataset is not available for public download from the official website. Therefore, we train all these methods under ModelNet40 for a fair comparison.

4.5 Retrieval
L2G-AE is further evaluated in the shape retrieval task under ModelNet10 and compared with other unsupervised methods of learning on point clouds. The compared results include two state-of-the-art unsupervised methods for point clouds, i.e., LGAN [1] and FoldingNet [40]. The target of shape retrieval is to obtain the shapes relevant to a query from a collection. In these experiments, the 3D shapes in the test set are used as queries to retrieve the remaining shapes in the same set, and mean Average Precision (mAP) is used as the metric. As shown in Table 8, our results outperform all the compared results under ModelNet10, which shows that L2G-AE is effective in improving the performance of unsupervised shape retrieval on point clouds. The PR curves under ModelNet10 are also compared in Figure 7, which intuitively shows the performances of the three methods.

Table 8: The comparison of retrieval in terms of mAP under ModelNet10.
Methods   LGAN   FoldingNet  Our
mAP (%)   49.94  53.42       67.81

Figure 7: The comparison of PR curves for retrieval under ModelNet10.
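The retrieval protocol of this subsection can be reproduced from the global features alone; a minimal sketch is shown below, where the use of Euclidean distance for ranking is an assumption, since the distance measure is not stated explicitly.

```python
import numpy as np

def retrieval_map(features, labels):
    """Shape retrieval as in Section 4.5: every test shape queries the remaining
    test shapes by distance in the global feature space, and mean Average
    Precision (mAP) is reported."""
    dists = np.linalg.norm(features[:, None, :] - features[None, :, :], axis=-1)
    aps = []
    for q in range(len(features)):
        order = np.argsort(dists[q])
        order = order[order != q]                        # exclude the query itself
        rel = (labels[order] == labels[q]).astype(float)
        if rel.sum() == 0:
            continue
        precision = np.cumsum(rel) / np.arange(1, len(rel) + 1)
        aps.append((precision * rel).sum() / rel.sum())  # average precision of this query
    return float(np.mean(aps))
```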
4.6 Unsupervised Upsampling for Point Clouds
Benefiting from the design of local-to-global reconstruction, L2G-AE can readily be applied to unsupervised point cloud upsampling. In the local reconstruction, a dense point cloud is obtained by reconstructing multiple local scales with overlapping. Therefore, it is convenient to produce the upsampling results by downsampling the dense local reconstruction with some unsupervised method, such as random sampling or farthest point sampling. As far as we know, L2G-AE is the first method which performs point cloud upsampling with deep neural networks in an unsupervised manner. To evaluate the performance of L2G-AE, we compare our method on relatively sparse inputs (625 points) with state-of-the-art supervised point cloud upsampling methods, including PU-Net [44] and EC-Net [43]. The target of upsampling is to generate a dense point cloud with 10,000 points. For PU-Net and EC-Net, the 16× results (10,000 points) are obtained from the inputs (625 points) in a supervised manner. Differently, L2G-AE first obtains the local reconstruction results and then downsamples them to 10,000 points. As shown in Table 9, the mean Chamfer distance (mCD) is used as the metric for the quantitative comparison with PU-Net (PU) and EC-Net (EC) under ModelNet10. Although the results of PU-Net and EC-Net are better than ours in some classes under ModelNet10, the most likely reason is that the ground truth is not visible to L2G-AE during training. In addition, the input point cloud with 625 points contains very limited information. Figure 8 shows some upsampled results of our L2G-AE.

Table 9: The quantitative comparison of 16× upsampling from 625 points under ModelNet10 (mean Chamfer distance, ×10^{-3}).
Class  bathtub  bed   chair  desk  dresser  monitor  n.stand  sofa  table  toilet
PU     1.01     1.12  0.82   1.22  1.55     1.43     1.77     1.13  0.69   1.39
EC     1.43     1.81  1.80   1.30  1.40     1.19     1.88     1.79  1.00   1.72
Our    1.74     1.46  1.58   2.08  2.04     1.61     1.86     1.67  1.86   2.10

Figure 8: Some upsampled results of L2G-AE.

4.7 Visualization
In this section, we show some important visualization results of L2G-AE. Firstly, some point clouds reconstructed by L2G-AE are listed with the ground truths, as shown in Figure 9. From the results, the reconstructed point clouds of L2G-AE are consistent with the ground truths.

Figure 9: Some reconstructed examples of L2G-AE.

Then, some visualizations of the attention maps inside the self-attention modules are provided to show the effect of attention in the hierarchical feature abstraction. There are three self-attention modules in the encoder, and we first visualize the attention map at the local region level. For an intuitive understanding, we directly attach the attention values to the centroids of the local regions and then show these centroids. By summing the attention map by column at the region level, the attention value of each centroid is calculated. For example, a 256 × 256 attention map is translated to a 256-dimensional attention vector when the number of sampled centroids is 256. Then, both the size and the color of the centroids are associated with the attention values, so that centroids with lighter colors and larger sizes indicate larger attention values. As depicted in Figure 10, we show some examples of the region-level attention. Figure 10 shows that the self-attention at the region level tends to focus on local regions at conspicuous locations such as edges, corners or protruding parts.

Figure 10: Some examples of the attention in the region level, where each subfigure represents a 3D object.
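The centroid-level attention values used for Figure 10 are obtained by a simple column-wise aggregation of the region-level attention map; a small NumPy sketch is given below, where the normalization and the exact size/color mapping are assumptions for illustration.

```python
import numpy as np

def centroid_attention_values(attention_map):
    """Sum a (256 x 256) region-level attention map by column to obtain one
    attention value per sampled centroid, then rescale to [0, 1]."""
    values = attention_map.sum(axis=0)
    return (values - values.min()) / (np.ptp(values) + 1e-8)

def marker_sizes_and_colors(values, base_size=5.0):
    # Larger and brighter markers indicate larger attention values,
    # consistent with the description above (mapping chosen for illustration).
    sizes = base_size * (1.0 + 4.0 * values)
    return sizes, values  # feed `values` to a colormap when plotting
```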
Similarly, we also show some examples of the scale-level attention in Figure 11 and of the point-level attention in Figure 12. In Figure 11, each image shows the 4 scale attention values around the 256 sampled centroids of a point cloud, and the color indicates the value of attention, where a large attention value corresponds to a bright color such as yellow. The results indicate that the network tends to focus on the 4th scale, which contains more information about local structures. In Figure 12, each row represents the 4 scale areas around a centroid. In different scale areas, the network focuses on different points inside the areas to capture the local patterns in the local region.

Figure 11: Some examples of the attention in the scale level. The abscissa represents the 4 scales [s1, s2, s3, s4] around each centroid in a point cloud and the ordinate indicates the index of the 256 centroids, where each subfigure represents a 3D object.

Figure 12: Some examples of the attention in the point level, where the four subfigures in each row represent the four scales of a local region.

5 CONCLUSIONS
In this paper, we propose a novel local-to-global auto-encoder framework for point cloud understanding in the shape classification, retrieval and point cloud upsampling tasks. In the encoder, a self-attention mechanism is employed to explore the correlation among features at the same level. In the decoder, an interpolation layer and an RNN decoding layer successfully reconstruct local scales and global point clouds hierarchically. Experimental results show that our method achieves competitive performances with state-of-the-art methods.

[27] Charles R Qi, Wei Liu, Chenxia Wu, Hao Su, and Leonidas J Guibas. 2017. Frustum PointNets for 3D Object Detection from RGB-D Data. In CVPR.
[28] Charles R Qi, Hao Su, Kaichun Mo, and Leonidas J Guibas. 2016. PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation. In CVPR.
[29] Charles Ruizhongtai Qi, Li Yi, Hao Su, and Leonidas J Guibas. 2017. PointNet++: Deep Hierarchical Feature Learning on Point Sets in a Metric Space. In NeurIPS. 5099–5108.
[30] Gernot Riegler, Ali Osman Ulusoy, and Andreas Geiger. 2017. OctNet: Learning Deep 3D Representations at High Resolutions. In CVPR, Vol. 3.
[31] Yiru Shen, Chen Feng, Yaoqing Yang, and Dong Tian. 2018. Mining Point Cloud Local Structures by Kernel Correlation and Graph Pooling. In CVPR, Vol. 4.
[32] M Simon, S Milz, K Amende, and HM Gross. 2018. Complex-YOLO: Real-Time 3D Object Detection on Point Clouds. arXiv preprint arXiv:1803.06199 (2018).
[33] David Stutz and Andreas Geiger. 2018. Learning 3D Shape Completion from Laser Scan Data with Weak Supervision. In CVPR.
[34] Yuan Wang, Tianyue Shi, Peng Yun, Lei Tai, and Ming Liu. 2018. PointSeg: Real-Time Semantic Segmentation Based on 3D LiDAR Point Cloud. arXiv preprint arXiv:1807.06288.
[35] Yue Wang, Yongbin Sun, Ziwei Liu, Sanjay E Sarma, Michael M Bronstein, and Justin M Solomon. 2018. Dynamic Graph CNN for Learning on Point Clouds. arXiv preprint arXiv:1801.07829.
[36] Zhirong Wu, Shuran Song, Aditya Khosla, Fisher Yu, Linguang Zhang, Xiaoou Tang, and Jianxiong Xiao. 2015. 3D ShapeNets: A Deep Representation for Volumetric Shapes. In CVPR. 1912–1920.
[37] Saining Xie, Sainan Liu, Zeyu Chen, and Zhuowen Tu. 2018. Attentional ShapeContextNet for Point Cloud Recognition. In CVPR. 4606–4615.
[38] Yifan Xu, Tianqi Fan, Mingye Xu, Long Zeng, and Yu Qiao. 2018. SpiderCNN: Deep Learning on Point Sets with Parameterized Convolutional Filters. In ECCV.
[39] Bin Yang, Wenjie Luo, and Raquel Urtasun. 2018. PIXOR: Real-Time 3D Object Detection from Point Clouds. In CVPR.
[40] Yaoqing Yang, Chen Feng, Yiru Shen, and Dong Tian. 2018. FoldingNet: Point Cloud Auto-Encoder via Deep Grid Deformation. In CVPR.
[41] Wang Yifan, Shihao Wu, Hui Huang, Daniel Cohen-Or, and Olga Sorkine- Hornung. 2019. Patch-Base Progressive 3D Point Set Upsampling. In CVPR. [42] Haoxuan You, Yifan Feng, Rongrong Ji, and Yue Gao. 2018. PVNet: A Joint Convolutional Network of Point Cloud and Multi-View for 3D Shape Recognition. In ACM Multimedia Conference. [43] Lequan Yu, Xianzhi Li, Chi-Wing Fu, Daniel Cohen-Or, and Pheng-Ann Heng. 2018. EC-Net: An Edge-Aware Point Set Consolidation Network. In ECCV. [44] Lequan Yu, Xianzhi Li, Chi-Wing Fu, Daniel Cohen-Or, and Pheng-Ann Heng. 2018. PU-Net: Point Cloud Upsampling Network. In CVPR. [45] Han Zhang, Ian Goodfellow, Dimitris Metaxas, and Augustus Odena. 2018. Self- Attention Generative Adversarial Networks. In NeurIPS. [46] Yongheng Zhao, Tolga Birdal, Haowen Deng, and Federico Tombari. 2019. 3D Point-Capsule Networks. In CVPR. [47] Yin Zhou and Oncel Tuzel. 2017. VoxelNet: End-to-End Learning for Point Cloud Based 3D Object Detection. In CVPR. REFERENCES [1] Panos Achlioptas, Olga Diamanti, Ioannis Mitliagkas, and Leonidas Guibas. 2018. Learning Representations and Generative Models for 3D Point Clouds. In ICML. [2] Serge Belongie, Jitendra Malik, and Jan Puzicha. 2001. Shape Context: A New Descriptor for Shape Matching and Object Recognition. In NeurIPS. 831–837. [3] Haowen Deng, Tolga Birdal, and Slobodan Ilic. 2018. PPF-FoldNet: Unsupervised Learning of Rotation Invariant 3D Local Descriptors. In ECCV. [4] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. 2009. Imagenet: A Large-Scale Hierarchical Image Database. In CVPR. [5] Aleksey Golovinskiy, Vladimir G Kim, and Thomas Funkhouser. 2009. Shape- Based Recognition of 3D Point Clouds in Urban Environments. In ICCV. 2154– 2161. [6] Zhizhong Han, Xinhai Liu, Yu-Shen Liu, and Matthias Zwicker. 2019. Parts4Feature: Learning 3D Global Features from Generally Semantic Parts in Multiple Views. In IJCAI. [7] Zhizhong Han, Zhenbao Liu, Junwei Han, ChiMan Vong, Shuhui Bu, and C.L.P. Chen. 2019. Unsupervised Learning of 3D Local Features from Raw Voxels Based on a Novel Permutation Voxelization Strategy. IEEE Transactions on Cybernetics 49, 2 (2019), 481–494. [8] Zhizhong Han, Zhenbao Liu, Junwei Han, Chi-Man Vong, Shuhui Bu, and C.L.Philip Chen. 2017. Mesh Convolutional Restricted Boltzmann Machines for Unsupervised Learning of Features With Structure Preservation on 3D Meshes. IEEE Transactions on Neural Network and Learning Systems 28, 10 (2017), 2268 – 2281. [9] Zhizhong Han, Zhenbao Liu, Junwei Han, Chi-Man Vong, Shuhui Bu, and Xuelong Li. 2016. Unsupervised 3D Local Feature Learning by Circle Convolutional Restricted Boltzmann Machine. IEEE Transactions on Image Processing 25, 11 (2016), 5331–5344. [10] Zhizhong Han, Zhenbao Liu, Chi-Man Vong, Yu-Shen Liu, Shuhui Bu, Junwei Han, and CL Philip Chen. 2017. BoSCC: Bag of Spatial Context Correlations for Spatially Enhanced 3D Shape Representation. IEEE Transactions on Image Processing 26, 8 (2017), 3707–3720. [11] Zhizhong Han, Zhenbao Liu, Chi-Man Vong, Yu-Shen Liu, Shuhui Bu, Junwei Han, and CL Philip Chen. 2018. Deep Spatiality: Unsupervised Learning of Spatially-Enhanced Global and Local 3D Features by Deep Neural Network with Coupled Softmax. IEEE Transactions on Image Processing 27, 6 (2018), 3049–3063. [12] Zhizhong Han, Honglei Lu, Zhenbao Liu, Chi-Man Vong, Yu-Shen Liua, Matthias Zwicker, Junwei Han, and CL Philip Chen. 2019. 3D2SeqViews: Aggregating Sequential Views for 3D Global Feature Learning by CNN with Hierarchical Attention Aggregation. 
IEEE Transactions on Image Processing (2019).
[13] Zhizhong Han, Mingyang Shang, Yu-Shen Liu, and Matthias Zwicker. 2019. View Inter-Prediction GAN: Unsupervised Representation Learning for 3D Shapes by Learning Global Shape Memories to Support Local View Predictions. In AAAI.
[14] Zhizhong Han, Mingyang Shang, Zhenbao Liu, Chi-Man Vong, Yu-Shen Liu, Matthias Zwicker, Junwei Han, and CL Philip Chen. 2018. SeqViews2SeqLabels: Learning 3D Global Features via Aggregating Sequential Views by RNN with Attention. IEEE Transactions on Image Processing 28, 2 (2018), 658–672.
[15] Zhizhong Han, Mingyang Shang, Xiyang Wang, Yu-Shen Liu, and Matthias Zwicker. 2019. Y^2Seq2Seq: Cross-Modal Representation Learning for 3D Shape and Text by Joint Reconstruction and Prediction of View and Word Sequences. In AAAI.
[16] Zhizhong Han, Xiyang Wang, Yu-Shen Liu, and Matthias Zwicker. 2019. Multi-Angle Point Cloud-VAE: Unsupervised Feature Learning for 3D Point Clouds from Multiple Angles by Joint Self-Reconstruction and Half-to-Half Prediction. In ICCV.
[17] Zhizhong Han, Xiyang Wang, Chi-Man Vong, Yu-Shen Liu, Matthias Zwicker, and CL Chen. 2019. 3DViewGraph: Learning Global Features for 3D Shapes from A Graph of Unordered Views with Attention. In IJCAI.
[18] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep Residual Learning for Image Recognition. In CVPR. 770–778.
[19] Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long Short-Term Memory. Neural Computation 9, 8 (1997), 1735–1780.
[20] Gao Huang, Zhuang Liu, Laurens Van Der Maaten, and Kilian Q Weinberger. 2017. Densely Connected Convolutional Networks. In CVPR. 4700–4708.
[21] Roman Klokov and Victor Lempitsky. 2017. Escape from Cells: Deep KD-Networks for the Recognition of 3D Point Cloud Models. In ICCV. 863–872.
[22] Hema S Koppula, Abhishek Anand, Thorsten Joachims, and Ashutosh Saxena. 2011. Semantic Labeling of 3D Point Clouds for Indoor Scenes. In NeurIPS. 244–252.
[23] Jiaxin Li, Ben M Chen, and Gim Hee Lee. 2018. SO-Net: Self-Organizing Network for Point Cloud Analysis. In CVPR. 9397–9406.
[24] Yangyan Li, Rui Bu, Mingchao Sun, Wei Wu, Xinhan Di, and Baoquan Chen. 2018. PointCNN: Convolution on X-Transformed Points. In NeurIPS.
[25] Yangyan Li, Soeren Pirk, Hao Su, Charles R Qi, and Leonidas J Guibas. 2016. FPNN: Field Probing Neural Networks for 3D Data. In NeurIPS. 307–315.
[26] Xinhai Liu, Zhizhong Han, Yu-Shen Liu, and Matthias Zwicker. 2019. Point2Sequence: Learning the Shape Representation of 3D Point Clouds with an Attention-Based Sequence to Sequence Network. In AAAI.
SPOR: A Comprehensive and Practical Evaluation Method for Compositional Generalization in Data-to-Text Generation Ziyao Xu, Houfeng Wang National Key Laboratory for Multimedia Information Processing, Peking University {xzyxzy,wanghf}@pku.edu.cn 4 2 0 2 l u J 5 1 ] L C . s c [ 8 v 0 5 6 0 1 . 5 0 4 2 : v i X r a Abstract Compositional generalization is an important ability of language models and has many differ- ent manifestations. For data-to-text generation, previous research on this ability is limited to a single manifestation called Systematicity and lacks consideration of large language models (LLMs), which cannot fully cover practical ap- plication scenarios. In this work, we propose SPOR, a comprehensive and practical evalua- tion method for compositional generalization in data-to-text generation. SPOR includes four aspects of manifestations (Systematicity, Pro- ductivity, Order invariance, and Rule learnabil- ity) and allows high-quality evaluation without additional manual annotations based on exist- ing datasets. We demonstrate SPOR on two different datasets and evaluate some existing language models including LLMs. We find that the models are deficient in various aspects of the evaluation and need further improve- ment. Our work shows the necessity for com- prehensive research on different manifestations of compositional generalization in data-to-text generation and provides a framework for eval- uation. The dataset and code are available at https://github.com/xzy-xzy/SPOR. 1 Introduction Data-to-text generation (Gatt and Krahmer, 2018) is an important task in natural language genera- tion (NLG). It aims to generate fluent and faithful text based on structured data input and is critical in many NLG systems, such as report generation (Wiseman et al., 2017), oriented dialogues (Mehta et al., 2022), etc. In data-to-text generation, struc- tured data input is compositional, i.e., it can be considered as a combination of elements formed according to certain rules. Therefore, in order to handle the practical data-to-text generation, the lan- guage models should have the ability to recombine previously learned elements with certain rules to map new inputs made up from these elements to their correct output (Hupkes et al., 2022), which is the so-called compositional generalization. Compositional generalization is an important ability of language models for many tasks. In se- mantic parsing and mathematical reasoning tasks, many different manifestations of this ability have been studied (Hupkes et al., 2020; Ontañón et al., 2022), such as systematicity (handle combinations unseen during training), productivity (extrapolate to longer sequences than those seen during training), etc. For compositional generalization in data-to- text generation, only systematicity receives atten- tion (Mehta et al., 2022), and research on other manifestations is lacking. The single systematic manifestation cannot fully cover practical applica- tion scenarios of compositional generalization and cannot comprehensively reflect this ability of lan- guage models in data-to-text generation. Although research on different manifestations of composi- tional generalization in data-to-text generation is necessary, there is currently no comprehensive eval- uation method to support such research. To solve this problem, we propose SPOR, a comprehensive and practical evaluation method for compositional generalization in a data-to-text gen- eration. 
Based on the manifestations of compo- sitional generalization mentioned in Hupkes et al. (2020), SPOR includes four aspects of composi- tional generalization in data-to-text generation: • Systematicity. The ability to handle data com- binations unseen during training. • Productivity. The ability to handle a larger amount of data within a sample than seen dur- ing training. • Order invariance. The ability to maintain the fidelity and proper data ordering of the out- put text when the input order of data in an unordered set is changed. • Rule learnability. The ability to actually learn and apply copy rule for generation, rather than memorize specific mappings. For each aspect, we propose the corresponding methods for dataset construction and evaluation. Based on existing datasets, we mainly perform repartition (Keysers et al., 2020) and element mod- ification to construct datasets for our evaluation. Overall, the evaluation method SPOR has the fol- lowing properties: • Necessity. The ability or property in each as- pect manifests compositional generalization and is required by the model for practical data- to-text generation. • High evaluation quality. For each aspect, the evaluation method can effectively evaluate the corresponding ability or property. • Low construction cost. Based on existing datasets, the dataset used for evaluation does not require additional manual annotation and can be constructed automatically. We demonstrate SPOR on two existing datasets for data-to-text generation and evaluate some ex- isting language models. Previous research on com- positional generalization in data-to-text genera- tion lacks consideration of large language models (LLMs) due to the lack of methods to directly fine- tune and apply LLMs to data-to-text generation in the past. Nowadays, advanced Parameter-Efficient Fine-Tuning such as LoRA (Hu et al., 2022) pro- vides the methods, and the consideration of LLMs becomes necessary. Therefore, we include some advanced LLMs in our evaluation to partially fill the gap in previous research. 2 Preliminaries In this section, we provide a brief description of the datasets that SPOR is demonstrated on, the evaluated models, and the evaluation metrics. 2.1 Datasets We demonstrate SPOR on two data-to-text genera- tion datasets, WebNLG (Gardent et al., 2017) and E2E (Novikova et al., 2017). Both contain (D, T ) pairs, where D is the input data and T is the text that verbalizes the data. Figure 1 shows examples of data-text pairs in WebNLG and E2E. WebNLG is a realistic multi-domain dataset. In WebNLG, D is an unordered set of 1~7 triples ⟨s, p, o⟩, where s, p, o represents subject, predi- cate, and object, respectively. We regard triples as Figure 1: Examples of data-text pairs in WebNLG (above) and E2E (below). data units for WebNLG. In the original WebNLG dataset, 10 domains are present in the training set and can be used in the evaluation. We select the latest version, WebNLG+ (Ferreira et al., 2020), which increases the number of available domains to 16 and contains more samples. For the samples used for testing, we retain only samples in which all data units appear in the training set. After pro- cessing, WebNLG+ contains 3,873 distinct triples, 13,211 samples in the training set, and 2,179 sam- ples in the test set. E2E is a dataset in the restaurant domain. In E2E, D is a name with an unordered set of 1~7 pairs (a, v), where a, v represents attribute and value, re- spectively. We regard attribute-value pairs as data units for E2E. 
We select the cleaned version (Dusek et al., 2019), which fixes the data to eliminate in- consistencies between the data and the text. We perform further filtering based on the clean version, retaining only samples in which all input values have matches in the text. After processing, E2E contains 7 distinct attributes, 45 distinct attribute- value pairs, 6,735 samples in the training set, and 1,635 samples in the test set. 2.2 Models We evaluate some smaller-sized, previously state- of-the-art language models in data-to-text genera- tion, including two encoder-decoder language mod- els T5-large (Raffel et al., 2020) and BART-large (Lewis et al., 2020), and one causal language model GPT-2-large (Radford et al., 2019). We also evalu- ate some advanced LLMs, including one encoder- decoder language model T5-11b (Chung et al., 2022), and two causal language models Mistral- 7b (Jiang et al., 2023) and Llama-2-13b (Touvron et al., 2023). For data input, we use the lineariza- tion method (Kale and Rastogi, 2020). Following previous work in data-to-text generation (Mehta et al., 2022), we use fine-tuning method and treat < Bananaman, starring, Bill Oddie >< Bill Oddie, birth place, Lancashire >Bill Oddie, who was born in Lancashire, starred in Bananaman.name[The Phoenix], eatType[pub], food[French],priceRange[more than £30], customer rating[5 out of 5]The Phoenix is a pub with French food. It has a customer ratingof 5 out of 5 and a price range of more than £30. Algorithm 1 Construction of Atom and the test set Input: original dataset S Output: Atom (A), test set (T ), Blocked (B) T, A, B ← ∅ while S ̸= ∅ do x ← randomly selected sample in S S ← S − {x} R ← {y | y ∈ A ∪ S ∧ y /∈ B ∧ |y ∩ x| = 1} if x ⊆ (cid:83) R and maxy∈A |y ∩ x| ≤ 1 then T ← T ∪ {x} S ← S − R A ← A ∪ R B ← B ∪ {y | y ∈ S ∧ |y ∩ x| > 1} end if end while 2020; Hupkes et al., 2020; Keysers et al., 2020), which refers to the ability to handle combinations of known elements that are not seen during training. In the data-to-text generation task, the elements re- fer to the data. Although a large corpus allows the model to see a large amount of data, the possible combinations of data are too numerous to be fully covered. In practical applications, the model will often see combinations of known data in the input that are not seen during training, so the ability to handle unseen combinations of data is important. In the systematicity evaluation, by reconstruct- ing the dataset, we allow the model to see all data units in the test set during training, but not any combination of them. In this case, the model needs systematicity to handle unseen combinations at test time. We use the model performance in this case as the systematicity metric. Based on the same test set, we also construct the case where the model can see combinations of data units to test whether the model’s performance when it cannot see combina- tions is comparable to that when it can. 3.1.1 Dataset Construction We construct one test set and two training sets Atom (A) and Combination (C). Figure 2 illus- trates the goal of our construction. We call the data units that appear in the test set atoms. Both Atom and Combination cover all atoms, and they have the same total number of atoms and close distribu- tion of atoms. However, Atom does not contain any combination of atoms, but Combination does. We use Algorithm 1 to construct Atom and the test set. We assume that the original dataset is the set S and each sample x in S is a set of data units. 
For a set x, we use |x| to denote the number of data units it contains. For a set S containing sets, we use (cid:83) S to denote the union of the sets it contains, Figure 2: An example of datasets for the systematicity evaluation. Each pair of brackets denotes a sample and each letter (A~G) denotes a data unit. the fine-tuning phase as the training phase. We use LoRA fine-tuning, which has better performance than full fine-tuning in data-to-text generation (Hu et al., 2022). For model training, the optimizer is Adam (Kingma and Ba, 2015). The learning rate is 1e-4, and the batch size is 6. For the LoRA setting, we use r = 8, a = 32, and 0.1 dropout. We train the models for 10 epochs. For model inference, the beam width is 5. See Appendix A for more details about model size, input, training, and inference. 2.3 Metrics We use PARENT (Dhingra et al., 2019) as the performance metric to measure the quality of the model’s output. PARENT is a metric designed for data-to-text generation tasks, which considers the alignment of the output to both input data and ref- erence texts. PARENT better reflects the semantic fidelity of the output and has a stronger correlation with human judgments than reference-only-based metrics. Metrics other than the performance metric are described in the corresponding aspects. 3 Evaluation Method In this section, we describe each aspect of SPOR. Each subsection corresponds to an aspect that in- cludes: (1) the overview; (2) how to construct the dataset; (3) the statistics of the dataset; (4) how to perform the evaluation and (5) the results and anal- ysis. For all results reported, we run experiments three times with different random seeds and aver- age the results to avoid contingency. Appendix B provides the qualitative analysis of evaluations, showing specific samples with model outputs. 3.1 Systematicity The first aspect we evaluate is systematicity (Hup- kes et al., 2020). Systematicity is a notion fre- quently used in tests of compositional generaliza- tion (Lake and Baroni, 2018; Kim and Linzen, CombinationAtom(AF) (B) (BE) (C) (CG) (D) (ABC) (BCD)Test (AB) (BC) (CD) atoms A, B, C, Dsame total number of atoms close distribution of atoms[1A 2B 2C 1D] [1A 2B 2C 1D] Algorithm 2 Construction of Combination Input: Atom (A), test set (T ), Blocked (B), divergence mea- sure function D, threshold r Output: Combination (C) C, A′ ← A B ← B − T T ← (cid:83) T define function F(x, G) as (cid:80) define function V(x) as F(x ∩ T, A) − F (x ∩ T, C) while B ̸= ∅ do y∈G |x ∩ y| x ← sample in S with maximum V(x) B ← B − {x} R ← ∅ for all y ∈ A′ in ascending order of V(y) do R′ ← R ∪ {y} if | (cid:83) R′| ≤ |x| and T ⊆ (C − R′) ∪ {x} then R ← R′ end if end for if | (cid:83) R| = |x| and D(A, (C − R) ∪ {x}) ≤ r then C ← (C − R) ∪ {x} A′ ← A′ − R end if end while i.e., (cid:83) S is the set of all data units occurring in S. Initially, both Atom and the test set are empty sets, and we set an initially empty auxiliary set Blocked to store samples containing combinations of atoms. Each time, we remove a sample x from S and check all samples in the current Atom and samples in S that are not in Blocked and include only one data unit in x. If these samples cover all data units in x, and Atom does not contain combinations of data units in x, then we: • Add x to the test set. • Remove samples in S that are not in Blocked and include only one data unit in x, and add them to Atom. • Add samples in S that include more than one data unit in x to Blocked. 
This process is repeated until S is empty. Under this construction method, Atom covers all atoms but does not contain any combination of atoms. The samples containing combinations of atoms are all in Blocked. We then use Algorithm 2 to construct Combi- nation. The core idea of Algorithm 2 is to replace samples in Atom with samples that have combi- nations of atoms to obtain Combination. We ini- tialize Combination with Atom. For each sam- ple x in Blocked but not in the test set, we try to replace a cluster of samples belonging to Atom with x in Combination, ensuring that Combina- WebNLG C A 3,256 4,717 8,267 9,636 5,281 5,281 1,969 0 E2E A 3,351 13,311 3,298 0 C 1,390 7,043 3,298 2,670 # samples # data units # atoms # pairs Table 1: Some statistics about the training sets for the systematicity evaluation. Pairs refer to pairs of atoms that co-occur in a sample. tion still covers all atoms and the total number of atoms remains the same after the replacement. Each replacement makes Combination have one more sample with combinations of atoms. k p0.5 k q0.5 To ensure that the distributions of atoms in Atom and Combination are close, we perform the re- placement only if the divergence of the two dis- tributions after the replacement does not exceed a threshold r. Following Keysers et al. (2020), we measure the divergence using the Chernoff coeffi- cient D(P, Q) = 1 − (cid:80) k ∈ [0, 1] (Chung et al., 1989) and set the threshold r = 0.02, where pk and qk denote the proportion of the atom k in datasets P and Q, respectively. Random replace- ments will cause the divergence to reach the thresh- old too early. To avoid this, we define V(x) as the subtraction of the total occurrences of atoms from x in Atom and Combination, and try to use samples with high V (x) to replace samples with low V (x). This replacement method controls the growth of divergence, allowing more replacements to occur and thus allowing Combination to contain more combinations of atoms. 3.1.2 Dataset Statistics Table 1 shows the statistics about the training sets for the systematicity evaluation. The size of the test set for the systematicity evaluation is related to the number of distinct data units contained in the original dataset. For a dataset like E2E with a small number of distinct data units, it is more difficult to construct a large test set. To maximize the size of the test set, we randomly pick x among those with the largest |x| in Algorithm 1. We perform multiple random constructions and use the one with the largest test set size. The test set contains 2,360 samples on WebNLG and 156 samples on E2E. 3.1.3 Evaluation We train the model on Atom and Combination respectively and test the performance of the two WebNLG C A 66.14† 66.54 64.44† 64.80 63.98‡ 64.93 68.93 69.07 66.87† 67.09 65.87† 66.18 E2E A C 49.19‡ 52.76 50.49‡ 52.63 51.82‡ 52.95 53.78‡ 54.72 53.06† 54.22 51.28‡ 53.35 T5-large BART-large GPT-2-large T5-11b Mistral-7b Llama-2-13b Table 2: Performance of models on the two training sets for the systematicity evaluation. Significance tests are conducted to check whether the performance of the model on Atom is significantly lower than that on Combination. † means p < 0.1 and ‡ means p < 0.05. trained models on the test set. We evaluate sys- tematicity of the model by the performance on Atom. We use the performance on Combination as a bound to analyze the systematicity level of the model. 3.1.4 Results and Analysis Table 2 shows the results of the systematicity eval- uation. 
On WebNLG, T5-11b performs best on Atom, showing the strongest systematicity. Among the LLMs, both T5-11b and Mistral-7b outperform all the smaller LMs on Atom, reflecting an im- provement in systematicity. However, all models, including LLMs, show performance gaps on Atom and Combination. As Atom and Combination have the same total number of atoms and close distribution of atoms, the gaps are attributed to dif- ferences in the visibility of combinations of atoms, indicating that when the model cannot see combina- tions of atoms during training, it is unable to handle combinations of atoms as well as when it can see. This reflects a deficiency in systematicity of the model. The results on E2E are similar, and the performance gaps on Atom and Combination on E2E are more significant than on WebNLG, which further confirms the deficiency in systematicity of the model. In conclusion, the LLMs overall show an improvement in systematicity compared to the smaller LMs but do not eliminate the deficiency in systematicity of the model. Figure 3: An example of datasets with threshold N = 4 for the productivity evaluation. Each number represents a sample with a corresponding number of data units. N = 3 N = 4 N = 5 N = 3 N = 4 N = 5 1 249 19 249 0 249 9 86 0 86 0 86 0 2 193 18 193 17 193 52 592 66 592 80 592 389 3 239 9 239 35 239 128 1,480 633 1,480 1,227 1,480 1,400 4 0 56 260 25 260 99 0 0 2,151 1,601 2,151 2,029 5 0 57 0 148 227 34 0 414 0 4 1,612 1,435 6 0 44 0 99 0 203 0 148 0 543 0 219 7 0 71 0 117 0 178 0 103 0 113 0 113 I V I V I V I V I V I V Table 3: Number of samples in training sets for the productivity evaluation with each number (from 1 to 7) of data units in WebNLG (above) and E2E (below). productivity is also a notion frequently used in tests of compositional generalization (Lake and Baroni, 2018; Hupkes et al., 2020; Ontañón et al., 2022). In the data-to-text generation task, productivity cor- responds to the ability to handle a larger amount of data in the input than those seen during training. In practical applications, the amount of data con- tained in an input can be arbitrarily large, and it is impossible for a finite corpus to cover inputs with arbitrarily large amounts of data. The model will often encounter inputs with a larger amount of data than those seen during training and should have the ability to handle this situation. In the productivity evaluation, we limit the num- ber of data units of each sample during training, and test how the model performs when handling a larger amount of input data units than those seen during training. On the same test set, we also test the model trained with samples without the limit on the number of input data to see whether the model’s performance with the limit is comparable to that without the limit. 3.2 Productivity 3.2.1 Dataset Construction The second aspect we evaluate is productivity (Hup- kes et al., 2020). Productivity, in the context of compositionality, refers to the ability to extrapolate to longer sequences than those seen during training (Ontañón et al., 2022). Similar to systematicity, We construct one test set and two training sets In- visible (I) and Visible (V). We start by setting a number threshold N . We construct Invisible us- ing all samples with no more than N data units. 
Similar to Algorithm 2, we replace the samples in 1 1 1 1 1 12 2 2 3 3 34 4 4 5 6 7InvisibleVisible1 1 2 3 4 5 6 7n > 45 6 7Test3 4 4 4 5 6 7replacement (same total)1 1 1 1 1 1 2 2 2 3 3 3 4 4 4n ≤ 4n > 4 N = 3 I V 68.24‡ 69.82 67.58† 69.17 63.95‡ 66.43 70.86‡ 71.10 68.92‡ 70.55 68.77‡ 69.78 WebNLG N = 4 I V 68.32‡ 70.11 67.54† 69.89 64.96‡ 68.61 70.03 70.15 69.43† 71.09 69.55† 70.30 N = 5 I V 68.36† 68.71 68.84 69.17 65.25† 66.90 69.57‡ 69.83 69.63 69.41 69.23 69.08 N = 3 I V 61.27‡ 62.91 62.59 62.98 57.81‡ 62.89 62.79† 63.33 62.71‡ 64.53 61.18‡ 62.76 E2E N = 4 I V 64.31‡ 64.91 64.31 64.68 64.22† 65.17 63.97 64.48 65.13‡ 66.06 64.86 64.46 N = 5 V I 63.81 64.11 63.37† 63.71 64.15 63.99 63.89† 64.25 64.82 64.18 64.22 64.40 T5-large BART-large GPT-2-large T5-11b Mistral-7b Llama-2-13b Table 4: Performance of models trained on the two training sets with the number threshold N ∈ {3, 4, 5} for the productivity evaluation. Significance tests are conducted to check whether the performance of the model on Invisible is significantly lower than that on Visible. † means p < 0.1 and ‡ means p < 0.05. Invisible with samples with more than N data units to obtain Visible, ensuring that the total numbers of data units in Invisible and Visible are the same and that the divergence of the distribution is less than the threshold r = 0.02 (using the same metric as in systematicity). We construct the test set using all samples with more than N data units in the orig- inal test. We ensure that any data unit in the test set is present in both Invisible and Visible. Our ex- periments try the number threshold N ∈ {3, 4, 5}. Figure 3 shows an example of dataset construction. 3.2.2 Dataset Statistics Samples in WebNLG with 6 and 7 data units only cover four domains: Astronaut, Monument, Univer- sity, and Company. To avoid inconsistent domain distributions of training sets, we only use samples from these four domains to construct the datasets for the productivity evaluation on WebNLG. Ta- ble 3 shows the number of samples in training sets with each number of input triples. For N ∈ {3, 4, 5}, the test set of WebNLG contains 219 / 153 / 99 samples, and the test set of E2E contains 1,314 / 1,002 / 477 samples. 3.2.3 Evaluation We train the model on Invisible and Visible respec- tively and test the performance of the two trained models on the test set. We evaluate productivity of the model by the performance on Invisible. We use the performance on Visible as a bound to analyze the productivity level of the model. 3.2.4 Results and Analysis Table 4 shows the results of the productivity evalua- tion. On WebNLG, T5-11b performs best on Invis- ible with different thresholds. On E2E, the best per- forming model on Invisible with each threshold is one of the LLMs. The LLMs overall show stronger productivity than the smaller LMs. However, all models, including LLMs, show performance gaps on Invisible and Visible on both WebNLG and E2E. As Invisible and Visible have the same total number of data units and close distribution of data units, the gaps are attributed to differences in the visibility of samples with the number of input data units exceeding the threshold, indicating that when the model cannot see samples with the number of input data units exceeding the threshold during training, it is unable to handle such samples as well as when it can see. This reflects a deficiency in productivity of the model. 
The performance gaps of most models on Invisible and Visible are more significant for smaller thresholds, indicating that the deficiency in productivity is more pronounced when the maximum number of input data units within a sample seen during training decreases. In conclusion, the LLMs overall show an improve- ment in productivity compared to the smaller LMs but do not eliminate the deficiency in productivity of the model. 3.3 Order Invariance The third aspect we evaluate is order invariance. This notion is previously studied by Wang et al. (2023), who finds that LLMs are sensitive to the order of options in multiple choice task. In the data- to-text generation task, order invariance refers to the ability that a model’s output text maintains the fidelity and proper ordering of data when the same unordered set of data is input in different orders. Having order invariance means that the model can decompose the input into the set of data units and recombine them properly, regardless of the order of data units in the input, which reflects compositional generalization. In practical application scenarios, there are often cases where the data does not have a known linear order, and thus the model is required to have order invariance to ensure the fidelity and Fidelity WebNLG Ordering PBH POH PBH POH 6.84 97.56 3.98 97.65 9.90 90.55 99.10 4.53 7.80 96.49 6.86 96.69 39.98 94.63 38.14 92.45 37.74 81.06 96.58 38.81 38.46 94.65 39.00 91.44 87.15 88.69 82.64 89.05 86.29 87.28 53.56 54.78 54.84 54.93 54.95 54.59 1.67 0.94 6.86 0.64 2.67 2.33 4.55 5.94 15.80 3.01 4.64 7.38 CWIO PERF +0.13 +0.10 +0.11 +0.10 +0.11 +0.09 +0.81 +0.76 +0.76 +0.76 +0.79 +0.78 67.95 66.96 67.64 68.47 68.69 68.07 65.53 64.12 65.07 66.04 66.29 65.66 T5-large BART-large GPT-2-large T5-11b Mistral-7b Llama-2-13b T5-large BART-large GPT-2-large T5-11b Mistral-7b Llama-2-13b Fidelity E2E Ordering PBH POH PBH POH 10.15 91.39 3.59 98.05 10.99 74.37 99.12 3.68 4.46 96.49 7.54 96.88 43.95 98.36 43.91 97.25 42.31 85.34 99.28 43.56 43.40 97.95 43.68 97.97 77.22 82.26 68.08 82.75 82.28 78.50 37.28 37.58 38.96 37.05 37.24 37.38 6.80 0.90 19.22 0.60 3.08 2.75 1.62 2.67 11.77 0.72 1.93 1.99 CWIO PERF +0.51 +0.52 +0.50 +0.57 +0.42 +0.46 +0.95 +0.95 +0.95 +0.94 +0.94 +0.95 63.07 62.58 62.60 62.56 63.91 62.81 55.74 56.98 56.52 56.21 57.37 56.52 Table 5: Results of models trained on Original (above) and Match (below) for the order invariance evaluation. CWIO refers to the correlation with the input order. PERF refers to the performance on the original test set. proper data ordering of the output texts under any data input order. In the order invariance evaluation, for the same set of data units, we use two different input or- ders and then evaluate whether outputs maintain the fidelity and proper data ordering under both input orders. Further, we investigate the effect of the training process on order invariance. We con- struct a training set in which data units are arranged in the input in the same order as they appear in the text. We evaluate whether using such a train- ing set makes the model more inclined to arrange data units in the text according to input order and whether it affects the order invariance of the model. 3.3.1 Dataset Construction We design a search algorithm to find the occurrence position of data units in the text (see Appendix C for details). For each data-text pair in the original training set, we arrange the data units in the input according to their occurrence in the text, forming the training set Match (M). 
Correspondingly, Orig- inal (O) refers to the original training set. 3.3.2 Dataset Statistics For the order invariance evaluation on fidelity and proper data ordering, we remove samples with only one data unit and samples where the order of the data units in the text cannot be determined. The test set of WebNLG contains 1,559 samples, and the test set of E2E contains 1,623 samples. Figure 4: An illustration of the order invariance evalua- tion. Each letter (A~D) denotes a data unit. For a certain property, the evaluation checks whether the output has that property. ✓ means yes and × means no. the input data units to form two different inputs. We determine the set of data units contained in the output and the order of the data units, and then consider two properties: (1) The output is consid- ered to have fidelity if the set of data units exactly matches the input. (2) The output is considered to have proper data ordering if the order of the data units satisfies k > 0 with the order of at least one reference text, where k ∈ [−1, 1] is the Kendall coefficient (Abdi, 2007), which measures the corre- lation of two orders. For each of the two properties, we evaluate the proportion of both outputs having the property (PBH) and the proportion of only one output having the property (POH). A model with high order invariance on the property should have a higher PBH. Relatively, POH reflects the order variance of the model. Figure 4 shows an illustra- tion of the evaluation. 3.3.3 Evaluation 3.3.4 Additional Tests We train the model on Original. For each sample of the original test set, we randomize the order of To investigate the effect of the order consistency of data units in input and output in the training set, we A B C Dorder1B A D Corder2output1output2Property Checkinginvariant(PBH)variant(POH)noproperty train the model on Match and perform additional tests. Besides fidelity and proper data ordering in the evaluation, we also perform the following tests on the models trained on Original and Match. First, for the input and model output of the original test set, we determine the order of data units in the output, and then calculate its correlation with the input order of the data units (CWIO). We use the Kendall coefficient to measure the correlation. A higher correlation means that the model is more inclined to arrange data units in the text according to input order. Second, we test the performance of the model on the original test set to see the effect of different training sets on the performance. 3.3.5 Results and Analysis Table 5 shows the results of the order invariance evaluation. When trained on Original, on fidelity, T5-11b has the highest PBH on both WebNLG and E2E, showing the strongest order invariance. As a smaller LM, BART-large has the second highest PBH, which is higher than LLMs Mistral-7b and Llama-2-13b. From the POH we can see that all models show order variant cases on fidelity, i.e., for two input orders of the same set of data units, a model may show fidelity in one order but not in the other. On proper data ordering, the results are similar to fidelity and show a larger proportion of order variant cases. This means that for two input orders of data units, the two outputs of the model may differ in their data ordering, where one is proper and the other is not. Overall, the models are deficient in order invariance on both fidelity and proper data ordering. 
Compared to Original, when trained on Match, the CWIO of the model is significantly higher, in- dicating that the model is more inclined to arrange the data units in the text according to input order. This inclination about ordering leads to a decrease in order invariance on proper data ordering. An unexpected finding is that the inclination also af- fects order invariance on fidelity, overall leading to a decrease on WebNLG and an increase on E2E (see Appendix B.3 for the discussion). The perfor- mance of the model trained on Match is signifi- cantly lower than on Original, indicating that high order consistency of data units in input and output during training negatively affects the performance when the order of input data units is arbitrary. Figure 5: An example of dataset construction for the rule learnability evaluation. 3.4 Rule Learnability Models with high compositionality have the “will- ingness to prefer rules over memorization” (Hupkes et al., 2020), i.e., they tend to apply observed rules to recombine elements rather than simply memo- rizing combinations of elements. Based on this understanding, we propose the last aspect of the evaluation, rule learnability, which refers to the ability to learn rules from training and apply them during testing. Our evaluation focuses on the copy rule (Gehrmann et al., 2018) in data-to-text gen- eration, which refers to the rule that certain infor- mation involved in the text (e.g., entities, numeric values) should be copied directly from the data to ensure the fidelity of the text. In the rule learnability evaluation, we replace some entities or numeric values that should be copied with phrases that hide information, and then check whether the model correctly applies the copy rule. A correct copy should not have omissions of phrases that hide information or hallucinations of outputting entities and numeric values that have been hidden. If the model only memorizes spe- cific mappings that conform to the copy rule during training, rather than actually learning the copy rule, then it will not be able to correctly apply the copy rule to the phrases that hide information. 3.4.1 Dataset Construction On WebNLG, the copy rule is mainly applied to entities. For each sample in the original WebNLG test set, we find the entities that act as subjects and are copied in every reference text, and replace these entities in the input with "Entity i" (i denotes the entity’s label, which is used to distinguish between different entities). On E2E, the copy rule is mainly applied to values, and we focus on numeric values. Similar to WebNLG, we replace the numeric value with "Value i". If a value contains more than one < Entity 1, starring, Entity 2 >< Entity 2, birth place, Lancashire >Entity 1: Bananaman / Entity 2: Bill Oddiename [The Phoenix], eatType [pub], food[French], priceRange [more than Value A], customerRating [Value B out of 5]Value A: £30 / Value B: 5 numeric value, only the first one will be replaced. Figure 5 shows an example of dataset construction. 3.4.2 Dataset Statistics For the rule learnability evaluation, on WebNLG, we retain only samples in which there is at least one entity that satisfies the replacement condition. The final test set contains 1,614 samples. On E2E, since the training data guarantees copies of values, we can construct samples without reference texts to cover more combinations. 
3.4.2 Dataset Statistics

For the rule learnability evaluation, on WebNLG, we retain only samples in which there is at least one entity that satisfies the replacement condition. The final test set contains 1,614 samples. On E2E, since the training data guarantees copies of values, we can construct samples without reference texts to cover more combinations. We enumerate the values of 6 attributes (except the attribute near, which is similar to name) and ensure that at least one value contains the numeric value, resulting in 1,440 samples in the final test set.

3.4.3 Evaluation

We train the model on the original training set and then check the output of the model on the replaced inputs. The result of checking each sample can be represented as (a, b), where a ∈ {0, 1} indicates whether all phrases that hide information are copied correctly (using fuzzy matching, see Appendix D for details), and b ∈ {0, 1} indicates whether the hidden entities or numeric values appear. In E2E, for a hidden value, we also consider b = 1 if other possible values corresponding to its attribute appear. In the representation of the result, a = 0 implies omissions and b = 1 implies hallucinations. Of the four possible results, only (1, 0) indicates that the copy rule is correctly applied. We count the proportions of the four cases and evaluate the rule learnability by the proportion of (1, 0).

3.4.4 Results and Analysis

Table 6 shows the results of the rule learnability evaluation.

Table 6: Results of the rule learnability evaluation on WebNLG (above) and E2E (below). Each column represents the proportion of the corresponding case.

WebNLG          (0, 0)   (0, 1)   (1, 0)   (1, 1)
T5-large         10.16     0.31    89.32     0.21
BART-large       10.64     1.30    87.59     0.48
GPT-2-large      19.43     1.69    78.44     0.43
T5-11b           17.35     3.02    79.62     0.02
Mistral-7b       19.08     1.40    79.04     0.48
Llama-2-13b      21.15     0.45    78.11     0.29

E2E             (0, 0)   (0, 1)   (1, 0)   (1, 1)
T5-large          2.64     1.44    95.93     0.00
BART-large       13.17    57.57    29.26     0.00
GPT-2-large      15.28    48.19    36.06     0.46
T5-11b            0.05     2.38    97.57     0.00
Mistral-7b        0.65     0.00    99.35     0.00
Llama-2-13b       0.86     0.00    99.14     0.00

On WebNLG, all models apply the copy rule less than 90% correctly. The errors are mainly concentrated on the (0, 0) case. This case indicates that the model does not have the hallucinations of outputting entities that have been hidden, but it has omissions of phrases that hide information. Among all the models, T5-large and BART-large have relatively high correct rates. The LLMs do not show higher correct rates compared to the smaller LMs. All LLMs have a correct rate of less than 80%.

The results shown on E2E are different. On E2E, the LLMs have high correct rates and outperform the smaller LMs. Among the LLMs, both Mistral-7b and Llama-2-13b are almost completely correct. Among the smaller LMs, BART-large and GPT-2-large show very low correct rates. Their proportions of (0, 1) are both high, indicating that there are serious hallucinations of outputting numeric values that have been hidden. When outputting these numeric values, the model tends not to output the corresponding phrases that hide information, resulting in omissions. Their proportions of (0, 0) also indicate the presence of simple omissions unrelated to the hallucinations.

In summary, the results show that all models, including LLMs, are unable to achieve high correct copy rates on both WebNLG and E2E, and that omissions and hallucinations are prevalent in the models. This indicates that for copy rules in data-to-text generation, the models are deficient in rule learnability and need further improvement.

4 Conclusions

In this work, we propose SPOR, a comprehensive and practical evaluation method for compositional generalization in data-to-text generation, which includes four aspects of manifestations: systematicity, productivity, order invariance, and rule learnability.
We demonstrate on WebNLG and E2E how SPOR enables evaluations without additional man- ual annotations based on existing datasets. We evaluate some existing language models, including LLMs. We find that the models are deficient in various aspects of compositional generalization in data-to-text generation and need further improve- ment. Our work supports comprehensive research on different manifestations of compositional gener- alization in data-to-text generation and provides a framework for identifying and evaluating improve- ments in this ability of language models. Limitations A limitation of our work is the limited size of the models evaluated. Although we include some LLMs in our evaluation, due to the need for fine- tuning with limited resources, the size of the LLMs does not exceed 13b. Resource constraints make it difficult to apply fine-tuning methods on larger LMs, and there is currently no effective method for directly applying larger LMs to data-to-text gener- ation. One possible method is in-context learning, which performs inference directly but adds a prefix to the input that demonstrates a small number of samples for the model to learn. In the in-context learning style, the training phase of compositional generalization corresponds to the sample demon- stration in the prefix, and the evaluation needs to consider the method of sample demonstration se- lection. We will continue to follow the progress of applying larger LMs to data-to-text generation and explore evaluation methods for compositional generalization in data-to-text generation of larger LMs. Ethics Statement The datasets and models we use are open-source and we use them for scientific research purposes only. The datasets we construct will also be open source for scientific research purposes. The datasets we use and construct do not contain any information that names or uniquely identifies indi- vidual people or offensive content. Since we use the realistic dataset WebNLG, we are particularly concerned with data faithfulness, i.e., all data in the reconstructed evaluation dataset must not show information that contradicts the orig- inal realistic dataset, which may be inconsistent with the real world and may be harmful. In the systematicity, productivity, and order invariance evaluations, we do not modify the information in any triple. In the rule learnability evaluation, we only hide the information, and no new information is generated. Therefore, the data used in the eval- uation do not contain information that contradicts the original realistic dataset. The AI assistant we use in our work is Copilot (for simple code completion). Acknowledgements This work was supported by National Science and Technology Major Project (2022ZD0116308). The corresponding author is Houfeng Wang. We would like to thank the anonymous reviewers for their recognition and valuable suggestions for our work. These suggestions helped us to revise the work to make it more solid. References Hervé Abdi. 2007. The kendall rank correlation coeffi- cient. Encyclopedia of Measurement and Statistics. Sage, Thousand Oaks, CA, pages 508–510. Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Eric Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, Albert Web- son, Shixiang Shane Gu, Zhuyun Dai, Mirac Suz- gun, Xinyun Chen, Aakanksha Chowdhery, Sharan Narang, Gaurav Mishra, Adams Yu, Vincent Zhao, Yanping Huang, Andrew Dai, Hongkun Yu, Slav Petrov, Ed H. Chi, Jeff Dean, Jacob Devlin, Adam Roberts, Denny Zhou, Quoc V. 
Le, and Jason Wei. 2022. Scaling instruction-finetuned language mod- els. J.K Chung, P.L Kannappan, C.T Ng, and P.K Sahoo. 1989. Measures of distance between probability dis- tributions. Journal of Mathematical Analysis and Applications, 138(1):280–292. Bhuwan Dhingra, Manaal Faruqui, Ankur P. Parikh, Ming-Wei Chang, Dipanjan Das, and William W. Co- hen. 2019. Handling divergent reference texts when evaluating table-to-text generation. In Proceedings of the 57th Conference of the Association for Compu- tational Linguistics, ACL 2019, Florence, Italy, July 28- August 2, 2019, Volume 1: Long Papers, pages 4884–4895. Association for Computational Linguis- tics. Ondrej Dusek, David M. Howcroft, and Verena Rieser. 2019. Semantic noise matters for neural natural lan- guage generation. In Proceedings of the 12th Interna- tional Conference on Natural Language Generation, INLG 2019, Tokyo, Japan, October 29 - November 1, 2019, pages 421–426. Association for Computational Linguistics. Thiago Castro Ferreira, Claire Gardent, Nikolai Ilinykh, Chris van Der Lee, Simon Mille, Diego Moussallem, and Anastasia Shimorina. 2020. The 2020 Bilingual, Bi-Directional WebNLG+ Shared Task Overview and Evaluation Results (WebNLG+ 2020). In Pro- ceedings of the 3rd International Workshop on Nat- ural Language Generation from the Semantic Web (WebNLG+), Dublin/Virtual, Ireland. Claire Gardent, Anastasia Shimorina, Shashi Narayan, and Laura Perez-Beltrachini. 2017. The webnlg chal- lenge: Generating text from RDF data. In Proceed- ings of the 10th International Conference on Natural Language Generation, INLG 2017, Santiago de Com- postela, Spain, September 4-7, 2017, pages 124–133. Association for Computational Linguistics. Albert Gatt and Emiel Krahmer. 2018. Survey of the state of the art in natural language generation: Core tasks, applications and evaluation. J. Artif. Intell. Res., 61:65–170. Sebastian Gehrmann, Falcon Z. Dai, Henry Elder, and Alexander M. Rush. 2018. End-to-end content and In Pro- plan selection for data-to-text generation. ceedings of the 11th International Conference on Natural Language Generation, Tilburg University, The Netherlands, November 5-8, 2018, pages 46–56. Association for Computational Linguistics. Edward J. Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. 2022. Lora: Low-rank adaptation of large language models. In The Tenth International Conference on Learning Representations, ICLR 2022, Virtual Event, April 25-29, 2022. OpenReview.net. Dieuwke Hupkes, Verna Dankers, Mathijs Mul, and Elia Bruni. 2020. Compositionality decomposed: How do neural networks generalise? (extended abstract). In Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence, IJCAI 2020, pages 5065–5069. ijcai.org. Dieuwke Hupkes, Mario Giulianelli, Verna Dankers, Mikel Artetxe, Yanai Elazar, Tiago Pimentel, Chris- tos Christodoulopoulos, Karim Lasri, Naomi Saphra, Arabella Sinclair, Dennis Ulmer, Florian Schottmann, Khuyagbaatar Batsuren, Kaiser Sun, Koustuv Sinha, Leila Khalatbari, Maria Ryskina, Rita Frieske, Ryan Cotterell, and Zhijing Jin. 2022. State-of-the-art gen- eralisation research in NLP: a taxonomy and review. CoRR, abs/2210.03050. Albert Q. 
Jiang, Alexandre Sablayrolles, Arthur Men- sch, Chris Bamford, Devendra Singh Chaplot, Diego de Las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lucile Saulnier, Lélio Re- nard Lavaud, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Thibaut Lavril, Thomas Wang, Timo- thée Lacroix, and William El Sayed. 2023. Mistral 7b. CoRR, abs/2310.06825. Mihir Kale and Abhinav Rastogi. 2020. Text-to-text pre- training for data-to-text tasks. In Proceedings of the 13th International Conference on Natural Language Generation, INLG 2020, Dublin, Ireland, December 15-18, 2020, pages 97–102. Association for Compu- tational Linguistics. Daniel Keysers, Nathanael Schärli, Nathan Scales, Hylke Buisman, Daniel Furrer, Sergii Kashubin, Nikola Momchev, Danila Sinopalnikov, Lukasz Stafiniak, Tibor Tihon, Dmitry Tsarkov, Xiao Wang, Marc van Zee, and Olivier Bousquet. 2020. Measur- ing compositional generalization: A comprehensive method on realistic data. In 8th International Confer- ence on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net. Najoung Kim and Tal Linzen. 2020. COGS: A compo- sitional generalization challenge based on semantic interpretation. In Proceedings of the 2020 Confer- ence on Empirical Methods in Natural Language Processing, EMNLP 2020, Online, November 16-20, 2020, pages 9087–9105. Association for Computa- tional Linguistics. Diederik P. Kingma and Jimmy Ba. 2015. Adam: A In 3rd Inter- method for stochastic optimization. national Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings. Brenden M. Lake and Marco Baroni. 2018. General- ization without systematicity: On the compositional skills of sequence-to-sequence recurrent networks. In Proceedings of the 35th International Conference on Machine Learning, ICML 2018, Stockholmsmässan, Stockholm, Sweden, July 10-15, 2018, volume 80 of Proceedings of Machine Learning Research, pages 2879–2888. PMLR. Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. BART: denoising sequence-to-sequence pre-training for natural language generation, translation, and com- prehension. In Proceedings of the 58th Annual Meet- ing of the Association for Computational Linguistics, ACL 2020, Online, July 5-10, 2020, pages 7871–7880. Association for Computational Linguistics. Sanket Vaibhav Mehta, Jinfeng Rao, Yi Tay, Mihir Kale, Ankur Parikh, and Emma Strubell. 2022. Improv- ing compositional generalization with self-training In Proceedings of the for data-to-text generation. 60th Annual Meeting of the Association for Compu- tational Linguistics (Volume 1: Long Papers), ACL 2022, Dublin, Ireland, May 22-27, 2022, pages 4205– 4219. Association for Computational Linguistics. Jekaterina Novikova, Ondrej Dusek, and Verena Rieser. 2017. The E2E dataset: New challenges for end- to-end generation. In Proceedings of the 18th An- nual SIGdial Meeting on Discourse and Dialogue, Saarbrücken, Germany, August 15-17, 2017, pages 201–206. Association for Computational Linguistics. Santiago Ontañón, Joshua Ainslie, Zachary Fisher, and Vaclav Cvicek. 2022. Making transformers solve compositional tasks. In Proceedings of the 60th An- nual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL 2022, Dublin, Ireland, May 22-27, 2022, pages 3591–3607. Association for Computational Linguistics. 
Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners.

Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. J. Mach. Learn. Res., 21:140:1–140:67.

Leonardo F. R. Ribeiro, Martin Schmitt, Hinrich Schütze, and Iryna Gurevych. 2020. Investigating pretrained language models for graph-to-text generation. CoRR, abs/2007.08426.

Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton-Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel Kloumann, Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurélien Rodriguez, Robert Stojnic, Sergey Edunov, and Thomas Scialom. 2023. Llama 2: Open foundation and fine-tuned chat models. CoRR, abs/2307.09288.

Peiyi Wang, Lei Li, Liang Chen, Dawei Zhu, Binghuai Lin, Yunbo Cao, Qi Liu, Tianyu Liu, and Zhifang Sui. 2023. Large language models are not fair evaluators. CoRR, abs/2305.17926.

Sam Wiseman, Stuart M. Shieber, and Alexander M. Rush. 2017. Challenges in data-to-document generation. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, EMNLP 2017, Copenhagen, Denmark, September 9-11, 2017, pages 2253–2263. Association for Computational Linguistics.

A Model Details

The models we evaluate include T5-large (738M), BART-large (406M), GPT2-large (774M), T5-11b, Mistral-7b, and Llama-2-13b. All models are downloaded from HuggingFace, and training and inference are based on the transformers library. Each item in our experiment is done on a single NVIDIA A800 80G GPU.

For model input, we use the linearization method (Ribeiro et al., 2020; Kale and Rastogi, 2020). For WebNLG, we add the special identifiers <head>, <relation>, and <tail> before the subject, predicate, and object of each triple, and then linearly concatenate all triples to form the input. For E2E, we form the input by linearly concatenating each attribute-value pair in the form of "attribute[value]". Following Ribeiro et al. (2020), for WebNLG, we add a prefix "translate from Triple to Text:" before the input. Similarly, we use the prefix "translate from MR to Text:" for E2E.

For systematicity and productivity evaluations, we report the best results on the test set among all checkpoints. For order invariance and rule learnability evaluations, we report the results of the checkpoint that has the best performance on the original test set.
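To make the input format concrete, here is a minimal sketch of the linearization and prefixing described above. It is our own paraphrase of the stated format (the exact separators between triples or attribute-value pairs are an assumption), not the authors' code.

```python
# Sketch of the input linearization described in Appendix A
# (following Ribeiro et al., 2020; Kale and Rastogi, 2020).

def linearize_webnlg(triples):
    # triples: list of (subject, predicate, object) strings.
    parts = [f"<head> {s} <relation> {p} <tail> {o}" for s, p, o in triples]
    return "translate from Triple to Text: " + " ".join(parts)

def linearize_e2e(attribute_values):
    # attribute_values: list of (attribute, value) pairs.
    parts = [f"{attr}[{value}]" for attr, value in attribute_values]
    return "translate from MR to Text: " + ", ".join(parts)

# Example: a masked WebNLG input in the style of Figure 5.
print(linearize_webnlg([("Entity 1", "starring", "Entity 2"),
                        ("Entity 2", "birth place", "Lancashire")]))
```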
B Qualitative Analysis of Evaluations

Tables 7–10 show some specific samples with model outputs in each aspect of the evaluation.

B.1 Systematicity

Table 7 shows samples from Llama-2-13b in the systematicity evaluation. On WebNLG, the issue on fidelity is the omission of data units, and the issue on fluency is the stiff expression (the model repeatedly enumerates data units by applying the same pattern, and lacks fluency in articulation). On E2E, the issues center on fluency, similar to those shown on WebNLG. The stiff expression can be attributed to the difficulty of models trained on Atom in handling unseen combinations.

B.2 Productivity

Table 8 shows samples from Llama-2-13b in the productivity evaluation. The issues center on fidelity. In addition to the omissions present on WebNLG and E2E, hallucinations are found on E2E. The fidelity issue can be attributed to the difficulty of models trained on Invisible in handling a larger number of input data units.

B.3 Order Invariance

Table 9 shows samples from Llama-2-13b in the order invariance evaluation. On WebNLG, for the model trained on Original, both outputs have fidelity. However, the data ordering of Output 1 is improper, while that of Output 2 is proper (for < Trance music, stylistic origin, Pop music >, it should be next to < Andrew Rayel, genre, Trance music >, not isolated at the end). For the model trained on Match, the order of the output data units is consistent with the input order. When the input order is not the proper data ordering, the model may try to apply complex grammar on an unnatural order of data units, which results in some data units not being generated, as demonstrated in the sample. On E2E, the two outputs on Original are consistent in ordering but vary in fidelity. The two outputs on Match have exactly the same data ordering as the inputs, resulting in a stiff expression. However, from the experimental results, such a form of output improves order invariance on fidelity on E2E. We hypothesize that due to the relatively simple grammar of E2E, this form does not lead to omissions as on WebNLG, and it may be easier for the model to maintain fidelity because there is no need to rearrange the data units.

B.4 Rule Learnability

Table 10 shows samples of error cases in the rule learnability evaluation. The most frequent error case on WebNLG is (0, 0). In the sample of (0, 0) on WebNLG, there is no hallucination in the output but "Entity 1" is omitted, resulting in a factual error. The other two samples demonstrate cases with hallucinations. On a realistic dataset like WebNLG, the hallucination may be a correct inference based on known information but does not satisfy the requirement for fidelity in data-to-text generation. The poorer performing models on E2E, such as BART-large / GPT-2-large, have a large proportion of (0, 1) cases. In the sample of (0, 1) on E2E, the model outputs "5 out of 5" instead of "Value B of 5", which is a hallucination with the omission. On E2E, known information is irrelevant to the hidden numeric value, so the hallucination is unfounded. The sample of (0, 0) demonstrates an omission unrelated to the hallucination, which is the only case of errors for the better performing models on E2E such as Mistral-7b / Llama-2-13b.

C Search Algorithm for Order-Invariance Evaluation

For each data-text pair in WebNLG, we first locate where the entities in the data appear in the text. Although most of the entities appear unchanged in the text, variations still exist, such as token discontinuities or token distortions. However, discontinuous tokens are not too far away from each other, and the degree of token distortion is not too large. Therefore, we use the following algorithm for localization:

1. We first slice the entity into tokens, and for each token t, find the set of candidate-matching tokens in the text with the smallest edit distance from t and no more than min(2, length of t).

2. Keep all non-empty candidate sets, and then use depth-first search to select a position in each candidate set such that the final variance of all positions is minimized as the token position representation of the entity. If there are multiple minimum variance representations, then all are retained.

3. The entities are sorted by the number of position representations retained from smallest to largest, and then one representation is selected for each entity and the smallest position number in the representation is used to represent that entity. We require that the position number representing an entity cannot appear in the representations of other entities, and if it cannot be satisfied, then the position number of this entity is set to a large boundary value (the percentage of such cases is about 1.6%).

After determining the position number of each entity, we determine the order of triples. We consider the set of triples as an undirected graph, and each triple represents a connected edge between the subject and the object. For each triple, if the degrees of the subject and object are different, we take the position of the entity with the smaller degree to represent the position of the triple; otherwise, we take the larger of the two entity positions to represent the position of the triple. According to the position numbers of the triples, we get the order of the triples. The order relationship between triples with the same position number follows the input.

On E2E, since the training data guarantees copies of values, we use strict matching to localize the values.

D Fuzzy Matching for Rule-Learnability Evaluation

In the rule learnability evaluation, for the checking of copying phrases that hide information, we find that there are cases where the model does not perform strict copying, but semantically completes the copying, which should also be considered correct. Therefore, in addition to strictly correct copying, the following cases are also considered as correct copying:

• Case is ignored. For example, "entity 1" and "value b" are considered correct.

• Numeric symbols can be changed to ordinal numbers. For example, "1st Entity" is considered correct.

• If the symbol is copied, it is allowed not to copy "Entity" or "Value". For example, "Its customer rating is B out of 5." is considered correct.

The fuzzy matching covers most cases of semantically completed copies, which makes the checking of copying more accurate.
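A minimal sketch of how these acceptance rules could be checked is given below. The function names, the regular expression, and the treatment of bare symbols are our own assumptions; the authors' actual implementation may differ.

```python
import re

# Sketch of the fuzzy matching for placeholder copies (Appendix D).
# A placeholder such as "Entity 1" or "Value B" counts as correctly copied if it
# appears verbatim (case ignored), with the numeral turned into an ordinal
# ("1st Entity"), or with the bare symbol alone ("... is B out of 5").

def to_ordinal(n):
    suffix = {"1": "st", "2": "nd", "3": "rd"}.get(n[-1], "th")
    return n + suffix

def is_copied(placeholder, output):
    kind, symbol = placeholder.split()          # e.g. ("Entity", "1") or ("Value", "B")
    text = output.lower()
    if f"{kind} {symbol}".lower() in text:      # case-insensitive verbatim copy
        return True
    if symbol.isdigit() and f"{to_ordinal(symbol)} {kind}".lower() in text:
        return True                             # ordinal variant, e.g. "1st Entity"
    # Bare symbol: accept a standalone occurrence of the symbol itself.
    # (A real implementation would need extra guards for digit symbols,
    # which can easily match unrelated numbers in the output.)
    return re.search(rf"\b{re.escape(symbol.lower())}\b", text) is not None
```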
Sample Dataset WebNLG
Input: < Ayam penyet, region, Malaysia >, < Ayam penyet, country, Java >, < Ayam penyet, ingredient, Fried chicken >, < Ayam penyet, main ingredient, Squeezed or smashed fried chicken served with sambal >, < Ayam penyet, serving temperature, Hot >
Combination: Ayam penyet is a dish from Malaysia and Java. It includes fried chicken which is squeezed or smashed and served with sambal. It should be served hot.
Atom: Ayam penyet is a dish from the region of Malaysia and Java. It contains fried chicken and is served hot.
Performance: (73.90, 56.98) Issue: Omission of data units Input: < Spain, leader, Felipe VI of Spain >, < Spain, language, Spanish language >, < Spain, currency, Euro >, < Ajoblanco, country, Spain >, < Spain, demonym, Spaniards > Combination: Ajoblanco is a dish from Spain, where the currency is the euro and the language is Spanish. The country is led by Felipe VI and the people who live there are called Spaniards. Atom: Ajoblanco is a food found in Spain, where Felipe VI of Spain is the leader, Spanish is spoken, the Euro is the currency and Spaniards live. Performance: (62.93, 8.43) Issue: Stiff expression Input: name[Wildwood], eat type[restaurant], food[French], area[riverside], near[Raja Indian Cuisine] Combination: Wildwood is a French restaurant near Raja Indian Cuisine in the riverside area. Atom: Wildwood is a restaurant providing French food It is located in the riverside. It is near Raja Indian Cuisine. Performance: (59.89, 24.93) Issue: Stiff expression / Missing punctuation E2E Table 7: Samples from Llama-2-13b in the systematicity evaluation. Combination and Atom refer to the output of the same model trained on the corresponding dataset. Performance refers to the performance scores of the two outputs. E2E Sample Dataset WebNLG Input: < Elliot See, alma mater, University of Texas at Austin >, < University of Texas at Austin, affiliation, University of Texas System >, < Elliot See, birth place, Dallas >, < Elliot See, death place, St. Louis >, < Elliot See, status, Deceased > Visible: Elliot See was born in Dallas and died in St. Louis. He attended the University of Texas at Austin, which is affiliated to the University of Texas system. Invisible: Elliot See was born in Dallas, Texas, graduated from the University of Texas at Austin and died in St Louis. Performance: (82.85, 70.91) Issue: Omission of data units Input: name[Browns Cambridge], eat type[coffee shop], food[Chinese], customer rat- ing[average], area[city centre], area[riverside], family friendly[no], near[Crowne Plaza Hotel] Visible: Browns Cambridge is a Chinese coffee shop located in the city centre near the Crowne Plaza Hotel. It is not family friendly and has an average customer rating. It is located in the riverside area. Invisible: Browns Cambridge is a coffee shop providing Chinese food It is located in the city centre. Its customer rating is average. It is not family friendly. It is near Crowne Plaza Hotel. Performance: (62.84, 53.84) Issue: Omission of data units Input: name[Clowns], eat type[coffee shop], food[English], customer rating[5 out of 5], near[Clare Hall] Visible: Clowns is a coffee shop near Clare Hall. It serves English food and has a customer rating of 5 out of 5. Invisible: Clowns is a coffee shop providing English food in the higher price range. Its customer rating is 5 out of 5. It is near Clare Hall. Performance: (64.59, 23.47) Issue: Hallucination Table 8: Samples from Llama-2-13b in the productivity evaluation with the threshold N = 3. Visible and Invisible refer to the output of the same model trained on the corresponding dataset. Performance refers to the performance scores of the two outputs. 
Sample Dataset WebNLG Input: < Trance music, stylistic origin, Pop music >, < Andrew Rayel, genre, Trance music >, < Jwaydan Moyine, associated band/associated musical artist, John Digweed >, < Andrew Rayel, associated band/associated musical artist, Jwaydan Moyine > Input Order 1: < Trance music, stylistic origin, Pop music > → < Andrew Rayel, genre, Trance music > → < Jwaydan Moyine, associated band/associated musical artist, John Digweed > → < Andrew Rayel, associated band/associated musical artist, Jwaydan Moyine > Original 1: Andrew Rayel is a Trance musician who is associated with the musical artist Jwaydan Moyine. Moyine is associated with the musical artist John Digweed. Trance music originated from pop music. Match 1: Trance music originated from pop music and is performed by artists such as Andrew Rayel and John Digweed. Input Order 2: < Jwaydan Moyine, associated band/associated musical artist, John Digweed > → < Trance music, stylistic origin, Pop music > → < Andrew Rayel, associated band/associated musical artist, Jwaydan Moyine > → < Andrew Rayel, genre, Trance music > Original 2: Andrew Rayel’s musical genre is Trance which has its origins in Pop music. He is associated with the musical artist Jwaydan Moyine who is associated with the musical artist John Digweed. Match 2: Jwaydan Moyine is associated with the musical artist John Digweed and with Trance musician Andrew Rayel. Input: name[The Eagle], family friendly[no], price range[cheap], area[city centre], near[Burger King], customer rating[average], food[Chinese], eat type[coffee shop] Input Order 1: family friendly[no] → price range[cheap] → area[city centre] → near[Burger King] → customer rating[average] → food[Chinese] → eat type[coffee shop] Original 1: The Eagle is a cheap Chinese coffee shop in the city centre near Burger King. It has an average customer rating. Match 1: The Eagle is not family friendly, but is cheap. It is located in the city centre near Burger King. It has an average customer rating and serves Chinese food. It is a coffee shop. Order 2: food[Chinese] → customer rating[average] → near[Burger King] → price range[cheap] → area[city centre] → family friendly[no] → eat type[coffee shop] Original 2: The Eagle is a cheap Chinese coffee shop in the city centre near Burger King. It has an average customer rating and is not family friendly. Match 2: The Eagle serves Chinese food. It has an average customer rating, is near Burger King, is cheap, is in the city centre, is not family friendly, and is a coffee shop. E2E Table 9: Samples from Llama-2-13b in the order invariance evaluation. Original and Match refer to the output of the same model trained on the corresponding dataset. We mark the position of data units in the output in red. Case Dataset WebNLG (0, 0) (0, 1) (1, 1) E2E (0, 0) (0, 1) (1, 1) Sample Input: < Antares (rocket), manufacturer, Yuzhnoye Design Office >, < Entity 1, launch site, Vandenberg Air Force Base >, < Antares (rocket), comparable, Entity 1 >, <Antares (rocket), launch site, Mid-Atlantic Re- gional Spaceport >, < Entity 1, country origin, United States > Hidden: (Entity 1, Delta II) Output: The Antares rocket, manufactured by the Yuzhnoye Design Office, was launched from the Mid-Atlantic Regional Spaceport and the Vanden- berg Air Force Base in the United States. Input: < Entity 1, academic discipline, Mathematics >, < Entity 1, first publication year, 1878 >, < Entity 1, abbreviation, Am. J. 
Math >, < Entity 1, issn number, 1080-6377 > Hidden: (Entity 1, American Journal of Mathematics) Output: The American Journal of Mathematics (abbreviated to Am. J. Math.) has the ISSN number 1080-6377. Input: < Entity 2, leader, Mulatu Teshome >, < Entity 2, leader, Haile- mariam Desalegn >, < Addis Ababa, is part of, Addis Ababa Stadium >, < Entity 1, location, Addis Ababa >, < Addis Ababa, country, Entity 2 > Hidden: (Entity 1, Addis Ababa City Hall), (Entity 2, Ethiopia) Output: Addis Ababa Stadium is located in Addis Ababa, Ethiopia. Entity 1 is located in Addis Ababa. Mulatu Teshome and Hailemariam Desalegn are leaders of Entity 2. Input: name[The Twenty Two], price range[less than Value A], area[city centre], eat type[pub], food[Italian], near[Café Rouge], family friendly[no], customer rating[Value B out of 5] Hidden: (Value A, £20), (Value B, 1/3/5) Output: The Twenty Two is a pub located in the city centre near Café Rouge. It serves Italian food and has a customer rating of Value B out of 5. It is not family friendly. Input: name[Loch Fyne], eat type[coffee shop], customer rating[Value B out of 5], area[city centre], price range[more than Value A], food[English], near[The Sorrento] Hidden: (Value A, £30), (Value B, 1/3/5) Output: Loch Fyne is a coffee shop near The Sorrento in the city centre. It has a customer rating of 5 out of 5 and serves English food at a price range of more than Value A. Input: name[Alimentum], area[city centre], near[The Six Bells], eat type[pub], food[Chinese], price range[more than Value A] Hidden: (Value A, £30) Output: more than Value A, Alimentum is a pub that provides Chinese food in the more than £30 price range. It is located in the city centre. Table 10: Samples from Llama-2-13b / T5-11b / GPT-2-large that performs incorrect copies in the rule learnability evaluation. Hidden indicates the entities or numeric values that are hidden (this part does not appear in inputs). We mark copies of phrases that hide information in blue and occurrences of hidden entities or numerical values in red.
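Putting the pieces together, the following is a minimal sketch, our own reconstruction rather than the released evaluation code, of how a single output could be mapped to the (a, b) result of Section 3.4.3, given the placeholders in the input and the values they hide. The simple containment check stands in for the fuller fuzzy rules of Appendix D, and the example values are only illustrative.

```python
# Sketch of the per-sample (a, b) check from Section 3.4.3.
# a = 1 iff every information-hiding placeholder is copied into the output,
# b = 1 iff any hidden entity / numeric value (or, on E2E, another possible value
#         of the same attribute) leaks into the output.
def classify(output, placeholders, hidden_values, other_values=()):
    text = output.lower()
    a = int(all(p.lower() in text for p in placeholders))
    b = int(any(v.lower() in text for v in list(hidden_values) + list(other_values)))
    return (a, b)   # only (1, 0) counts as a correct application of the copy rule

# Illustration in the spirit of the (0, 1) E2E case in Table 10:
print(classify("It has a customer rating of 5 out of 5.", ["Value B"], ["5"], ["1", "3"]))
# -> (0, 1): the placeholder is omitted and a hidden value appears.
```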
synthetic_cpt
2
Self-Play_Fine-Tuning_of_Diffusion_Models_for_Text-to-Image_Generation.pdf
1 0 0 2 r a M 9 2 1 v 5 4 2 3 0 1 0 / h t - p e h : v i X r a Non-abelian self-duality from self-interaction A. Khoudeir Instituto de F´ısica, Universidad Nacional Aut´onoma de M´exico Apdo. Postal 20-364, 01000 M´exico D. F. M´exico and Centro de Astrof´ısica Te´orica, Departamento de F´ısica, Facultad de Ciencias, Universidad de los Andes, M´erida, 5101,Venezuela. Abstract The non-abelian self-dual action in three dimensions is derived using the self-interaction mechanism. Self-duality in three dimensions was proposed initially by Townsend et. al. [1] as an alternative to the topologically massive theory[2]. In principle, they seem different descriptions of a locally massive spin 1 physical excitation: the self-dual theory is described by a non-gauge invariant first order action while the topologically massive action is written down in a gauge invariant second order formulation. Both actions have an abelian Chern-Simons term (ǫmnpAm∂nAp). Despite these differences, Deser and Jackiw stablished that both theories are locally equivalent through the existence of a master action, even in the presence of external sources[3]. Moreover, both theories are dual equivalent[4] and the self-dual theory can be seen as a gauged fixed version of the topologically massive theory[5]. The self-dual theory for gravity and for higher spin in three dimensions was achieved in [6] and [7], respectively. If glogal properties are considered, the equivalence is modified, for instance, the partition functions of the self dual and topologically massive theories are not the same but they are related in the following way: ZSD = ZCSZT M [8] (where ZCS is the partition function of the abelian Chern-Simons action). The non-abelian generalization of the topologically massive theory was given in [2] while the non-abelian self-dual theory was formulated indepen- dently by McKeon [9] and Arias, et. al.[10], which has a structure of a Freedman-Townsend action[11]. In this letter, starting from an appropiate master action, we will derive the non-abelian self-dual action using the self-interaction mechanism[12]. 1 We will start by considering the following master action[13] I = Z d3x[−µǫmnpAm∂nap − 1 2 µ2amam − µǫmnpAm∂nvp + 1 2 µǫmnpvm∂nvp] (1) This action can be seen as the coupling between a Maxwell field (Am) and a vector field (vm) described by an abelian Chern-Simons action through a three dimensional BF topological term. Independent variations in the am, vm and Am fields, yield the following equations of motion am = −1 2 µǫmnpfnp(A), ǫmnp∂n[Ap − vp] = 0 (2) (3) and ǫmnp∂n[ap + vp] = 0, (4) where fmn(A) = ∂mAn − ∂nAm. The last two equations can be solved locally. We have and vm = Am + ∂mφ am = −vm + ∂mσ. The master action has abelian gauge invariance δAm = ∂mλ1 δvm = ∂mλ2 (5) (6) (7) Substituting the equations (2) and (5), into the master action lead to the action for the abelian topologically massive theory d3x[−1 4 (A) fmn(A) − 1 f mn 4 µǫmnpAmfnp(A)]. I = (8) Z On the other hand, we can eliminate the am and Am fields, through the use of equations (5) and (6) in order to obtain I = Z d3x[−1 2 µ2(vm − ∂mφ)(vm − ∂mφ) + 1 2 µǫmnpvm∂nvp], (9) which is invariant under the following abelian gauge transformations δvm = ∂mλ1, δφ = λ1. (10) 2 Fixing the gauge φ = 0, we obtain the non-gauge invariant self-dual action. Then, the proposed master action show the equivalence (at classical level) between the topologically and self-dual theories. 
The master action that we are considering is locally equivalent to the master action of Deser and Jackiw, as can be seen after eliminating only the vm field and is written down as I = Z d3x[−µǫmnpAm∂nap − 1 2 µ2amam − 1 2 µǫmnpAm∂nAp] (11) Introducing the Lie-algebra valued vectors Am = Ai mT i and the mT i, am = ai mnT i, where the generators T i of Lie-algebra valued field strength Fmn = F i the gauge group are normalized by T iT j = δij, the non-abelian generalization of the master action of Deser and Jackiw obtained by replacing ordinary derivative by covariant derivative, fmn = ∂mAn − ∂nAm → Fmn = ∂mAn − ∂nAm + [Am, An] and considering the non-abelian Chern-Simons term is I = µtr Z d3x[ǫmnpamFnp − 1 2 µamam − 1 2 ǫmnpAm(∂nAp + 2 3 AnAp)] (12) and only can reproduce the non-abelian version of the topologically mas- sive theory after eliminating the am field by using its equation of motion (am = ǫmnpFnp). On the other hand, the equation of motion obtained by independent variations in Am has no known solutions and in consecuence the non-abelian master action of Deser and Jackiw can not reproduce the non-abelian self-dual action. The non-abelian topologically massive theory can be deduced from the self-interaction mechanism[14]. Now, we will consider for simplicity a triplet of SU(2) free vector fields m (i = 1, 2, 3). The m coupled with a triplet of SU(2) free vector fields vi Ai action is Io = Z d3x[−µǫmnpAi m∂nai p − 1 2 µ2ai mami − µǫmnpAi m∂nvi p + 1 2 µǫmnpvi m∂nvi p]. (13) This action has two global simmetries. One is the global SU(2) symmetry δωX = gǫijkX jωk where X = (A, a, v) and the other global symmetry is given by δρAi m = gǫijk[aj m + vj m]ρk; 3 δρai m = 0 = δρvi m. (14) (15) Under these transformations, the action changes by a total derivative. The Noether currents associated with the global symmetries are jmi = −µgǫmnpǫijkAj n[ak p + vk p ] + 1 2 µgǫmnpǫijkvj nvk p and K mi = −1 2 µgǫmnpǫijk[aj n + vj n][ak p + vk p ]. (16) (17) These currents are conserved on-shell. Now, we will couple these Noether currents to the action I0 through the corresponding self-interaction term defined by jmi ≡ δISI δvi m , K mi ≡ δISI δAi m . We find d3x[−ǫmnpǫijkvi ǫmnpǫijkvi mvj nAk p Z ISI = gµ − 1 2 ǫmnpǫijkAi maj nak p + nak p − 1 2 mvj ǫmnpǫijkvi mAj 1 6 nvk p ]. (18) (19) The self-interaction mechanism stops here since no other derivative terms appear in ISI. Now, we add ISI to Io. The last term in eq. (13) combines with the last term in eq. (19) to give a Chern-Simons term for the vm field. The non-abelian action is d3x[−ǫmnpAi m(F i np(a) + F i np(v) + 2gǫijkanvk p ) − µai mami (20) I = µ 1 2 + ǫmnpvi Z m(∂nvi p + 1 3 ǫijkvj nvk p )], or I = 1 2 µ Z where and d3x[−ǫmnpAi mF i np(a+v) − µai mami + ǫmnpvi m(∂nvi p + 1 3 ǫijkvj nvk p )], (21) mn(a) = ∂mai F i n mn(v) = ∂mvi F i n − ∂nai m + gǫijkaj mak n − ∂nvi m + gǫijkvj mvk n 4 (22) (23) are the field strengths for the ai m fields. The self-interaction process combines the abelian gauge transformations with the global ones giving rise to the following non-abelian local gauge transformations m and vi δAi δvi m = gǫijkAj m = ∂mαi + gǫijkvj mαk; δai mαk m = gǫijkaj mαk and δAi δai m = ∂mκi + gǫijk[aj m = 0 = δvi m m + vj m]κk (24) (25) Defining ωm ≡ am + vm, the action is rewritten down as I = 1 2 µ g2 tr Z d3x[−ǫmnpAmFnp(ω) − µ(vm − ωm)(vm − ωm) (26) + ǫmnpvm[∂nvp + 2 3 vnvp]. This action was interpreted as the interaction between a Chern-Simons and a BF(ǫAF ) topological terms propagating a massive spin 1 physical mode[10]. 
Like as in the non-abelian topologically massive theory, invariance in the functional integral implies the quantization condition: 4π µ g2 = integer. We observe that Am play the role of a Lagrange multiplier. Its equation of motion is which tell us that ω is a pure gauge. Fmn(ω) = 0 ωm = U −1∂mU. Then, the action becomes I = 1 2 µ g2 tr Z d3x[−µ(vm −U −1∂mU)(vm −U −1∂mU) + ǫmnpvm(∂nvp + (27) (28) 2 3 vnvp)], (29) where the vm field appear coupled with a Stuckelberg field. Now, we have invariance under the following (finite) gauge transformations vm → g−1∂m∂mg + g−1vmg, U → Ug. (30) 5 This gauge invariance allow us to fix the gauge U = 1, in order to obtain the standard action for the non-abelian self-dual field vm I = 1 2 µ g2 tr Z d3[−µvmvm + ǫmnpvm(∂nvp + 2 3 vnvp)]. (31) To conclude, we have derived the non-abelian self-dual action in three di- mensions using the self-interaction mechanism. Recently, a dual version of a pure non-abelian Chern-Simons action was formulated [15]. It would be interesting to analyse the duality properties of the self-dual and topologically masive theories at non-abelian level. ACKNOWLEDGEMENTS The author would like to thank to Marti Ruiz Altaba for his hospitality at Instituto de F´ısica de la Universidad Nacional Aut´onoma de M´exico. Also, the author thanks Conicit-Venezuela for financial support. References [1] P. K. Townsend, K. Pilch and P. van Nieuwenhuizen, Phys. Lett. B136 (1984) 38. [2] S. Deser, R. Jackiw and S. Tempelton, Ann. Phys. 140 (1982) 372. [3] S. Deser and R. Jackiw, Phys. Lett. B139 (1984) 371. [4] J. Stephany, Phys.Lett. B390 (1997) 128. [5] R. Gianvittorio, A. Restuccia and J. Stephany, Mod. Phys. Lett. A6 (1991) 2121; P. J. Arias and J. Stephany, J. Math. Phys. 36 (1995) 1868. [6] C. Aragone and A. Khoudeir, Phys.Lett. B173 (1986) 141. [7] C. Aragone and A. Khoudeir, Revista Mexicana de F´ısica 39 (1993) 819. [8] P. J. Arias and A. Restuccia, Phys. Lett. B347 (1995) 241. [9] D. G. C. McKeon, Int. Journal of Mod. Phys. A7 (1992) 2005. 6 [10] P. J. Arias, L. Leal and A. Restuccia, Phys.Lett. B367 (1996) 170. [11] D. Freedman and P. K. Townsend, Nucl. Phys. B177 (1981) 282. [12] S. Deser, Gen. Rel. Grav. 1 (1970) 9; Class. Quantum Grav. 4 (1987) L99; S. Deser and M. Henneaux, Mod. Phys. Lett. A10 (1995) 991. [13] A. Khoudeir, Mod. Phys. Lett. A11 (1996) 2489. [14] C. Aragone and E. Araujo, Acta Cient´ıfica Venezolana 36 (1985) 207. [15] H. Garc´ıa-Compean, O. Obregon and C. Ram´ırez, hep-th/0103066. 7
synthetic_cpt
3
STraTA_Self-Training_with_Task_Augmentation_for_Better_Few-shot_Learning.pdf
arXiv:2310.07775v3 [math.AG] 23 Sep 2024

CONNECTED COMPONENTS OF STRATA OF RESIDUELESS MEROMORPHIC DIFFERENTIALS

MYEONGJAE LEE

Abstract. Generalized strata of meromorphic differentials are loci in the usual strata of differentials, where certain sets of residues sum up to zero. They appear naturally in the boundary of the multi-scale compactification of the usual strata. Enumerating the connected components of generalized strata is necessary to understand the boundary complex of the multi-scale compactification. In this paper we classify connected components of the strata of residueless meromorphic differentials, which are the strata with the maximum possible number of conditions imposed on the residues of the poles.

Contents
1. Introduction
2. Flat surfaces and their deformations
3. The multi-scale compactification and degenerations of flat surfaces
4. The principal boundary of residueless strata
5. Hyperelliptic components
6. Multiplicity one saddle connections
7. Genus one single-zero strata
8. Classification of hyperelliptic components
9. Genus one multiple-zero strata
10. Higher genus strata
References

1. Introduction

In the present paper, we investigate the flat surfaces that arise from meromorphic differentials on compact Riemann surfaces with vanishing residues at each pole. We prove that the connected components of strata of residueless meromorphic differentials are classified by the well-known topological invariants called hyperellipticity, spin parity, and rotation number. This is the first step of a more general project on the classification of connected components of generalized strata, which are loci of the usual strata where the residues of the poles satisfy certain linear conditions. These generalized strata appear in the description of the boundary of the moduli space of multi-scale differentials, which is a smooth compactification of the stratum PHg(µ) with normal crossing boundary. The final goal of the study in this direction is to compute the boundary complex of the moduli space of multi-scale differentials, which may lead to new results on the top-weight cohomology of PHg(µ), parallel to the recent results [5] on Mg and [4] on Ag.

For integers m, n > 0 and g ≥ 0, let µ := (a1, . . . , am, −b1, . . . , −bn) be a partition of 2g − 2, where −bn ≤ · · · ≤ −b1 < 0 ≤ a1 ≤ · · · ≤ am. Recall that the stratum of meromorphic differentials of type µ, denoted by Hg(µ), is defined to be the moduli space of meromorphic differentials on genus g compact Riemann surfaces with orders of zeroes and poles prescribed by µ. More precisely, an element of Hg(µ) is a pair ((X, z ⊔ p), ω) where (X, z ⊔ p) ∈ Mg,m+n is an (m + n)-pointed compact Riemann surface of genus g, and ω is a meromorphic differential on X, where z = {z1, . . . , zm}, p = {p1, . . . , pn} are the sets of zeroes and poles of ω with orders prescribed by µ. In other words, div(ω) = Σi aizi − Σj bjpj. An element ((X, z ⊔ p), ω) of the stratum is called a flat surface, since the differential ω defines a flat metric on X \ p with
The space Hg(µ) has a smooth complex orbifold structure given by the period map Per : H1(X \ p, z; Z) → C obtained by integrating the meromorphic form ω. The local coordinate neighborhood can be identified with a C-vector space H 1(X \ p, z; C) of dimension 2g + m + n − 2, and in particular dimC Hg(µ) = 2g + m + n − 2. The connected components of the usual strata Hg(µ) of differentials are well-known. In [13], Kontsevich and Zorich classified the connected components of the strata of holomorphic differentials. In this case, it turns out that there are at most three connected components of each stratum, distinguished by two topological invariants called spin parity and hyperellipticity. The main tools in [13], which will also be used in the present paper, are the flat-geometric surgeries called breaking up a zero and bubbling a handle. In [14], Lanneau classified the connected components of the strata of holomorphic quadratic differentials. In the quadratic case, there are at most two connected components for each stratum. In [2] and [3], Boissy classified the connected components of the strata of meromorphic differentials and the strata of meromorphic differentials with marked horizontal separatrices. As in the holomorphic case, there are at most three connected components for each stratum, unless g = 1. For g = 1, a new topological invariant called rotation number is required to classify the connected components, and strata with arbitrarily many components exist. In the present paper, we will expand the discussion on the connected components to certain types of loci of Hg(µ) that we introduce now. Let R be a partition of the set p of poles of the differential. A p∈P resp ω = 0 meromorphic differential (X, ω) ∈ Hg(µ) is said to satisfy the residue condition given by R if for each part P of R. In particular, if a singleton {p} is a part of R, then resp ω = 0, and the pole p is said to be residueless. A generalized stratum HR g (µ) is defined to be the subspace of Hg(µ) consisting If R is a partition p = P1 ⊔ · · · ⊔ Pr and of differentials satisfying the residue conditions given by R. µi = (−bj)pj ∈Pi , the generalized stratum HR g (µ) is also denoted by Hg(a1, . . . , am; µ1; . . . ; µr). The residue resp ω at the pole p is obtained by integrating ω along a small circle around p. For each part Pi, let αi be a closed curve only enclosing the poles in Pi, with trivial homology class [αi] in H1(X, z; Z). The residue ω = 0 for each i. So in the period coordinates of Hg(µ), condition given by R is the linear condition g (µ) is a linear subvariety given by the vector subspace H 1(X \ p, z; C)R of the generalized stratum HR H 1(X \ p, z; C) = Hom(H1(X \ p, z; Z), C), consisting of the linear functions that vanish on the classes [αi]. We denote H1(X \ p, z; Z)R := H1(X \ p, z; Z)/h[αi]i. Thus the period coordinates on HR g (µ) can be identified with Hom(H1(X \ p, z; Z)R, C). If |R| is the number of parts of R, the codimension of HR g (µ) in Hg(µ) is equal to |R| − 1, if it is nonempty. P αi R If R is the finest possible partition consisting of n singletons, then the residue condition given by R is that every pole of ω is residueless. We denote the corresponding generalized stratum HR g (µ) = In the present paper, we Hg(a1, . . . , am; −b1; . . . ; −bn) also by Rg(µ), and call it a residueless stratum. will simply call it a stratum when no confusion can arise. The dimension of the residueless stratum Rg(µ) is equal to 2g + m − 1. 
In this case, the quotient space H1(X \ P , z; Z)R can be identified with H1(X, z; Z) by the map H1(X \ p, z; Z) → H1(X, z; Z) induced by the inclusion X \ p ֒→ X. If there is only one pole (which is then necessarily of order > 1), then automatically Hg(µ) = Rg(µ). If m = 1, then the residueless genus zero stratum R0(µ) is empty. The finest possible partition R such that HR 0 (µ) is nonempty consists of n − 2 singletons and one part of cardinality two. This case also plays an important role in the present paper, so we introduce the notation for it. By relabeling the poles, assume that {pn−1, pn} is the only part of R with two elements. Then we denote HR 0 (µ) by R0(a1, −b1, . . . , −bn−2; −bn−1, −bn). A simple pole cannot be residueless. So if R contains a singleton consisting of a simple pole, then g (µ) = ∅. In particular, Rg(µ) = ∅ if b1 = 1. Throughout the paper, we assume that a stratum is strictly HR meromorphic (i.e. n > 0) and that the orders of all residueless poles are at least two. It is a bit harder to describe the deformations of flat surfaces in the generalized strata than in the usual strata, since we need to make sure that the deformation does not change the residue condition. This is why the arguments in [13] and [2] cannot be directly applied to the generalized strata. In order to deal with this difficulty, we will be benefited from the existence of GL+(2, R)-action and the multi-scale compactification of the generalized strata. The surgeries from [13] will be interpreted as the degeneration to the boundary of the multi-scale compactification, introduced in [1] and applied to the residueless cases in [16]. The degeneration 2 technique is motivated by the principal boundary introduced in [10], and its interpretation in terms of the compactification in [6]. We will prove that we can approach to the principal boundary by applying certain GL+(2, R)-action from a general point. Then we will be much benefited from the general construction of the multi-scale compactification, which allows us to navigate around the boundary. 1.1. Main results. As mentioned above, we will prove that hyperellipticity, spin parity and rotation number still suffice to distinguish all connected components of the generalized strata, with few exceptional cases. However, there could be many hyperelliptic connected components, depending on the singularity type µ. Before giving the statements classifying the connected components, we introduce the topological invariants of connected components. Definition 1.1. A flat surface (X, ω) is called hyperelliptic if X has an involution σ such that X/σ ∼= P1 and σ∗ω = −ω. A connected component C of Rg(µ) is said to be hyperelliptic if every flat surface contained in C is hyperelliptic. Otherwise C is said to be non-hyperelliptic. Definition 1.2. A stratum Rg(µ) is said to be of even type if all zeroes and poles have even orders. Definition 1.3. For a stratum Rg(µ) with m ≤ 2 zeroes, an involution P on z ⊔ p is called a ramification profile of Rg(µ) if the following holds • If m = 1, then P fixes the unique zero z. If m = 2, then a1 = a2, and P interchanges z1 and z2. • If P(pi) = pj, then bi = bj. • P fixes at most 2g + 2 marked points, only of even orders. Note that for a given stratum Rg(µ), a ramification profile P is determined by its restriction to p. Since the poles are labeled by {1, . . . , n}, we will identify P with an involution (i.e. an element of order two) in Symn when no confusion can arise. 
Our first result is the classification of the hyperelliptic connected components of Rg(µ). Theorem 1.4. For a stratum Rg(µ) of genus g > 0, there is a one-to-one correspondence between the hyperelliptic connected components and the ramification profiles of Rg(µ). Example 1.5. We observe that it is possible for Rg(µ) to have multiple ramification profiles. For example, consider the stratum Rg(bn + 2g − 2, −bn), where the exponent n means that the stratum has n poles of order b > 1, as usual. Assume that n > 2g + 1 and b is even. For any 1 ≤ r ≤ 2g + 2 such that n + 1 − r is even, µ has a ramification profile P that fixes exactly r − 1 poles. In particular, there exist 2(n−r)/2 such ramification profiles. n! For strata of genus g > 1, the non-hyperelliptic connected components of Rg(µ) are classified by the following Theorem 1.6. For g > 1 and a stratum Rg(µ) is of even type, then Rg(µ) has two non-hyperelliptic connected components distinguished by spin parity. Otherwise if g > 1 and Rg(µ) is not of even type, then Rg(µ) has a unique non-hyperelliptic connected component. For strata of genus g = 1, the non-hyperelliptic connected components of R1(µ) are classified by the following Theorem 1.7. For a stratum R1(µ) of genus one, denote d := gcd(a1, . . . , am, b1, . . . , bn) and let r be a positive integer divisor of d. Then R1(µ) has a unique non-hyperelliptic connected component Cr with rotation number r, except for the following cases: • The strata R1(r, −r) does not have a non-hyperelliptic component with rotation number r. • The strata R1(2n, −2n) and R1(n, n, −2n) have no non-hyperelliptic components. • The strata R1(2r, −2r), R1(2r, −r, −r), R1(r, r, −2r) and R1(r, r, −r, −r) have no non-hyperelliptic components with rotation number r. • The stratum R1(12, −34) has two non-hyperelliptic components with rotation number 3. Since R1(r, −r) = H1(r, −r), the first genus one exceptional cases follows from [2]. The second cases are treated in Proposition 7.5 and Proposition 9.10. The third cases are treated in Proposition 7.6, Proposi- tion 7.7 and Proposition 9.12. The last case, R1(12, −34), is treated in Proposition 7.8 and [15]. 3 In summary, except for these special cases listed above, as in the case of usual meromorphic strata Hg(µ), the connected components of the residueless stratum Rg(µ) can be classified by hyperellipticity (though now with multiple hyperelliptic components with different ramification profiles), and spin parity (if g > 1) or rotation number (if g = 1). 1.2. Outline of the paper. • In Section 2, we give basic definitions related to flat surfaces, and recall the GL+(2, R)-action on the strata and related concepts such as cores and polar domains. We will classify zero-dimensional (projectivized) generalized strata. • In Section 3, we recall the definition and the properties of the multi-scale compactification of the generalized strata. Then we describe how we can shrink a collection of parallel saddle connections using the contraction flow. • In Section 4, we recall the definition of the principal boundary of strata and how a flat surface degenerates to the principal boundary. We will explain that two surgeries introduced in [13] can be considered as smoothing processes from certain multi-scale differentials, and they can be reversed by degeneration into the principal boundary under certain conditions. • In Section 5, we describe hyperelliptic components of the stratum and their principal boundary. 
• In Section 6, we prove the existence of a flat surface with a multiplicity one saddle connection for all connected component but hyperelliptic components with 2g + 2 fixed marked points. • In Section 7, we classify the non-hyperelliptic components of genus one single-zero strata. • In Section 8, we classify the hyperelliptic components of strata, completing the proof of Theorem 1.4. • In Section 9, we classify the connected components of genus one multiple-zero strata, completing the proof of Theorem 1.7. • In Section 10, we classify the non-hyperelliptic components of strata of higher genus, completing the proof of Theorem 1.6. Acknowledgements. This research was partially supported by Kwanjeong Educational Foundation and also by Simons Foundation International, LTD. The author would like to thank his advisor, Samuel Grushevsky for introducing him to the theory of strata of differentials, and encouraging him to work on this project. The author is grateful to Benjamin Dozier, Corentin Boissy and Yiu Man Wong for many valuable discussions. The author would like to thank to Martin M¨oller and Guillaume Tahar for useful comments on an earlier version of this text. The author would also like to show gratitude to reviewers for carefully reading the proofs and for providing comments that greatly improved the manuscript. 2. Flat surfaces and their deformations In this section, we will recall and introduce basic properties of flat surfaces and their deformations that will be used in later sections. Recall that a saddle connection of a flat surface (X, ω) is a straight line with respect to the flat structure connecting two (possibly identical) zeroes of ω that does not contain any other zeroes of ω in its interior. The saddle connections play a main role in understanding the flat structure of (X, ω) and their deformations. 2.1. GL+(2, R)-action and the contraction flow. There is a natural GL+(2, R)-action on the meromor- phic stratum Hg(µ). For u ∈ X \ (z ⊔ p), let z = x + iy be a local flat coordinate at u given by ω. That is, z(u) = 0 and ω = dz in a neighborhood of u. For a matrix M = a b dã c Å ∈ GL+(2, R), we can associate another complex coordinate z′ = (ax + by) + i(cx + dy) at u. This can be done for any u ∈ X \(z ⊔ p), and these local patches give a new complex structure X ′. Also the new flat structure given by z′ is equivalent to the meromorphic differential ω′ on X ′, locally determined by ω′ = dz′. This new flat surface (X ′, ω′) is also contained in the stratum Hg(µ). The GL+(2, R)-action on Hg(µ) is defined by M ◦ (X, ω) := (X ′, ω′). A remarkable property of the GL+(2, R)-action is that the action preserves the straight lines. In other words, if γ is a straight line on (X, ω), then its image in M ◦ (X, ω) is also a straight line. 4 Given two distinct directions α, θ ∈ S1, the contraction flow Ct α,θ is given by contracting θ direction and preserving α direction of flat surfaces. More precisely, it is the action of the semigroup of matrices of the form Ct α,θ = sin θ cos θ cos α sin αã Å Å e−t 0 0 1ã Å sin θ cos θ cos α sin αã −1 ∈ GL+(2, R) for t ∈ R+. If (X, ω) satisfies some residue condition R, then M ◦ (X, ω) also satisfies R for any M ∈ GL+(2, R). In g (µ) is a GL+(2, R)-invariant subvariety of Hg(µ). In particular, the other words, the generalized stratum HR generalized stratum HR g (µ) also has the contraction flows. 2.2. Flat surfaces with degenerate core. 
For a flat surface (X, ω), a subset Y ⊂ X is said to be convex if any straight line joining two points in Y is also contained in Y . The convex hull of a subset Y is the smallest convex subset of X containing Y . Recall from [17] that the core C(X) of (X, ω) is defined to be the convex hull of z. In particular, C(X) contains all zeroes and saddle connections of (X, ω). In [17], Tahar established the following properties of the core, allowing us to decompose every flat surface into the core and the polar domains: Proposition 2.1. For any flat surface (X, ω) ∈ Hg(µ), ∂C(X) is a finite union of saddle connections. The complement X \ C(X) has exactly n connected components, each of which is homeomorphic to a disk containing one pole pi of ω. For a pole p of ω, the connected component of X \ C(X) containing p is called the polar domain of p. A flat surface (X, ω) is said to have degenerate core if the core C(X) has empty interior. Since C(X) is closed, it is equivalent to saying C(X) = ∂C(X). By [17, Lemma 5.15], we can construct a flat surface with degenerate core contained in any connected component of a stratum. A consequence of the above proposition is the following Proposition 2.2. For any flat surface (X, ω), there exist a finite collection of saddle connections γ1, . . . , γN of (X, ω) such that their homology classes generate H1(X \ p, z; Z). Proof. First, we will prove this for flat surfaces (X, ω) with degenerate core. Then C(X) is a union of finitely many saddle connections. By Proposition 2.1, X \ C(X) is a disjoint union of polar domains. Each polar domain is homeomorphic to a disk containing one pole. Any path in X \ p between two zeroes can be homotoped to a union of saddle connections in C(X). Therefore, the saddle connections of (X, ω) generate H1(X \ p, z; Z). In general, any flat surface (X, ω) can be deformed to a flat surface (X ′, ω′) with degenerate core by a contraction flow Ct α,θ with general α, θ as t → ∞ (See [17, Lemma 5.15]). Any saddle connection γ′ of (X ′, ω′) is a limit of some saddle connection γ of (X, ω). Since GL+(2, R)-action preserves the homology class of saddle connections, γ′ and γ have the same homology class. Therefore, we can find a set of saddle (cid:3) connections of (X, ω) generating H1(X \ p, z; Z). g (µ) has a natural C⋆ 2.3. Flat surfaces in zero-dimensional (projectivized) strata. A stratum HR action given by scaling of the differentials. The projectivized generalized stratum PHR g (µ) is defined by the quotient HR g (µ) be the quotient map. Flat surfaces in zero-dimensional projectivized strata will play a role of building blocks. Each connected component of such a stratum is just a point, thus the number of connected components can be computed by counting isomorphism classes of flat surfaces up to scaling. By looking at the dimension formula dim PRg(µ) = 2g + m + n − r − 2, we see that there are two types of zero-dimensional projectivized generalized strata. g (µ)/C⋆. Let π : HR g (µ) → PHR The first cases are the genus zero residueless strata with two zeroes. That is, (g, m) = (0, 2) and r = n. In general, such a stratum is of the form PR0(a1, a2, −b1, . . . , −bn). By [6, Proposition 2.3], each connected component of this stratum correspond to a configuration of type I of parallel saddle connections. For the future use, we can paraphrase the proposition there into the following Proposition 2.3. Let (P1, ω) ∈ R0(a1, a2, −b1, . . . , −bn). Then it has exactly n saddle connections joining z1 and z2, parallel to each other. 
Also, (P1, ω) is uniquely determined up to scaling by following information: • A cyclic order on p given by a permutation τ ∈ Symn. 5 p 2π(b − 1) + π 2πD p 2πC P1(b) P2(C, D) Figure 1. Polar domains of type I and II • A tuple of integer C = (C1, . . . , Cn) such that 1 ≤ Ci ≤ bi − 1 for each i, satisfying Remark that if we denote Di := bi − Ci, then i Di = a2 + 1. i Ci = a1 + 1. P The second cases of zero-dimensional projectivized generalized strata are of the form PR0(a, −b1, . . . , −bn−1; −bn−1, −bn). P That is, (g, m) = (0, 1) and |R| = n − 1. By [6, Proposition 3.8], each component of this stratum is given by a configuration of type II of parallel saddle connections. We can paraphrase the proposition there into the following Proposition 2.4. Let (P1, ω) ∈ R0(a, −b1, . . . , −bn−2; −bn−1, −bn). Then it has n − 1 saddle connections, parallel to each other. Also, (P1, ω) is determined uniquely up to scaling by following information: • A permutation τ ∈ Symn−2 on the set of n − 2 residueless poles. • A tuple of integer C = (C1, . . . , Cn−2) such that 1 ≤ Ci ≤ bi − 1 for each i. 2.4. Polar domains. In this subsection, we define two types of polar domains that will be used for some constructions in the later sections. Definition 2.5. Let (P1, ω) ∈ H0(b + 1, −1, −b), whose residue at the simple pole is equal to 1. By cutting along the unique saddle connection, the Riemann sphere P1 is separated into two regions. The region containing the pole of order b is called the polar domain of type I and denoted by P1(b). Note that the boundary of P1(b) is a straight line joining the unique singularity z to itself, forming an angle equal to 2π(b − 1) + π. The polar domain P1(b) is depicted in the left of Figure 1. Definition 2.6. Let (P1, ω) ∈ H0(C − 1, D − 1, −b), whose period over the unique saddle connection is equal to 1. The surface obtained by cutting along the unique saddle connection is called the polar domain of type II and denoted by P2(C, D). Note that P2(C, D) is bounded by two straight lines joining two singularities z1, z2. They form two angles at z1 and z2, equal to 2πC and 2πD, respectively. The polar domain P2(C, D) depicted in the right of Figure 1. 3. The multi-scale compactification and degenerations of flat surfaces The projectivized stratum PHg(µ) has a smooth compactification PHg(µ) with normal crossings boundary, called the (projectivized) moduli space of multi-scale differentials. This is constructed in [1] by Bainbridge, Chen, Gendron, Grushevsky and M¨oller. The construction can be generalized by a simple modification to the R projectivized generalized stratum PHR g (µ). For example, Mullane dealt with the strata of residueless multi-scale differentials PRg(µ) in [16]. g (µ), and we obtain a smooth compactification PH Since PH connected component C ⊂ HR R g (µ) is a smooth compactification with normal crossings boundary divisor, the closure C of a R g (µ) is also a connected component therein. Therefore, there is a g (µ) in H 6 one-to-one correspondence between the connected components of HR g (µ) and the connected components of H R g (µ). In this section, we will briefly recall the notion of the generalized stratum of multi-scale differentials H R g (µ) R and discuss how a flat surface in HR g (µ) degenerates to the boundary of H g (µ). The multi-scale differentials in the boundary consist of flat surfaces in the strata with smaller dimensions and a combinatorial datum called (the equivalence classes of) prong-matchings. 
Therefore the degeneration will provide us a way to use the induction on the dimension of the strata. 3.1. The moduli space of multi-scale differentials. We recall some notions related to the multi-scale differentials. Enhanced level graph. Let (X, z ⊔p) ∈ Mg,n+m be a stable z ⊔p-pointed curve. Recall that the enhanced level structure on the dual graph Γ of (X, z ⊔ p) is given by 1. A weak order (cid:22) on the set of vertices V (Γ). It is equivalent to a surjective level function ℓ : V (Γ) → {0, −1, . . . , −L}. An edge e ∈ E(Γ) is called vertical if it is joining vertices in the distinct levels. Otherwise e is called horizontal. We denote the set of vertical edges of Γ by Ev(Γ). 2. An assignment of a positive integer κe for each edge e ∈ Ev(Γ). For each v ∈ V (Γ), we denote gv the genus of the irreducible component Xv of X. Then it satisfies 2gv − 2 = ai − Xzi7→v Xpj 7→v bj + Xe∈E+(v) (κe − 1) − Xe∈E−(v) (κe + 1) where the first (second, respectively) sum is over all zeroes (poles) incident to v, and E+(v) (E−(v), respec- tively) are the set of edges incident to v that are going from v to a lower (upper) level vertex. Twisted differentials. A twisted differential on (X, z ⊔ p) ∈ Mg,n+m compatible with the enhanced level graph Γ is a collection of meromorphic differentials η = {ηv}, one for each irreducible component Xv of X, compatible with Γ. There are several conditions that ensure that (X, z ⊔ p, η) is compatible with Γ. One condition is that at the node corresponding to a vertical edge e, the differential η+ on the upper component has a zero of order κe − 1 and the differential η− on the lower component has a pole of order −κe − 1. The other condition is the Global Residue Condition, which forces a sum of residues of certain poles at the nodes is equal to zero. See [1] for the full detail. Prong-matching. At the node q corresponding to a vertical edge e ∈ Ev(Γ), the upper level component has a zero q+ of order κe −1. That is, the cone angle at the node is equal to 2πκe. The lower level component has a pole q− of order −κe − 1, which also has the cone angle 2πκe. At each of q+ and q−, there are exactly κe prongs, that is, choices of a horizontal direction. The prong-matching at q is an orientation-reversing one-to- one correspondence between the prongs at q+ and the prongs at q−. There are exactly κe prong-matchings, usually indexed by Z/κeZ. The set of prong-matchings PΓ := e∈Ev (Γ) Z/κeZ is called the prong rotation group. The level rotation action is a homomorphism ZL → e∈Ev (Γ) PΓ given by n 7→ (nℓ(e+) − nℓ(e−) mod κe)e∈Ev (Γ). Two prong-matchings are called equivalent if they are contained in the same coset of the image of the level rotation action. Multi-scale differentials. The moduli space H of type µ with residue condition R, consisting of the following data: R g (µ) parameterizes multi-scale differentials (X, z, p, η, P r) Q Q • A stable pointed curve (X, z ⊔ p) ∈ Mg,n+m with an enhanced level structure on the dual graph Γ of X. • A twisted differential (X, z, p, η) of type µ, compatible with the enhanced level graph Γ and satis- fying the residue condition R. • A prong-matching equivalence class P r. In this paper, we simply denote this multi-scale differential by (X, η) or X, when no confusion can arise. R g (µ) are the closures of subspaces DΓ of multi-scale differentials compatible The boundary divisors of H with Γ, where Γ ranges over all enhanced level graphs with two levels and no horizontal edges, or with one level and one horizontal edge. R 3.2. 
Plumbing construction. Let (X, η) ∈ ∂H g (µ). The neighborhood of (X, η) can be described by the plumbing construction. We can plumb any horizontal node, or plumb the level transition, that is, the 7 collection of all nodes between chosen levels. The moduli parameters and smoothing parameters form a nice system of complex-analytic coordinates, see [9] for detail. Here, we recall the explicit description for simple cases that we will use in this paper. Since we are only plumbing one horizontal node or the level transition between two levels, there is only one smoothing parameter, that we denote t ∈ C here. More precisely, we will treat plumbing of a horizontal node and plumbing of the level transition between two levels, while there are at most two poles with non-zero residues at the bottom level. These cases can be described as combination of the following local constructions. Horizontal node. A horizontal node h represents a pair of simple poles h1 and h2 with opposite residues ±r. The neighborhoods of each simple pole is a half-infinite flat cylinder, where the period of the closed geodesic enclosing the cylinder measures the residue of the pole. Since the residues at two poles are opposite, we can cut each cylinder along a closed geodesic and glue two boundaries. u du. Also we can find a unique coordinate v at h2 so that ω = − r We can make this more precise using the standard coordinates. We can find a unique coordinate u at h1 so that ω = r v dv. For some ǫ > 0, the standard coordinate neighborhoods contains each of the disks U := {|u| < ǫ} and V := {|v| < ǫ}. We can remove two small disks {|u| ≤ |t|} ⊂ U and {|v| ≤ |t|} ⊂ V , containing q+ and q− respectively. For remaining points u ∈ U and v ∈ V , we glue u and v whenever uv = s. Note that ω is invariant under the coordinate change v = s u . Node between two levels, no residue. Assume the simplest possible situation, where we only have two irreducible components (X0, ω0) and (X−1, ω−1), at the level 0 and -1, respectively. Suppose that there is only one node q between them. The prong rotation group is isomorphic to Z/κZ. Fix a prong-matching, and suppose that two prongs v− and v+, respectively at q− and q+, are matched by this prong-matching. If the top level component X0 only contains, if any, residueless marked poles, then by Global Residue Condition, the residues of the pole of ω−1 at q is equal to zero. Let κ be the number of prongs at q. We denote the scaling parameter by s, so we need to plumb at q between two flat surfaces (X0, ω0) and (X−1, sω−1) for small |s|. We can choose a standard coordinate v at q− so that ω−1 = d(v−κ) = −κv−κ−1. There are κ choice of such coordinates, because whenever v satisfies the above, ξi κv also satisfies the same equation. For the given prong v− (i.e. a horizontal direction) at q−, we can choose a unique standard coordinate v such that {v ∈ R+} is the ray toward the direction of the prong v−. Similarly, we can uniquely choose a standard coordinate u at q+, so that ω0 = d(uκ) = κuκ−1du and {u ∈ R+} is the ray toward the direction of the prong v+. κ For some ǫ > 0, the standard coordinate neighborhoods contains each of the disks U := {|u| < ǫ} and V := {|v| < ǫ}. We can remove two small disks {|u| ≤ |s|} ⊂ U and {|v| ≤ |s|} ⊂ V , containing q+ and q− respectively. For remaining points u ∈ U and v ∈ V , we glue u and v whenever uv = s. Consequently we can identify ω0 = d Node between two levels with a non-zero residue. Now we assume that X0 contains a non-residueless pole, say p. 
Then Global Residue Condition does not apply and the pole q− of ω−1 may have a nonzero residue r 6= 0. By scaling ω−1, we may assume that r = 1. In order to plumb at q, we need to choose a modification differential ξ on X0 that has only two (simple) poles at p and q+, so that the residue of ξ at q+ is equal to −1. = sκd(v−κ) = sκω−1. s v (cid:1) (cid:0) We can choose a standard coordinate v at q− so that ω−1 = −(v−κ − 1) dv v . There are κ choice of such coordinates, because whenever v satisfies the above, ξi κv also satisfies the same equation. For the given prong v− (i.e. a horizontal direction) at q−, we can choose a unique standard coordinate v such that {v ∈ R+} is the ray toward the direction of the prong v−. Similarly, we can uniquely choose a standard coordinate u at q+, so that ω0 + sκξ = (uκ − sκ) du u and {u ∈ R+} is the ray toward the direction of the prong v+. As in the previous case, we glue two annulus centered at q+ and q− by identifying u and v whenever uv = s. Consequently we can identify ω0 + sκξ = sκω−1. A pole of ω0 is also a pole of the differential ω0 + sκξ with the same order. It has one additional simple pole at q. A zero z of ω0 is no longer a zero of ω0 + sκξ. However, ω0 + sκξ has κ distinct zeroes in a small neighborhood of z. They are given by u = sξi κ for i = 1, . . . , κ where ξκ is a primitive κ-th root of unity. Thus they are all contained in the small disk {|u| ≤ |s|} which was removed. Pair of nodes between two levels, with Global Residue Condition. Suppose that there are two nodes s1 and s2 between X0 and X−1. If the top level component (X0, ω0) only contains, if any, residueless 8 marked poles, then by Global Residue Condition, the residues of the poles of ω−1 at s− to each other. By scaling ω−1, we assume that the residue at s− 1 is equal to 1. 1 and s− 2 are opposite Let κ1 and κ2 be the numbers of prongs at the nodes s1, s2, respectively. We denote κ = lcm(κ1, κ2). A prong-matching between two levels is represented by an element of a prong-rotation group Z/κ1Z × Z/κ2Z. 2 ), is matched to v+ (w+) at s+ Fix a prong-matching, and suppose that the prongs v− (w−, resp) at s− 1 (s+ 2 ), by this prong-matching. To plumb two nodes s1 and s2, we need to choose a modification differential ξ on X0 that has only two 1 (s− (simple) poles at s+ 1 and s+ 2 , so that the residue of ξ at s+ 1 is equal to 1. We can choose a standard coordinate v at s− v , uniquely determined by the prong v− (i.e. a horizontal direction) at s− 1 , so that ω0 − sκξ = (uκ1 − sκ) du u , uniquely determined by the prong v+. Similar to the previous situation, we glue two annulus centered at s+ 1 by identifying u and v whenever uv = s κ1 . Consequently we can identify ω0 − sκξ = sκω−1. This plumbs the node s1. Simultaneously, we can plumb the node s2 in the same way, using the prongs w− and w+. 1 . Similarly, we can choose a standard coordinate u at s+ 1 , so that ω−1 = −(v−κ1 − 1) dv 1 and s− κ 3.3. Parallel saddle connections. The main tool that we will use in the proof of the main theorems is the degeneration to the principal boundary of HR g (µ). In [13], the principal boundary is defined to be the set of surfaces obtained by shrinking a family of parallel saddle connections in a flat surface in the given stratum. In [6], the principal boundary is described as a certain subspace of the twisted differentials in the boundary of the incidence variety compactification of HR g (µ). The description can be refined for the case of R g (µ). 
There can be various ways to shrink a given saddle connection, but in this paper we will use the contraction flow that contracts the direction of the saddle connection.

Two saddle connections of (X, ω) are said to be parallel if the ratio of the periods of ω over them is a real number. In particular, if two saddle connections are homologous (i.e. they represent the same class in H1(X \ p, z; Z)), then the periods of ω over them are equal, so in particular they are parallel. The converse is not necessarily true, since the periods of ω over two non-homologous saddle connections can be R-proportional. For general flat surfaces in H^R_g(µ), it is still true that non-homologous saddle connections are not parallel, due to the fact that the periods of ω over a certain set of saddle connections provide complex local coordinates for the stratum H^R_g(µ), and we can always slightly perturb the period coordinates in the stratum. Since the period coordinates give a map Per : H1(X \ p, z; Z)R → C, we have the following

Proposition 3.1. For any stratum H^R_g(µ), there exists an open dense subset W ⊂ H^R_g(µ) such that for any flat surface (X, ω) ∈ W, two saddle connections of (X, ω) are parallel if and only if they are homologous in H1(X \ p, z; Z)R.

Proof. For the usual strata Hg(µ), this is proved in [10, Proposition 3.1]. The same argument applies to the generalized strata. □

The following definition of the multiplicity of a saddle connection is analogous to [14], but adjusted to the setup of the generalized strata.

Definition 3.2. A saddle connection γ of X ∈ H^R_g(µ) is said to have multiplicity k if there exist exactly k distinct saddle connections of X homologous to γ in H1(X \ p, z; Z)R.

If two saddle connections γ1 and γ2 are homologous in H1(X \ p, z; Z)R, then they have the same endpoints. Moreover, the closed curve γ1 ∪ γ2 has trivial homology class. In other words, X \ (γ1 ∪ γ2) has two connected components, because any non-separating closed curve represents a nonzero homology class. In the case of residueless strata Rg(µ), recall that H1(X \ p, z; Z)R can be identified with H1(X, z; Z). Therefore, Proposition 3.1 implies that two saddle connections of a general residueless flat surface are parallel if and only if they are homologous in H1(X, z; Z).

3.4. Shrinking a saddle connection. Let C be a connected component of H^R_g(µ), and let (X, ω) ∈ C be a general flat surface in the sense of Proposition 3.1. For a given saddle connection γ on (X, ω), we now describe how to shrink it.

Proposition 3.3. For any ǫ > 0, the flat surface (X, ω) ∈ C can be continuously deformed within its GL+(2, R)-orbit so that |γ| < ǫ|γ′| for any other saddle connection γ′ not homologous to γ.

Proof. Since the length of the saddle connection γ is equal to |\int_γ ω|, we have \int_γ ω ≠ 0. By scaling ω if necessary, we may assume that the period over γ is equal to 1. Fix L > 0 such that 1/L < ǫ. There are only finitely many saddle connections α1, . . . , αM in (X, ω) of length < L that are not homologous to γ.

Let Ct be the contraction flow that contracts the real direction and preserves the imaginary direction. By applying Ct to (X, ω), we obtain a family of flat surfaces (X^t, ω^t) := Ct ◦ (X, ω) ⊂ C containing a saddle connection γ^t that comes from γ. The period of ω^t over γ^t is equal to e^{−t}. Any saddle connection β of (X, ω) of length > L deforms to a saddle connection β^t of length > Le^{−t}. So for such β, we have |γ^t|/|β^t| < 1/L < ǫ.

For the saddle connections αj, we denote \int_{α_j} ω =: a_j + i b_j for some real a_j and b_j ≠ 0 for each j = 1, . . . , M. We denote δ := \frac{1}{2}\min_{1≤j≤M} |b_j|. By applying Ct, we obtain \int_{α^t_j} ω^t = a_j e^{−t} + i b_j. For large enough t, the real part of the period becomes negligible and the length of α^t_j is then larger than \frac{1}{2}|b_j| ≥ δ. Also, we may assume that e^{−t} < δǫ. Therefore, the ratio between |γ^t| and |α^t_j| is smaller than 2e^{−t}/|b_j| ≤ 2δǫ/(2δ) = ǫ. □

4. The principal boundary of residueless strata

In [10], Eskin-Masur-Zorich described the principal boundary of strata of abelian differentials. It is a subspace of Hg(µ) that parameterizes flat surfaces containing a collection of short parallel saddle connections. In [6], D. Chen and Q. Chen described the principal boundary in terms of twisted (holomorphic) differentials, as a certain subspace in the boundary of the incidence variety compactification. In this section, we will introduce the principal boundary and describe it as boundary strata of the multi-scale compactification, in particular in terms of the corresponding enhanced level graphs.

The principal boundary of type I is obtained by shrinking a collection of parallel saddle connections joining two distinct zeroes. Thus it is only defined for the multiple-zero strata. The principal boundary of type II is obtained by shrinking a collection of saddle connections joining a zero to itself. It is defined for any stratum, but we will only describe it for single-zero strata, as we only need these cases.

At the end of this section, we prove that every connected component C of Rg(µ) with g > 0 has some principal boundary in its closure in the multi-scale compactification of Rg(µ). More precisely, C contains some principal boundary of type I if Rg(µ) is a multiple-zero stratum, and some principal boundary of type II if Rg(µ) is a single-zero stratum.

4.1. Configurations of type I. Let X ∈ Rg(µ) be a general flat surface in the sense of Proposition 3.1, with at least two zeroes. Suppose X has a multiplicity k saddle connection joining z1 and z2. That is, there are precisely k saddle connections γ1, . . . , γk in the given direction, labeled in clockwise order at z1. We set γk+1 := γ1 for convenience. We observe that the angles between γi and γi+1 are 2πCi at z1 and 2πDi at z2, for some positive integers Ci, Di. Denote C := (C1, . . . , Ck) and D := (D1, . . . , Dk). The angles satisfy \sum_i C_i = a_1 + 1 and \sum_i D_i = a_2 + 1.

Let pi (zi, respectively) denote the set of poles (zeroes, respectively) contained in the region bounded by γi and γi+1. Also denote z−1 := {z1, z2}. Then z−1 ⊔ z1 ⊔ · · · ⊔ zk is a partition of z and p1 ⊔ · · · ⊔ pk is a partition of p. The data F := (a1, a2, C, D, {zi}, {pi}) given by the collection of saddle connections γi is said to be a configuration of type I. We say that X has a configuration F if there exists a collection of saddle connections of X that gives F.

[Figure 2. The graph Γ(F) of a configuration F of type I.]

4.2. Graphs of configurations of type I. Given a configuration F of type I, we introduce the configuration graph Γ(F) to describe the enhanced level graph of the multi-scale differentials in the principal boundary corresponding to F. This graph was already introduced in [8], as a backbone graph. For each i = 1, . . . , k, consider the non-negative integer

g_i := \frac{1}{2}\left(\sum_{z_j \in z_i} a_j - \sum_{p_j \in p_i} b_j + C_i + D_i\right).
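As a quick illustration of this genus count (with hypothetical values, not taken from the text): if the region bounded by γi and γi+1 contains one zero of order 4, one pole of order 2, and the angle data are C_i = D_i = 1, then

g_i = \frac{1}{2}\,(4 - 2 + 1 + 1) = 2,

so the corresponding vertex of Γ(F) would be assigned genus 2.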
If zi = ∅, gi = 0 and pi = {qi} for some pole qi, then the region bounded by γi and γi+1 is isomorphic to the polar domain P2(Ci, Di). In this case, the order of the pole qi is equal to Ci + Di. Let J ⊂ {1, . . . , k} be the set of indices for which the region bounded by γi and γi+1 is not isomorphic to such a polar domain, and denote p−1 = {qi | i ∉ J}. We define the configuration graph Γ(F) as follows:
• The set of vertices is V(F) := {v−1} ∪ {vi | i ∈ J}.
• The set of edges is E(F) := {ei | i ∈ J}, where each ei is joining v−1 and vi.
• The vertex vi has half-edges marked by zi and pi, for each i ∈ {−1} ∪ J.
• We assign to each vertex vi, i ∈ J, an integer gi ≥ 0.
• We assign to each edge ei, i ∈ J, an integer κi := Ci + Di − 1 > 0.
• The level function ℓ : V(F) → {0, −1} is given by ℓ(v−1) = −1 and ℓ(vi) = 0 for each i ∈ J.

The enhanced level graph Γ(F) is shown in Figure 2. It has two levels and no horizontal edges. The principal boundary D(Γ(F)) of Rg(µ) is the subspace of multi-scale differentials compatible with the level graph Γ(F). For a multi-scale differential (X, η) ∈ D(Γ(F)), we denote the irreducible component corresponding to the vertex vi by (Xi, ηi) for each i ∈ J ∪ {−1}.

4.3. Configurations of type II. Assume that Rg(µ) is a single-zero stratum with unique zero z of order a. Let X ∈ Rg(µ) be a general flat surface in the sense of Proposition 3.1. Suppose that X has a multiplicity k saddle connection. That is, there are precisely k saddle connections γ1, . . . , γk in the given direction, labeled in the clockwise order at z. Remark that the homology class [γi] is nontrivial in H1(X; Z) and X \ (∪iγi) has k connected regions. There is precisely one region with a pair of holes boundary, which is the union of γ1 and γk.

We observe that the angles between γi and γi+1 are 2πCi and 2πDi, for some positive integers Ci, Di. Denote C := (C1, . . . , Ck), D := (D1, . . . , Dk). The angle bounded by γ1 is 2πQ1 + π and the angle bounded by γk is 2πQ2 + π for some non-negative integers Q1, Q2. Also these angles satisfy \sum_i (C_i + D_i) + Q_1 + Q_2 = a.

Let pi denote the set of poles contained in the region bounded by γi and γi+1. Also, let p0 denote the set of poles contained in the region bounded by γ1 and γk. Then p0 ⊔ · · · ⊔ pk−1 is a partition of p. The data F := (a, C, D, Q1, Q2, {pi}) given by the collection of saddle connections γi is said to be a configuration of type II. We say that X has a configuration F if there exists a collection of saddle connections of X that forms F.

[Figure 3. The graph Γ(F) of a configuration F of type II when Q1 = Q2 = 0.]

4.4. Graphs of configurations of type II. Given a configuration F of type II, we introduce the configuration graph Γ(F) to describe the enhanced level graph of the multi-scale differentials in the principal boundary corresponding to F. For each i = 1, . . . , k − 1, consider the non-negative integers

g_i := \frac{1}{2}\left(C_i + D_i - \sum_{p_j \in p_i} b_j\right) and g_0 := \frac{1}{2}\left(Q_1 + Q_2 - \sum_{p_j \in p_0} b_j\right).

If gi = 0 and pi = {qi} for some pole qi, then the region bounded by γi and γi+1 is isomorphic to the polar domain P2(Ci, Di). Let J ⊂ {1, . . . , k − 1} be the set of indices for which the region bounded by γi and γi+1 is not isomorphic to such a polar domain, and denote p−1 = {qi | i ∉ J}. Suppose that X has a configuration F. All possible local patterns at z are given in [6, Sec. 3.1].
In other words, the regions of X \ (∪iγi), listed in a clockwise order at z, are one of the following three possibilities: (i) A cylinder, followed by surfaces of genus gi for each i ∈ J with a figure eight boundary, followed by a cylinder. Two cylinders at the beginning and the end are the same. This is the case when Q1 = Q2 = 0. (ii) A cylinder, followed by surfaces of genus gi for each i ∈ J with a figure eight boundary, followed by another surface with a pair of holes boundary. This case cannot happen because the cylinder at the beginning and the surface at the end should be the same. Therefore, it is impossible to have the cases Q1 = 0 and Q2 > 0, or Q1 > 0 and Q2 = 0. (iii) A surface of genus g0 with a pair of holes boundary, followed by surfaces of genus gi for each i ∈ J with a figure eight boundary, followed by the surface of genus g0. This is the case when Q1, Q2 > 0. If Q1 = Q2 = 0, then the local pattern at z follows (i) above, and in this case we define the configuration graph Γ(F ) as follows: • The set of vertices is V (F ) := {v−1} ∪ {vi|i ∈ J}. • The set of edges is E(F ) := {f } ∪ {ei|i ∈ J} where each ei is joining v−1 and vi, and f is joining v−1 to itself. • Each vertex vi has half-edges marked by pi. • We assign to each vi, i ∈ J, an integer gi ≥ 0. • We assign to each ei, i ∈ J, an integer κi := Ci + Di − 1 > 0. • The level function ℓ : V (F ) → {0, −1} is defined by ℓ(v−1) = −1 and ℓ(vi) = 0 for each i ∈ J. The enhanced level graph Γ(F ) is depicted in Figure 3. Remark 4.1. Note that it is possible that J = ∅ if g = 1. In this case there exists no top level component, and Γ(F ) is in fact a single-level graph. We will still call the level containing v−1 the bottom level for convenience. 12 p0 ... level 0 g0 v0 pi ... gi gi i ∈ J vi f1 f2 ei ... qj j /∈ J ... v−1 z level − 1 Figure 4. The graph Γ(F ) of a configuration F of type II when Q1, Q2 > 0 If Q1, Q2 > 0, then X can be constructed by the pattern (iii) above, and we define the configuration graph Γ(F ) as follows: • The set of vertices is V (F ) := {v−1, v0} ∪ {vi|i ∈ J}. • The set of edges is E(F ) := {f1, f2} ∪ {ei|i ∈ J} where each ei is joining v−1 and vi and f1 and f2 is joining v−1 and v0. • Each vertex vi has half-edges marked by pi. • We assign to each vi, i ∈ {0} ∪ J, an integer gi ≥ 0. • We assign to each ei, i ∈ {0} ∪ J, an integer κi := Ci + Di − 1 > 0. Also we assign Q1 and Q2 to f1 and f2, respectively. • The level function ℓ : V (F ) → {0, −1} is defined by ℓ(v−1) = −1 and ℓ(vi) = 0 for any i ∈ {0} ∪ J. The enhanced level graph Γ(F ) is described in Figure 4; it has two levels and no horizontal edge. The principal boundary D(Γ(F )) of Rg(µ) is the subspace of multi-scale differentials compatible with the level graph Γ(F ). For a multi-scale differential (X, η) ∈ D(Γ(F )), we denote the irreducible component corresponding to the vertex vi by (Xi, ηi) for each i ∈ J ∪ {−1, 0}. 4.5. Degeneration to the principal boundary. Suppose that X ∈ Rg(µ) has a configuration F formed by k parallel saddle connections γ1, . . . , γk. By Proposition 3.3, for any ǫ > 0, X can be continuously deformed to another flat surface, which by abuse of notation we will henceforth denote X, such that the saddle connections γi have length < ǫ and any other saddle connections of X have length > 3ǫ. By [6, Thm. 2.1 and Thm. 3.4], we have the following Proposition 4.2. Suppose that X ∈ Rg(µ) has a configuration F . Then there exists a continuous degen- eration to the principal boundary D(F ) ⊂ ∂Rg(µ). 
Conversely, any multi-scale differential in D(F ) can be smoothed to a flat surface in Rg(µ) that has a configuration F . In particular, if R(µ) is a multiple-zero stratum, then any connected component C of R(µ) has a principal boundary of type I. In case of a single-zero stratum, any connected component C has a principal boundary of type II. 4.6. Degeneration to a multi-scale differential with two irreducible components. Using the de- generation to the principal boundary, we can prove the following statement that will be useful when we apply the induction on the dimension of Rg(µ). Proposition 4.3. Suppose that dimC Rg(µ) > 2. Then for any connected component C of Rg(µ), there exists a multi-scale differential Y ∈ ∂C satisfying the following: • Y has exactly two irreducible components Y0 and Y−1 which are at different levels. Let p0 and p−1 denote the set of marked poles contained in Y0 and Y−1, respectively. • The components Y0 and Y−1 intersect at only one node q. • Moreover, if the number of zeroes m > 1, for any given pair of zeroes, say z1, z2 by relabeling zeroes, the bottom level component Y−1 contains z1, z2. 13 level 0 level − 1 p0 ... g0 ... z0 e p−1 ... g−1 ... z2 z1 z−1 (Y0, ζ0) (Y−1, ζ−1) Figure 5. Breaking a flat surface into two irreducible components In other words, if dimC Rg(µ) > 2, we can break a flat surface in C into two flat surfaces contained in the residueless strata of smaller dimensions. The level graph of Y is illustrated in Figure 5. Proof. We will navigate the boundary of C, until we obtain Y ∈ ∂C satisfying the desired condition. Use the induction on dimC Rg(µ) = 2g + m − 1. First, we assume m > 1. We need to show that there exists X ∈ C that contains a saddle connection joining z1 and z2. If m = 2, then this is trivial by Proposition 2.2. So assume that m > 2. By Proposition 2.2, any flat surface X ∈ C has a saddle connection γ joining z1 and zj for some j 6= 1. If j = 2, then it is done. So assume j > 2. Suppose that γ is contained in a collection of parallel saddle connections that forms a configuration F of type I. By Proposition 4.2, we can shrink γ and other saddle connections in the collection, obtaining X ∈ D(F ) ⊂ ∂C. Then X has unique bottom level component X−1 containing only two zeroes z1, zj, and |J| top level components Xi for i ∈ J. There exists some top level component Xi that contains z2. Since Xi contains less than m zeroes, by induction hypothesis, there exists a saddle connection γi in Xi joining z2 and the node qi between Xi and X−1. At q+ i , there exists a outgoing prong u that corresponds to γi. At q− i , there are at least one incoming prong v that comes from z2. See Figure 13. We can choose a prong-matching of X that sends v to u. By plumbing the level transition with this prong-matching, we obtain a flat surface in C and γi deforms to a saddle connection joining z1 and z2. So we can assume that γ is joining z1 and z2. By shrinking γ, we obtain X such that X−1 contains only two zeroes z1 and z2. If |J| = 1, then we can take Y = X. If |J| > 1, then we further degenerate X to a three-level multi-scale differential X ′ ∈ ∂C as follows. We keep one top level component, say X1. We send X−1 to the lowest level -2 and all other components are sent to the level -1. Plumb the level transition between the levels -1 and -2. As a result, we obtain Y as desired. Now we assume m = 1. Then we have g > 1 by assumption. We choose a flat surface X ∈ C and a saddle connection γ of X. 
Then γ is contained in a collection of parallel saddle connections that forms a configuration F of type II. By shrinking γ, we can obtain X ∈ D(F ) ⊂ ∂C. If Q1 = Q2 = 0, then the bottom level component has a pair of simple poles. If J = ∅, then we have g = 1 and dimC Rg(µ) = 2, which contradicts the assumption. If J 6= ∅, then there exists a top level component, say X1. As in the previous paragraph, we can degenerate X further into three-level differential so that X1 is the only component at level 0. By plumbing the level transition between the levels -1 and -2, we obtain Y as desired. If Q1 > 0 or Q2 > 0, then there exists a top level component X0 that intersects X−1 at two nodes. If J 6= ∅, we can obtain Y by the same argument as in the previous paragraph. If J = ∅ and X0 is the unique top level component, then the genus of X0 is equal to g − 1 > 0. Therefore by induction hypothesis, we can degenerate X0 into a multi-scale differential Z with two irreducible components intersecting at one node. Together with X−1 at level -2, they form a three-level multi-scale differential. By plumbing the level (cid:3) transition between the levels -1 and -2, we obtain Y as desired. 14 ... g Identify zero & pole ... a + 1 Plumb level transition a1 a2 ... g ... a1 a2 ... g ... a + −a − 2 a1 a2 Figure 6. Breaking up a zero 4.7. Breaking up a zero and merging zeroes. Now we can explain how the two main surgeries in [13], breaking up a zero and bubbling a handle, are related to the degeneration to the principal boundary of Rg(µ). Remark 4.4. Let γ be a multiplicity one saddle connection in X ∈ C, which forms a configuration F . In this case, the bottom level component X−1 is contained in the stratum R0(a1, a2, −a1 − a2 − 2) (if γ has two distinct endpoints) or R0(a; −Q1 − 1, −Q2 − 1) (if γ is a simple closed curve). Both of these strata are connected, which makes it easy to keep track of the connected component C. Therefore, multiplicity one saddle connections will play an important role in classification of the connected components. The first surgery used in [13] is called breaking up a zero. For a flat surface X ∈ Rg(µ) with a zero z of order a > 0, breaking up the zero z constructs a flat surface X ′ ∈ Rg(µ′) where µ′ is obtained by replacing a with two integers a1, a2 ≥ 0 such that a1 + a2 = a. This surgery from the point of view of the multi-scale compactification is described in [7]. Let (P1, η−1) be the unique (up to scaling) element of H0(a1, a2, −a − 2). We identify the unique zero z of X with the pole p ∈ P1 of ω−1 to obtain a multi-scale differential in ∂Rg(µ′). The enhanced level graph of the differential, illustrated in Figure 6, consists of two vertices at distinct levels, and one edge connecting them (Note that there is a unique prong-matching equivalence class). By plumbing the level transition, we obtain a flat surface in X ′ ∈ Rg(µ′). This surgery is called breaking up a zero z. The connected component of X ′ depends only on a1 and the connected component C of X. Conversely, if C contains a multi-scale differential X with level graph given in the middle of Figure 6, then a flat surface in C can be obtained by breaking up a zero of the top level component X1. Therefore a degeneration to this boundary divisor is the inverse operation to breaking up a zero. We will call this operation merging zeroes z1 and z2. We have the following result about merging zeroes. Proposition 4.5. Suppose that X ∈ Rg(µ) is a general flat surface and it has two zeroes z1, z2 of orders a1, a2, respectively. 
If there exists a multiplicity one saddle connection γ joining z1 and z2, then we can merge two zeroes z1 and z2 to obtain a flat surface with a zero of order a1 + a2. Proof. By applying Proposition 4.2 in the case k = 1, we can easily see that the enhanced level graph Γ(F ) is equal to the enhanced level graph that appears in the middle of Figure 6. Thus X can be obtained from (cid:3) a flat surface with a zero of order a1 + a2 by breaking up this zero. 4.8. Bubbling and unbubbling a handle. The second surgery used in [13] is called bubbling a handle. For a flat surface X ∈ Rg(µ) with a zero z of order a, bubbling a handle at z produces a flat surface X ′ ∈ Rg+1(µ′) where µ′ is obtained by replacing a with a + 2. Bubbling a handle can also be considered as plumbing a node from a certain multi-scale differential. Take a flat surface (P1, η−1) ∈ R0(a + 2, −a − 2; −1, −1). Note that by Proposition 2.4, up to scaling, such a flat surface is uniquely determined by the angle 2πs, 1 ≤ s ≤ a + 1, between the two half-infinite cylinders. Denote p ∈ P1 the residueless pole of order a + 2. We identify the zero z of X and p ∈ P1. We also identify two simple poles in P1 to obtain a multi-scale differential in ∂Rg(µ′). The enhanced level graph is illustrated in Figure 7 (Note that there is a unique prong-matching equivalence class). By plumbing the horizontal edge and the level transition, we obtain a flat surface X ′ ∈ Rg+1(µ′). This surgery is called bubbling a handle 15 ... g ... a + −a − 2 a + 2 ... g Identify zero & pole ... a + 1 Plumb level transition & horizontal node a + 2 ... g g+1 ... a + 2 Figure 7. Bubbling a handle at z. The connected component of X ′ depends only on the connected component C of X and the choice of 1 ≤ s ≤ a + 1. We denote this connected component by C ⊕z s, as in [14] and [2]. Note that the flat surface (P1, η−1) ∈ H1(a + 2, −a − 2) we used for bubbling a handle is contained in the boundary of the connected component of H1(a + 2, −a − 2) with rotation number gcd(a + 2, s). The definition of rotation number will be recalled in Section 7.1. Therefore, we have C′ ⊕z s1 = C′ ⊕z s2 when gcd(a + 2, s1) = gcd(a + 2, s2). Conversely, if C contains a multi-scale differential with level graph in the middle of Figure 7, then a flat surface in C can be obtained by bubbling a handle at a zero z of the top level component. If D is the connected component containing the top level component, then C = D ⊕z s for some s. Therefore a degeneration to this boundary divisor is the inverse operation of bubbling a handle. We will call this operation unbubbling a handle. Unlike merging zeroes, the operation of unbubbling a handle requires vanishing of periods of two independent classes in H1(X, Z). So we will achieve this by shrinking two non-parallel multiplicity one saddle connections in Section 10. 5. Hyperelliptic components Recall that a connected component C of Rg(µ) is called hyperelliptic if every flat surface (X, ω) in C is hyperelliptic. That is, the curve X has an involution σ such that X/σ ∼= P1 and σ∗ω = −ω. In this section, we will enumerate the hyperelliptic components of Rg(µ). The involution σ above permutes the poles, giving an involution P ∈ Symn. It is immediate that P is a ramification profile of Rg(µ) if m ≤ 2, as defined in Definition 1.3. In this case, the connected component C is said to have the ramification profile P. Obviously, the ramification profile is a topological invariant of hyperelliptic components. 
That is, two hyperelliptic components are distinct if they have different ramification profiles. The number 0 ≤ r ≤ 2g + 2 of marked points (i.e. poles and zeroes) fixed by P is also a topological invariant. The hyperelliptic connected component C is said to have r fixed marked points if the number of poles plus the number of zeroes fixed by the ramification profile P of C is equal to r.

5.1. Existence of hyperelliptic components. We now prove the existence part of Theorem 1.4.

Proposition 5.1. A stratum Rg(µ) with g > 0 has a hyperelliptic component if and only if Rg(µ) has m ≤ 2 zeroes and it has a ramification profile. Moreover, for each ramification profile P of Rg(µ), there exists at least one hyperelliptic component with ramification profile P.

Proof. We first deal with the case of single-zero strata. Then the profile P must fix the unique zero z. By relabeling the poles, we may assume that P fixes 1, . . . , r − 1 and P(r + 2i − 2) = r + 2i − 1 for each i = 1, . . . , f, where f = (n − r + 1)/2. Denote

ν := (a − 1, −b_1 − 1, . . . , −b_{r−1} − 1, −1^{2g+2−r}, −2b_r, −2b_{r+2}, . . . , −2b_{r+2f−2}).

This is a partition of −4, so there exists a stratum Q0(ν) of meromorphic quadratic differentials on P1. Since a quadratic differential on P1 is uniquely given (up to scaling) by the locations of its poles and zeroes, PQ0(ν) is isomorphic to M_{0,2g+2+f}. So dim_C Q0(ν) = 2g + f. Let w denote the unique zero of order a − 1, u_i denote the pole of order −b_i − 1, v_j denote the pole of order −2b_{r+2j}, and s_ℓ denote the simple poles of the quadratic differentials in Q0(ν). The 2-residues of a quadratic differential ξ ∈ Q0(ν) automatically vanish at all odd order poles u_i and s_ℓ. Recall from [11] that the residual application R^2_0(ν) : Q0(ν) → C^f is the map sending a quadratic differential ξ to (Res^2_{v_j} ξ)_j. Let Q^R_0(ν) denote the subspace of Q0(ν) of quadratic differentials with vanishing 2-residues at all v_j. By [11, Theorem 1.3], R^2_0(ν) is surjective and Q^R_0(ν) ⊂ Q0(ν) is a nonempty subspace of codimension f. So dim_C Q^R_0(ν) = 2g.

Given a quadratic differential ξ ∈ Q0(ν), consider a double covering φ : C → P1 ramified at the first 2g + 2 marked points. Note that φ is ramified exactly over the poles of odd orders. The curve C is hyperelliptic (or elliptic if g = 1) and φ is unique up to post-composing with the hyperelliptic involution σ on C, assuming g > 1. The pullback φ⋆ξ is the square of an abelian differential ω on C, uniquely determined up to multiplication by ±1 (see [14]). We have

ord_{φ(x)} ξ = \begin{cases} ord_x ω − 1 & \text{if σ fixes x} \\ 2\, ord_x ω & \text{otherwise.} \end{cases}

We label the preimages of v_j by p_{r+2j} and p_{r+2j+1}, and the preimage of u_i by p_i. Then (C, ω) is contained in the stratum Hg(µ). Since φ is compatible with σ, we have σ∗ω = −ω and therefore (C, ω) is a hyperelliptic flat surface. Also, Res^2_{φ(x)} ξ = (Res_x ω)^2. Therefore if (P1, ξ) ∈ Q^R_0(ν), then (C, ω) ∈ Rg(µ). This gives an injective morphism Φ : PQ^R_0(ν) → PRg(µ) whose image Im Φ consists of hyperelliptic flat surfaces. Since dim_C PQ^R_0(ν) = dim_C PRg(µ) = 2g − 1, Im Φ is a connected component of PRg(µ).

Suppose now that Rg(µ) is a double-zero stratum. So m = 2 and a1 = a2 = a. By relabeling the poles, we may assume that P fixes 1, . . . , r and P(r + 2i − 1) = r + 2i for each i = 1, . . . , f, where f = (n − r)/2. As in the single-zero hyperelliptic case, denote

ν := (2a, −b_1 − 1, . . . , −b_r − 1, −1^{2g+2−r}, −2b_{r+2}, . . . , −2b_{r+2f})

and we find a hyperelliptic component using the double covering of P1 ramified at the odd order poles of a quadratic differential in Q^R_0(ν).
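As a small illustration of this construction (with hypothetical data, not taken from the text), take g = 1 and µ = (4, −2, −2), and let P fix the unique zero and interchange the two poles of order 2, so that r = 1 and f = 1; we assume this P is a ramification profile in the sense of Definition 1.3. Then

ν = (3, −1, −1, −1, −4),

and the dimension count reads dim_C Q0(ν) = 2g + f = 3 and dim_C Q^R_0(ν) = 2g = 2, which matches dim_C R1(4, −2, −2) = 2g + m − 1 = 2.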
Conversely, suppose that Rg(µ) has a hyperelliptic connected component C. The hyperelliptic involution gives an involution P on the set of poles and zeroes. Therefore, in order to prove that Rg(µ) is of hyperelliptic type, it is enough to show that m ≤ 2 and P interchanges two zeroes when m = 2. For any (X, ω) ∈ C, there exists a hyperelliptic involution σ such that σ⋆ω = −ω. Let φ : X → X/σ ≃ P1 be the quotient map. There exists a unique quadratic differential ξ on P1 such that φ⋆ξ = (ω)2. If σ fixes a point p ∈ X, choose a local complex coordinate x on X at p such that locally σ is given by x 7→ −x. If the order of ω at p is equal to b, then ω = (cbxb + cb+1xb+1 + . . . )dx and σ⋆ω = (cb(−x)b + . . . )d(−x) = ((−1)b+1cbxb + . . . )dx = −ω. Therefore, ci = 0 for all i even. In particular, b is even and p is residueless if b < 0. Also, ordφ(p) ξ = ordp ω − 1 = b − 1 is odd. If σ(p1) = p2, then p1 and p2 have the same order and ordφ(p1) ξ = 2 ordp1 ω is even. Moreover, Res2 φ(p1) ξ = (Resp1 ω)2. Therefore, the 2-residue of ξ at φ(p1) vanishes if and only if p1 is residueless. In particular, the singularity type ν and the 2-residue condition of ξ is completely determined by P. Since all orders of singularities of ξ are integers, they are constant along deformations of (X, ω) ∈ C and thus we obtain a morphism Φ′ : PC → PQR 0 (ν). Since PC is connected, the image is also contained in a connected component of PQR 0 (ν). We denote this connected component by PD. Conversely, for any quadratic differential (P1, ξ) ∈ D, consider the double covering X → P1 ramified at the odd poles. The pullback φ⋆ξ is a square of an abelian meromorphic differential ω on X, uniquely determined up to multiplication by ±1, which defines a morphism Φ : PD → PRg(µ). The morphisms Φ, Φ′ are the inverses to each other, thus PC and PD are isomorphic. In particular, the dimensions of Rg(µ) and QR 0 (ν) must be equal. The dimension of Rg(µ) is equal to 2g + m − 1. Suppose that P fixes m1 zeroes and n1 poles, and interchanges m2 pairs of zeroes and n2 pairs of poles. Then m = m1 + 2m2, n = n1 + 2n2. The dimension of Q0(ν) is equal to 2g + m2 + n2. The codimension of QR 0 (ν) in Q0(ν) is equal to the number of even order poles. There are exactly n2 of them. So dimC QR 0 (ν) = 2g + m2 and we must have 2g + m − 1 = 2g + m2. That is, m1 + m2 = 1. So Rg(µ) has either a unique zero (m1 = 1, m2 = 0), or a pair of zeroes interchanged (cid:3) by P (m1 = 0, m2 = 1). In order to complete the proof of Theorem 1.4, we need to show that for a given ramification profile P of µ, there is a unique corresponding hyperelliptic component. In Section 8, together with the techniques 17 level 0 σ1 y p0 ... g0 κ p−1 ... level − 1 σ−1 y g−1 a a Figure 8. The involutions of Y developed throughout Section 6 and Section 7, we will prove the remaining part of Theorem 1.4 by using the description in this subsection. 5.2. Boundary of hyperelliptic components. In this subsection, we will give an immediate description of some boundary elements of hyperelliptic components. Let C be a hyperelliptic component of Rg(µ). The symmetry given by the hyperelliptic involution allows us to describe the multi-scale differentials in ∂C. They also have an involution compatible with the level structure and the prong-matching equivalence class. For example, if dimC Rg(µ) > 2, then by Proposition 4.3 there exists a two-level differential Y with two irreducible component intersecting at a node q. Then both components have the symmetries compatible to each other. 
In conclusion, we have the following equivalent condition to being contained in a hyperelliptic component. Lemma 5.2. Suppose that dimC Rg(µ) > 2. Let C be a hyperelliptic connected component of Rg(µ), and Y ∈ ∂C be a two-level differential obtained by Proposition 4.3. Then both Y0 and Y−1 are hyperelliptic flat surfaces and their hyperelliptic involutions σ0 and σ−1 are compatible with the ramification profile of C. Moreover, both involutions fix the node q. Conversely, any multi-scale differential (Y , η) satisfying the above conditions is contained in the principal boundary of some hyperelliptic component of Rg(µ) with ramification profile P. The level graph of Y and the involutions on each level are illustrated in Figure 8. Similarly, for a two-level differential with two irreducible components intersecting at two nodes, we also have the condition equivalent to being contained in the boundary of a hyperelliptic component. Lemma 5.3. Let C be a hyperelliptic connected component of Rg(µ), and Y ∈ ∂C be a two-level differential with two nodes s1, s2 between the levels. Then both Y0 and Y−1 are hyperelliptic flat surfaces and their hyper- elliptic involutions σ0 and σ−1 are compatible with the ramification profile of C. Moreover, both involutions interchange the nodes s1, s2 and the prong-matching classes are compatible with the involution. Conversely, any multi-scale differential Y satisfying the above conditions is contained in the principal boundary of some hyperelliptic component of Rg(µ) with ramification profile P. The level graph of Y and the involutions on each level are illustrated in Figure 9. 6. Multiplicity one saddle connections In Section 4.7, we showed that the existence of a certain set of multiplicity one saddle connections enables In this section, we will show that every us to merge zeroes and unbubble a handle from a flat surface. connected component of Rg(µ), except for the hyperelliptic components with 2g + 2 fixed marked points, contains a flat surface with a multiplicity one saddle connection. The main goal of this section is the following Theorem 6.1. Let C be a connected component of a residueless stratum Rg(µ) of genus g > 0. Assume that C is not a hyperelliptic component with 2g + 2 fixed marked points. Then for any given pair zi, zj of (possibly identical when m = 1) zeroes, C contains a flat surface with a multiplicity one saddle connection joining zi and zj. 18 p0 ... g0 level 0 σ0 y Q p−1 ... Q P r level − 1 σ−1 y g−1 a Figure 9. The involutions of Y with two nodes We use the induction on dimC PRg(µ) = 2g + m − 2 > 0. In order to initiate the induction, we will use the degeneration to the boundary of C, using the following Proposition 6.2. Let X be a multi-scale differential contained in ∂C. Suppose that an irreducible component Xi of X contains a multiplicity one saddle connection γi. Then by plumbing all horizontal nodes and the level transitions, γi deforms into a multiplicity one saddle connection γ of some flat surface X ∈ C. Proof. By perturbation of the parameters for plumbing, we may assume that X is general in the sense of Proposition 3.1. Suppose that X contains another saddle connection γ′ parallel to γ. Then γ and γ′ are homologous. If we degenerate X to X by reversing the plumbing construction, γ and γ′ remains homologous throughout the deformation. Thus they degenerate to parallel saddle connections on Xi, a contradiction. 
(cid:3) That is, if we want to find a flat surface with a multiplicity one saddle connection, we can go to the boundary and look into the flat surfaces in each level. We prove one useful lemma below for future use. Lemma 6.3. Let Y be a two-level multi-scale differential with two irreducible components Y0 and Y−1 at distinct levels, intersecting at one node q. Suppose that γ is a saddle connection of Y0 joining q to itself, and Y−1 is a genus zero residueless flat surface with two zeroes, z1 and z2. Then by plumbing the level transition with a suitable choice of prong-matching at q, γ deforms to a saddle connection joining z1 and z2. Proof. Let κ be the number of prongs at q. Then there are κ outgoing prongs in Y0 at q, denoted by u1, . . . , uκ. We can label them in clockwise order so that the saddle connection γ encloses from u1 to ut. There are κ incoming prongs in Y−1 at q, denoted by v1, . . . , vκ in counterclockwise order. We can assume that first s prongs are coming from z1 and the others are coming from z2. Since 1 < t ≤ κ, we can choose a prong-matching that sends u1 to vm so that 1 ≤ m ≤ s and s + 1 ≤ t + m − 1 ≤ κ. By plumbing the level (cid:3) transition with this prong-matching, γ deforms to a saddle connection γ′ joining z1 and z2. We first deal with the base case: g = 1 and m = 1 (i.e. genus one single-zero strata). Proposition 6.4. Let C be a connected component of a genus one single-zero stratum R1(µ). Assume that C is not a hyperelliptic component with four fixed marked points. Then C contains a flat surface with a multiplicity one saddle connection. This case is very difficult compared to the remaining cases, because Proposition 4.3 is not applied. In order to use the degeneration techniques, we need to investigate the principal boundary of each connected component of PR1(µ). Since dimC PR1(µ) = 1, its boundary is a finite collection of points. We will prove that a connected component C of PR1(µ) contains a principal boundary obtained by shrinking a multiplicity one saddle connection if C is not a hyperelliptic component with four fixed marked points. The proof of Proposition 6.4 will be given in Section 6.3. Remark 6.5. One case that we can easily find a multiplicity one saddle connection is when a single-zero flat surface X ∈ C contains a flat cylinder. The cross curve of the cylinder joining the unique zero to itself is automatically a multiplicity one saddle connection. The flat cylinder is closely related to the horizontal boundary of C. Suppose that X ∈ ∂C and the level graph Γ(X) has a horizontal edge. Then we can plumb 19 the node corresponding to this horizontal edge and obtain a flat surface in C containing a flat cylinder. Therefore, if C contains a horizontal boundary divisor, then C contains a flat surface with a multiplicity one saddle connection. 6.1. The principal boundary of genus one single-zero strata. By Proposition 4.2, every connected component of a single-zero stratum of dimension> 1 has a principal boundary of type II. In the case of genus one single-zero strata, we can prove the following stronger statement. Proposition 6.6. Each multi-scale differential in the boundary of a genus one single-zero stratum R1(µ) is contained in the principal boundary of type II. Moreover, a two-level multi-scale differential in the boundary consists of two irreducible component intersecting at two nodes. Proof. 
Since the boundary of PR1(µ) is a finite collection of points, each multi-scale differential in the boundary is either horizontal or a two-level multi-scale differential. A horizontal multi-scale differential is obviously contained in the principal boundary of type II (see Figure 4). Let X be a two-level multi-scale differential. Since there is only one marked zero, it has only one bottom level component, which we denote by X−1. The component X−1 cannot admit a further degeneration, because dim_C PR1(µ) = 1. So X−1 is contained in a zero-dimensional (projectivized) generalized stratum. In particular, X−1 contains only two poles s1 and s2 with nonzero residues. Since all marked poles are residueless, s1 and s2 are the nodes of X. Also, each top level component gives a Global Residue Condition. Thus the two nodes s1 and s2 must be contained in the same top level component, which we denote by X0. If X−1 has a pole other than s1, s2, then it is a residueless pole. Thus it is a marked pole or a unique node between X−1 and some top level component. Therefore, if there exists a top level component other than X0, then it is contained in a genus zero single-zero stratum. This is impossible because there exist no genus zero single-zero residueless flat surfaces. Therefore, X consists of two irreducible components intersecting at two nodes, and thus it is contained in the principal boundary of type II (see Figure 3). □

Let R1(µ) be a genus one single-zero stratum. In order to prove Proposition 6.4, we will navigate the boundary of the connected component C of R1(µ) until we find a principal boundary obtained by shrinking a multiplicity one saddle connection. In this subsection, we will give a complete description of the principal boundary of R1(µ). Let X ∈ R1(µ) be a general flat surface in the sense of Proposition 3.1 and let γ be a saddle connection of X, forming a configuration F. By Proposition 4.2, we can shrink γ and obtain X ∈ D(F). Assume that X is not a horizontal boundary. The following lemma provides the combinatorial description of X.

Lemma 6.7. A two-level multi-scale differential X ∈ ∂R1(µ) is determined by the following combinatorial data:
• An integer 1 ≤ t ≤ n, the number of marked poles on the unique top level component.
• A permutation τ on {1, . . . , n}, the set of marked poles.
• A tuple of integers C = (C1, . . . , Cn) such that 1 ≤ Ci ≤ bi − 1 for each i. The numbers of prongs at the two nodes s1, s2 on the top component X0 are given by Q_1 = \sum_{i=1}^{t} C_{τ(i)} and Q_2 = \sum_{i=1}^{t} b_{τ(i)} − Q_1.
• A prong-matching equivalence class P r represented by a prong-matching (u, v) ∈ Z/Q1Z × Z/Q2Z.

We denote this multi-scale differential by X(t, τ, C, P r).

Proof. By Proposition 6.6, X consists of two irreducible components. The number of marked poles contained in the top level component X0 determines the datum 1 ≤ t ≤ n. We define τ and Ci as follows. By Proposition 2.3, there are t parallel saddle connections α0, . . . , αt−1 from s1 to s2, labeled clockwise at s1. By cutting the surface X0 along all αi, we obtain t connected components of X0 \ (α0 ∪ · · · ∪ αt−1). The component bounded by αi−1 and αi contains one pole, which we denote by p_{τ(i)}. This component is isomorphic to the polar domain P2(C_{τ(i)}, b_{τ(i)} − C_{τ(i)}) for some integer 1 ≤ C_{τ(i)} ≤ b_{τ(i)} − 1. Now we have t angles 2πC_{τ(i)}, i = 1, . . . , t, at s1, given by the saddle connections. Therefore the total angle 2πQ1 at s1 is equal to 2π \sum_{i=1}^{t} C_{τ(i)}, and we have Q_1 = \sum_{i=1}^{t} C_{τ(i)} and Q_2 = \sum_{i=1}^{t} b_{τ(i)} − Q_1.

The bottom level component X−1 contains the other n − t marked poles and two more unmarked poles at s1 and s2, of orders Q1 + 1 and Q2 + 1, respectively. By Proposition 2.4, there are n − t + 1 parallel saddle connections βt, . . . , βn, labeled in clockwise order at z so that βt bounds the polar domain of s1. By cutting X−1 along all βi, we obtain n − t + 2 connected components of X−1 \ (βt ∪ · · · ∪ βn). The two components bounded by βn and βt are isomorphic to the polar domains P1(Q2 + 1) and P1(Q1 + 1), respectively. For i = t + 1, . . . , n, the component bounded by βi−1 and βi contains only one pole, denoted by p_{τ(i)}. This component is isomorphic to the polar domain P2(C_{τ(i)}, b_{τ(i)} − C_{τ(i)}) for some integer 1 ≤ C_{τ(i)} ≤ b_{τ(i)} − 1. It is easy to see that τ is a permutation on {1, . . . , n}, and thus we have defined Ci for each i above.

[Figure 10. The prongs at the nodes of X.]

To determine P r, we will label the prongs at the nodes. By scaling the differential of X0 if necessary, we may assume that the periods of the αi are equal to −1. In X−1, consider the Q1 incoming prongs at s1. They can be represented by half-infinite rays emanating from z. Let v^−_1, . . . , v^−_{Q1} denote them in clockwise order at z. Similarly, there are Q2 outgoing prongs at s2, denoted by w^−_1, . . . , w^−_{Q2} in counterclockwise order at z. In X0, there are Q1 prongs at s1, denoted by v^+_0, . . . , v^+_{Q1−1} in clockwise order, where α0 is in the direction of v^+_0. Similarly, there are Q2 incoming prongs at s2, denoted by w^+_0, . . . , w^+_{Q2−1} in counterclockwise order, where α0 is in the direction of w^+_0. The prongs at the nodes are illustrated in Figure 10. The prong rotation group P_{Γ(F)} is isomorphic to Z/Q1Z × Z/Q2Z. A prong-matching is determined by the images of v^−_1 and w^−_1. If they are mapped to v^+_u and w^+_v, respectively, we identify the prong-matching with the element (u, v) ∈ Z/Q1Z × Z/Q2Z. This represents the prong-matching equivalence class P r of X. □

Remark 6.8. We also introduce more notation related to the differential X(t, τ, C, P r) in Lemma 6.7 for later use. We denote D_i := b_i − C_i and D := (D1, . . . , Dn). Also, we denote c_i := \sum_{j=1}^{i} C_{τ(j)} and d_i := \sum_{j=1}^{i} D_{τ(j)}. For convenience, we denote c0 = d0 = 0. Then the saddle connection αi lies between the prongs v^+_{c_{i−1}} and v^+_{c_i} at s1, and between w^+_{d_{i−1}} and w^+_{d_i} at s2, as depicted in Figure 10.

Remark 6.9. The combinatorial data in Lemma 6.7 is not uniquely determined by X. It is only unique up to the choice of the labeling of the two nodes s1, s2 and the labeling of the t saddle connections αi of X0. We can describe the relations between data that give the same multi-scale differential.

We can relabel the saddle connections αi so that the cyclic order remains unchanged. Any such relabeling is generated by shifting the labeling by one. In this case, the new labeling gives a permutation τ′ = τ ◦ τ1 on the poles, where

τ_1 = \begin{pmatrix} 1 & 2 & \dots & t & t+1 & \dots & n \\ 2 & 3 & \dots & 1 & t+1 & \dots & n \end{pmatrix}.

Since the other information is unchanged, we have X(t, τ, C, P r) = X(t, τ ◦ τ1, C, P r).

We can change the labeling of the two nodes s1 and s2. Then the angles 2πCi and 2πDi also exchange their roles. The saddle connections αi and βj are relabeled in the inverse order. So the new labeling gives a permutation τ″ = τ ◦ τ2 on the poles, where

τ_2 = \begin{pmatrix} 1 & \dots & t & t+1 & \dots & n \\ t & \dots & 1 & n & \dots & t+1 \end{pmatrix}.

The prong-matching (u, v) ∈ Z/Q1Z × Z/Q2Z is sent to (−v, −u) ∈ Z/Q2Z × Z/Q1Z with the new labeling, so we have X(t, τ, C, [(u, v)]) = X(t, τ ◦ τ2, D, [(−v, −u)]).
So new labeling gives a permutation 21 1 2 ... t t+1 ... n 2 3 ... 1 t+1 ... n (cid:0) τ ′′ = τ ◦ τ2 on the poles where τ2 = . The prong-matching (u, v) ∈ Z/Q1Z × Z/Q2Z is sent (cid:1) to (−v, −u) ∈ Z/Q2Z × Z/Q1Z with new labeling, so we have X(t, τ, C, [(u, v)]) = X(t, τ ◦ τ2, D, [(−v, −u)]). 1 ... t t+1 ... n t ... 1 n ... t+1 (cid:0) Remark 6.10. The level rotation group Z acts on the prong rotation group Z/Q1Z×Z/Q2Z by k·(u, v) = (u+ k, v − k). Recall that two prong-matchings are said to be equivalent if the level rotation action transforms one prong-matching into the other. So the number of prong-matching equivalence classes is equal to gcd(Q1, Q2). There also exist multi-scale differentials in the horizontal boundary of R1(µ). Those differentials are given by the elements of the stratum R0(a, −b1, . . . , −bn; −1, −1), except that two simple poles are unmarked (i.e, switching the labeling of two simple poles does not change the differential as a boundary point of R1(µ)). The stratum PR0(a, −b1, . . . , −bn; −1, −1) is zero-dimensional and the flat surfaces in this stratum are described in Proposition 2.4. We obtain the following Lemma 6.11. A horizontal multi-scale differential X in the boundary of R1(µ) is given by the following combinatorial data: • A permutation τ on {1, . . . , n}. • A tuple of integers C = (C1, . . . , Cn) such that 1 ≤ Ci ≤ bi − 1 for each i. We denote this multi-scale differential by X(0, τ, C). Remark 6.12. The data in Lemma 6.11 is only unique up to the choice of the labeling of two simple poles. By changing the roles of two simple poles, we have X(0, τ, C) = X(0, τ ◦ τ2, D) as in Remark 6.9, where τ2 = is a permutation inverting the order. ... n 1 n n−1 ... 1 2 (cid:0) (cid:1) 6.2. Plumbing and saddle connections of genus one single-zero flat surfaces. Let X = X(t, τ, C, P r) be a two-level multi-scale differential in ∂R1(µ), given by the combinatorial data in Lemma 6.7. Recall that the top level component X0 has t saddle connections αi, i = 0, . . . , t − 1 and the bottom level component X−1 has n − t + 1 saddle connections βj, j = t, . . . , n. By rescaling the differential X0, we may assume that the period of αi are equal to −1. We can obtain a flat surface (X, ω) ∈ R1(µ) by plumbing construction with a prong-matching (u, v) ∈ P r and the smoothing parameter s = ǫeiθ ∈ C. We will denote this flat surface by X s(u, v). The periods of the saddle connections β′ j deformed from βj are equal to s. Also, recall that the periods of αi are set to be −1. Each saddle connection in the components of X is the limit of saddle connections in X s(u, v) as |s| = ǫ → 0. So there exists a saddle connection α′ i that converges to αi. For < 0. We can describe the configuration of the small s, we have Im s > 0 if and only if Im ω/ ω saddle connections of X s(u, v). β′ j (cid:16)R α′ i R (cid:17) Proposition 6.13. Let X = X(t, τ, C, [(u, v)]) ∈ ∂R1(µ). Consider X s(u, v) for Im s ≤ 0. For the saddle connections βj and αi of each irreducible components of X, there exists a unique saddle connection β′ j and α′ i of the flat surface X s(u, v) that degenerates to βj and αi, respectively, as |s| → 0. j degenerates to βj, then X is obtained by shrinking β′ j. Proof. If β′ j of X s(u, v) degenerates to βj, then the homology classes [β′ j ] in H1(X \ p, z; Z) differ by a multiple of the vanishing class, which is equal to [β′ j]. However, since both saddle connections are simple closed curves in X s(u, v), their homology classes are primitive. 
This is a contradiction unless [β′ j are homologous, and thus they degenerate to distinct parallel j simultaneously. This is a contradiction and therefore β′ saddle connections in X by shrinking β′ j is unique. j ] is an integer multiple of [β′ If another saddle connection β′′ j]. In particular, [β′′ j] and [β′′ j and β′′ j ]. So β′ j and β′′ j] = [β′′ Now suppose that α′ i and α′′ i degenerate to αi. Then the homology classes [α′ H1(X, z; Z) or they differ by a multiple of the vanishing class [β′ flat parallelogram bounded by α′ n. This means Im (cid:16)R n] for some k. We may assume that an angle between α′ assume that [α′ than π, as this angle converges to zero as |s| → 0. In particular, α′ The last side of this triangle is β′ n]. If [α′ ω/ i and α′′ ω i] = [α′′ ω i ] + k[β′ t and β′ i] = [α′′ n. This means Im t or β′ i, α′′ i , β′ α′ i R ω/ β′ j (cid:17) i] and [α′′ i ] are equal in i ], then X s(u, v) contains a < 0, a contradiction. So we i at z is smaller i are two sides of a flat triangle. (cid:3) i and α′′ < 0, a contradiction. β′ j (cid:16)R α′ i R (cid:17) The following proposition determines parallel saddle connections of X s(u, v). 22 Proposition 6.14. Let X = X(t, τ, C, [(u, v)]) be a two-level multi-scale differential in ∂R1(µ). By relabeling the saddle connections, we may assume ct−1 < u ≤ Q1. If Im s ≤ 0, then two saddle connections α′ i and α′ j of X s(u, v) for i < j are parallel if and only if dj ≤ v < Q2 or 0 ≤ v < di. Proof. Since the surface X s(u, v) has genus one, two simple closed curves in X s(u, v) are homologous if and only if the intersection number between them is equal to zero. Two saddle connections α′ j intersect only at the unique zero z. So the intersection number between them is equal to zero if and only if they do not intersect at z transversely. We will determine the intersection number in terms of the prong-matching (u, v). i and α′ i is coming out from z along v− First, suppose that dj ≤ v < Q2. At the node s1, the prongs v+ cj +Q1−u+1, respectively. At s2, the prongs w+ dj +Q2−v, respectively. The saddle con- di nection α′ di+Q2−v in Figure 10. Similarly, j is coming out from z along v− α′ dj +Q2−v. Note that 1 ≤ ci + Q1 − u + 1 < cj + Q1 − u + 1 ≤ Q1 and 1 ≤ di + Q2 − v < dj + Q2 − v ≤ Q2. Thus α′ j do not intersect transversely at z. By the same argument, this is also true under the assumption 0 ≤ v < di. Therefore, the intersection number between α′ are matched to w− ci+Q1−u+1 and going into z along w− cj +Q1−u+1 and going into z along w− ci , v+ di+Q2−v, w− cj are matched to v− ci+Q1−u+1, v− i and α′ , w+ dj j is zero. Thus α′ j are parallel. i and α′ i and α′ Conversely, suppose that di ≤ v < dj. At the node s1, the prongs v+ cj +Q1−u+1, respectively. At the node s2, the prongs w+ di ci+Q1−u+1 and v− dj −v, respectively. Note that 1 ≤ ci + Q1 − u + 1 < cj + Q1 − u + 1 ≤ Q1 and 1 ≤ dj − v < di + Q2 − v ≤ Q2. By the similar argument as in the previous paragraph, α′ j intersect transversely at z and the intersection (cid:3) j is equal to one. Thus α′ number between α′ j are not parallel to each other. di+Q2−v and w− i and α′ i and α′ ci and v+ are matched to w− cj are matched to v− and w+ dj i and α′ The configuration of saddle connections X s(u, v) in various cases is depicted in Figure 11. The shaded area is C(X s(u, v)), the core of this flat surface. The other regions are polar domains of the poles. The definitions of the core and the polar domain are recalled in Section 2.2. Suppose that ct−1 < u ≤ ct. 
Then in Figure 11, α′ Since the smoothing parameter s is small, t−1 is drawn vertically and β′ n is drawn horizontally. ω, the ratio between the periods of saddle connections ω is also a part of smooth coordinate drawn vertically and horizontally, is close to −s. In fact, − system of the stratum that converges to 0 as s → 0. Therefore, by change of the coordinate, we can set up this ratio as a new smoothing parameter, still denoted by s by abuse of notation. It is immediate that this new parameter can take values in the entire C. From now on, let X s(u, v) denote the flat surface obtained from X with this new smoothing parameter. β′ n R α′ R ω/ t−1 As a consequence we have the following corollary, which will be crucial for the proof of existence of a multiplicity one saddle connection. In particular, we have a criterion for a two-level multi-scale differential to contain a flat surface with a multiplicity one saddle connection in a coordinate neighborhood. Corollary 6.15. Let X = X(t, τ, C, [(u, v)]) be a two-level multi-scale differential in ∂R1(µ). Assume that there exist i such that ci−1 < u ≤ ci and di ≤ v < di+1, or ci < u ≤ ci+1 and di−1 ≤ v < di. Then α′ i is a multiplicity one saddle connection of X s(u, v) for Im s ≤ 0. Proof. Suppose that ci−1 < u ≤ ci and di ≤ v < di+1. The other case follows from this by relabeling the nodes s1, s2. By relabeling the saddle connection if necessary, we may assume that i = 0. If t = 1, then α0 is a unique saddle connection of X0, thus α′ 0 is obviously a multiplicity one saddle connection. So suppose j for 1 ≤ j ≤ t − 1 is not parallel to α′ t ≥ 2. By Proposition 6.14, the saddle connections each α′ 0. Therefore (cid:3) α′ 0 is a multiplicity one saddle connection. From the configurations of saddle connections described in Figure 11, we can see that there are at most three collections of parallel saddle connections on the flat surface X s(u, v) obtained by plumbing construction. By shrinking β′ i, we obtain the original two-level multi-scale differential X. The following lemma describes the result of shrinking other two collections of saddle connection, providing two ways to navigate the multi-scale differentials in the boundary of a given connected component C. Lemma 6.16. For X = X(t, τ, C, [(u, v)]) ∈ ∂C, we can find two other multi-scale differentials T (u,v) T (u,v) 2 X in ∂C given by the following. 1 X and 23 ω/ β′ n R α′ R t−1 β′ n α′ j−1 ... p1 pj pt−1 ... α′ j α′′ j α′ t−1 ... pt−1 α′ j α′ t−1 α′ 0 pt 2πC1 β′ t z 2πCt+1 pt+1 β′ n . . . β′ n α′ j pj α′ j−1 ... p1 2πC1 β′ t α′ 0 pt 2πCt+1 pt+1 β′ n . . . α′′ j z ct−1 < u ≤ ct dj−1 ≤ v < dj Im s ≤ 0 ct−1 < u < ct v = dj Im s > 0 β′ n β′ n pj α′ j z ... α′ j pt α′ 0 2πCt α′′ 0 α′ j−1 ... p2 2πC1 α′ 1 p1 β′ t 2πCt+1 pt+1 β′ n . . . ... α′′ 0 pt α′ 0 2πCt α′′ 0 β′ t z 2πCt+1 pt+1 β′ n . . . u = ct = Q1 dj−1 < v < dj Im s > 0 u = ct v = dt Im s > 0 Figure 11. The saddle connections of X s(u, v) By relabeling the saddle connection, we may assume that ct−1 < u ≤ Q1. If dj−1 ≤ v < dj for some j, then T (u,v) 1 X = X(t′, τ ′, C′, P r′) is given by the following combinatorial data: • t′ = n − (j − 1) poles on the top level component. • A permutation τ ′ ∈ Symn defined by τ ′(i) = i + j − 1 j − (i − t′) ® if 1 ≤ i ≤ t′ if t′ + 1 ≤ i ≤ n • A set of integers 24 if j + 1 ≤ i ≤ t − 1 or t + 1 ≤ i ≤ n if 1 ≤ i ≤ j − 1 C′ i =    • A prong-matching (v − dj−1, dt − v) ∈ Z/Q′ P Ci Di Cj + v − dj−1 u − ct−1 ′ t i=1 C′ In particular, Q′ 1 := if i = j if i = t. 
i = u+v+cn−ct−cj−1−dj−1 and Q′ 2 := 1Z × Z/Q′ 2Z. ′ t i=1 bτ ′(i)−Q′ 1 = dn+ct−u−v. P Also, T (u,v) 2 X = X(t′′, τ ′′, C′′, P r′′) is given by the following combinatorial data: • n − (t − 1 − j) poles on the top level component. • A permutation τ ′′ defined by i τ ′′(i) =   n + (j + 1 − i) i − (n − t + 1) if 1 ≤ i ≤ j if j + 1 ≤ i ≤ j + (n − t + 1) if j + (n − t + 1) + 1 ≤ i ≤ n • A set of integers  C′′ i = Ci Di Cj + (dj − v − 1) ct − u + 1 ′′ t i=1 C′′ In particular, Q′′ u + v + cn + dt − ct − cj − dj.    1 := P if 1 ≤ i ≤ j − 1 or j + 1 ≤ i ≤ t − 1 if t + 1 ≤ i ≤ n if i = j if i = t. i = cj + dj + dn + ct − dt − u − v and Q′′ 2 := ′′ t i=1 bτ ′′(i) − Q′′ 1 = P • A prong-matching (cj − 1, −Dt + 1) ∈ Z/Q′′ 1 Z × Z/Q′′ 2 Z. Proof. Consider the flat surface X s(u, v) for Im s ≤ 0. If ck−1 < u ≤ ck and dj−1 ≤ v < dj, then X s(u, v) has three collections of parallel saddle connections {β′ k−1}. By shrinking the first, we obtain X again. By shrinking the second and the third, we obtain T (u,v) 1 X and T (u,v) 2 X, respectively in ∂C. j−1} and {α′ 0, . . . , α′ j, . . . , α′ t, . . . , β′ n}, {α′ More precisely, we have X s(u, v) = −s−1 (cj − 1, −Dt + 1). The flat surface X s(u, v) is drawn in three different ways in Figure 12. By shrinking saddle connections hor- izontally drawn in Figure 12, we can obtain three different multi-scale differentials X, T (u,v) 2 X. (cid:3) 1 X and T (u,v) (v − dj−1, dt − v) = s+1 Ä ä ä Ä T (u,v) 2 X T (u,v) 1 X We can characterize the hyperelliptic connected components of R1(µ) in terms of the combinatorial data given in Lemma 6.7. Lemma 6.17. Let C be a hyperelliptic component of R1(µ). Suppose X = X(t, τ, C, P r) ∈ ∂C. If (ci, v) ∈ P r for some 0 ≤ i ≤ t − 1, then v = dj for some 0 ≤ j ≤ t − 1. Conversely for any connected component C of R1(µ), if every X(t, τ, C, P r) ∈ ∂C satisfies the above condition on P r, then C is a hyperelliptic component. Proof. By relabeling the poles if necessary, we may assume that τ = Id. The first part easily follows from the existence of the involution σ0 on the top level component. Since σ0 sends saddle connections to saddle connections, the prong v+ for some j by σ0. Since the involutions are compatible with P r, we must have (ci, dj ) ∈ P r and v = dj. ci corresponding to the saddle connection αi must be mapped to w+ dj We now prove the converse. First of all, we prove that if X(t, Id, C, P r) satisfies the condition on P r, then the top level component has an involution σ0 that interchanges the two nodes s1, s2. Suppose that (ci, dj) ∈ P r for some i, j. If Ci+1 < Dj, then by the level rotation action, we obtain a prong-matching (ci+1, dj − Ci+1) ∈ P r and dj−1 < dj − Ci+1 < dj, a contradiction. Similarly if Ci+1 > Dj, a prong-matching (ci + Dj, dj−1) ∈ P r gives a contradiction after interchanging the labeling of s1 and s2. So Ci+1 = Dj and (ci+1, dj−1) ∈ P r. By repeating this, we can conclude that Ci+k = Dj+1−k for any k. In particular, Q1 = t j=1 Dj = Q2 and the orders of zeroes in X0 at the nodes s1, s2 are equal. t i=1 Ci = P P 25 β′ n α′ j−1 ... p1 pj ... pt−1 α′ j α′ t−1 α′ 0 pt 2πC1 β′ t 2πCt+1 pt+1 β′ n . . . pj β′ n α′ . . . j−1 p1 α′ 0 2 π C t + 1 pt α′ 0 β′ n ... β′ t pt+1 2π C 1 z X s(u, v) T (u,v) 1 X Ä s+1 ä (v − dj−1, dt − v) T (u,v) 2 X Ä ä −s−1 (cj − 1, −Dt + 1) Figure 12. Proof of Lemma 6.16 26 ct−1 < u ≤ ct dj−1 ≤ v < cj Im s ≤ 0 α′ j z α′ 0 α′ t−1 pt β′ t pt+1 1 t + C π 2 ... β′ n pt−1 ... α′ j 2πC1 z α′ t−1 β′ n α′ j−1 pj p1 ... α′ j . . . 
pt−1 α′ t−1 2 If i + j is even, then by relabeling the saddle connections if necessary, we may assume i + j = 0 ∈ Z/tZ and (0, 0) ∈ P r. Since Ck = Dt+1−k for each k, there exists an involution σ0 on the top level component that interchanges pairs of saddle connections αk and αt+1−k. That is, σ0 interchanges the pair of poles pk, pt+1−k, for each k = 1, . . . , t. If t is odd, the pole p t+1 is fixed. 1, s′ If i + j is odd, then by relabeling the saddle connections if necessary, we may assume m = 1 and (0, d1) ∈ P r. Since Ck = Dt+2−k for each k, there exists an involution σ0 that interchanges the pair of poles pk, pt+2−k, for each k = 1, . . . , t + 1. Therefore in any case, the top level component X0 has an involution that interchanges two nodes s1 and s2. If (0, 0) ∈ P r, then we can take X ′ := T (0,0) 0 interchanging two nodes s′ 2 − 1, − a 1 X ∈ ∂C. The top level component contains all n poles and has an involution σ′ −1 is contained in the hyperelliptic stratum H0(a, − a −1 interchanging s1, s2. Note that (0, dt) ∈ P r′ and Ck = Dt+1−k for any k = 1, . . . , t. By level rotation action, we obtain a ) ∈ P r′ when t is even and (c t−1 ) ∈ P r′ when t is odd. So the involutions σ′ 0, prong-matching (c t −1 are compatible with the prong-matching class P r′ and C is hyperelliptic. σ′ If (0, d1) ∈ P r, then we can take X ′ = T (0,d1) 2. The bottom level component X ′ 2 − 1), so it obviously has a unique involution σ′ 0 contains n − 1 poles and has an involution σ′ −1 contains one marked pole p1 and is contained in the stratum R1(a, −b1; − a−b1 2 − 1). The pole p1 of order b1 is residueless since every prescribed pole is residueless. Note that C1 = D1 by assumption. So −1 that interchanges s1, s2. Note that (−D1, dt) ∈ P r′ and Ck+1 = Dt−k+1 X ′ 2 +1) ∈ P r′. So the for any k = 1, . . . , t − 1. By level rotation action, we obtain a prong-matching (c t (cid:3) involutions σ′ 0 interchanging s1, s2 as above. In this case, the bottom level component X ′ −1 are compatible with the prong-matching class P r′ and C is hyperelliptic. X ∈ ∂C. The top level component X ′ −1 has a unique involution σ′ 2 − 1, − a−b1 2 −1, d t 0, σ′ , d t+1 , d t 1 2 2 2 2 6.3. Existence of a multiplicity one saddle connection — base case. We can finally prove Proposi- tion 6.4 for hyperelliptic components of genus one single-zero strata. Lemma 6.18. Let C be a hyperelliptic component of a genus one single-zero stratum R1(µ) with less than four fixed marked points. Then C contains a flat surface with a multiplicity one saddle connection. Proof. Assume the contrary — that C does not contain any flat surface with a multiplicity one saddle connection — and consider X = X(t, τ, C, P r) ∈ ∂C as before. The top level component X0 has an involution σ0 interchanging two nodes. If X0 does not contain two fixed poles, then there exists 1 ≤ m ≤ t such that (cm, dm) ∈ P r and the flat surface X(cm, dm) ∈ C contains a multiplicity one saddle connection by Corollary 6.15. So X0 contains two fixed poles. By relabeling the poles if necessary, we may assume that τ = Id and the pole p1 ∈ X0 is one of the two fixed poles contained in X0. Then (0, c1) ∈ P r and we can take another multi-scale differential X ′ = T (0,c1) X ∈ ∂C, as in Lemma 6.16, so that the pole p1 is now contained in the bottom level component X ′ −1. Since the top level component X ′ 0 still contains two fixed poles by the argument of the previous paragraph, we can conclude that C has three fixed marked poles. 
Since the unique zero z is always fixed by the involution, C has four (cid:3) fixed marked points. 1 Finally, we prove Proposition 6.4 for a non-hyperelliptic component C by showing that there exists a multi-scale differential in ∂C satisfying the assumption of Corollary 6.15. Proof of Proposition 6.4. Let C be a non-hyperelliptic component of R1(µ). Assume the contrary — that C does not contain any flat surface with a multiplicity one saddle connection. By Lemma 6.17, there exists a multi-scale differential X = X(t, τ, C, P r) ∈ ∂C with a prong-matching (u, v) ∈ P r that satisfies u = ci and dj < v < dj+1 for some i, j. By relabeling the saddle connections if necessary, we may assume that i = 0. Also by relabeling the poles, we may assume that τ = Id. If j = 0, then by Corollary 6.15, X s(0, v) ∈ C for Im s ≤ 0 has a multiplicity one saddle connection. Thus we only have to deal with the case j > 0. We can choose X and (0, v) ∈ P r such that this j > 0 is minimal among all possible choices in ∂C. If t = 1, then 0 < j ≤ t − 1 = 0, a contradiction. So t > 1. Suppose that C1 < v − dj. Then by the level rotation action, we have (c1, v − C1) ∈ P r. If j > 1, then this contradicts to the minimality of j since dj < v − C1 < dj+1 and 0 < j − 1 < j. So we have j = 1 and the flat surface X s(c1, v − C1) for Im s ≤ 0 has a multiplicity one saddle connection α′ 1 by Corollary 6.15. 27 Now suppose that C1 > v − dj. Then we have (v − dj , dj) ∈ P r. If j > 1, then this contradicts to the minimality of j by relabeling the nodes s1 and s2, since 0 < v − dj < c1. So we have j = 1 and the flat surface X s(v − dj, dj) for Im s ≤ 0 has a multiplicity one saddle connection α′ 1 by Corollary 6.15. Finally, suppose that C1 = v − dj . Then we have (c1, dj) ∈ P r. If j = 1, then X s(c1, d1) has a multiplicity one saddle connection α′ 1 by Corollary 6.15. Therefore, we may assume (c1, dj) ∈ P r and j > 1. By repeating the argument as above, we have Ci = Dj+2−i for each 2 ≤ i < j 2 + 1. Also, by relabeling the nodes s1 and s2 and considering the minimality 2 + 1 ≤ i ≤ j. Now we take X ′ = T (0,v) of j, we can further obtain the same equation for j 2 X ∈ ∂C as defined in Lemma 6.16. It has a prong-matching (cj+1, −D1) ∈ Z/Q′ 2Z. If D1 < Cj+1, then by the level rotation action, we have (cj + (Cj+1 − D1), 0) ∈ P r′. Since c′ j, this contradicts the minimality of j by relabeling the nodes. If D1 > Cj+1, then we have (cj, −(D1 − Cj+1)) ∈ P r′. Since −Dt′ = −D1 < −(D1 − Cj+1) < 0, this also contradicts the minimality of j. Therefore we have D1 = Cj+1. X ∈ ∂C. By the same argument as above, we can show C1 = Dj+1. Then v = dj +C1 = dj +Dj+1 = dj+1, a contradiction. (cid:3) Now we consider the prong-matching (cj+1, 0) ∈ P r of X. We can take X ′′ = T (cj+1,0) 1Z × Z/Q′ j−1 < cj + (Cj+1 − D1) < c′ 1 6.4. Existence of a multiplicity one saddle connection. The next step is to prove Theorem 6.1 for hyperelliptic components of genus g > 0 single-zero strata. Lemma 6.19. Let C be a hyperelliptic component of a single-zero stratum Rg(µ) with less than 2g + 2 fixed marked points. Then C contains a flat surface with a multiplicity one saddle connection. Proof. We use induction on g > 0. Assume the contrary that any flat surface in C has a multiplicity one saddle connection. By Proposition 4.3, we can obtain a two-level multi-scale differential Y ∈ ∂C with two irreducible components Y0, Y−1 intersecting at one node q. By Lemma 5.2, both Y0 and Y−1 are residueless single-zero hyperelliptic flat surfaces. 
In particular, their genera, which we denote by g−1 and g0, satisfy 1 ≤ g−1, g0 ≤ g − 1 and g = g−1 + g0. If either of these components has a multiplicity one saddle connection, then we obtain a flat surface in C with a multiplicity one saddle connection by Proposition 6.2. Therefore we may suppose that neither does. By the induction hypothesis, Y0 and Y−1 have 2g0 + 2 and 2g−1 + 2 fixed marked points, respectively. The node q must be fixed by the involutions of both components, so C has (2g−1 + 1) + (2g0 + 1) = 2g + 2 fixed marked points. This is a contradiction. □

We now observe that even for the strata where a flat surface with a multiplicity one saddle connection does not exist, there always exists a flat surface with a pair of parallel saddle connections to which no other saddle connection is parallel. This will be used in the induction step below to deal with the situation where such a stratum appears in the boundary of Rg(µ).

Proposition 6.20. Let C be a hyperelliptic component of Rg(µ) of genus g > 0 and let p1 be a pole fixed by the ramification profile of C. Then there exists a flat surface X ∈ C with a pair of saddle connections of multiplicity two. More precisely, there exists a pair of parallel saddle connections γ1, γ2 of X bounding the polar domain of p1, and there does not exist any other saddle connection parallel to them.

Proof. We use induction on dimC Rg(µ) = 2g + m − 1. First, we take care of the base case — genus one single-zero strata. We need to find X ∈ ∂C such that p1 is contained in the top level component. Suppose that p1 ∈ X−1 for some X = X(t, τ, C, [(0, v)]) ∈ ∂C. Then di−1 ≤ v < di for some i. The multi-scale differential T_1^{(0,v)} X ∈ ∂C, as defined in Lemma 6.16, contains p1 in the top level component. So we can always find X ∈ ∂C such that p1 ∈ X0. By relabeling the saddle connections, we may assume that τ(1) = 1. Then C1 = D1 and (0, D1) ∈ P r. By Proposition 6.14, the flat surface X^s(0, D1) for Im s ≤ 0 has a desired pair of parallel saddle connections α′_0 and α′_1.

Now assume that dimC Rg(µ) > 2. By Proposition 4.3, there exists a two-level multi-scale differential Y ∈ ∂C consisting of two hyperelliptic irreducible components Y0, Y−1 intersecting at one node q. The genera g0, g−1 of the two components satisfy g = g0 + g−1. Also, since the node q must be fixed by the hyperelliptic involutions, the top level component Y0 is a single-zero hyperelliptic flat surface. Thus g0 > 0. We will prove that C contains such a Y with p1 ∈ Y0. If Y0 contains p1, then we can use the induction hypothesis on the stratum containing Y0, so that it has a desired pair of saddle connections. Assume to the contrary that Y−1 contains p1.
By induction hypothesis on the stratum containing Y0, Y0 can be deformed so that it contains a pair of parallel saddle connections bounding the polar domain of p2 and there do not exist any other saddle connections parallel to them. After plumbing the level transition of Y , we obtain a flat surface in C with a pair of saddle connections with the same property. By shrinking them, we obtain another multi-scale differential Y ′ that only contains p2 in the bottom level component. So we can always assume that p1 ∈ Y0. However, since Y0 is a single-zero residueless flat surface, it have the genus g0 > 0. By induction hypothesis on the stratum containing Y0, Y0 can be deformed so that it has a desired pair of saddle connections bounding the polar domain of p1. By plumbing the level transition of Y , we obtain a flat surface in C that still has (cid:3) the desired pair of saddle connections. We are now ready to prove Theorem 6.1 for an arbitrary connected component of Rg(µ). Proof of Theorem 6.1. The case of hyperelliptic components are given by Lemma 6.19. So we may assume that C is non-hyperelliptic component. We use the induction on dimC Rg(µ) ≥ 2 with Proposition 6.4 giving the base case dimC Rg(µ) = 2. Suppose dimC Rg(µ) > 2. Assume the contrary — that C does not contain any flat surface with a multiplicity one saddle connection joining zi and zj. By Proposition 4.3, there exists a two-level multi-scale differential Y ∈ ∂C with two irreducible components Y−1 and Y0 of genera g−1 and g0, respectively, intersecting at one node q. Each component is contained in a residueless stratum with dimension smaller than dimC Rg(µ). First, we assume that m = 1 and zi = zj = z1. Then both components Y0 and Y−1 are residueless single-zero flat surfaces. So g0, g−1 > 0. If both components are hyperelliptic, then C is also hyperelliptic by Lemma 5.2. So at least one of Y0 or Y−1 is contained in a non-hyperelliptic component. By induction hypothesis, that component can be deformed so that is has a saddle connection. After plumbing construction, we can obtain a flat surface in C with a multiplicity one saddle connection by Proposition 6.2. If m > 1, then by relabeling the zeroes, we may assume that (zi, zj) = (z1, z2). By Proposition 4.3, we can further assume that the bottom level component Y−1 contains z1 and z2. If Y−1 has nonzero genus, then by induction hypothesis Y−1 can be continuously deformed to have a multiplicity one saddle connection joining z1 and z2. So we assume that g−1 = 0. If Y−1 contains more than two zeroes, then we can repeatedly apply Proposition 4.3 to Y−1 so that the bottom level component contains only two zeroes z1 and z2. By plumbing for all level transitions except the bottom one, we may assume that Y−1 only contains two zeroes z1 and z2. Now we assume that m = 2. Then Y0 has a unique zero at the node q. Suppose that Y0 contains a multiplicity one saddle connection γ joining the node q to itself. By Lemma 6.3, there exists a suitable prong-matching at q, so that γ deforms to a multiplicity one saddle connection γ′ joining z1 and z2 after plumbing construction. So we may assume that any continuous deformation of Y0 does not have a multiplicity one saddle connection. By induction hypothesis, Y0 is then contained in a hyperelliptic component with 2g+2 fixed marked points. By Proposition 6.20, we can deform Y0 so that it has a pair of parallel saddle connections γ1, γ2 with multiplicity two. Also, they bound the polar domain of a fixed pole, say p1. 
Therefore, both angles at q in the polar domain bounded by γ1 and γ2 must be equal to 2π b1 2 . By plumbing the level transition, we 1 and γ′ obtain a flat surface in C with saddle connections γ′ 2, deformed from γ1 and γ2. By Lemma 6.3, we can choose a suitable prong-matching for plumbing so that γ′ 1 is joining z1 and z2. By assumption, γ′ 1 must not be a multiplicity one saddle connection and thus γ′ 2 must be parallel to γ1. In particular, γ′ 2 is also joining z1 and z2. By shrinking γ′ 2, we obtain a multi-scale differential Y ′ with two components intersecting at one node q′, whose bottom level component Y ′ −1 contains only one marked pole p1. The top level component Y ′ 0 has unique zero at q′. By the same argument as in the previous paragraph, Y ′ 0 is hyperelliptic flat surface with 2g + 2 fixed marked points. Also, Y ′ −1 is contained in the stratum R0(a1, a2, −b1, b1 − a1 − a2 − 2) and 1 and γ′ 29 q v − Q1 −1 Y−1 . . . z1 − v 1 γ2 p1 γ1 Figure 13. The prongs in Y−1 at q z2 . . . v − Q1 v − Q Y ′ −1 has two parallel saddle connections that come from γ1 and γ2 between z1 and z2. Still the angles of the polar domain of p1 are equal to 2π b1 2 . Summarizing the previous paragraphs, we can reduce to the case when Y0 is a single-zero hyperelliptic flat surface with 2g + 2 marked points and Y−1 is genus zero flat surface containing two zeroes z1, z2 and only one marked pole p1. Also, the two angles in the polar domain of p1 are equal to 2π b1 2 . If a1 = a2, then Y−1 is a genus zero hyperelliptic flat surface and the node q is fixed by the involution. By Lemma 5.2, C is then a hyperelliptic component with 2g + 2 fixed marked points, which contradicts the assumption. So we must have a1 < a2. Denote the number of prongs at q by Q := a1 + a2 − b1 + 1. The two angles in the 2 and Q2 := a2 + 1 − b1 polar domain of q are given by 2πQ1 at z1 and 2πQ2 at z2, where Q1 := a1 + 1 − b1 2 . Consider the incoming prongs v− i , i = 1, . . . , Q at q. We can label them in counterclockwise order so that v− 1 , . . . , v− Q are coming from z2 (see Figure 13 for the configuration of the prongs at q in Y−1). Q1−1 are coming from z1 and v− Q1 , . . . , v− The order of zero of Y0 at q is equal to Q − 1. By Proposition 6.20, we can deform Y0 so that it contains a pair of parallel saddle connections α1 and α2 with multiplicity two, bounding the polar domain of a fixed pole, say p2. First, assume that Y0 is a single-zero hyperelliptic flat surface. Then q is a unique zero of Y0 and α1, α2 are joining q to itself. Since they are parallel, they intersect at q non-transversely. The two angles in the polar domain of p2 bounded by α1 and α2 are equal to 2π b2 2 . Also there are two other angles at q bounded by α1, and α2 themselves. These angles are both equal to 2π Q−b2 . We can label the outgoing 1 , . . . , v+ prongs in Y0 at q in clockwise order, v+ are lying between α1 and i α2. Then v+ , . . . , v+ determined by the image of v− u with u ∈ Z/QZ . We will find a proper prong-matching u such that the flat surface Y s(u), obtained by plumbing the level transition with small smoothing parameter s ∈ C, has a multiplicity one saddle connection. Note that Q 2 = a1+a2−b1+1 2 + 1 ≤ u ≤ Q − (Q1 − 1)} is nonempty. 2 Moreover, for any Q 2 + 1 ≤ v < Q, there exists u ∈ U such that u ≤ v < u + (Q1 − 1). Since g0 > 0, we have Q − 1 − b2 ≥ 2g0 − 2 ≥ 0. That is, b2 < Q and thus Q 2 < Q. So we can take u ∈ U such that u ≤ Q+b2 1, deformed from α1, joining z1 and z2 (see Figure 14). 2 < u + (Q1 − 1). 
Then the flat surface Y (u) has a multiplicity one saddle connection α′ are also lying between α2 and α1, on the other side. The prong-matching at q is 1 . We can identify the prong-matching that sends v− 2 = Q1 − 1. So the set U := {u| Q for 1 ≤ i ≤ Q − 1 so that v+ 2 + 1 ≤ Q+b2 2 = a1 − b1 1 to v+ > 2a1−b1 Q+b2 2 Q 2 +1 b2 2 2 Now suppose that m > 2. In this case, we assume that the bottom level component Y−1 of Y contains z2 and z3 instead. Then Y0 has at least two zeroes z1 and q. If Y0 contains a multiplicity one saddle connection γ joining z1 and q, then by plumbing with proper choice of prong-matching, γ deforms to a multiplicity one saddle connection joining z1 and z2. So we may assume that Y0 cannot be continuously deformed to have a multiplicity one saddle connection joining z1 and q. By induction hypothesis, Y0 is double-zero hyperelliptic flat surface with 2g + 2 fixed marked points. In particular, the order of q is equal to a1. Note that this order is at least a2 + a3, since Y−1 is a genus zero flat surface. That is, a1 ≥ a2 + a3. Now we repeat this assuming that Y−1 contains z1 and z3 instead. In this setting, we also have a2 ≥ a1 + a3. This is a contradiction, thus there is a flat surface in C with a multiplicity one saddle connection joining z1 and z2. 30 Y0 Y s(u) v+ 1 α1 p2 ... ... v+ b2 2 q v+ Q+b2 2 v+ Q 2 +1 α2 α′ 1 v+ 1 p1 p2 ... v+ b2 2 z2 α′ 2 v+ u+Q1 −1 z1 ... v+ Q+b2 2 v+ u v+ Q 2 +1 Figure 14. The prongs at q in single-zero Y0, and after plumbing (cid:3) 7. Genus one single-zero strata In this section we classify all non-hyperelliptic connected components of the genus one single-zero stratum R1(a, −b1, . . . , −bn), proving Theorem 1.7 for the single-zero cases. Recall that by Proposition 6.6, every multi-scale differential in the boundary of R1(µ) is in the principal boundary. As in Section 6, we will navigate the principal boundary to prove that the non-hyperelliptic connected components of R1(µ) are classified by rotation number. In Section 7.2, we give a criterion that determines whether a multi-scale differential is contained or not contained in the boundary of a hyperelliptic component. In Section 7.3, we first deal with some special strata that our general strategy cannot be applied to. In Section 7.4, we finally prove Theorem 1.7. In each step, the combinatorial description of multi-scale differentials introduced in Section 6.1 will play an important role. Recall that any multi-scale differential in the boundary of R1(µ) can be given by the combinatorial datum X(t, τ, C, P r). This combinatorial datum is uniquely determined only up to the cyclic order of the poles and the markings of the nodes s1 and s2. Throughout this section, we assume that Ci and bi are indexed by the elements of the cyclic group Z/nZ. For example, Cn+i = Ci for each integer i. Throughout most of the discussion, we will fix t = n and we will drop it from the notation when no confusion can arise. Also from now on, the bold notation X, X′, . . . are reserved for multi-scale differentials of the form X = X(τ, C, P r) = X(n, τ, C, P r). 7.1. Rotation number. Denote d := gcd(a, b1, . . . , bn). Recall from [2, Definition 4.2] that the rotation number r of X ∈ H1(µ) is defined by the formula r := gcd(d, Ind α, Ind β) where α, β are simple closed curves on X which form a symplectic basis of H1(X, Z), and the index Ind α of a closed curve α is defined to be the degree of Gauss map Gα : S1 → S1 with respect to the flat structure on X. 
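Since the rotation number is the invariant that will distinguish the non-hyperelliptic components below, it may help to record how it is evaluated from the combinatorial data of a boundary point, using the formula established in Proposition 7.1 below. The following sketch is purely illustrative and is not part of the argument; the function name and packaging are ours, and the convention that the first t entries of C correspond to the poles on the top level component is an assumption made only for this snippet.

from math import gcd
from functools import reduce

def rotation_number(b, C, t, v):
    # b = (b_1, ..., b_n): orders of the residueless poles; in a genus one
    #     single-zero stratum the unique zero has order a = sum(b).
    # C = (C_{tau(1)}, ..., C_{tau(n)}), 1 <= C_i <= b_i - 1, listed so that
    #     the first t entries belong to the poles on the top level component.
    # v : second coordinate of the prong-matching (0, v).
    a = sum(b)
    d = reduce(gcd, b, a)            # d = gcd(a, b_1, ..., b_n)
    Q1 = sum(C[:t])                  # number of prongs at the node s_1
    return reduce(gcd, (d, Q1, sum(C) + v))

# Example (cf. Proposition 7.8): for mu = (12, -3, -3, -3, -3) with t = n = 4,
# C = (1, 1, 2, 2) and v = 3 this gives gcd(3, 6, 9) = 3.
print(rotation_number((3, 3, 3, 3), (1, 1, 2, 2), 4, 3))   # -> 3

For t = n the value reduces to gcd(d, Q1, v), as noted after the proof of Proposition 7.1.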
The rotation number is a deformation invariant, thus it is constant on each connected component of H1(µ). By [2, Theorem 4.3], the connected components of H1(µ) are indexed by their rotation number. That is, H1(µ) = ⊔_{r|d} Cr, where Cr is the connected component consisting of flat surfaces of rotation number r, except that there is no Cd for H1(d, −d).

Note that PR1(µ) is a smooth compactification and the rotation number is a topological invariant of connected components of R1(µ). So for a multi-scale differential X ∈ ∂R1(µ), the rotation number of a flat surface in R1(µ) near X is constant. We will call this number the rotation number of X. We can compute the rotation number of a multi-scale differential in ∂R1(µ) in terms of its combinatorial data.

Proposition 7.1. The rotation number of X(t, τ, C, [(0, v)]) ∈ ∂R1(µ) is equal to gcd(d, Q1, Σ_{i=1}^n C_{τ(i)} + v).

Proof. By relabeling the poles if necessary, we may assume that τ = Id. By plumbing the two nodes of X with the prong-matching (0, v), we obtain a flat surface X(0, v) ∈ R1(µ), whose saddle connections are described by Proposition 6.13; see also Figure 11. The index of the saddle connection α′_0 is equal to Q1. The index of the saddle connection β′_1 is equal to Q1 + Q2 + Σ_{i=t+1}^n D_i − v = Q1 + Σ_{i=1}^n D_i − v. Since the two curves α′_0 and β′_1 form a symplectic basis of H1(X(0, v), Z), the rotation number of X(0, v) is equal to gcd(d, Ind α′_0, Ind β′_1) = gcd(d, Q1, Q1 + Σ_{i=1}^n D_i − v) = gcd(d, Q1, Σ_{i=1}^n C_i + v). □

In particular, if t = n, then the rotation number of X = X(Id, C, [(0, v)]) is equal to gcd(d, Q1, v).

7.2. The principal boundary of hyperelliptic components. If C is a hyperelliptic connected component of R1(µ), then the multi-scale differentials in the boundary of C must satisfy properties that come from the hyperelliptic involution. In particular, in the case of multi-scale differentials X(τ, C, P r) containing all poles in the top level component, we can determine whether such a differential is contained in the boundary of a hyperelliptic or a non-hyperelliptic component by the following propositions.

Proposition 7.2. Let C be a hyperelliptic component of R1(µ) with the ramification profile P fixing two or three marked points. After relabeling the poles and the saddle connections so that τ = Id, X(Id, C, P r) ∈ ∂C satisfies the following:
• P(i) = −i. In particular, bi = b−i.
• Ci + C−i = bi for each i ∈ Z/nZ. In particular, Q1 = Q2 = a/2.
• The prong-matching class P r is represented by (0, d_{n−1}) ∈ Z/Q1Z × Z/Q2Z.
Conversely, if a multi-scale differential X(Id, C, P r) ∈ ∂R1(µ) is given by data satisfying the above conditions, it lies in the boundary of some hyperelliptic component of R1(µ) with ramification profile P.

Proof. The top level component X0 has the hyperelliptic involution σ0 that interchanges the two zeroes. By relabeling the saddle connections on X0, we may assume that σ0 fixes the pole p_{τ(n)}. By relabeling the poles, we may assume that τ = Id. For each i, σ0 sends the saddle connection αi to α_{n−1−i}, so it also sends the pole pi to the pole p_{n−i}. Therefore P(i) = −i. Moreover, the angle 2πCi between α_{i−1} and αi at s1 is equal to the angle 2πD_{n−i} between α_{n−i−1} and α_{n−i} at s2. Therefore, C_{−i} = b_{−i} − D_{−i} = bi − Ci. The prong v^−_1 is sent to w^−_{Q2} by σ_{−1}, and the prong v^+_0 is sent to w^+_{d_{n−1}} by σ0, as the saddle connection α0 is sent to α_{−1}. Since the prong-matching is compatible with the involutions σ0 and σ_{−1}, the prong-matching class is represented by (0, d_{n−1}).

Conversely, if X = X(Id, C, P r) is given by data satisfying the conditions in the proposition, then it is immediate that the top level component X0 has an involution σ0 compatible with P. The bottom level component X−1 ∈ H0(a, −Q − 1, −Q − 1) also has an involution σ−1, and the prong-matching P r is compatible with the involutions σ0 and σ−1. So X is contained in the boundary of the hyperelliptic component with ramification profile P by Lemma 5.3. □

Remark 7.3. Under the assumptions of the above proposition, the number of fixed marked points of the component C is determined by the parity of n. If n is even, then P fixes three marked points. If n is odd, then P fixes two marked points.

Proposition 7.4. Let C be a hyperelliptic component of R1(µ) with ramification profile P fixing one marked point. After relabeling the poles and the saddle connections so that τ = Id, X(Id, C, P r) ∈ ∂C satisfies the following:
• P(i + 1) = −i. In particular, b_{i+1} = b_{−i}.
• C_{i+1} + C_{−i} = bi for each i ∈ Z/nZ. In particular, Q1 = Q2 = a/2.
• The prong-matching class is represented by (0, 0) ∈ Z/Q1Z × Z/Q2Z.
As already mentioned in the proof, the stratum R1(2, −2) is connected. For n > 1, up to relabeling the poles, there are two ramification profiles of (2n, −2n), determined by the number of fixed marked points. A ramification profile P can fix one or three (resp. two or four) marked points if n is even (resp. odd). In Section 8, we will prove that hyperelliptic components of strata are classified by ramification profile. Thus R1(2n, −2n) for n > 1 has exactly two (hyperelliptic) connected components, up to relabeling the poles. It is easy to see that the rotation number is two if and only if P fixes one or three marked points. The strata R1(2r, −2r) and R1(2r, −r, −r). In these two cases, the connected components with rotation number r are the exceptions. For rotation numbers other than r, we can follow the general strategy in the next subsection. Proposition 7.6. Let C be a connected component of R1(2r, −2r) with rotation number r. Then C is hyperelliptic. This is an immediate consequence of [2], since R1(2r, −2r) = H1(2r, −2r) and the rotation number of the hyperelliptic component of H1(2r, −2r) is equal to r. We present below another proof more in the spirit of the current paper, using the description of the principal boundary of R1(2r, −2r). Proof. The stratum has two marked points, so by Proposition 6.4, we can always find a flat surface in C with a multiplicity one saddle connection. So there exists a multi-scale differential X(Id, C1, P r) ∈ ∂C. Let (0, v) ∈ P r for some 0 ≤ v ≤ D1 − 1. By Proposition 7.1, we have r = gcd(2r, C1, v). In particular, r divides both C1 and v. Since 1 ≤ C1 ≤ 2r − 1, we have C1 = D1 = r. Since 0 ≤ v ≤ r − 1, we have v = 0. By (cid:3) Proposition 7.2, C is then a hyperelliptic component. Proposition 7.7. Let C be a connected component of R1(2r, −r, −r) with rotation number r. Then C is hyperelliptic. 33 For completeness, we give two proofs, the first using the principal boundary as above, and the second just investigating the geometry of the situation directly. First proof. The stratum has three marked points, so by Proposition 6.4, we can always find a flat surface in C with a multiplicity one saddle connection. So there exists a multi-scale differential X(τ, C, P r) ∈ ∂C. Let (0, v) ∈ P r for some 0 ≤ v ≤ D1 + D2 − 1. By relabeling the saddle connections, we may assume that τ = Id. By Proposition 7.1, we have r = gcd(r, C1 + C2, v). In particular, r divides both C1 + C2 and v. Since 2 ≤ C1 + C2 ≤ 2r − 2, we have C1 + C2 = r. Therefore, C1 = r − C2 = D2 and C2 = r − C1 = D1. (cid:3) Since 0 ≤ v ≤ r − 1, we have v = 0. By Proposition 7.4, C is a hyperelliptic component. Second proof. In fact, we can see that the connected component H1(2r, −r, −r) with rotation number r, which is unique by [2], is hyperelliptic. Consider the map φr : PH1(2, −1, −1) → PH1(2r, −r, −r) defined by f dz → f rdz. The stratum H1(2, −1, −1) is connected and hyperelliptic. Since φr preserves zeroes and poles of the differential, it also preserves hyperellipticity of flat surfaces. The dimensions of PH1(2, −1, −1) and PH1(2r, −r, −r) are equal, so the image of φr must be a hyperelliptic component of PH1(2r, −r, −r) and the rotation number is equal to r. However, by [2], there exists unique connected component of rotation number r. Thus R1(2r, −r, −r) ⊂ H1(2r, −r, −r) and we can conclude that any connected component of (cid:3) R1(2r, −r, −r) with rotation number r is hyperelliptic. The stratum R1(12, −34). 
This is the strangest case, since R1(12, −34) has two non-hyperelliptic compo- nents C1 3 with rotation number 3. In order to prove this, we need to describe the projective structure of R1(12, −34) given by the period coordinates in full detail. This will be given in another paper [15] with G. Tahar. Here we give an upper bound of the number of connected components. Proposition 7.8. The stratum R1(12, −34) has at most two non-hyperelliptic components with rotation number 3. 3 and C2 Proof. Let X = X(τ, C, [(0, v)]) be a two-level multi-scale differential of rotation number 3, not contained in the boundary of some hyperelliptic component. Then Q1 = i Ci = 6. So {Ci} = {1, 1, 2, 2}. By relabeling If Cτ (3) = 1, then X is always hyperelliptic, a the saddle connections, we may assume that Cτ (1) = 1. P If Cτ (2) = 1, then we have v = 3 because otherwise v = 0 and X is contradiction. Thus Cτ (3) = 2. hyperelliptic. Similarly if Cτ (2) = 2, then we have v = 0. Therefore by relabeling the saddle connections again, we may assume that Cτ (1) = Cτ (2) = 1, Cτ (3) = Cτ (4) = 2 and v = 3. Under this assumption, X is only determined by the permutation τ , so we denote it by Xτ . By relabeling the nodes, we obtain XId = X(14)(23). Also, we can have T (0,2) T (3,0) 1 XId = X(132) ∼ XId. By symmetry, we can also have X(423) ∼ XId. Therefore, for any τ ∈ Alt4, Xτ ∼ XId. Thus we can conclude (cid:3) that there are at most two non-hyperelliptic components of R1(12, −34) with rotation number r = 3. The stratum R1(2n + 2, −2n−1, −4) for odd n and r = 2. In fact, this case satisfies Theorem 1.7 although we cannot use the same strategy in the next subsection. So we prove that there exists a unique non-hyperelliptic component C2 of rotation number 2, directly for this case. Proposition 7.9. The stratum R1(2n + 2, −2n−1, −4) for odd n has a unique non-hyperelliptic component with rotation number 2. 2 Proof. By relabeling the poles, we may assume bn = 4. Let X = X(τ, C, [(0, v)]) be a two-level multi-scale differential of rotation number 2, not contained in the boundary of some hyperelliptic component. Then C = (1, . . . , 1, 2). By relabeling the saddle connections, we may assume that τ (n) = n. If v = n − 1, then X is hyperelliptic, a contradiction. Under this assumption, X is determined by τ and an even number 0 ≤ v < n − 1, so we denote it by Xτ,v. We have T (2,n−v) XId,v = Xτ,0 for τ = (12 . . . v + 1). 2 +1),v. Therefore we have Xτ,0 = XId,0 for each τ = (i, i+1), XId,v = T 1 ≤ i ≤ n − 2. Thus R1(2n + 2, −4, −2n−1) has a unique non-hyperelliptic component with rotation number (cid:3) 2. T (−1,v+1) 2 Note that T 2 −1, v ( v 1 ( v 2 +1, v 2 T (v,1) 1 X( v 2 −1) 2 +1) 1 v 2 We call the strata introduced in this subsection by special strata. They are the only exceptions for the proof of existence and uniqueness of non-hyperelliptic component in the next subsection. Three non-hyperelliptic 3 of R1(12, −34) and C2 of R1(2n + 2, −2n−1, −4) for odd n) components introduced in this subsection (C1 are called special connected components. 3 , C2 34 7.4. Classification of non-hyperelliptic components. Now we will prove Theorem 1.7 for single-zero strata. Throughout this subsection, we suppose that R1(µ) is none of the strata that is dealt with in Section 7.3. Suppose that C is a non-hyperelliptic component of R1(µ). By Theorem 6.1, we can always find a flat surface X ∈ C with a multiplicity one saddle connection. 
By shrinking the multiplicity one saddle connection, we can obtain a multi-scale differential X = X(τ, C, P r) ∈ ∂C by Proposition 4.2. Conversely, any multi-scale differential of the form X(τ, C, P r) ∈ ∂C can be obtained by shrinking a multiplicity one saddle connection of a flat surface in C. Let r|d where d = gcd(b1, . . . , bn) as usual. We will explicitly construct a multi-scale differential in ∂R1(µ) with rotation number r that is not contained in the boundary of hyperelliptic components. This proves the existence of a non-hyperelliptic component with rotation number r. Proposition 7.10. Let R1(µ) be a genus one single-zero stratum, not one of special strata treated in Sec- tion 7.3. Suppose that n > 1 and r|d. There exists a multi-scale differential X(τ, C, P r) ∈ ∂R1(µ) with rotation number r, that is not contained in the boundary of any hyperelliptic component. 2 2 P i Ci < a Proof. It suffices to show that there exists a combinatorial data C = (C1, . . . , Cn), 1 ≤ Ci ≤ bi − 1, such that 2 and r|Q1. Then we can construct a multi-scale differential X(Id, C, [(0, r)]) with rotation Q1 = number r. Since Q1 6= Q2 = a − Q1, by Lemma 5.3, X is not contained in the boundary of any hyperelliptic component. Given a number n ≤ Q ≤ a − n, we can always find C such that i Ci = Q. So we need to find Q satisfying n ≤ Q ≤ a − n, Q < a 2 and r|Q. r is odd and take Q := a−r Q ≤ 2Q − n = a − r − n < a − n follows. First, suppose that r > 2. Then a = i. So Q ≥ (n − 1) r a ≥ 2n + 2 and thus Q ≥ 2n+2−2 /2. It is sufficient to prove n ≤ Q, because then i bi ≥ rn since r|bi for each 2 ≥ n since n > 1. Now suppose that r = 2. Since bn > 2 and 2|bi for each i, we have = n. Finally when r = 1, then again a ≥ 2n + 1 and Q ≥ 2n+1−1 Assume that a 2 = r a r − 1 = n. P P (cid:0) (cid:1) a r − 2 2 − r = r Assume that a r is even and take Q := a /2. Again, we need to prove n ≤ Q. First, suppose that r > 3. Since r|bi for each i, we have a ≥ rn. So Q ≥ (n − 2) r 2 ≥ n if n > 3. If n ≤ 3, then a ≥ 4r since a r is even and µ = (2r, −r, −r) is excluded. Thus Q ≥ r > n. Now suppose that r = 1. Then a is even and 2 − 1 = n. thus a = i bi ≥ 2n + 2 since bn > 2. So Q ≥ 2n+2 Suppose that r = 3, and assume the contrary that Q < n. Then 3n−6 P 2 ≤ Q < n and thus n < 6. If n = 5, then since a 3 is even, we have a ≤ 18 and thus Q ≥ 5. If n = 3, then similarly a ≥ 12 and Q 6= 3. If n ≤ 2, then since µ 6= (6, −6) or (6, −3, −3), we have a ≥ 12 and Q ≥ 2. So we have n = 4 and 3 ≤ Q < n, thus a = 12 and µ = (12, −34), which is excluded by assumption. (cid:1) (cid:0) Finally, suppose that r = 2, and assume the contrary that Q < n. Since µ 6= (2n, −2n), we have a ≥ 2n+1. 2 ≤ Q < n and thus Q = n − 1. Since 2|Q, n is odd. Thus a = 2n + 2 and µ = (2n + 2, −2n−1, −4) (cid:3) Then 2n−3 for odd n, which is excluded by assumption. It only remains to prove the uniqueness of non-hyperelliptic component of R1(µ) with given rotation number r|d. Theorem 7.11. Let R1(µ) be a genus one single-zero stratum, not one of special strata treated in Section 7.3. Suppose that n > 1 and r|d. There exists a unique non-hyperelliptic component Cr of R1(µ) with rotation number r. Proof. By Proposition 7.10, there exists a non-hyperelliptic component of R1(µ) with rotation number r. By Theorem 6.1, we can find a multi-scale differential X = X(τ, C, P r) ∈ ∂R1(µ). We denote by X ∼ X′ if X and X′ := X(τ ′, C′, P r′) are contained in the boundary of the same connected component of R1(µ). 
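As an aside, the level rotation action recalled in Remark 6.10, which the navigation below uses constantly, can be made concrete: Z acts on Z/Q1Z × Z/Q2Z by k · (u, v) = (u + k, v − k), and the orbits are exactly the prong-matching equivalence classes, of which there are gcd(Q1, Q2). The following sketch is illustrative only; the helper name is ours and it plays no role in the proof.

from math import gcd

def prong_matching_classes(Q1, Q2):
    # Orbits of the level rotation action k.(u, v) = (u + k, v - k)
    # on Z/Q1 x Z/Q2 (cf. Remark 6.10).
    seen, classes = set(), []
    for u in range(Q1):
        for v in range(Q2):
            if (u, v) in seen:
                continue
            orbit = {((u + k) % Q1, (v - k) % Q2) for k in range(Q1 * Q2)}
            seen |= orbit
            classes.append(orbit)
    return classes

# Example: Q1 = 4, Q2 = 6 has gcd(4, 6) = 2 equivalence classes.
assert len(prong_matching_classes(4, 6)) == gcd(4, 6)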
Our goal is to prove that X ∼ X′ for any pair X, X′ of non-hyperelliptic multi-scale differentials with rotation number r. To this end, we need to be able to show that all combinatorial data of a multi-scale differential can be changed to another one within the connected component, and we do it step by step as follows to achieve the goal. The following statements holds for non-hyperelliptic multi-scale differentials X and X′ with rotation number r. • X ∼ X′′ for some X′′ satisfying 0 < Q′′ • X ∼ X′′′ for some X′′′ satisfying 0 < Q′′′ • X ∼ X′ whenever τ = τ ′, Q1 = Q′ 2 − Q′′ 2 − Q′′′ 1 . See Proposition 7.15. 1 ≤ 2r. See Proposition 7.17. 1 and P r = P r′. See Proposition 7.19. 35 • X ∼ X′ whenever τ = τ ′. See Proposition 7.20. • X ∼ X′. See Proposition 7.21. The rest of this section is devoted to proving various ingredients that we need to prove each step for Theorem 7.11. We fix X = X(τ, C, P r) ∈ ∂C as in the statement, and use the operators T1, T2 defined in Section 6 to navigate in the boundary of C. We will also denote X′, X′′, and X ′, X ′′, etc for other elements of ∂C and write their combinatorial data correspondingly. Following two lemmas provide useful tool to connect multi-scale differentials in ∂C given by combinatorial data. Lemma 7.12. Let X = X(t, Id, C, [(u, v)]). Suppose that ct−1 < u ≤ ct and dj−1 ≤ v < dj. If Dt, Cj ≥ 2, let X ′ = X(t, Id, C′, [(u, v)]) be a multi-scale differential given by (cid:3) i =  C′  Ct + 1 Cj − 1 Ci if i = t if i = j otherwise. Then X ′ ∼ X.  Similarly if Ct, Dj ≥ 2, let X ′′ = X(t, Id, C′′, P r) be a multi-scale differential given by Then X ′′ ∼ X. Ct − 1 Cj + 1 Ci if i = t if i = j otherwise i =  C′′   In other words, given a prong-matching satisfying the conditions above, we can increase (decrease) Ct and decrease (increase, resp) Cj by one, while any other data are unchanged, in the same connected component. Proof. If Dt, Cj ≥ 2, it is straightforward to check that T (u−1,v+1) Ct, Dj ≥ 2, we have T (u+1,v−1) X ′′ = T (u,v) 2 X. So X ′′ ∼ X. 1 2 X ′ = T (u,v) 1 X. So X ′ ∼ X. Also if (cid:3) Lemma 7.13. Let X = X(t, Id, C, [(0, v)]) for some t < n and dj−1 < v < dj for some j. Then there exists 1 = Q1 + bt+1 and Q′ X ′ = X(t + 1, Id, C′, P r′) ∼ X such that Q′ 2 = Q2. Proof. We can take X ′ = T (v−dj−1−1,dt−v+1) conditions. 2 T (0,v) 1 X and it is straightforward to see that X ′ satisfies the (cid:3) Lemma 7.14. Let X satisfy Q1 < Q2. Then X has a prong-matching (ci, v) ∈ P r for some i, such that v 6= dj for any j. Proof. Let S := {(ci, dj) ∈ P r|i, j = 1, . . . , n} and q := gcd(Q1, Q2). For each dj , there are Q1 matching given by (u, dj) ∈ P r for some u. So |S| ≤ n Q1 such that dj < v < dj+1, then |S| = n Q2 contradiction. q prong- q . If there exists no prong-matching (ci, v) ∈ P r q = |S|, a (cid:3) q . However, since Q1 < Q2, we have |S| ≤ n Q1 q < n Q2 We are now ready to prove the first step of the proof in Theorem 7.11. Proposition 7.15. Let C be a non-hyperelliptic component of a genus one single-zero stratum R1(µ), not one of special strata treated in Section 7.3. Then there exists a multi-scale differential X(τ, C, P r) ∈ ∂C such that Q1 < Q2. Proof. Assume the contrary — that every X = X(τ, C, P r) ∈ ∂C satisfies Q1 = Q2 = a 2 . Fix a multi- scale differential X. We will navigate the boundary of C using X to show that R1(µ) is one of the strata dealt in Section 7.3, a contradiction. We can find a prong-matching (ci, di + v) ∈ P r, v 6= 0. Otherwise X is hyperelliptic by Lemma 6.17, a contradiction. 
We can further assume that (ci, di + v) is chosen so that |v| > 0 is minimal among all such prong-matchings in P r. By relabeling the poles, the nodes and 36 1 − Q′ 2 − Q′ the saddle connections, we may assume that τ = Id, i = 0 and v > 0. So (0, v) ∈ P r. If v > b1, then (c1, d1 + (v − b1)) ∈ P r with 0 < v − b1 < v. This contradicts the minimality of |v|, so v ≤ b1. Then (c1, d1 − (b1 − v)) ∈ P r with 0 ≤ b1 − v. If v < b1, then v ≤ b1 − v by the minimality of |v|, thus v ≤ b1 2 . i−1 j=1 Cj = 1 X ∈ ∂C. This satisfies C′ 2 . If v < D1, then we can take X′ = T (0,v) First, suppose that v ≤ b1 i = Ci for any i 6= 1. Therefore, Q′ 1 X ∈ ∂C, satisfying Q′ 1 = C1 +v and C′ 2 = (Q1 + v) − (Q2 − v) = 2v > 0. Similarly, if v < C1, we can take X′ = T (v,0) 1 = 2v > 0. This contradicts to the assumption, so v = C1 = D1 = b1 2 . If Di = C2−i for each i = 1, . . . , n, then C is hyperelliptic by Lemma 6.17, a contradiction. Let i > 1 be i−1 j=1 D2−j = Q2 − dn+2−i + D1, the smallest such that Ci 6= D2−i or Di 6= C2−i. Since ci−1 = P we have (ci−1, dn+2−i) ∈ P r. We take X ′ = T (ci−1,dn+2−i) 2 = a 2 − ci−1 and (0, 0) ∈ P r′. The bottom level component X ′ −1 has 2i − 3 marked poles pn−i+3, . . . , pn, p1, . . . , pi−1. Suppose Ci < D2−i. The other possible cases (Ci > D2−i, Di < C2−i, or Di > C2−i) can be treated in a similar way. We have a prong-matching (Ci − 1, −Ci + 1) ∈ P r′. If Ci > 1, then can apply Lemma 7.12 to reduce Ci and increase C2−i by one. So we may assume Ci = 1. Then we take X ′′ = T (1,−1) 1 = a 2 , Q′′ −1 has only one marked pole pi. After relabeling the saddle connections so that τ ′′(1) = 1, we have (C1 − ci−1 − 1, ci−1) ∈ P r′′. Since Cj = D2−j and Dj = C2−j for all j < i, we have (0, C1 − 1) ∈ P r′′. The process of obtaining X ′ and X ′′ are depicted in each row of Figure 15. 2 > 1, thus 0 < C1 − 1 < c1. By Lemma 7.13, we can obtain some X′′′ ∈ ∂C 2 = 2bi > 0, a contradiction. If b1 = 2, then C1 = 1 and (0, 0) ∈ P r′′. In this case, we take 2 = 2 > 0, a contradiction. The process of obtaining X′′′ for two cases 2 − bi. The bottom level component X ′′ If b1 > 2, then C1 = b1 1 − Q′′′ with Q′′′ X′′′ = T (0,0) 1 X ′′ satisfying Q′′′ 1 − Q′′′ are depicted in each row of Figure 16. X ∈ ∂C, satisfying Q′ X ′ satisfying Q′′ 1 = Q′ 2 = a P 2 2 Now we suppose that v = b1. Then (c2, d2 − b2) ∈ P r. By the minimality of |v|, we have b1 = v ≤ b2. Suppose that C1 ≥ D2 and D1 ≥ C2. Then b1 = C1 + D1 ≥ C2 + D2 = b2, thus b1 = b2. That is, C1 = D2 and D1 = C2. If Ci = D3−i for each i = 1, . . . , n, then C is hyperelliptic by Lemma 6.17, a contradiction. Let i > 2 be the smallest such that Ci 6= D3−i or Di 6= C3−i. We can repeat the argument in the previous paragraph, to get a contradiction. So we suppose C1 < D2 or D1 < C2 holds. Suppose C1 < D2. The other possible case D1 < C2 can be treated in a similar way. If C1 > 1, we can apply Lemma 7.12 with the prong-matching (c1 − 1, d1 + 1) ∈ P r to reduce C1 and increase C2 by one. So we may assume C1 = 1. Since d1 < b1 = C1 + D1 < d2, we have a prong-matching (0, v) ∈ P r such that d1 < v < d2. We take X ′ = T (0,v) 0 contains only two marked poles p1, p2. Assume that q′ := gcd(b1, b2) < b1. If b1 > 2, then q′ ≤ b1 2 < b1 − 1. By the level rotation action, we have (c2, q′) ∈ P r′ with 0 < q′ < d′ X ′. If b1 = 2, then C1 = D1 = 1, q′ = 1 and b2 > 2. Therefore, (c2 + 1, 0) ∈ P r′ and we take X′′ = T (c2,q X ′. In both 1 = Q1 + q′, a contradiction. Therefore gcd(b1, b2) = b1, and thus cases, X′′ satisfies Q′′ b1|b2. 
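The level rotation action on prong-matchings is used repeatedly in the argument above. As a sanity check, the following toy model, with our own sign and modulus conventions that are not taken from the paper, reproduces two facts used in this section: that (0, v) ∈ P r forces (c1, v − C1) ∈ P r, and that (0, v) ∈ P r forces (0, v + kq) ∈ P r for q = gcd(Q1, Q2).

```python
# A toy model (our own conventions, not taken from the paper) of the level rotation
# action on prong-matchings: the class P r of (u, v) is modelled as the orbit of one
# representative under (u, v) -> (u + 1, v - 1), with u read mod Q1 and v mod Q2.
from math import gcd

def prong_class(Q1, Q2, u0, v0):
    size = Q1 * Q2 // gcd(Q1, Q2)            # the orbit has lcm(Q1, Q2) elements
    return {((u0 + k) % Q1, (v0 - k) % Q2) for k in range(size)}

# Example: Q1 = 4, Q2 = 6, representative (0, 1), and c1 = C1 = 1.
Q1, Q2, C1, v = 4, 6, 1, 1
q = gcd(Q1, Q2)
P = prong_class(Q1, Q2, 0, v)
assert (C1 % Q1, (v - C1) % Q2) in P                      # (0,v) in Pr => (c1, v - C1) in Pr
assert all((0, (v + k * q) % Q2) in P for k in range(3))  # (0,v) in Pr => (0, v + kq) in Pr
print(len(P), sorted(P))                                  # 12 prong-matchings in the class
```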
We take X ′′′ = T (0,v) 2 = Q2 − b1 and (1, 0) ∈ P r′′′. The bottom level component X ′′′ 1 = D2 − 1 > 1. 2 − Q′′′′ Consider a prong-matching (0, 1) ∈ P r′′′. By Lemma 7.13, there exists X′′′′ ∈ ∂C with Q′′′′ 1 = 2b1 > 0, a contradiction. So we reduce to the case when D2 = 2 and C1 = 1. In particular, D2 − C1 = 1. −1 contains only one marked pole p1. Suppose that D2 > 2. Then d′′′ 2 = b1 and (c2, 0) ∈ P r′. The top level component X ′ 1 = b1 − 1. We take X′′ = T (c2,q 1 X ∈ ∂C, satisfying Q′′′ 2 = Q2 − q′ and Q′′ 2 X satisfying Q′ 1 = Q1, Q′′′ 1 = b2, Q′ 2 2 ) ) ′ ′ Since we have b1|b2, C2 = b2 − 2 ≥ b1 − 2 = D1 − 1. If C2 = D1, then b2 = b1 + 1, thus b1 = 1. This is a contradiction. So C2 > D1 or C2 = D1 − 1. If C2 > D1, then by the same argument as in the previous paragraph, we can deduce that C2 − D1 = 1. So b2 = b1 + 2, thus b1 = 2 and b2 = 4. If C2 = D1 − 1, then b1 = b2. We will prove that bi = b1 for i = 3, . . . , n in both cases. Consider again X ′′′ in the previous paragraph. Note that gcd(Q′′′ 2 ) divides b1. So by the level rotation action, we obtain 1 + 1 = b2 is divisible by b1. If D3 > 1, then d′′′ 1 , 2) ∈ P r′′′. (c′′′ 1 + 2, 0) ∈ P r′′′ since c′′′ 2 − Q′′′′ By Lemma 7.13, we obtain X′′′′ satisfying Q′′′′ 2 = 2. Consider a 2 ) ∈ P r′′′. We take X ′′′′′ = T (c prong-matching (c′′′ 2 = b1. The top level component X ′′′′′ contains only two marked poles p1 and p3. By repeating the same argument applied to X ′ as in the above, we can deduce that b3 = b1 and there exists a prong-matching (c′′′ 2 , 3) ∈ P r′′′. By repeating this, we can conclude that bi = b1 and Di = 1 for each i = 3, . . . , n. 1 = 2b1 > 0. So D3 = 1 and d′′′ X ′′′, satisfying Q′′′′′ 2 = 1 + D3 > 2 and (c′′′ 2 ) = gcd(b1, a 1 = b3, Q′′′′′ 1 , d′′′ 0 1 , Q′′′ ′′′ 1 ,d ′′′ 2 ) 2 37 i − 2 D π 2 p2−i ... p1 ... X ′ := T (ci−1,dn+2−i) 2 X ... p2−i 2 π pi . . . p1 . . . X ′′ := T (1,−1) 2 X ′ i − 2 D π 2 ... p2−i = pi Ci π 2 z X s(ci−1, dn+2−i) Q1 = Q2 t = n 2π ... = ... p2−i . . . p1 ... p1 pi Ci π 2 z X ′ −s−1 (0, 0) ′ ′ 1 = Q Q 2 ′ t = n − (2i − 1) z 2π 2π pi z X ′ −s−1 (1, −1) X ′′ s(C1 − ci−1 − 1, ci−1) Q ′′ ′′ 1 − Q 2 = bi ′′ = n − 1 t Figure 15. Proof of Proposition 7.15, part I 1 − Q′ If b1 > 2, then bi = b1 for each i and µ = (nb1, −bn 2 = b1, we have b1 = 2(n−1) 2 = n − 1. Since Q′ n−2 ). The only integer solution for this equation is n = 4, b1 = 3. Therefore, µ = (12, −34) and the rotation number of C is equal to gcd(d, Q1, v) = 3. This is a contradiction. If b1 = 2 and b2 = 4, we have µ = (2n + 2, −2n−1, −4). Obviously we have c1 = d1 = 1, c2 = d2 = 3 and ci = di = i + 1 for all i > 2. If n is even, then Q1 = Q2 = n + 1 is odd. We have a prong-matching 2 + 1, n ( n 2 + 2) ∈ P r by the level rotation action. This contradicts the minimality of |v|, thus n is odd and (cid:3) the rotation number of C is equal to gcd(d, Q1, v) = 2. This is a contradiction. 1 = (n − 1)(b1 − 1) and Q′ 1 ). In this case Q′ To proceed to the second step in the proof of Theorem 7.11, we need the following lemma. Lemma 7.16. Let X ∈ ∂C and Q2 − Q1 > 0. Choose a prong-matching (ci, di + v) ∈ P r with minimal |v| > 0. If |v| < Q2−Q1 1 = Q2 − Q1 − 2|v|. This lemma states that we can reduce Q2 − Q1 > 0 by 2|v| whenever we have |v| < Q2−Q1 , then there exists X′ ∈ ∂C such that 0 ≤ Q′ 2 − Q′ 2 . 2 Proof. Assume that v > 0. The case v < 0 can be proven in a symmetric way. By relabeling the poles and the saddle connections, we may assume that X = X(Id, C, [(0, v)]). By the level rotation action, (c1, d1 + (v − b1)) ∈ P r. 
By the minimality of |v|, we have v ≤ b1. 38 2π(C1 − 1) p1 2π(C1 − 2) 2π p1 ... Lemma 7.13 ... 2πC1 z pi 2πDi pi 2π(C1 + 1) 2π(Di − 1) z X ′′ s(0, C1 − 1) X ′′′ −(s+1)−1 (1 − Di, C1 − 2) C1 > 1, X ′′′ := T (Di−1,C1−2) 2 T (0,C1−1) 1 X ′′ ... 2π p1 2π z ... = pi 2 π 2π pi 2π(bi − 1) X ′′ s(0, 0) C1 = 1, X ′′′ := T (0,0) 1 X ′′ Figure 16. Proof of Proposition 7.15, part II Q ′′′ ′′′ 1 − Q 2 = 2bi > 0 ′′′ t = n 1) − 2π 2 π (b i p1 2π z X ′′′ s+1(0, −bi + 1) Q ′′′ ′′′ 2 = 2 > 0 1 − Q ′′′ = n t 2 − Q′′ 2 − Q′ 1 = v. The bottom level component X ′′ First, suppose that v = b1. Then (c1, d1) ∈ P r. Also (c2, d2 − b2) ∈ P r. By minimality of |v|, we have v = b1 ≤ b2. Assume that C1 < D2. Then d1 < v < d2 and we can take X ′′ = T (0,v) 1 X, satisfying Q′′ −1 contains only one pole p1. By Lemma 7.14 and Lemma 7.13, we can obtain X′ ∈ ∂C with Q′ 1 = Q2 − Q1 − 2v. This is the end of the proof, so we may assume that C1 ≥ D2. By Lemma 7.12 with (c1 − 1, d1 + 1) ∈ P r, if D2 > 1, then we can reduce D2 and increase D1 by one. So we may assume D2 = 1. If C1 = 1, then (0, d2) ∈ P r. We take X ′′ = T (0,d2) X, satisfying Q′′ −1 contains two marked poles p1, p2. Since Q′′ 1 = Q2 − Q1 + (b2 − b1) > 0, we can apply Lemma 7.14 and Lemma 7.13 to obtain a multi-scale differential X ′′′ ∈ ∂C, satisfying Q′′′ −1 contains only one marked pole p1. Again, since Q′′′ 1 = Q2 − Q1 − v > 0, we can apply Lemma 7.14 and Lemma 7.13 and obtain a desired multi-scale differential X′ ∈ ∂C, satisfying Q′ 1 = Q2 − Q1 − 2v. Finally, we reduced to the case when D2 = 1 and C1 > 1. 2 = Q2 − b1. The bottom level component X ′′ 2 = Q2 − b1. The bottom level component X ′′′ 1 = Q1 − b2, Q′′ 2 − Q′′ 1 = Q1, Q′′′ 2 − Q′′′ 2 − Q′ 1 We use the induction on C1 > 1. Consider a prong-matching (c1 − 2, d2 + 1) ∈ P r. If Di = 1 for each i = 2, . . . , n, then Q2 − Q1 ≤ d2 − c2 = D1 + 1 − C1 − C2 = (D1 + 2 − C1) − b2 ≤ b1 − b2 ≤ 0, a contradiction. Let i be the smallest number larger than 2 such that Di > 1. We use the induction on i to reduce C1. Consider a prong-matching (c1 − (i − 2), d1 + (i − 2)) = (c1 + 2 − i, di−1) ∈ P r. There exists j ≤ i − 4 that cn−j−1 ≤ c1 + 2 − i < cn−j. If Dn−j > 1, then we can use Lemma 7.12 to increase Cn−j and decrease Ci−1 39 by one. So we assume that Dk = 1 for all k = n − j, . . . , n. If cn−j−1 < c1 + 2 − i, then cn−j−1 ≤ c1 + 1 − i. So by Lemma 7.12, we can decrease Cn−j and increase Ci by one. That is, we again have Dn−j > 1. So we assume that cn−j−1 = c1 + 2 − i. Now consider (v + j + 1, n − j − 1) = (v + j + 1, dn−j−1) ∈ P r. Since c2 = C1 + C2 = C1 + b2 − 1 ≥ C1 + b1 − 1 > b1 = v, we have c2 < v + j + 1 < cj+2 ≤ ci−2. There exists 2 < k < i such that ck−1 ≤ v + j + 1 < ck. So by Lemma 7.12, we can increase Cn−j−2 and decrease Ck by one. That is, now Dk > 1 and we can use the induction hypothesis. 2 . If v < D1, then we take X′ = T (0,v) 2 − Q′ Now, we suppose that v < b1. Then (c1, d1 − (b1 − v) ∈ P r and b1 − v ≥ v by the minimality of |v|. So v ≤ b1 1 X ∈ ∂C. In both cases, X′ satisfies Q′ 2 . Again, we take X ′′ = T (0,v) −1 contains only one marked pole p1. By Lemma 7.14, Lemma 7.13, we can obtain a desired multi-scale differential X′ ∈ ∂C, (cid:3) satisfying Q′ 1 X. Similarly, if v < C1, we take X′ = T (v,0) 1 = Q2 − Q1 − 2v. So we may assume that v = C1 = D1 = b1 1 = Q1 − v, Q′′ 2 = Q2 − v. The bottom level component X ′′ 1 X, satisfying Q′′ 1 = Q2 − Q1 − 2v. 2 − Q′ Now we are ready to prove step (2) of Theorem 7.11. Proposition 7.17. 
Let C be a non-hyperelliptic component of R1(µ) with rotation number r. Then there exists X = X(τ, C, P r) ∈ ∂C such that Q2 − Q1 ≤ 2r. Proof. By Proposition 7.15, there exists X ∈ ∂C such that Q1 < Q2. Suppose that X has minimal Q2−Q1 > 0 among all elements in ∂C with t = n. We will prove that Q2 − Q1 ≤ 2r. Let q = gcd(Q1, Q2) and choose a prong-matching (ci, di + v) ∈ P r with minimal |v| > 0. By relabeling the poles, the nodes and saddle connections, we may assume that Q1 < Q2, i = 0 and τ = Id. Thus (0, v) ∈ P r and the rotation number r is equal to gcd(d, Q1, v). If 0 < Q2 − Q1 − 2v, then by Lemma 7.16, we can further reduce Q2 − Q1 > 0 and this contradicts to the 2 . By level rotation action, we have (0, v + kq) ∈ P r for any 2 . We want to prove that v = r. minimality of Q2 − Q1 > 0. So v ≥ Q2−Q1 2 ≥ q integer k. By minimality of |v|, v = q or v = q First, suppose that v = q. Then we have a prong-matching (0, 0) ∈ P r. So (c1, d1 − b1) ∈ P r. If b1 is not divisible by q, then (c1, d1 − b′) ∈ P r when b′ > 0 is the remainder of b1 divided by q. This contradicts the minimality of |v|, so q|b1 and (c1, d1) ∈ P r. We can apply the same argument to each bi and conclude that q|bi for each i, thus q|d. Therefore r = gcd(d, Q1, v) = q = v. Now, suppose that v = q 2 . Then we have a prong-matching (0, q P r, a contradiction to the minimality of |v|. So v ≤ b1 and we have (c1, d1 − (b1 − q be the remainder of b1 − q the minimality of |v|. In any case, b1 is divisible by q argument to each bi and conclude that q Therefore v = r in any cases. So r = v ≥ q 2 ) ∈ P r. If v > b1, then (c1, d1 + (v − b1)) ∈ 2 )) ∈ P r. Let v′ ≥ 0 2 by 2 , so (c2, d2 − (v′ + b2)) ∈ P r. We can apply the same 2 = v. (cid:3) 2 divided by q. Then (c1, d1 − v′) ∈ P r and we must have v′ = 0 or v′ = q 2 |d. Therefore, r = gcd(d, Q1, v) = q 2 |bi for each i, thus q 2 ≥ Q2−Q1 and thus Q2 − Q1 ≤ 2r. 2 Since d|Q1 + Q2 , we have r| gcd(Q2, Q1) by Proposition 7.1 and the following discussion. If Q2 − Q1 ≤ 2r, then gcd(Q1, Q2) = gcd(Q2 − Q1, Q1) ≤ 2r and thus gcd(Q1, Q2) = r or 2r. Moreover, if gcd(Q1, Q2) = 2r, then we also have Q2 − Q1 = 2r. We need the following lemma for the third step in the proof of Theorem 7.11. Lemma 7.18. Let X(t, Id, C, [(0, v)]) ∈ ∂R1(µ) with rotation number r and suppose that Q1 6= Q2 and gcd(Q1, Q2) = r. For any choice of the integers 1 ≤ C′ i = Ci for each t < i ≤ n, we have X(t, Id, C′, [(0, v)]) ∼ X(t, Id, C, [(0, v)]). i ≤ bi − 1, such that i = Q1 and C′ t i=1 C′ P t Proof. The difference between X(t, Id, C, P r) and X(t, Id, C′, P r) can be measured by D := i=1 |Ci − C′ i|. It is obvious that D = 0 if and only if two multi-scale differentials are equal. If t = 1, then C1 = C′ 1 = Q1, P j and Ck > C′ thus D = 0. We use the induction on D. If D > 0, then we can find j, k such that Cj < C′ k. In particular, C′ j, Ck > 1. By relabeling the poles if necessary, we may assume that 1 = j < k ≤ t. If we can increase C1 and decrease Ck by one within C, the difference D is decreased and thus X(t, Id, C′, P r) ∈ ∂C by induction hypothesis. First, suppose that r > 2. We use the induction on k > 1. Assume that k = 2 and consider V := {v|(u, v) ∈ P r, 0 < u ≤ c1}. If v ∈ V for some d1 ≤ v < d2, then by Lemma 7.12, we can increase C1 and decrease C2 by one within C. So assume that v /∈ V for any d1 ≤ v < d2. Since gcd(Q1, Q2) = r, we have 40 C1 + D2 ≤ r. Similarly, if U := {u|(u, v) ∈ P r, 0 < v ≤ d1} and u /∈ U for any c1 ≤ u < c2, then D1 + C2 ≤ r. 
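As an aside, the rotation number bookkeeping used in the proof of Proposition 7.17 above, namely r = gcd(d, Q1, v) for X(Id, C, [(0, v)]) with d = gcd(b1, . . . , bn), is easy to tabulate; the following sketch (helper names ours) simply evaluates this gcd.

```python
# Sketch (names ours) of the rotation number formula used above for X(Id, C, [(0, v)]):
# r = gcd(d, Q1, v) with d = gcd(b_1, ..., b_n) and Q1 = C_1 + ... + C_n.
from math import gcd
from functools import reduce

def rotation_number(b, C, v):
    d = reduce(gcd, b)
    Q1 = sum(C)
    return gcd(gcd(d, Q1), v)

# Example: b = (4, 4, 4), C = (2, 2, 2), v = 0 gives r = gcd(4, 6, 0) = 2,
# while the prong-matching (0, 1) would give r = gcd(4, 6, 1) = 1.
print(rotation_number((4, 4, 4), (2, 2, 2), 0), rotation_number((4, 4, 4), (2, 2, 2), 1))
```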
Therefore b1 + b2 = C1 + D2 + D1 + C2 ≤ 2r, thus the equalities hold and C1 + D2 = D1 + C2 = b1 = b2 = r. Now we have C1 = b1 − D1 = r − D1 = C2 and similarly D1 = D2. Moreover, (0, d1), (c1, 0) ∈ P r. Therefore, C1 = D1 = C2 = D2 = r 2 . If we can apply Lemma 7.12 to increase C1 and decrease C3, and also to decrease C2 and increase C3, then we are done. If not, then by the argument as above, we must have C3 = D3 = r 2 . By repeating this to other poles on the top level component, we have Q1 = Q2 = rt 2 , a contradiction. Now assume that k > 2. If C2 > 1, then as above, we can increase C1 and decrease C2 by one. By induction hypothesis, we can now increase C2 and decrease Ck by one. This two steps correspond to increasing C1 and decreasing Ck by one, as desired. If C2 = 1, then C2 < b2 − 1 and we can decrease Ck and increase C2 by one, by induction hypothesis. Also we can increase C1 and decrease C2 by one, obtaining the desired result. Now suppose that r = 2. By level rotation action, we have (u, v + 2ℓ) ∈ P r for each ℓ and (u, v) ∈ P r. Since Ck ≥ 2, we can choose (0, v) ∈ P r such that ck−1 < v ≤ ck. Therefore by Lemma 7.12, we can increase (cid:3) C1 and decrease Ck by one. The following proposition gives step (3) in the proof of Theorem 7.11. It states that whenever Q1 is fixed, the connected component is independent of the choice of C. i ≤ bi−1 such that Proposition 7.19. Let X = X(τ, C, [(0, v)]) ∈ ∂R1(µ) with rotation number r, satisfying 0 < Q2 −Q1 ≤ 2r. i = Q1, we have X′ = X(τ, C′, [(0, v)]) ∼ X. Then for any choice of the integers 1 ≤ C′ Proof. By the discussion following Proposition 7.17, we have gcd(Q1, Q2) = r or 2r. If gcd(Q1, Q2) = r, then the proposition follows from Lemma 7.18. So assume that gcd(Q1, Q2) = Q2 − Q1 = 2r. By relabeling the poles, we may assume that τ = Id. The prong-matching class P r contains exactly one of two prong- matchings (0, 0) and (0, r). We will use Lemma 7.12 to X in order to prove X′ ∼ X. For convenience, we will say a pair of poles (pi, pj) of X is adjustable if we can increase Ci by one and decrease Cj by one by applying Lemma 7.12. i C′ P We will find a sufficient condition for a pair (pi, pj) to be adjustable. Assume that Di > 1 and (pi, pj) is not adjustable. By relabeling the poles, we may assume that i = 1. Consider V := {v|(u, v) ∈ P r, 0 < u ≤ c1}. If dj−1 ≤ v < dj, then v /∈ V by assumption. We can find (0, v − 1) ∈ P r such that v − 1 < dj ≤ v − 1 + 2r. Then v − 1 < dj−1. Since (u, v − 1 − u) ∈ P r for each 0 < u ≤ c1, we can conclude that dj ≤ v + 2r − C1. Therefore, Dj = dj − dj−1 ≤ 2r − C1. Similarly, if we consider U := {u|(u, v) ∈ P r, 0 < v ≤ D1}, then we can obtain Cj ≤ 2r − D1. So b1 + bj = C1 + C2 + Cj + Dj ≤ 4r. The equality holds if and only if C1 + Dj = D1 + Cj = 2r, b1 + bj = 4r, and (0, dj−1) ∈ P r. If b1, bj > r, then the equality holds and we have b1 = bj = C1 + Dj = D1 + Cj = 2r. That is, C1 = b1 − D1 = 2r − (2r − Cj) = Cj and D1 = b1 − C1 = 2r − (2r − Dj) = Dj. Also, if b1 ≥ 4r or bj ≥ 4r, then the pair (p1, pj) is always adjustable. 1. Let 1 < k ≤ n be the smallest number such that C′ k > Ck. We use the induction on k > 1 to prove that (p1, pk) is adjustable. If this is possible, we can repeat changing C1 and Ck until we reach to X′. Assume the contrary that we cannot achieve this. In particular, (p1, pk) is not an adjustable pair. Now relabeling the poles again, we may assume that C1 < C′ If bi ≥ 4r for some i, then two pairs (p1, pi) and (pi, pk) are adjustable. 
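As a purely numerical illustration of the induction in Lemma 7.18 and Proposition 7.19, and only that (the adjustability conditions on prong-matchings, which the proof above is actually about, are not modelled), the distance ∑i |Ci − C′i| between two admissible tuples with the same total Q1 can always be reduced to zero by unit transfers; the following sketch (names ours) records one such sequence of transfers.

```python
# Numerical shadow (names ours) of the induction in Lemma 7.18 / Proposition 7.19:
# reduce D = sum |C_i - C'_i| by unit transfers from an entry with C_k > C'_k to one
# with C_j < C'_j, staying within 1 <= C_i <= b_i - 1.  The geometric conditions that
# make each transfer legitimate are not checked here.
def redistribute(C, C_target, b):
    C = list(C)
    assert sum(C) == sum(C_target)
    moves = []
    while C != list(C_target):
        j = next(i for i in range(len(C)) if C[i] < C_target[i])
        k = next(i for i in range(len(C)) if C[i] > C_target[i])
        assert C[j] + 1 <= b[j] - 1 and C[k] - 1 >= 1
        C[j] += 1
        C[k] -= 1
        moves.append((j + 1, k + 1))   # transfer one unit from pole k to pole j
    return moves

# Example with b = (4, 4, 4): move from C = (1, 2, 3) to C' = (2, 2, 2) in one step.
print(redistribute((1, 2, 3), (2, 2, 2), (4, 4, 4)))
```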
By applying Lemma 7.12 to these pairs, we can increase C1 and decrease Ck by one. So assume that bi ≤ 3r for each i. If k = 2, then we have (0, d1) ∈ P r. That is, r|d1 and b1 > r. We have D2 = D1 ≥ r and thus b2 = C2 + D2 > r. The only possible case is when b1 = b2 = 2r, C1 = C2 = r and (0, r) ∈ P r. Since (0, d2) = (0, 2r) /∈ P r, (p1, p3) is adjustable. If b3 6= 2r or C3 6= C2 = r, then (p2, p3) are also adjustable, a contradiction. So we have b3 = 2r and C3 = r. Now (c1, d3) = (r, 3r) /∈ P r and (p4, p2) is adjustable. If (p3, p4) is adjustable, then we have a chain p1-p3-p4-p2 of poles where consecutive pairs is adjustable. That is, we can increase C1 and decrease C2 by one, a contradiction. By repeating this argument, we can finally get bi = 2r and Ci = r for any i, which is also a contradiction because then Q1 = Q2 = nr. Now assume that k > 2. Then consider two pairs (p1, pi) and (pi, pk) for some 1 < i < k. If bi 6= 2, then we can use induction hypothesis to these two pairs to increase C1 and decrease Ck as desired. The third number Ci will be decreased (or increased) by one, and then increased (or decreased) to its original position. If bi = 2, then r = 1 or r = 2. If r = 1, then note that b1, bk ≥ 3 because C′ 1, Ck > 1. In particular, b1 + bk ≥ 6 > 4r and therefore (p1, pk) is adjustable. If r = 2, then b1 = bk = 2r = 4 and C1 = Ck = r = 2. However, since Q1 > Q2, there exists some pi such that bi > 4 or bi = 4 and Ci = 1. We can use two (cid:3) adjustable pairs (p1, pi) and (pi, pk) now. 41 The following proposition gives the fourth step in the proof of Theorem 7.11. It states that whenever C is fixed, the connected component is independent of the choice of the prong-matching class P r. Proposition 7.20. Let X = X(τ, C, [(0, v)]) ∈ ∂R1(µ) with rotation number r, satisfying 0 < Q2 −Q1 ≤ 2r. Suppose that X′ = X(τ, C′, [(0, v′)]) with C = C′ also has rotation number r. Then X ∼ X′. Proof. By the discussion following Proposition 7.17, we have gcd(Q1, Q2) = r or 2r. If gcd(Q1, Q2) = r, then there exists a unique prong-matching class. So we assume that gcd(Q1, Q2) = Q2 − Q1 = 2r. If 2r|d and (0, 0) ∈ P r, then r = gcd(d, Q1, 0) is divisible by 2r, a contradiction. So there exists unique prong-matching class. So we assume that 2r ∤ d. That is, there exists bi such that 2r ∤ bi. By relabeling the poles, we may assume that τ = Id and b1 = (2k + 1)r for some k. There are exactly two prong-matching equivalence classes, shifted by r from each other. So we may assume that (0, (k + 1)r) ∈ P r and (0, kr) ∈ P r′. Our goal is to find multi-scale differentials X ∼ X and X ′ ∼ X′ containing all poles but p1 in the top 1. Then we can apply Lemma 7.18 to 1 = r and ˜C1 = ˜C′ 2 − ˜Q′ level component, satisfying ˜Q2 − ˜Q1 = ˜Q′ conclude that X ∼ X′. n < bn − 1. Since b2, bn > 2, we can take ˜X = T (0,(k+1)r) Suppose that bi > 2 for each i. By Proposition 7.19, the connected component of X is independent of the choice of C. So we may assume that C1 = (k + 1)r − 1 and C2 < b2 − r + 1. Also, we can assume that C′ 1 = kr + 1 and C′ Suppose that b1 = r = 2. Assume that (0, 2) ∈ P r and (0, 0) ∈ P r′. Since µ 6= (2n, −2n), by relabeling the poles of necessary, we may assume that b2 > 3 and C2 = 1. We take ˜X = T (0,2) 1 X. Similarly, if b2 > 4, then we can take ˜X ′ = T (0,4) 1 X′ contains two poles p1, p2 in the bottom level component. Since Q′ 1 = 4 and 2|bi for each i, we can find j such that Dj > 2. Then there exists v such that (0, v) ∈ P r such that dj−1 < v < dj. 
By Lemma 7.13, we can obtain a multi-scale differential ˜X ′ with all poles but p1 in the top level component. We can conclude that X ∼ X′. 1 X′. Assume that b2 = 4. Then T (0,4) 2 − Q′ X and ˜X ′ = T (kr+1,−1) X′. 2 1 Suppose that r = 1 and b1 = 2. By relabeling the poles, we may assume that b2 > 2 and C2 = 1. 1 X′. Since (cid:3) Suppose that (0, 1) ∈ P r and (0, 2) ∈ P r′. Then we can take ˜X = T (0,1) gcd( ˜Q2, ˜Q1) = gcd( ˜Q′ 1) = 1, we can apply Lemma 7.18 and therefore X ∼ X′. 1 X and ˜X ′ = T (0,2) 2, ˜Q′ The following proposition gives the last step in the proof of Theorem 7.11. It states that whenever C and P r are fixed, the connected component is independent of the choice of the permutation τ . Proposition 7.21. Let X = X(Id, C, [(0, v)]) ∈ ∂R1(µ) with rotation number r, satisfying 0 < Q2 − Q1 ≤ 2r. Then X(τ, C, [(0, v)])) ∼ X for any permutation τ ∈ Symn. Proof. If n ≤ 2, there is nothing to prove because we have only one choice of permutation τ as a cyclic order. So assume that n > 2. Since the transpositions of the form (i, i + 1) generates the symmetric group Symn, it suffices to prove for the case τ = (i, i + 1). By relabeling the saddle connections, we may assume that τ = (1, 2). Let X′ = X((1, 2), C, P r). Case (1): Suppose that q = r, or q = 2r and 2r ∤ d. Then by Proposition 7.20, the connected component of X is independent of the choice of prong-matching as long as the rotation number is not changed. By assumption, X(Id, C, [(0, 0)]) has rotation number r, so we may assume that v = 0. If 2r|b1 + b2, then by Proposition 7.19, we can change C1 and C2 within the connected component so that c2 = d2 = b1+b2 It is straightforward that T (c2,0) . Then we have a prong-matching (0, c2) ∈ P r. 2 X. Therefore X′ ∼ X. X′ = T (0,c2) 2 1 If 2r ∤ b1 + b2, then b1 + b2 ≥ 3r. So b1 ≥ 2r or b2 ≥ 2r. Assume that b2 ≥ 2r. The case when b1 ≥ 2r can be treated similarly. Then by Proposition 7.19, we can change C1 and C2 within the connected component so that d2 − c2 = r, and also D2 > r. We have a prong-matching (0, d2) ∈ P r. Now we consider X′. By Proposition 7.19, we may assume that C′ 2 = C2 + r. We have a prong-matching (0, r) ∈ P r′. Since 0 < r < d′ 1 X′. It has a prong-matching (0, c2) and we can take T (0,c2) T (0,r) 1 X′ = T (0,d2) X. Therefore X′ ∼ X. 1 = C2 + r, we can take T (0,r) It is straightforward that T (0,c2) 2 = D2 − r and D′ 1 = D1, D′ 1 = C1, C′ T (0,r) 1 X′. 1 1 1 Case (2): Suppose that q = 2r and 2r|d. Then (ci, di +r) ∈ P r for each i = 0, . . . , n−1. Since 2r|bi for each i, we have bi ≥ 2r. If b1+b2 is odd, then . We have a prong-matching (0, d2) ∈ P r and we 2r by Proposition 7.19, we may assume that d2 = c2 = b1+b2 2 42 ... Ci+1 p−i ... p−i+1 pi+1 Ci+1 z Ci+1 pi+1 ... ... p−i+1 p−i Ci+1 z X X s(−di, di) T (−di,di) 1 X (Ci+1, −Ci+1) Ä Ci+1 s ä pi+1 ... Ci+1 p−i ... p−i+2 ... pi+1 p−i+1 ... p−i+2 T (Ci+1,−Ci+1) T (−di,di) 2 1 = T (Ci+1,−Ci+1) X T (−di−1,di−1) 1 2 X ′ X ′ X ′ s(−di−1, di−1) Ci+1 z p−i Ci+1 z T (−di−1,di−1) 1 X ′ Ä s ä (Ci+1, −Ci+1) Figure 17. The path in R1(µ) when τ = (i, i + 1)(1 − i, −i) can deduce that X′ ∼ X by the same argument as in Case (1). If b1+b2 may assume that d2 − c2 = 2r and D1, D2 > r. Then d2 = b1+b2 We can deduce that X′ ∼ X by the same argument as in Case (1). is even, then by Proposition 7.19, we 2 + r is an odd multiple of r, so (0, d2) ∈ P r. (cid:3) 2r 8. Classification of hyperelliptic components In this section, we will complete the proof of Theorem 1.4. First, we deal with genus one single-zero residueless strata R1(µ). 
Proposition 8.1. Let R1(µ) be a genus one single-zero stratum. For each ramification profile P of R1(µ), there exists a unique hyperelliptic component CP of R1(µ).

Proof. Case (1): First, suppose that P fixes one marked point. After relabeling the poles, we may assume P(i) = 1 − i for each i. By Proposition 7.4, the boundary of any hyperelliptic component with ramification profile P contains X(τ, C, P r) for some τ satisfying P(τ(i)) = τ(1 − i) and Cτ(i) + Cτ(1−i) = bτ(i). That is, τ sends a pair of poles interchanged by P to another such pair. Such permutations τ can be generated by the permutations of the form (i, i + 1)(1 − i, −i), interchanging two pairs, and (i, 1 − i), interchanging two poles in a pair, for 1 ≤ i ≤ n/2. Let X = X(Id, C, P r) and X′ = X(τ, C, P r). It suffices to show that X ∼ X′ for each τ = (i, i + 1)(1 − i, −i) or τ = (i, 1 − i).

First, let τ = (i, i + 1)(−i + 1, −i). Then T_2^(Ci+1,−Ci+1) T_1^(−di,di) X = T_2^(Ci+1,−Ci+1) T_1^(−di−1,di−1) X′. The path in R1(µ) connecting X and X′ is illustrated in Figure 17. Thus X ∼ X′.

[Figure 18. The path in R1(µ) when τ = (i, 1 − i).]

Now let τ = (i, −i + 1). By applying Lemma 7.12 to X′ if necessary, we may assume that D′i = C′−i+1 = Ci. Then T_2^(Ci,−Ci) T_1^(−di−1,di−1) X = T_1^(−Di,Di) T_2^(ci−1,−ci−1) X′. The path in R1(µ) connecting X and X′ is illustrated in Figure 18. Thus X ∼ X′.

Case (2): Now suppose that P fixes two or three marked points. After relabeling the poles, we may assume P(i) = 2 − i for each i. The set of fixed marked poles is {p1, p(n+2)/2} if n is even, or {p1} if n is odd. By Proposition 7.4, the boundary of any hyperelliptic component with ramification profile P contains X(τ, C, P r) for some τ satisfying P(τ(i)) = τ(2 − i) and Cτ(i) + Cτ(2−i) = bτ(i). Such permutations τ form a subgroup of Symn, generated by the permutations of the form (i, i + 1)(2 − i, 1 − i) and (i, 2 − i), for 2 ≤ i ≤ n/2. The rest of the proof is completely analogous to the proof of Case (1).

Case (3): Finally, suppose that P fixes four marked points. By relabeling the poles, we may assume that p1, p(n+1)/2 and pn are the fixed poles and P(i) = 2 − i for i = 1, . . . , n − 1. By Proposition 7.4, the boundary of any hyperelliptic component with ramification profile P contains X(n − 1, τ, C, P r) for some τ satisfying P(τ(i)) = τ(2 − i) and Cτ(i) + Cτ(2−i) = bτ(i) for each i = 1, . . . , n − 1. Such permutations τ form a subgroup of Symn, generated by the permutations of the form (i, i + 1)(2 − i, 1 − i) and (i, n + 1 − i) for 2 ≤ i ≤ (n + 1)/2. Let X = X(n − 1, Id, C, P r) and X′ = X(n − 1, τ, C, P r). The rest of the proof is completely analogous to the proof of Case (1). □

Now we prove Theorem 1.4 for any Rg(µ) with g > 0 inductively.

[Figure 19. Navigating in the boundary of a single-zero hyperelliptic component.]

Proof of Theorem 1.4. We use the induction on dim Rg(µ) = 2g + m − 1. The base case is g = m = 1 and this is already treated in Proposition 8.1. So we assume that 2g + m > 3.
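As a quick sanity check of the combinatorial claim in Case (1) of the proof of Proposition 8.1 above, namely that permutations of the form (i, i + 1)(1 − i, −i) and (i, 1 − i) send pairs of poles interchanged by P(i) = 1 − i to other such pairs, one can simply test it on pole labels; the following sketch (all names ours) does this for n = 6 and i = 2.

```python
# A quick check (all names ours) of the claim in Case (1) of Proposition 8.1: for the
# profile P(j) = 1 - j (pole labels mod n), the permutations (i, i+1)(1-i, -i) and
# (i, 1-i) send every pair {j, P(j)} to another such pair.
def compose_pairs(n, *transpositions):
    """Permutation of {1, ..., n} given by the listed disjoint transpositions (labels mod n)."""
    perm = {j: j for j in range(1, n + 1)}
    norm = lambda j: ((j - 1) % n) + 1
    for a, b in transpositions:
        a, b = norm(a), norm(b)
        perm[a], perm[b] = perm[b], perm[a]
    return perm

def preserves_pairing(n, perm):
    P = lambda j: ((-j) % n) + 1                 # P(j) = 1 - j mod n, labels in 1..n
    pairs = {frozenset({j, P(j)}) for j in range(1, n + 1)}
    return all(frozenset({perm[j], perm[P(j)]}) in pairs for j in range(1, n + 1))

n, i = 6, 2
tau1 = compose_pairs(n, (i, i + 1), (1 - i, -i))   # (2,3)(5,4) for n = 6
tau2 = compose_pairs(n, (i, 1 - i))                # (2,5)     for n = 6
print(preserves_pairing(n, tau1), preserves_pairing(n, tau2))   # True True
```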
Fix a ramification profile P of Rg(µ) and let C be any hyperelliptic component of Rg(µ) with ramification profile P. If P fixes less than 2g + 2 marked points, then C contains a flat surface with a multiplicity one saddle connection. By shrinking this saddle connection, we obtain a two level multi-scale differential X and the top level component X0 is a hyperelliptic component with ramification profile P contained in a stratum of dimension 2g + m > 2. By induction hypothesis, the connected component containing X0 is unique. If m = 2, then the bottom level component X−1 is contained in a connected stratum H0(a, a, −2a − 2), and X0 and X−1 intersect at a unique node. Thus there is a unique prong-matching equivalence class of X and therefore C is unique. If m = 1, then X−1 is still contained in a connected stratum H0(a, − a 2 − 1), and X0 and X−1 intersect at two nodes. However, by hyperellipticity of C, there is still a unique possible prong-matching equivalence class of X because the prong-matching commutes with hyperelliptic involutions. Therefore C is also unique in this case. 2 − 1, − a Now suppose that P fixes 2g + 2 marked points. By relabeling the poles if necessary, we may assume that p1 is one of the fixed poles. First, assume that Rg(µ) is a single-zero stratum. So it is 2g-dimensional. By Proposition 6.20, there exists a flat surface X ∈ C that contains a pair of parallel saddle connections with multiplicity two bounding the polar domain of p1. By shrinking them, we obtain a multi-scale differential X ∈ ∂C. The bottom level component X−1 contains p1, and is contained in the stratum R0(a, −b1; − a−b1 2 − 1, − a−b1 2 − 1). By hyperellipticity of X, the component X−1 is uniquely determined up to re-scaling the differential. Specifically, an element of this stratum is determined by two angles of the polar domain of p1, and it is hyperelliptic if and only if two angles are equal to each other. The top level component X0 is a double-zero hyperelliptic flat surface of genus g − 1 with 2g fixed marked points. It is contained in a hyperelliptic component of a stratum of dimension 2g − 1, whose ramification profile P0 is induced by P. By induction hypothesis, X0 is contained in a unique connected component with ramification profile P0. By hyperellipticity of C, there is a unique possible prong-matching equivalence class of X because the prong-matching commutes with hyperelliptic involutions. Combining this with the uniqueness of the connected components containing X0 and X−1, we can conclude that C is also unique for given P. Now assume that Rg(µ) is a double-zero stratum. So it is (2g + 1)-dimensional. By Proposition 6.20, there exists a flat surface X ∈ C that contains a pair of parallel saddle connections with multiplicity two bounding the polar domain of p1. By shrinking them, we obtain X ∈ ∂C. See Figure 19 for the description 45 σ y ... g z shrink simple s.c. σ0 y X σ−1 y < 2g + 2 fixed marked points ∈ CP0 ∈ H0 ... −1g g − 1 P r z X ... 2 − 1, − a 2 − 1 a, − a (cid:0) (cid:1) σ y ... g z shrink pair of s.c. σ0 y −1g g − 1 ∈ CP0 X σ−1 y P r p1 ∈ R0 a, −b1; − a−b1 (cid:0) 2 − 1, a−b1 2 − 1 (cid:1) 2g + 2 fixed marked points z X Figure 20. Navigating in the boundary of a double-zero hyperelliptic component of the level graph of X. The bottom level component X−1 contains p1, and is contained in the stratum R0(a, a, −b1, b1 − 2a − 2). Up to re-scaling the differential, X−1 is a unique hyperelliptic flat surface in R0(a, a, −b1, b1 − 2a − 2). 
The top level component X0 is a single-zero hyperelliptic flat surface of genus g with 2g + 2 fixed marked points. It is contained in a hyperelliptic component of dimension 2g, whose ramification profile P0 is induced by P. By induction hypothesis, there exists a unique connected component with ramification profile P0. Since there is a unique node, there is also a unique prong-matching equivalence (cid:3) class. Therefore we can conclude that C is unique for given P. 9. Genus one multiple-zero strata In this section, we will complete the proof of Theorem 1.7. In [13] and [3], the first step in enumerating the connected components of multiple-zero strata is proving that each of their connected components is adjacent to a connected component of the single-zero stratum, obtained by merging all zeroes. Since merging zeroes can be done by a combination of a certain GL+(2, R) action and a local isoperiodic surgery, it does not affect the residue conditions. So we will follow the same strategy here for the residueless stratum Rg(µ). In Section 9.1, we will prove that every non-hyperelliptic component of Rg(µ), except for the special case R1(2n, −2n), is adjacent to a non-hyperelliptic component of a single-zero stratum. Then we will classify all non-hyperelliptic components of genus one multiple-zero strata in Section 9.2. 9.1. Merging zeroes. For a given µ = (a1, . . . , am, −b1, . . . , −bn), we denote a := a1 + · · · + am and µ′ := (a, −b1, . . . , −bn). Recall that we assume b1 ≤ · · · ≤ bn by default. Also, recall that a connected component D of Rg(µ) is adjacent to a connected component C of Rg(µ′) if some flat surface in D can be obtained by breaking up a zero from some flat surface in C. In other words, D is adjacent to C if some flat surface in C can be obtained by merging all zeroes of some flat surface in D. We can define a breaking up the zero map B : {non-hyperelliptic components of Rg(µ′)} → {non-hyperelliptic components of Rg(µ)} 46 by B(C) = D if D is adjacent to C. The goal of this subsection is to prove that B is surjective when g > 1, or g = 1 and bn > 2. In other words, Proposition 9.1. Assume that g > 1, or bn > 2. Let D be a non-hyperelliptic component of a stratum Rg(µ) of genus g > 0. Then D is adjacent to a non-hyperelliptic component of the single-zero stratum Rg(µ′). As a result of Proposition 9.1, we obtain an upper bound for the number of non-hyperelliptic components. Corollary 9.2. Assume that g > 1 or bn > 2. The number of non-hyperelliptic components of Rg(µ) is less than or equal to the number of non-hyperelliptic components of the corresponding single-zero stratum Rg(µ′). We will prove Proposition 9.1 by induction on dim Rg(µ) = 2g + m − 1 > 2. First, we deal with the base case, when (g, m) = (1, 2) and bn > 2. So B is given by breaking up the zero of order a into two zeroes of orders a1 and a2. Proposition 9.3. Let D be a non-hyperelliptic component of a genus one stratum R1(a1, a2, −b1, . . . , −bn) with bn > 2. Then D is adjacent to a non-hyperelliptic component of the single-zero stratum R1(a, −b1, . . . , −bn). Before proving this proposition, we explain the strategy of the proof. Again we move around in the boundary of D, until we end up in B(C) for some non-hyperelliptic component C of R1(a, −b1, . . . , −bn). By Theorem 6.1 and Proposition 4.5, we already know that D = B(C) for some (possibly hyperelliptic) C. 
To ensure that we can find a non-hyperelliptic C, we first need to prove several lemmas that describe the conditions when B(C1) = B(C2) for distinct connected components C1, C2 of R1(a, −b1, . . . , −bn). Then for each hyperelliptic component C1, we will find a non-hyperelliptic component C2 such that B(C1) = B(C2). Choose a two-level multi-scale differential X = X(Id, C, [u, v]) ∈ ∂C1. Then the bottom level component X−1 is contained in the stratum H0(a, −Q1 − 1, −Q2 − 1). This flat surface has distinguished prongs 1 and w− v− 1 , at two poles s1 and s2, respectively (See Figure 10). By breaking up the zero, we obtain −1 ∈ H0(a1, a2, −Q1 − 1, −Q2 − 1), and it still has the prongs v′ and w′, deformed from v− X ′ 1 . As X ′ −1 deforms in the stratum H0(a1, a2, −Q1 − 1, −Q2 − 1), we can keep track of the prongs at the poles. In other words, we keep the information (X ′, v′, w′), where v′, w′ are the prongs at the poles. The moduli space of this data is called the moduli space of differentials with marked prongs (separatrices), which is defined and studied in full generality in [3]. Now assume that X ′ −1 has a multiplicity one saddle connection joining z1 and z2, then we can shrink the saddle connection and merge two zeroes, obtaining a flat surface in H0(a, −Q1 − 1, −Q2 − 1) isomorphic to X−1 up to scaling. As a result, the prongs v′ and w′ will deform to v− j of X−1 for some i, j. Thus we can obtain a multi-scale differential X(Id, C, [u − i, v − j]), in the boundary of a stratum adjacent to B(C1). See Figure 21. 1 and w− i and w− Lemma 9.4. Let C1, C2 be two connected components of a single-zero stratum R1(a, −b1, . . . , −bn). Suppose that X(Id, C, [(0, v)]) ∈ ∂C1 and X(Id, C, [(0, v + a1)]) ∈ ∂C2. Then B(C1) = B(C2). Proof. Let X = X(Id, C, [(0, v)]) and β be the unique saddle connection in X−1. By breaking up the zero of X−1, we obtain a flat surface with a new saddle connection α joining the two distinct zeroes z1, z2. The saddle connection β also becomes a saddle connection, also denoted by β, in the new flat surface. By Lemma 6.3, we may assume that β is joining z1, z2. By shrinking β, we obtain a flat surface isomorphic to X−1, up to scaling. Along this deformation, we obtain (X−1, v− 1 ), as illustrated in Figure 22. Therefore, we obtain X(Id, C, [(0, v + a1)]) in the boundary of a stratum adjacent to B(C1). (cid:3) Thus B(C1) = B(C2). 1 ) from (X−1, v− a1+1, w− 1 , w− Lemma 9.5. Let C1, C2 be two connected components of a single-zero stratum R1(a, −b1, . . . , −bn). Suppose that bn = 2 and X(n − 1, Id, C, [(0, v)]) ∈ ∂C1. Note that the pole pn of order 2 is contained in the bottom level component. (1) Suppose that a1 < a (2) Suppose that a1 = a 2 − 1 or Q1 < Q2. If X(n − 1, Id, C, [(0, v + a1)]) ∈ ∂C2, then B(C1) = B(C2). 2 −1 and Q1 = Q2 = a Proof. There are a + 1 outgoing prongs in X−1 at z, denoted by u+ a+1. We can label them in the clockwise order so that β1 encloses from u+ Q1+2, u+ a . When we break up the zero z, we identify z with the pole of a flat surface in H0(a1, a2, −a − 2). There are a + 1 incoming prongs 47 2 −1. If X(n−1, Id, C, [(0, v−2)]) ∈ ∂C2, then B(C1) = B(C2). 1 , . . . , u+ and β2 encloses from u+ 1 to u+ Q1 C1 ... a B(C1) = B(C2) ... ... ... break up plumb levels -1 and -2 a1 a2 shrink simple s.c. plumb levels -1 and -2 C2 ... a X(Id, C, P r1) a1 a2 a1 a2 X(Id, C, P r2) −a − 2 + a1 a2 break up ... −bn a ... −bn plumb ... −bn shrink ... −bn −a − 2 − a1 a2 plumb levels -1 and -2 simple s.c. levels -1 and -2 a1 a2 ... 
−bn a X(n − 1, Id, C, P r1) a1 a2 a1 a2 X(n − 1, Id, C′, P r2) Figure 21. Strategy of proving B(C1) = B(C2) β z v w β z1 π α 2π(Q1 − a1) v 2π(Q1 − a1) z2 = w z1 α v β z2 w 2π(Q1 − a1) v z w α (v, w) = v− 1 , w− 1 (cid:0) (cid:1) (v, w) = v− a1+1, w− (cid:0) 1 (cid:1) Break up z Shrink β Figure 22. Prongs and saddle connections along the deformation in B(C) 48 β1 β2 β1 β2 z v w z2 v α 2π(Q1 − a1) w z1 = z2 v β1 2π(Q1 − a1) w β2 z1 α (v, w) = 1 , w− v− 1 (cid:0) (cid:1) β1 α z 2πa1 + π v w (v, w) = 1 , w− v− Q2−a1 Ä ä Break up z Shrink β2 Figure 23 at the pole, denoted by u− 1 , . . . , u− a1 are coming from z1, and the others are coming from z2. The direction of breaking up the zero is equivalent to the choice of a prong-matching at z. a+1. We can label them in the counterclockwise order so that u− 1 , . . . , u− 2 − 1 or Q1 < Q2. First we prove (1). Suppose that a1 < a By breaking up the zero, we obtain a flat surface with new saddle connection α joining two distinct zeroes z1, z2. The saddle connections β1, β2 also deform to saddle connections in the new flat surface, also denoted by β1, β2. Since Q1 + Q2 + 2 = a1 + a2 = a, we have a1 < Q2 in particular. Choose a prong-matching that sends u− Q1+1. Then breaking up the zero, β2 is joining z1, z2 and β1 is joining z2 to itself. So β2 is a multiplicity one saddle connection. By shrinking β2, we obtain a flat surface isomorphic to X−1, up to scaling. Along this deformation, we obtain (X−1, v− 1 ), as illustrated in Figure 23. Therefore, we obtain X(n − 1, Id, C, [(0, v + a1)]) in the boundary of a stratum adjacent to B(C1). Thus B(C1) = B(C2). ) from (X−1, v− 1 to u+ 1 , w− 1 , w− 1 to u+ t , such that t 6= a Now we prove (2). Suppose that a1 = Q1 = Q2 = a 2 − 1. As before, we break up the zero of X−1. Choose a prong-matching that sends u− 2 . Then β1, β2 are parallel saddle con- nections, joining z1, z2. By shrinking β1 and β2, we obtain Y . The top level component is contained in 2 , − a H0(a − 2, − a 2 at the nodes, as in X−1. However, under this defor- mation, we obtain (Y−1, v− 1 ), as illustrated in Figure 24. So the levels 0 and -1 form X(n − 1, Id, C, [0, v + 1]) ∈ ∂R1(a − 2, −b1, . . . , −bn−1). The bottom level component is contained in PR0(a1, a2, −2, −a), which is a singleton. Similarly, if we start from X(n − 1, Id, C, [(0, v + 2)]), then we can obtain the same multi-scale differential as above. See Figure 25. Therefore, the boundaries of B(C1) and (cid:3) B(C2) have a common multi-scale differential and thus B(C1) = B(C2). 2 ). This has two poles of order − a 1 ) from (X−1, v− 1 Q1, w− 1 , w− Q2−a1 Lemma 9.6. Let C1 be a connected components of a single-zero stratum R1(a, −b1, . . . , −bn). Suppose bn > 2 and a1 < a2. Also suppose that X = X(n − 1, Id, C, [(0, v)]) ∈ ∂C1. Then there exists a connected component C2 such that B(C1) = B(C2), and there exists X ′ = X(n − 1, Id, C′, [(0, v′)]) ∈ ∂C2, where C′ n 6= Cn and C′ i = Ci for each i = 1, . . . , n − 1. Proof. The bottom level component X−1 is contained in R0(a, −bn; −Q1 − 1, −Q2 − 1), for Q1 ≤ Q2. First, we assume that Dn ≤ Cn. In particular, we have Cn + Q2 ≥ a 2 > a1. By Proposition 2.4, X−1 has two saddle connections, denoted by β1 and β2. They are bounding the polar domain of pn, so form two angles equal to 2πCn and 2πDn at z. There are a + 1 outgoing prongs in X−1 at z, denoted by u+ 1 to u+ Q1 a+1. We can label them in the clockwise order so that β1 encloses from u+ 1 , . . . 
, u+ 49 β1 β2 β1 z v w β2 α z2 2π v v 2π z1 w β1 β2 z1 = w z2 α 2π v z′ w α (v, w) = v− 1 , w− 1 (cid:0) (cid:1) (v, w) = v− Q1 , w− 1 Ä ä Break up z Shrink β1 and β2 Figure 24 p l u m b - e l s v 1 l e 2 - d n a ... −2 shrink simple s.c. ... −2 a1 a2 X(n − 1, Id, C, [0, v + 1]) ∈ R1(a − 2, −b1, . . . , −bn−1) ∈ R0(a1, a2, −2, −a) ... −2 plumb ... −2 levels -1 and -2 a1 a2 shrink pair of s.c. a1 a2 break up ... −2 a Y ... } −2 a1 a2 a1 a2 plumb levels -1 and -2 ... −2 a X(n − 1, Id, C, [0, v]) X(n − 1, Id, C, [0, v + 2]) Figure 25. Navigating the boundary of B(C1), bn = 2 Q1+Q2+Cn Q1+Cn+1, u+ and β2 encloses from u+ . When we break up the zero z, we identify z with the pole of a flat surface in H0(a1, a2, −a − 2). There are a + 1 incoming prongs at the pole, denoted by u− a+1. We can label them in the counterclockwise order so that u− a1 are coming from z1, and the others are coming from z2. The direction of breaking up the zero is equivalent to the choice of a prong-matching at z. By breaking up the zero, we obtain a flat surface with a new saddle connection α joining two distinct zeroes z1, z2. The saddle connections β1, β2 also become saddle connections in the new flat surface, also denoted by β1, β2. Since Cn + Q2 > a1, we can choose a prong-matching that sends u− Q1+s for some 1 ≤ s < Cn, such that Q1 + Cn + 1 ≤ Q1 + a1 + s − 1 ≤ Q1 + Q2 + Cn. Then β2 is joining z1, z2 and β1 is 50 1 , . . . , u− 1 , . . . , u− 1 to u+ β1 2πCn β2 z v w β1 v β2 β1 α 2πs z2 α z1 2π(Q2 + Cn − a1 − s) w 2πs 2π(Q2 + Cn − a1 − s) w z2 z1 β2 = v α 2πs 2π(Q2 + Cn − a1 − s) w β1 z v (v, w) = v− 1 , w− 1 (cid:0) (cid:1) (v, w) = v− 1 , w− (cid:0) 1+a1+s−Cn (cid:1) Break up z Shrink β2 Figure 26. Navigating the boundary of B(C1) = B(C2) joining z2 to itself. So β2 is a multiplicity one saddle connection. By shrinking β2, we obtain a flat surface X ′ −1 in R0(a, −bn; −Q1 − 1, −Q2 − 1). The angles formed by β2 and α are equal to 2πs and 2π(bn − s). −1, v− Under this deformation, we obtain (X ′ 1 ) from (X ′ ), as illustrated in Figure 26. Therefore, we obtain X(n − 1, Id, C′, [(0, v + Cn − a1 − s)]) ∈ C2 . −1, v− 1 , w− 1 , w− 1+a1+s−Cn Now we assume that Cn < Dn. Then Q2 + Dn > a1. By the similar argument as above, we can obtain (cid:3) n > Cn, such that B(C1) = B(C2). X(n − 1, Id, C′′, [(0, v′′)]) ∈ C2 with C′′ Proof of Proposition 9.3. By Theorem 6.1, D is adjacent to some connected component C of R1(a1 + a2, −b1, . . . , −bn). If C is non-hyperelliptic, there is nothing to prove. So assume that C is a hyperellip- tic component. If a1 = a2, then D is also hyperelliptic by Lemma 5.2. Thus we assume that a1 < a2. Since bn > 2, the ramification profile P of C satisfies at least one of the following three possibilities: (1) P fixes less than 4 marked points. (2) P fixes three poles of order −2, and interchanges at least one pair of poles of order −b < −2. (3) P fixes some pole of order −b < −2. We will deal with each of these cases separately. Case (1) — By Proposition 6.4, we obtain X(Id, C, [(0, v)]) ∈ ∂C. Each component of this multi-scale differential has a hyperelliptic involution, and the prong-matching (0, v) is compatible with the involutions. Let C′ be the connected component of R1(a1 + a2, −b1, . . . , −bn) containing X(Id, C, [(0, v − a1)]) in the boundary. By Lemma 9.4, we have D = B(C) = B(C′). Since a1 < a 2 , the prong-matching (0, v − a1) is not compatible with the hyperelliptic involution. Thus C′ is non-hyperelliptic and D is adjacent to some non-hyperelliptic component of R1(a1 + a2, −b1, . . . 
, −bn). Case (2) — By relabeling the poles, we may assume that bn = 2 and pn is a fixed pole. By Proposition 6.20, We obtain X(n − 1, Id, C, [(0, v)]) ∈ ∂C. Suppose that a1 < a 2 − 1. Let C′ be the connected component of R1(a1 + a2, −b1, . . . , −bn) containing X(Id, C, [(0, v − a1)]) in the boundary. By Lemma 9.5, we have D = B(C) = B(C′). By the same argument as in the previous case, C′ is a non-hyperelliptic component. If a1 = a 2 − 1, then we can take C′ so that X(Id, C, [(0, v − 2)]) ∈ ∂C′. Again by Lemma 9.5, we have D = B(C) = B(C′), and C′ is a non-hyperelliptic component. Case (3) — By relabeling the poles, we may assume that bn > 2 and pn is a fixed pole. We obtain 2 > 1. By Lemma 9.6, there exists X(n − 1, Id, C′, [(0, v′)]) ∈ C′ 2 , such that D = B(C) = B(C′). Then the bottom level component of (cid:3) X(n − 1, Id, C, [(0, v)]) ∈ ∂C. Then Cn = bn for component C′ with C′ X(n − 1, Id, C′, [(0, v′)]) does not have an involution. So C′ is a non-hyperelliptic component. n 6= Cn = bn 51 Proof of Proposition 9.1. We use the induction on dimC Rg(µ) = 2g +m−1 ≥ 2. The case m = 1 is trivial, so assume m > 1. By Theorem 6.1, there exists a flat surface X ∈ D with a multiplicity one saddle connection γ joining two zeroes z1 and z2. By shrinking γ, we obtain X. The bottom level component is in a connected stratum H0(a1, a2, −(a1 + a2 + 2)). The top level component X1 is the flat surface with a zero of order a1 + a2 at the node. Let C be the connected component of Rg(a1 + a2, a3, . . . , am, −b1, . . . , −bn) containing X1. If C is non-hyperelliptic, then we can merge all zeroes by the induction hypothesis. So we may assume that C is a hyperelliptic component. If C is a double-zero hyperelliptic component, then m = 3 and a3 = a1 + a2. By Theorem 6.1, we can deform X continuously so that it contains a multiplicity one saddle connection joining z1 and z3. We merge z1 and z3 by shrinking the multiplicity one saddle connection, obtaining a flat surface X ′ 1 is non-hyperelliptic, thus we can merge all zeroes by induction hypothesis. 1 with two zeroes of different orders a2 and a1 + a3. In particular, X ′ Now suppose that C is a single-zero hyperelliptic component. In particular, m = 2. If a1 = a2, then by Lemma 5.2, D is also hyperelliptic. This is a contradiction to the assumption, and thus we must have a1 < a2. The base case g = 1 and bn > 2 is already proven in Proposition 9.3. By Theorem 1.4, we can find a multi-scale differential Y ∈ ∂C with two components intersecting at one node, such that the bottom level component Y−1 is a single-zero hyperelliptic flat surface of genus g−1 = 1 and containing the pole pn. By breaking up the zero of Y−1, we can obtain an element of ∂D. Let C−1 be the connected component of a stratum containing Y−1. We can apply induction hypothesis to B(C−1), so there exists a non-hyperelliptic −1 such that B(C′ component C′ In other words, ∂D is adjacent to the boundary of some (cid:3) non-hyperelliptic component C′. −1) = B(C−1). 9.2. Connected components of genus one multiple-zero strata. In this subsection, we will prove that non-hyperelliptic connected components of genus one strata are classified by rotation number. Assume that R1(µ) is a genus one single-zero stratum and µ 6= (12, −34), (2n, −2n), (2r, −r, −r) and (2r, −2r). Recall from Theorem 7.11 that the non-hyperelliptic connected components of R1(µ) are classified by rotation number, a topological invariant recalled in Section 7. For each r|d := gcd(b1, . . . 
, bn), there exists a unique non-hyperelliptic component Cr with rotation number r. Consider a map B : {Cr : r|d} → {non-hyperelliptic components of R1(a1, . . . , am, −b1, . . . , −bn)} obtained by breaking up the zero. We can easily compute the rotation number of B(Cr). For a flat surface X ∈ Cr with symplectic basis {α, β}, the rotation number r is given by gcd(d, Ind α, Ind β). While breaking up the zero, {α, β} still remains to be a symplectic basis, and their indices are not changed. So the rotation number of B(Cr) is equal to gcd(d, a1, . . . , am, Ind α, Ind β) = gcd(a1, . . . , am, r). By Proposition 9.3, B is surjective. In order to prove Theorem 1.7 for R1(µ), we need to show that B(Cr1) = B(Cr2 ) if gcd(a1, . . . , am, r1) = gcd(a1, . . . , am, r2). Instead of proving this directly, we first deal with the simplest case — double-zero strata R1(a1, a2, −b1, . . . , −bn). Proposition 9.7. Let R1(µ) be a genus one single-zero stratum and µ 6= (12, −34), (2n, −2n), (2r, −r, −r) and (2r, −2r). Let r1, r2 be positive integer divisors of d. Suppose that B is the map given by breaking up the zero into two zeroes of orders a1, a2. Then B(Cr1) = B(Cr2 ) if and only if gcd(a1, r1) = gcd(a1, r2). Proof. One direction of the proposition is immediate. If B(Cr1 ) = B(Cr2), then rotation numbers of B(Cr1 ) and B(Cr2 ) must be equal, thus gcd(a1, r1) = gcd(a1, r2). Conversely, assume that r = gcd(a1, r1) = gcd(a1, r2). Once we prove that B(Cr) = B(CR) for any R|d such that r = gcd(a1, R), then we can conclude that B(Cr) = B(Cr1 ) = B(Cr2). So we can reduce to the case when r1 = r. Consider a two-level multi-scale differential X(τ, C, [(0, 0)]) ∈ ∂Cr2. By Proposi- tion 7.1, we have r2 = gcd(d, Q1). Another multi-scale differential X(τ, C, [(0, a1)] is contained in ∂Cr, since (cid:3) gcd(d, Q1, a1) = gcd(a1, r2) = r. By Lemma 9.4, we have B(Cr) = B(Cr2 ) as desired. 9.3. Proof of Theorem 1.7. We finally complete the proof of Theorem 1.7 by using Proposition 9.1 and Proposition 9.7. First, we are dealing with the components adjacent to the special strata from Proposition 9.8 to Proposition 9.12. Proposition 9.8. The stratum R1(62, −34) has a unique non-hyperelliptic component with rotation number 3. 52 3 containing X = X(Id, (1, 1, 2, 2), [(0, 3)]), or C2 3 ) = B(C2 Proof. Let D be a non-hyperelliptic component of R1(62, −34) with rotation number 3. By Proposition 9.3, D is adjacent to some non-hyperelliptic component C of R1(12, −34). By Proposition 7.8, C is equal to one of two possibilities: C1 3 containing X ′ = X((1, 2), (1, 1, 2, 2), [(0, 3)]) in their boundary. It is sufficient to prove that B(C1 3 ) as connected components of R1(62, −34). We consider Y = T (0,3) 2 X = X(2, Id, (1, 2, 2, 2), [(2, 0)]). The bottom level component Y−1 is contained in R0(12, −3, −3; −4, −4), with three saddle connections β1, β2, β3. By breaking up the zero of Y−1 to obtain Y ′ −1. With proper choice of prong-matching, two of them, β2 and β3, remain to be parallel. By shrinking them, we can further degenerate Y ′ −1 into two-level multi-scale differential Z. See Figure 24. The top level component Z0 is contained in R0(9, −3; −4, −4) and the bottom level component Z−1 is contained in R0(62, −3, −11). By keeping track of the prongs, (Y−1, v− 1 ). Thus the levels 0 and -1 form a multi-scale differential X(2, Id, (1, 2, 1), [(2, 0)]) ∈ R1(9, −33), containing p1, p2, p3. 
By plumbing the transition between the level 0 and -1, we obtain a flat surface in R1(9, −33) with rotation number gcd(3, 3, 3) = 3 by Proposition 7.1. By swapping the labeling of p1 and p2, we obtain the flat surface in R1(9, −33) with the same rotation number. By Theorem 7.11, two flat surfaces are contained in the same connected component of R1(9, −33). However, remark that swapping p1 and p2 also swaps the connected (cid:3) components C1 1 ) degenerates to (Z0, v− 3 in R1(12, −34). Therefore, B(C1 3 ) = B(C2 3 and C2 1 , w− 1 , w− 3 ). Proposition 9.9. The stratum R1(3, 9, −34) has a unique non-hyperelliptic component with rotation number 3. 3 and C2 3 ) = B(C2 3 of R1(12, −34), as in the proof of Proposition 9.8. Again we Proof. We consider two components C1 3 ) as connected components of R1(3, 9, −34). Let Y = X(2, Id, (1, 2, 2, 2), [(2, 0)]) ∈ need to proof B(C1 1 C 3. By breaking up the zero with proper choice of prong-matching, two saddle connections β2 and β3 remain to be parallel. By shrinking β2, β3, we can further degenerate into two-level multi-scale differential Z. As in the proof of Proposition 9.8, the top level component Z0 is contained in R0(9, −3; −4, −4). By keeping track of the prongs, (Y−1, v− 1 ). Thus the levels 0 and -1 form a multi-scale differential with rotation number 3 in R1(9, −33), containing p1, p2, p3. By swapping the labeling of p1 and p2, we obtain the flat surface in R1(9, −33) with the same rotation number. By Theorem 7.11, two flat surfaces are contained in the same connected component in R1(9, −33). However, remark that swapping p1 (cid:3) and p2 also swaps the connected components C1 1 ) degenerates to (Z0, v− 3 in R1(12, −34). Therefore, B(C1 3 ) = B(C2 3 and C2 1 , w− 1 , w− 3 ). If bn = 2, then µ′ = (2n, −2n) and R1(µ′) does not have any non-hyperelliptic component by Proposi- tion 7.5. So the map B does not give any useful information. In order to analyze the double-zero stratum R1(a1, a2, −2n), we need to consider a slightly modified map B′ : {connected components of R1(2n, −2n)} → {connected components of R1(a1, a2, −2n)} also given by breaking up the zero. If a1 = a2 = n, we have the following result. Proposition 9.10. The stratum R1(n, n, −2n) does not have any non-hyperelliptic connected component. Proof. Assume the contrary — let D be a non-hyperelliptic component of R1(n, n, −2n). Then by Theo- rem 6.1, D contains a flat surface with a multiplicity one saddle connection joining z1 and z2. By shrinking the saddle connection, we obtain a two-level multi-scale differential X. The top level component is con- tained in some connected component C of R1(2n, −2n). By Proposition 7.5, C is hyperelliptic. The bottom level component is contained in the stratum R0(n, n, −2n − 2), which is connected and hyperelliptic. By (cid:3) Lemma 5.2, we can conclude that D is also hyperelliptic, a contradiction. Recall that the hyperelliptic components of R1(n, n, −2n) are classified by its ramification profiles by Theorem 1.4 proved in Section 8. Now suppose that a1 < a2. Since R1(a1, a2, −2n) does not have any ramification profiles, it does not have any hyperelliptic component. So B′ is surjective by Theorem 6.1 and Proposition 4.5. By Proposition 8.1 the domain of B′ is equivalent to the set of ramification profiles of R1(2n, −2n). Proposition 9.11. Let C1, C2 be the (hyperelliptic) connected components of R1(2n, −2n) corresponding to the ramification profiles P1, P2, respectively. 
Then B′(C1) = B′(C2) if and only if a1 is odd, or a1 is even and P1, P2 fix the same number of marked points. 53 Proof. Note that for a given n, there are two possible number k of fixed marked points, since k ≡ n+1(mod 2). For example, if n is even, then k is either one or three. Suppose that B′(C1) = B′(C2) and a1 is even. We need to prove P1, P2 fix the same number of marked points. Note that the rotation numbers of C1 and C2 are determined by the number of fixed marked points. Let P1 fixes k marked points. If k = 1, then after relabeling the poles, we have X(Id, 1, [(0, 0)]) ∈ ∂C1. Here, 1 means (1, . . . , 1). So by Proposition 7.1, the rotation number of C1 is equal to gcd(2, n) = 2. If k = 2, 3, then after relabeling the poles, we have X(Id, 1, [(0, 1)]) ∈ ∂C1. The rotation number is equal to gcd(2, n, 1) = 1. If k = 4, then after relabeling the poles, we have X(n − 1, Id, 1, [(0, 1)]) ∈ ∂C1. The rotation number is equal to gcd(2, n − 1, n + 1) = 2. Since the rotation number of B′(C1) is equal to the rotation number of C1, it is also determined by the number of fixed marked points. So if B′(C1) = B′(C2), then P1, P2 fix the same number of marked points. Conversely, suppose first that a1 is even and P1, P2 fix the same number of marked points. First, suppose that a1 < n − 1. We will deal with the case when P1 and P2 fix four marked points. This is the most complicated case, and the other cases will follow more easily by the same argument. By relabeling the poles, we may assume that P1 fixes n. Consider a multi-scale differential X(n − 1, τ, 1, [(0, 1)]) ∈ ∂C2 containing only one (fixed) pole pτ (n) in the bottom level component. The other fixed poles are labeled by τ (1) and τ ( n+1 2 ). Suppose that P2 does not fix n (So n > 3). Then pn is contained in the top level component. By relabeling the saddle connections, we may assume that τ ( a1 2 . Consider another multi-scale differential X(n − 1, τ, 1, [(0, a1 + 1)]). It has a ramification profile P3 that fixes n. By (1) of Lemma 9.5, we have B′(C2) = B′(C3). So we can reduce to the case when P2 fixes n. By relabeling the poles other than pn, we may assume that P1(1) = P1(1) and P1(i) = P1(n + 1 − i) for each i = 2, . . . , n − 1. Since P1 and P2 have the same cycle type, there exists a permutation σ fixing n, such that P2 = σ ◦ P1 ◦ σ−1. Therefore, it is enough to show that B′(C1) = B′(C2) whenever P2 = σ ◦ P1 ◦ σ−1 for each transposition σ = (i, j), 1 ≤ i < j ≤ n − 1. It is obvious that σ ◦ P1 ◦ σ−1 = P1 for each σ = (j, n + 1 − j), 2 ≤ j ≤ n − 1. So it remains to show B′(C1) = B′(C2) when P2 = (1, j) ◦ P1 ◦ (1, j) for each 1 < j ≤ n+1 2 , then we can take τ = (j, a1 + 1)(n + 1 − j, n − a1). In particular, τ (a1 + 1) = j. We have X(n − 1, τ, 1, [(0, 1)]) ∈ ∂C1. Consider a multi-scale differential X(n − 1, τ, 1, [(0, a1 + 1)]), with ramification profile P4, interchanging τ (1) = 1 and τ (a1 + 1) = j. Then by Lemma 9.5, we have B′(C1) = B′(C4). Similarly, we can deduce B′(C2) = B′(C4). Therefore B′(C1) = B′(C2) = B′(C4). Symmetrically, we also have B′(C1) = B′(C2) for 2 . Note that (1, n+1 2 , j) ◦ P1 ◦ ( n+1 each P2 = ( n+1 2 )(1, 2). Therefore, we can conclude that B′(C1) = B′(C2). 2 + 1) = n since a1 2 , j), 1 ≤ j < n+1 2 ) = (1, 2)(2, n+1 2 + 1 6= 1, n+1 2 . If j = n+1 Now we assume that a1 = n − 1. In this case, n is odd. We will deal with the case when P1 and P2 fix four marked points. By relabeling the poles, we may assume that P1 fixes n. 
Consider a multi-scale differential X(n − 1, τ, 1, [(0, 1)]) ∈ ∂C2 containing only one (fixed) pole pτ (n) in the bottom level component. The other fixed poles are labeled by τ (1) and n+1 2 . Suppose that P2 does not fix n (So n > 3). Then pn is contained in the top level component. By relabeling the saddle connections, we may assume that τ (n − 1) = n since n − 1 6= 1, n+1 2 . Consider another multi-scale differential X(n − 1, τ, 1, [(0, −1)]). It has a ramification profile P3 that fixes τ (n−1) = n. By (2) of Lemma 9.5, we have B′(C2) = B′(C3). So we can reduce to the case when P2 fixes n. By relabeling the poles other than pn, we may assume that P1(1) = P1(1) and P1(i) = P1(n+1−i) for each i = 2, . . . , n−1. Since P1 and P2 have the same cycle type, there exists a permutation σ fixing n, such that P2 = σ ◦ P1 ◦ σ−1. Therefore, it is enough to show that B′(C1) = B′(C2) whenever P2 = σ ◦ P1 ◦ σ−1 It is obvious that σ ◦ P1 ◦ σ−1 = P1 for each for each transposition σ = (i, j), 1 ≤ i < j ≤ n − 1. σ = (j, n + 1 − j), 2 ≤ j ≤ n − 1. So it remains to show B′(C1) = B′(C2) when P2 = (1, j) ◦ P1 ◦ (1, j) for each 1 < j ≤ n+1 2 , then we can take τ = (j, n − 2)(n + 1 − j, 2). In particular, τ (n − 2) = j. We have X(n − 1, τ, 1, [(0, 1)]) ∈ ∂C1. Consider a multi-scale differential X(n − 1, τ, 1, [(0, −1)]), with ramification profile P4, interchanging τ (1) = 1 and τ (n − 2) = j. Then by Lemma 9.5, we have B′(C1) = B′(C4). Similarly, we can deduce B′(C2) = B′(C4). Therefore B′(C1) = B′(C2) = B′(C4). Symmetrically, we also have B′(C1) = B′(C2) for each P2 = ( n+1 2 )(1, 2). Therefore, we conclude that B′(C1) = B′(C2). 2 . Note that (1, n+1 2 , j), 1 ≤ j < n+1 2 ) = (1, 2)(2, n+1 2 , j) ◦ P1 ◦ ( n+1 2 . If j = n+1 Finally, assume that a1 is odd. We will show that B′(C1) = B′(C2) when P1 fixes two marked points and P2 fixes four marked points. 54 If P1 fixes two marked points and a1 is odd, then we consider X(n − 1, τ, 1, P r) ∈ ∂C1 containing the only fixed pole in the bottom level component. By relabeling the saddle connections, we may assume that P1(τ (i)) = P1(τ (n + 1 − i)) for i = 1, . . . , n and P r = [(0, 0)]. Consider a multi-scale differential X(n − 1, τ, 1, [(0, a1)]) with ramification profile P3. Since a1 is odd, τ ( a1+1 ) are also fixed by P3. By (1) of Lemma 9.5, we have B′(C1) = B′(C3). We already have B′(C3) = B′(C2) since P3 fixes four (cid:3) marked points. Therefore, B′(C1) = B′(C2) = B′(C3). ) and τ ( a1+n 2 2 Proposition 9.12. The strata R1(r, r, −2r) and R1(r, r, −r, −r) does not have any non-hyperelliptic con- nected component with rotation number r. Proof. If R1(r, r, −2r) has a non-hyperelliptic component D with rotation number r, then by Theorem 6.1, this component is adjacent to a non-hyperelliptic component C of R1(2r, −2r) with rotation number R. Then gcd(R, r) = r, so R = r or 2r. This is contradiction to Proposition 7.6. Therefore D does not exist. The (cid:3) same argument works for R1(r, r, −r, −r) with Proposition 7.7. By combining above propositions and Proposition 9.7, we have the following Lemma 9.13. Let D := gcd(a1, d) and r|D. Suppose that µ 6= (r, r, −2r), (r, r, −r, −r) or (n, n, −2n). The stratum R1(µ) = R1(a1, a2, −b1, . . . , −bn) has a unique non-hyperelliptic connected component with rotation number r. Proof. If n = 1, then this is a usual meromorphic stratum and therefore proven in [2]. So assume that n > 1. First, suppose that R1(µ′) is not one of the special strata dealt in Section 7.3. Fix any positive integer R1|d such that gcd(a1, R1) = r. 
Then by Theorem 7.11, there exists a unique non-hyperelliptic component CR1 of R1(µ′) with rotation number R1. Let D1 = B(CR1 ). Then the rotation number of D1 is equal to r. Assume the contrary — that there exists another non-hyperelliptic connected component D2 of R1(µ) with rotation number r. Then by Proposition 9.1, D2 = B(CR2) for some R2|d. We have r = gcd(a1, R2) = gcd(a1, R1). By Proposition 9.7, we have D1 = B(CR1) = B(CR2 ) = D2. Now suppose that µ′ = (2n, −2n). If a1 is odd, then R1(a1, a2, −2n) is connected by Proposition 9.11 and the rotation number is equal to gcd(a1, 2) = 1. If a1 is even, then R1(a1, a2, −2n) has at most two con- nected components by Proposition 9.11. Since gcd(a1, 2) = 2, there are at least two connected components, corresponding to rotation numbers r = 1, 2. If µ′ = (12, −34) and r = 3, then there are exactly two possible cases µ = (3, 9, −34) and (6, 6, −34), which is proven by Proposition 9.8 and Proposition 9.9. If µ′ = (2r, −2r) or (2r, −r, −r), then r|D if and (cid:3) only if m = 2 and a1 = a2 = r, which is excluded by assumption. Now we are ready to prove Theorem 1.7. Proof of Theorem 1.7. Let R1(µ) = R1(a1, . . . , am, −b1, . . . , −bn). We denote a = a1 + · · · + am, d = gcd(a, b1, . . . , bn) and D = gcd(a1, . . . , am, d). For each r|D, we need to prove that there exists a unique non-hyperelliptic component of R1(µ) with rotation number r. Denote the corresponding single-zero stratum by R1(µ′). If µ′ = (r, −r), then r|D only if m = 1. If µ′ = (2r, −2r) or (2r, −r, −r), then r|D if and only if m = 1 or m = 2 and a1 = a2 = r. These cases are exceptional cases and proven in [2], Proposition 7.6, Proposition 7.7 and Proposition 9.12. If µ′ = (12, −34) and r = 3, then the case m = 1 is exceptional and there are in fact exactly two non- hyperelliptic components. This is proven in [15]. The cases when m = 2 are proven in Proposition 9.8 and Proposition 9.9. If m > 2, then R1(µ) has at least one zero of order 3 if 3|D. By Theorem 6.1 and Proposition 4.5, any non-hyperelliptic component D of R1(µ) with rotation number 3 is adjacent to the unique non-hyperelliptic component of R1(3, 9, −34) with rotation number 3. So D is unique. Let µ′ = (2n, −2n). If m = 2 and a1 = a2 = n, then R1(µ) has no non-hyperelliptic component by Proposition 9.10. First, suppose that ai is odd for some i. Then D = 1 and 1 is the only possible rotation number. Let D be a non-hyperelliptic component of R1(µ). By Proposition 9.1, D is adjacent to a connected component of C of R1(µ′). We can break up the zero of C to obtain the stratum R1(ai, a − ai, −2n), which is connected by Lemma 9.13. So R1(µ) is connected. Now suppose that all ai are even. Then D = 2 and there are two possible rotation numbers r = 1, 2. Since m > 2, there exists i such that ai 6= a − ai. By 55 Lemma 9.13, the stratum R1(ai, a − ai, −2n) has two (non-hyperelliptic) connected components, thus R1(µ) has two (non-hyperelliptic) connected components corresponding to r = 1, 2. Now assume that µ′ 6= (12, −34), (2n, −2n), (2r, −2r) or (2r, −r, −r). Then there exists a unique non- hyperelliptic component Cr of R1(µ′) with rotation number r by Theorem 7.11. By breaking up the zero from Cr, we obtain a non-hyperelliptic component of R1(µ) with rotation number r, proving the existence. We need to prove the uniqueness. Let D be a non-hyperelliptic component of R1(µ) with rotation number r. By Proposition 9.1, D is adjacent to a non-hyperelliptic component CR of R1(µ′) with rotation number R|d. 
Assume that R is the smallest among the components that D is adjacent to. Then the rotation number r of D is equal to gcd(R, D). It suffices to prove that R = r because then we have C = B(Cr) and C is unique. For each i = 1, . . . , m, we can break up the zero of CR into two zeroes of orders ai and a − ai to obtain a non-hyperelliptic component Di of R1(ai, a − ai, −b1, . . . , −bn). By Lemma 9.13, Di is a unique non-hyperelliptic component with rotation number Ri := gcd(R, ai) ≤ R. If Ri < R, then by breaking up the zero of CRi, we can again obtain Di. So C is adjacent to CRi and this contradicts to the minimality of (cid:3) R. Thus Ri = R for each i and therefore R = gcd(R, a1, . . . , am) = gcd(R, D) = r, as desired. 10. Higher genus strata In this section, we will classify the connected components of strata Rg(µ) for genus g > 1, completing the proof of Theorem 1.6 10.1. Higher genus single-zero strata. First, we work with a single-zero stratum of genus g > 1. We will prove that any non-hyperelliptic connected component of Rg(µ) can be obtained by a surgery called bubbling a handle from a connected component of a genus g − 1 single-zero stratum. Here is one difference from [2] and [13]: a flat surface in the genus one residueless single-zero stratum R1(µ) cannot be obtained by bubbling a handle from a genus zero residueless flat surface (because there do not exist such flat surfaces). So our base case of the induction has to be g = 1, not g = 0. This is why we treated genus one strata separately in Section 7. Even though we have different base cases, bubbling machinery will still allow us to enumerate the connected components of Rg(µ) similarly to [2] and [13]. 10.2. Unbubbling a handle. Recall that bubbling a handle at z operation ⊕z is given in Section 4.8, as in [3]. Since we are dealing with single-zero strata in this section, we will drop z in the notation and simply write it as ⊕. Recall that for a connected component C of a single-zero stratum Rg(µ), we have C ⊕s1 = C ⊕s2 if gcd(a + 2, s1) = gcd(a + 2, s2). This is because the multi-scale differential used for bubbling a handle with angle 2πs is contained in H1(a + 2, −a − 2), and its rotation number is equal to gcd(a + 2, s). So we can always assume that s|a + 2, by replacing s by gcd(a + 2, s). In fact, we can further extend the range of multi-scale differentials used for bubbling a handle. Consider a two-level multi-scale differential X(Id, C1, [(0, v)]) ∈ H1(a + 2, −a − 2). By Proposition 7.1, the rotation number of this multi-scale differential is given by gcd(a+2, C1, v). If gcd(a+2, C1, v) = s, then the component C ⊕ s can be obtained by gluing the pole of X(Id, C1, [(0, v)]) to the zero of a flat surface in C and plumbing all the nodes. Here we give the residueless version of [13, Lemma 14] and [2, Prop 6.1]. Lemma 10.1. Let C be a non-hyperelliptic connected component of a single-zero stratum Rg(µ) of genus g > 1. Then there exists a connected component C′ of Rg−1(a − 2, −b1, . . . , −bn) such that C = C′ ⊕ s for some 1 ≤ s ≤ a − 1. If bn > 2, then C′ can be chosen to be non-hyperelliptic. Proof. By Theorem 6.1, there exists a flat surface X ∈ C that has a multiplicity one saddle connection γ. By shrinking γ, we obtain X ∈ ∂C. The top level component X0 has zeroes at the two nodes and its genus is equal to g − 1. If X0 is contained in a non-hyperelliptic stratum, then we can merge two zeroes of X0 and obtain a curve X ′ in a component C′ of Rg−1(a − 2, −b1, . . . , −bn). 
If bn > 2, then C′ can be chosen to be non-hyperelliptic by Proposition 9.1. Therefore C = C′ ⊕ s for some s, see Figure 27. If X0 is hyperelliptic, we will move around more in ∂C until we land on the case when X0 is non- hyperelliptic. First, assume that X0 still has a multiplicity one saddle connection. Then by shrinking this saddle connection, we can merge two zeroes of X0 and degenerate to a single-zero hyperelliptic component C1 of Rg−1(a − 2, −b1, . . . , −bn). Therefore, C = C1 ⊕ s. If a is even and s = a 2 , then the rotation number of the flat surface E in R1(a, −a) used for bubbling a handle is equal to a 2 . In particular, the flat surface 56 C = C′ ⊕ s C = C′ ⊕ s shrink ... g a ... g − 1−1g a shrink ... −1g g − 1 ∈ C′ remove a handle a Figure 27. Unbubbling a handle C′ ... −1g g − 1 a − 2 shrink simple s.c. ... g a shrink simple s.c. ... −1g g − 1 a s h p a i r r i n k o f s . c . ... −1g g − 1 a ... −1g g − 1 ... −1g g − 1 plumb shrink ... −1g g − 1 plumb ... −1g g − 1 levels 0 and -1 Q1 Q2 levels -1 and -2 1 a Y ... −1g g − 1 simple s.c. Q1 Q2 a a Q1 < Q2 ... −1g g − 1 plu m b levels 0 a n d -1 plumb −b1 shrink −b1 levels -1 and -2 simple s.c. Q1 1 a a −b1 Q2 a Figure 28. Finding a non-hyperelliptic X0 in hyperelliptic. So C is hyperelliptic since it is obtained by gluing two hyperelliptic components. This is a contradiction, so s 6= a In particular, E is non-hyperelliptic and thus we can degenerate E into a 2 . two-level multi-scale differential in R1(a, −a) such that Q1 < Q2 by Proposition 7.15. By plumbing the level transition between the levels 0 and -1, we obtain a two-level multi-scale differential with the top level component contained in the stratum Rg−1(Q1 − 1, Q2 − 2, −b1, . . . , −bn). This is exactly the case when X0 is non-hyperelliptic. See the upper line of Figure 28. Now we assume that any continuous deformation of X0 does not have a multiplicity one saddle connection. By Theorem 6.1, this means X0 is contained in a double-zero hyperelliptic component with 2g+2 fixed marked points. Let p1 be a fixed pole of the smallest order. By Proposition 6.20, we may assume that X0 has a pair of parallel saddle connections with multiplicity two bounding the polar domain of p1. By shrinking this pair of saddle connections, we obtain a three-level multi-scale differential in ∂C. By plumbing the level transition between the levels -1 and -2, we obtain a two-level multi-scale differential Y whose bottom level component Y−1 is contained in R1(a, −b1, −(a − b1)). By assumption on p1, we have a − b1 > b1. Therefore, if the 57 bottom level component Y−1 is hyperelliptic, then the ramification profile of Y−1 must fix both poles. In particular, it fixes the node between the two levels. Since the top level component Y0 is still hyperelliptic, and two components of Y intersect at one node, we can conclude that C is hyperelliptic by Lemma 5.2. This is a contradiction, so Y−1 is not hyperelliptic. Then by Proposition 7.15, Y−1 degenerates into a two-level differential such that Q1 < Q2. Again, by plumbing the level transition between the levels 0 and -1, we land (cid:3) on the case when X0 is non-hyperelliptic. See the lower line of Figure 28. By applying this lemma repeatedly, we obtain the following Corollary 10.2. Assume that µ 6= (2n + 2g − 2, −2n). Let C be a non-hyperelliptic component of a single- zero stratum Rg(µ) of genus g > 1. Then there exists a non-hyperelliptic component C′ of R1(a − 2(g − 1), −b1, . . . , −bn) and integers 1 ≤ si ≤ a − 2(g − 1) + 2i − 1 for i = 1, . . . 
, g − 1 such that C = C′ ⊕ s1 ⊕ · · · ⊕ sg−1. Non-hyperelliptic components of a genus one stratum R1(µ) are classified by rotation number by The- orem 1.7, which we already proved in Section 9. So the formula in the above corollary can be rewritten as C = Cr ⊕ s1 ⊕ · · · ⊕ sg−1. (10.3) where Cr is the unique non-hyperelliptic component with rotation number r. We prove the following proposition, which is the residueless version of [2, Prop. 6.2]. Proposition 10.4. Assume that bn > 2. Then any non-hyperelliptic component of the single-zero stratum Rg(µ) of genus g > 1 is obtained by repeatedly bubbling a handle as one of the following two possibilities: C1 ⊕ 1 ⊕ · · · ⊕ 1 ⊕ 1 , C1 ⊕ 2 ⊕ · · · ⊕ 1 ⊕ 1 , where C1 is the unique non-hyperelliptic component of R1(a − 2(g − 1), −b1, . . . , −bn) with rotation number one. Moreover, the two components above are the same if and only if any bi is odd. Proof. First, we will prove that we can reduce to the case g = 2. Assume that in genus 2, Cr ⊕ s for any r| gcd(bi), 1 ≤ s ≤ a − 2g + 2 is equal to one of the two possibilities —C1 ⊕ 1 and C1 ⊕ 2. By Equation (10.3), then in any genus g > 1, we can write C = C1 ⊕ 1 ⊕ s2 ⊕ · · · ⊕ sg−1 or C1 ⊕ 2 ⊕ s2 ⊕ · · · ⊕ sg−1. Consider the g-level graph we obtain when bubbling g − 1 handles from C1, depicted in Figure 29. Let X be a g-level multi-scale differential corresponding to the graph. The top level component of X is contained in C1, a connected component of R1(a − 2(g − 1), −b1, . . . , −bn). If a − 2(g − 1) ≤ 4, then this stratum is either R1(2, −2), R1(3, −3), R1(4, −4) or R1(4, −22). Each of these strata are already treated in [2] (for the first three strata) and in Proposition 7.5 (for the last one). So we may assume that a − 2(g − 1) > 4. At level -1 of X, we have a flat surface in H1(a − 2(g − 1), −a + 2(g − 1)) of rotation number one or two. So this is not hyperelliptic, as the rotation number of the hyperelliptic component of H1(a − 2(g − 1), −a + 2(g − 1)) is equal to a−2(g−1) > 2. We can plumb the all level transitions of X except for the top level. Then we obtain at level -1 a non-hyperelliptic flat surface Y in Hg−1(a, 2g − 2 − a). Again by [2], we can conclude that the connected component of Hg−1(a, 2g − 2 − a) containing Y is one of the two possibilities: 2 H0 ⊕ 1 ⊕ · · · ⊕ 1 , H0 ⊕ 2 ⊕ · · · ⊕ 1 , where H0 denotes the connected stratum H0(a − 2g, −a + 2(g − 1)). This ends the proof of the reduction to g = 2. Therefore, if we prove that in genus 2 any component Cr ⊕ s is equal to one of the two possibilities C1 ⊕ 1 and C1 ⊕ 2, then we can complete the proof. In fact, we will prove that for g = 2, any nonhyperelliptic component is equal to Cr ⊕ 1 for some r = 1, 2, and C2 ⊕ 1 = C1 ⊕ 2 when C2 exists. Let C = Cr ⊕ s. Since Cr ⊕ s1 = Cr ⊕ s2 if gcd(s1, a + 2) = gcd(s2, a + 2) (see Section 4.8), we may assume that s|a + 2. Thus gcd(d, s) ≤ gcd(a, s) ≤ 2. First, assume that Cr is not a connected component of the special strata R1(12, −34) or R1(2n + 2, −2n−1, −4) for odd n, studied in Section 7.3. By Proposition 7.15, then there exists a multi-scale dif- ferential X ∈ ∂Cr with Q1 < Q2. By plumbing the nodes of X, we obtain a flat surface X ∈ Cr with a 58 unbubble ... g a ... 1 1 . . . 1 a ∈ C1 ... 1 ∈ C1 plumb levels from -1 to -(g-1) g − 1−1g a ∈ Hg−1(a, 2g − 2 − a) unbubble ∈ C1 ... 1 1 . . . 1 a C1 ⊕ 1 ⊕ . . . ⊕ sg−1 or C1 ⊕ 2 ⊕ . . . ⊕ sg−1 C1 ⊕ 1 ⊕ . . . ⊕ 1 or C1 ⊕ 2 ⊕ . . . ⊕ 1 Figure 29. Reduction to g = 2 α′ β′ 2πR + π 2πs + π z Figure 30. 
Saddle connections α′ and β′ in X ′ multiplicity one saddle connection α of index R = Q1. In particular, R 6= a 2 . We can find another simple closed curve γ so that {α, γ} form a symplectic basis of X. Then r = gcd(d, Ind α, Ind γ) and R = Ind α is divisible by r. Since bubbling a handle is a local surgery, the surface X ′ ∈ Cr ⊕ s still has a multiplicity one saddle connection α′ deformed from α. The index of α′ depends on the direction that we bubble the handle, i.e., on the choice of prong-matching used to plumb the node in the middle of Figure 6. Let β be the cross curve of the cylinder that we bubbled at z. The index of β is equal to s. After bubbling a handle, β deforms to a saddle connection in X ′, denoted by β′. So α′ and β′ intersect at z. Since R 6= a 2 , it is possible to choose a direction of bubbling a handle so that α′ and β′ intersect non-transversely, as in Figure 30. This is not possible only when Cr is a special connected component introduced in Section 7.3, and s = a+2 2 . These cases will be treated separately in Section 10.4. 2 and s ≤ a+2 Then Ind α′ = R, since the angle at z on the left side of α is unchanged. By shrinking α′, we obtain a two-level multi-scale differential X ∈ ∂Cr ⊕ s. The top level component X0 has two zeroes of orders R − 1 and a + 1 − R at the nodes s1, s2, respectively. The index of β′ is equal to s. So the rotation number of Y0 divides gcd(d, R − 1, s). If Y0 is contained in a hyperelliptic component, then two zeroes are interchanged by the involution σ. Therefore, R − 1 = a + 1 − R, a contradiction. So Y0 is contained in a non-hyperelliptic component. Therefore by Proposition 9.7, the connected component containing Y0 is adjacent to Cr′, where r′| gcd(d, R−1, s). Thus Cr⊕s = Cr′ ⊕s′ for some s′| gcd(R, a+2). Note that r′ ≤ gcd(d, s) ≤ gcd(a, a+2) ≤ 2. So we always have C = Cr ⊕ s for some r = 1, 2 and s|a + 2. However, if r = 2, then R were also even throughout the discussion. So r′| gcd(2, R − 1) = 1 and finally we always have C = C1 ⊕ s for some s|a + 2. See Figure 31. For any n ≤ R ≤ a − n, there exists a multi-scale differential X(Id, C, [(0, 1)]) ∈ ∂C1 such that Q1 = n i=1 Ci = R. Therefore, we can find a flat surface X ∈ C1 with a multiplicity one saddle connection of index R. Similarly for C2, for any even R such that n ≤ R ≤ a − n, we can find a flat surface X ∈ C2 with a P multiplicity one saddle connection of index R. 59 Cr ⊕ s ... 1 bubble plumb levels 0 and -1 Cr ... 1 a ... shrink α′ R a + 2 −R ... 1 a + 2 plumb ... 1 horizontal node R a + 2 −R a + 2 a + 2 a + 2 C1 ⊕ s′ ... 1 shrink simple s.c. R a + 2 −R a + 2 Figure 31. Navigating the boundary of Cr ⊕ s Assume that C = C1 ⊕ s and a is even. If s = a+2 2 − 1 and by the same argument in the previous paragraphs, C = Cr ⊕ s′ for some r = 1, 2 and s′| gcd(R, a + 2)|4. We claim that s′ < s for any case. If s′ = s = a+2 2 , then a = 0, 2 or 6. First two cases are impossible (a = 0) and has only one trivial case H(2, −2) which is already excluded (a = 2). So a = 6, but this is also impossible because gcd(R, a + 2) = gcd(2, 8) = 2 and therefore a = 2s′ − 2 ≤ 2. So we can assume that C = Cr ⊕ s for some r = 1, 2 and s < a+2 2 . 2 , then we choose R = a If a 2 is even, then we can choose R = a 2 . Then gcd(d, R − 1, s) = 1 and gcd(R, a + 2) = 2. Therefore 2 − 1, we have gcd(R, a + 2) = 1 since R is C = C1 ⊕ s′ for some s′ = 1, 2. Now for C1 ⊕ 2, by taking R = a odd. Thus C1 ⊕ 2 = Cr ⊕ 1 for some r = 1, 2. If a 2 is odd, then we first consider C2 ⊕ s. 
We choose even R = a gcd(R, a + 2) = 4. Thus C2 ⊕ s = C1 ⊕ s′ for some s′ = 1, 2, 4. For C1 ⊕ s, we choose R = a gcd(R, a + 2) = 1. Thus C1 ⊕ s = Cr ⊕ 1 for some r = 1, 2. So in any case, we can reduce to Cr ⊕ 1 for r = 1, 2. If all bi are even, then both C1 and C2 exist. Furthermore two components C1 ⊕ 1 and C2 ⊕ 1 are distinguished by spin parity which will be recalled in Section 10.3. Moreover by Proposition 10.4, we have C2 ⊕ 1 = C1 ⊕ 2. If any bi is odd, then C2 does not (cid:3) exists because d is odd. Therefore we always have C = C1 ⊕ 1. 2 − 1, then gcd(d, R − 1, s) = 1 and 2 , then 10.3. Spin structure of flat surfaces. Recall that in the classification of the connected components of usual strata in [2],[13] with g ≥ 2, the non-hyperelliptic connected components are distinguished by spin parity, which can be expressed in terms of the flat geometry as follows by the work of Johnson [12]. Let α1, β1, . . . , αg, βg be simple closed curves on a flat surface X that form a symplectic basis of H1(X, Z). Then the spin parity of X is: g Xi=1 (Ind αi + 1)(Ind βi + 1) ∈ Z/2Z. (10.5) Proposition 10.6. Suppose for X ∈ R1 that all poles are of even order, i.e. all bi are even. then the spin parity of X is equal to 1+(the rotation number of X) in Z/2Z. Proof. Let α, β be simple closed curves in X that form a symplectic basis of H1(X, Z). Then the rotation number r of X is equal to gcd(d, Ind a, Ind b) where d = gcd(a, b1, . . . , bn) is even by assumption. By (10.5), (cid:3) the spin parity is equal to 1+(the rotation number of X) in Z/2Z. Lemma 10.7. Let C be a non-hyperelliptic component of Rg(µ). Then the spin parity of C ⊕ s is equal to s + 1+(the spin parity of C) in Z/2Z. Proof. Choose X ∈ C and closed curves α1, β1, . . . , αg, βg on X forming a symplectic basis of H1(X, Z). Let X ′ ∈ C ⊕ s be the flat surface obtained by bubbling a handle from X. Then X ′ contains two saddle connections αg+1, βg+1 that are the boundary and the cross curve, respectively, of the flat cylinder used for bubbling a handle. We have Ind αg+1 = 0, Ind βg+1 = s. The curves α1, β1, . . . , αg+1, βg+1 now form a (cid:3) symplectic basis of X ′, so the spin parity of C ⊕ s is equal to s + 1+(the spin parity of C). 60 ∈ C3 ⊂ R1 9, −33 (cid:0) (cid:1) ∈ R1(14, −3, −11) C ⊕ 7 p1 p2 p3 p4 p1 p2 p3 p4 p1p2 p3 1 plumb 7 7 levels 0 and -1 1 7 7 14 14 shrink 1 p4 } Y plumb pair of s.c. 7 7 levels -1 and -2 14 C1 ⊕ 1 p1 p2 p3 p1 p2 p3 p1 p2 p3 plu m b levels 0 a n d -1 level-2 a n d -3 p1 p2p3 p4 1 1 14 Z p4 } X plumb 3 11 levels -1 and -2 p4 3 11 shrink pair of s.c. 14 14 p4 } W 3 11 14 Figure 32. Finding Z ∈ C ⊕ 7 ∩ C1 ⊕ 1 10.4. Proof of Theorem 1.6 for single-zero strata: special cases. It remains to deal with the special cases, to complete the proof of Proposition 10.4. In particular, we need to classify the connected components written as C ⊕ a+2 2 , where C is a special non-hyperelliptic connected component introduced in Section 7.3. Proposition 10.8. Let C be a non-hyperelliptic component of R1(12, −34) with rotation number 3. Then C ⊕ 7 = C1 ⊕ 1. Proof. We will find a multi-scale differential contained in C ⊕ 7 ∩ C1 ⊕ 1. Recall from Proposition 9.8 that B(C) ⊂ R1(62, −34) contains a two-level multi-scale differential Y with the top level component in R1(9, −33) with rotation number 3, and the bottom level component in R0(6, 6, −3, −11), containing p4. 
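As a small arithmetic illustration (not part of the original text, only restating what (10.5) and Lemma 10.7 already give), the contribution of the bubbled handle to the spin parity can be written out explicitly:

```latex
% Extra handle from bubbling: \operatorname{Ind}\alpha_{g+1}=0,\ \operatorname{Ind}\beta_{g+1}=s, so by (10.5)
(\operatorname{Ind}\alpha_{g+1}+1)(\operatorname{Ind}\beta_{g+1}+1) = (0+1)(s+1) = s+1 \pmod{2}.
% For s=1 the contribution is 2 \equiv 0, so the parity of C\oplus 1 equals that of C;
% for s=2 it is 3 \equiv 1, so the parity flips.  In particular C\oplus 1 and C\oplus 2
% carry opposite spin parities, consistent with Lemma 10.7.
```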
Therefore, C ⊕ 7 contains a two-level multi-scale differential Z with the same top level component and the bottom level component contained in the connected stratum R1(14, −3, −11). It remains to show that C1 ⊕ 1 also contains Z. Let X = X(3, Id, (1, 1, 1, 1), [(0, 0)]) ∈ C1 ⊂ R1(12, −34). The bottom level component is in R0(12, −3; −4, −7), containing p4. It has two saddle connections β1, β2. By breaking up the zero into two zeroes of orders 10 and 2, with a proper choice of prong-matching, the saddle connections β1, β2 remain parallel. By shrinking them, we obtain a two-level multi-scale differential W with top level component W0 ∈ H0(9, −4, −7). By keeping track of the prongs of (X−1, v− 1 ). Thus the levels 0 and -1 form a multi-scale differential X(3, Id, (1, 1, 1), [(0, 0)]) ∈ R1(9, −33) with rotation number 3. Therefore we can conclude that (cid:3) Z ∈ C1 ⊕ 1. See Figure 32. 1 ), we obtain (W0, v− 1 , w− 1 , w− Proposition 10.9. Let Cr be the non-hyperelliptic component of R1(2n + 2, −2n−1, −4) for odd n, with rotation number r = 1, 2. Then C2 ⊕ (n + 2) = C1 ⊕ 1. Proof. Let X = X(n − 1, (n − 1, n), 1, [(0, −1)]) ∈ C. The bottom level component X−1 is in R0(2n + 2, −2; −n, −n−2), containing two saddle connections β1, β2 bounding the polar domain of pn−1. By breaking 61 up the zero into two zeroes of order n + 1, with a proper choice of prong-matching, β1, β2 remain to be parallel. By shrinking them, we obtain a two-level multi-scale differential W with top level component W0 ∈ H0(2n, −n, −n−2). By keeping track of the prongs of (X−1, v− ). Thus the levels 0 and -1 form a multi-scale differential X(Id, 1, [(0, −n − 4)]) ∈ R1(2n, −2n−2, −4) with rotation number 1. Thus C2 ⊕ (n + 2) contains a two-level multi-scale differential Z with the top level component Z0 ∈ R1(2n, −2n−2, −4) with rotation number 1, and the bottom level component Z−1 ∈ R1(2n + 4, −2n − 2, −2) with rotation number 1. It remains to prove that C1 ⊕ 1 also contains Z. 1 ), we obtain (W0, v− 1 , w− , w− n+3 2 n+3 2 −1 ∈ R0(2, 2n, −2, −2n − 2). By keeping track of the prongs of (X ′ −1 into two zeroes of orders 2 and 2n, and shrinking the saddle connections β′ Consider X ′ = X(n − 1, (n − 1, n), (1, . . . , 1, 2), [(0, 0)]) ∈ C1. By breaking up the zero of the bottom 1 and β′ level component X ′ 2, we obtain W ′ with the top level component W ′ 0 ∈ H0(2n, −n − 1, −n − 1) and the bottom level component 1 , w− W ′ 2 ). So the levels 0 and -1 form a multi-scale differential X(Id, (1, . . . , 1, 2), [(0, −1)]) ∈ R1(2n, −2n−2, −4) with rotation number 1. Thus C1 ⊕ 1 contains a two-level multi-scale differential Z ′ with the top level component Z ′ −1 ∈ R1(2n + 4, −2n − 2, −2) with rotation number 1. So Z and Z ′ are contained in the boundary of the same connected component, (cid:3) proving that C2 ⊕ (n + 2) = C1 ⊕ 1. 0 ∈ R1(2n, −2n−2, −4) with rotation number 1, and the bottom level component Z ′ 1 ), we obtain (W ′ −1, v− 1 , w− 0, v− Now, we need to prove the following result for the hyperelliptic stratum R1(2n, −2n), similar to Proposi- tion 10.4. Proposition 10.10. Let C1 be a hyperelliptic component of R1(2n, −2n), n ≥ 2, with ramification profile P1. For any other (hyperelliptic) component C2 of R1(2n, −2n) and any 1 ≤ s ≤ n, the component C2 ⊕ s is equal to C1 ⊕ 1 or C1 ⊕ 2 Note that C2 ⊕ (n + 1) is hyperelliptic by Lemma 5.2. Also, two components C1 ⊕ 1 and C1 ⊕ 2 are distinguished by spin parity. Proof. We will construct an element of C2 ⊕ s in the following way. 
We will consider a two-level multi-scale differential W ∈ R1(2n + 2, −(2n + 2)) defined as follows. If s > 2, then W ∈ R1(Id, s, [(0, 0)]). If s = 2, then W ∈ R1(Id, 4, [(0, 2)]). If s = 1, then W ∈ R1(Id, 3, [(0, 1)]) ∈ R1(2n + 2, −(2n + 2)). In any case, the rotation number of W is equal to s. Also, the top level component W0 has two zeroes of orders at least two. By identifying the pole of W and the unique zero of a flat surface in C2, we obtain a three-level multi-scale differential contained in C2 ⊕ s. First assume that C2 has a fixed marked pole, say pn. By Proposition 6.20, we can find a flat surface X in C2 with a pair of parallel saddle connections β1, β2 bounding the polar domain of pn. By breaking up the zero of X into two zeroes with proper choice of prong-matching, the saddle connection β1, β2 remain parallel. This is because the total order around a zero is at least 6π and an angle bounded by β1, β2 is equal to 2π. By shrinking them, we obtain a two-level multi-scale differential Z with the top level component Z0 ∈ R1(2n − 2, −2n−1) and the bottom level component Z−1 ∈ R0(a1, a2, −2, −2n). Therefore, C2 ⊕ s contains a two-level multi- scale differential Y with the top level component Y0 ∈ R1(2n − 2, −2n−1) and the bottom level component Y−1 ∈ R1(2n + 2, −2, −2n) with rotation number r dividing 2. Since Y0 is hyperelliptic, Y−1 must be non-hyperelliptic. So Y−1 can degenerate to X(Id, (1, 1), [(0, r)]) ∈ R1(2n + 2, −2, −2n) where r|2 is its rotation number. By merging the level transition between level 0 and level -1, the new top level component is contained in the connected stratum R1(1, n−1, −2n). Since this stratum is adjacent to C1, we can conclude that C2 ⊕ s = C1 ⊕ s′ for some s′ = 1, 2. Now assume that C2 does not have a fixed marked pole, but a pair pn−1, pn of poles interchanged by hyperelliptic involution. If n = 2, then we are only taking care of C2 ⊕ 1. In this case, by the previous paragraph, C ⊕ 1 = C2 ⊕ 1 for other component C with a fixed marked pole. So we can assume that n > 2 and X = X(n − 2, τ, 1, [(0, v)] ∈ C2. The bottom level component X−1 has three parallel saddle connection. By breaking up the zero of X−1 into two zeroes with proper choice of prong-matching, these three saddle connections remain parallel. This is because the total order around a zero is at least 6π and the sum of two angles bounded by three saddle connections is also equal to 4π. After plumbing all level transitions and shrinking these three saddle connections, we obtain a two-level multi-scale differential Z with the top level component Z0 ∈ R1(2n − 4, −2n−2) and the bottom level component Z−1 ∈ R0(a1, a2, −22, −(2n − 2)). 62 Therefore, C2 ⊕ s contains a two-level multi-scale differential Y with the top level component Y0 ∈ R1(2n − 4, −2n−2) and the bottom level component Y−1 ∈ R1(2n+2, −22, −(2n−2)) with rotation number r dividing 2. Since Y0 is hyperelliptic, Y−1 must be non-hyperelliptic. So Y−1 can degenerate to X(Id, (1, 1, 2), [(0, r)]) ∈ R1(2n + 2, −2, −2n) where r|2 is its rotation number. By merging the level transition between level 0 and level -1, the new top level component is contained in the connected stratum R1(3, n − 3, −2n). Since this (cid:3) stratum is adjacent to C1, we can conclude that C2 ⊕ s = C1 ⊕ s′ for some s′ = 1, 2. In conclusion, combining Proposition 10.4, Proposition 10.10, Proposition 10.6 and Lemma 10.7, we can classify all non-hyperelliptic components of single-zero strata of genus g > 1, completing the proof of Theorem 1.6 for single-zero strata. 
10.5. Proof of Theorem 1.6 for multiple-zero strata. Now we will complete the proof of Theorem 1.6 by classifying the connected components of a multiple-zero stratum Rg(µ) = Rg(a1, . . . , am, −b1, . . . , −bn) of genus g > 1. We denote a := a1 + · · · + am and µ′ := (a, −b1, . . . , −bn) as before. In Proposition 9.1, we have shown that every non-hyperelliptic component Rg(µ) is adjacent to a non-hyperelliptic component of a single-zero stratum. In other words, the breaking up the zero map B : {non-hyperelliptic components of Rg(µ′)} → {non-hyperelliptic connected components of Rg(µ)} is surjective. This gives an upper bound for the number of non-hyperelliptic components of any multiple-zero stratum. In order to completely classify the non-hyperelliptic components, we need to determine when B(C1) = B(C2) for two non-hyperelliptic components C1, C2 of Rg(µ′), similarly to how it is done in [2, Prop. 7.3] for the usual meromorphic strata. The proof there only uses the fact that any connected component can be obtained by bubbling a handle, which is also true for non-hyperelliptic components of residueless strata of genus g > 1 by Lemma 10.1. Thus we have Lemma 10.11. Let Rg(µ) be a single-zero stratum of genus g > 1 of even type. Assume that a = a1 + a2 for an odd a1. Consider the function B : {connected components of Rg(µ)} → {connected components of Rg(a1, a2, −b1, . . . , −bn)} obtained by breaking up the zero. Let Codd, Ceven be the two non-hyperelliptic components of Rg(µ) distin- guished by spin parity. Then B(Codd) = B(Ceven). We can finally prove Theorem 1.6. Proof of Theorem 1.6. We use induction on m > 0. If m = 1, then Rg(µ) is a single-zero stratum, already treated in Section 10.1. For m > 1, denote the corresponding single-zero stratum by Rg(µ′), as in Section 9. We can consider the breaking up the zero function B : {non-hyperelliptic components of Rg(µ′)} → {non-hyperelliptic components of Rg(µ)}. By Proposition 9.1, B is surjective. If Rg(µ) is of even type, then so is Rg(µ′), and thus Rg(µ′) has exactly two non-hyperelliptic components. So Rg(µ) has at most two non-hyperelliptic components. Note that in this case, breaking up the zero preserves the spin parity computed by (10.5) since it is a local surgery. Therefore, we can construct two non-hyperelliptic flat surfaces in Rg(µ) with odd and even spin parities. Since the spin parity is invariant within a connected component, we can conclude that Rg(µ) has exactly two non-hyperelliptic components, distinguished by spin parity. If Rg(µ) is not of even type, and Rg(µ′) is also not of even type, then Rg(µ′) has a unique non-hyperelliptic component. Then obviously Rg(µ) has a unique non-hyperelliptic component. If Rg(µ) is not of even type while Rg(µ′) is, then all bi are even and Rg(µ′) has two non-hyperelliptic components C1 and C2. Since Rg(µ) is not of even type, by relabeling the zeroes, we may assume that a1 is odd. Let µ′′ = (a1, a−a1, −b1, . . . , −bn). We can consider two functions B′ : {non-hyperelliptic components of Rg(µ′)} → {non-hyperelliptic components of Rg(µ′′)} B′′ : {non-hyperelliptic components of Rg(µ′′)} → {non-hyperelliptic components of Rg(µ)} given by breaking up the zero. Then B = B′′ ◦ B′. By (1) of Lemma 10.11, B′(C1) = B′(C2). Therefore, (cid:3) B(C1) = B(C2) and Rg(µ) has a unique non-hyperelliptic component. 63 References [1] M. Bainbridge, D. Chen, Q. Gendron, S. Grushevsky, and M. M¨oller, The moduli space of multi-scale differentials, arXiv preprint arXiv:1910.13492, (2022). [2] C. 
Boissy, Connected components of the strata of the moduli space of meromorphic differentials, Comment. Math. Helv., 90 (2015), pp. 255–286.
[3] C. Boissy, Moduli space of meromorphic differentials with marked horizontal separatrices, Algebraic & Geometric Topology, 20 (2020), pp. 2373–2412.
[4] M. Brandt, J. Bruce, M. Chan, M. Melo, G. Moreland, and C. Wolfe, On the top-weight rational cohomology of A_g, arXiv preprint arXiv:2012.02892, (2022).
[5] M. Chan, S. Galatius, and S. Payne, Tropical curves, graph complexes, and top weight cohomology of M_g, Journal of the American Mathematical Society, 34 (2021), pp. 565–594.
[6] D. Chen and Q. Chen, Principal boundary of moduli spaces of abelian and quadratic differentials, Annales de l'Institut Fourier, 69 (2019), pp. 81–118.
[7] D. Chen and Q. Gendron, Towards a classification of connected components of the strata of k-differentials, Documenta Mathematica, 27 (2022), pp. 1031–1100.
[8] D. Chen, M. Möller, A. Sauvaget, and D. Zagier, Masur-Veech volumes and intersection theory on moduli spaces of Abelian differentials, Invent. Math., 222 (2020), pp. 283–373.
[9] B. Dozier, Measure bound for translation surfaces with short saddle connections, Geometric and Functional Analysis, 33 (2023), pp. 1–47.
[10] A. Eskin, H. Masur, and A. Zorich, Moduli spaces of Abelian differentials: the principal boundary, counting problems, and the Siegel-Veech constants, Publ. Math. Inst. Hautes Études Sci., 97 (2003), pp. 61–179.
[11] Q. Gendron and G. Tahar, Quadratic differentials with prescribed singularities, arXiv preprint arXiv:2111.12653, (2021).
[12] D. Johnson, Spin structures and quadratic forms on surfaces, Journal of the London Mathematical Society, Second Series, 2 (1980), pp. 365–373.
[13] M. Kontsevich and A. Zorich, Connected components of the moduli spaces of Abelian differentials with prescribed singularities, Invent. Math., 153 (2003), pp. 631–678.
[14] E. Lanneau, Connected components of the strata of the moduli spaces of quadratic differentials, Ann. Sci. Éc. Norm. Supér. (4), 41 (2008), pp. 1–56.
[15] M. Lee and G. Tahar, One-dimensional strata of residueless meromorphic differentials, arXiv preprint arXiv:2310.13128, (2023).
[16] S. Mullane, Strata of differentials of the second kind, positivity and irreducibility of certain Hurwitz spaces, Annales de l'Institut Fourier, 72 (2022), pp. 1379–1416.
[17] G. Tahar, Counting saddle connections in flat surfaces with poles of higher order, Geometriae Dedicata, 196 (2018), pp. 145–186.

Stony Brook University, Department of Mathematics, Stony Brook, NY 11794-3651
Email address: [email protected]
synthetic_cpt
3
Cold-Start_Data_Selection_for_Few-shot_Language_Model_Fine-tuning_A_Prompt-Based_Uncertainty_Propagation_Approach.pdf
A network-based biomarker discovery of Cold/Hot ZHENG chronic gastritis and Cold/Hot herbs of formulae

Boyang Wang, Pan Chen, Peng Zhang and Shao Li*

Institute for TCM-X, MOE Key Laboratory of Bioinformatics, Bioinformatics Division, BNRist, Department of Automation, Tsinghua University, 100084 Beijing, China

Abstract

Objective: To discover biomarkers and uncover the mechanisms of Cold/Hot ZHENG (syndrome in traditional Chinese medicine) chronic gastritis (CG) and of Cold/Hot herbs in traditional Chinese medicine (TCM) formulae, from the perspective of systems biology.

Background: CG is a common inflammatory disease, and in TCM diagnosis CG can be classified into Cold ZHENG (Asthenic Cold) and Hot ZHENG (Excess Hot). However, the molecular features of Cold/Hot ZHENG in CG and the mechanisms of Cold/Hot herbs in formulae for CG remain unclear.

Methods: Based on data from 35 patients with Cold/Hot ZHENG CG and 3 scRNA-seq CG samples, we conducted analyses with transcriptomics datasets and algorithms to discover biomarkers for Cold/Hot ZHENG CG. We also collected 25 formulae for CG (with traditional effects related to Cold/Hot ZHENG) and the corresponding 89 Cold/Hot herbs (including Warm/Cool herbs) to characterize the features and construct target networks of Cold/Hot herbs on the basis of network target and enrichment analyses.

Results: Biomarkers of Cold/Hot ZHENG CG, represented by CCL2 and LEP, suggested that Hot ZHENG CG might be characterized by over-inflammation and exuberant metabolism, whereas Cold ZHENG CG showed a trend of suppression in immune regulation and energy metabolism. Biomarkers of Cold/Hot ZHENG also showed significant changes during the progression of gastric cancer. Biomarkers and pathways of Hot herbs tend to regulate immune responses and energy metabolism, while those of Cold herbs are more likely to participate in anti-inflammatory effects.

Conclusion: In this study, we found that the biomarkers and mechanisms of Cold/Hot ZHENG CG and those of Cold/Hot herbs are closely related to the regulation of immunity and metabolism. These findings may reflect the underlying mechanisms, build bridges between multiple views of Cold/Hot ZHENG and Cold/Hot herbs, and provide a research paradigm for further achieving precision TCM.

Keywords: Cold/Hot ZHENG, chronic gastritis, Cold/Hot properties, traditional Chinese medicine, network targets

1. Introduction

Chronic gastritis (CG) is defined as an inflammatory disease of the gastric mucosa and is classified as chronic superficial gastritis (CSG) and chronic atrophic gastritis (CAG) based on the histopathologic patterns and endoscopic appearances of the gastric mucosa [1]. The prevalence of CG may exceed 50% worldwide [2]. The progressive deterioration of atrophic gastritis, which subsequently leads to dysfunction of the gastric mucosa, is also the highest independent risk factor for gastric cancer [3]. To date, the etiology of CG remains incompletely understood. It can be caused by a range of factors such as stress, alcohol, irrational use of nonsteroidal anti-inflammatory drugs, and H. pylori infection, leading to an imbalance between offensive acid-pepsin secretion and defensive mucosal factors such as cell shedding and mucin secretion [4]. Regarding stress risks, physiologic stress can result in dysregulation of gastric pH, which contributes to gastritis. In the stressed state, increased levels of histamine and acetylcholine result in elevated acid production, thus inducing or worsening gastritis [5].
The present therapy of gastritis aims to alleviate inflammation and associated dyspeptic symptoms, and specific treatments should be determined depending on each individual's condition. In China, traditional Chinese medicine (TCM) therapy is an important complementary treatment option for CG [6]. ZHENG, a TCM theoretical understanding of the symptomatic profiles of disease, is used to recognize and understand the non-healthy physiological states of patients from a holistic view. In TCM clinical diagnosis, there are different types of ZHENG for the same disease depending on the phenotype profiles. All diagnostic and therapeutic approaches in TCM are based on the typology of ZHENG. In the TCM diagnosis of gastritis, patients with CG can be classified into two main types: Cold ZHENG and Hot ZHENG. CG associated with Cold ZHENG is characterized by cold limbs, loose stool, clear abundant urine, and a white-greasy tongue coating. CG associated with Hot ZHENG is characterized by a red tongue, yellow-dense tongue coating, thirst, dry mouth, deep-colored urine, and dysphoria with a feverish sensation. In our previous study, we evaluated the biological basis of CG associated with Cold/Hot ZHENG, suggesting that metabolism-immune network imbalance has the potential to be a new perspective in the development of sub-typing and individualized treatment for CG [7]. According to the rules of TCM diagnosis and treatment that have been applied for thousands of years in China, patients with Cold ZHENG should be treated with herbs of hot property (Hot herbs) and patients with Hot ZHENG should be treated with herbs of cold property (Cold herbs). For the treatment of CG in TCM, CG associated with Cold and Hot ZHENG is therefore treated with herbs that have hot and cold properties, respectively. However, the biological mechanisms behind Hot and Cold herbs for the treatment of Cold and Hot ZHENG remain unclear. Recent advances in TCM research are closely associated with the rapid development of concepts in network pharmacology and systems biology that provide approaches to understanding the rules of TCM diagnosis and treatment [8-10]. The aim of this study is to investigate the mechanisms of action of Cold/Hot herbs in the treatment of CG associated with Cold/Hot ZHENG through a network pharmacology approach.

2. Results

2.1 Outline of the whole study

As shown in Figure 1, we conducted a comprehensive analysis of gastritis and of widely used TCM for gastritis from the perspective of Cold/Hot ZHENG and network targets. First and foremost, based on seed genes from the Cold/Hot biological network proposed by Li [11] and microarray data of CAG and CSG with Cold/Hot ZHENG [7], we constructed a Cold/Hot biological network for CG to discover the features and biomarkers of Cold/Hot ZHENG. On the basis of the biological network and machine learning algorithms, features and biomarkers related to immunity and metabolism were obtained. Besides, we collected 29 formulae for CG and the corresponding 132 herbs recorded in these formulae from the Pharmacopoeia of China. The Cold/Hot property information and meridian information of these herbs were collected from the Pharmacopoeia. The distribution of meridian information of the herbs was counted, and the compound composition of these herbs was obtained from the commonly used TCM databases Herbiomap [12] and Symmap [13].
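To make this data assembly concrete, here is a minimal sketch (not the authors' actual pipeline; the file names, column names and tables are hypothetical placeholders) of how the herb property, meridian and compound annotations could be merged and how herbs lacking annotations would be dropped, as described next:

```python
import pandas as pd

# Hypothetical input tables (placeholders, not the authors' files):
#   formulae.csv : formula_id, herb            (formulae for CG and their herb records)
#   herbs.csv    : herb, property, meridians   (Cold/Hot property and meridian info from the Pharmacopoeia)
#   compounds.csv: herb, pubchem_cid           (compound composition from Herbiomap/Symmap)
formulae = pd.read_csv("formulae.csv")
herbs = pd.read_csv("herbs.csv")
compounds = pd.read_csv("compounds.csv")

# Keep only herbs that actually appear in the CG formulae
used = herbs[herbs["herb"].isin(formulae["herb"].unique())]

# Filter out herbs without a recorded Cold/Hot property or without any compound annotation
annotated = used.dropna(subset=["property"])
annotated = annotated[annotated["herb"].isin(compounds["herb"].unique())]

print(f"{formulae['formula_id'].nunique()} formulae, "
      f"{annotated['herb'].nunique()} annotated herbs, "
      f"{compounds.loc[compounds['herb'].isin(annotated['herb']), 'pubchem_cid'].nunique()} unique compounds")
```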
After filtering out herbs without recorded properties or compound composition, 25 formulae, 89 Cold/Hot herbs, 19 other herbs and 2853 compounds were kept for further study, and the target profiles of these herbs and formulae were characterized by our previous network-based algorithms [14,15]. Based on the constructed network, features and biomarkers of Cold/Hot ZHENG, which were mainly related to immunity and metabolism, were acquired. Finally, by combining the Cold/Hot biological network for CG with the target profiles of Cold/Hot TCM, we constructed a Cold/Hot target biological network of Cold/Hot TCM for CG. This network describes the most frequently targeted Cold/Hot genes of Cold/Hot TCM and uncovers, to some extent, the mechanisms of Cold/Hot CG and of the corresponding Cold/Hot herbs in formulae for CG.

Figure 1. The overall outline of the study from two perspectives, including Cold/Hot ZHENG CG and Cold/Hot herbs.

2.2 Molecular features of Cold/Hot ZHENG CG

In 2013, Li collected 35 patients with Cold/Hot ZHENG CG for microarray measurement, including 17 patients with Cold ZHENG (8 with chronic superficial gastritis and 9 with chronic atrophic gastritis) and 18 patients with Hot ZHENG (8 with chronic superficial gastritis and 10 with chronic atrophic gastritis). In order to find key molecules related to Cold/Hot ZHENG in CG, we collected seed genes from the previous Cold/Hot network model as the background for Cold/Hot ZHENG. PLS-DA analysis successfully separated Hot and Cold ZHENG CG for CAG patients and CSG patients, respectively (Figure 2A). The VIP (variable importance in projection) of each gene in CAG or CSG patients was calculated, and 26 of the seed genes had VIP greater than 1 in both CAG and CSG patients (Figure 2B; see the sketch below). Besides, from another perspective, DEGs (differentially expressed genes) between Cold and Hot ZHENG were calculated with the limma model in CAG patients and CSG patients, respectively, to find significantly differentially expressed genes in both disease conditions. Among these DEGs, 112 genes were differentially expressed in both CAG and CSG patients (adjusted p value < 0.05, BH adjustment). Of them, 11 genes were up-regulated in both Cold ZHENG CAG and CSG patients, while 47 were up-regulated in Hot ZHENG patients of both disease conditions (Figure 2C). Combining these two analyses from different perspectives, both statistical testing and hierarchical clustering distinguished Hot ZHENG patients from Cold ZHENG patients (Figure 2D). This result suggested that there exist distinct gene expression patterns between these two conditions of CG, which can be mined from our analysis and the subsequently constructed biological networks.

Figure 2. Analysis for finding representative molecules between Cold/Hot ZHENG CG. (A) PLS-DA analysis for Cold/Hot ZHENG CAG (left) and CSG (right). (B) Venn plot showing genes with VIP larger than 1 in both Cold/Hot ZHENG CAG and CSG. (C) Venn plot showing the differential expression of genes in microarrays of Cold/Hot ZHENG CAG and CSG. (D) Heat map of the 47 genes up-regulated in Hot ZHENG CAG and CSG and the 11 genes up-regulated in Cold ZHENG CAG and CSG, for CAG patients (left) and CSG patients (right).

2.3 Immune and metabolic characteristics of Cold/Hot ZHENG CG

Based on the molecular features of Cold/Hot ZHENG CG identified above, we further paid attention to the biological processes and pathways enriched by these molecular features.
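Before moving on to the enrichment analyses, here is a minimal sketch of the VIP screen used in Section 2.2 (the expression matrix, group labels and number of components are placeholders, and the original analysis settings are not reproduced); it fits a two-class PLS-DA model and applies the standard VIP formula:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def vip_scores(pls: PLSRegression) -> np.ndarray:
    """Variable Importance in Projection for a fitted PLS(-DA) model."""
    T, W, Q = pls.x_scores_, pls.x_weights_, pls.y_loadings_
    p = W.shape[0]
    ss = np.diag(T.T @ T @ Q.T @ Q)          # sum of squares of y explained per component
    w_norm = W / np.linalg.norm(W, axis=0)   # normalise each weight vector
    return np.sqrt(p * (w_norm ** 2 @ ss) / ss.sum())

# X: samples x genes expression matrix restricted to the Cold/Hot seed genes (placeholder)
# y: 0 for Cold ZHENG, 1 for Hot ZHENG (placeholder labels)
rng = np.random.default_rng(0)
X = rng.normal(size=(17, 50))                # e.g. 17 samples, 50 seed genes
y = rng.integers(0, 2, size=17)

pls = PLSRegression(n_components=2).fit(X, y)
vip = vip_scores(pls)
selected = np.where(vip > 1)[0]              # genes with VIP > 1, as in Figure 2B
print(len(selected), "genes with VIP > 1")
```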
Firstly, we performed KEGG pathway enrichment [16] and Gene Ontology (GO) enrichment [17] on the 1846 DEGs (1241 genes up-regulated in Cold ZHENG and 605 genes up-regulated in Hot ZHENG) of CAG patients. Pathways and biological processes related to immunity and metabolism were significantly enriched (Figure 3A). Further, we performed Gene Set Enrichment Analysis (GSEA) [18] on CAG patients, and the significantly enriched GSEA terms are shown in Figure 3. GSEA terms related to immunity, inflammation, cytokines and chemokines (Figure 3B-D) were activated in Hot ZHENG CAG patients (equivalently, inhibited in Cold ZHENG CAG patients; NES < -1), while terms related to the metabolism and secretion of peptides, hormones and steroids (Figure 3B, E), as well as cellular junctions and adhesion, were activated in Cold ZHENG CAG patients (NES > 1). These findings suggested that in Hot ZHENG CG, pathways and biological processes related to immunity and inflammation might be over-activated, while in Cold ZHENG CG the main distinguishing feature was the activation of endocrine function and energy metabolism. Apart from CG, these modules of immune regulation and metabolism have also been reported in studies of other diseases [19-21].

Figure 3. Enrichment analysis for immune and metabolic characteristics of Cold/Hot ZHENG CG. (A) Dot plot of KEGG and GO enrichment analysis for the 1846 DEGs of CAG patients. (B) GSEA for genes and their expression in CAG patients. (C)-(E) GSEA results for the Chemokine Pathway, Inflammatory Response and Peptide Secretion, respectively.

As inferred from our previous findings, we focused on the immune and inflammation characteristics of CG. Based on the CIBERSORT algorithm [22], the proportions of different immune cells were deconvoluted in Cold and Hot ZHENG CG, respectively. Some immune cells, represented by M1 macrophages, showed significantly different proportions between Hot ZHENG CG and Cold ZHENG CG (Figure 4A). The proportions of both M1 and M2 macrophages were significantly higher in Hot ZHENG CG than in Cold ZHENG CG, which supports the finding that, from the perspective of immune and inflammation regulation, the most distinguishing features between Hot ZHENG and Cold ZHENG gastritis are over-inflammation in Hot ZHENG and immune suppression in Cold ZHENG.

Figure 4. Network construction of Cold/Hot ZHENG CG. (A) Inference of the proportions of immune cells significantly changed between Cold ZHENG CAG and Hot ZHENG CAG. (B) Expression of previously reported and newly found biomolecules for Cold/Hot ZHENG CAG at the single-cell level in CAG patients. (C) Box plots showing the expression of biomolecules during the progression of gastric cancer.

In addition, we selected 3 Hot ZHENG CAG samples from a large-scale scRNA-seq study of human gastric cancer progression [3]. In these cellular-level measurements of the expression of genes in the characteristic pathways and biological processes related to immune regulation and metabolism, we focused on some key molecules of Cold/Hot ZHENG CG. Biomarkers of Cold ZHENG CG that participate in these key pathways and biological processes, such as HTR2B, CRH, NOS1 and LEP, were hardly expressed in any cell type of the Hot ZHENG CG samples.
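Setting the single-cell observations aside for a moment, the over-representation statistics behind the KEGG/GO dot plots (Figure 3A, and later Figures 5 and 6) can be sketched as a hypergeometric test with BH correction; the gene lists and pathway sets below are toy placeholders, and the original study may have relied on dedicated enrichment tools instead:

```python
from scipy.stats import hypergeom
from statsmodels.stats.multitest import multipletests

def enrich(deg_genes, pathway_sets, background):
    """One-sided hypergeometric (over-representation) test per pathway, BH-adjusted."""
    deg = set(deg_genes) & set(background)
    N, n = len(background), len(deg)
    rows = []
    for name, members in pathway_sets.items():
        K = len(set(members) & set(background))   # pathway size within the background
        k = len(set(members) & deg)               # overlap with the DEG list
        p = hypergeom.sf(k - 1, N, K, n)          # P(X >= k)
        rows.append((name, k, K, p))
    adj = multipletests([r[3] for r in rows], method="fdr_bh")[1]   # BH adjustment
    return [(name, k, K, p, q) for (name, k, K, p), q in zip(rows, adj)]

# Toy placeholders
background = [f"g{i}" for i in range(2000)]
degs = background[:150]
pathways = {"toy_inflammatory_response": background[:60] + background[500:540],
            "toy_steroid_metabolism": background[900:980]}
for row in enrich(degs, pathways, background):
    print(row)
```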
These Cold ZHENG biomarkers (HTR2B, CRH, NOS1 and LEP) were reported in a previous study [7] and are related to key links of Cold ZHENG, including the 5-HT related gene HTR2B, the corticotropin-releasing hormone related genes CRH, CRHR1 and POMC, the leptin related gene LEP and the nitric oxide related gene NOS1. On the contrary, genes related to immunity and inflammation were expressed at relatively higher levels, especially in macrophages, which is consistent with our finding above that macrophages were significantly increased in Hot ZHENG CG. These genes included CCL2, CD14, NFKB1, IL10RA, TNF and JAK2, which are related to inflammation, cytokines, chemokines and immune regulation (Figure 4B). Based on public omics data, it was also found that some of these biomarkers, such as LEP, CCL2, CD14 and TNF, showed significant changes in the progression from gastritis to dysplasia and gastric cancer (Figure 4C) and have been reported to participate in the progression of gastric cancer. The expression of these biomolecules and their related biomolecules was found to be associated with gastric cancer progression and prognosis [23-26], which also suggests that pathways or cells related to these biomarkers may play roles in cancer progression and prognosis. Besides, some other seed genes also showed significantly differential expression during the progression of gastric cancer (Figure S2). These findings suggest that the key biomolecules of Cold/Hot ZHENG might also play important roles in other disease progressions, which needs further analysis and research. These complex features of immune regulation and metabolism, as well as biomolecules like TNF, VEGF, TGFB and NFKB1, also reflect the potential risk of CG, especially CAG, in inflammation-induced tumorigenesis according to our previously constructed tumorigenesis network [27,28], and have also been reported in studies of tumorigenesis in other digestive system diseases such as chronic hepatitis and enteritis [29,30].

Finally, based on a combination of multi-omics data and machine learning algorithms, we constructed a homogeneous biological network composed of the key seed genes of Cold/Hot ZHENG and the DEGs shared by the two kinds of CG (Figure S1). In this network, the interactions between genes were collected from the STRING database. Many of the seed genes played important roles in this network with high connectivity, especially immune- and inflammation-related genes such as CCL2, CD14, NFKB1, IL2RB, JAK2, VEGFC, TGFB3 and IL10RA, most of which showed high expression in macrophages of Hot ZHENG CG patients. Besides, some genes related to endocrine function and energy metabolism, including SSTR2, SSTR5, HTR1A, CRH, CRHR1 and POMC, also had high degrees in this network. It is worth noting that not only the biomolecules in the network for Cold/Hot ZHENG CG but also the genes with close functional or biological relationships to them may play important roles in the further diagnosis and mechanistic understanding of Cold/Hot ZHENG CG.

According to our enrichment results, the metabolic features of CG are also of vital importance in the diagnosis of Cold/Hot ZHENG CG. Peptide-related metabolism is significantly enriched, and peptide-protein interactions participate in various fundamental cellular functions [31]. Pathologically elevated steroid hormones may be accompanied by leptin resistance, which weakens normal energy expenditure and thermogenesis [32].
In our previous study, we found that the serum level of leptin in CAG patients associated with Cold ZHENG was significantly higher than that of normal subjects [7]. Therefore, the presence of pathologically elevated leptin levels in patients with Cold ZHENG means that their reduced energy expenditure and thermogenesis may be due to leptin resistance. Conditional Dlx1/2-null mice showed a loss of growth hormone-releasing hormone neurons with higher somatostatin expression and lower energy expenditure [33]. A previous study also showed that somatostatin in the paraventricular nucleus of the hypothalamus could inhibit thermogenesis [34], suggesting that SSTR is involved in energy expenditure. It has been reported that 5-HT could inhibit thermogenesis through Htr3 in brown adipose tissue [35]. Besides, in the median preoptic nucleus, the thermoregulatory response is initiated by stimulation of GABA neurons, suggesting that GABA plays an important role in immune regulation and energy expenditure [36]. Last but not least, the suppression of tight junctions and gap junctions was associated with the activation of gene networks of adaptive immunity [37], and tight junctions have also been reported to be related to immune suppression in COVID-19 [38].

2.4 Characteristics of formulae for CG

From the perspective of systems biology, we focused on the potential effects of formulae for CG. We measured the typical pathways and biological processes of Cold/Hot ZHENG CG identified above in all 29 formulae recorded in the Pharmacopoeia. Among the specific pathways and biological processes of immune regulation, inflammation and steroid-dominated energy metabolism, the potential effects of different formulae differed most in immune-related pathways and biological processes such as immune cells, immune response, cytokines and chemokines, as well as in some other pathways and biological processes related to energy reserve metabolism and nitric oxide. On the contrary, steroid metabolic process, response to steroid hormone, steroid hormone mediated signaling pathway, regulation of inflammatory response and response to oxidative stress were consistently and significantly enriched, and might constitute a common potential mechanism of these formulae against CG (Figure 5A).

Figure 5. Potential mechanism of formulae recorded in the Pharmacopoeia of China for CG. (A) Dot plot showing the potential effect of formulae on the representative pathways and biological processes of Cold/Hot ZHENG CG. (B) Occurrence of the six labels, including ZI YIN, XIAO JI, SAN HAN, QING RE, HUO XUE and XING QI, in these formulae. (C) Heat map showing the correlation between the six labels and the proportion of Cold or Hot herbs in a formula. (D) Word cloud showing pathways enriched by formulae with different traditional effects.

All of the 25 formulae could be tagged with six labels of traditional effects: ZI YIN, XIAO JI, SAN HAN, QING RE, HUO XUE and XING QI. According to TCM experience, these six labels correspond to specific effects: ZI YIN, XIAO JI, SAN HAN, QING RE, HUO XUE and XING QI mean nourishing Yin, eliminating food stagnation, dispelling Cold, clearing Heat, promoting blood circulation, and promoting the smooth flow of qi, respectively. As shown in Figure 5B and 5C, XING QI was the most frequent effect among these formulae, and it was positively correlated (Wilcoxon test, P value < 0.05; see the sketch below) with the proportion of Hot TCM in a formula.
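A minimal sketch of this label-association test (the per-formula table and label assignments are hypothetical placeholders; the Wilcoxon test mentioned in the text is sketched here as a rank-sum comparison of Hot-herb proportions between formulae with and without a given label):

```python
import pandas as pd
from scipy.stats import ranksums

# Hypothetical per-formula table: proportion of Hot herbs and a 0/1 indicator for the XING QI label
df = pd.DataFrame({
    "hot_ratio": [0.8, 0.7, 0.6, 0.75, 0.3, 0.2, 0.4, 0.35, 0.5, 0.65],
    "xing_qi":   [1,   1,   1,   1,    0,   0,   0,   0,    0,   1],
})

with_label = df.loc[df["xing_qi"] == 1, "hot_ratio"]
without_label = df.loc[df["xing_qi"] == 0, "hot_ratio"]

stat, p = ranksums(with_label, without_label)   # Wilcoxon rank-sum test
print(f"rank-sum statistic = {stat:.2f}, P = {p:.3f}")
```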
In addition, ZI YIN had a significantly positive correlation with the proportion of Cold TCM, while SAN HAN had a significantly positive correlation with the proportion of Hot TCM. XIAO JI and XING QI also showed a relatively positive correlation with the proportion of Hot TCM in a formula. These findings may point to the material basis of these six labels for traditional effects. Further, for the four traditional-effect labels that differed significantly between formulae with Cold/Hot herbs, we focused on the pathways of the formulae carrying these labels to uncover the mechanisms of these four traditional effects. Pathways belonging to signal transduction, the endocrine system and the immune system were the most important for these four traditional effects, which consistently supports our finding that metabolism and immune regulation are the key mechanisms of Cold/Hot ZHENG CG.
2.5 Herbs and depiction of their target profiles in formulae against CG
Another main finding of this study concerns the various mechanisms of action of Cold/Hot herbs in traditional formulae for CG. We collected 29 traditional formulae for CG and their corresponding herbs from the Pharmacopoeia. These 29 formulae included 242 herbs in total (132 unique herbs). When annotating the compound composition of these herbs using two large databases, SymMap and HerbBioMap, 108 of the 132 herbs were successfully matched with compounds as well as Cold/Hot and meridian information. In total, 2853 unique compounds were found in these herbs and annotated with PubChem CIDs [39] for further target prediction. The target profiles of these compounds were calculated by our previously developed network-based algorithm drugCIPHER-CS [14], and the top 100 druggable targets of each profile were chosen as the targets for further analysis. To measure the holistic targets of formulae and herbs, a previously reported statistical strategy [15] was applied, and targets with an occurrence significance of less than 0.05 (BH adjustment) were chosen.
Figure 6. Analysis of Cold/Hot herbs in formulae for CG. (A) Heat map showing the Cold/Hot properties and meridians of these Cold/Hot herbs. (B) Visualization of the normalized Cold/Hot scores of Cold/Hot ZHENG seed genes, in which some genes near the two axes are not shown and are hidden in the red circle. (C)-(D) Word cloud plots for the targets of Cold/Hot herbs and the shared targets, in which word size represents the ratio of the normalized Cold score to the normalized Hot score. (E)-(F) Dot plots showing KEGG and GO enrichment of the targets of Cold/Hot herbs.
The Cold/Hot property is one of the most important pieces of information in TCM. According to the Pharmacopoeia, the Hot and Cold properties are divided into seven levels: Cold, light Cold, Cool, Ping (a state with no tendency towards Cold or Hot), light Warm, Warm and Hot (including great Hot). The first three levels belong to the Cold category and the last three levels belong to the Hot category. Herbs classified as Warm formed the largest group (45), those classified as Cold formed the second largest group (31), while herbs classified as Hot, light Warm and Cool were the least numerous. Apart from the Cold/Hot property, the meridian is also of vital importance for herbs, because it may indicate where the herbs take effect.
Herbs in traditional formulae for CG mainly involve 12 kinds of meridians: triple energizer (tri-jiao, a special TCM term), large intestine, small intestine, heart, pericardium, liver, lung, kidney, stomach, gallbladder, spleen and bladder. One herb might be associated with more than one meridian, indicating that herbs might take effect in multiple tissues. The spleen was the most frequent destination of these herbs, and the liver, stomach, heart, lung and kidney ranked second to sixth. In contrast, the triple energizer, pericardium, gallbladder and bladder ranked last, each with fewer than 10 herbs. We also examined the cross-mapping of Cold/Hot properties and meridians (Figure 6A), and it is worth noting that the most frequent pair was Warm and spleen. This finding corresponds with the previous hypothesis, because the spleen is an important immune-related organ and one of the most important effects of Hot TCM is enhancing immune regulation. However, the distribution of Cold TCM seemed to be more concentrated in the stomach, kidney, liver, lung and heart rather than the spleen.
2.6 Molecular features of targets of Cold/Hot TCM for CG
Considering the complex composition of TCM, many herbs might share the same targets according to network target analysis. Therefore, in order to identify the respective tendencies of Cold/Hot TCM, we set a threshold of 0.7 (see Methods) and divided the targets of Cold/Hot TCM into three classes: Hot TCM targets, Cold TCM targets and shared targets. Shared targets were the class of targets that might be targeted by a number of Hot TCM and Cold TCM at the same time (Figure 6B). Biomolecules related to inflammation and immune regulation, such as TNF, IL1R1, VEGFA, TLR2 and IL2RG, were classified as Cold TCM targets, suggesting that the potential effects of Cold TCM might include anti-inflammation and immune regulation (Figure 6C). On the other hand, biomolecules involved in energy metabolism, including the metabolic processes of steroids, hormones, 5-HT, SSTR, NO, CRH and GABA, were targeted by Hot TCM, together with those playing important roles in the immune response such as IL6R and CXCR1 (Figure 6D). The shared targets comprised biomolecules participating in many important biological processes, such as IL1B, CD4, AR, ESR1, ESR2, NFKB1 and TGFB1.
KEGG and GO enrichment analyses were also performed on the Hot herb targets and the Cold herb targets. In the enrichment results for Cold herb targets, biological processes related to the inflammatory response, cytokines, chemokines and immune cells represented by macrophages were significantly enriched, as were inflammatory pathways including the TNF, VEGF and HIF-1α signaling pathways. Apart from these immune-related pathways and biological processes, those related to lipid metabolism, such as fatty acid oxidation and lipid oxidation, were also significantly enriched (Figure 6E). The enrichment results for Hot herb targets were quite different, mainly falling in biological processes associated with inhibitory neurotransmitters such as 5-HT and GABA, and with hormones including steroid hormones, corticosteroid hormones, peptide hormones and other endocrine hormones (Figure 6F). The results also included cellular processes of T cells, such as T cell activation and proliferation. In general, these findings showed that, on the one hand, the mechanism of Cold herbs against Hot ZHENG mainly involves immune- and inflammation-related factors, as well as metabolism, such as lipid metabolism, to some extent.
On the other hand, the effects of Hot herbs against Cold ZHENG CG were mainly related to neurotransmitters and the endocrine system, which further supports the closer relationship between Cold ZHENG CG and stress-induced factors and shows the therapeutic potential of Hot TCM in this regard. These results preliminarily revealed the mechanisms of action of Cold/Hot herbs in the treatment of CG associated with Cold/Hot ZHENG (Figure 7). These functional characteristics of Cold/Hot herbs in CG formulae, namely immune regulation and metabolic regulation, have also been reported as therapeutic targets in studies of other formulae for other diseases [40-43].
Figure 7. Tai Chi diagram showing the regulation programs of Cold/Hot herbs, dominated by bidirectional regulation of inflammation/immunity and energy metabolism. Cold herbs tended to suppress over-inflammation and over-exuberant energy metabolism in Hot ZHENG, while Hot herbs tended to enhance and restore immunity and energy metabolism in Cold ZHENG.
Based on the molecular features of Cold/Hot TCM, we constructed two specific networks for the targets of Cold and Hot herbs, respectively, according to the interactions recorded in the STRING database (Figure S3). Among the Cold herb targets, INS, TLR2, VEGFA, TNF, IL1R1 and TGFB3 played vital roles in the target network of Cold herbs for CG. Among the Hot herb targets, the 5-HT-related genes HTR1A and HTR1B, NOS1, the GABA-related gene family and CRH, as well as the somatostatin receptors SSTR2 and SSTR5, were found in the target network of Hot herbs for CG. In other words, Hot herbs might act on Cold ZHENG gastritis by regulating the endocrine system and energy metabolism. Apart from these genes, targets such as CCL2, IL6, JAK2, IL2RA and CXCR1 were also of vital importance in the Hot herb target network, suggesting that another potential mechanism of Hot herbs against Cold ZHENG gastritis is the regulation of immune responses.
3. Materials and Methods
3.1 Differential analysis and PLS-DA for Cold/Hot ZHENG CG
To identify the significantly differentially expressed genes in Cold/Hot ZHENG CG, the R package limma [44] was used to construct the generalized linear models. Genes with significant changes (log2(fold change) ≥ 1 or ≤ -1, adjusted P value < 0.05, BH correction) were considered differentially expressed genes in CAG and CSG, respectively. The DEGs in Cold/Hot ZHENG CG were defined as the DEGs shared by Cold/Hot ZHENG CAG and CSG (adjusted P value < 0.05, BH correction). PLS-DA (partial least squares discriminant analysis) was performed with the R package mixOmics [45] v6.14.1. The VIP (variable importance in projection) of each seed gene of Cold/Hot ZHENG was calculated with the function PLSDA.VIP() to estimate its contribution to distinguishing Cold/Hot ZHENG CG (VIP > 1).
3.2 Enrichment analysis and immune characteristics of Cold/Hot ZHENG CG
To identify the enriched pathways and biological processes in which the biomolecular features of Cold/Hot ZHENG CG are involved, enrichment analyses were performed with the R package clusterProfiler [46], including KEGG enrichment, GO enrichment and GSEA (gene set enrichment analysis). Significantly enriched pathways and biological processes (adjusted P value < 0.05, BH correction) were kept for further analysis. For the GSEA results, significantly enriched pathways and biological processes were further divided into 'active' or 'inhibited' according to their positive or negative NES (normalized enrichment score).
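The selection logic of Sections 3.1 and 3.2 can be summarised in a short sketch. The analysis above was carried out with the R packages limma, mixOmics and clusterProfiler; the Python fragment below only illustrates the thresholding and labelling rules described in the text, and the table layout and column names (gene, log2FC, adj_p, NES, VIP) are hypothetical placeholders rather than the actual outputs of those packages.

```python
# Illustrative sketch only: DEG selection, shared DEGs, GSEA direction and
# VIP filtering as described in Sections 3.1-3.2 (column names are assumed).
import pandas as pd

def select_degs(results: pd.DataFrame, lfc: float = 1.0, alpha: float = 0.05) -> pd.DataFrame:
    """Keep genes with |log2 fold change| >= 1 and BH-adjusted P value < 0.05."""
    return results[(results["log2FC"].abs() >= lfc) & (results["adj_p"] < alpha)]

def shared_degs(cag: pd.DataFrame, csg: pd.DataFrame) -> set:
    """DEGs of Cold/Hot ZHENG CG are defined as DEGs found in both CAG and CSG."""
    return set(select_degs(cag)["gene"]) & set(select_degs(csg)["gene"])

def split_gsea(gsea: pd.DataFrame, alpha: float = 0.05) -> pd.DataFrame:
    """Label significantly enriched gene sets as 'active' or 'inhibited' by NES sign."""
    sig = gsea[gsea["adj_p"] < alpha].copy()
    sig["direction"] = ["active" if nes > 0 else "inhibited" for nes in sig["NES"]]
    return sig

def important_seed_genes(vip_table: pd.DataFrame, cutoff: float = 1.0) -> list:
    """Seed genes whose PLS-DA VIP value exceeds 1 are treated as important."""
    return vip_table.loc[vip_table["VIP"] > cutoff, "gene"].tolist()
```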
3.3 Target prediction for formulae, herbs and their compounds
To predict the potential targets of the compounds in the herbs of the formulae for CG, the genome-wide targets and druggable targets were calculated with our network-based computational algorithm drugCIPHER-CS [14]. The top 100 druggable targets of each compound were considered as its target profile. A computational strategy [15] was applied to calculate the holistic targets of the collected formulae for CG and of the Cold/Hot herbs composing them. Targets with significant occurrence (adjusted P value < 0.05, BH correction) were listed as the holistic targets of the herbs or formulae.
3.4 Definition of Cold/Hot herbs' targets and network construction
Taking the complex composition of herbs into consideration, many biomolecules might be potential targets of multiple herbs. A strategy was therefore implemented to define the targets of Hot herbs, Cold herbs and both. For each target, the counts of Cold/Hot herbs targeting it were normalized by the total numbers of Cold/Hot herbs, and the ratio of the normalized counts was used to divide the targets into three groups: targets of Hot herbs, targets of Cold herbs and shared targets (see the sketch below). For Cold/Hot ZHENG CG, the biological network was constructed from the seed genes with VIP values larger than 1 and the DEGs shared by CSG and CAG. The node size of each seed gene depended on its VIP value. The biological networks for the targets of Cold/Hot herbs were constructed from the unions of the targets of Cold or Hot herbs, respectively, and the seed genes with VIP values larger than 1. All gene-gene interactions in these networks were collected from the STRING database [47].
3.5 Analysis of Cold/Hot TCM and traditional efficacy
To depict the potential mechanisms of the formulae on the features of Cold/Hot ZHENG CG, KEGG and GO enrichment analyses were performed on the holistic targets of the formulae with the R package clusterProfiler (adjusted P value < 0.05, BH correction). Six labels for traditional effects, namely ZI YIN, XIAO JI, SAN HAN, QING RE, HUO XUE and XING QI, were assigned to these formulae according to TCM experience. Whether a label was correlated with the composition of Cold/Hot herbs was determined by the Spearman correlation test (P value < 0.05).
4. Discussion
Hot and Cold ZHENG is a dominant theory in TCM, and the two ZHENG represent different conditions and phenotypes within one disease [48,49]. For example, in previous studies, significantly differential symptoms in a cohort of SARS cases were closely related to Cold ZHENG [50]. However, the mechanism of Cold/Hot ZHENG, as well as that of "Hot herbs for Cold ZHENG, Cold herbs for Hot ZHENG", remains unclear. Taking the two kinds of CG, especially CAG, as a breakthrough point, in this study we comprehensively investigated Cold/Hot ZHENG CG and the corresponding traditional formulae for gastritis composed of Cold/Hot herbs, in order to uncover the mechanisms of the two traditional properties, Hot and Cold, for diseases and herbs originating from ancient China. Based on microarray datasets of CG with different ZHENG and advanced machine learning algorithms, we identified genes of vital importance as well as significantly enriched pathways and biological processes, and constructed a biological network of the seed genes of Cold/Hot ZHENG and the DEGs in CG of different ZHENG.
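As an aside, to make the target-classification rule of Section 3.4 and the label test of Section 3.5 concrete, the following is a minimal Python sketch of the logic only. The original analysis was performed in R, and the data layout, variable names and the exact form of the normalization used here are assumptions made for illustration.

```python
# Illustrative sketch only: classify targets as Hot / Cold / shared using the
# ratio of normalized herb counts (threshold 0.7), and test whether a
# traditional-effect label correlates with the Hot-herb proportion of formulae.
from collections import Counter
from scipy.stats import spearmanr

def classify_targets(hot_pairs, cold_pairs, n_hot, n_cold, thr=0.7):
    """hot_pairs / cold_pairs are (herb, target) pairs for Hot / Cold herbs."""
    hot_counts = Counter(t for _, t in hot_pairs)
    cold_counts = Counter(t for _, t in cold_pairs)
    classes = {}
    for target in set(hot_counts) | set(cold_counts):
        hot_frac = hot_counts[target] / n_hot     # normalized by number of Hot herbs
        cold_frac = cold_counts[target] / n_cold  # normalized by number of Cold herbs
        ratio = hot_frac / (hot_frac + cold_frac)
        if ratio >= thr:
            classes[target] = "Hot"
        elif ratio <= 1 - thr:
            classes[target] = "Cold"
        else:
            classes[target] = "shared"
    return classes

def label_vs_hot_proportion(has_label, hot_proportion):
    """Spearman correlation between a 0/1 label per formula and its Hot-herb proportion."""
    rho, p_value = spearmanr(has_label, hot_proportion)
    return rho, p_value
```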
Pathways and biological processes related to immune and inflammation regulation were significantly active in Hot ZHENG patients, and their related biomolecules were considered hub nodes in this network due to their high degrees. Meanwhile, pathways and biological processes related to steroids and hormones were active in Cold ZHENG, and the corresponding biomolecules were also of vital importance in the network. In general, the findings of this study suggest that the main differences between Hot ZHENG and Cold ZHENG CG might be over-inflammation in Hot ZHENG and the suppression of immunity and energy metabolism in Cold ZHENG. This is consistent with our previous study, in which hormone-related biological processes were predominant in the Cold ZHENG network and immune-related biological processes were predominant in the Hot ZHENG network [11]. More specifically, these results suggest that Hot and Cold ZHENG CG involve different biological mechanisms, implying that different specific treatment strategies are required for CG depending on the ZHENG. Chemokines and cytokines such as CCL2 were important biomolecules representing Hot ZHENG from the perspective of immune regulation and inflammation [7], while energy metabolism involving leptin and nitric oxide was another representative difference between Hot and Cold ZHENG.
Then, based on network analysis, we carried out network target analysis and described the holistic target profiles of the formulae for CG and their corresponding compositions of Cold/Hot herbs. Apart from the target information, we also took the meridian and Cold/Hot information into account in order to find the potential relationship between Cold/Hot properties and the tissues where the herbs might take effect. The targets of Cold/Hot herbs were divided into three groups and, interestingly, the targets of Hot herbs were significantly enriched in the metabolism, regulation or endocrine processes of GABA, hormones, steroids and 5-HT, which have been reported to be closely related to energy metabolism and thermogenesis, while the targets of Cold herbs were mainly enriched in inflammation regulation, such as the TNF, HIF-1 and VEGF signaling pathways, as well as in the immune response, including cytokines, chemokines and cellular processes of immune cells. In our constructed biological target networks for Cold/Hot herbs against CG, biomolecules related to inflammation and immune regulation, such as TNF, TLR2, TGFB3, IL2RG, IL1R1 and VEGF, were of vital importance for Cold herbs, while those involved in energy metabolism, such as SSTR, HTR, GABA and CRH, as well as those related to the immune response, such as CCL2, IL6, IL2RA, IL2RB and JAK2, played important roles in the network for Hot TCM. Immune-related pathways and biomolecules such as TLR2 and CD14 are potential targets of the Weifuchun capsule, which is clinically used for CAG, is composed of Cold/Hot herbs and exerts effects on both Cold and Hot ZHENG, such as regulating the immune response and anti-inflammation [51]. Huangqi Jianzhong decoction, a formula for Cold ZHENG CG, showed protective effects in CAG rats, which might be due to rebalancing of energy expenditure [52]. Taken together, the thermogenic and immune-enhancing effects of Hot herbs may contribute to the therapeutic effect on Cold ZHENG CG. For herbs with the Cold property, their treatment of Hot ZHENG CG relies mainly on their anti-inflammatory effects, involving for example the TNF, NF-κB and VEGF signaling pathways.
Zuojin Pill is used in the treatment of Hot CG because it contains the Cold-property herb Huanglian (the dried rhizome of Coptis chinensis Franch., Coptis teeta Wall., and Coptis deltoidea C.Y.Cheng and P.K.Hsiao), which has been shown to have anti-inflammatory effects on CAG in rats by inhibiting the NF-κB signaling pathway [53]. Another widely used formula for CAG, Moluodan, was reported to reduce the inflammation level and increase lipid accumulation in MNNG-induced cells [54]. Serum levels of TNF-α, IL-8 and VEGF have been reported to be associated with the severity of CG, the degree of neutrophil infiltration in CG, and the severity of precancerous lesions in the stomach, respectively [55]. The Weiqi decoction formula has been reported to reduce VEGF levels in CAG rats [56]. The inhibition of the inflammatory response is thus the main therapeutic route of Cold-property herbs in the treatment of Hot ZHENG CG. Leptin is considered a link between the neuroendocrine and immune systems and could be a possible target for intervention in immunometabolism-mediated pathophysiology; it has been reported that individuals with leptin resistance have lower NK cell counts and function than normal individuals [57]. Gastric mucosal leptin expression was significantly higher in H. pylori-positive patients than in negative patients [58]. Thus, in the context of immune-metabolic imbalances, leptin appears to be a pivotal molecule in the treatment of Cold/Hot CG with Cold/Hot herbs.
Considering the difference in immune status between Hot and Cold ZHENG CG and the risk of transformation of CG into cancer, we analyzed and compared the association between Cold/Hot ZHENG CG and Cold/Hot tumors. Hot tumors are characterized by immune activation with T cell infiltration, whereas Cold tumors show a lack or absence of T cell infiltration [59]. In this study, we found that the biological processes related to immunity, inflammation, cytokines and chemokines were activated in patients with Hot ZHENG CAG and inhibited in patients with Cold ZHENG CAG. Some key seed genes of Hot/Cold ZHENG, such as TGFB, CCL2, VEGF and TLR, play an important role in converting cold tumors into hot tumors through increased T-cell infiltration [60]. TGFB is an immunosuppressive molecule, and its inhibition increases T cell infiltration [61]. Loss or low expression of specific chemokines and their corresponding receptors, such as CCL2 and CCL5, reduces infiltration of effector T lymphocytes [62]. It has been reported that VEGF could interrupt T-cell priming, inhibit DC maturation and exhaust CTLs [63]. Thus, recognizing the Cold/Hot ZHENG of tumors such as gastric carcinoma is particularly important because of the different immune modulation achieved by Cold/Hot herb treatment.
Last but not least, in order to achieve personalized and precise medical treatment, precision medicine [64], as well as precision TCM [65] derived from it, provides new insight for current medical strategies. The diagnosis of Cold/Hot ZHENG is a holistic observation that potentially represents the states of immune regulation and energy metabolism of patients. Deciding on Cold/Hot herbs based on the corresponding Cold/Hot ZHENG may therefore be a kind of precision TCM based on macroscopic phenotypes combined with traditional experience.
With the help of research to uncover the mechanisms of Cold/Hot ZHENG and herbs, the potential rules of precision TCM from the perspective of Cold and Hot may be revealed, thus facilitating newer and more precise medical strategies.
There are still some limitations to this study. First, we summarized the mechanisms of Cold/Hot ZHENG CG and of the Cold/Hot herbs in CG formulae at the level of pathways and biological processes, combined them with existing studies and proposed some potential biomarkers of these mechanisms; however, these biomarkers have not been verified in detail. Second, the compounds of each herb in the formulae for CG were collected from TCM databases, and the data recorded in these databases might not be as accurate as results detected by high-performance liquid chromatography or other analytical methods. In addition, from the perspective of the formulae, we have not completely uncovered the mechanism of each of them, due to their complex compositions; some of them are composed of both Hot and Cold herbs, producing complicated effects on chronic gastritis. Fortunately, these shortcomings in data composition could be partly compensated for by our network-based algorithms and network target analysis, and each formula may need to be analyzed separately in the future to reveal its comprehensive and specific mechanism.
In conclusion, starting from two points, we conducted an exhaustive analysis to identify vital biomolecules and biological features of Cold/Hot ZHENG CG and of Cold/Hot herbs for CG, based on a combination of gene expression data, network analysis, statistical models and machine learning algorithms. Two specific characteristics distinguishing Hot and Cold ZHENG were the differences in immune and inflammation responses, and those in endocrine function, energy metabolism and thermogenesis. In general, Hot ZHENG CG showed a trend of over-inflammation and exuberant energy metabolism, whereas Cold ZHENG CG showed suppression of immune regulation and energy metabolism. With respect to Cold/Hot herbs, Hot herbs preferentially targeted biomolecules and biological processes of the immune response and energy metabolism, while Cold herbs had potential effects on inflammation and immune regulation. This study not only uncovers the potential mechanisms of Cold/Hot ZHENG CG and of Cold/Hot herbs in formulae for CG, but may also provide new insight into the diagnosis of Cold/Hot ZHENG in disease and offer better and more precise medication strategies for Cold/Hot ZHENG patients, working towards precision TCM in the treatment of diseases such as gastritis.
Author contributions
Conceptualization, S.L.; supervision, S.L.; investigation, B.Y.W. and P.C.; data curation, B.Y.W. and P.C.; formal analysis, B.Y.W.; writing - original draft preparation, B.Y.W. and P.C.; writing - review and editing, S.L. and P.Z.; visualization, B.Y.W. All authors have read and agreed to the published version of the manuscript.
Declaration of Competing Interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Acknowledgments and funding
This work was supported by the National Natural Science Foundation of China [81225025 and 62061160369].
References
1. Du, Y.; Bai, Y.; Xie, P.; Fang, J.; Wang, X.; Hou, X.; Tian, D.; Wang, C.; Liu, Y.; Sha, W.; et al. Chronic gastritis in China: a national multi-center survey.
BMC Gastroenterol 2014, 14, 21, doi:10.1186/1471-230X-14-21. 2. 3. Sipponen, P.; Maaroos, H.I. Chronic gastritis. Scand J Gastroenterol 2015, 50, 657-667, doi:10.3109/00365521.2015.1019918. Zhang, P.; Yang, M.; Zhang, Y.; Xiao, S.; Lai, X.; Tan, A.; Du, S.; Li, S. Dissecting the Single- Cell Transcriptome Network Underlying Gastric Premalignant Lesions and Early Gastric Cancer. Cell Rep 2019, 27, 1934-1947 e1935, doi:10.1016/j.celrep.2019.04.052. 4. Qin, F.; Liu, J.Y.; Yuan, J.H. Chaihu-Shugan-San, an oriental herbal preparation, for the treatment of chronic gastritis: a meta-analysis of randomized controlled trials. J Ethnopharmacol 2013, 146, 433-439, doi:10.1016/j.jep.2013.01.029. 5. Elhadidy, M.G.; El Nashar, E.M.; Alghamdi, M.A.; Samir, S.M. A novel gastroprotective effect of zeaxanthin against stress-induced gastritis in male rats targeting the expression of HIF- 1alpha, TFF-1 and MMP-9 through PI3K/Akt/JNK signaling pathway. Life Sci 2021, 273, 119297, doi:10.1016/j.lfs.2021.119297. 6. Tang, X.D.; Lu, B.; Zhou, L.Y.; Zhan, S.Y.; Li, Z.H.; Li, B.S.; Gao, R.; Wang, F.Y.; Wang, P.; Yang, J.Q.; et al. Clinical practice guideline of Chinese medicine for chronic gastritis. Chin J Integr Med 2012, 18, 56-71, doi:10.1007/s11655-012-0960-y. 7. 8. Li, R.; Ma, T.; Gu, J.; Liang, X.; Li, S. Imbalanced network biomarkers for traditional Chinese medicine Syndrome in gastritis patients. Sci Rep 2013, 3, 1543, doi:10.1038/srep01543. Li, S.; Zhang, B. Traditional Chinese medicine network pharmacology: theory, methodology and application. Chin J Nat Med 2013, 11, 110-120, doi:10.1016/S1875- 5364(13)60037-0. 9. Li, S. Network target: a starting point for traditional Chinese medicine network pharmacology. China Journal of Chinese Materia Medica 2011, 36, 2017-2020. 10. 张彦琼; 李梢. 网络药理学与中医药现代研究的若干进展. 中国药理学与毒理学杂志 2015, 29, 883-892. 11. Li, S.; Zhang, Z.Q.; Wu, L.J.; Zhang, X.G.; Li, Y.D.; Wang, Y.Y. Understanding ZHENG in traditional Chinese medicine in the context of neuro-endocrine-immune network. IET Syst Biol 2007, 1, 51-60, doi:10.1049/iet-syb:20060032. 12. Zibo Ouyang, S.L. HerbBioMap2.0 Database Platform Building & Mining. Tsinghua university, Tsinghua university, 2014. 13. Wu, Y.; Zhang, F.; Yang, K.; Fang, S.; Bu, D.; Li, H.; Sun, L.; Hu, H.; Gao, K.; Wang, W.; et al. SymMap: an integrative database of traditional Chinese medicine enhanced by symptom mapping. Nucleic Acids Res 2019, 47, D1110-D1117, doi:10.1093/nar/gky1021. 14. Zhao, S.; Li, S. Network-based relating pharmacological and genomic spaces for drug target identification. Plos One 2010, 5, e11764, doi:10.1371/journal.pone.0011764. 15. Liang, X.; Li, H.; Li, S. A novel network pharmacology approach to analyse traditional herbal formulae: the Liu-Wei-Di-Huang pill as a case study. Mol Biosyst 2014, 10, 1014-1022, doi:10.1039/c3mb70507b. 16. Kanehisa, M.; Furumichi, M.; Sato, Y.; Ishiguro-Watanabe, M.; Tanabe, M. KEGG: integrating viruses and cellular organisms. Nucleic Acids Res 2021, 49, D545-D551, doi:10.1093/nar/gkaa970. 17. Gene Ontology, C. The Gene Ontology resource: enriching a GOld mine. Nucleic Acids Res 2021, 49, D325-D334, doi:10.1093/nar/gkaa1113. 18. Subramanian, A.; Tamayo, P.; Mootha, V.K.; Mukherjee, S.; Ebert, B.L.; Gillette, M.A.; Paulovich, A.; Pomeroy, S.L.; Golub, T.R.; Lander, E.S.; et al. Gene set enrichment analysis: a knowledge-based approach for interpreting genome-wide expression profiles. Proc Natl Acad Sci U S A 2005, 102, 15545-15550, doi:10.1073/pnas.0506580102. 19. Huang, Y.; Li, S. 
Detection of characteristic sub pathway network for angiogenesis based on the comprehensive pathway network. BMC Bioinformatics 2010, 11 Suppl 1, S32, doi:10.1186/1471-2105-11-S1-S32. 20. Lin, X.M.; Hu, L.; Gu, J.; Wang, R.Y.; Li, L.; Tang, J.; Zhang, B.H.; Yan, X.Z.; Zhu, Y.J.; Hu, C.L.; et al. Choline Kinase alpha Mediates Interactions Between the Epidermal Growth Factor Receptor and Mechanistic Target of Rapamycin Complex 2 in Hepatocellular Carcinoma Cells to Promote Drug Resistance and Xenograft Tumor Progression. Gastroenterology 2017, 152, 1187-1202, doi:10.1053/j.gastro.2016.12.033. 21. Guo, Y.C.; Bao, C.; Ma, D.C.; Cao, Y.B.; Li, Y.D.; Xie, Z.; Li, S. Network-Based Combinatorial CRISPR-Cas9 Screens Identify Synergistic Modules in Human Cells. Acs Synth Biol 2019, 8, 482-+, doi:10.1021/acssynbio.8b00237. 22. Newman, A.M.; Steen, C.B.; Liu, C.L.; Gentles, A.J.; Chaudhuri, A.A.; Scherer, F.; Khodadoust, M.S.; Esfahani, M.S.; Luca, B.A.; Steiner, D.; et al. Determining cell type abundance and expression from bulk tissues with digital cytometry. Nat Biotechnol 2019, 37, 773-782, doi:10.1038/s41587-019-0114-2. 23. Geng, Y.; Wang, J.; Wang, R.; Wang, K.; Xu, Y.; Song, G.; Wu, C.; Yin, Y. Leptin and HER-2 are associated with gastric cancer progression and prognosis of patients. Biomed Pharmacother 2012, 66, 419-424, doi:10.1016/j.biopha.2012.03.002. 24. Qu, Y.; Wang, X.; Bai, S.; Niu, L.; Zhao, G.; Yao, Y.; Li, B.; Li, H. The effects of TNF- alpha/TNFR2 in regulatory T cells on the microenvironment and progression of gastric cancer. Int J Cancer 2022, 150, 1373-1391, doi:10.1002/ijc.33873. 25. Zhu, Q.; Zhang, X.; Zhang, L.; Li, W.; Wu, H.; Yuan, X.; Mao, F.; Wang, M.; Zhu, W.; Qian, H.; et al. The IL-6-STAT3 axis mediates a reciprocal crosstalk between cancer-derived mesenchymal stem cells and neutrophils to synergistically prompt gastric cancer progression. Cell Death Dis 2014, 5, e1295, doi:10.1038/cddis.2014.263. 26. Companioni, O.; Bonet, C.; Garcia, N.; Ramirez-Lazaro, M.J.; Lario, S.; Mendoza, J.; Adrados, M.M.; Poves, E.; Espinosa, L.; Pozo-Kreilinger, J.J.; et al. Genetic variation analysis in a follow-up study of gastric cancer precursor lesions confirms the association of MUC2 variants with the evolution of the lesions and identifies a significant association with NFKB1 and CD14. Int J Cancer 2018, 143, 2777-2786, doi:10.1002/ijc.31839. 27. Guo, Y.; Bao, C.; Ma, D.; Cao, Y.; Li, Y.; Xie, Z.; Li, S. Network-Based Combinatorial CRISPR- Cas9 Screens Identify Synergistic Modules in Human Cells. Acs Synth Biol 2019, 8, 482- 490, doi:10.1021/acssynbio.8b00237. 28. Guo, Y.; Nie, Q.; MacLean, A.L.; Li, Y.; Lei, J.; Li, S. Multiscale Modeling of Inflammation- Induced Tumorigenesis Reveals Competing Oncogenic and Oncoprotective Roles for Inflammation. Cancer Res 2017, 77, 6429-6441, doi:10.1158/0008-5472.CAN-17-1662. 29. Wang, L.F.; Liu, Y.S.; Yang, B.; Li, P.; Cheng, X.S.; Xiao, C.X.; Liu, J.J.; Li, S.; Ren, J.L.; Guleng, B. The extracellular matrix protein mindin attenuates colon cancer progression by blocking angiogenesis via Egr-1-mediated regulation. Oncogene 2018, 37, 601-615, doi:10.1038/onc.2017.359. 30. Zhao, X.; Fu, J.; Xu, A.; Yu, L.; Zhu, J.; Dai, R.; Su, B.; Luo, T.; Li, N.; Qin, W.; et al. Gankyrin drives malignant transformation of chronic liver damage-mediated fibrosis via the Rac1/JNK pathway. Cell Death Dis 2015, 6, e1751, doi:10.1038/cddis.2015.120. 31. Lei, Y.; Li, S.; Liu, Z.; Wan, F.; Tian, T.; Li, S.; Zhao, D.; Zeng, J. 
A deep-learning framework for multi-level peptide-protein interaction prediction. Nat Commun 2021, 12, 5465, doi:10.1038/s41467-021-25772-4. 32. Park, H.K.; Ahima, R.S. Physiology of leptin: energy homeostasis, neuroendocrine function and metabolism. Metabolism 2015, 64, 24-34, doi:10.1016/j.metabol.2014.08.004. 33. Lee, B.; Kim, J.; An, T.; Kim, S.; Patel, E.M.; Raber, J.; Lee, S.K.; Lee, S.; Lee, J.W. Dlx1/2 and Otp coordinate the production of hypothalamic GHRH- and AgRP-neurons. Nat Commun 2018, 9, 2026, doi:10.1038/s41467-018-04377-4. 34. Atrens, D.M.; Menendez, J.A. Somatostatin and the paraventricular hypothalamus: modulation of energy balance. Brain Res 1993, 630, 238-244, doi:10.1016/0006- 8993(93)90662-7. 35. Oh, C.M.; Namkung, J.; Go, Y.; Shong, K.E.; Kim, K.; Kim, H.; Park, B.Y.; Lee, H.W.; Jeon, Y.H.; Song, J.; et al. Regulation of systemic energy homeostasis by serotonin in adipose tissues. Nat Commun 2015, 6, 6794, doi:10.1038/ncomms7794. 36. Richard, D. Cognitive and autonomic determinants of energy homeostasis in obesity. Nat Rev Endocrinol 2015, 11, 489-501, doi:10.1038/nrendo.2015.103. 37. Adamovsky, O.; Buerger, A.N.; Vespalcova, H.; Sohag, S.R.; Hanlon, A.T.; Ginn, P.E.; Craft, S.L.; Smatana, S.; Budinska, E.; Persico, M.; et al. Evaluation of Microbiome-Host Relationships in the Zebrafish Gastrointestinal System Reveals Adaptive Immunity Is a Target of Bis(2-ethylhexyl) Phthalate (DEHP) Exposure. Environ Sci Technol 2020, 54, 5719-5728, doi:10.1021/acs.est.0c00628. 38. Tian, W.; Zhang, N.; Jin, R.; Feng, Y.; Wang, S.; Gao, S.; Gao, R.; Wu, G.; Tian, D.; Tan, W.; et al. Immune suppression in the early stage of COVID-19 disease. Nat Commun 2020, 11, 5859, doi:10.1038/s41467-020-19706-9. 39. Kim, S.; Chen, J.; Cheng, T.; Gindulyte, A.; He, J.; He, S.; Li, Q.; Shoemaker, B.A.; Thiessen, P.A.; Yu, B.; et al. PubChem in 2021: new data content and improved web interfaces. Nucleic Acids Res 2021, 49, D1388-D1395, doi:10.1093/nar/gkaa971. 40. Zhang, S.; Lai, X.; Wang, X.; Liu, G.; Wang, Z.; Cao, L.; Zhang, X.; Xiao, W.; Li, S. Deciphering the Pharmacological Mechanisms of Guizhi-Fuling Capsule on Primary Dysmenorrhea Through Network Pharmacology. Front Pharmacol 2021, 12, 613104, doi:10.3389/fphar.2021.613104. 41. Li, S.; Lu, A.P.; Wang, Y.Y.; Li, Y.D. Suppressive effects of a Chinese herbal medicine qing- luo-yin extract on the angiogenesis of collagen-induced arthritis in rats. Am J Chin Med 2003, 31, 713-720, doi:10.1142/S0192415X03001430. 42. Zhou, W.; Lai, X.; Wang, X.; Yao, X.; Wang, W.; Li, S. Network pharmacology to explore the anti-inflammatory mechanism of Xuebijing in the treatment of sepsis. Phytomedicine 2021, 85, 153543, doi:10.1016/j.phymed.2021.153543. 43. Zuo, J.; Wang, X.; Liu, Y.; Ye, J.; Liu, Q.; Li, Y.; Li, S. Integrating Network Pharmacology and Metabolomics Study on Anti-rheumatic Mechanisms and Antagonistic Effects Against Methotrexate-Induced Toxicity of Qing-Luo-Yin. Front Pharmacol 2018, 9, 1472, doi:10.3389/fphar.2018.01472. 44. Ritchie, M.E.; Phipson, B.; Wu, D.; Hu, Y.; Law, C.W.; Shi, W.; Smyth, G.K. limma powers differential expression analyses for RNA-sequencing and microarray studies. Nucleic Acids Res 2015, 43, e47, doi:10.1093/nar/gkv007. 45. Rohart, F.; Gautier, B.; Singh, A.; Le Cao, K.A. mixOmics: An R package for 'omics feature selection and multiple data integration. PLoS Comput Biol 2017, 13, e1005752, doi:10.1371/journal.pcbi.1005752. 46. Yu, G.; Wang, L.G.; Han, Y.; He, Q.Y. 
clusterProfiler: an R package for comparing biological themes among gene clusters. Omics 2012, 16, 284-287, doi:10.1089/omi.2011.0118. 47. Szklarczyk, D.; Gable, A.L.; Nastou, K.C.; Lyon, D.; Kirsch, R.; Pyysalo, S.; Doncheva, N.T.; Legeay, M.; Fang, T.; Bork, P.; et al. The STRING database in 2021: customizable protein- protein networks, and functional characterization of user-uploaded gene/measurement sets. Nucleic Acids Res 2021, 49, D605-D612, doi:10.1093/nar/gkaa1074. 48. 吕爱平; 李梢; 王永炎. 从主观症状的客观规律探索中医证候分类的科学基础. 中医杂志 2005, 01, 4-6. 49. Su, S.B.; Jia, W.; Lu, A.P.; Li, S. Evidence-Based ZHENG: A Traditional Chinese Medicine Syndrome 2013. Evid-Based Compl Alt 2014, 2014, doi:Artn 484201 10.1155/2014/484201. 50. Li, S.; Wang, R.Q.; Zhang, Y.L.; Zhang, X.G.; Layon, A.J.; Li, Y.D.; Chen, M.Z. Symptom combinations associated with outcome and therapeutic effects in a cohort of cases with SARS. Am J Chinese Med 2006, 34, 937-947, doi:Doi 10.1142/S0192415x06004417. 51. Wang, B.; Zhou, W.; Zhang, H.; Wang, W.; Zhang, B.; Li, S. Exploring the effect of Weifuchun capsule on the toll-like receptor pathway mediated HES6 and immune regulation against chronic atrophic gastritis. J Ethnopharmacol 2022, 303, 115930, doi:10.1016/j.jep.2022.115930. 52. Liu, Y.; Jin, Z.; Qin, X.; Zheng, Q. Urinary metabolomics research for Huangqi Jianzhong Tang against chronic atrophic gastritis rats based on (1) H NMR and UPLC-Q/TOF MS. J Pharm Pharmacol 2020, 72, 748-760, doi:10.1111/jphp.13242. 53. Wen, J.; Wu, S.; Ma, X.; Zhao, Y. Zuojin Pill attenuates Helicobacter pylori-induced chronic atrophic gastritis in rats and improves gastric epithelial cells function in GES-1 cells. J Ethnopharmacol 2022, 285, 114855, doi:10.1016/j.jep.2021.114855. 54. Zhou, W.; Zhang, H.; Wang, X.; Kang, J.; Guo, W.; Zhou, L.; Liu, H.; Wang, M.; Jia, R.; Du, X.; et al. Network pharmacology to unveil the mechanism of Moluodan in the treatment of chronic atrophic gastritis. Phytomedicine 2022, 95, 153837, doi:10.1016/j.phymed.2021.153837. 55. Siregar, G.A.; Halim, S.; Sitepu, V.R. Serum TNF-a, IL-8, VEGF levels in Helicobacter pylori infection and their association with degree of gastritis. Acta Med Indones 2015, 47, 120- 126. 56. Yin, J.; Yi, J.; Yang, C.; Xu, B.; Lin, J.; Hu, H.; Wu, X.; Shi, H.; Fei, X. Weiqi Decoction Attenuated Chronic Atrophic Gastritis with Precancerous Lesion through Regulating Microcirculation Disturbance and HIF-1alpha Signaling Pathway. Evid Based Complement Alternat Med 2019, 2019, 2651037, doi:10.1155/2019/2651037. 57. Abella, V.; Scotece, M.; Conde, J.; Pino, J.; Gonzalez-Gay, M.A.; Gomez-Reino, J.J.; Mera, A.; Lago, F.; Gomez, R.; Gualillo, O. Leptin in the interplay of inflammation, metabolism and immune system disorders. Nat Rev Rheumatol 2017, 13, 100-109, doi:10.1038/nrrheum.2016.209. 58. Jun, D.W.; Lee, O.Y.; Lee, Y.Y.; Choi, H.S.; Kim, T.H.; Yoon, B.C. Correlation between gastrointestinal symptoms and gastric leptin and ghrelin expression in patients with gastritis. Dig Dis Sci 2007, 52, 2866-2872, doi:10.1007/s10620-006-9651-x. 59. Duan, Q.; Zhang, H.; Zheng, J.; Zhang, L. Turning Cold into Hot: Firing up the Tumor Microenvironment. Trends Cancer 2020, 6, 605-618, doi:10.1016/j.trecan.2020.02.022. 60. Liu, Y.T.; Sun, Z.J. Turning cold tumors into hot tumors by improving T-cell infiltration. Theranostics 2021, 11, 5365-5386, doi:10.7150/thno.58390. 61. Wang, M.; Wang, S.; Desai, J.; Trapani, J.A.; Neeson, P.J. Therapeutic strategies to remodel immunologically cold tumors. 
Clin Transl Immunology 2020, 9, e1226, doi:10.1002/cti2.1226. 62. DeNardo, D.G.; Ruffell, B. Macrophages as regulators of tumour immunity and immunotherapy. Nat Rev Immunol 2019, 19, 369-382, doi:10.1038/s41577-019-0127-6. 63. Ni, J.J.; Zhang, Z.Z.; Ge, M.J.; Chen, J.Y.; Zhuo, W. Immune-based combination therapy to convert immunologically cold tumors into hot tumors: an update and new insights. Acta Pharmacol Sin 2022, doi:10.1038/s41401-022-00953-z. 64. Aronson, S.J.; Rehm, H.L. Building the foundation for genomics in precision medicine. Nature 2015, 526, 336-342, doi:10.1038/nature15816. 65. Wang, W.J.; Zhang, T. Integration of traditional Chinese medicine and Western medicine in the era of precision medicine. J Integr Med 2017, 15, 1-7, doi:10.1016/S2095- 4964(17)60314-5.
synthetic_cpt
7
Generate_Annotate_and_Learn_NLP_with_Synthetic_Text.pdf
arXiv:2405.03423v2 [math.RA] 4 Jul 2024
Generalized Baer and Generalized Quasi-Baer Rings of Skew Generalized Power Series
R. M. Salem^a, R. E. Abdel-Khalek^a, M. M. Hamam^b
^aDepartment of Mathematics, Faculty of Science, Al-Azhar Univ., Nasr City 11884, Cairo, Egypt.
^bDepartment of Mathematics, Faculty of Science, Assiut Univ., Assiut 71515, Egypt.
Abstract
Let R be a ring with identity, (S, ≤) an ordered monoid, ω : S → End(R) a monoid homomorphism, and A = R[[S, ω]] the ring of skew generalized power series. The concepts of generalized Baer and generalized quasi-Baer rings are generalizations of Baer and quasi-Baer rings, respectively. A ring R is called generalized right Baer (generalized right quasi-Baer) if for any non-empty subset S (right ideal I) of R, the right annihilator of S^n (I^n) is generated by an idempotent for some positive integer n. Left cases may be defined analogously. A ring R is called generalized Baer (generalized quasi-Baer) if it is both a generalized right and left Baer (generalized right and left quasi-Baer) ring. In this paper, we examine the behavior of a skew generalized power series ring over a generalized right Baer (generalized right quasi-Baer) ring and prove that, under specific conditions, the ring A is generalized right Baer (generalized right quasi-Baer) if and only if R is a generalized right Baer (generalized right quasi-Baer) ring.
Mathematics Subject Classification (2020): 16D25, 06F05, 16S60, 16U99, 16W60.
Keywords: Baer rings, quasi-Baer rings, generalized Baer rings, generalized quasi-Baer rings, generalized power series ring, skew generalized power series ring.
Email addresses: [email protected] (R. M. Salem), [email protected] (R. E. Abdel-Khalek), [email protected] (M. M. Hamam)
1. Introduction
Throughout this article, R denotes an associative ring with identity, and r_R(S) = {a ∈ R | sa = 0 for all s ∈ S} is the right annihilator of a nonempty subset S in R. In [7], Kaplansky introduced Baer rings as rings in which the right annihilator of every nonempty subset of R is generated by an idempotent. Clark defined quasi-Baer rings in [3] as rings in which the right annihilator of every right ideal of R is generated by an idempotent. Baer rings are clearly quasi-Baer rings. In a reduced ring R, R is Baer if and only if R is quasi-Baer. The definitions of Baer and quasi-Baer rings are left-right symmetric by [7, Theorem 3] and [3, Lemma 1].
According to Moussavi et al. [14], a ring R is called generalized right quasi-Baer if for any right ideal I of R, the right annihilator of I^n is generated by an idempotent for some positive integer n, depending on I. The class of generalized right quasi-Baer rings includes the right quasi-Baer rings and is closed under direct products and also under some kinds of upper triangular matrix rings. Example (4.4) in [14] is an example of a generalized right quasi-Baer ring which is not generalized left quasi-Baer, and hence the definition of generalized quasi-Baer rings is not left-right symmetric. In [15], K. Paykan and A. Moussavi defined generalized right Baer rings as rings in which the right annihilator of S^n is generated by an idempotent for some positive integer n, where S is a non-empty subset of R and S^n is the set of elements a_1 a_2 ⋯ a_n such that a_i ∈ S for 1 ≤ i ≤ n. A ring is called generalized Baer if it is both a generalized right and left Baer ring. Baer rings are clearly generalized right (left) Baer.
Also, the class of generalized right (left) Baer rings is obviously included in the class of generalized right (left) quasi-Baer rings. Example (2.2) in [15] shows that there are various classes of generalized quasi-Baer rings which are not generalized Baer. Also, there are rich classes of generalized right Baer rings which are not Baer (see [15, Example 2.3]). In [5] we examined the behavior of a skew generalized power series ring over semi-Baer (semi-quasi Baer) rings. In this paper, we study the relation between generalized Baer (generalized quasi-Baer) rings and their skew generalized power series ring extensions, and determine the conditions under which a ring of skew generalized power series R[[S, ω]] is generalized Baer (generalized quasi-Baer) whenever R is generalized Baer (generalized quasi-Baer) and vice versa.
2. Skew Generalized Power Series Rings
The construction of generalized power series rings was considered by Higman in [6]. Paulo Ribenboim studied the rings of generalized power series extensively in a series of papers (see [17-21]). In [13], Mazurek and Ziembowski generalized this construction by introducing the concept of the skew generalized power series rings.
An ordered monoid is a pair (S, ≤) consisting of a monoid S and a compatible order relation ≤ such that if u ≤ v, then ut ≤ vt and tu ≤ tv for each t ∈ S. (S, ≤) is called a strictly ordered monoid if whenever u, v ∈ S are such that u < v (i.e., u ≤ v and u ≠ v), then ut < vt and tu < tv for all t ∈ S. Recall that an ordered set (S, ≤) is called artinian if every strictly decreasing sequence of elements of S is finite, and (S, ≤) is called narrow if every subset of pairwise order-incomparable elements of S is finite. Thus (S, ≤) is artinian and narrow if and only if every nonempty subset of S has at least one but only a finite number of minimal elements.
Let R be a ring, (S, ≤) a strictly ordered monoid, ω : S → End(R) a monoid homomorphism, where ω_s denotes the image of s under ω for each s ∈ S, that is ω_s = ω(s), and A the set of all maps f : S → R such that supp(f) = {s ∈ S : f(s) ≠ 0} is an artinian and narrow subset of S. Under pointwise addition, A is an abelian subgroup of the additive group of all mappings f : S → R. For every s ∈ S and f, g ∈ A the set X_s(f, g) = {(u, v) ∈ S × S : uv = s, f(u) ≠ 0, g(v) ≠ 0} is finite by [18, 4.1]. Define the multiplication for each f, g ∈ A by
    f g(s) = Σ_{(u,v) ∈ X_s(f,g)} f(u) ω_u(g(v)),
where, by convention, a sum over the empty set is 0. With pointwise addition and multiplication as defined above, A becomes a ring called the ring of skew generalized power series, whose elements have coefficients in R and exponents in S. For each r ∈ R and s ∈ S one can associate the maps c_r, e_s ∈ A defined by
    c_r(x) = r if x = 1_S and 0 otherwise,    e_s(x) = 1_R if x = s and 0 otherwise.
It is clear that r ↦ c_r is a ring embedding of R into A, that s ↦ e_s is a monoid embedding of S into the multiplicative monoid of A, and that e_s c_r = c_{ω_s(r)} e_s. Moreover, the identity element of A is the map e : S → R defined by e(1_S) = 1_R and e(s) = 0 for each s ∈ S \ {1_S}. Let R be a ring and σ an endomorphism of R.
The construction of the skew generalized power series rings generalizes many classical ring constructions such as the skew polynomial rings R[x, σ] if S = N ∪ {0} and ≤ is the trivial order, skew power series rings R[[x, σ]] if S = N ∪ {0} and ≤ is the natural linear order, skew Laurent polynomial rings R[x, x−1; σ] if S = Z and ≤ is the trivial order 3 where σ is an automorphism of R, skew Laurent power series rings R[[x, x−1; σ]] if S = Z and ≤ is the natural linear order where σ is an automorphism of R. Moreover, the ring of polynomials R[x], the ring of power series R[[x]], the ring of Laurent polynomials R[x, x−1], and the ring of Laurent power series R[[x, x−1]] are special cases of the skew generalized power series rings, if we consider σ to be the identity map of R. 3. Main Results An ordered monoid (S , ≤) is called positively ordered if 1 is the minimal ele- ment of S . Definition 3.1 ([1]). An endomorphism σ of a ring R is called compatible if for all a, b ∈ R, ab = 0 if and only if aσ(b) = 0. Definition 3.2 ([9]). An endomorphism σ of a ring R is called rigid if for ev- ery a ∈ R, aσ(a) = 0 if and only if a = 0. Let R be a ring, (S , ≤) a strictly ordered monoid, and ω : S → End(R) a monoid homomorphism. As in [12], a ring R is S -compatible (S -rigid) if ωs is compatible (rigid) for every s ∈ S . Definition 3.3 ([11]). An ordered monoid (S , ≤) is said to be quasitotally or- dered (and ≤ is called a quasitotal order on S ) if ≤ can be refined to an order (cid:22) with respect to which S is a strictly totally ordered monoid. Recall that a ring R is said to be (S , ω)-Armendariz if whenever f g = 0 for f, g ∈ R[[S , ω]], then f (s).ωs(g(t)) = 0 for all s, t ∈ S (see [12, Definition 2.1]). Proposition 3.4 ([12, Proposition 4.10]). Let R be a ring, (S , ≤) a strictly or- dered monoid, and ω : S → End(R) a monoid homomorphism. Assume that R is (S , ω)-Armendariz. If f is an idempotent of R [[S , ω]], then f (1) is an idempotent of R and f = c f (1). Proposition 3.5. Let R be an (S , ω)-Armendariz ring, (S , ≤) a quasitotally ordered monoid, and ω : S → End(R) a monoid homomorphism. Set A = R [[S , ω]] the ring of skew generalized power series. (1) If A is a generalized right Baer ring, then R is a generalized right Baer ring. 4 (2) If R is an S -compatible ring and A is a generalized right quasi-Baer ring, then R is a generalized right quasi-Baer ring. (1) Let X be a non-empty subset of R. Then B = {cx : x ∈ X} is a Proof. non-empty subset of A. Since A is a generalized right Baer, there exists f ∈ A such that rA(Bn) = f A with f 2 = f . Proposition 3.4 implies that f (1) is an idem- potent element of R. We want to prove that rR(Xn) = f (1)R. Since f ∈ rA(Bn), we have (cx1cx2. . . cxn) f = 0 for all cx1 cx2. . . cxn ∈ Bn and x1, x2, . . . , xn ∈ X. Thus 0 = (cx1 cx2. . . cxn) f (1) = cx1 (1)ω1(cx2(1)). . . ω1(cxn (1))ω1( f (1)) = x1x2. . . xn f (1) for all x1x2. . . xn ∈ Xn. Hence f (1) ∈ rR(Xn), which implies that f (1)R ⊆ rR(Xn). On the other hand, if a ∈ rR(Xn), then (x1x2. . . xn)a = 0 for all xi ∈ X with 1 ≤ i ≤ n. Thus (cx1cx2. . . cxn)ca(1) = cx1 (1)ω1(cx2 (1)). . . ω1(cxn(1))ω1(ca(1)) = (x1x2. . . xn)a = 0. Which implies that (cx1 cx2. . . cxn)ca = 0 for all cxi ∈ B. There- fore, ca ∈ rA(Bn) = f A and ca = f g for some g ∈ A. Now, a = ca(1) = ( f g)(1) = f (1)ω1(g(1)) ∈ f (1)R. That is rR(Xn) ⊆ f (1)R, which follows that rR(Xn) = f (1)R. Hence R is a generalized right Baer ring. (2) Let I be a right ideal of R. 
Then I [[S , ω]] = { f ∈ A| f (s) ∈ I f or any s ∈ S } is a right ideal of A. Since A is a generalized right quasi-Baer, there exists f ∈ A such that rA(In[[S , ω]]) = f A with f 2 = f . Proposition 3.4 implies that f (1) is an idempotent element of R. We want to prove that rR(In) = f (1)R. Since f ∈ rA(In[[S , ω]]), we have (g1g2. . . gn) f = 0 for all g1, g2, . . . , gn ∈ I[[S , ω]]. Since cik ∈ I[[S , ω]] for all ik ∈ I with 1 ≤ k ≤ n, we have (ci1ci2. . . cin) f = 0. Con- sequently, ((ci1ci2. . . cin) f )(1) = ci1(1)ω1(ci2(1)). . . ω1(cin(1))ω1( f (1)) = 0 which implies that i1i2. . . in f (1) = 0 for all i1, i2, . . . , in ∈ I. Hence f (1) ∈ rR(In), which implies that f (1)R ⊆ rR(In). On the other hand, if a ∈ rR(In), then (i1i2. . . in)a = 0 for all i1, i2, . . . , in ∈ I. Since gk(sk) ∈ I for all gk ∈ I[[S , ω]] and sk ∈ S with 1 ≤ k ≤ n, we have g1(s1)g2(s2). . . gn(sn)a = 0. Since R is S -compatible, we have g1(s1)ωs1(g2(s2))ωs1 s2(g3(s3)). . . ωs1 s2. . . sn−1(gn(sn))ωs1 s2. . . sn(ca(1)) = 0. Which implies that (g1g2. . . gnca)(s) = P(s1,s2,. . . ,sn,1)∈Xs(g1,g2,. . . ,gn,ca) g1(s1)ωs1(g2(s2))ωs1 s2(g3(s3)). . . ωs1s2. . . sn(ca(1)) = 0. Thus ca ∈ rA(In[[S , ω]]) = f A and ca = f g for some g ∈ A. Now, a = ca(1) = ( f g)(1) = f (1)ω1(g(1)) ∈ f (1)R. That is rR(In) ⊆ f (1)R, which follows that rR(In) = f (1)R. Hence R is a generalized right quasi-Baer ring. Proposition 3.6. Let R be an S -compatible (S , ω)-Armendariz ring, (S , ≤) a qu- asitotally ordered monoid and ω : S → End(R) a monoid homomorphism. Set A = R [[S , ω]] the ring of skew generalized power series. 5 (1) If R is a generalized right Baer ring, then A is a generalized right Baer ring. (2) If R is a generalized right quasi-Baer ring, then A is a generalized right quasi- Baer ring. Proof. (1) Let B be a non-empty subset of A. Then U = { f (s) : f ∈ B, s ∈ S } is a non-empty subset of R. Since R is a generalized right Baer, there exists b ∈ R = cb. We want to prove such that rR(Un) = bR with b2 = b which implies that c2 b that rA(Bn) = cbA. Since b ∈ rR(Un), it follows that f1(s1) f2(s2). . . fn(sn)b = 0 for all fi(si) ∈ U with 1 ≤ i ≤ n. Thus f1(s1) f2(s2). . . fn(sn)cb(1) = 0. Since R is S -compatible, then f1(s1)ωs1( f2(s2)). . . ωsn−1( fn(sn))ωsn(cb(1)) = 0. Thus ( f1 f2. . . fncb)(s) = P(s1,s2,. . . ,sn,1)∈Xs( f1, f2,. . . , fn,cb) f1(s1)ωs1( f2(s2))ωs1 s2( f3(s3)). . . ωs1 s2. . . sn(cb(1)) = 0. It follows that cb ∈ rA(Bn) which implies that cbA ⊆ rA(Bn). Now, let f ∈ rA(Bn). Then f1 f2. . . fn f = 0 for all f1 f2. . . fn ∈ Bn. Since R is an (S , ω)-Armendariz ring, we get f1(u1)ωu1( f2(u2)). . . ωun−1( fn(un))ωun( f (v)) = 0 for all u1, u2, . . . , un, v ∈ S . Moreover, Since R is S -compatible, we get f1(u1) f2(u2). . . fn(un) f (v) = 0. Thus f (v) ∈ rR(Un) = bR for all v ∈ S . There- fore, for all v ∈ S there exists r ∈ R such that f (v) = br = (cbcrev)(v). Thus f = cbcrev , which implies that f ∈ cbA. That is rA(Bn) ⊆ cbA, which follows that rA(Bn) = cbA. Hence A is a generalized right Baer ring. (2) Let J be a right ideal of A. For every s ∈ S , set Js = { f (s)| f ∈ J, s ∈ S }, and J∗ = ∪(s∈S )Js. Let I be the right ideal generated by J∗. Since R is a generalized right quasi-Baer ring, there exists b ∈ R such that rR(In) = bR with b2 = b. Therefore, cb is an idempotent element of A. We want to prove that rA(Jn) = cbA. Since b ∈ rR(In), it follows that i1i2i3. . . inb = 0 for all i j ∈ I with 1 ≤ j ≤ n. Since gi(si) ∈ I for all gi ∈ J and si ∈ S , we have g1(s1)g2(s2). . . gn(sn)b = 0. Thus g1(s1)g2(s2). . 
. gn(sn)cb(1) = 0. Since R is S - compatibe, g1(s1)ωs1(g2(s2))ωs1 s2(g3(s3)). . . ωs1 s2. . . sn−1(gn(sn))ωs1 s2. . . sn(cb(1)) = 0. Thus (g1g2. . . gncb)(s) = P(s1,s2,. . . ,sn,1)∈Xs(g1,g2,. . . ,gn,cb) g1(s1)ωs1(g2(s2))ωs1 s2(g3(s3)). . . ωs1s2. . . sn(cb(1)) = 0. It follows that cb ∈ rA(Jn) which implies that cbA ⊆ rA(Jn). Now, let g ∈ rA(Jn). Then g1g2. . . gng = 0 for all g1, g2, . . . , gn ∈ J. Since R is an (S , ω)-Armendariz ring, we get g1(u1)ωu1(g2(u2)). . . ωun−1(gn(un))ωun(g(v)) = 0 for all u1, u2, . . . , un, v ∈ S . Moreover, Since R is S -compatible, we get g1(u1)g2(u2). . . gn(un)g(v) = 0. Thus g(v) ∈ rR(In) = bR for all v ∈ S . Therefore, for all v ∈ S there exists r ∈ R such that g(v) = br = (cbcrev)(v). Thus g = cbcrev , which implies that g ∈ cbA. That is rA(Jn) ⊆ cbA, which follows that rA(Jn) = cbA. 6 Hence A is a generalized right quasi-Baer ring. By combining Proposition 3.5 and Proposition 3.6, we obtain the following Theorem. Theorem 3.7. Let R be an S -compatible (S , ω)-Armendariz ring, (S , ≤) a qua- sitotally ordered monoid and ω : S → End(R) a monoid homomorphism. Set A = R [[S , ω]] the ring of skew generalized power series. Then A is a generalized right Baer (quasi-Baer) ring if and only if R is a generalized right Baer (quasi- Baer) ring. Liu Zhongkui called a ring R an S -Armendariz ring if whenever f, g ∈ R[[S ]] (the ring of generalized power series) satisfy f g = 0, then f (u)g(v) = 0 for each u, v ∈ S (see [10]). Corollary 3.8. Let R be an S -Armendariz ring and (S , ≤) a quasitotally ordered monoid. Set A = R[[S ]] the ring of generalized power series. Then A is a gen- eralized right Baer (quasi-Baer) ring if and only if R is a generalized right Baer (quasi-Baer) ring. From [8], a ring R is called a power-serieswise Armendariz ring if whenever j=0 bix j satisfy f (x)g(x) = 0 we have i=0 aixi and g(x) = P∞ power series f (x) = P∞ aib j = 0 for every i and j. Corollary 3.9. Let R be a power-serieswise Armendariz ring. Then R[[x]] is a generalized right quasi-Baer ring if and only if R is a generalized right quasi-Baer ring. Corollary 3.10 ([15, Theorem 3.20 and Theorem 3.21]). Let R be a power- serieswise Armendariz ring. Then R[[x]] is a generalized right Baer ring if and only if R is a generalized right Baer ring. i=0 aixi , g(x) = Pn Rege and Chhawchharia in [16] introduced the notion of an Armendariz ring. They defined a ring R to be an Armendariz ring if whenever polynomials f (x) = j=0 b jx j ∈ R[x] satisfy f (x)g(x) = 0, then aib j = 0 for every Pm i and j. (The converse is always true.) The name “Armendariz ring” was cho- sen because Armendariz [2, Lemma 1] had noted that a reduced ring satisfies this condition. Note that Power-serieswise Armendariz rings are Armendariz, how- 7 ever the converse need not be true by example (2.1) in [8]. Corollary 3.11 ([4, Proposition 1 and Proposition 2]). Let R be an Armendariz ring. Then R[x] is a generalized right quasi-Baer ring if and only if R is a gener- alized right quasi-Baer ring. Corollary 3.12 ([15, Theorem 3.14 and Theorem 3.15]). Let R be an Armendariz ring. Then R[x] is a generalized right Baer ring if and only if R is a generalized right Baer ring. References [1] S. Annin, Associated primes over skew polynomial rings, Communications in Algebra, 30 (2002), pp. 2511–2528. [2] E. P. Armendariz, A note on extensions of baer and pp-rings, Journal of the Australian Mathematical Society, 18 (1974), pp. 470–473. [3] W. E. 
Clark, Twisted matrix units semigroup algebras, Duke mathematical journal, 34 (1967), pp. 417–423. [4] S. Ghalanzardekh, H. Javadi, and M. Khoramdel, Polynomial extensions of generalized quasi-baer rings, Ukrainian Mathematical Journal, (2010). [5] M. Hamam, R. E.-S. Abdel-Khaleq, and R. M. Salem, Semi-baer and semi- quasi baer properties of skew generalized power series rings, Assiut Uni- versity Journal of Multidisciplinary Scientific Research, 53 (2024), pp. 255– 266. [6] G. Higman, Ordering by divisibility in abstract algebras, Proceedings of the London Mathematical Society, 3 (1952), pp. 326–336. [7] I. Kaplansky, Rings of operators (mimeographed), University of Chicago, (1955). [8] N. K. Kim, K. H. Lee, and Y. Lee, Power series rings satisfying a zero divisor property, Communications in Algebra®, 34 (2006), pp. 2205–2218. 8 [9] J. Krempa, Some examples of reduced rings, in Algebra Colloq, vol. 3, 1996, pp. 289–300. [10] Z. Liu, Special properties of rings of generalized power series, Communica- tions in Algebra, 32 (2004), pp. 3215–3226. [11] G. Marks, R. Mazurek, and M. Ziembowski, A new class of unique prod- uct monoids with applications to ring theory, in Semigroup Forum, vol. 78, Springer, 2009, pp. 210–225. [12] , A unified approach to various generalizations of armendariz rings, Bulletin of the Australian Mathematical Society, 81 (2010), pp. 361–397. [13] R. Mazurek and M. Ziembowski, On von neumann regular rings of skew generalized power series, Communications in Algebra, 36 (2008), pp. 1855– 1868. [14] A. Moussavi, H. Haj Seyyed Javadi, and E. Hashemi, Generalized quasi-baer rings, Communications in Algebra®, 33 (2005), pp. 2115–2129. [15] K. Paykan and A. Moussavi, A generalization of baer rings, International Journal of Pure and Applied Mathematics, 99 (2015), pp. 257–275. [16] M. B. Rege and S. Chhawchharia, Armendariz rings, Proceedings of the Japan Academy, Ser. A, Mathematical Sciences, 73 (1997), pp. 14–17. [17] P. Ribenboim, Rings of generalized power series: Nilpotent elements, in Abhandlungen aus dem Mathematischen Seminar der Universit¨at Hamburg, vol. 61, Springer, 1991, pp. 15–33. [18] , Noetherian rings of generalized power series, Journal of pure and ap- plied algebra, 79 (1992), pp. 293–312. [19] , Rings of generalized power series, Journal of algebra, 168 (1994), pp. 71–89. [20] , Special properties of generalized power series, Journal of algebra, 173 (1995), pp. 566–586. [21] , Semisimple rings and von neumann regular rings of generalized power series, Journal of Algebra, 198 (1997), pp. 327–338. 9
synthetic_cpt
1
Uncertainty-Guided_Optimization_on_Large_Language_Model_Search_Trees.pdf
The Creation of Puffin, the Automatic Uncertainty Compiler

Nicholas Gray a,b,∗, Marco de Angelis a, Scott Ferson a

a Institute for Risk and Uncertainty, University of Liverpool, Liverpool, United Kingdom, L69 7ZX
[email protected]
∗ Corresponding author

Abstract

An uncertainty compiler is a tool that automatically translates original computer source code lacking explicit uncertainty analysis into code containing appropriate uncertainty representations and uncertainty propagation algorithms. We have developed a prototype uncertainty compiler along with an associated object-oriented uncertainty language in the form of a stand-alone Python library. It handles the specifications of input uncertainties and inserts calls to intrusive uncertainty quantification algorithms in the library. The uncertainty compiler can apply intrusive uncertainty propagation methods to codes or parts of codes and therefore more comprehensively and flexibly address both epistemic and aleatory uncertainties.

Keywords: Uncertainty Analysis; Uncertainty Compiler; Probability Bounds Analysis

1. Introduction

Modern science and engineering is all about numerical calculation. With the inexorable growth of computer power, more of these calculations are being undertaken with ever more complex computer simulations. These developments mean that new computation-intensive technologies are being explored, such as digital twins (see [1, Sec. 2.2.3.3] or [2]). Scientists and engineers need to make calculations even when there is uncertainty about the quantities involved, yet the tools they are commonly using do not allow this to be done intrusively. As a result many analysts work with computer codes that do not take full account of uncertainties.

Within the numerical calculations essential to engineering, there are two types of uncertainty: aleatory and epistemic. Aleatory uncertainty arises from the natural variability in changing environments and material properties, errors in manufacturing processes or inconsistencies in the realisations of systems. Aleatory uncertainty cannot be reduced by empirical effort. Epistemic uncertainty is caused by measurement imperfections or lack of perfect knowledge about a system. This could be due to not knowing the full specification of a system in the early phases of engineering design or ignorance about the expected manufacturing variations or deployment conditions. Imperfect scientific understanding of the underlying physics or biology involved causes uncertainty in predictions about the future performance of a system even after the design specifications have been decided.

If uncertainties are small they can often be neglected or swept away by looking at the worst-case scenarios. However, in situations where the uncertainty is large, this approach is suboptimal or impossible, especially if it would impact a decision. Instead, a strategy of comprehensively accounting for the two kinds of uncertainty is needed that can propagate imprecise and variable numerical information through calculations. Because analysts are typically unwilling to rewrite their codes, various simple strategies have been used to remedy the problem, such as elaborate sensitivity studies or wrapping the program in a Monte Carlo loop. These approaches treat the program like a black box because users consider it uneditable.
However, whenever it is possible to look inside the source code, it is better characterised as a crystal box because the operations involved are clear but fixed and unchangeable in the mind of the current user.

2. The Problem with Monte Carlo

The most common approach to deal with uncertainty is to wrap code within a Monte Carlo shell. In this approach the calculations are repeated with random values for selected input variables. This is done for a large number of iterations, and the distribution of resulting outputs can be analysed. Such tools exist in many programming languages: DAKOTA for C++ [3], COSSAN [4] and UQLab [5] for MATLAB, or UQpy for Python [6]. Olivier et al. give an excellent overview of many more software packages that are available for non-intrusive uncertainty quantification [6]. Under such an approach, random values are chosen, the calculations are performed and the output is stored; this is repeated for a number of iterations and the collected outputs can be analysed after the process has completed.

In order to demonstrate the potential problems with such an approach we can consider a simple example. Suppose we have five variables $x_1, \ldots, x_5$ which are known to all have a value between 0 and 1, but no further information is known about the values. Suppose we need to perform the calculation

$$y = x_1 + x_2 + x_3 + x_4 + x_5 \qquad (1)$$

with the knowledge that some bad thing will happen if $y \geq 4.5$. A number can be randomly generated for $x_1$, $x_2$, etc., and these can be used in order to calculate the value of $y$ for $N$ iterations. After this is complete we can plot a histogram to show the distribution for $y$. Since we do not have any information about the distribution for $x_1, \ldots, x_5$ it seems sensible to assume that all values are equally likely and use a uniform distribution. Figure 1 shows these histograms for various $N$. From this we can see that as $N \to \infty$ the histogram resembles a normal distribution. Whatever the number of replications used in the Monte Carlo simulation, we can estimate the probability of the bad thing happening. With $10^6$ replications, this estimate is $\Pr(y \geq 4.5) = 2.53 \times 10^{-4}$. However, it seems reasonable to consider whether we have confidence that the event is so rare. We had no information about the distributions of the five values except that they were between 0 and 1. Nor did we have knowledge about what dependencies there might be between the variables. From this information we cannot rule out the possibility that each $x$ value is much more likely to be closer to 1 than 0, or that there is some dependence between the $x$ values such that if $x_1$ is high then all the others are also likely to be high. Thus, the way that the uncertainty has been characterised may be significantly underestimating the risk [7].

There have been several engineering failures that were due in part to underestimating risks in ways similar to this example [7, 8]. Before the 1986 Challenger Disaster, NASA management had predicted the probability of failure with loss of vehicle and crew as 1 in $10^5$ flights [9]. This turned out to be a gross underestimation of the true risk, which after the retirement of the fleet stood at 2 in 135.
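For concreteness, the wrapper approach applied to Equation 1 takes only a few lines. The sketch below is illustrative only (it assumes numpy is available and is not part of Puffin); it makes exactly the independence and uniformity assumptions questioned above.

```python
# Naive Monte Carlo wrapper for Equation 1: y = x1 + ... + x5.
# Sampling each xi independently from U(0, 1) is itself an assumption.
import numpy as np

rng = np.random.default_rng(1)
N = 1_000_000
x = rng.uniform(0.0, 1.0, size=(N, 5))    # five inputs, N replications
y = x.sum(axis=1)                         # Equation 1 for every replication
print("estimated Pr(y >= 4.5):", np.mean(y >= 4.5))   # on the order of 1e-4
```

If the inputs were in fact dependent, or skewed towards 1, this estimate could be badly wrong, which is the point of the example.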
The Fukushima Daiichi nuclear disaster was due in part to underestimating the risk of a tsunami of the magnitude that caused the disaster and in failing to understand that collocating the backup generators created dependence that destroyed the planned engineered redundancy when the site was flooded during the event [10, p. 48]. The probabilities of satellites colliding in orbit can be underestimated through the use of probabilities [11], leading to false confidence that they are not going to hit each other. Performing uncertainty analysis by simply wrapping a simulation code in a Monte Carlo loop may not give a full account of the uncertainties that are present within a simulation. The probabilities of extreme events are especially difficult to correctly estimate when either the distributions of input variables are not known or any inter-variable dependencies are not known. There are other limitations of this simplistic Monte Carlo approach, including false confidence [11], and problems arising from confounding epistemic and aleatory uncertainties [12]. 3. Puffin Strategies are needed that automatically translate original source code into code with appropriate uncertainty representations and propagation algorithms. Perez et al. introduced a MATLAB toolbox to perform automatic uncertainty propagation using unscented transform, however more general approaches are needed [13]. In this paper 2 Fig. 1. Normalised histogram for the Monte Carlo simulation of Equation 1 for increasing number of iterations. we describe an uncertainty compiler for this purpose, named Puffin, along with an associated language. It handles the specifications of input uncertainties and inserts calls to an object-oriented library of intrusive uncertainty quantification (UQ) algorithms. In theory, the approach could work with any computer language and any flavour of uncertainty propogation. There are several components that are needed for the creation of Puffin as shown in Figure 2. Puffin needs a language of uncertainty, “Puffin Language”, that allows users to specify what uncertainties should be associated with the variables within the source code. This language should be simple and independent of the source language. For every source language that Puffin is to work with, there has to be an intrusive UQ library that Puffin Language can be translated into. Puffin Language does not need to be a textual programming language, it could instead be a visual language as part of a graphical user interface. Every type of uncertainty that is expressible in Puffin Language must be supported by an object constructor in the uncertainty library written in the source language. Section 4 discusses this Puffin Language component. The other component that Puffin needs is a transcompiler [14] that translates a user’s source code into an UQ enriched code expressed in the same language. There are three subcomponents to this: a Reader that is able to read the source language, a Translator that is able to read the specified uncertainties in Puffin Language, and a Writer that 3 Fig. 2. The different components of Puffin and the UQ library it depends on. For each source language that Puffin is able to read, the parts highlighted in red need to be created. The intrusive uncertainty quantification library and Puffin Language need to mirror each other with a direct translation being available for all uncertainty specifications. combines the results of both to output a new script with the specified uncertainties. 
ANTLR, a parser/lexer generator, can be used to generate the Reader and Writer [15]. ANTLR requires a grammar specification for the source language. Fortunately, ANTLR grammar files have been defined for many popular programming languages [16] and these can be used as a starting point. The Reader scans the input script and identifies the assignment operators within it that may have uncertainties that need to be specified. The Translator reads Puffin Language specifications created by the user and translates the uncertainties into the source language. For each source language these conversions need to be specified. The Writer reproduces the script with the required uncertainties and any changes necessary for the analysis to run without issue. We have designed these components for Python.

4. Puffin Language

Puffin depends on an uncertainty language to express what uncertainties are present within users' scripts. This language enables users to specify the uncertainties about the variables involved in their code before compiling it into a new script with UQ-enriched code. The language currently enables calculations with five types of uncertain objects that have relevance in engineering [12]:

• Interval: unknown value or values for which sure bounds are known [17],
• Probability distribution: random values varying according to a specified law such as normal, uniform, binomial, etc., with known parameters [18],
• Probability box: random values for which the probability distribution cannot be specified exactly but can be bounded [19],
• Confidence structure: inferential uncertainty about a constant parameter compatible with both Bayesian and frequentist paradigms [20], and
• Natural language expressions: uncertain values indicated by linguistic hedge words such as 'about 7.2' or 'at most 12' [21, 22, 23].

Libraries that add some or all of these objects are available in C++ [24], Python [25], MATLAB [26], R [27 or 28] and Julia [29]. There are many other uncertain objects that could be included within such a language of uncertainty, such as second-order distributions or meta-distributions [30, 31], fuzzy numbers [32], possibility distributions [33], consonant structures [34], info-gap models [35] and others.

4.1. Intervals

An interval is an uncertain number representing values from an unknown distribution over a specified range, or perhaps a single value that is imprecisely known even though it may in fact be fixed and unchanging. Intervals thus embody epistemic uncertainty. Intervals can be specified by a pair of scalars corresponding to the lower and upper bounds of the interval, such as $[0, 1]$ or $[4, 5]$. They can also be expressed as a value plus or minus some error, such as $2 \pm 5$, which is equivalent to $[-3, 7]$.

Interval arithmetic computes with ranges of possible values, as if many separate calculations were made under different scenarios. However, the actual computations made by the software are done all at once, so they are very efficient. Basic binary operations ($+$, $-$, $\times$, $\div$) can be performed using interval arithmetic:

$$[a, b] + [c, d] = [a + c,\ b + d], \qquad (2)$$
$$[a, b] - [c, d] = [a - d,\ b - c], \qquad (3)$$
$$[a, b] \times [c, d] = [\min(ac, ad, bc, bd),\ \max(ac, ad, bc, bd)], \qquad (4)$$
$$[a, b] \div [c, d] = \left[\min\left(\tfrac{a}{c}, \tfrac{a}{d}, \tfrac{b}{c}, \tfrac{b}{d}\right),\ \max\left(\tfrac{a}{c}, \tfrac{a}{d}, \tfrac{b}{c}, \tfrac{b}{d}\right)\right], \text{ assuming that } 0 \notin [c, d]. \qquad (5)$$

Intervals can be propagated through all common mathematical functions such as exp, sin, log, etc.
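A minimal illustration of Equations 2–5 in Python is given below. This is an illustrative class, not the Puffin UQ library itself; division assumes the divisor does not straddle zero, as in Equation 5.

```python
import math

class Interval:
    """Closed interval [lo, hi] with the basic arithmetic of Equations 2-5."""
    def __init__(self, lo, hi):
        self.lo, self.hi = min(lo, hi), max(lo, hi)

    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __sub__(self, other):
        return Interval(self.lo - other.hi, self.hi - other.lo)

    def __mul__(self, other):
        ps = [self.lo * other.lo, self.lo * other.hi,
              self.hi * other.lo, self.hi * other.hi]
        return Interval(min(ps), max(ps))

    def __truediv__(self, other):
        assert not (other.lo <= 0 <= other.hi), "0 must not lie in the divisor"
        qs = [self.lo / other.lo, self.lo / other.hi,
              self.hi / other.lo, self.hi / other.hi]
        return Interval(min(qs), max(qs))

def i_exp(x):
    # exp is monotonic, so evaluating the endpoints is enough
    return Interval(math.exp(x.lo), math.exp(x.hi))

print(Interval(0, 1) + Interval(2, 3))   # [2, 4]
print(i_exp(Interval(0, 1)))             # roughly [1, 2.718]
```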
This is relatively straightforward if the function is monotonic, as this implies that the endpoints of the input interval correspond to the endpoints of the output interval. For example, when calculating the exponential of an interval,

$$\exp([0, 1]) = [\exp(0), \exp(1)] \approx [1, 2.718]. \qquad (6)$$

For a non-monotonic function such as sine it is not necessarily the case that the endpoints of the input interval correspond to the endpoints of the output interval. For example, it is not the case that

$$\sin([0, \pi]) = [\sin(0), \sin(\pi)] = [0, 0] \qquad (7)$$

because there are many $x$ values such that $\sin(x) > 0$ for $x \in [0, \pi]$. The true width of the interval can be calculated using the maxima of the function within the domain, in this case $\pi/2$. Hence,

$$\sin([0, \pi]) = [0, 1]. \qquad (8)$$

4.2. Probability Distributions and Probability Boxes

A probability distribution is a mathematical function that gives the probabilities of occurrence for different possible values of a random variable. Probability bounds analysis integrates interval analysis and probability distributions using probability boxes (p-boxes) [19]. They can be considered as interval bounds on a probability distribution; therefore one can think of a probability distribution as a special case of a p-box. Figure 3 shows an example. P-boxes characterise both epistemic and aleatory uncertainty. A p-box can be expressed mathematically as

$$\mathcal{F}(x) = [\overline{F}(x), \underline{F}(x)], \quad \overline{F}(x) \geq \underline{F}(x)\ \ \forall x \in \mathbb{R}, \qquad (9)$$

where $\overline{F}(x)$ is the function that defines the left bound of the p-box (the blue line in Figure 3) and $\underline{F}(x)$ defines the right bound of the p-box (the orange line in Figure 3).

Fig. 3. Probability box for a normal distribution with $\mu = [-1, 1]$ and $\sigma = [1, 2]$.

As with intervals, standard arithmetic operations can be performed on p-boxes (and therefore probability distributions). For two p-boxes $\mathcal{A}(x) = [\overline{A}(x), \underline{A}(x)]$ and $\mathcal{B}(x) = [\overline{B}(x), \underline{B}(x)]$,

$$\mathcal{C}(x) = \mathcal{A}(x) \circ \mathcal{B}(x) = [\overline{C}(x), \underline{C}(x)] \qquad (10)$$

where, if $\circ \in \{+, \times\}$,

$$\overline{C}(z) = \inf_{z = x \circ y}\left[\min\left(\overline{A}(x) + \overline{B}(y),\ 1\right)\right], \qquad (11a)$$
$$\underline{C}(z) = \sup_{z = x \circ y}\left[\max\left(\underline{A}(x) + \underline{B}(y) - 1,\ 0\right)\right], \qquad (11b)$$

and if $\circ \in \{-, \div\}$,

$$\overline{C}(z) = \inf_{z = x \circ y}\left[1 + \min\left(\overline{A}(x) - \underline{B}(y),\ 0\right)\right], \qquad (12a)$$
$$\underline{C}(z) = \sup_{z = x \circ y}\left[\max\left(\underline{A}(x) - \overline{B}(y),\ 0\right)\right]. \qquad (12b)$$

Naturally, division is only valid if $0 \notin \mathcal{B}$ [36, p. 89].

4.3. Confidence Boxes

Confidence boxes (c-boxes) are imprecise generalisations of traditional confidence distributions, which, like Student's $t$-distribution, encode frequentist confidence intervals for parameters of interest at every confidence level [20, 38].
They are analogous to Bayesian posterior distributions in that they characterise the inferential uncertainty about distribution parameters estimated from sparse or imprecise sample data, but they have a purely frequentist interpretation that makes them useful in engineering because they offer a guarantee of statistical performance through repeated use. Unlike confidence intervals, which cannot usually be used in mathematical calculations, c-boxes can be propagated through mathematical expressions using the ordinary machinery of probability bounds analysis, and this allows analysts to compute with confidence, both figuratively and literally, because the results also have the same confidence interpretation [39]. For instance, they can be used to compute probability boxes for both prediction and tolerance distributions.

Confidence boxes can be computed in a variety of ways directly from random sample data. There are c-boxes both for parametric problems (where the family of the underlying distribution from which the data was randomly generated is known to be normal, binomial, Poisson, etc.), and for non-parametric problems in which the shape of the underlying distribution is unknown. C-boxes account for the uncertainty about a parameter that comes from the inference about observations, including the effect of small sample size, but also the effects of imprecision in the data and demographic uncertainty which arises from trying to characterise a continuous parameter from discrete data observations. For example, it is possible to specify a c-box in the binomial case of having $K$ successes out of $N$ trials, based upon Clopper-Pearson confidence intervals [34, 40, 41]. This $K$-out-of-$N$ c-box is specified as

$$\mathrm{KN}(k, n) = [\,\mathrm{beta}(k,\ n - k + 1),\ \mathrm{beta}(k + 1,\ n - k)\,]. \qquad (13)$$

4.4. Natural Language Uncertainty

In order to make uncertainty analysis as simple as possible, users should be able to input their uncertainties using natural language expressions such as about or almost. Humans are more likely to express their uncertainties in terms of hedged expressions around a round number, rather than as a percentage or probability. Table 1 lists some hedge words and their possible interpretations. Hedge words can be interpreted as intervals, p-boxes [21], or consonant c-boxes [42].

Hedged Numerical Expression | Possible Interpretation
about(x)        | $x \pm 0.5 \times 10^{-d}$
around(x)       | $x \pm 2 \times 10^{-d}$
count(x)        | $x \pm \sqrt{x}$
almost(x)       | $[x - 0.5 \times 10^{-d},\ x]$
over(x)         | $[x,\ x + 0.5 \times 10^{-d}]$
above(x)        | $[x,\ x + 2 \times 10^{-d}]$
below(x)        | $[x - 2 \times 10^{-d},\ x]$
at most(x)      | $[0,\ x]$
at least(x)     | $[x,\ \infty]$
order(x)        | $[x/2,\ 5x]$
between x and y | $[x,\ y]$
K out of N      | $[\,\mathrm{beta}(k,\ n - k + 1),\ \mathrm{beta}(k + 1,\ n - k)\,]$

Table 1. Hedge expressions and their mathematical equivalent. Note: $d$ is the number of significant figures of $x$.
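The sketch below illustrates a few of the hedge interpretations from Table 1 that map directly onto intervals. It is not Puffin's parser; the function name is illustrative, intervals are plain (lo, hi) tuples, and the width used for 'about' is half a unit in the last written decimal place, matching the significant-digit convention discussed later for values such as '3.56' being read as [3.555, 3.565].

```python
import math
import re

def hedge(expression):
    """Interpret a hedged numeric phrase as an interval (lo, hi) tuple."""
    m = re.fullmatch(r"(about|at most|at least|order) ([0-9.]+)", expression)
    if m is None:
        raise ValueError(f"unrecognised hedge: {expression!r}")
    word, text = m.groups()
    x = float(text)
    # half a unit in the last decimal place actually written, e.g. 7.2 -> 0.05
    half_ulp = 0.5 * 10 ** -(len(text.split(".")[1]) if "." in text else 0)
    if word == "about":
        return (x - half_ulp, x + half_ulp)
    if word == "at most":
        return (0.0, x)
    if word == "at least":
        return (x, math.inf)
    return (x / 2, 5 * x)          # 'order x' -> [x/2, 5x]

print(hedge("about 7.2"))      # approximately (7.15, 7.25)
print(hedge("at most 12"))     # (0.0, 12.0)
```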
4.5. Logical Operations with Uncertain Objects

When making decisions it is often the case that two values need to be compared with each other. Asking whether an observed value is greater than, equal to, or less than some threshold value is fundamental. For example, if a decision relies on some observed value $X$ being less than 1, when we know the value of $X$ accurately then it is easy to make such a comparison. However, if there is some uncertainty about the value of $X$ then this comparison may not be so easy.

For intervals $X = [a, b]$ and $Y = [c, d]$,

$$X < Y = \begin{cases} 1 & b < c \\ 0 & a \geq d \\ [0, 1] & \text{otherwise} \end{cases} \qquad (14)$$

and

$$X > Y = \begin{cases} 0 & b \leq c \\ 1 & a > d \\ [0, 1] & \text{otherwise} \end{cases} \qquad (15)$$

with 1 and 0 denoting true and false respectively, and $[0, 1]$ being the Boolean equivalent of "I don't know". We can call $[0, 1]$ the dunno interval. This implies that we cannot say whether an uncertain value characterised by an interval is larger or smaller than another unless the interval is entirely greater or less than the other interval. For the equality comparison,

$$X == Y = \begin{cases} [0, 1] & a \in Y \text{ or } b \in Y \\ 0 & \text{otherwise,} \end{cases} \qquad (16)$$

so when asking for equality between intervals it is never possible to say that one value is equal to another. We can introduce a new Boolean operator (===) to test whether two uncertain numbers are equivalent in form:

$$X === Y = \begin{cases} 1 & a = c \text{ and } b = d \\ 0 & \text{otherwise.} \end{cases} \qquad (17)$$

The dunno interval can be converted into a true Boolean using operators such as always or sometimes, so that

$$\text{always}([0, 1]) = 0, \qquad (18a)$$
$$\text{sometimes}([0, 1]) = 1, \qquad (18b)$$

and we can get

$$\text{always}(X < Y) = \begin{cases} 1 & b < c \\ 0 & \text{otherwise} \end{cases} \qquad (19)$$

$$\text{sometimes}(X < Y) = \begin{cases} 1 & a < d \\ 0 & \text{otherwise.} \end{cases} \qquad (20)$$

There are methods that are able to deal with more nuanced ways of using logical operations with intervals; see [43] as an example. There are also different logic systems, such as fuzzy logic, that could be used in order to make logical operations with uncertain numbers [44].

4.6. Repeated Variables and Dependency

When performing intrusive uncertainty analysis it would be ideal to always obtain best possible results that are guaranteed to bound the true value without overestimating the uncertainty. The uncertainty can be inflated or artifactually high if careful consideration of the dependence between, and repetition of, uncertain numbers is not undertaken. This problem appears to be ubiquitous to many, if not all, uncertainty calculi [12]. For example, if $a = [2, 3]$ and $b = [4, 5]$, then $a \times b = [8, 15]$. However, if it were the case that $a$ and $b$ were oppositely dependent on each other, such that a low value of $a$ is always matched with a high value of $b$, then $a \times b$ is the much narrower interval $[10, 12]$.

Repetition of variables can also artifactually inflate the amount of uncertainty present within the output. For example, if $a = [1, 2]$, $b = [3, 4]$ and $c = [-1, 1]$, then

$$ab + ac = [1, 2] \times [3, 4] + [1, 2] \times [-1, 1] = [3, 8] + [-2, 2] = [1, 10] \qquad (21)$$

but

$$a(b + c) = [1, 2] \times ([3, 4] + [-1, 1]) = [1, 2] \times [2, 5] = [2, 10]. \qquad (22)$$

Although algebraically these two expressions should be equal, the uncertainty of $ab + ac$ is greater than the uncertainty about $a(b + c)$. This is due to the fact that the uncertain variable $a$ is repeated within the former but appears only once in the latter. In essence the uncertainty about $a$ has been considered twice when performing the first calculation. The amount of this artifactual uncertainty can be reduced by transforming the original equation into a single-use expression where uncertain variables are only used once. If this is not possible, there are other techniques that can be used to reduce this artifactual uncertainty (e.g. [45, 46, 47, 48]). For distributions and p-boxes, significant artifactual uncertainty reduction can be made if the dependence between the variables is known [36].
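The repeated-variable effect in Equations 21 and 22 can be checked directly. The snippet below is a small self-contained illustration in plain Python, with intervals written as (lo, hi) tuples rather than any particular UQ library's objects.

```python
# Interval addition and multiplication on (lo, hi) tuples (Equations 2 and 4).
def i_add(x, y):
    return (x[0] + y[0], x[1] + y[1])

def i_mul(x, y):
    ps = [x[0] * y[0], x[0] * y[1], x[1] * y[0], x[1] * y[1]]
    return (min(ps), max(ps))

a, b, c = (1, 2), (3, 4), (-1, 1)
print(i_add(i_mul(a, b), i_mul(a, c)))   # a*b + a*c  ->  (1, 10), a used twice
print(i_mul(a, i_add(b, c)))             # a*(b + c)  ->  (2, 10), a used once
```

The second, single-use form is narrower even though the two expressions are algebraically identical, which is exactly the artifactual inflation described above.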
Figure 4 shows the result of adding two separate p-boxes, $A = \mathrm{U}([0, 1], [2, 3])$ and $B = \mathrm{U}([4, 6], [5, 7])$, together with different dependencies between $A$ and $B$. The Fréchet bounds are used when the dependence between $A$ and $B$ is unknown; thus it is the most general case and is guaranteed to bound the correct answer. As such, in Figure 4 the Fréchet bounds cover all the other dependencies, as it is the operation that is defined in Equations 10, 11 and 12. Perfect, or comonotonic, is where there is perfect positive dependence between the two variables, with the highest possible correlation coefficient. Opposite, or countermonotonic, is perfect negative dependence between the two variables, with the lowest possible correlation coefficient. Independence is where there is no dependence between the two variables. It should not be assumed that variables are independent unless this is known, because wrongly assuming independence can lead to incorrectly reducing the amount of uncertainty and understating tail risks.

In general, dependence between uncertain quantities can be expressed through the use of correlation coefficients or copulas, or bounds on copulas more generally [49, 50, 51, 52]. This can include named copulas such as independence, opposite and perfect, as shown in Figure 4, or other copula families parameterised by a numerical correlation coefficient. Independence implies the correlation is zero, although zero correlation does not imply independence. Likewise, a correlation of one implies perfect dependence, but, depending on the copula family, perfect dependence may not imply correlation one. The symbol $\equiv$ can be used to indicate that the variables are equal in value, i.e., equal in distribution and perfectly positively correlated.

These dependencies can be stored within a matrix, such as that shown in Table 2. This matrix can be checked for feasibility by checking that it is positive semi-definite and that there are no conflicting dependencies within the table. For example, the matrix shown in Table 3 is not logically consistent for continuous variables. This is because a high value of x implies a high value of z, since they are positively dependent on each other. Meanwhile, a high z implies y must be low, due to their opposite dependence. This means there must also be dependence between x and y, not the independence as has been specified within the table.

Table 2. Matrix showing dependencies between several variables. (f - Fréchet, i - Independence, o - Opposite, ≡ - Equal in value)

      x   y   z
  x   -   i   p
  y   i   -   o
  z   p   o   -

Table 3. Dependency matrix that does not make logical sense. (i - Independence, p - Perfect, o - Opposite)

4.7. Other issues

Aside from dependencies and sensitivity to repeated variables, there are other issues that distinguish simple deterministic calculations from uncertainty quantifications. For example, in uncertainty quantification analysts may need to consider ensembles and backcalculations.

Probability distributions describe properties or behaviours across a population of entities. Statisticians call such a population the "reference class" or "ensemble". Uncertainty quantification implicitly represents many calculations over interacting ensembles, and it can be extremely important to keep in mind what the values in a probability distribution represent.
For instance, if the post-operative risks of prostatectomy is fifty percent erectile dysfunction, it would make a huge difference to a patient whether this means that 50% of his future attempts at sex fail or that 50% of patients are permanently impotent. Does a system reach 10% of criticality or does it reach criticality 10% of the time? Uncertainty quantifications that do not explicitly define what the distributions in an analysis represent in terms of their respective ensembles may be meaningless. Puffin allows users to annotate their codes to specify and document the ensemble described by any distribution or other uncertain quantity, although, in general, it is the responsibility of the analyst to ensure that the calculations used make sense. Another wrinkle that makes uncertainty quantification different from its analogous deterministic calculations is the importance of backcalculation. Backcalculation is a mathematical operation for finding solutions to equations involving variability or uncertainty that guarantee some desired performance. Such problems are ubiquitous in engineering design. Backcalculation solves questions such as 1. What dimensional constraints on a component are necessary to ensure that it fits in its place in a machine given spatial tolerances? 2. How much propellant is needed to guarantee sufficient fuel given the mission contingencies and unforeseen variabilities? 3. How much shielding is needed on a spacecraft to ensure that the total ionizing radiation experienced inside the craft does not exceed some tolerable threshold, given that radiation in space varies over time in an imperfectly known way? The Puffin UQ library has algorithms to solve backcalculations that involve intervals, distributions and p-boxes (when solutions exist), but it is the responsibility of the analyst to ensure that they are deployed appropriately to yield calculations that make sense in the engineering context. 5. Compiler Puffin consists of its intrusive UQ library, a code inspector/editor, and an uncertainty compiler. Puffin’s uncertainty compiler does five things: 1. Parses the input source code into expression tree(s), 2. Identifies the variables in any assignment operations, 3. 
Replaces or modifies some or all of these assignments in the expression trees according to options and specifications provided by the user,
4. Translates the expression trees, with amended assignments, into the target language equipped with its intrusive UQ library, and
5. Analyses the output code to detect repeated variables and other functional dependencies that affect calculations and suggests improvements for computing uncertainties.

Fig. 5. The result of using the compiler whilst defining the uncertainty in Puffin on a simple pseudocode script.

In order to explain what is happening in these steps it is useful to consider a simple pseudocode script, shown in the top left corner of Figure 5. For step 1, the simple script has been broken into a parse tree, which can be seen in Figure 6. From this tree Puffin detects the assignment operators which define a variable. These include lines 1 and 2, the leaves highlighted magenta on the parse tree, but not those that assign a value based upon a mathematical expression (line 5), a function or directly from another variable (line 3). In theory, such variables could also be edited by the user, but Puffin assumes that only explicit assignments will have uncertainty. Once the assignments have been found they can be displayed in the Puffin language, shown in the top right panel of Figure 5, where the assignments can then be edited with the appropriate user-specified uncertainties. These uncertainties need to then be translated to the source language, along with the rest of the parse tree. This translation may include altering any functions that depend on the amended variables. In this case, the infix operators in the definition of d in line 5 (+, *) have been identified within the parse tree and replaced with an explicit call to the UQ library functions (add, mul), which also take as an argument the dependence operation that is to be used. In the lower panel of Figure 5, the value 'f' of this argument corresponds to making no assumption about the intervariable dependence between a and b, and perfect dependence (comonotonicity) between their product and the variable c.
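One way the rewriting in steps 3 and 4 could be realised for Python sources is with the standard ast module. The sketch below is illustrative rather than Puffin's actual implementation: the uqlib module, its sig_interval constructor and its add/mul/sub/div functions are hypothetical stand-ins for the intrusive UQ library, and the 'f' method argument mirrors the Fréchet default described above.

```python
import ast

OPS = {ast.Add: "add", ast.Sub: "sub", ast.Mult: "mul", ast.Div: "div"}

class UncertaintyRewriter(ast.NodeTransformer):
    def visit_Constant(self, node):
        # Automatic mode: widen float literals to significant-digit intervals,
        # e.g. 3.56 -> uqlib.sig_interval(3.56), meaning roughly [3.555, 3.565]
        if isinstance(node.value, float):
            new = ast.parse(f"uqlib.sig_interval({node.value!r})", mode="eval").body
            return ast.copy_location(new, node)
        return node

    def visit_BinOp(self, node):
        self.generic_visit(node)              # rewrite the operands first
        name = OPS.get(type(node.op))
        if name is None:
            return node
        call = ast.parse(f"uqlib.{name}(0, 0, method='f')", mode="eval").body
        call.args = [node.left, node.right]   # keep the rewritten operands
        return ast.copy_location(call, node)

source = "d = a * b + 4.5\n"
tree = UncertaintyRewriter().visit(ast.parse(source))
ast.fix_missing_locations(tree)
print(ast.unparse(tree))
# d = uqlib.add(uqlib.mul(a, b, method='f'), uqlib.sig_interval(4.5), method='f')
```

The same traversal gives Puffin-style tools a natural place to record which variables feed into which expressions, which is needed later for dependency tracking.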
Puffin should only highlight numeric objects, not characters, strings or other non-numeric classes. In strongly typed programming languages like C, FORTRAN, and Pascal, the problem of distinguishing numeric from other types of objects is easy. In Python, R or Julia, the type of any object is not detectable until runtime and can even change during execution. Puffin will also need to be able to recognise objects that are collections of numeric values such as lists or dictionaries.

Fig. 6. Parse tree for the simple pseudocode script. Abbreviations: stmt - statement, asgmt - assignment, func - function, expr - expression.

Fig. 7. Result of using Puffin automatically on a simple pseudocode script.

It will also need to be able to look at what is inside the lists, highlight those which have numeric objects within them, and allow the individual objects to have uncertainties added. Alternatively it could be the case that the whole of the list has the same uncertainty, something which should also be possible.

Puffin can be run automatically without any user input at all. Under default settings, automatic uncertainty compilation replaces floating-point constants with intervals interpreted from the significant figures used in the source code assignments and uses that information as a proxy for the uncertainty (for an example see Figure 7). In this mode all of the steps happen concurrently without requiring any further input from an end user. When using this mode the compiler will need to tread carefully around mathematical constants such as π or e for which there is no uncertainty. Ideally it would allow users to minimally specify which values are precise constants.

5.1. Control Flow and Functions

For loops and functions are potential stumbling blocks for Puffin. Figure 8 shows a simple pseudocode script with a function and a for loop. In the first for loop, each i is simply a control variable with a start and end value.
The individual value of i is irrelevant and as such would have no uncertainty about it. The second for loop is a 'for each' loop, implying that the code needs to do something for each value within some iterable object. In this case the code is setting the value of initial_velocity to each value within the list, for each iteration of the for loop. It may be the case that there is uncertainty about the objects within the list, in which case Puffin should recognise this and allow users to change the code such that the objects within the list can have uncertainty added to them.

Puffin will also need to have a way of dealing with local variables within functions. For example, the function in Figure 8 has two local variables, s for the distance that the object travels and g for the acceleration due to gravity. It is conceivable that both of these variables have some uncertainty associated with them, and as such Puffin should be able to detect the variables and offer the ability to edit them so that uncertainty is handled. This could be done using a dot notation, meaning that the local g can be accessed using calculateVelocity.g.

Fig. 8. Pseudocode script with functions and for loops.

If statements and other logical control structures may also pose issues for Puffin.
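One way an if-statement over uncertain values might be handled — sketched here for illustration rather than as Puffin's actual behaviour — is to require the analyst to wrap the condition in the adverb operators of Section 4.5, so that the dunno outcome is dealt with explicitly.

```python
# Three-valued comparison for (lo, hi) interval tuples, following
# Equations 14 and 18-20; DUNNO stands for the [0, 1] dunno interval.
DUNNO = (0, 1)

def lt(x, y):
    if x[1] < y[0]:
        return True
    if x[0] >= y[1]:
        return False
    return DUNNO

def always(b):    return b is True
def sometimes(b): return b is True or b == DUNNO

a, b = (1, 2), (0.5, 3)
if always(lt(a, b)):
    print("branch taken in every scenario")
elif sometimes(lt(a, b)):
    print("branch taken in some scenarios; the analyst must decide")   # this case
else:
    print("branch never taken")
```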
In line 16 of Figure 8 the logical operation within the statement would need changing to ensure that the statement runs as expected, see Section 4.5. Ideally, the analyst would decide what should happen if the statement a < b returned a dunno, [0,1], result by using the adverb operators discussed in equations 19 and 20. This may require additional editing to deal with situations where an uncertain result should be handled differently to a certain true or false. All the code and variables are not necessarily contained within one script. For example, classes and functions are often placed in other files in order to improve readability or to avoid repetitions. Ideally Puffin would be able to parse several scripts at the same time and remember the context for all the individual objects. It is also often the case that scripts read data from other files when running. Under this scenario it may be difficult to use Puffin to express the uncertainty directly within the script, although import functions could be modified to add in the uncertainties. For instance, anytime a floating-point number is read from the file, its significant digits could be interpreted to specify an interval around the value. So, for example, the value ‘3.56’ would be understood as the interval [3.555, 3.565]. Another approach might be to get Puffin to parse the data file and add the uncertainties in to the file directly. This would require changing the import function to be able to handle uncertain datafiles. Many computers languages that are not purely functional support functions that specify their parameters with "call by reference" which means that the memory location of a value is passed to the function rather than a copy of the true value. This convention can allow the function to change the values of those parameters in the calling routine not just locally within the function. Python does this by default with objects more complicated than integers, floating-point number, and strings such as lists, dataframes and numpy arrays. Puffin will need to be careful in handling functions that use the call by reference method of passing argument. The presence of uncertainty implies multiple function definitions might be useful. For instance sqrt applied to ranges that might include negative numbers could have three possible behaviours. 
Abnormal termination, for example Python's math.sqrt returns a domain error if passed a negative number; yielding imaginary results, such as Python's cmath.sqrt; or ignoring the negative values, returning [0, 1] for √[−1, 1]. An example of this can be seen when using numpy.sqrt([-1, 1]) in Python (here the square brackets correspond to Python's list object, not an interval), which returns [NaN, 1]. Similar facilities already exist to handle NaNs or missing values.

Fig. 9. Three possible encodings for the dependencies in the code shown on the left.

5.2. Coping with Dependency and Repeated Variable Problems

When it comes to dealing with the issues of dependency and repeated variables there are a couple of approaches that could be used in order to help reduce the problems discussed in Section 4.6. The simplest approach from a Puffin perspective would be for the libraries within each language to be able to handle the dependencies directly. This could be done if each object kept track of what other objects it depends on and in what way. For the example that has been used in Figure 9 – Encoding #1, the variable c would remember that it is dependent on the variables a and b, and therefore on line 4 it would know what the correct arithmetic would be to ensure as little artifactual uncertainty as possible. Puffin would insert this dependence directly in the translation, as shown with the grey text in Encoding #1; then at run time the / and * operators would automatically invoke the correct algorithms that respect the dependencies between the variables detected. This approach would have demands on memory. It requires initialised variables to have dependencies specified, or a default dependence if they are unspecified.

The other way of treating the dependence would be for Puffin to parse over the script in order to detect the dependencies directly at compile time. These dependencies can then be stored within a matrix as discussed in Section 4.6. This matrix would need to be accessible for an analyst to add in assumptions about dependencies not observable from the code. For example, if variables a and b are independent this cannot be directly inferred from the code, and therefore Fréchet would be assumed unless the analyst stated otherwise by editing the matrix.
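A minimal sketch of the compile-time feasibility check described in Section 4.6 is given below (assuming numpy). Mapping perfect, opposite and independent to correlations +1, −1 and 0 is an assumption made only for this test, and Fréchet entries, which carry no single correlation, are simply left out of it.

```python
import numpy as np

CORR = {"p": 1.0, "o": -1.0, "i": 0.0}

def feasible(labels, deps):
    """deps maps unordered pairs of variable names to 'p', 'o' or 'i'."""
    n = len(labels)
    m = np.eye(n)
    for (u, v), code in deps.items():
        i, j = labels.index(u), labels.index(v)
        m[i, j] = m[j, i] = CORR[code]
    # a valid correlation matrix must be positive semi-definite
    return bool(np.all(np.linalg.eigvalsh(m) >= -1e-12))

# The inconsistent specification of Table 3: x-z perfect, y-z opposite, x-y independent
print(feasible(["x", "y", "z"], {("x", "z"): "p", ("y", "z"): "o", ("x", "y"): "i"}))  # False
# Making x and y oppositely dependent, as the prose suggests, restores feasibility
print(feasible(["x", "y", "z"], {("x", "z"): "p", ("y", "z"): "o", ("x", "y"): "o"}))  # True
```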
In this scenario Puffin would have to replace any infix operators with function calls with the appropriate dependence. In Figure 9 – Encoding #2, the multiplication a*b has been replaced by the multiply function, specifying that Fréchet should be used for the dependencies. Similarly the division operator has been replaced by a function with the method defined as opposite.

Finally, where it is possible, Puffin should be able to rearrange the equations such that any repeated variables are removed. Such a method would require Puffin to have a directory of multi-use to single-use rearrangements, as well as a way of matching the written code to the mathematical expression. Another, smarter, approach would be to have a symbolic algebra system that is able to rearrange to a single-use expression on the fly. The simplest version of this is for it to happen just across one line, for instance replacing c = a*b+a*c with c = a*(b+c). A more complex approach is to consider repetitions globally, detecting repetitions that happen over multiple lines. For instance, in the example code in Figure 9 there is a hidden repetition across lines 3 and 4 because $c = a + b$, so

$$d = \frac{c}{1 - a b} = \frac{a + b}{1 - a b}. \qquad (23)$$

This expression can be rearranged into the single-use expression

$$d = \tan(\arctan(a) + \arctan(b)), \qquad (24)$$

which is the change made in Encoding #3. Such a transformation would be in the directory mentioned above. Care would need to be taken to ensure that the right variable gets the rearrangement. Take the following kinematics equation to find the position $s$ of a particle at time $t$:

$$s = ut + \frac{1}{2}at^2 \qquad (25)$$

where $u$ is the initial velocity and $a$ is the acceleration of the particle. This equation has a single repetition for both $u$ and $a$, but $t$ is repeated. If there is uncertainty associated with $t$ then this equation can be rearranged into the single-use expression

$$s = \left(\sqrt{\frac{a}{2}}\, t + \frac{u}{\sqrt{2a}}\right)^2 - \frac{u^2}{2a}. \qquad (26)$$

This equation contains repetitions of $a$ and $u$, and as such may only be preferred if there is no uncertainty associated with either $a$ or $u$. If there is uncertainty associated with either then it may be best not to perform the rearrangement, or to intersect possible rearrangements to obtain the best possible expression. There are additional issues that Puffin could face when rearranging equations. For example, if the uncertainty about $a$ includes negative numbers then $\sqrt{2a}$ is likely to be problematic. Alternatively, there could be problems if $a$ straddles 0 because this would result in a division by zero. A strategy for dealing with this may be to perform the calculations using both Equations 25 and 26 and intersect them.

5.3. Hermeneutic Problems

There are several problems that could occur when it comes to translating a script because it is difficult to understand the intent of the programmer from the code. An example of this can be found in line 3 of the pseudocode example in Figure 5. There is potential for confusion when it comes to the assignment c = a, as there are a couple of different interpretations as to what such a command implies when it comes to the uncertainty.
The first is that we are implying that c and a are the same object but have been given different names for some reason; under this scenario they should be considered equivalent to each other and therefore the calculation a + c could be rearranged to the single-use expression 2*a. A second interpretation is to consider that the line could have been written as c = 1*a and the 1 has been dropped as it would have had no mathematical impact on the calculation; this implies that they are perfectly dependent on each other, in the same way that c = -1*a implies negative dependence. The calculation a + c would therefore need to be performed using perfect dependence. A third interpretation would be to consider that it is saying that c is a copy of a: they have the same uncertainty, but their realisations are not necessarily related to each other even though they have the same distribution shape. In the third scenario it would be sensible to make no assumptions about the dependencies between a and c, and therefore Fréchet should be used. Knowledge of which of these scenarios is correct depends on the context of the script, something which Puffin is unable to make an assumption about by itself.

Another potential interpretation problem can occur because, when creating code, people naturally favour making their code readable. For example, the equation of motion for a damped harmonic oscillator can be given by

$$x''(t) + \frac{b}{m}x'(t) + \frac{k}{m}x(t) = 0 \qquad (27)$$

where $b$ is a damping constant, $m$ is the mass of the oscillator and $k$ is the spring constant. This equation can be solved analytically to find

$$x(t) = A \exp\left(-\frac{bt}{2m}\right) \cos\left(t\sqrt{\frac{k}{m} - \frac{b^2}{4m^2}} + \phi_0\right) \qquad (28)$$

where $A$ is a constant and $\phi_0$ is the initial angle.

Fig. 10. Two different ways in which Equation 28 might be coded: (a) coded in a single line; (b) coded across multiple lines.

In Figure 10 the equation has been coded in two different ways. In 10a the equation has been coded on a single line; as the equation is quite complicated, it is likely that the programmer coding it would want to split it into multiple parts, as has been done in 10b. There are no mathematical differences between the two approaches as they will lead to the same value.
Puffin, however, would have to be careful around breaking up equations in such a way; strategies would be needed to ensure that breaking the lines up would not have a detrimental effect on how the code operated. Care would also need to be taken about the dependency tracking throughout the split equation. For example, if there was uncertainty about the damping constant $b$ then it would be difficult to assess the dependence between lines 4 and 5, especially since the cosine function is not monotonic. It may be better to use other techniques to solve the ODE (Equation 27), such as VSOPE or VNODE [53, 54, 55, 56, 57]. Making such a change would again require knowledge of what the calculation is and what exactly it is doing, something unlikely to be obvious from parsing the code.

6. Discussion

There are several reasons why Puffin may be unlikely to work as intended. The view that deterministic calculations can be translated to analogous computations for uncertainty quantification simply by replacing point values with uncertain structures like intervals and distributions ignores the issues of
1. computational burden,
The problem is when it the uncertainty artifactually depends on the way the analysis was structured and does not reflect the features of the underlying computational problem. The analysts who most need Puffin are those who have never heard of a p-box and who aren’t sure what normal distributions or intervals are. Puffin offers multiple ways of specifying uncertainties in order to cater to the needs of such analysts. Puffin offers multiple ways of specifying uncertain inputs for user unfamiliar with the details that they require: • Significant-digit intervals • Measurement intervals (manufacturer or GUM conventions) • English-language hedge words (‘about’, ‘less than’, etc.) • Poisson model counts • Moment or moment-range specifications • Equivalent binomial count (k out of n confidence box) • Single-sample confidence intervals • Mean normal range (n & range normal distribution) • Fermi strategies 17 • Distribution-free specification of p-boxes It is also possible to run Puffin in a way that doesn’t require any inputs, transforming assignments into significant digit intervals. Irrespective of how the uncertainties are expressed within Puffin, it is worth remembering that uncertain garbage in will lead to uncertain garbage out. The general problem of uncertainty analysis is hard and it is difficult to create software that comprehensively solves all these problems and the development if Puffin is likely to be difficult. However, many practical problems are simpler than the most general problem and when this is the case it would be extremely useful to use a tool like Puffin which is able to handle uncertainty analysis intrusively within code. Puffin is intended to be open source and to be continuously co-developed by an interested community, adding in functionality and extending it as fashionable programming languages change and uncertainty quantification techniques develop. Code Availability Puffin is currently in development, and the current version can be found on GitHub[60]. We welcome suggestions and collaborators via GitHub. Acknowledgements This work has been funded by the Engineering and Physical Science Research Council (EPSRC) through the programme grant “Digital twins for improved dynamic design”, EP/R006768/1. References [1] M. Shafto, M. Conroy, R. Doyle, E. Glaessgen, C. Kemp, J. LeMoigne, L. Wang, Modeling, Simulation, Information Technology and Processing Roadmap - Technology Area 11, Tech. rep., National Air and Space Administration, Washington, DC, USA (2012). [2] S. Boschert, R. Rosen, Digital Twin—The Simulation Aspect, in: Mechatronic Futures, Springer International Publishing, Cham, 2016, pp. 59–74. doi:10.1007/978-3-319-32156-1. [3] B. M. Adams, W. J. Bohnhoff, K. R. Dalbey, J. P. Eddy, M. S. Eldred, D. M. Gay, K. Haskell, P. D. Hough, L. P. Swiler, DAKOTA, A Multilevel Parallel Object-Oriented Framework for Design Optimization, Parameter Estimation, Uncertainty Quantification, and Sensitivity Analysis: Version 5.0 Reference Manual, Tech. rep., Sandia National Laboratories, Albuquerque, NM, United States (2010). [4] E. Patelli, COSSAN: A Multidisciplinary Software Suite for Uncertainty Quantification and Risk Management, in: R. Ghanem, D. Higdon, H. Owhadi (Eds.), Handbook of Uncertainty Quantification, Springer International Publishing, Cham, 2015, pp. 1–69. doi:10.1007/978-3-319-11259-6_59-1. [5] S. Marelli, B. 
Sudret, UQLab: A Framework for Uncertainty Quantification in Matlab, in: Vulnerabil- ity, Uncertainty, and Risk, American Society of Civil Engineers, Liverpool, UK, 2014, pp. 2554–2563. doi:10.1061/9780784413609.257. [6] A. Olivier, D. G. Giovanis, B. Aakash, M. Chauhan, L. Vandanapu, M. D. Shields, UQpy: A general purpose Python package and development environment for uncertainty quantification, Journal of Computational Science 47 (2020) 101204. doi:10.1016/j.jocs.2020.101204. [7] W. Oberkampf, Simulation informed decision making [Conference Presentation], in: Virtual Conference on Epistemic Uncertainty in Engineering, 2021, https://www.youtube.com/watch?v=i4L3fUpr59s. 18 [8] E. Paté-Cornell, On "Black Swans" and "Perfect Storms": Risk Analysis and Management When Statistics Are Not Enough, Risk Analysis 32 (11) (2012). doi:10.1111/j.1539-6924.2011.01787.x. [9] R. P. Feynman, Appendix F - Personal Observations on Reliability of Shuttle, in: Report of the Presidential Commission on the Space Shuttle Challenger Accident, Vol. 2, US Government Printing Office, Washington, DC, USA, 1986, https://history.nasa.gov/rogersrep/v2appf.htm. [10] Y. Amano, The Fukushima Daiichi Accident Report by the Director General, Tech. rep., International Atomic Energy Agency, Vienna, Austria (2015). [11] M. S. Balch, R. Martin, S. Ferson, Satellite conjunction analysis and the false confidence theorem, Pro- ceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences 475 (20180565) (2019). doi:10.1098/rspa.2018.0565. [12] M. Beer, S. Ferson, V. Kreinovich, Imprecise probabilities in engineering analyses, Mechanical Systems and Signal Processing 37 (1-2) (2013) 4–29. doi:10.1016/j.ymssp.2013.01.024. [13] D. A. Perez, H. Gietler, H. Zangl, Automatic Uncertainty Propagation Based on the Unscented Transform, in: 2020 IEEE International Instrumentation and Measurement Technology Conference (I2MTC), IEEE, Dubrovnik, Croatia, 2020, pp. 1–6. doi:10.1109/I2MTC43012.2020.9129581. [14] E. Ilyushin, D. Namiot, On source-to-source compilers, International Journal of Open Information Technologies 4 (5.) (2016) 48–51, http://www.scirp.org/journal/doi.aspx?DOI=10.4236/jsea.2013.64A005. [15] T. Parr, The Definitive ANTLR 4 Reference, The Pragmatic Bookshelf, Dallas, USA, 2012. [16] Grammars written for ANTLR v4, https://github.com/antlr/grammars-v4. [17] R. E. Moore, R. B. Kearfott, M. J. Cloud, Introduction to Interval Analysis, Vol. 110, Society for Industrial and Applied Mathematics, Philadelphia, USA, 2009. [18] A. H.-S. Ang, W. S. Tang, Probability Concepts in Engineering : Emphasis on Applications in Civil & Environ- mental Engineering, 2nd Edition, John Wiley & Sons, Ltd, Hoboken, N.J., USA, 2007. [19] S. Ferson, V. Kreinovich, L. Ginzburg, D. S. Myers, K. Sentz, Constructing Probability Boxes and Dempster- Shafer Structures, Tech. Rep. January, Sandia National Laboratories, Albuquerque, NM, United States (2003). [20] M. S. Balch, Mathematical foundations for a theory of confidence structures, International Journal of Approximate Reasoning 53 (7) (2012) 1003–1019. doi:10.1016/j.ijar.2012.05.006. [21] S. Ferson, J. O’Rawe, A. Antonenko, J. Siegrist, J. Mickley, C. C. Luhmann, K. Sentz, A. M. Finkel, Natural language of uncertainty: Numeric hedge words, International Journal of Approximate Reasoning 57 (2015) 19–39. doi:10.1016/J.IJAR.2014.11.003. [22] P.-A. Jean, S. Harispe, S. Ranwez, P. Bellot, J. 
Montmain, Uncertainty detection in natural language: A proba- bilistic model, in: Proceedings of the 6th International Conference on Web Intelligence, Mining and Semantics, ACM, Nîmes France, 2016, pp. 1–10. doi:10.1145/2912845.2912873. [23] S. Lefort, M.-J. Lesot, E. Zibetti, C. Tijus, M. Detyniecki, Interpretation of approximate numerical expressions: Computational model and empirical study, International Journal of Approximate Reasoning 82 (2017) 193–209. doi:10.1016/j.ijar.2016.12.004. 19 [24] W. F. Mascarenhas, Moore: Interval Arithmetic in C++20, in: G. A. Barreto, R. Coelho (Eds.), 37th Conference of the North American Fuzzy Information Processing Society, Vol. 831, Springer International Publishing, Fortaleza, Brazil, 2018, pp. 519–529. doi:10.1007/978-3-319-95312-0_45. [25] Probability Bounds Analysis for Python 3, https://pypi.org/project/pba/ or https://github.com/Institute-for-Risk-and-Uncertainty/pba-for-python/. [26] Probability Bounds Analysis for MATLAB, https://github.com/Institute-for-Risk-and-Uncertainty/pba-for-matlab. [27] Probability Bounds Analysis for R, https://github.com/ScottFerson/pba.r. [28] HYRISK, https://cran.r-project.org/web/packages/HYRISK/index.html. [29] Probability Bounds Analysis for Julia, https://github.com/AnderGray/ProbabilityBoundsAnalysis.jl. [30] M. Haenggi, Meta Distributions–Part 1: Definition and Examples, IEEE Communications Letters (2021) 1– 1doi:10.1109/LCOMM.2021.3069662. [31] M. Haenggi, Meta Distributions–Part 2: Properties and Interpretations, IEEE Communications Letters (2021) 1–1doi:10.1109/LCOMM.2021.3069681. [32] J. G. Dijkman, H. V. A. N. Haeringen, S. J. D. E. Lange, Fuzzy Numbers, Journal of Mathematical Analysis and Applicatins 92 (1983) 301–341. [33] D. Dubois, H. Prade, Interval-valued Fuzzy Sets , Possibility Theory and Imprecise Probability, in: Proceedings of the Joint 4th Conference of the European Society for Fuzzy Logic and Technology and the 11th Rencontres Francophones Sur La Logique Floue et Ses Applications, Barcelona, Spain, 2005. [34] M. S. Balch, New two-sided confidence intervals for binomial inference derived using Walley’s imprecise posterior likelihood as a test statistic, International Journal of Approximate Reasoning 123 (2020) 77–98. doi:10.1016/j.ijar.2020.05.005. [35] Y. Ben-Haim, Info-Gap Decision Theory: Decisions Under Severe Uncertainty, 2nd Edition, Academic Press, Oxford, UK, 2006. [36] S. Ferson, R. B. Nelsen, J. Hajagos, D. J. Berleant, J. Zhang, W. T. Tucker, L. R. Ginzburg, W. L. Oberkampf, Dependence in probabilistic modeling, Dempster-Shafer theory, and probability bounds analysis, Tech. Rep. 19094, Sandia National Laboratories, Albuquerque, NM, USA (2004). [37] S. Ferson, RAMAS Risk Calc 4.0 Software: Risk Assessment with Uncertain Numbers, Lewis Publishers, Boca Raton, Florida, USA, 2002, https://books.google.co.uk/books?id=tKz7UZRs0CEC. [38] S. Ferson, M. Balch, K. Sentz, J. Siegrist, Computing with Confidence, in: Proceedings of the Eighth International Symposium on Imprecise Probability: Theory and Applications, Compiègne, France, 2013, http://www.sipta.org/isipta13/proceedings/papers/s013.pdf. [39] A. Wimbush, N. Gray, S. Ferson, Singhing with Confidence: Visualising the Performance of Confidence Struc- tures, arXiv:2106.04433 [stat]http://arxiv.org/abs/2106.04433 (Jun. 2021). arXiv:2106.04433. [40] L. D. Brown, T. T. Cai, A. Dasgupta, Interval Estimation for a Binomial Proportion, Statistical Science 16 (2) (2001) 101–117. [41] C. J. Clopper, E. S. 
Pearson, The Use of Confidence or Fiducial Limits Illustrated in the Case of the Binomial, Biometrika 26 (4) (1934) 404–413. 20 [42] D. Hose, M. Hanss, A universal approach to imprecise probabilities in possibility theory, International Journal of Approximate Reasoning 133 (2021) 133–158. doi:10.1016/j.ijar.2021.03.010. [43] V. Kreinovich, Decision Making Under Interval Uncertainty (and Beyond), in: P. Guo, W. Pedrycz (Eds.), Human- Centric Decision-Making Models for Social Sciences, Springer Berlin Heidelberg, Berlin, Heidelberg, 2014, pp. 163–193. doi:10.1007/978-3-642-39307-5_8. [44] L. A. Zadeh, Fuzzy Logic, Computer 21 (4) (1988) 83–93. doi:10.1109/2.53. [45] L. H. De Figueiredo, J. Stolfi, Affine arithmetic: Concepts and applications, Numerical Algorithms 37 (2004) 147–158. [46] E. Goubault, S. Putot, A zonotopic framework for functional abstractions, Formal Methods in System Design 47 (3) (2016) 302–360. doi:10.1007/s10703-015-0238-z. [47] A. Gray, M. De Angelis, S. Ferson, E. Patelli, What’s Z-X, when Z = X+Y? dependency tracking in interval arithmetic with bivariate sets, in: 9th International Workshop on Reliable Engineering Computations, Virtual Conference, 2021, pp. 27–28. [48] W. Krämer, Generalized Intervals and the Dependency Problem, Proceedings in Applied Mathematics and Mechanics 684 (2006) 683–684. doi:10.1002/pamm.200610. [49] P. Embrechts, F. Lindskog, A. Mcneil, Modelling Dependence with Copulas and Applications to Risk Management, doi:10.1016/B978-044450896-6.50010-8. in: Handbook of Heavy Tailed Distributions in Finance, Elsevier, 2003, pp. 329–384. [50] R. B. Nelsen, An Introduction to Copulas, 2nd Edition, Springer Series in Statistics, Springer, New York, New York, USA, 2006. [51] H. Joe, Dependence Modeling with Copulas, Chapman & Hall/CRC, Boca Raton, Florida, USA, 2014. [52] A. Gray, D. Hose, M. De Angelis, M. Hanss, S. Ferson, Dependent Possibilistic Arithmetic using Copulas, in: Proceedings of the Twelth International Symposium on Imprecise Probabilities: Theories and Applications, Vol. 147, Proceedings of Machine Learning Research, Granada, Spain (Virtual), 2021, pp. 173–183. [53] N. S. Nedialkov, K. R. Jackson, G. F. Corliss, Validated solutions of initial value problems for ordinary differential equations, Appl. Math. Comput. (1999) 48. [54] N. S. Nedialkov, K. R. Jackson, J. D. Pryce, An Effective High-Order Interval Method for Validating Existence and Uniqueness of the Solution of an IVP for an ODE, Reliable Computing 7 (6) (2001) 17. [55] N. Nedialkov, Interval Tools for ODEs and DAEs, in: 12th GAMM - IMACS International Symposium on Scientific Computing, Computer Arithmetic and Validated Numerics (SCAN 2006), IEEE, Duisburg, Germany, 2006, pp. 4–4. doi:10.1109/SCAN.2006.28. [56] Y. Lin, M. A. Stadtherr, Validated solutions of initial value problems for parametric ODEs, Applied Numerical Mathematics 57 (2007) 1145–1162. doi:10.1016/j.apnum.2006.10.006. [57] J. A. Enszer, D. M. Andrei, M. A. Stadtherr, Probability bounds analysis for nonlinear population ecology models, Mathematical Biosciences 267 (2015) 97–108. doi:10.1016/j.mbs.2015.06.012. [58] D. A. Plaisted, Source-to-Source Translation and Software Engineering, Journal of Software Engineering and Applications 06 (04) (2013) 30–40. doi:10.4236/jsea.2013.64A005. 21 [59] E. Adams, U. Kulisch, Scientific Computing with Automatic Result Verification, Academic Press, Boston, MA, USA, 1993. [60] Puffin, https://github.com/ngg1995/Puffin. 22
synthetic_cpt
1
Shift-Collapse_Acceleration_of_Generalized_Polarizable_Reactive_Molecular_Dynamics_for_Machine_Learning-Assisted_Computational_Synthesis_of_Layered_Materials.pdf
8 1 0 2 g u A 3 1 ] T G . h t a m [ 1 v 2 7 1 4 0 . 8 0 8 1 : v i X r a ARC SHIFT NUMBER AND REGION ARC SHIFT NUMBER FOR VIRTUAL KNOTS K. KAUR, A. GILL, AND M. PRABHAKAR Abstract. In this paper, we formulate a new local move on virtual knot diagram, called arc shift move. Further, we extend it to another local move called region arc shift defined on a region of a virtual knot diagram. We establish that these arc shift and region arc shift moves are unknotting operations by showing that any virtual knot diagram can be turned into trivial knot using arc shift (region arc shift) moves. Based upon the arc shift move and region arc shift move, we define two virtual knot invariants, arc shift number and region arc shift number respectively. 1. Introduction Virtual knot theory introduced by L.H. Kauffman [4] extends classical knot theory to more general study of knots in the thickened surfaces of higher genus. Classical knot theory as a subclass deals with knots in thick- ened sphere. Unlike in classical knots, crossing change fails to be an un- knotting operation for virtual knots. In this paper, we propose a new local move on a virtual knot diagram, called arc shift, which happens to be an unknotting operation for virtual knots assisted by generalized Reidemeister moves. Gaining motivation from the region crossing change defined in [8], we extend the notion of arc shift to region arc shift defined on a region in virtual knot diagram which turns out to be an unknotting operation too. Minimum number of arc shift(respectively, region arc shift) moves needed to unknot a virtual knot K is defined as arc shift number (respectively, region It is well arc shift number ) of K denoted by A(K)(respectively, R(K)). known that every virtual knot diagram can be made trivial via a sequence of forbidden moves and generalized Reidemeister moves as shown in [3, 7]. The forbidden number F (K) in [1] is defined as the minimum number of forbidden moves needed to deform K into a trivial knot. We prove that for a virtual knot K, R(K) ≤ F (K) and provide an explicit example where the inequality R(K) ≤ F (K) is indeed a strict inequality. This paper is organized as follows. In Section 2, we briefly recall the defini- tion of virtual knots including Gauss diagrams and forbidden moves. Section 3 defines the arc shift move, discussing its Gauss diagram version and prop- erties of the arc shift move. In Section 4, we prove arc shift as an unknotting 2010 Mathematics Subject Classifications. 57M25, 57M27. Key words and phrases. virtual knot; Gauss diagram; forbidden moves. 1 2 K. KAUR, A. GILL, AND M. PRABHAKAR operation and discuss bound on the arc shift number. Lastly in Section 5, we extend arc shift to region arc shift and compare it with forbidden moves. 2. Prelimenaries L.H. Kauffman [4] introduced virtual knot theory with a motivation to study knots embedded in thickened surfaces of arbitrary genus and also to provide knot theory a completely combinatorial territory dealing with Gauss diagrams and Gauss codes. A virtual knot diagram is a 4-regular (each node having degree four) planar graph with extra structure on its nodes. The extra structure includes two types of crossings at nodes with one being the classical crossing adhering over (under) information and other one called the virtual crossing. A virtual crossing is indicated by a small circle around the node with no sense of over and under information (See Fig. 1). Figure 1. 
Classical and Virtual crossings Two virtual knot diagrams are declared equivalent if they are related by a sequence of generalized Reidemeister moves depicted in Fig. 2. Virtual knots are defined as equivalence classes of virtual knot diagrams modulo generalized Reidemeister moves. Figure 2. Generalized Reidemeister moves As a consequence of virtual Reidemeister moves a segment in the virtual knot diagram consisting of virtual crossings only can be freely moved in the plane. While moving such segment of the knot, we keep the end points fixed such that all the new places it crosses the diagram transversally are marked as virtual crossings. Moving such a segment is termed as Detour move (Fig. 3). ARC SHIFT NUMBER AND REGION ARC SHIFT NUMBER FOR VIRTUAL KNOTS 3 Figure 3. Detour Move Sign of a classical crossing c also known as local writhe of c is defined as shown in the Fig. 4. Figure 4. Local writhe or sign of a crossing In [2] an approach to virtual knot theory goes via Gauss diagrams. Definition 2.1. Gauss diagram G(D) corresponding to a classical(virtual) knot diagram D is an oriented circle with a base point where each classical crossing is marked two times with respect to overpass and underpass. Two markings are then joined by an arrow (chord) oriented from overpass to underpass with a sign attached to each arrow equal to the local writhe of the corresponding crossing (Fig. 5). Figure 5. Gauss diagram corresponding to virtual figure eight knot Analogous version of Reidemeister moves on Gauss diagrams are shown in Fig. 6. Virtual moves do not affect Gauss diagrams since virtual crossings are not accounted in Gauss diagrams. An alternative way to define virtual knots is by considering equivalence classes of Gauss diagrams modulo the moves shown in Fig. 6. 4 K. KAUR, A. GILL, AND M. PRABHAKAR Figure 6. Reidemeister moves on Gauss diagram Two moves listed in Fig. 7(a) are known as forbidden moves. Gauss diagram for forbidden moves are shown in Fig. 7(b). Terminology inspires from the fact that if allowed, these moves will leave whole of virtual knot theory trivial, i.e., any virtual knot can be turned trivial using forbidden moves. This fact is proved in [3, 7, 2]. Figure 7. Forbidden moves Fh and Ft 3. Arc shift move Definition 3.1. In a virtual knot diagram D, we define an arc, say (a, b) as the segment passing through exactly one pair of crossings (classical/virtual) (c1,c2) with a incident to c1 and b incident to c2. In Fig. 8, arc (a, b) passes through crossings (c1,c2) and arc (e, f ) passes through crossings (c2,c3). Figure 8. Arc (a, b) and (e, f ) Definition 3.2. Two arcs (a, b) and (c, d) in a virtual knot diagram D are said to be equivalent if they pass through same pair of crossings (c1,c2) and ARC SHIFT NUMBER AND REGION ARC SHIFT NUMBER FOR VIRTUAL KNOTS 5 the segment common in both (a, b) and (c, d) is again an arc passing through the same pair of crossings (c1,c2). For example arcs (a, b) and (c, d) depicted in Fig. 9 are equivalent arcs. Figure 9. Equivalent Arcs (a, b) and (c, d) Arc shift move: In a virtual knot diagram D, let (a, b) be an arc passing through the pair of crossings (c1,c2). Without loss of generality assume that c1 is classical crossing while c2 being virtual. By arc shift move on the arc (a, b), we mean cutting the arc at two points near a and b and identifying the loose ends on one side with loose ends on the other side in the way as shown in Fig. 10. 
While applying the arc shift move some new crossings may arise in the diagram, we label them as virtual crossings. Figure 10. Arc shift on arc (a, b) Remark 1. There can be many possible ways to join the loose ends in the diagram while applying arc shift move. Therefore, there are number of diagrams corresponding to arc shift move on the arc (a, b) in D, two such diagrams are shown in Fig. 11. However, in all such diagrams the strands joining the loose ends contains only virtual crossings. Figure 11. Equivalent diagrams corresponding to arc shift on arc (a, b) 6 K. KAUR, A. GILL, AND M. PRABHAKAR Therefore, as a result of detour move (Fig. 3) any two such diagrams are equivalent by virtual Reidemeister moves. Considering equivalence of all these diagrams, we denote the diagram obtained from D as a result of arc shift on the arc (a, b) by D(a,b). Remark 2. As an effect of arc shift move on the arc (a, b) in an oriented virtual knot diagram D, orientation in the encircled region gets reversed as shown in Fig. 12. Figure 12 Depending on the crossing type (classical/virtual) and crossing informa- tion (over/under), an arc can have different possible local configurations. However, five of these local configurations are enough to summarize effects of all possible arc shift moves in Gauss diagram. Corresponding to these five cases, we denote the respective arc shift moves by ¯Ah, ¯At, ¯Aht, ¯Ath and ¯As(see Fig. 13). Figure 13. arc shift move on arc (a, b) ARC SHIFT NUMBER AND REGION ARC SHIFT NUMBER FOR VIRTUAL KNOTS 7 Moves corresponding to ¯Ah, ¯At, ¯Aht, ¯Ath and ¯As in Gauss diagram are shown in Fig. 14. Figure 14. Gauss diagram analogues to the arc shift moves shown in Fig. 13 Adjacent ends of two arrows in a Gauss diagram G(D) may not always correspond to an arc in D. There might be number of consecutive virtual crossings in D between the crossings c1, c2 corresponding to adjacent pair of ends of the arrows. However, D can be altered to an equivalent diagram D(cid:48) having an arc containing (c1,c2) as explained in the following proposition. Proposition 1. Let D be a virtual knot diagram and (c1,c2) be a pair of classical crossings in D such that the segment between c1 and c2 contains only virtual crossings. Then, there exists an virtual knot diagram D(cid:48) equivalent to D where crossings (c1,c2) are contained in an arc. Proof. Consider the diagram D and apply Detour move on each of the verti- cal segments containing virtual crossings as shown in Fig. 15. After applying Detour move finite number of times in D, the segment between c1 and c2 becomes free of virtual crossings. Diagram D(cid:48) obtained as a result is equiv- alent to D and arc (a, b) in D(cid:48) contains crossings (c1,c2) as required in the (cid:3) proposition. 8 K. KAUR, A. GILL, AND M. PRABHAKAR Figure 15 As a consequence of proposition 1, we have following theorem, which tells that to check whether two virtual knot diagrams are related by arc shift moves and virtual Reidemeister moves, it is enough to check the equivalence of corresponding Gauss diagrams by moves given in Fig. 14. Theorem 3.1. Let G(D1) and G(D2) be Gauss diagrams related by a fi- nite sequence of diagrammatic moves ¯Ah, ¯At, ¯Aht and ¯Ath given in Fig. 14. Then, corresponding virtual knot diagrams D1 and D2 can be obtained from each other by respective arc shift moves and virtual Reidemeister moves. Proof. Choose first move in the sequence relating G(D1) and G(D2). 
Con- sider the classical crossings c1 and c2 in D1 corresponding to pair of adjacent arrows affected by first move in Gauss diagram G(D1). Proposition 1 guar- antees existence of a diagram D(cid:48) equivalent to D1 by virtual Reidemeister moves such that crossings (c1,c2) are contained in an arc (a, b) in D(cid:48). Both D1 and D(cid:48) being equivalent by virtual Reidemeister moves have identical Gauss diagrams, therefore, arc shift move on the arc (a, b) in D(cid:48) results in first move chosen from the sequence. Virtual knot diagram so obtained from D(cid:48) is related to D1 by one arc shift move and virtual Reidemeister moves. Similarly the process continues and for the last move in the sequence we get the virtual knot diagram D2 corresponding to G(D2) and related to D1 via (cid:3) arc shift moves and virtual Reidemeister moves. Virtual knot diagram D(a,b) contains an arc containing same pair of crossings (c1,c2) as contained by arc (a, b) in D. Applying arc shift move again on the corresponding arc in D(a,b) results in a diagram equivalent to original diagram D as we discuss in the following proposition. Proposition 2. Let D be a virtual knot diagram and D(cid:48) is obtained from D by applying arc shift move twice on an arc (a, b). For the resulting diagram we have D(cid:48) ∼ D. Proof. Let (a, b) be an arc in D passing through the pair of crossings (c1,c2). For convenience assume that both the crossings (c1, c2) are classical having crossing information as shown in Fig. 16(1). We obtain Fig. 16(2) by ap- plying arc shift on the arc (a, b) in Fig. 16(1). Again, applying arc shift in Fig. 16(2) results in the diagram D(cid:48) shown in Fig. 16(3) where if we ap- ply V R2 move in each of the two encircled regions we get Fig. 16(4). In ARC SHIFT NUMBER AND REGION ARC SHIFT NUMBER FOR VIRTUAL KNOTS 9 Fig. 16(4), if we apply three V R2 moves in the encircled region, we obtain Fig. 16(5) which is identical to diagram D we started with, i.e , Fig. 16(1). Thus, D(cid:48) is equivalent to D as required. Similarly all the other cases involving (c1, c2) having different crossing type (cid:3) (classical/virtual) and crossing information (over/under) follows. Figure 16 By applying single arc shift move in equivalent diagram of an oriented virtual knot diagram we can realize switch in the sign of any crossing c(without changing the crossing information). While doing this, all the other crossings remain unaffected as discussed in the following proposition. Proposition 3. Let D be a virtual knot diagram and c be any crossing in D. Then, there exists a diagram D(cid:48) obtained from D by applying an arc shift move such that the crossing c(cid:48) in D(cid:48) corresponding to c is of opposite sign, i.e., sign(c(cid:48))= - sign(c). Proof. Consider any crossing c in D as shown in Fig. 17(1). Now, first apply V R1 move in Fig. 17(1) that results in Fig. 17(2) which has an arc (b, d) containing crossing c and a virtual crossing. Apply an arc shift move on the arc (b, d) in Fig. 17(2) to obtain Fig. 17(3) where the segment from b to e contains only virtual crossings. Using Detour move in Fig. 17(3) we get diagram D(cid:48) in Fig. 17(4) where sign(c(cid:48))= - sign(c) as required. Sign of other crossings remains unchanged as all the other crossings in D and D(cid:48) (cid:3) have same local orientation. 10 K. KAUR, A. GILL, AND M. PRABHAKAR Figure 17. sign(c(cid:48))= - sign(c) Proposition 4. 
Let D and D(cid:48) be two virtual knot diagrams that differ by a R3 move, then D(cid:48) can be obtained from D by applying three arc shift moves. Proof. Consider the Gauss diagrams of D and D(cid:48) related by a R3 move(see Fig. 18). Arc shift moves ¯Ath, ¯Ah and ¯At applied in succession realize same changes in the Gauss diagram as done by a R3 move as shown in the Fig. 18. By theorem 3.1, virtual knot diagrams D and D(cid:48) corresponding to the Gauss diagrams G(D) and G(D(cid:48)) are related by arc shift moves and virtual Reidemeister moves. Therefore, three arc shift moves together with some virtual Reidemeister moves are enough to realize a R3 move in D. (cid:3) Figure 18. R3 move realized via arc shift moves ARC SHIFT NUMBER AND REGION ARC SHIFT NUMBER FOR VIRTUAL KNOTS11 H. Murakami and Y. Nakanishi [6] defined the ∆-move as shown in Fig. 19 and established that a classical knot can be unknotted using the ∆-move. Figure 19 In the folllowing lemma we show that, a ∆-move can be realized by arc shift moves and virtual Reidemeister moves. Lemma 1. Given a virtual knot diagram D, let D(cid:48) is obtained from D by applying a ∆-move. Then, there exists arc shift moves which applied in D gives an equivalent diagram of D(cid:48). Proof. Consider the Gauss diagram G(D) corresponding to virtual knot dia- gram D as shown in Fig. 20 and apply ∆-move to get G(D(cid:48)). Now, applying three arc shift moves ¯Aht, ¯Ath and ¯Aht in sequence realizes same change in G(D) as by a ∆-move. By theorem 3.1, Gauss diagrams G(D) and G(D(cid:48)) correspond to virtual knot diagrams related by arc shift moves and virtual (cid:3) Reidemeister moves and hence the result follows. Figure 20. ∆-move realized using arc shift moves As ∆-move is an unknotting operation for classical knots, Lemma 1 guar- antees that any classical knot diagram can be transformed into trivial knot diagram using arc shift moves and virtual Reidemeister moves. 12 K. KAUR, A. GILL, AND M. PRABHAKAR 4. Arc shift as an unknotting operation for virtual knots Lemma 1 ensures that classical knots considered as a subclass of vir- tual knots can be unknotted using arc shift moves and virtual Reidemeister moves. However, the result generalizes to every virtual knot as we prove in this section. We use Gauss diagrams to prove the result using the fact that a Gauss diagram defines virtual knot uniquely upto equivalence by moves in the Fig. 6. A Gauss diagram in which no two arrows intersect is called parallel chord diagram and corresponds to a trivial knot. Theorem 4.1. Every virtual knot diagram D can be transformed into trivial knot diagram using arc shift moves and generalized Reidemeister moves. Proof. It is enough to prove that using arc shift moves and generalized Rei- demeister moves in D, Gauss diagram G(D) corresponding to D can be turned into a parallel chord diagram. With anticlockwise orientation on G(D) choose a random arrow and consider the next arrow adjacent to the head of chosen arrow along orientation. Crossings in D corresponding to the two arrows may have only virtual crossings between them. By proposition 1, there exists an equivalent diagram D(cid:48) of D where both the crossings are contained in an arc (a, b). Arc shift on the arc (a, b) in D(cid:48) moves across head of the chosen arrow with adjacent arrow in G(D) and also switches signs of both arrows. 
Continue the process for all the arrows encountered with the head of the chosen arrow along the orientation till we reach a Gauss diagram having no arrow between the head and tail of the random arrow we started with. In the process, we used virtual Reidemeister moves and arc shift moves in D to realize the change in the Gauss diagram G(D) that makes a randomly chosen arrow free of intersections with other arrows. Repeating the process for all the arrows one by one gives us a Gauss diagram where no two arrows intersect each other, i.e., a parallel chord diagram as required. Signs of some of the arrows might change in the whole process, but this has no effect on the final result, as any parallel chord diagram, irrespective of the signs of the chords, corresponds to the trivial knot. Fig. 21 shows an example of turning a Gauss diagram into a parallel chord diagram. □
Figure 21. Turning a Gauss diagram into parallel chord diagram
Next, we give an example of Theorem 4.1.
Example 1. Consider the virtual knot diagram D of the left handed virtual trefoil knot (Fig. 22). D is transformed into the trivial knot using a single arc shift move on the arc (a, b) followed by generalized Reidemeister moves.
Figure 22. Unknotting virtual trefoil using arc shift move
Proposition 5. Let D and D′ be two virtual knot diagrams. Let n and m be the minimum number of arc shift moves needed to transform D and D′ respectively into the trivial knot. If D ∼ D′ then n = m.
Proof. Since D and D′ are equivalent, there exists a sequence of generalized Reidemeister moves relating D with D′. Suppose D can be turned into the trivial knot using n arc shift moves and some generalized Reidemeister moves. Using the sequence of generalized Reidemeister moves relating D with D′, we first transform D′ into D and then use n arc shift moves to turn D into the trivial knot. Similarly, if m arc shift moves are needed to turn D′ into the trivial knot, then D can also be made trivial using m arc shift moves. Taking the minimum over all such m and n gives the desired result. □
This motivates us to define the arc shift number for a virtual knot.
Definition 4.1. For any virtual knot K, the arc shift number of K, A(K), is the minimum number of arc shift moves needed to turn a diagram of K into the trivial knot.
Since any diagram of the trivial knot can be converted into the unknot using generalized Reidemeister moves, no arc shift move is needed and hence A(K) is zero for the trivial knot. However, a diagram of a nontrivial knot necessarily needs arc shift moves to be converted into the trivial knot. Therefore A(K) is strictly positive for a nontrivial classical or virtual knot.
L.H. Kauffman [5] defines the parity of a crossing c of a virtual knot diagram K. The parity of c is odd if an odd number of classical crossings are encountered while moving along the diagram on any path that starts and ends at c; otherwise the crossing is called even. Likewise, a crossing c is odd (even) if and only if the chord corresponding to c in the Gauss diagram intersects an odd (even) number of chords. Chords corresponding to odd (even) crossings are referred to as odd (even) chords respectively. Denote by Odd(K) the set containing all the odd crossings in K; then the sum of the signs of all the crossings in Odd(K) is called the odd writhe of K, denoted by J(K), i.e., J(K) = ∑_{c ∈ Odd(K)} sign(c). Equivalently, J(K) is the sum of the signs of the odd chords in the Gauss diagram corresponding to K. J(K) is a virtual knot invariant and is zero for classical knots. As a consequence, whenever J(K) is nonzero, K is necessarily nonclassical.
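To make the parity and odd writhe computations concrete, the short sketch below evaluates J(K) directly from a signed Gauss sequence. The encoding (each crossing label listed twice, in the order its endpoints occur around the Gauss-diagram circle) and the sign assignment used in the example are illustrative choices of ours; over/under information is not needed for this particular computation once the signs are given.

```python
def odd_writhe(gauss_sequence, signs):
    """Compute the odd writhe J(K) of a virtual knot from its Gauss diagram.

    gauss_sequence: crossing labels in the order their endpoints occur around
        the circle; every label appears exactly twice, e.g. [1, 2, 1, 2].
    signs: dict mapping each crossing label to its sign, +1 or -1.
    """
    positions = {}
    for index, label in enumerate(gauss_sequence):
        positions.setdefault(label, []).append(index)

    def interlinked(c, d):
        # Two chords intersect iff exactly one endpoint of d lies strictly
        # between the two endpoints of c along the circle.
        a, b = positions[c]
        return sum(1 for x in positions[d] if a < x < b) == 1

    total = 0
    for c in positions:
        parity = sum(interlinked(c, d) for d in positions if d != c) % 2
        if parity == 1:            # odd crossing: it contributes its sign
            total += signs[c]
    return total


# Two mutually interlinked negative chords, as for the left handed virtual
# trefoil of Example 2 below: both crossings are odd, so J(K) = -2 and the
# lower bound of Theorem 4.2 gives A(K) >= |J(K)|/2 = 1.
print(odd_writhe([1, 2, 1, 2], {1: -1, 2: -1}))   # -2
```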
We give a lower bound on A(K) in terms of the odd writhe J(K) by analyzing the change occurring in J(K) for two virtual knot diagrams that differ by a single arc shift move.
Proposition 6. If D and D′ are two virtual knot diagrams that differ by an arc shift move, then either J(D′) = J(D) or J(D′) = J(D) ± 2.
Proof. Let the arc shift move be one of Āh, Āt, Āht or Āth. Consider the Gauss diagrams corresponding to D and D′. We note that the arc shift moves Āh, Āt, Āht and Āth move adjacent ends of two chords past each other, switching both of their signs. As a result, the parity of the two chords involved gets flipped (odd/even to even/odd) and the remaining chords maintain the same parity. With a slight abuse of notation we denote both the crossings and the chords corresponding to them by c1, c2 and assume that sign(c1) = ε1, sign(c2) = ε2. Let c′1, c′2 denote the corresponding chords after applying the arc shift move; thus sign(c′1) = −ε1, sign(c′2) = −ε2, and c′1, c′2 have parity opposite to c1, c2 respectively. We discuss all four cases based on the parity of c1, c2 and note the corresponding change in the odd writhe J(D).
Case 1: When both c1, c2 are even. c1, c2, being both even, do not contribute to J(D), while c′1, c′2, both being odd, contribute to J(D′). We have
(1) J(D′) = J(D) + sign(c′1) + sign(c′2) = J(D) + (−ε1) + (−ε2) = J(D) − (ε1 + ε2).
Case 2: When both c1, c2 are odd. c1, c2, being both odd, contribute to J(D), while c′1, c′2, both being even, do not contribute to J(D′). We have
(2) J(D′) = J(D) − sign(c1) − sign(c2) = J(D) − ε1 − ε2 = J(D) − (ε1 + ε2).
Case 3: When c1 is even and c2 is odd. Only c2 contributes to J(D), while among c′1, c′2 only c′1, being odd, contributes to J(D′). We have
(3) J(D′) = J(D) − sign(c2) + sign(c′1) = J(D) − ε2 + (−ε1) = J(D) − (ε1 + ε2).
Case 4: When c1 is odd and c2 is even. This case is similar to Case 3.
Since the only possible values for ε1 + ε2 are 0, −2, +2, we have either J(D′) = J(D) or J(D′) = J(D) ± 2. The only remaining arc shift move, Ās, switches the sign of the corresponding single chord, thus keeping the parity of all the chords unaltered. As a result, if the affected chord is even then the odd writhe remains the same, i.e., J(D′) = J(D), and if the affected chord is odd then we have J(D′) = J(D) ± 2. □
In the following theorem, using Proposition 6, we provide a lower bound on the arc shift number A(K).
Theorem 4.2. For a virtual knot K, the arc shift number A(K) ≥ |J(K)|/2.
Proof. Let A(K) = n and let K = K0 → K1 → K2 → K3 → ··· → Kt be a sequence realizing A(K). Kt is an unknot diagram and each Ki is obtained from Ki−1 by either an arc shift move or a generalized Reidemeister move. Exactly n terms in the sequence correspond to arc shift moves, realizing A(K). We have
(4) |J(Kt) − J(K0)| = |J(Kt) − J(Kt−1) + J(Kt−1) − J(Kt−2) + ··· + J(K1) − J(K0)| ≤ |J(Kt) − J(Kt−1)| + |J(Kt−1) − J(Kt−2)| + ··· + |J(K1) − J(K0)|.
Note that J(Kt) = 0 and J(K0) = J(K). In the inequality (4), exactly n of the summed terms correspond to arc shift moves and all the rest correspond to generalized Reidemeister moves.
Using the invariance of the odd writhe and Proposition 6, we have
(5) |J(K)| ≤ 2n.
Thus n ≥ |J(K)|/2 and hence the result follows. □
We use Theorem 4.2 to determine the arc shift number of the virtual left hand trefoil knot, shown in Fig. 23.
Example 2. The virtual left hand trefoil knot K has A(K) = 1. From the diagram K (Fig. 23) it is immediate that J(K) = −2, as both of the crossings are odd crossings. Using Theorem 4.2 we have A(K) ≥ 1, and as was shown in Example 1, K can be simplified into the trivial knot using one arc shift move, hence A(K) ≤ 1. We conclude that 1 ≤ A(K) ≤ 1, i.e., A(K) = 1.
Figure 23
5. Region arc shift
In this section, we define the region arc shift operation (RAS) at a region in a virtual knot diagram and establish it as an unknotting operation for virtual knots assisted by generalized Reidemeister moves. For a given virtual knot diagram D in R2, a region is a connected component of the complement of the four-valent graph DG in R2, where DG is obtained from D by replacing each classical and virtual crossing with a vertex. The region arc shift operation at a region R of a diagram D is a local transformation on D involving arc shift operations at each arc incident on the boundary ∂R of the region R. The diagram obtained from D as a result of the region arc shift at the region R is denoted by DR.
Figure 24. Region arc shift on region R1 and R2
In Fig. 24, the diagrams DR1 and DR2 are obtained from the diagram D by applying the region arc shift operation at the regions R1 and R2 respectively. Observe that all the arcs incident on the boundary of the regions R1 and R2 undergo arc shifts as required.
Proposition 7. Let D be a virtual knot diagram and R be a region of D. If we apply the region arc shift operation consecutively two times on R, then the resulting diagram (DR)R is equivalent to D.
Proof. The result follows from Proposition 2. □
Next we prove that the region arc shift operation is an unknotting operation for virtual knots along with generalized Reidemeister moves. In [3], T. Kanenobu proved that any virtual knot diagram can be deformed into a trivial knot by applying forbidden moves and Reidemeister moves finitely many times. S. Nelson [7] gave an alternative proof of the same using Gauss diagrams. Therefore, to prove that the region arc shift operation is an unknotting operation, it is indeed enough to show that both forbidden moves can be realized by region arc shift operations.
Proposition 8. Let D be a virtual knot diagram and D′ be the diagram obtained from D by applying the forbidden move Fu. Then, there exists a region R in D such that RAS at the region R results in a diagram DR equivalent to D′.
Proof. Consider the diagram D and a specific region R in D as shown in Fig. 25. The boundary of the region R contains three arcs α, β and γ. Applying the region arc shift at the region R results in arc shift moves on the arcs α, β and γ.
Figure 25. Realizing forbidden move Fh using region arc shift
It is easy to see from Fig. 25 that while the arc shift on the arc α corresponds to Āh, both β and γ correspond to Ās. We observe from Fig. 25 that the diagram D′ has the same Gauss diagram as the diagram obtained by applying RAS at the region R in D. Thus, D′ is equivalent to DR and the result follows. A similar result can be proved identically for the forbidden move Fo also. □
As a consequence of Proposition 8 we state the following corollary without proof.
Corollary 1.
Every virtual knot diagram D can be transformed into unknot using region arc shift operations and generalized Reidemeister moves. On a similar note as the arc shift number we define region arc shift number as follows. Definition 5.1. For a given virtual knot K, the region arc shift number R(K) is the minimum number of region arc shift operations required to deform K into trivial knot. It is easy to observe that region arc shift number is a virtual knot invariant. The region arc shift number for a virtual knot K is zero if and only if K is 18 K. KAUR, A. GILL, AND M. PRABHAKAR trivial knot. Virtual knots shown in Fig. 26 have region arc shift number one. Region arc shift at region R in both the diagrams gives a trivial knot diagram. Figure 26. Region arc shift number is 1 for both knots In the following theorem we compare two numbers R(K) and F (K) and provide a relation between them in form of an inequality. Theorem 5.1. If K is a virtual knot, then R(K) ≤ F (K). Proof. Suppose that forbidden number for the virtual knot K is n. If D is a diagram of K, then there exists a sequence involving generalized Reidemeis- ter moves and n number of forbidden moves which transforms D into trivial knot diagram. Using proposition 7, we can realize each forbidden move by applying a single region arc shift operation. Replacing n forbidden moves with n region arc shift operations in the above sequence, we can deform D (cid:3) into trivial knot. Thus R(K) ≤ n and hence the result follows. Remark 3. Strict inequality in R(K) ≤ F (K) may hold for some virtual knots. As an example, virtual knot shown in Fig. 27 has F (K) = 2, while R(K) = 1. Figure 27 One of the important consequence of forbidden moves is the forbidden detour move, denoted by FD, shown in Fig. 28 . It was proved in [3] that it needs both the forbidden moves Fu and Fo to realize FD including some generalized ARC SHIFT NUMBER AND REGION ARC SHIFT NUMBER FOR VIRTUAL KNOTS19 Reidemeister moves. FD move has the affect of moving across an adjacent head with tail in the Gauss diagram corresponding to a virtual knot diagram, see Fig. 28. Figure 28. Forbidden detour move We realize FD move in a single region arc shift operation at a specific region R in the diagram obtained from original diagram by V R2 move. As shown in the Fig. 29, region R in the diagram obtained by V R2 move in D contains arcs α, β and γ as its boundary. Figure 29. FD move via region arc shift at region R Therefore, using the similar argument as in the proof of proposition 8, diagrams obtained by FD move in D and RAS at region R are equivalent virtual knot diagrams and hence the result follows. References [1] A. Crans, B. Mellor, S. Ganzell, The forbidden number of a knot, Kyungpook Math. J. 55 (2015), no. 2, 485–506. [2] M. Goussarov, M. Polyak, O. Viro Finite-type invariants of classical and virtual knots, Topology 39 (2000), no. 05, 1045-1068. [3] T. Kanenobu, Forbidden moves unknot a virtual knot, Journal of Knot Theory and Its Ramifications 10 (2001), no. 01, 89–96. [4] L. H. Kauffman, Virtual knot theory, European J. Combin. , 20 (1999), no. 7, 663690. [5] L. H. Kauffman, A self-linking invariant of virtual knots, Fund. Math. 184 (2004), 135–158. [6] H. Murakami, On a certain move generating link-homology, Math. Ann. 284 (1989), no. 01, 75–89. [7] S. Nelson, Unknotting virtual knots with Gauss diagram forbidden moves, Journal of Knot Theory and Its Ramifications 10(2001), no. 6, 931-935. [8] A. Shimizu, Region crossing change is an unknotting operation, J. Math. Soc. 
Japan 66 (2014), no. 3, 693-708. 20 K. KAUR, A. GILL, AND M. PRABHAKAR Kirandeep Kaur, Department of Mathematics, Indian Institute of Technol- ogy Ropar, Nangal Road, Rupnagar, Punjab 140001, INDIA E-mail address: [email protected] Amrendra Gill, Department of Mathematics, Indian Institute of Technol- ogy Ropar, Nangal Road, Rupnagar, Punjab 140001, INDIA E-mail address: [email protected] Madeti Prabhakar, Department of Mathematics, Indian Institute of Tech- nology Ropar, Nangal Road, Rupnagar, Punjab 140001, INDIA E-mail address: [email protected]
synthetic_cpt
4
UltraFeedback_Boosting_Language_Models_with_High-quality_Feedback.pdf
ULTRAFEEDBACK: Boosting Language Models with Scaled AI Feedback Ganqu Cui * 1 Lifan Yuan * 1 2 Ning Ding 1 Guanming Yao 1 3 Bingxiang He 1 Wei Zhu 4 Yuan Ni 4 Guotong Xie 4 Ruobing Xie 5 Yankai Lin 6 Zhiyuan Liu 1 Maosong Sun 1 7 4 2 0 2 l u J 6 1 ] L C . s c [ 2 v 7 7 3 1 0 . 0 1 3 2 : v i X r a Abstract Learning from human feedback has become a pivot technique in aligning large language mod- els (LLMs) with human preferences. However, acquiring vast and premium human feedback is bottlenecked by time, labor, and human capabil- ity, resulting in small sizes or limited topics of current datasets. This further hinders feedback learning as well as alignment research within the open-source community. To address this issue, we explore how to go beyond human feedback and collect high-quality AI feedback automati- cally for a scalable alternative. Specifically, we identify scale and diversity as the key factors for feedback data to take effect. Accordingly, we first broaden instructions and responses in both amount and breadth to encompass a wider range of user-assistant interactions. Then, we meticu- lously apply a series of techniques to mitigate an- notation biases for more reliable AI feedback. We finally present ULTRAFEEDBACK, a large-scale, high-quality, and diversified AI feedback dataset, which contains over 1 million GPT-4 feedback for 250k user-assistant conversations from vari- ous aspects. Built upon ULTRAFEEDBACK, we align a LLaMA-based model by best-of-n sam- pling and reinforcement learning, demonstrating its exceptional performance on chat benchmarks. Our work validates the effectiveness of scaled AI feedback data in constructing strong open-source chat language models, serving as a solid founda- tion for future feedback learning research. of China *Equal contribution 4PingAn Technology 1NLP Group, DCST, IAI, BNRIST, Tsinghua University 2University of Illinois Urbana-Champaign 6Renmin 3ModelBest.Inc University Innovation Center for Language Ability. Correspondence to: Ganqu Cui <[email protected]>, Lifan Yuan <li- [email protected]>, Wei Zhu <[email protected]>, Zhiyuan Liu <[email protected]>, Maosong Sun <[email protected]>. 7Jiangsu Collaborative 5Tencent Proceedings of the 41 st International Conference on Machine Learning, Vienna, Austria. PMLR 235, 2024. Copyright 2024 by the author(s). 1 1. Introduction Large language models (LLMs) (OpenAI, 2022; 2023) have demonstrated proficiency in generating fluent text as well as solving various language-oriented tasks. Trained on massive corpora through likelihood maximization techniques, these LLMs have equipped the ability to execute diverse tasks in response to user directives (Ouyang et al., 2022; Wei et al., 2022a; Sanh et al., 2022). Unfortunately, relying solely on imitation learning during training leads to well-known issues - LLMs may generate convincing but incorrect or unsafe content that deviates from human preferences (Stiennon et al., 2020; Perez et al., 2022). To further align LLMs with human preferences, learning from human feedback (Ouyang et al., 2022; Askell et al., 2021; Bai et al., 2022a; Touvron et al., 2023b) has been introduced and widely adopted by leading corporations. Over a period, feedback learning has been widely applied to closed-source models but scarcely used in open-source models. Many factors hinder the implementation of feedback learn- ing in the open-source community, but the first and primary issue is data. 
Preference data, which rates and compares different responses given the same prompt, is central to feed- back learning. When scaled sufficiently, preference data re- flects the intrinsic values of the annotators. Such annotators are often assumed, by default, to be human beings who can provide the most flexible and accurate supervision signals, yet the data they generate is severely bounded by factors like financial resources, time, and knowledge. As a result, exist- ing preference datasets are either small in scale (Wu et al., 2023) or limited on specific tasks (Stiennon et al., 2020; Nakano et al., 2021). To this end, more efficient and prin- cipled methods to scale preference data are on the horizon. This study aims to scale feedback data in an efficient man- ner. Specifically, we explore AI feedback (Bai et al., 2022b; Lee et al., 2023), which substitutes human annotators with advanced LLMs. Compared with human feedback, AI feed- back is more scalable, which means (1) it is easier to collect and expand with lower cost; (2) its quality improves as the LLM annotators become more capable. In previous re- search, it is shown that advanced AI systems are capable of conducting chatbot evaluations (Dubois et al., 2023; Zheng et al., 2023a), giving textual critiques (Ye et al., 2023; Wang ULTRAFEEDBACK: Boosting Language Models with Scaled AI Feedback et al., 2023c), or assisting human annotators (Saunders et al., 2022). However, open-source LLMs have not yet benefited from AI feedback through the lens of feedback learning. This paper establishes a comprehensive AI feedback collec- tion pipeline. Besides scalability, we prioritize diversity of both instructions and responses for holistic language model alignment. Particularly, we compile a diverse array of over 60, 000 instructions and 17 models from multiple sources to produce comparative conversations in broad top- ics and quality. Then, we adopt a bunch of techniques to alleviate annotation biases and improve feedback qual- ity to the greatest extent. These include (1) decomposing annotation documents into four different aspects, namely instruction-following, truthfulness, honesty, and helpfulness, to reduce ambiguity; (2) providing objective grading criteria and reference responses for score calibration; (3) asking GPT-4 for detailed textual critique before scores as chain-of- thought (Wei et al., 2022b) rationales. Comprehending all above, we finally build ULTRAFEEDBACK, a million-scale AI feedback dataset for aligning open-source LLMs. We comprehensively validate the advantage of AI feed- back in boosting open-source models with ULTRAFEED- BACK. By fine-tuning a LLaMA2-13B model (Touvron et al., 2023b), we build a state-of-the-art reward model UltraRM, which significantly outperforms existing open- source reward models. Based on UltraRM, we enhance a powerful open-source model UltraLM (Ding et al., 2023; Touvron et al., 2023a) with best-of-n sampling and PPO. Experiments show that both strategies improve the model dramatically. Moreover, we fine-tune a critique model that could criticize and judge model responses. We also con- duct a detailed analysis of the consistency and inconsistency between AI and human feedback. To summarize, our contributions are three-fold: (1) To the best of our knowledge, we for the first time demonstrate the beneficial effect of scaled AI feedback on open-source chat LLMs. (2) We establish a systematic and sizable pipeline to collect high-quality and diversified AI feedback. 
(3) We release a suite of resources for feedback learning research, including a dataset, reward model, and critique model. 2. ULTRAFEEDBACK 2.1. Overview Inspired by the data engineering principles in supervised fine-tuning (Ding et al., 2023; Chiang et al., 2023; Xu et al., 2023), we identify scalability and diversity as pivot factors of the overall generalizability of preference data. We argue that existing preference data suffer from satisfying either one of the two factors. To be specific, human feedback collection usually relies on human annotators to compare a pair of completions (Stiennon et al., 2020; Nakano et al., 2021; Ouyang et al., 2022; Bai et al., 2022a). Thus, the data is hard to scale up due to time and budget constraints, especially for open-source researchers. On the other hand, existing AI feedback approaches (Bai et al., 2022b; Lee et al., 2023) reduce human involvement and enjoy scalability via capable LLMs, but they are limited to specific domains (Bai et al., 2022b; Lee et al., 2023) or forms (Ye et al., 2023) and hence lack the necessary diversity to boost LM performance under broader contexts. To this end, we take into account scalability and diversity in all three stages of the preference data collection process: collecting instructions, sampling completions, and annotat- ing comparison pairs. The overview of the data collection pipeline is shown in Figure 1. Firstly, we collect a large- scale and diversified instruction set to enhance LLMs’ ca- pabilities from four aspects: (1) Follow Instructions: LLMs should respond to humans without deviating from the re- quirements. (2) Helpful and Informative: LLMs should provide useful and correct answers to address the given problems. (3) Truthful: LLMs’ output should be grounded in the instructions and real-world knowledge, and avoid in- troducing any self-contradiction. (4) Honesty: LLMs should know what they (don’t) know and express uncertainty to- wards the given problem. For the second stage, to avoid the sameness of comparison responses, we build a pool of distinct models at different capability levels to sample completions. Finally, to overcome the issues concerning scalability (Nakano et al., 2021; Stiennon et al., 2020) and quality (Ethayarajh et al., 2022), we seek scalable AI feed- back from GPT-4, and explore several techniques to improve the reliability. Next, we will introduce our data construction pipeline in detail. 2.2. Instruction Collection We select instructions that target four distinct but all- important abilities of language models, namely instruction- following, truthfulness, honesty, and helpfulness. Specif- ically, we include all instructions from TruthfulQA (Lin et al., 2022) and FalseQA (Hu et al., 2023) training set for truthfulness. For instruction-following and helpfulness, we randomly sample 10k instructions from Evol-Instruct (Xu et al., 2023) and UltraChat (Ding et al., 2023) respectively, and sample 20k from ShareGPT (Chiang et al., 2023). We finally include FLAN (Longpre et al., 2023) to improve LLMs’ helpfulness in various NLP tasks due to the task diversity within FLAN. We adopt a stratified sampling strat- egy following (Mukherjee et al., 2023), randomly picking 3k instructions from the “CoT” subset and sampling 10 instruc- tions per task for the other three subsets, while excluding those with overly long instructions. 
In particular, honesty will be assessed by TruthfulQA and FLAN as they both contain reference answers, based on which it is easier for the annotator to judge if the uncertainty expressed in LLMs’ responses calibrates with the accuracy. We then conduct 2 ULTRAFEEDBACK: Boosting Language Models with Scaled AI Feedback Figure 1. ULTRAFEEDBACK construction process. We sample instructions and models from large pools to guarantee diversity, then query GPT-4 with detailed illustrations for fine-grained and high-quality annotations in both textual and numerical formats. a data contamination detection (see Appendix B). Finally, we obtain 63, 967 instructions of various types from the six publicly available high-quality datasets. 2.3. Completion Sampling To guarantee that the collected responses are dissimilar and well-distributed, we include different models to generate completions for each instruction. To alleviate the potential spurious correlation between text styles and response quality within the dataset, we introduce intervention by selecting not only different series of models at different levels, but also models with different model sizes, architectures, and train- ing data within the same model series. This strategy enables one type of text style to present responses of different qual- ity levels, namely the response of one series of models may be better or worse than another depending on model sizes, thus avoiding the establishment of spurious correlations. Specifically, we set up a pool of 17 models: (1) For commer- cial models, we choose GPT-4, gpt-3.5-turbo (Chat- GPT), and Bard 1; (2) For LLaMA-series, we choose UltraLM-13B/65B (Ding et al., 2023), WizardLM-7B- v1.1/13B-v1.2/70B-v1.1 (Xu et al., 2023), Vicuna-33B-v1.3 (Chiang et al., 2023), LLaMA2-7B/13B/70B-Chat (Touvron et al., 2023b), and Alpaca-7B (Taori et al., 2023); (3) For Non-LLaMA series, we choose MPT-30B-Chat (MosaicML, 2023), Falcon-40B-Instruct (Almazrouei et al., 2023), Star- Chat (Tunstall et al., 2023), and Pythia-12B (Biderman et al., 2023). We randomly sample four different models from the pool to complete each instruction. To further improve diversity in model responses, we elicit distinct model behaviors by adding different principles be- fore completing each instruction. Following Sun et al. 1https://bard.google.com/ (2023) and Mukherjee et al. (2023), we first hand-craft one principle for each aspect and then automize the procedure by invoking GPT-4 to curate another ten based on the human- written example. According to dataset characteristics, each data source is assigned with different principle prompts. We randomly sample a corresponding principle for each com- pletion and add it to the system prompt to induce model behaviors. The principles can be found in Appendix G.1, and the effects of different principles are plotted in Figure 6. 2.4. AI Feedback Annotation After generating 255, 864 model completions based on the 63, 967 instructions, we employ GPT-4 to provide two types of feedback for each completion: (1) scalar scores that indicate the fine-grained quality regarding multiple aspects, and (2) textual critique that gives detailed guidance on how to improve the completion. These lead to over 1 million feedback data in total. Preference Annotation. Regarding the potential subjec- tivity and randomness of GPT-4 annotation, we apply four techniques to improve the annotation quality: (1) Decompo- sition. 
To reduce ambiguity and the difficulty of annotation, we decompose the overall quality assessment into four fine- grained assessments, namely instruction-following, truth- fulness, honesty, and helpfulness. (2) Standard. For each aspect, we provide GPT-4 with detailed documentation of scores from 1 to 5 for reference, thus avoiding variable and subjective standards. See Appendix G.2 for an example. (3) Reference. To prevent inconsistency ratings across differ- ent runs, we wrap one instruction and all its completions into the prompt and ask GPT-4 to score four completions simultaneously to reduce randomness. (4) Rationale. Be- sides scoring each response, GPT-4 is required to generate a rationale on how the response should be scored according to the documentation. Combining all the techniques, we 3 Instruction PoolModel PoolMPTChatGPTLLaMABard…Evol-Instruct…Why is the problem always DNS?Because it is a core component of the internet…GPT-4 Preference AnnotationInstruction-following>AThe statement is a humorous exaggeration…BI'd like to clarify that the concept of…CThe phrase is a common saying among some IT…DHonestyABCDDBAC=>>=>Truthfulness=DACB>>Helpfulness=DBAC>>Text A is near alignment with the task goal…Text B is correct and confident…Text C is mostly truthful, but it contains…Text D is correct and provides a basic…Comparison Data ULTRAFEEDBACK: Boosting Language Models with Scaled AI Feedback Table 1. Statistics of existing preference and critique datasets. The average length refers to the number of tokens. Dataset # Convs Prompt Length Response Length Critique Length Fine- Grained? Feedback Format # Pairs # Critique Annotator OASST1 OpenAI WebGPT Anthropic Helpful OpenAI Summ. QA Feedback 35,905 38,925 118,263 60,674 11,378 167.6 50.9 185.7 326.4 155.8 SelFee Shepherd 178,331 1,316 100.3 95.3 ULTRAFEEDBACK 255,864 185.1 Preference Dataset - - - - - (cid:37) (cid:37) (cid:37) (cid:33) (cid:33) Critique Dataset 89.4 67.2 143.1 (cid:33) (cid:33) (cid:33) 221.1 188.2 94.6 36.6 107.9 243.9 97.6 305.3 Scalar Scalar Ranking Scalar Scalar 17,966 19,578 118,263 92,858 17,118 - - - - - Human Human Human Human Human Text Text - - 316,026 1,317 AI Human Scalar & Text 340,025 255,864 AI finally have four fine-grained scalar scores and rationales for each response. Critique Generation. Besides scalar reward, we also seek textual critique from GPT-4. We prompt GPT-4 to act as a tu- tor and provide detailed suggestions specified for each com- pletion to help models improve rather than propose answers directly. Different from the above comparison-oriented an- notations, critique prompts are generated separately from an overall perspective for each completion. The prompts can be found in Appendix G.2. 2.5. Dataset Statistics We compare ULTRAFEEDBACK with current open-source datasets in Table 1. ULTRAFEEDBACK stands out to be the largest one among all preference and critique datasets, which is at least twice as large as other datasets. Also, its completions and critiques are the longest. Moreover, we highlight that ULTRAFEEDBACK is the only dataset that provides both scalar preferences and textual feedback, enabling it to serve as a preference and critique dataset simultaneously. Overall, ULTRAFEEDBACK outperforms previous datasets in both scale and diversity, and we also validate its high quality by experiment in Section 3. 2.6. 
ULTRAFEEDBACK-Powered Models Based on ULTRAFEEDBACK, we develop UltraRM, an ad- vanced open-source reward model that provides preferences for AI responses given user instructions. Additionally, we train a critique model UltraCM from the textual feedback in ULTRAFEEDBACK. UltraCM could interact with human and AI assistants more flexibly in text. UltraRM. For reward modeling, we train UltraRM based on LLaMA2-13B (Touvron et al., 2023b). Specifically, we train three versions of UltraRM. We mix several open-source datasets with ULTRAFEEDBACK to train UltraRM. The open-source datasets include Stanford SHP (Ethayarajh et al., 2022), OpenAI Summarization (Stiennon et al., 2020), and Anthropic Helpful (Bai et al., 2022a). To validate the quality of UltraFeedback, we also train one model with merely the fine-grained scores of this dataset, i.e. averaging the preference scores in each aspect to get a final reward score. Further, to compare the effectiveness of the fine-grained scores and overall scores, we replace the fine-grained scores in UltraRM with the assessment ratings in critique generation, while remaining the open-source datasets. The details for dataset processing can be found in Appendix E.1. We keep the training strategy, including loss objective and training hyperparameters, exactly the same as Touvron et al. (2023b). UltraCM. We also train a critique model stemming from ULTRAFEEDBACK to boost future research in learning from textual feedback (Wang et al., 2023d). UltraCM has the same initialization as UltraRM but is trained solely on UL- TRAFEEDBACK critique data, i.e. 255, 864 textual feedback in total. Given a response, we fine-tune the model to give a corresponding critique that judges the response, figures out flaws, and provides suggestions for improvement. 3. Experiments To validate the effect of AI feedback, we first evaluate Ul- traRM on human preference benchmarks in Section 3.1. Next, we test UltraRM in enhancing chat language models with two strategies, namely best-of-n sampling (Section 3.2) and reinforcement learning (Section 3.3). Finally, we evalu- ate the feedback quality of UltraCM in Appendix E.3. 3.1. Reward Modeling Setup. To evaluate how UltraRM aligns with human prefer- ence, we conduct experiments on four human annotated preference datasets, OpenAI WebGPT (Nakano et al., 2021), OpenAI Summarization (Stiennon et al., 2020), Anthropic HH-RLHF (Bai et al., 2022a), and Standford SHP. On 4 ULTRAFEEDBACK: Boosting Language Models with Scaled AI Feedback Table 2. Reward modeling accuracy (%) results. We compare our UltraRM with baseline open-source reward models. LLaMA2 results are taken from (Touvron et al., 2023b). The highest results are in bold and the second highest scores are underlined. Model Backbone Model Open? Anthropic Helpful OpenAI WebGPT OpenAI Summ. Stanford SHP Moss Ziya OASST SteamSHP LLaMA2 Helpfulness LLaMA-7B LLaMA-7B DeBERTa-v3-large FLAN-T5-XL LLaMA2-70B UltraRM w/ Only ULTRAFEEDBACK w/ Overall Score LLaMA2-13B LLaMA2-13B LLaMA2-13B ✓ ✓ ✓ ✓ ✗ ✓ ✓ ✓ 61.3 61.4 67.6 55.4 72.0 71.0 66.7 71.0 58.1 61.8 - 62.6 - 65.2 65.1 62.0 59.0 60.3 71.8 48.4 75.5 74.0 66.8 73.0 54.6 57.0 53.9 51.6 80.0 73.7 68.4 73.6 Avg. 58.3 60.1 - 54.5 - 71.0 66.8 69.9 each dataset, we calculate the rewards of two responses for one prompt and predict which one is more preferred. We compare our UltraRM-UF, UltraRM-Overall, and Ul- traRM with open-source baselines, including Moss (Zheng et al., 2023b), Ziya (IDEA-CCNL, 2021), OASST 2, and SteamSHP (Ethayarajh et al., 2022). 
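To make this evaluation protocol concrete, the snippet below is a minimal sketch (not the released evaluation code) of pairwise preference prediction: the reward model scores both responses to a prompt, and accuracy is the fraction of pairs in which the human-preferred response receives the higher reward. The score_fn callable is a hypothetical stand-in for UltraRM inference.

# Minimal sketch of pairwise preference prediction: a reward model scores
# both responses and we count how often the human-preferred one wins.
# `score_fn` is a hypothetical stand-in for UltraRM inference.
from typing import Callable, Iterable, Tuple

def preference_accuracy(
    pairs: Iterable[Tuple[str, str, str]],      # (prompt, chosen, rejected)
    score_fn: Callable[[str, str], float],      # (prompt, response) -> scalar reward
) -> float:
    correct = total = 0
    for prompt, chosen, rejected in pairs:
        correct += score_fn(prompt, chosen) > score_fn(prompt, rejected)
        total += 1
    return correct / max(total, 1)

# Toy usage with a dummy scorer that prefers longer answers.
toy_pairs = [("Why is the sky blue?", "Because sunlight is scattered by the atmosphere.", "Idk.")]
print(preference_accuracy(toy_pairs, lambda p, r: float(len(r))))   # -> 1.0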
We also report the results of LLaMA2 (Touvron et al., 2023b), although their reward models are not released.

Results. The preference prediction accuracy results are reported in Table 2. As we can see, the UltraRM series outperforms all baseline reward models by a large margin, except for the closed (and much larger) LLaMA2 reward model, indicating that the UltraRM series are the best open-source reward models. Notably, our reward model still surpasses all other baselines even without mixing in open-source datasets. These results reveal that ULTRAFEEDBACK is highly consistent with human preference, and that its high quality and diversity enable strong out-of-distribution generalization. On average, the model trained with only ULTRAFEEDBACK outperforms open-source baseline models by over 6.3 percent in accuracy, while mixing open-source datasets with the overall scores and the fine-grained scores of ULTRAFEEDBACK achieves 3.1 and 4.2 percent further improvement, respectively.

We highlight that the OpenAI WebGPT dataset has no training and test splits, and neither most baselines nor our models are trained on this dataset (see the footnotes below), making it a fair benchmark to evaluate the generalization ability of reward models. The UltraRM series is clearly better here, reaching a 2.6% absolute improvement over baselines. Another intriguing finding is that adding open-source datasets has only a minor effect on the WebGPT dataset, which again demonstrates the transferability advantage of ULTRAFEEDBACK. On another benchmark, Stanford SHP, UltraRM also achieves remarkable performance.

Footnote 2: https://huggingface.co/OpenAssistant/reward-model-deberta-v3-large-v2
Footnote 3: The OASST and LLaMA2 Helpfulness reward models used the WebGPT dataset for training. To prevent data leakage, we do not report their performance on WebGPT.

A noteworthy finding is that, despite performing comparably on the other three datasets, the reward model trained with overall scores discernibly lags behind the other two variants on WebGPT. There are two potential explanations for this observation. First, fine-grained annotation, which scores model outputs from different aspects separately, provides a more precise assessment for each completion than aggregating the evaluation into a single overall number. Second, in the overall quality annotation process each sample is sent to GPT-4 separately, whereas in fine-grained rating all four completions are scored at the same time, which may provide GPT-4 with cross-references and prevent it from applying inconsistent standards, reducing the impact of randomness. These advantages demonstrate the high quality of our fine-grained preference data, and we advocate that future work adopt the fine-grained annotation schema and rate multiple completions at one time.

3.2. Best-of-n Experiments

Figure 2. Win rate against text-davinci-003 on the AlpacaEval benchmark. We sample n responses and choose the one with the highest reward.

Setup. To verify that UltraRM can serve as a good indicator of response quality, we conduct best-of-n experiments.

Table 3. Head-to-head comparison results on three public benchmarks. The baseline is text-davinci-003 in AlpacaEval and gpt-3.5-turbo in Evol-Instruct and UltraChat. The judge is GPT-4. The highest win rates are in bold.
Model ChatGPT Size AlpacaEval Win (%) Evol-Instruct Win / Tie / Lose (%) UltraChat Win / Tie / Lose (%) Average Win (%) - 89.4 - - Vicuna-13B-v1.5 LLaMA2-13B-Chat WizardLM-13B-v1.2 OpenChat-13B-v3.2super LLaMA2-70B-Chat UltraLM-13B Vicuna-13B-v1.3 WizardLM-13B-v1.1 Vicuna-33B-v1.3 UltraLM-13B-PPO 13B 13B 13B 13B 70B 13B 13B 13B 33B 13B LLaMA2 33.0 / 23.9 / 43.1 44.1 / 11.9 / 44.0 55.5 / 17.4 / 27.1 55.5 / 11.0 / 33.5 56.4 / 13.8 / 29.8 LLaMA 39.9 / 14.7 / 45.4 36.7 / 17.4 / 45.9 54.1 / 14.7 / 31.2 50.0 / 17.0 / 33.0 57.8 / 10.1 / 32.1 - 81.1 89.2 89.5 92.7 80.7 82.1 86.3 89.0 86.3 34.5 / 38.2 / 27.3 53.5 / 21.3 / 25.2 59.7 / 25.5 / 14.8 58.7 / 26.7 / 14.5 54.0 / 28.6 / 17.4 38.2 / 34.8 / 27.0 41.3 / 33.2 / 25.5 56.1 / 26.0 / 17.9 57.7 / 25.7 / 16.6 64.9 / 15.6 / 19.5 - - 59.5 68.1 67.9 67.7 52.9 53.4 65.5 65.6 69.7 On the AlpacaEval benchmark, we randomly sample 16 completions from the original UltraLM-13B and calculate their corresponding rewards. We then select the best-of- {1, 2, 4, 8, 16} completions as the final response. The sam- pling parameters are set to temperature = 1 and top-p = 1. Results. We present results in Figure 2. Apparently, we can see the win rate on AlpacaEval increases proportionally with rewards. This validates that our UltraRM gives rigorous rewards that reflect the overall response quality. Notably, the best-of-n sampling strategy is surprisingly effective. The initial UltraLM-13B model achieves a 76.53% win rate for a single sampling, and a simple best-of-2 sample increases the win rate to 84.64%. With more samples, we can get even more high-quality responses, and the final best-of-16 win rate hits 91.54%. The best-of-n sampling is universally applicable across models and tasks, which enhances models without training. Please refer to Appendix F.2 for cases. 3.3. PPO Experiments Setup. Given the state-of-the-art UltraRM, we aim to push the upper bound of open-source chat language models with RLAIF. Specifically, we perform PPO over UltraLM- 13B (Ding et al., 2023) to get its PPO version, UltraLM- 13B-PPO. We tune UltraLM for 80 iterations on the UL- TRAFEEDBACK prompts. In each iteration, we collect 512 samples and update the policy model with a mini-batch size of 64. The learning rate is fixed at 1e-6. Benchmarks. We conduct experiments on three public benchmarks, namely AlpacaEval (Li et al., 2023), Evol- Instruct (Xu et al., 2023), and UltraChat (Ding et al., 2023). On each benchmark, we ask GPT-4 to judge which response is better given the same instruction. AlpacaEval adopts text-davinci-003 as the competitor model, while we compete with gpt-3.5-turbo on Evol-Instruct and Ul- traChat. To avoid position bias, we randomly switch the comparing responses. For all models, we use the same decoding parameter with temperature = 0.7 and top-p = 1. Results. We report experiment results in Table 3. We take the official results on the AlpacaEval leaderboard for base- line models and conduct evaluations by ourselves for other results. Overall, our UltraLM-13B-PPO achieves the high- est average win rate on the three benchmarks, outperform- ing all other open-source models. Among LLaMA-based models, UltraLM-13B-PPO overtakes other models by at least 3.6 percent on average. Even when compared with the much larger LLaMA2-70B-Chat model, our model still holds the advantage, illustrating the huge benefit of RLAIF alignment. Our model also reaches the highest win rate on two of the benchmarks, Evol-Instruct and UltraChat, against the more powerful gpt-3.5-turbo. 
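As a concrete illustration of the best-of-n strategy evaluated in Section 3.2 above, the sketch below samples n candidate responses and returns the one the reward model scores highest. generate_fn and score_fn are hypothetical stand-ins for policy sampling (temperature = 1 and top-p = 1 in the setup above) and UltraRM scoring; this is an illustrative sketch, not the authors' released code.

# Sketch of best-of-n selection: sample n candidates, score each with a
# reward model, and keep the highest-scoring one. `generate_fn` and
# `score_fn` are hypothetical stand-ins for the chat model and UltraRM.
import random
from typing import Callable

def best_of_n(
    prompt: str,
    generate_fn: Callable[[str], str],
    score_fn: Callable[[str, str], float],
    n: int = 16,
) -> str:
    candidates = [generate_fn(prompt) for _ in range(n)]
    return max(candidates, key=lambda response: score_fn(prompt, response))

# Toy usage with stand-in functions.
gen = lambda p: random.choice(["a short reply", "a longer, more detailed reply"])
score = lambda p, r: float(len(r))               # dummy reward
print(best_of_n("Explain what DNS does.", gen, score, n=4))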
It is worth noting that, compared with the original UltraLM-13B, the PPO process benefits the model greatly, leading to a 16.8 percent enhancement. We provide cases in Appendix F.3. 4. Agreement with Human Preferences Baselines. We compare UltraLM-13B-PPO with lead- ing open-source models and proprietary models, including LLaMA2-Chat (Touvron et al., 2023b), Vicuna (Chiang et al., 2023), WizardLM (Xu et al., 2023), OpenChat (Wang et al., 2023a), and ChatGPT (OpenAI, 2022). The inclusivity of human preferences is known to be hard to capture (Dubois et al., 2023). Heavily relying on AI feed- back, it is essential to measure and monitor the agreement between AI and human preferences. In this section, we con- duct experiments to see (1) to what extent AI annotations 6 ULTRAFEEDBACK: Boosting Language Models with Scaled AI Feedback are consistent with human preferences (Section 4.1) and (2) how reliable AI evaluations are (Section 4.2). 4.1. Annotation Consistency In Section 3.1, we show that the reward models trained on ULTRAFEEDBACK could predict human preference accurately. To further analyze to what extent AI feedback could capture human preference, we randomly sample 400 comparison pair from ULTRAFEEDBACK, AlpacaEval, Evol-Instruct and UltraChat test sets (100 each) and ask 3 independent annotators to compare those pairs (win/tie/lose). The annotators are undergraduate and graduate students. We present the agreement ratio between GPT-4 and annotators, as well as annotators themselves in Table 4. On average, GPT-4 judge exhibits 59.7% agreement rate with human labelers, which matches previous human evaluation on MT-Bench (Zheng et al., 2023a). We also observe similar agreement rates among annotators. Notably, the agreement between GPT-4 and the majority votes of three annotators raises to 68.6%, meaning that GPT-4 better reflects the collective human preferences. Table 4. Agreement between judges on 400 samples from ULTRA- FEEDBACK, AlpacaEval, Evol-Instruct, and UltraChat test sets . A-1, A-2, A-3 are three human judges. “Majority” stands for the agreement between each judge and other three judges’s majority votes. We include tie votes and the random agreement is 33%. Judge A-1 A-2 A-3 Average Majority GPT-4 A-1 A-2 A-3 59.2% 60.8% 59.1% 59.7% 58.1% 54.7% 57.3% 55.4% 58.1% 56.4% - 58.1% 54.7% 55.4% - - 68.6% 60.3% 63.3% 62.0% 4.2. Reliability of AI Evaluation We first supplement another AI evaluation using Claude- 3 Sonnet (Anthropic, 2024) to investigate the agreement among different series of AI models. The prompts are the same as the GPT-4 evaluation. Then, we compare both of our AI evaluation results with human annotations to examine if AI evaluations reliably correlate with humans. Particu- larly, we use the majority votes of the three annotators and filter out samples with all different votes. We present GPT-4, Claude-3, and human evaluation results on the remaining 266 pairs in Table 5. Overall, Claude-3 shares the same trend as GPT-4 and further increases our models’ win rates. Human evaluations are mostly consistent with GPT-4, giving a 64.3% against 67.3% average winning rate. We notice that human labelers tend to assign more ties than GPT-4, leading to slightly lower winning rates. For fine-grained analysis, we categorize the evaluated sam- ples into reasoning, writing, and QA tasks. The categorical comparison results are presented in Figure 3. It is shown 7 that human evaluations are mostly consistent with GPT-4, where they both prefer our models on writing and QA tasks. 
On reasoning tasks including coding, math, and logic, human and GPT-4 judgments diverge on ties and losses, where GPT-4 gives fewer ties but more losses. To delve deeper into this discrepancy, we ask another expert labeler to determine the ground-truth answer for each question. In this sense, a model wins when it gives the correct answer while the other does not, and vice versa; the two models tie when they both answer the question successfully or both fail. The final win/tie/lose rate comes to 42.1%/26.3%/31.6% and closely matches the human evaluations. The GPT-4 judge, in this case, potentially underestimated our model's reasoning performance and still has room to improve.

Table 5. Human evaluation results. We use majority votes from three human judges and compare GPT-4, Claude-3, and human evaluations on the same 266 samples.

Judge      AlpacaEval Win (%)   Evol-Instruct Win / Tie / Lose (%)   UltraChat Win / Tie / Lose (%)   Avg. Win (%)
GPT-4      83.9                 57.1 / 8.8 / 34.1                    61.0 / 17.1 / 21.9               67.3
Claude-3   95.1                 59.6 / 1.4 / 39.0                    73.5 / 6.8 / 19.7                76.1
Human      78.5                 68.1 / 17.6 / 14.3                   46.3 / 19.5 / 34.1               64.3

Figure 3. Categorical comparison of human and GPT-4 judgments. Human judgments are majority votes from three annotators. Sample numbers of each category are in parentheses.

5. Analysis

In this section, we further analyze how ULTRAFEEDBACK enhances language models on different subjects (Section 5.1) and tasks (Section 5.2).

5.1. Question Type Breakdown

Figure 4 reports the scores of UltraLM-13B-PPO and UltraLM-13B versus gpt-3.5-turbo on different question types of the Evol-Instruct test set. We observe that UltraLM-13B-PPO overtakes ChatGPT on 22/29 subjects, especially on writing-related tasks such as academic writing. Our model is also well aligned with human values, getting higher scores on toxicity, ethics, and TruthfulQA. On some difficult subjects like roleplay, reasoning, and counterfactual questions, our model is still on par with ChatGPT, indicating strong capability on these advanced subjects. Compared with the original UltraLM-13B, PPO boosts the model in multiple aspects, such as professional knowledge (economy, chemistry, music, literature) and reasoning ability (reasoning, complex format, code generation, math). Meanwhile, our model falls behind gpt-3.5-turbo on math- and code-related tasks, which might be attributed to the limitations of the base model and the lack of relevant data in ULTRAFEEDBACK; we leave addressing this as future work. Table 9 in Appendix E.5 provides additional results on the UltraChat test set and reaches the same conclusion.

Figure 4. Comparison results between UltraLM-13B-PPO, UltraLM-13B, and gpt-3.5-turbo on the Evol-Instruct test set, where gpt-3.5-turbo scores are 100%.

Table 6. Exact match scores (%) for UltraLM-13B and UltraLM-13B-PPO on model capability benchmarks.

Model             BoolQ   HellaSwag   RACE-h   RACE-m   MultiRC   TriviaQA   NQ     PIQA   OBQA   ARC-E   ARC-C   Avg.
UltraLM-13B       85.0    59.8        66.1     73.5     83.2      50.8       19.4   73.5   57.0   76.1    51.5    63.3
UltraLM-13B-PPO   83.5    62.6        66.8     74.2     83.7      52.5       22.1   74.9   57.0   76.1    53.9    64.3

5.2. Does RLAIF Benefit Model Capability?

To test whether RLAIF impacts base model capability, we conduct experiments on nine commonly used benchmarks including question answering and multiple-choice questions (see Appendix E.4 for details; a minimal sketch of the exact-match scoring follows below). We compare UltraLM-13B before and after PPO.
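As context for the scores reported next, here is a rough sketch of exact-match scoring in the spirit of Appendix E.4: the model answers directly (e.g., an option letter or Yes/No) and the prediction is compared against the gold label. The normalization step below is an illustrative assumption, not the paper's exact implementation.

# Rough sketch of exact-match scoring for the capability benchmarks:
# normalize the model's direct answer and compare it to the gold label.
# The normalization here is an illustrative assumption.
def normalize(answer: str) -> str:
    return answer.strip().strip(".").lower()

def exact_match(predictions, references) -> float:
    assert len(predictions) == len(references)
    hits = sum(normalize(p) == normalize(r) for p, r in zip(predictions, references))
    return 100.0 * hits / len(references)

# Toy usage: two multiple-choice predictions against gold labels.
print(exact_match(["B", "Yes."], ["B", "No"]))   # -> 50.0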
The results in Table 6 demonstrate marginal improvements over these benchmarks, of about 1 absolute point. We note that this is in line with established conclusions regarding RLHF (OpenAI, 2023), which state that RLHF can produce more preferable responses but has a minor effect on model capability.

6. Related Work

Feedback Learning for LLMs. Incorporating human feedback with imitation learning or reinforcement learning (Schulman et al., 2017; Rafailov et al., 2023) has been the mainstream approach to aligning LLMs with human preferences at leading corporations (Stiennon et al., 2020; Ouyang et al., 2022; Bai et al., 2022a; Glaese et al., 2022; OpenAI, 2022; 2023; Touvron et al., 2023a). However, human feedback relies on human capabilities, which makes it hard to scale up and apply to superhuman tasks. Accordingly, some researchers proposed scalable oversight, which aims to supervise potent AI models with models themselves (Irving et al., 2018; Leike et al., 2018; Christiano et al., 2018). Empirically for LLMs, Bai et al. (2022b) first presented Constitutional AI to let LLMs refine their responses given a set of regulations. Lee et al. (2023) and Burns et al. (2023) further validated that learning from AI feedback can surpass human feedback on some specific tasks. More broadly, our work verifies that scaled AI feedback can enhance the general ability of open-source chat models.

Data for LLM Alignment. The importance of data scalability and quality has been widely recognized in the literature on instruction tuning (also known as SFT). Early works collected various NLP tasks or real user conversations to conduct instruction tuning and observed that LLMs could generalize well across different tasks (Wei et al., 2022a; Sanh et al., 2022; Wang et al., 2022; Ouyang et al., 2022). After the release of ChatGPT, most recent research on SFT emphasized the importance of data construction and reached the conclusion that scalability, diversity, and quality are all vital for final performance (Ding et al., 2023; Taori et al., 2023; Chiang et al., 2023; Xu et al., 2023). However, when it comes to the feedback learning stage, the importance of data engineering has not been as well illustrated. Among current preference datasets, some focus on specific tasks (e.g., summarization (Stiennon et al., 2020), search-based question answering (Nakano et al., 2021), safety-oriented scenarios (Ji et al., 2023), and math problems (Lightman et al., 2023)) and thus cannot boost general chat models. Other datasets are small in scale (Wu et al., 2023; Wang et al., 2023c) or provide only community votes as coarse-grained preferences (Ethayarajh et al., 2022; Askell et al., 2021). Therefore, a large general-purpose preference dataset with diverse instructions and fine-grained annotations is urgently needed in the open-source community, which motivates us to construct ULTRAFEEDBACK.

7. Conclusion

In this paper, we proposed to enhance open-source LLMs with scaled AI feedback. Through meticulous design, we constructed ULTRAFEEDBACK, a large-scale and diverse AI feedback dataset.
With this data, we embarked on a thorough exploration of AI feedback's multifaceted utilities, including modeling human preferences, improving chat language models, and training critique models. Our analysis further delved into human agreement and model capability evaluations, revealing some nuanced insights. We believe that AI feedback will become a scalable and reliable source for future AI oversight. We hope our work can serve as an early exploration and as data support in this area, facilitating researchers in the open-source community. In future work, we will continue exploring diverse, high-quality, and scalable preference data construction, expanding AI feedback to multi-turn dialogues, complex reasoning, coding, and safety scenarios.

Impact Statement

Aligning AI systems, especially advanced LLMs, is important for the safety and trustworthiness of their applications. We enhance open LLMs with scaled AI feedback, which is an underexplored research direction. With high efficiency and low cost, leveraging AI feedback could significantly reduce the consumption of human labor, leading to more scalable alignment. We should also pay attention to the limitations of AI feedback: LLMs can be biased towards certain features, such as answer positions (Zheng et al., 2023a), response lengths, and certain styles. Such biases might lead to inaccurate or unfair annotations and evaluations; by overcoming them, more precise and helpful AI feedback can be obtained. In terms of ULTRAFEEDBACK, we expect it to improve a considerable number of open-source LLMs and narrow their gaps with closed-source models. We did not intentionally add safety-oriented conversations, so there could still be toxicity and unethical behaviors in the aligned models if prompted adversarially. We believe our paradigm is still useful for enhancing model safety, and we are extensively working on it.

Alongside the data, we also release a series of models for feedback learning research. The reward model and critique model can be directly used to align LLMs toward more preferred behaviors. On the other hand, although our models are potent in solving tasks and giving feedback, they may also generate hallucinations and falsehoods. The risk of misuse is a severe threat to open LLMs, which calls for appropriate regulation and supervision.

Acknowledgement

This work is supported by the National Key R&D Program of China (No. 2022ZD0116312) and the National Natural Science Foundation of China (No. 62236004).

References

Ebtesam Almazrouei, Hamza Alobeidli, Abdulaziz Alshamsi, Alessandro Cappelli, Ruxandra Cojocaru, Merouane Debbah, Etienne Goffinet, Daniel Heslow, Julien Launay, Quentin Malartic, Badreddine Noune, Baptiste Pannier, and Guilherme Penedo. Falcon-40B: an open large language model with state-of-the-art performance. 2023.

Anthropic. Introducing the next generation of Claude. 2024.

Amanda Askell, Yuntao Bai, Anna Chen, Dawn Drain, Deep Ganguli, Tom Henighan, Andy Jones, Nicholas Joseph, Ben Mann, Nova DasSarma, et al. A general language assistant as a laboratory for alignment. arXiv preprint arXiv:2112.00861, 2021.

Jacob Austin, Augustus Odena, Maxwell Nye, Maarten Bosma, Henryk Michalewski, David Dohan, Ellen Jiang, Carrie Cai, Michael Terry, Quoc Le, et al. Program synthesis with large language models. arXiv preprint arXiv:2108.07732, 2021.

Yuntao Bai, Andy Jones, Kamal Ndousse, Amanda Askell, Anna Chen, Nova DasSarma, Dawn Drain, Stanislav Fort, Deep Ganguli, Tom Henighan, et al.
Training a help- ful and harmless assistant with reinforcement learning from human feedback. arXiv preprint arXiv:2204.05862, 2022a. Yuntao Bai, Saurav Kadavath, Sandipan Kundu, Amanda Askell, Jackson Kernion, Andy Jones, Anna Chen, Anna Goldie, Azalia Mirhoseini, Cameron McKinnon, et al. Constitutional ai: Harmlessness from ai feedback. arXiv preprint arXiv:2212.08073, 2022b. Stella Biderman, Hailey Schoelkopf, Quentin Gregory Anthony, Herbie Bradley, Kyle O’Brien, Eric Hal- lahan, Mohammad Aflah Khan, Shivanshu Purohit, USVSN Sai Prashanth, Edward Raff, Aviya Skowron, Lintang Sutawika, and Oskar van der Wal. Pythia: A suite for analyzing large language models across train- ing and scaling. In Andreas Krause, Emma Brunskill, Kyunghyun Cho, Barbara Engelhardt, Sivan Sabato, and Jonathan Scarlett (eds.), International Conference on Machine Learning, ICML 2023, 23-29 July 2023, Hon- olulu, Hawaii, USA, Proceedings of Machine Learning Research, 2023. Yonatan Bisk, Rowan Zellers, Jianfeng Gao, Yejin Choi, et al. Piqa: Reasoning about physical commonsense in natural language. In Proceedings of the AAAI conference on artificial intelligence, 2020. Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, 9 ULTRAFEEDBACK: Boosting Language Models with Scaled AI Feedback Pranav Shyam, Girish Sastry, Amanda Askell, et al. Lan- guage models are few-shot learners. Advances in neural information processing systems, 33:1877–1901, 2020. Liang, and Tatsunori B Hashimoto. Alpacafarm: A sim- ulation framework for methods that learn from human feedback. arXiv preprint arXiv:2305.14387, 2023. Collin Burns, Pavel Izmailov, Jan Hendrik Kirchner, Bowen Baker, Leo Gao, Leopold Aschenbrenner, Yining Chen, Adrien Ecoffet, Manas Joglekar, Jan Leike, Ilya Sutskever, Jeff Wu, and OpenAI. Weak- to-strong generalization: Eliciting strong capabili- ties with weak supervision. ArXiv, abs/2312.09390, 2023. URL https://api.semanticscholar. org/CorpusID:266312608. Kawin Ethayarajh, Yejin Choi, and Swabha Swayamdipta. Understanding dataset difficulty with V-usable informa- tion. In Kamalika Chaudhuri, Stefanie Jegelka, Le Song, Csaba Szepesvari, Gang Niu, and Sivan Sabato (eds.), Proceedings of the 39th International Conference on Ma- chine Learning, volume 162 of Proceedings of Machine Learning Research, pp. 5988–6008. PMLR, 17–23 Jul 2022. Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Hen- rique Ponde de Oliveira Pinto, Jared Kaplan, Harri Ed- wards, Yuri Burda, Nicholas Joseph, Greg Brockman, et al. Evaluating large language models trained on code. arXiv preprint arXiv:2107.03374, 2021. Wei-Lin Chiang, Zhuohan Li, Zi Lin, Ying Sheng, Zhanghao Wu, Hao Zhang, Lianmin Zheng, Siyuan Zhuang, Yong- hao Zhuang, Joseph E. Gonzalez, Ion Stoica, and Eric P. Xing. Vicuna: An open-source chatbot impressing gpt-4 with 90%* chatgpt quality, March 2023. URL https: //lmsys.org/blog/2023-03-30-vicuna/. Paul Francis Christiano, Buck Shlegeris, and Dario Amodei. Supervising strong learners by amplifying weak experts. ArXiv, abs/1810.08575, 2018. URL https://api. semanticscholar.org/CorpusID:53041432. Christopher Clark, Kenton Lee, Ming-Wei Chang, Tom Kwiatkowski, Michael Collins, and Kristina Toutanova. Boolq: Exploring the surprising difficulty of natural yes/no questions. arXiv preprint arXiv:1905.10044, 2019. Peter Clark, Isaac Cowhey, Oren Etzioni, Tushar Khot, Ashish Sabharwal, Carissa Schoenick, and Oyvind Tafjord. Think you have solved question answering? 
try arc, the ai2 reasoning challenge. arXiv preprint arXiv:1803.05457, 2018. Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, et al. Training verifiers to solve math word problems. arXiv preprint arXiv:2110.14168, 2021. Ning Ding, Yulin Chen, Bokai Xu, Yujia Qin, Zhi Zheng, Shengding Hu, Zhiyuan Liu, Maosong Sun, and Bowen Zhou. Enhancing chat language models by scaling high-quality instructional conversations. arXiv preprint arXiv:2305.14233, 2023. Yann Dubois, Xuechen Li, Rohan Taori, Tianyi Zhang, Ishaan Gulrajani, Jimmy Ba, Carlos Guestrin, Percy Leo Gao, Jonathan Tow, Stella Biderman, Sid Black, An- thony DiPofi, Charles Foster, Laurence Golding, Jeffrey Hsu, Kyle McDonell, Niklas Muennighoff, Jason Phang, Laria Reynolds, Eric Tang, Anish Thite, Ben Wang, Kevin Wang, and Andy Zou. A framework for few-shot lan- guage model evaluation, September 2021. URL https: //doi.org/10.5281/zenodo.5371628. Amelia Glaese, Nat McAleese, Maja Trkebacz, John Aslanides, Vlad Firoiu, Timo Ewalds, Maribeth Rauh, Laura Weidinger, Martin Chadwick, Phoebe Thacker, et al. Improving alignment of dialogue agents via targeted human judgements. arXiv preprint arXiv:2209.14375, 2022. Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, Dawn Song, and Ja- cob Steinhardt. Measuring mathematical problem solving with the math dataset. arXiv preprint arXiv:2103.03874, 2021. Shengding Hu, Yifan Luo, Huadong Wang, Xingyi Cheng, Zhiyuan Liu, and Maosong Sun. Won’t get fooled again: Answering questions with false premises. In Anna Rogers, Jordan L. Boyd-Graber, and Naoaki Okazaki (eds.), Pro- ceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL 2023, Toronto, Canada, July 9-14, 2023, 2023. IDEA-CCNL. Fengshenbang-lm. https://github. com/IDEA-CCNL/Fengshenbang-LM, 2021. Geoffrey Irving, Paul Francis Christiano, and Dario Amodei. Ai safety via debate. ArXiv, abs/1805.00899, 2018. URL https://api.semanticscholar. org/CorpusID:22050710. Jiaming Ji, Mickel Liu, Juntao Dai, Xuehai Pan, Chi Zhang, Ce Bian, Ruiyang Sun, Yizhou Wang, and Yaodong Yang. Beavertails: Towards improved safety alignment of llm via a human-preference dataset. arXiv preprint arXiv:2307.04657, 2023. 10 ULTRAFEEDBACK: Boosting Language Models with Scaled AI Feedback Mandar Joshi, Eunsol Choi, Daniel S Weld, and Luke Zettle- moyer. Triviaqa: A large scale distantly supervised chal- lenge dataset for reading comprehension. arXiv preprint arXiv:1705.03551, 2017. Daniel Khashabi, Snigdha Chaturvedi, Michael Roth, Shyam Upadhyay, and Dan Roth. Looking beyond the surface: A challenge set for reading comprehension over multiple sentences. In Proceedings of the 2018 Confer- ence of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, Volume 1 (Long Papers), pp. 252–262, 2018. Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Jacob Devlin, Kenton Lee, et al. Natural questions: a benchmark for question answering research. Transactions of the Association for Computa- tional Linguistics, 7:453–466, 2019. Guokun Lai, Qizhe Xie, Hanxiao Liu, Yiming Yang, and Eduard Hovy. Race: Large-scale reading com- prehension dataset from examinations. arXiv preprint arXiv:1704.04683, 2017. 
Harrison Lee, Samrat Phatale, Hassan Mansoor, Kellie Lu, Thomas Mesnard, Colton Bishop, Victor Carbune, and Abhinav Rastogi. Rlaif: Scaling reinforcement learning from human feedback with ai feedback. arXiv preprint arXiv:2309.00267, 2023. Jan Leike, David Krueger, Tom Everitt, Miljan Mar- tic, Vishal Maini, and Shane Legg. Scalable agent alignment via reward modeling: a research direction. ArXiv, abs/1811.07871, 2018. URL https://api. semanticscholar.org/CorpusID:53745764. Xuechen Li, Tianyi Zhang, Yann Dubois, Rohan Taori, Ishaan Gulrajani, Carlos Guestrin, Percy Liang, and Tat- sunori B. Hashimoto. Alpacaeval: An automatic evaluator of instruction-following models. https://github. com/tatsu-lab/alpaca_eval, 2023. Hunter Lightman, Vineet Kosaraju, Yura Burda, Harri Ed- wards, Bowen Baker, Teddy Lee, Jan Leike, John Schul- man, Ilya Sutskever, and Karl Cobbe. Let’s verify step by step. arXiv preprint arXiv:2305.20050, 2023. Stephanie Lin, Jacob Hilton, and Owain Evans. Truthfulqa: Measuring how models mimic human falsehoods. In Smaranda Muresan, Preslav Nakov, and Aline Villavi- cencio (eds.), Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL 2022, Dublin, Ireland, May 22-27, 2022, 2022. Shayne Longpre, Le Hou, Tu Vu, Albert Webson, Hyung Won Chung, Yi Tay, Denny Zhou, Quoc V. Le, Barret Zoph, Jason Wei, and Adam Roberts. The flan collection: Designing data and methods for effective in- struction tuning. CoRR, abs/2301.13688, 2023. Todor Mihaylov, Peter Clark, Tushar Khot, and Ashish Sab- harwal. Can a suit of armor conduct electricity? a new dataset for open book question answering. arXiv preprint arXiv:1809.02789, 2018. MosaicML. Introducing mpt-30b: Raising the bar for open- source foundation models, 2023. URL www.mosaicml. com/blog/mpt-30b. Accessed: 2023-06-22. Subhabrata Mukherjee, Arindam Mitra, Ganesh Jawahar, Sa- haj Agarwal, Hamid Palangi, and Ahmed Hassan Awadal- lah. Orca: Progressive learning from complex explanation traces of GPT-4. CoRR, abs/2306.02707, 2023. Reiichiro Nakano, Jacob Hilton, Suchir Balaji, Jeff Wu, Long Ouyang, Christina Kim, Christopher Hesse, Shan- tanu Jain, Vineet Kosaraju, William Saunders, et al. We- bgpt: Browser-assisted question-answering with human feedback. arXiv preprint arXiv:2112.09332, 2021. OpenAI. Chatgpt: Optimizing language models for dia- logue, 2022. OpenAI. Gpt-4 technical report, 2023. Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Car- roll Wainwright, Pamela Mishkin, Chong Zhang, Sand- hini Agarwal, Katarina Slama, Alex Ray, et al. Train- ing language models to follow instructions with human feedback. Advances in Neural Information Processing Systems, 35:27730–27744, 2022. Joon Sung Park, Joseph C O’Brien, Carrie J Cai, Mered- ith Ringel Morris, Percy Liang, and Michael S Bernstein. Generative agents: Interactive simulacra of human behav- ior. arXiv preprint arXiv:2304.03442, 2023. Ethan Perez, Saffron Huang, H. Francis Song, Trevor Cai, Roman Ring, John Aslanides, Amelia Glaese, Nat McAleese, and Geoffrey Irving. Red teaming language models with language models. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, EMNLP 2022, Abu Dhabi, United Arab Emi- rates, December 7-11, 2022, pp. 3419–3448. Association for Computational Linguistics, 2022. doi: 10.18653/v1/ 2022.emnlp-main.225. URL https://doi.org/10. 18653/v1/2022.emnlp-main.225. 
Chen Qian, Xin Cong, Cheng Yang, Weize Chen, Yusheng Su, Juyuan Xu, Zhiyuan Liu, and Maosong Sun. Commu- nicative agents for software development. arXiv preprint arXiv:2307.07924, 2023. 11 ULTRAFEEDBACK: Boosting Language Models with Scaled AI Feedback Yujia Qin, Shengding Hu, Yankai Lin, Weize Chen, Ning Ding, Ganqu Cui, Zheni Zeng, Yufei Huang, Chaojun Xiao, Chi Han, et al. Tool learning with foundation models. arXiv preprint arXiv:2304.08354, 2023. Alon Talmor, Jonathan Herzig, Nicholas Lourie, and Jonathan Berant. Commonsenseqa: A question answer- ing challenge targeting commonsense knowledge. arXiv preprint arXiv:1811.00937, 2018. Rafael Rafailov, Archit Sharma, Eric Mitchell, Stefano Er- mon, Christopher D Manning, and Chelsea Finn. Direct preference optimization: Your language model is secretly a reward model. arXiv preprint arXiv:2305.18290, 2023. Rohan Taori, Ishaan Gulrajani, Tianyi Zhang, Yann Dubois, Xuechen Li, Carlos Guestrin, Percy Liang, and Tat- sunori B Hashimoto. Stanford alpaca: An instruction- following llama model, 2023. Victor Sanh, Albert Webson, Colin Raffel, Stephen H. Bach, Lintang Sutawika, Zaid Alyafeai, Antoine Chaf- fin, Arnaud Stiegler, Arun Raja, Manan Dey, M Sai- ful Bari, Canwen Xu, Urmish Thakker, Shanya Sharma Sharma, Eliza Szczechla, Taewoon Kim, Gunjan Chh- ablani, Nihal V. Nayak, Debajyoti Datta, Jonathan Chang, Mike Tian-Jian Jiang, Han Wang, Matteo Manica, Sheng Shen, Zheng Xin Yong, Harshit Pandey, Rachel Bawden, Thomas Wang, Trishala Neeraj, Jos Rozen, Abheesht Sharma, Andrea Santilli, Thibault F´evry, Jason Alan Fries, Ryan Teehan, Teven Le Scao, Stella Biderman, Leo Gao, Thomas Wolf, and Alexander M. Rush. Multitask prompted training enables zero-shot task generalization. In The Tenth International Conference on Learning Repre- sentations, ICLR 2022, Virtual Event, April 25-29, 2022, 2022. URL https://openreview.net/forum? id=9Vrb9D0WI4. William Saunders, Catherine Yeh, Jeff Wu, Steven Bills, Long Ouyang, Jonathan Ward, and Jan Leike. Self-critiquing models for assisting human evaluators. CoRR, abs/2206.05802, 2022. doi: 10.48550/ARXIV. 2206.05802. URL https://doi.org/10.48550/ arXiv.2206.05802. John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Rad- ford, and Oleg Klimov. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347, 2017. Noah Shinn, Federico Cassano, Beck Labash, Ashwin Gopinath, Karthik Narasimhan, and Shunyu Yao. Reflex- ion: Language agents with verbal reinforcement learning, 2023. Nisan Stiennon, Long Ouyang, Jeffrey Wu, Daniel Ziegler, Ryan Lowe, Chelsea Voss, Alec Radford, Dario Amodei, and Paul F Christiano. Learning to summarize with hu- man feedback. Advances in Neural Information Process- ing Systems, 33:3008–3021, 2020. Zhiqing Sun, Yikang Shen, Qinhong Zhou, Hongxin Zhang, Zhenfang Chen, David D. Cox, Yiming Yang, and Chuang Gan. Principle-driven self-alignment of language models from scratch with minimal human supervision. ArXiv, abs/2305.03047, 2023. Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Mar- tinet, Marie-Anne Lachaux, Timoth´ee Lacroix, Baptiste Rozi`ere, Naman Goyal, Eric Hambro, Faisal Azhar, et al. Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971, 2023a. Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288, 2023b. 
Lewis Tunstall, Nathan Lambert, Nazneen Rajani, Edward Beeching, Teven Le Scao, Leandro von Werra, Sheon Han, Philipp Schmid, and Alexander Rush. Creating a coding assistant with starcoder. Hugging Face Blog, 2023. https://huggingface.co/blog/starchat. Guan Wang, Sijie Cheng, Xianyuan Zhan, Xiangang Li, Sen Song, and Yang Liu. Openchat: Advancing open-source language models with mixed-quality data. arXiv preprint arXiv:2309.11235, 2023a. Peiyi Wang, Lei Li, Liang Chen, Zefan Cai, Dawei Zhu, Binghuai Lin, Yunbo Cao, Qi Liu, Tianyu Liu, and Zhi- fang Sui. Large language models are not fair evaluators, 2023b. Tianlu Wang, Ping Yu, Xiaoqing Ellen Tan, Sean O’Brien, Ramakanth Pasunuru, Jane Dwivedi-Yu, Olga Golovneva, Luke Zettlemoyer, Maryam Fazel-Zarandi, and Asli Ce- likyilmaz. Shepherd: A critic for language model genera- tion. arXiv preprint arXiv:2308.04592, 2023c. Xingyao Wang, Zihan Wang, Jiateng Liu, Yangyi Chen, Lifan Yuan, Hao Peng, and Heng Ji. Mint: Evaluating llms in multi-turn interaction with tools and language feedback. arXiv preprint arXiv:2309.10691, 2023d. Yizhong Wang, Swaroop Mishra, Pegah Alipoormolabashi, Yeganeh Kordi, Amirreza Mirzaei, Atharva Naik, Ar- jun Ashok, Arut Selvan Dhanasekaran, Anjana Arunku- mar, David Stap, Eshaan Pathak, Giannis Karamanolakis, Haizhi Gary Lai, Ishan Purohit, Ishani Mondal, Jacob Anderson, Kirby Kuznia, Krima Doshi, Kuntal Kumar Pal, Maitreya Patel, Mehrad Moradshahi, Mihir Parmar, Mirali Purohit, Neeraj Varshney, Phani Rohitha Kaza, 12 ULTRAFEEDBACK: Boosting Language Models with Scaled AI Feedback with mt-bench and chatbot arena. arXiv:2306.05685, 2023a. arXiv preprint Rui Zheng, Shihan Dou, Songyang Gao, Wei Shen, Bing- hai Wang, Yan Liu, Senjie Jin, Qin Liu, Limao Xiong, Lu Chen, et al. Secrets of rlhf in large language models part i: Ppo. arXiv preprint arXiv:2307.04964, 2023b. Pulkit Verma, Ravsehaj Singh Puri, Rushang Karia, Sa- van Doshi, Shailaja Keyur Sampat, Siddhartha Mishra, Sujan Reddy A, Sumanta Patro, Tanay Dixit, and Xudong Shen. Super-naturalinstructions: Generalization via declarative instructions on 1600+ NLP tasks. In Proceed- ings of the 2022 Conference on Empirical Methods in Natural Language Processing, EMNLP 2022, Abu Dhabi, United Arab Emirates, December 7-11, 2022, pp. 5085– 5109, 2022. URL https://doi.org/10.18653/ v1/2022.emnlp-main.340. Jason Wei, Maarten Bosma, Vincent Y. Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M. Dai, and Quoc V. Le. Finetuned language models are zero-shot learners. In The Tenth International Conference on Learn- ing Representations, ICLR 2022, Virtual Event, April 25-29, 2022, 2022a. URL https://openreview. net/forum?id=gEZrGCozdqR. Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Pro- cessing Systems, 35:24824–24837, 2022b. Zeqiu Wu, Yushi Hu, Weijia Shi, Nouha Dziri, Alane Suhr, Prithviraj Ammanabrolu, Noah A Smith, Mari Ostendorf, and Hannaneh Hajishirzi. Fine-grained human feedback gives better rewards for language model training. arXiv preprint arXiv:2306.01693, 2023. Can Xu, Qingfeng Sun, Kai Zheng, Xiubo Geng, Pu Zhao, Jiazhan Feng, Chongyang Tao, and Daxin Jiang. Wiz- ardlm: Empowering large language models to follow complex instructions. arXiv preprint arXiv:2304.12244, 2023. 
Weiran Yao, Shelby Heinecke, Juan Carlos Niebles, Zhi- wei Liu, Yihao Feng, Le Xue, Rithesh Murthy, Zeyuan Chen, Jianguo Zhang, Devansh Arpit, et al. Retroformer: Retrospective large language agents with policy gradient optimization. arXiv preprint arXiv:2308.02151, 2023. Seonghyeon Ye, Yongrae Jo, Doyoung Kim, Sungdong Kim, Hyeonbin Hwang, and Minjoon Seo. Selfee: Iterative self-revising llm empowered by self-feedback genera- tion. Blog post, May 2023. URL https://kaistai. github.io/SelFee/. Rowan Zellers, Ari Holtzman, Yonatan Bisk, Ali Farhadi, and Yejin Choi. Hellaswag: Can a machine really finish your sentence? arXiv preprint arXiv:1905.07830, 2019. Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuo- han Li, Dacheng Li, Eric. P Xing, Hao Zhang, Joseph E. Judging llm-as-a-judge Gonzalez, and Ion Stoica. 13 ULTRAFEEDBACK: Boosting Language Models with Scaled AI Feedback A. Limitations In constructing ULTRAFEEDBACK, we made an assumption that powerful LLMs like GPT-4 are capable of imitating human annotators and fair evaluators. Although more and more works accepted this assumption and demonstrated high agreement between human and LLM feedbacks (Dubois et al., 2023; Lee et al., 2023; Bai et al., 2022b), LLMs still cannot model human preference precisely under all situations. How to efficiently and accurately collect preference data and conduct rigorous evaluation are still challenging. We leave this as future work for further investigation. Another limitation is that ULTRAFEEDBACK only provides single-turn dialogues to improve the utility of LLMs due to time and budget restrictions. We will also expand ULTRAFEEDBACK to cover more tasks and scenarios. B. Data Contamination To avoid data contamination which could result in unfair even wrong evaluations, we did careful decontamination for ULTRAFEEDBACK. Following GPT-3 (Brown et al., 2020) and evaluation-harness (Gao et al., 2021), we search for 13-gram matches between AlpacaEval, Evol-Instruct, and UltraChat test set. We found in total 48 contamination samples and filtered out them. However, we did not conduct a thorough examination of contamination over other evaluation datasets because of the huge amount of datasets. Therefore, we suggest researchers decontaminate ULTRAFEEDBACK with their evaluation datasets before using it. C. UltraFeedback Statistics We summarize the scores for each model over different aspects in Figure 5. Overall, the rankings are consistent with model capabilities. For example, the GPT series is the best in all aspects, and larger models are generally better than smaller ones. The distinction among different aspects also exists. For instance, the LLaMA2-Chat models received higher scores on honesty, since they are aligned with human values with RLHF (Touvron et al., 2023b). We also showcase how different principles stimulate diverse model behaviors. We average the score of each aspect when applying different principles to models, and plot them in Figure 6. D. Training Details D.1. UltraRM We construct each comparison pair as a binary selection, with one completion being chosen and the other rejected. 
We optimize the reward model to select the preferred completion by minimizing the binary ranking loss:

\mathcal{L}_{\text{ranking}} = -\log\left(\sigma\left(r_\theta(x, y_c) - r_\theta(x, y_r) - m(r)\right)\right) \quad (1)

where \theta represents the reward model, r_\theta(x, y_c) is its scalar reward prediction for the chosen text, r_\theta(x, y_r) is that for the rejected text, and m(r) is the absolute difference between the annotated rewards of the two texts. We set m(r) = 0 for datasets with only preference rankings and normalize the margins to (0, 1] to avoid training instability due to a mismatch among the score scales of the datasets. Following Touvron et al. (2023b), we train the 13B reward model for one epoch with a batch size of 512 pairs (i.e., 1024 completions) and a learning rate of 1e-5. We adopt the cosine learning rate decay strategy with a warm-up ratio of 3% and a final learning rate of 1e-6.

D.2. UltraCM

We train LLaMA2-13B for two epochs with a batch size of 256 and a learning rate of 2e-5. We adopt the same learning rate scheduler as in reward modeling.

E. Experiment Details

E.1. Dataset Details for UltraRM Training

We mix ULTRAFEEDBACK with other open-source preference datasets for reward modeling. Stanford SHP is a community-based preference dataset collected from 18 different topics, adopting a strict filtering strategy to ensure text quality and the reliability of preferences. We follow the guidelines in the official repository to further filter the dataset, only retaining preferences with a score ratio greater than 2 and using at most 5 comparison pairs for each post via random sampling.

Figure 5. Average scores for each model over the four aspects.

Figure 6. Different principles stimulate diverse model behaviors.

OpenAI Summarize consists of human-written completions and human-annotated preferences, with instructions much longer than those in ULTRAFEEDBACK; hence, we include this high-quality dataset to enhance the reward model for long-text scenarios. We adopt the same comparison-pair filtering method to avoid the reward model overfitting certain instructions. Anthropic Helpful is another human-annotated dataset; we incorporate all its samples into our training dataset to supplement multi-turn dialogue data. For ULTRAFEEDBACK, we directly adopt the overall score obtained in critique annotation as the preference score for UltraRM-Overall, while for the fine-grained versions we average the scores of all aspects for each sample as the final preference score. Finally, the training dataset for our reward model contains a total of 749,702 comparison pairs, with 340,025 from ULTRAFEEDBACK, 198,556 from Stanford SHP, 92,858 from OpenAI Summarize, and 118,263 from Anthropic Helpful.
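To make the objective in Eq. (1) concrete, below is a minimal PyTorch-style sketch of the margin-augmented ranking loss, assuming per-pair scalar rewards and a pre-computed margin (0 for ranking-only datasets, otherwise normalized to (0, 1]). This is an illustrative sketch under those assumptions, not the released training code.

# Minimal PyTorch sketch of the ranking loss in Eq. (1):
# L = -log(sigmoid(r(x, y_c) - r(x, y_r) - m(r))).
# Shapes and margin pre-processing are illustrative assumptions.
import torch
import torch.nn.functional as F

def ranking_loss(chosen_rewards: torch.Tensor,     # (batch,) rewards of chosen responses
                 rejected_rewards: torch.Tensor,   # (batch,) rewards of rejected responses
                 margins: torch.Tensor             # (batch,) 0 if only rankings are available
                 ) -> torch.Tensor:
    return -F.logsigmoid(chosen_rewards - rejected_rewards - margins).mean()

# Toy usage: one pair where the chosen response is scored higher, margin 0.5.
print(ranking_loss(torch.tensor([1.2]), torch.tensor([0.3]), torch.tensor([0.5])).item())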
E.2. Additional Reward Modeling Experiments

We observed that the SteamSHP model differs from other reward models in its input format, as it accepts two responses simultaneously and outputs which one is better (a text-to-text format). During the experiment, we found a position bias issue with this approach, where the reward model tends to prefer the first response. To eliminate this issue, we average the scores from two runs with the response order exchanged to get the final scores. We report the detailed results in Table 7.

Table 7. Reward modeling results for SteamSHP with different sample orders.

Dataset             Chosen first   Rejected first   Avg.
Anthropic Helpful   72.0           38.8             55.4
OpenAI WebGPT       72.4           52.9             62.6
OpenAI Summ.        52.8           44.0             48.4
Stanford SHP        71.8           31.4             51.6

Table 8. Feedback quality of each model on different datasets rated by GPT-4. The best performance on each dataset is marked in bold, and the second best is underlined.

Model               PIQA   OBQA   CommonsenseQA   AlpacaFarm   FairEval   HumanEval   MBPP   MATH   GSM8K   Avg.
gpt-3.5-turbo       6.08   6.12   6.04            6.44         6.32       6.14        6.48   5.98   5.94    6.17
LLaMA2-13B-Chat     5.92   5.04   5.66            5.26         5.74       4.64        4.82   3.88   4.30    5.03
Vicuna-13B-v1.5     5.66   5.58   5.42            5.58         5.82       4.86        5.20   4.56   4.84    5.28
WizardLM-13B-v1.2   5.90   5.52   5.82            5.66         5.88       5.28        5.34   4.30   4.90    5.40
Shepherd-13B        3.48   3.64   3.48            3.04         3.30       3.08        3.20   3.10   2.76    3.23
SelFee-13B          6.00   5.32   5.74            5.88         5.94       4.84        5.12   4.46   5.40    5.41
UltraCM-13B         6.00   6.12   6.02            5.98         6.18       5.74        5.56   5.84   5.88    5.92

E.3. Critique Modeling

Setup. To assess the ability of UltraCM to provide reliable critique, we employ GPT-4 to score the quality of critiques based on detailed documentation. We follow Wang et al. (2023c) to randomly sample 50 instructions from PIQA (Bisk et al., 2020), OpenBookQA (OBQA) (Mihaylov et al., 2018), CommonsenseQA (Talmor et al., 2018), AlpacaFarm (Dubois et al., 2023), and FairEval (Wang et al., 2023b). We also supplement HumanEval (Chen et al., 2021), MBPP (Austin et al., 2021), MATH (Hendrycks et al., 2021), and GSM8K (Cobbe et al., 2021) to evaluate critique quality on coding and math tasks.
We then generate model completions for the instructions in the same way as Section 2.2. We adopt two categories of models for comparison. First, we compare with four general-purpose models, gpt-3.5-turbo, LLaMA2-13B-Chat, Vicuna-13B-v1.5, and WizardLM-13B-v1.2. Then, we adopt two specifically trained critique models, SelFee4 and Shepherd (Wang et al., 2023c) 5. We apply the baseline models and UltraCM to provide feedback on model completions respectively. Finally, we rate the quality of the critique from 1 to 7 using GPT-4, 1 being the worst and 7 being the best. The prompt is adapted from (Wang et al., 2023c). Results. The scores of feedback quality are presented in Table 8. Overall, the performances of UltraCM almost approach gpt-3.5-turbo and dramatically surpass other models of both categories. To be specific, UltraCM achieves comparable performance with gpt-3.5-turbo on commonsense reasoning and mathematics reasoning. However, on AlpacaFarm and code datasets, UltraCM still exhibits deficiencies. Compared with two critique models, we find that (the community- trained) Shepherd almost always fails to provide high-quality feedback. SelFee achieves the highest average scores after gpt-3.5-turbo and UltraCM, but it dramatically falls short on HumanEval and MATH. We highlight the comparison between UltraCM and the other three general-purpose models. All four models are trained from LLaMA2-13B, but UltraCM is the only one trained to provide textual critique rather than enhancing knowledge or reasoning capability. However, the feedback of UltraCM consistently gains higher scores than other models across all tasks and datasets, indicating that criticizing is a learnable task and employing an expert critic is more effective than an expert for downstream tasks in providing feedback. With more powerful backbone models, we believe ULTRAFEEDBACK will greatly benefit autonomous agents (Park et al., 2023; Qin et al., 2023; Qian et al., 2023) and feedback learning (Yao et al., 2023; Shinn et al., 2023) research. E.4. Capability Experiments We use nine datasets in Section 5.2 to test the model capability. For world knowledge, we adopt NaturalQues- tions (Kwiatkowski et al., 2019) and TriviaQA (Joshi et al., 2017). For commonsense reasoning, we use PIQA (Bisk et al., 2020), HellaSwag (Zellers et al., 2019), OpenBookQA (Mihaylov et al., 2018), and ARC (Clark et al., 2018). For reading comprehension, we use BoolQ (Clark et al., 2019), RACE (Lai et al., 2017) and MultiRC (Khashabi et al., 2018). For evaluation, we simply ask models to answer the questions directly with answers (e.g. with options A, B, C, D or Yes/No). 4https://huggingface.co/kaist-ai/selfee-13b 5Note that Wang et al. (2023c) did not open source their model weights, so we use the model from the community that has been trained on their data: https://huggingface.co/reciprocate/shepherd-13b 17 ULTRAFEEDBACK: Boosting Language Models with Scaled AI Feedback Table 9. Relative scores (%) versus gpt-3.5-turbo across different question types on UltraChat evaluation set. 
Commonsense World Knowledge Professional Knowledge Difficult Ability Math Reasoning Easy Moderate Easy Vicuna Set Biology Physics Model Writing Overall UltraLM-13B Vicuna-13B-v1.3 Vicuna-13B-v1.5 LLaMA2-13B-Chat Vicuna-33B-v1.3 WizardLM13B-v1.1 LLaMA2-70B-Chat OpenChat-13B-v3.2super WizardLM13B-v1.2 UltraLM-13B-PPO 95.6 93.2 95.7 97.1 98.5 100.7 100.5 98.6 102.5 97.7 113.7 113.4 115.8 114.6 113.4 113.9 116.5 121.2 122.0 123.5 106.8 106.4 106.6 108.5 114.0 112.1 106.7 112.6 110.3 113.6 111.7 109.6 104.9 109.3 105.1 106.9 111.5 116.1 114.3 131.1 103.3 107.1 105.0 107.7 109.0 113.0 109.0 110.1 111.7 118.4 102.1 106.0 100.1 105.9 109.9 108.1 106.6 106.0 108.6 113.2 105.1 108.9 101.2 108.0 112.8 110.7 109.4 110.0 109.0 120.2 89.7 84.7 94.8 91.3 84.4 89.9 99.0 89.3 96.3 93.0 71.0 79.0 73.2 75.0 86.7 76.8 77.6 82.9 79.7 78.8 98.6 98.4 99.1 98.6 103.0 102.6 103.6 104.7 103.8 101.7 98.8 98.8 99.0 100.2 102.4 102.6 103.2 103.9 104.9 105.7 We then match the output with the ground truth and calculate the exact match scores. E.5. Question Type Breakdown Table 9 reports the type-specific performance of our model and baselines compared with gpt-3.5-turbo. As is shown, our UltraLM-13B-PPO gets the highest average score, especially excels on the commonsense, world knowledge as well as professional knowledge questions. In the meantime, our model does not show advantages in math and reasoning tasks, which is consistent with the above results. F. Case Study F.1. UltraFeedback We present a case sampled from ShareGPT in Table 10. The four assistants sampled to complete this instruction are Falcon-40B-Instruct, GPT-4, LLaMA2-70B-Chat, and UltraLM-13B. We also present GPT-4 annotations on the aspect of instruction following. GPT-4 accurately points out the lack of details in Falcon-40B-Instruct and GPT-4 responses. F.2. Best-of-n Sampling We present a case comparing best-of-1, best-of-2, and best-of-16 sampling in Table 11. The best-of-1 sample is incorrect at all, so it received a low reward of −0.73. To compare, the best-of-2 sample is partially correct, as it only addressed the @ symbol in decorating staticmethod. The example it provided is also problematic. The best-of-16 sample, however, correctly illustrated the usage of the @ symbol in Python with a valid example. Thus it received a much higher reward from the reward model. Another worth-noting point is that the best-of-2 sample is longer than the best-of-16 sample, which verified that our UltraRM does not depend on length shortcut to evaluate the response quality. F.3. UltraLM-13B-PPO We compare UltraLM-13B before and after PPO from an AlpacaEval case in Table 12. As we can see, the original UltraLM-13B made mistakes in describing Lady Gaga’s characters in American Horror Story. The UltraLM-13B-PPO, on the other hand, correctly answered the question. F.4. UltraCM-13B To comprehensively evaluate and analyze the critiques our model generates, we conducted case study in Table 13. On GSM8K benchmark, we find that UltraCM-13B could give meaningful critiques on incorrect answers, pointing out flaws and provide suggestions. However, the suggestions are sometimes unnecessarily long, which indicates the imperfection of the model. We will continue to improve the critique quality in future work. 18 ULTRAFEEDBACK: Boosting Language Models with Scaled AI Feedback Table 10. An example of ULTRAFEEDBACK comparing generations from 4 models from the aspect of instruction-following. User: what’s the general consensus best time to take vitamin D supplements. 
Response Falcon-40B-Instruct: A good rule of thumb is to take vitamin D supplements in the morning or afternoon so that they can be properly absorbed by the body. However, it is best to consult with a healthcare professional to determine a specific dosing regimen for your individual needs. GPT-4: The general consensus on the best time to take vitamin D supplements varies, as individual needs and absorption rates can differ. However, many experts recommend taking vitamin D supplements with a meal containing healthy fats for better absorption. Some studies suggest that taking vitamin D supplements in the morning may be preferable to nighttime due to its potential effects on sleep. Keep in mind that it is crucial to follow the dosage recommended by a healthcare professional, as individual needs may vary. As always, it’s best to consult with a healthcare provider for personalized advice on supplement timing and dosage. LLaMA2-70B-Chat: Hello! I’m happy to help you with your question. The general consensus among health professionals is that the best time to take vitamin D supplements depends on various factors, including your personal schedule, sun exposure, and health status. Here are some general guidelines to consider: 1. Morning: Taking vitamin D supplements in the morning can help you start your day with adequate levels of this essential nutrient. This may be particularly helpful if you have a busy schedule and may not have time to take them later in the day. 2. After sun exposure: Vitamin D is produced in the skin when it’s exposed to sunlight. Taking a vitamin D supplement after spending time outdoors can help boost your levels and ensure you’re getting enough. 3. With a meal: Vitamin D is a fat-soluble vitamin, which means it’s best absorbed when taken with a meal that contains healthy fats. Having a meal with a source of fat, such as avocado, nuts, or seeds, can enhance the absorption of vitamin D. 4. Consistency: The most important thing is to establish a consistent routine and take your vitamin D supplements at the same time every day. This will help maintain a stable level of vitamin D in your body. It’s important to note that excessive intake of vitamin D can be harmful, so it’s essential to consult with a healthcare professional to determine the appropriate dosage for your individual needs. They can help you identify the best time to take vitamin D supplements based on your specific circumstances. Remember, it’s always a good idea to consult with a healthcare professional before starting any new supplement regimen. They can assess your individual needs and provide personalized recommendations for your health and well-being. UltraLM-13B: According to current research and guidelines, the general consensus is that the best time to take vitamin D supplements is either in the morning or early afternoon, as our bodies produce more vitamin D when exposed to sunlight during this time. However, it’s important to note that the optimal time to take vitamin D supplements can vary depending on individual factors such as sun exposure, geographical location, and overall health. For people living in areas with limited sunlight or those who spend most of their time indoors, taking vitamin D supplements with a meal can be an effective way to ensure adequate intake. 
Some research also suggests that taking larger doses of vitamin D, such as 2,000-5,000 IU (international units), as a single dose a week or every few weeks, may be more effective for increasing vitamin D levels and maintaining them over time. Keep in mind that the optimal time to take vitamin D supplements may also be influenced by medications or supplements you are currently taking, so it’s best to consult with a healthcare professional for personalized advice based on your individual needs and circumstances. Annotation (Instruction-Following) Rating: 3 Rationale: The text partially complies with the instruction, providing a general rule of thumb for taking vitamin D sup- plements. However, it lacks detail and does not address the various factors that may influence the best time to take the supplements. Rating: 4 Rationale: The text almost fully aligns with the instruction, providing a general consensus on the best time to take vi- tamin D supplements and mentioning individual factors. However, it could provide more detail on the various fac- tors that may influence the best time to take the supplements. Rating: 5 Rationale: The text comprehensively addresses the task goal by providing a general consensus on the best time to take vitamin D supplements, consider- ing various factors. It also acknowl- edges the importance of consulting with a healthcare professional for personal- ized advice. Rating: 5 Rationale: The text fully aligns with the instruction, providing a general con- sensus on the best time to take vitamin D supplements and considering individ- ual factors. It also emphasizes the im- portance of consulting with a healthcare professional for personalized advice. 19 ULTRAFEEDBACK: Boosting Language Models with Scaled AI Feedback User: what does the @ in python do Table 11. Comparative cases of best-of-1, best-of-2, and best-of-16 sampling. Best-of-1: In Python, the @ symbol is used as a way to reference a class within its own namespace. This is often referred to as a class object. When you create an instance of a class, you are creating a new object that is stored in the namespace of the class. Using the @ symbol before a class declaration allows you to bypass. Reward: −0.73 Best-of-2: The @ symbol in Python is used in front of a class to indicate that the class is decorated with the @staticmethod decorator. This decorator is sandwiched between the class definition and the name of the class. A static method in Python is a method that can be called directly on a class, like a regular method, but does not have access to the instance data (self). It usually takes no parameters and does not modify the object’s state directly. Instead, it operates on the class itself. As a result, if you define a method as staticmethod, it will become a static method and you can call it on an instance of the class without having to create an instance of the class first. For example, a Rectangle class could define the perimeter method as a static method: @staticmethod def perimeter(width, height): return 2*width +2*height You can then call this perimeter method on a Rectangle instance as follows: rect =Rectangle(10, 5) print(rect.perimeter()) # Output: 40 In this example, the Rectangle class defines a perimeter method, which is static by nature. The @staticmethod decorator simply serves to inform the Python interpreter that the method you are defining is indeed intended to be a static method. Moreover, you could also use this decorator with classmethod or lamdba function syntax. 
The ‘@staticmethod‘ decorator is by default aligned with classmethod, but the other formats are used for cases where you need more control over the function’s self context, or more specific function memory. In summary, the @ symbol is a Python decorator that converts a method into a static method, which is able to be called on an instance of the class without requiring the creation of an actual instance. This functionality makes your code cleaner, more readable, and easier to maintain. Please note that Python has several other symbols and decorators that can modify the behavior of your code or enrich the functionality of your objects, such as @staticmethod, @classmethod, @property, and @newmethod. However, these will generally require some level of understanding and configuration before they can be effectively adapted for your applications. Remember that these decorators should not be confused with the general syntax to create a static method. If you don’t see the @ symbol in front of a class method, then it is by default considered as a method. Adding static to the method implies decorating the method with the staticmethod decorator. To keep the code interesting, decorators are an elegant and expressive way to integrate various functional programming concepts into Python, improving encapsulation and abstraction across scripts and classes. Reward: −0.10 Best-of-16: In Python, the @ symbol is used to define a decorator, which is a way to add functionality to or modify the behavior of functions or objects (not class methods). A decorator is essentially a decoration around the original function, executing some arbitrary code or modifying the function’s properties before the function is executed. When a function is correctly decorated using the @ symbol, a decorator function is applied to it. The decorator function can then modify the behavior of the original function, by, for example, adding arguments, modifying attributes of the function, or executing additional code before or after the function is executed. Here’s an example of how to use a simple decorator to add some additional functionality to a function: from functools import wraps def login_required(func): @wraps(func) def wrapper(username, *args, **kwargs): if username !="secret": raise Exception("Please enter the correct username and password.") return func(username, *args, **kwargs) return wrapper @login_required def secret_page(username): print(f"Hello, {username}! You are now on the secret page.") In this example, we are defining a decorator login required to handle the authentication of our user. It will raise an exception if the provided ‘username‘ argument is incorrect. And then we use this decorator to wrapper the interesting function secret page with this restriction. This approach allows you to modify the behavior of a function at runtime, making it a very flexible and powerful tool for Python development. Reward: 0.42 20 ULTRAFEEDBACK: Boosting Language Models with Scaled AI Feedback User: who does lady gaga play in american horror story Table 12. Comparative cases of UltraLM-13B and UltraLM-13B-PPO. UltraLM-13B: The question is asking for the character that Lady Gaga has played in the television series American Horror Story. Lady Gaga has appeared in multiple seasons of the show and has played different characters in each one. The following are the characters Lady Gaga has played in the series: 1. 
Season 1: American Horror Story (2011) - Gaga played the role of The Countess or Elizabeth Johnson, a wealthy and sophisticated owner of a Los Angeles apartment building who resides with her ventriloquist dummy. 2. Season 6: American Horror Story: Hotel (2016) - In this season, Lady Gaga portrayed German singer-songwriter Elizabeth along with her twin brother Dennis. They have gained fame in the 1970s with their horror rock collaboration, The Countess. Gaga’s portrayal in American Horror Story has added a unique dimension to the series, showcasing her acting talent, and has become a fan-favorite character. UltraLM-13B-PPO: Lady Gaga played the role of The Countess in American Horror Story: Hotel. The Countess is a mysterious and enigmatic character who runs the Hotel Cortez, a luxurious and decadent hotel that serves as a front for a series of gruesome murders. The Countess is a seductive and charismatic figure who preys on her victims, using her beauty and charm to lure them into her web. Throughout the season, viewers are taken on a twisted and terrifying journey as they uncover the dark secrets of the Hotel Cortez and the sinister forces that lurk within its walls. Lady Gaga’s performance in American Horror Story: Hotel was widely praised for its depth, complexity, and sheer intensity, cementing her status as one of the most talented and versatile actresses of her generation. Table 13. Case of UltraCM-13B on GSM8K. User: A hotel in the Philippines charges 1000 pesos for a 12-hour stay or 1600 pesos for a 24-hour stay. After 12 hours, visitors have the option to add 70 pesos for every additional hour. Cameron arrives at 5 pm at the hotel and wants to leave at 10 am the next morning. He decides to go with the option of adding on 70 pesos for every hour after the 12-hour mark instead of paying for 24 hours. How much money would Cameron save? Answer: 250 Vicuna-33B: First, let’s determine the cost of Cameron’s stay if he were to stay for 12 hours. Since he arrives at 5 pm, his stay would actually be 13 hours (from 5 pm to 6 am). The cost for a 12-hour stay is 1000 pesos. Next, let’s calculate the cost of a 24-hour stay. The cost for a 24-hour stay is 1600 pesos. Now, we need to calculate the cost of Cameron’s stay if he were to stay for 13 hours and pay the additional 70 pesos for every hour after the 12-hour mark. For the first 12 hours, the cost is 1000 pesos. For the additional 1-hour period (from 12 hours to 13 hours), the cost is 70 pesos. Therefore, the total cost for Cameron’s 13-hour stay would be 1000 pesos + 70 pesos = 1070 pesos. Now, we can calculate the amount Cameron would save by choosing the 13-hour option over the 24-hour option. The savings would be the difference between the two costs: Savings = 1600 pesos (24-hour cost) - 1070 pesos (13-hour cost) avings = 530 pesos So, Cameron would save 530 pesos by choosing the option of adding 70 pesos for every additional hour after the 12-hour mark instead of paying for 24 hours. UltraCM-13B Critique: Your calculation of the hotel stay cost is incorrect. Cameron’s stay from 5 pm to 10 am is 17 hours, not 13 hours. You should calculate the cost for a 17-hour stay instead. Also, you didn’t need to calculate the savings by staying for 13 hours. Just focus on the correct calculation for the 17-hour stay. The rest of your answer is detailed and understandable, but the numbers are wrong due to the incorrect calculation of the hotel stay hours. Please revise this. 
For improvement, always double-check your calculations to avoid such mistakes. Make sure to understand the problem clearly before you start solving it. Focus on what the question is asking you to find and ignore unnecessary details that might confuse you. As an AI, accuracy is key. A small mistake can lead to a wrong conclusion and misinformation, which is not our goal as AI assistants. Keep practicing and focus on your accuracy, try to avoid rush decisions and take your time to understand the questions fully before answering them. Overall Score: 6 21 ULTRAFEEDBACK: Boosting Language Models with Scaled AI Feedback G. Prompt G.1. Principle In model completion, we sample one system prompt to align model behavior from different aspects. The human-written examples for each aspect are as follows: Human-written Principle Prompts **Honesty:** The assistant should be honest about whether it knows the answer and express its uncertainty explicitly. Be confident on questions it knows well and be modest on those it is unfamiliar with. Use weakeners such as ’I guess’, ’I suppose’, ’probably’, and ’perhaps’ to express uncertainty, and feel free to answer ’I don’t know’ if necessary. — **Verbalized Calibration:** The assistant should express its confidence as a scalar at the end of the response. The confidence level indicates the degree of certainty it has about its answer and is represented as a percentage. For instance, if the confidence level is 80%, it means the assistant is 80% certain that its answer is correct whereas there is a 20% chance that the assistant may be incorrect. The format is as follows: [Question] [Answer] Confidence: [The assistant’s confidence level, numerical numbers only, e.g. 80%] Here, tags like [Question] and [Answer] are placeholders and should be omitted in the response. — **Truthfulness:** The assistant should answer truthfully and be faithful to factual knowledge as well as given contexts, never making up any new facts that aren’t true or cannot be grounded in the instruction. — **Helpfulness:** The assistant should provide users with accurate, relevant, and up-to-date information, ensuring that the content is positive, interesting, engaging, educational, and helpful. G.2. Annotation We first showcase the template that prompts GPT-4 to annotate the quality of four given completions from the aspect of instruction following. Then, we present the template to annotate critique feedback. Annotation Template for Instruction Following **Instruction Following Assessment** Evaluate alignment between output and intent. Assess understanding of task goals and restrictions. **Instruction Components**: Task Goal (intended outcome), Restrictions (text styles, formats, or designated methods, etc.). **Scoring**: Rate outputs 1 to 5: 1. **Irrelevant**: No alignment. 2. **Partial Focus**: Addresses one aspect poorly. 3. **Partial Compliance**: - (1) Meets goals or restrictions, neglecting others. - (2) Acknowledges both but slight deviations. 4. **Almost There**: Near alignment, minor deviations. 5. **Comprehensive Compliance**: Fully aligns, meets all requirements. 22 ULTRAFEEDBACK: Boosting Language Models with Scaled AI Feedback Annotation Template for Critique Feedback Given my answer to an instruction, your role is to provide specific and constructive feedback for me. You should find the best way for me to learn from your feedback and improve my performance. 
You should consider multiple aspects of my answer, including helpfulness, truthfulness, honesty, and to what extent the answer follows instructions. — ### Instruction {instruction} ### Answer completion — Please act as a teacher and provide specific and constructive feedback. Besides describing the weaknesses of the answer, you should also provide specific suggestions to guide me toward understanding how to improve. Please note, however, that your suggestions should help me better complete the instructions, but you should not introduce new requirements that are not mentioned in the instructions. Your feedback should focus on enhancing my ability to think critically and respond accurately. However, never explicitly provide the reference answer, nor do polite phrases be required. Only respond with concise feedback in chat style. Finally, score the overall quality of the answer from 1 to 10, where 1 is the worst and 10 is the best. Format ### Feedback [Your feedback] Overall Score: [1-10] — ### Feedback 23
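To make the scoring pipeline above concrete, the short Python sketch below fills a critique-feedback prompt for an (instruction, answer) pair and extracts the final "Overall Score" from the judge model's reply. This is only an illustrative sketch under our own assumptions: the helper names (build_critique_prompt, parse_overall_score), the abbreviated CRITIQUE_TEMPLATE, and the stubbed judge call are ours and are not taken from the released UltraFeedback code.

import re
from typing import Optional

# Abbreviated paraphrase of the critique-feedback template shown above (Appendix G.2);
# the full template wording should be substituted in practice.
CRITIQUE_TEMPLATE = (
    "Given my answer to an instruction, your role is to provide specific and "
    "constructive feedback for me.\n\n"
    "### Instruction\n{instruction}\n\n"
    "### Answer\n{completion}\n\n"
    "Respond with concise feedback, then a line of the form 'Overall Score: [1-10]'.\n\n"
    "### Feedback\n"
)

def build_critique_prompt(instruction: str, completion: str) -> str:
    # Fill the instruction/answer slots of the template.
    return CRITIQUE_TEMPLATE.format(instruction=instruction, completion=completion)

SCORE_RE = re.compile(r"Overall Score:\s*(\d+(?:\.\d+)?)")

def parse_overall_score(judge_reply: str) -> Optional[float]:
    # Return the last "Overall Score: X" found in the reply, or None if the judge omitted it.
    matches = SCORE_RE.findall(judge_reply)
    return float(matches[-1]) if matches else None

if __name__ == "__main__":
    prompt = build_critique_prompt("Add 2 and 3.", "The answer is 6.")
    reply = "### Feedback\nThe arithmetic is wrong; 2 + 3 = 5.\nOverall Score: 3"
    print(parse_overall_score(reply))  # -> 3.0

The same parsing approach would apply to the 1-7 critique-quality ratings used in Appendix E.3; only the prompt and the expected score range change.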
synthetic_cpt
1
Towards_Semi-Automated_Construction_of_Laboratory_Test_Result_Comprehension_Knowledgebase_for_a_Patient-Facing_Application.pdf
Towards a specialization map modulo semi-orthogonal decompositions
Xiaowen Hu

Abstract. We propose a conjecture on the existence of a specialization map for derived categories of smooth proper varieties modulo semi-orthogonal decompositions, and verify it for K3 surfaces and abelian varieties.

1 Introduction
In this paper we study the following question: given a family of smooth projective varieties over, say, a punctured disc, and the knowledge of their bounded derived category of coherent sheaves, what can we say about the derived category of the limit fiber? One motivation is the well-known conjecture of Dubrovin which predicts that a smooth projective variety has semisimple quantum cohomology if and only if its derived category of coherent sheaves has a full exceptional collection (see [Dub98], and also [Bay04]). Since quantum cohomology is deformation invariant, this suggests that the property of having a full exceptional collection is also invariant under deformations. In [Hu18] we showed that this is true locally; more precisely, given a smooth proper scheme X over a locally noetherian scheme S, if for one fiber Xs0, Db(Xs0) has a full exceptional collection, then so do the geometric fibers in an open neighborhood. It remains to investigate, with the additional hypothesis that S is connected, whether Db(Xs) has a full exceptional collection for each fiber Xs. This reduces to the following:

Question 1.1. Let R be a discrete valuation ring, K its fraction field, and k its residue field. Denote S = Spec(R), the generic point of S by η, and the closed point of S by 0. Let X be a smooth projective scheme over S. Suppose Db(Xη) has a full exceptional collection. Then does Db(X0) have a full exceptional collection?

Now, given a field k, we consider the abelian group freely generated by the equivalence classes of derived categories of coherent sheaves of smooth projective varieties over k, and then mod out by relations of the form

[T] = [S1] + ... + [Sn]   (1)

whenever there is a semi-orthogonal decomposition T = ⟨S1, ..., Sn⟩. We call the resulting group the Grothendieck group of strictly geometric triangulated categories over k, and denote it by K0(sGTk). For brevity we denote the class of Db(X) in K0(sGTk) by [X]. If Db(X) has a full exceptional collection of length n, then [X] = n[Spec(k)]. So a question weaker than 1.1 is, with the same hypothesis, whether [X0] = n[Spec(k)], where n is the length of the full exceptional collection of Db(Xη). Furthermore, for this weaker question, one can weaken the hypothesis: instead of assuming that Db(Xη) has a full exceptional collection, we only assume [Xη] = n[Spec(K)]. More generally, we propose the following conjecture.

Conjecture 1.2. There is a natural group homomorphism ρsgt : K0(sGTK) → K0(sGTk).

If such a map ρsgt exists, we call it the specialization map of the Grothendieck group of strictly geometric triangulated categories. The validity of Conjecture 1.2 would be evidence for a positive answer to Question 1.1. This conjecture is also inspired by [NS17] and [KT17], where the existence of certain specialization maps is used to show that stable rationality and rationality are closed properties in a smooth proper family. For example, in [NS17], it is shown that there is a natural group homomorphism ρVar : K0(VarK) → K0(Vark). It is not hard to see that there is a canonical surjective homomorphism (see Section 2) µ : K0(Vark)/(L − 1) → K0(sGTk).
It should be believed that µ is not an isomorphism, but this problem seems still open. In this paper we propose a definition of the map ρsgt, and verify the well-definedness for K3 surfaces and abelian varieties. A more natural object to study than K0(sGTK) is the group generated by the admis- sible subcategories of derived categories of coherent sheaves of smooth projective varieties, modulo the same kind of relations (1), and one can propose a conjecture parallel to conjec- ture 1.2. A closedly related notion is the Grothendieck ring of pre-triangulated categories introduced in [BLL04]. Acknowledgement I am grateful to Lei Zhang, Zhan Li, Qingyuan Jiang, Ying Xie, Shizhuo Zhang, Xin Fu, Feng Qu and Lei Song for helpful discussions. Part of this paper is inspired by a workshop in SUSTech organized by Zhan Li. This work is supported by 34000-31610265, NSFC 34000-41030364 and 34000-41030338. 2 Definitions and the conjecture Throughout this section, denote by k a field of characteristic zero, R = k[[t]], and denote by K the fraction field of R. For a smooth proper scheme X over K, an snc model of X is a proper scheme X over R with the properties that X is regular, XK is isomorphic to X and the special fiber X0 = X ×R k is an snc divisor of X , which means n X0 = miDi [i=1 as divisors, where Di is an irreducible smooth proper scheme over k, mi are positive integers, i∈I Di for I ⊂ {1, ..., n}, then dim DI = dim X + 1 − |I| for all and if one writes DI = subsets I of {1, ..., n}. Our only use of the assumption of the characteristic 0 is that such fields admit resolution of singularities and the weak factorization theorems hold in this case ([AKMW02], [W lo03], [AT16]). In particular, snc models always exists. T 2 Definition 2.1. A k-linear triangulated category T is geometric, if there is a smooth proper scheme Y over k such that T is equivalent to an admissible triangulated subcategory of Db(Y ). A k-linear triangulated category T is strictly geometric, if there is a smooth proper scheme Y over k such that T is equivalent to Db(Y ). Let K0(GTk) (resp., K0(sGTk)) be the quotient of the free abelian group generated by the equivalence classes of geometric (resp., strictly geometric) triangulated categories modulo the relations of the form [T ] = [T1] + ... + [Tn] if there is a semi-orthogonal decomposition hS1, ..., Sni of T such that Si is equivalent to Ti for 1 ≤ i ≤ n. In particular, the class of the zero category is equal to zero. Recall the following two theorems of Orlov on the semi-orthogonal decomposition of projective bundles and blow-ups (see [Orl92], or [Huy06, chapter 8]). Theorem 2.2. Let Y be a smooth projective variety over k, E be a vector bundle of rank r over Y , and π : P(E) → Y the projective bundle. Then there is a semi-orthogonal decomposition Db(P(E)) = hπ∗Db(Y ) ⊗ O(a), ..., π∗Db(Y ) ⊗ O(a + r − 1)i (2) for every integer a. Theorem 2.3. Let X be a smooth projective variety over k and Y a smooth closed subvariety of X of codimension c ≥ 2, BlY X the blowup of X along Y . Then there is a semi-orthogonal decomposition Db(BlY X) = hD−c+1, ..., D−1, Db(X)i (3) such that Di is equivalent to Db(Y ) for −c + 1 ≤ i ≤ −1. Denote by K0(Vark) the Grothendieck group of varieties over k. Recall that K0(Vark) is the group generated by the isomorphism classes of smooth schemes over k modulo the relations [X] = [Y ] + [U ] where X is a smooth scheme over k, Y is a closed subscheme of X which is also smooth over k, and U = X − Y . 
The following theorem of Bittner [Bit04, theorem 3.1] gives an equivalent definition. Theorem 2.4. Let k be a field of characteristic zero. Then K0(Vark) is isomorphic to the group generated by the isomorphism classes of smooth proper schemes over k modulo the relations [X] − [Y ] = [BlY X] − [E] where X is a smooth proper scheme over k, Y is a smooth closed subscheme of X, BlY X is the blow-up of X along Y , and E is the corresponding exceptional divisor on BlY X. 3 Corollary 2.5. Suppose k is a field of characteristic zero. Then there is a natural surjective homomorphism of groups µk : K0(Vark)/[L − 1] → K0(sGTk) such that µk([X]) = [Db(X)]. Proof : By theorem 2.4, it suffices to show [Db(X)] − [Db(Y )] = [Db(BlY X)] − [Db(E)] (4) (5) and [Db(P1)] = 2[Db(Spec(k))]. (6) By (2.2), [Db(Pn)] = (n + 1)[Db(Spec(k))], thus (6) holds. Suppose the codimension of Y in X is c, then by (2.3), [Db(BlY X)] = (c − 1)[Db(Y )] + [Db(X)], and by (2.2), [Db(E)] = c[Db(Y )], so (5) follows. Since K0(Vark) and K0(sGTk) both are generated by the isomorphism classes of smooth proper schemes over k, µk is surjective. Remark 2.6. We have ignored the ring structure of K0(Vark). To obtain a ring structure on something like K0(sGTk) or K0(GTk), one need take into account the DG structrues (see [BLL04]), and there is then a map like (4). Now let k and K be the fields as defined at the beginning of this section. The following theorem is [NS17, prop. 3.2.1]. Theorem 2.7. There is a unique group homomorphism ρvar : K0(VarK) → K0(Vark) such that for a smooth proper scheme X over K, an snc model X of X over R with Xk = niDi, Xi∈I one has ρvar([X]) = (1 − L)|J|−1[D◦ J ], (7) where DJ = j∈J Dj, and D◦ J = DJ \( X∅6=J⊂I i∈I\J Di). T S The homomorphism ρvar is called the specialization map of the Grothendieck group of varieties. 4 Conjecture 2.8. There are natural maps and ρgt : K0(GTK ) → K0(GTk) ρsgt : K0(sGTK) → K0(sGTk). In view of theorem 2.7 and corollary 2.5, the conjecture for K0(sGT) means that there is a homomorphism ρsgt making the following diagram commutative K0(VarK ) ρvar K0(Vark) µK µk K0(sGTK) ρsgt / / K0(sGTk), (8) and since µK is surjective, such ρsgt is unique if it exists. For a field L, denote by ML the abelian group freely generated by the isomorphism classes of smooth proper schemes over L. Set In particular, PDi = Di. We define a map PDJ = P(NDJ /X ). by ρ : MK → K0(sGTk) ρ([X]) = (−1)|J|−1[Db(PDJ )], X∅6=J⊂I or equivalently, by theorem 2.2, ρ([X]) = (−1)|J|−1|J| · [Db(DJ )]. (9) X∅6=J⊂I By (7) a simple computation shows that ρvar([X]) = (−1)|J|−1[PDJ ]. X∅6=J⊂I Therefore ρ is a natural candidate for ρsgt. In other words, conjecture 2.8 for ρsgt reduces to the following. Conjecture 2.9. The homomorphism ρ : MK → K0(sGTk) factors through the canonical surjective homomorphism MK ։ K0(sGTK): ρ K0(sGTk) MK %❑❑❑❑❑❑❑❑❑❑ K0(sGTK) . 5 / / (cid:15) (cid:15) (cid:15) (cid:15) / / % 7 7 To prove the conjecture, one need to show: (i) given X as a representative of its class [Db(X)] in K0(sGTK ), ρ([X]) is independent of the choice of the snc model X ; (ii) ρ([X]) is independent of the choice of the representative X. In fact, (i) is needed for the well-definedness of ρ. We state it as follows. Theorem 2.10. ρ([X]) does not depend on the choice of X . Proof : One can show this by using the weak factorization theorem [AKMW02], [W lo03] and [AT16]. The quickest way is to apply theorem 2.7 and corollary 2.5. I have no idea how to do step (ii) at present. In this paper I only provide some evidence for it. 
More precisely, for some examples of derived equivalent smooth proper K-schemes X and X ′, I am going to verify ρ([X]) = ρ([X ′]). (10) The first kind of examples are birational derived equivalent X and X ′. Lemma 2.11. Let X be a smooth proper scheme over K. (i) Let Y be a smooth closed subscheme of X. Denote by E the exceptional divisor on the blowup BlY X. Then ρ([BlY X]) = ρ([X]) − ρ([Y ]) + ρ([E]). (ii) Let E be a vector bundle over X of rank r. Then ρ(P(E)) = rρ([X]). Proof : Use corollary 2.5 and theorem 2.7. Example 2.12 (Standard flips). Let X be a smooth projective scheme over K and Y a smooth closed subscheme of X of codimension l + 1, such that Y ∼= Pm and the normal ∼= O(−1)l+1. Then one can perform the standard flip and obtain a smooth bundle NY /X projective scheme X ′. By [BO95, theorem 3.6], X and X ′ are derived equivalent. By lemma 2.11 one deduces that ρ([X]) + lρ([Pm]) = ρ([BlY X]) = ρ([X ′]) + lρ([Pm]), so ρ([X]) = ρ([X ′]). Similarly, one can also try to check (10) for Mukai flops ([Kaw02],[Nam03]), and two non-isomorphic crepant resolutions of a Calabi-Yau 3-fold. In the following sections I will verify (10) for K3 surfaces and abelian varieties, under some additional assumptions. 3 Specialization map K3 surfaces In this section we verify (10) for derived equivalent K3 surfaces which have semistable degenerations over R. Throughout this section we consider only algebraic K3 surfaces. 6 3.1 Mukai pairings and period mappings In this subsection we recall the Mukai pairing on K3 surfaces and its relation to derived equivalences (see e.g., [BBR09, chapter 4] and [Huy06, chapter 10]), and then introduce a corresponding notion of period mapping. Let X be a K3 surface over C. The Mukai pairing on H ∗(X, Z) is defined by h(α0, α1, α2), (β0, β1, β2)i := a1.β1 − α0.β2 − α2.β0 ∈ Z, where αi, βi ∈ H 2i(X, Z). The corresponding lattice is E8(−1)⊕2 ⊕ U ⊕4. Set H 2,0(X) = H 2,0(X), H 0,2(X) = H 0,2(X), H 1,1(X) = H 0(X) ⊕ H 4(X) ⊕ H 1,1(X). e e The resulting weight two Hodge structure {H even(X, Z), H(X, Z). The following characterization of derived equivalent K3 surfaces is due to [Muk87], H p,q(X)} is denoted by e e [Orl97]. See also [Huy06, corollary 10.7, proposition 10.10]. Theorem 3.1. Two algebraic K3 surfaces X and Y over C are derived equivalent if and H(Y, Z) with respect to the Mukai only if there is a Hodge isometry between pairing. If ΦP : Db(X) → Db(Y ) is an equivalence with kernel P ∈ Db(X × Y ), the induced map H(X, Z) and e e e ΦH P : H(X, Z) → H(Y, Z), α 7→ q∗(ch(P )td(X × Y ) · p∗α) is a Hodge isometry, where p : X × Y → X, q : X × Y → Y are the two projections. e e As an analogue of the usual period domains, we introduce a notion to study the variation H(X, Z). of e Definition 3.2. Let M be the Mukai lattice E8(−1)⊕2 ⊕ U ⊕4, Q(·, ·) the corresponding symmetric bilinear pairing on HC. The Mukai period domain DM is defined to be the classifying space of the following data: (i) a filtration of complex subspaces 0 = F 3 ⊂ F 2 ⊂ F 1 ⊂ F 0 = MC of MC, such that dimC(F 2) = 1, dimC(F 1) = 23; (ii) Q(F p, F 3−p) = 0 for all p; (iii) Q(v, ¯v) > 0 for v ∈ F 2. Notice that the condition (iii) together with condition (ii) implies that F 1 ∩ F 2 = 0, thus induces a weight two integral Hodge structure on M. Proposition 3.3. 
(i) DM is an open subset (in the analytic topology) of a subvariety of a flag variety; (ii) For a family of K3 surface X → S, where S is a simply connected complex manifold, H ∗(X0, Z) ∼= M as lattices for some point 0 of S, there is a and an isomorphism canonical holomorphic map φ : S → DM , such that H(Xs, Z) ∼= φ(s), e for any point s of S. e 7 Proof: Both statements follow from the usual argument for the period map of unpolar- X /S is a ized K3 surfaces, see [Huy16, chapter 6]. For example, Xs) = R0π∗Ω2 s∈S H 0(Xs, Ω2 holomorphic subbundle of 4 i=0 Riπ∗Z ⊗Z OS, so φ is holomorphic. ` L More generally, for a family of K3 surface X → S, where S is a complex manifold which H ∗(X0, Z) ∼= M as lattices for some is not necessarily simply connected, and an isomorphism 0 ∈ S, there is a canonical holomorphic map φ : S → Γ\DM , where Γ = AutZ(M, Q), the group of automorphisms of the lattice (M, Q), or even the image of π1(S) in AutZ(M, Q). However, the quotient Γ\DM is not Hausdorff, as remarked in [Huy16, p. 104]. e 3.2 Degeneration of K3 surfaces We first recall the theorem on the degeneration of K3 surfaces due to Kulikov [Kul77] (see also [PP81], [Fri84]). Theorem 3.4. Let π : X → ∆ be a semistable degeneration of K3 surfaces. Then there exists a birational modification of this semistable degeneration, such that the restriction of π to ∆∗ = ∆\{0} remains unchanged, and KX becomes trivial. After such a modification, the degenerate fiber π−1(0) = X0 can be one of the following types: (I) X0 is a smooth K3 surface; (II) X0 = r i=0 Vi, r ≥ 1, V0 and Vr are rational surface, Vi are ruled elliptic surfaces for 1 ≤ i ≤ r − 1, Vi ∩ Vj = ∅ for |i − j| > 1, and Vi ∩ Vj is an elliptic curve for |j − i| = 1 and is a section of the ruling on Vi, if Vi is a ruled elliptic surface; S (III) X0 = Vi, and each Vi is a smooth rational surface, with all the double curves rational, and the dual graph is a triangulation of S2. S Moreover, the three types of degenerations are characterized by the monodromy action T on H 2(Xt, Z), 0 6= t ∈ ∆: (I) T = id; (II) T − id 6= 0, (T − id)2 = 0; (III) (T − id)2 6= 0, (T − id)3 = 0. In the following we say that a semistable degeneration of K3 surfaces with KX trivial, is of type (I), (II) or (III), if it is of the corresponding type described above. Proposition 3.5. Let π : X → ∆ be a type (II) semistable degeneration of K3 surfaces. Denote by LH i(X0) the limit Hodge structure on H i(Xt). Denote by E the elliptic curve which is isomorphic to the base elliptic curves of the ruled elliptic surfaces appearing in X0. Then (i) W1H 2(X0) ∼= H 1(E)⊕r as integral pure Hodge structures; (ii) W1H 2(X0) ∼= W1LH 2(X0) as integral pure Hodge structures. 8 Proof: Notice that in general the pure graded pieces in a mixed Hodge structure are rational Hodge structures. However in our case there are natural integral Hodge structures on W1H 2(X0) and W1LH 2(X0) inducing the rational ones as we will see. So it suffices to show the isomorphisms as rational Hodge structures. r i=0 Vi. For j = 1, ..., r, let Uj = r i=j Vi, and Dj = j−1 i=0 Vi and U ′ (i) Let X0 = j = Vj−1 ∩ Vj. The Mayer-Vietoris exact sequence S S S ... → H k−1(Dj) → H k(X0) → H k(Uj) ⊕ H k(U ′ j) → H k(Dj) → ... provides an extension of pure Hodge structures 0 → H 1(Dj) → W1H 2(X0) → W1H 2(Uj) ⊕ W1H 2(U ′ j) → 0. By induction on r, W1H 2(X0) is a successive extension of H 1(D1), ..., H 1(Dr). But we can choose different orders of the cuts of X0, which give the splittings. 
Hence there is a canonical isomorphism W1H 2(X0) ∼= r i=1 H 1(Di) of Hodge structures. (ii) By [Fri84, lemma 3.6], the Clemens-Schmidt sequence L H4(X0) → H 2(X0) → LH 2(X0) N =T −1 −−−−−→ LH 2(X0) is exact over Z, and is an exact sequence of mixed Hodge structures. Since N (W1LH 2(X0)) = 0, we have W1H 2(X0) ∼= W1LH 2(X0) as Hodge structures. 3.3 The specialization map Proposition 3.6. Let R be an integral domain, K the fraction field of R. Let X and Y be smooth projective schemes over R, and X = XK, Y = YK. Suppose Φ : Db(X) → Db(Y ) is an exact functor which is an equivalence of triangulated categories. Then there exists 0 6= r ∈ R and P ∈ Db(X ×R Y) such that for every point s ∈ Spec(R[ 1 r ]), the Fourier- Mukai transform ΦPs : Db(Xs) → Db(Ys) induced by Ps is an equivalence, wherer Xs and Ys are the fiber over the point ιs : Spec(κ(s)) → Spec(R[ 1 sP, and moreover, ΦPs = Φ. r ]), and Ps = Lι∗ Proof: By [Orl03, theorem 3.2.2], there exists P ∈ Db(X ×K Y ) such that Φ = ΦP . Shrinking Spec(R) if necessary, one can find P ∈ Db(X ×R Y) such that PK = P . Let L be a very ample line bundle over X. Shrinking Spec(R) if necessary, there exists a relatively ample line bundle L on X over R, such that L restricts to L. Set d E = L⊗i, Mi=0 where d = dim X = dim Y . By [Orl09, theorem 4], Es is a classical generator of Db(Xs), namely, the smallest triangulated subcategory of Db(Xs) containing Es and closed under isomorphisms and taking direct summands, is Db(Xs). Set Q = P ∨ ⊗ L 2ωY [d], Q = P ∨ ⊗ p∗ L p∗ 2ωY/R[d]. 9 Then Q = QK, and for every point s ∈ Spec(R), the Fourier-Mukai transform ΦQs : Db(Ys) → Db(Xs) is a left adjoint of ΦPs : Db(Xs) → Db(Ys). Moreover, by hypothesis, ΦQK = ΦQ is an inverse of ΦPK = ΦP . Thus the adjoint map ΦQ ◦ ΦP (EK ) → EK is an isomorphism. By the semi-continuity theorem (for perfect complexes, [EGAIII, 7.7.5]), shrinking Spec(R) if necessary, for every point s ∈ Spec(R), the adjoint map ΦQs ◦ ΦPs(Es) → Es is an isomorphism. By induction on the generating time of the objects of Db(Xs) with respect ot Es, this implies that the adjoint morphism of functors ΦQs ◦ ΦPs → idDb(Xs) is an isomorphism. Thus ΦPs is fully faithful. Finally starting from a very ample line bundle on Y , and shrinking Spec(R) if necessary, we find that ΦQs is also fully faithful, and we are done. Theorem 3.7. Let R be the local ring C[T ](T ), and K = C(T ). Let X and Y be smooth projective surfaces over K with trivial canonical bundles. Suppose that X and Y are de- rived equivalent, and both have semistable degenerations over R. Then ρ([X]) = ρ([Y ]) in K0(sGTC). Proof: Let XR and YR be semistable degenerations of X and Y over R, respectively. Denote the point (T ) of Spec(R) and the point (T ) of Spec(C[T ]), both by 0. Then there exists an affine open subset U of Spec(C[T ]) and schemes X and Y over U , such that (i) restricting to U \{0}, X and Y are smooth, and each geometric fiber is a K3 surface; (ii) the base changes of X and Y to Spec(R) are isomorphic to XR and YR, respectively. By proposition 3.6, there is an open subset V of U containing 0, and P ∈ Db(X ×U Y ×U (V − {0})) such that the Fourier-Mukai transform ΦPt : Db(Xt) → Db(Yt) is an equivalence, for all t ∈ V −{0}. Without loss of generality we assume V = U . Consider the analytic topology of U . Taking an open disk ∆ of U containing 0, and consider X and Y restricting over ∆, we can apply the result of the previous subsections to study the fiber X0 and Y0. 
By theorem 2.10, birational modifications preserving X − X0 does not change ρ([X]). So by the first statement of theorem 3.4, we can assume KX∆ and KY∆ trivial, such that X0 and Y0 are described by theorem loc. cit. For a point t ∈ ∆ − 0, let ΦH ΦPt. By theorem 3.1, ΦH Pt : Pt : H ∗(Xt) → H ∗(Yt) the map on cohomology induced by H(Yt, Z) is a Hodge isometry. Recall that H(Xt, Z) → ΦH Pt(α) = qt∗(ch(Pt) e e td(Xt × Yt) · p∗ t α). p 10 Since ch(Pt) we have a commutative diagram p td(Xt × Yt) is a restriction of an algebraic cohomology class on X∆∗ ×∆∗ Y∆∗, H 2(Xt) ⊕ H 0(Xt) ⊕ H 4(Xt) NX / H 2(Xt) ⊕ H 0(Xt) ⊕ H 4(Xt) (11) ΦH Pt ΦH Pt H 2(Yt) ⊕ H 0(Yt) ⊕ H 4(Yt) NY / / H 2(Yt) ⊕ H 0(Yt) ⊕ H 4(Yt). So the smallest integer i such that N i cases separately. X = 0 is equal to that for NY . We consider the three (i) NX = NY = 0. By theorem 3.4, X0 and Y0 are K3 surfaces. By proposition 3.3, there H(Y0, Z). So by theorem 3.1, X0 and Y0 H(X0, Z) and is a Hodge isometry between are derived equivalent, so ρ([X]) = ρ([Y ]). (ii) NX 6= 0, NY 6= 0, N 2 X = N 2 Y = 0. Then with the notation of theorem 3.4, we have e e ρ([X]) = [V0] + [Vr] + [Vi] − 2r[E] = [V0] + [Vr] − 2[E]. r−1 Xi=1 Since e(X) = 0, we have e(V0) + e(Vr) − 2e(E) = 0, thus e(V0) + e(Vr) = 0. Since V0 and Vr are rational surfaces, we have [V0] + [Vr] = 0. Therefore ρ([X]) = −2[E]. It suffices to show EX ∼= EY . The diagram (11) induces an isomorphism of Hodge structures NX ( H(Xt, Z)) = NX(H 2(Xt, Z)) and NY ( H(X, Z)) ∼ −→ NY ( H(Y, Z)) H(Yt, Z)) = NY (H 2(Yt, Z)). By defi- But NX( nition of the weight filtration on LH 2(Xt) and LH 2(Yt), NX (H 2(Xt)) = W1LH 2(Xt) and NY (H 2(Yt)) = W1LH 2(Yt), as pure Hodge structures. So by proposition 3.5, e EX ∼= EY . e e e (iii) N 2 X 6= 0, N 2 Y 6= 0, N 3 X = N 3 Y = 0. Since e(X0) = 0 and all the components of X0 are rational, ρ([X]) = 0. The same holds for Y0. So we are done. 4 Specialization map for abelian varieties In the final result In this section we verify (10) for derived equivalent abelian varieties. (corollary 4.14) we need to assume that k is an algebraically closed field, because theorem 4.3 need this assumption. However we still state the intermediate statements in a more general setting. 11 / (cid:15) (cid:15) (cid:15) (cid:15) 4.1 Derived equivalences of abelian abelian varieties In this subsection we collect some theorems on derived equivalent abelian varieties due to Mukai, Polishchuk and Orlov. Our references are [Muk87b], [Pol96], [Orl02], and also [Huy06, Chapter 9]. Theorem 4.1 ([Muk87b]). Let S be a scheme, p : A → S an abelian scheme, and q : At → S its dual abelian scheme: A ×S At A πA {✇✇✇✇✇✇✇✇✇✇ $❍❍❍❍❍❍❍❍❍❍ p πAt $❍❍❍❍❍❍❍❍❍ z✉✉✉✉✉✉✉✉✉✉ q At S Denote by P the Poincar´e invertible sheaf on A ×S At. Then the Fourier-Mukai functor . Φ : Db(A) → Db(At), Φ(E) = RπAt∗(Lπ∗ A(E) ⊗ L P) is an equivalence. Let A, B be abelian schemes over S. Suppose f : A×S At → B ×S Bt is a homomorphism of abelian varieties. Write f as a matrix f = (cid:18) α β δ γ (cid:19) where α : A → B, β : At → B, γ : A → Bt, and δ : At → Bt. Define a homomorphism ˜f : B ×S Bt → A ×S At by ˜f = (cid:18) δt −βt αt −γt . (cid:19) Definition 4.2. An isomorphism f : A×S At → B ×S Bt is called a symplectic isomorphism if f −1 = ˜f . Theorem 4.3 ([Pol96]). Let k be an algebraically closed field, A and B two abelian varieties over k. If there is a sympelctic isomorphism f : A ×k At → B ×k Bt, then A and B are derived equivalent. Theorem 4.4 ([Orl02]). Let k be a field, A and B two abelian varieties over k. 
If A and B are derived equivalent, then there exists a sympelctic isomorphism f : A ×k At → B ×k Bt. 4.2 Degeneration and Mumford-K¨unnemann construction From now on, we fix a complete discrete valuation ring R, and let m be the maximal ideal of R, K be the fraction field of R, k the residue field of R, and denote S = Spec(R). Denote by η and 0 the generic and the closed point of S, respectively. In this subsection, we recall some notions in the theory of degeneration of abelian va- rieties. Our references are [FC90, chapter 2, 3], [Lan13, chapter 3, 4]. Then we state a theorem of K¨unnemann [K¨unn98] on the construction of an snc model of an abelian vari- ety over K which admits a split ample degeneration, or called the Mumford-K¨unnemann construction (see also [Mum72]). 12 { $ $ z Definition 4.5. Let A be a abelian variety over K. A semistable degeneration of A over S is a semiabelian scheme G over S with an isomorphism Gη ∼= A. By definition, there is an extension 0 → T0 → G0 → A0 → 0 where A0 is an abelian variety over k, and T0 is a torus over k. If T0 is a split torus, G is called a split degeneration of A. Definition 4.6. An ample degeneration of A is a pair (G, L ) where G is a semiabelian degeneration of A over S and L is a cubical invertible sheaf on G such that Lη is ample. In fact the condition implies that L is relatively ample. Denote Si = Spec(R/mi). For a semiabelian scheme G over S, denote Gfor = lim G×S Si, and Lfor the corresponding formal completion of L. For an ample degeneration (G, L), there is the associated Raynaud extension 0 → T → G π −→ A → 0, G is an algebraization of the formal scheme Gfor, T is a torus over S, and such that is an abelian scheme over S, and there is a cubical ample invertible sheaf e algebraization of Lfor. Definition 4.7. A Split ample degeneration of A is a triple (G, L, M), where G is a split degeneration of A, (G, L) is an ample degeneration, and M is a cubical ample invertible sheaf on A L which is the e A such that π∗M ∼= L. e e e e e By the rigidity of tori [DG70, X. theorem 3.2], T0 is split implies that T is split. More- over, the character group of T is a constant abelian sheaf over S, and we denote the associated constant group by X. There is a notion of dual semiabelian scheme Gt over S, and the corresponding torus T t is also split. We denote the constant character group of T t by Y . R × R>0) ∪ {0}. There is a natural action of Definition 4.8. Consider the cone C = (X ∗ Y on C via addition. A Y -admissible polyhedral cone decomposition of C is a (possibly infinite) rational polyhedral cone decomposition {σ}α∈I of C such that the collection of the cones σα is invariant under the action of Y and there are only finitely many orbits. Theorem 4.9. ([K¨unn98, theorem 3.5]) Let (G, L , M ) be a split ample degeneration. Then there is a projective regular model P , and an admissible cone decomposition {σα}α∈I of C , and we denote by I + Y the corresponding orbit space with the orbit of the zero cone removed, such that (i) the reduced special fiber (P0)red is a strict normal crossing divisor on P ; (ii) (P0)red has a natural stratification with strata Gσα for α ∈ I + Y , where Gσα is a semi- abelian scheme fitting into an exact sequence 0 → Tσα → Gσα → A0 → 0, where A0 is the abelian part of the Raynaud extension, and Tσα is a split torus; (iii) the closure Pσα of the stratum Gσα is the disjoint union of all Gσβ such that α is a face of β, and Pσα = Gσα ×Tσα Zσα. 
is a contraction product, where Tσα → Zσα an open torus imbedding into a smooth projective toric variety. 13 4.3 Degeneration and derived equivalence Proposition 4.10. Let A be an abelian varieties over K, which has a split degeneration over R. Then A has a split ample degeneration over R. Proof : By the assumption there is a semi-abelian scheme G over R such that GK ∼= A and G0 fits into an extension 0 → T0 → G0 → A0 → 0 such that T0 is a split torus over k and A0 is an abelian variety over k. By [MB85, I, 2.6] and [Ray70, XI, 1.13] (see also [Lan13, remark. 3.3.3.9]), there is an ample cubical invertible sheaf L over G. Thus L ⊗ [−1]∗L is also an ample cubical invertible sheaf over G. Let 0 → T → G → A → 0 be the corresponding Raynaud extension. Then by [Lan13, cor. 3.3.3.3, prop. 3.3.3.6] and e [Ray70, XI, 1.11], the invertible sheaf Lfor ⊗ [−1]∗Lfor over Gfor is isomorphic to an ample pullback Mfor over Afor which is algebraizable. This provides a split ample degeneration of A. e e e e Lemma 4.11. Let 0 → T → G → A → 0 be an extension of an abelian variety A by a split torus, over a field k, and T ֒→ Z be an open torus embedding of T into a smooth complete toric variety Z. Then in K0(Vark) one has [G ×T Z] = [Z] · [A]. Proof : By the assumption, G is an fppf T -torsor over A. Since T is split, G is a prod- uct of fppf Gm-torsors over A, thus it is also a product of Zariski Gm-torsors, by Hilbert theorem 90. So there is a locally closed stratification {Uα} of A such that (G ×T Z)|Uα is isomorphic to Z × Uα, hence the conclusion. Theorem 4.12. Let (R, m) be a complete discrete valuation ring, K the fraction field of R, and k the residue field of R. Let A and B be two abelian varieties over K, which are derived equivalent. Then the following holds. (i) A has semistable reduction if and only if B has semistable reduction. (ii) A has a split degeneration if and only if B has a split degeneration. (iii) In case of (i), denote the abelian part of the special fiber of the semistable reduction of ∼ A (resp., B) by A0 (resp., B0). Then there is a symplectic isomorphism A0 ×k At −→ 0 B0 ×k Bt 0. Proof: (i) By theorem 4.4 there is a symplectic isomorphism A ×K At ∼= B ×K Bt. Then by [MP17, proposition 2.10], A and B are isogenous. Thus the conclusion (i) follows from [BLR90, §7.3, corollary 3]. (ii) Suppose A has a split ample degeneration over R. By [FC90, §2.2], At has a split ample degeneration over R. Let A (resp., A ′) be the N´eron model of A (resp., of At) 14 ∼= (B ×R B′)◦ over R. By the functoriality of N´eron models, A ×R A ′ is the N´eron model of A ×K At. By theorem 4.4, A ×K At is isomorphic to B ×K Bt. Let B (resp., B′) be the N´eron model of B (resp., of Bt) over R. Thus A ×R A ′ ∼= B ×R B′, so their special fibers have isomorphic identity components, i.e. (A ×R A ′)◦ k. By (i), A, At, B, Bt all k have semistable reductions over R. Thus (Ak)◦, (A ′)◦, (B)◦ and (B′)◦ are all semi-abelian varieties over k, hence are geometrically connected. Thus by [EGAIV, 4.5.8] (Ak)◦ ×k (A ′ k)◦ and (Bk)◦ ×k (B′ k)◦ are connected and thus are isomorphic to (A ×R A ′)◦ k. Let T (resp. T ′) be the torus part of Bk (resp. B′ k). Then T ×k T ′ is a split torus. Consider the character group X(T ) (resp. X(T ′)) of T (resp. T ′), which are ´etale sheaves of torsion free abelian groups of finite type. The product X(T ) × X(T ′) is the character group of T ×k T ′, and is therefore a constant sheaf by the splitness of T ×k T ′. 
Considering the action of Gal(ks/k) on X(T )(ks) and X(T ′)(ks), one sees that both X(T ) and X(T ′) are constant sheaves over k´et, and therefore T and T ′ are split tori over k. (iii) By theorem 4.4 there is an isomorphism f : A ×K At ∼−→ B ×K Bt of the form f = (cid:18) α β δ γ (cid:19) such that δt −βt αt −γt α β δ γ · (cid:19) (cid:18) (cid:19) = id. (cid:18) By the functoriality of N´eron models the isomorphism f extends to an isomorphism F : A ×R A ′ ∼−→ B ×R B′ of the form such that (cid:18) δt − βt αt γt − e e · (cid:19) F = α γ e e β δ ! e e α γ e e β δ ! e e k)◦ and B◦ = id. Considering the special fibers and using the proof of (ii), one obtains a symplectic isomor- e k ×k (A ′ phism between the abelian parts of A ◦ k ×k (B′ k)◦. e Proposition 4.13. Let A and B be two abelian varieties over K, which are derived equiv- alent, and suppose that A has a split degeneration over R. Then A and B have snc models P and Q over R, respectively, such that either P0 and Q0 are symplectically isomorphic abelian varieties over k, or [P0] = [Q0] = 0 in K0(sGTk). Proof : By theorem 4.12 (ii) and proposition 4.10, both A and B has split ample degeneration over R. By theorem 4.12 (iii), if A has good reduction over R, then so does B, and A0 and B0 are symplectic isomorphic. If A does not have a good reduction over R, then by theorem 4.9 and lemma 4.11, [P0] = (−1)jα−1jα[A0] × [Zσα] Xa∈I + Y in K0(Vark), where jα = dim A + 1 − dim A0 − dim Zσα = dimR C − dimR σα. 15 Since each face of σα appears in the above sum, a simple manipulation shows that (−1)jα−1jα[Zσα] Xa∈I + Y is equal to a linear combination of split tori in K0(Vark), so [P0] ≡ 0 mod (L − 1). Then by corollary 2.5 and the definition (9) of ρ , [P0] = 0 in K0(sGTk). By theorem 4.12, B also has a split ample but not good degeneration over R, thus one has [Q0] = 0 in K0(sGTk), too. So we are done. Corollary 4.14. Let (R, m) be a complete discrete valuation ring, K the fraction field of R, and k the residue field of R. Suppose that k is algebraically closed of characteristic 0. Let A and B be two abelian varieties over K, which are derived equivalent. Suppose A has a semistable reduction over R. Then ρ([A]) = ρ([B]). Proof : By theorem 4.12 (i), both A and B semistable reductions over R, which are automatically split degenerations because k is algebraically closed. Applying proposition 4.13 and theorem 4.3 we obtain the conclusion. 5 Open problems 1. Although our (conjectural) definition of ρsgt does not assume the existence of semistable degeneration over R, in the above verifications we need to assume this to apply the results for the degeneration of these varieties. It is natural to make the following conjecture. Theorem 4.12 provides an example for it. Conjecture 5.1. Let R be a DVR, K its fraction field. Let X and Y be derived equivalent smooth projective varieties over K. Then X has semistable degeneration (resp., good reduction) over R if and only if Y has semistable degeneration (resp., good reduction) over R. This suggests to take into consideration the Galois action on the derived categories, and ask whether there is a N´eron-Ogg-Shafarevich-Grothendieck type criterion for the types of degenerations. 2. Does there exist a smooth projective variety X over k such that [X] = m[Spec(k)] in K0(sGT) but Db(X) does not have a full exceptional collection? If there are such varieties, are their quantum cohomology semisimple? The limit fibers of a family of varieties with full exceptional collections are candidates for this. 
References [AKMW02] Abramovich, Dan; Karu, Kalle; Matsuki, Kenji; W lodarczyk, Jaros law. Tori- fication and factorization of birational maps. J. Amer. Math. Soc. 15 (2002), no. 3, 531–572. 16 [AT16] Abramovich, D; Temkin, M. Functorial factorization of birational maps for qe schemes in characteristic 0. Preprint, arXiv:1606.08414. [BBR09] Bartocci, Claudio; Bruzzo, Ugo; Hern´andez Ruip´erez, Daniel. Fourier-Mukai and Nahm transforms in geometry and mathematical physics. Progress in Mathematics, 276. Birkh¨auser Boston, Inc., Boston, MA, 2009. [Bay04] Bayer, Arend. Semisimple quantum cohomology and blowups. Int. Math. Res. Not. 2004, no. 40, 2069–2083. [Bit04] Franziska Bittner. The universal Euler characteristic for varieties of characteristic zero. Compos. Math., 140(4):1011–1032, 2004. [BLL04] Bondal, Alexey I.; Larsen, Michael; Lunts, Valery A. Grothendieck ring of pretri- angulated categories. Int. Math. Res. Not. 2004, no. 29, 1461–1495. [BO95] Bondal A, Orlov D. Semiorthogonal decomposition for algebraic varieties. arXiv preprint alg-geom/9506012, 1995. [BLR90] Bosch, Siegfried; L¨utkebohmert, Werner; Raynaud, Michel. N´eron models. Ergeb- nisse der Mathematik und ihrer Grenzgebiete, 21. Springer-Verlag, Berlin, 1990. [DG70] M. Demazure and A. Grothendieck (eds.), Sch´emas en groupes (SGA 3), II: Groupes de type multiplicatif, et structure des sch´emas en groupes g´en´eraux, Lecture Notes in Mathematics, vol. 152, Springer-Verlag, 1970. [Dub98] Dubrovin, B. Geometry and analytic theory of Frobenius manifolds. Proceedings of the International Congress of Mathematicians, Vol. II (Berlin, 1998). Doc. Math. 1998, Extra Vol. II, 315–326. [EGAIII] Grothendieck A. El´ements de g´eom´etrie alg´ebrique (r´edig´es avec la collaboration III. Etude cohomologique des faisceaux coh´erents, Premiere de Jean Dieudonn´e): partie. Publications Math´ematiques de l’IHES, 1961, 11: 5–167. [EGAIV] Grothendieck, A. ´El´ements de g´eom´etrie alg´ebrique (r´edig´es avec la collaboration de Jean Dieudonn´e). IV. ´Etude locale des sch´emas et des morphismes de sch´emas. II. (French) Inst. Hautes ´Etudes Sci. Publ. Math. No. 24 1965. [FC90] Faltings, Gerd; Chai, Ching-Li. Degeneration of abelian varieties. With an appendix by David Mumford. Ergebnisse der Mathematik und ihrer Grenzgebiete, 22. Springer- Verlag, Berlin, 1990. [Fri84] Friedman, Robert. A new proof of the global Torelli theorem for K3 surfaces. Ann. of Math. (2) 120 (1984), no. 2, 237–269. [Hu18] Hu, Xiaowen. Deformation of exceptional collections. Preprint arXiv:1805.04050, 2018. [Huy06] Huybrechts, Daniel. Fourier-Mukai transforms in algebraic geometry. Oxford Uni- versity Press, 2006. [Huy16] Huybrechts, Daniel. Lectures on K3 surfaces. Cambridge Studies in Advanced Mathematics, 158. Cambridge University Press, Cambridge, 2016. 17 [Kaw02] Kawamata, Yujiro. D-equivalence and K-equivalence. J. Differential Geom. 61 (2002), no. 1, 147–171. [KT17] Kontsevich M, Tschinkel Y. Specialization of birational types. arXiv preprint arXiv:1708.05699, 2017. [Kul77] Kulikov, Vik. S. Degenerations of K3 surfaces and Enriques surfaces. Izv. Akad. Nauk SSSR Ser. Mat. 41 (1977), no. 5, 1008–1042, 1199. [K¨unn98] K¨unnemann, Klaus. Projective regular models for abelian varieties, semistable reduction, and the height pairing. Duke Math. J. 95 (1998), no. 1, 161–212. [Lan13] Lan, Kai-Wen. Arithmetic compactifications of PEL-type Shimura varieties. Lon- don Mathematical Society Monographs Series, 36. Princeton University Press, Prince- ton, NJ, 2013. 
[MP17] L´opez Mart´ın, Ana Cristina; Tejero Prieto, Carlos. Derived equivalences of Abelian varieties and symplectic isomorphisms. J. Geom. Phys. 122 (2017), 92–102. [MB85] Moret-Bailly, Laurent. Pinceaux de vari´et´es ab´eliennes. Ast´erisque No. 129 (1985). [Muk87] Mukai, S. On the moduli space of bundles on K3 surfaces. I. Vector bundles on algebraic varieties (Bombay, 1984), 341–413, Tata Inst. Fund. Res. Stud. Math., 11, Tata Inst. Fund. Res., Bombay, 1987. [Muk87b] Mukai, Shigeru. Fourier functor and its application to the moduli of bundles on an abelian variety. Algebraic geometry, Sendai, 1985, 515–550, Adv. Stud. Pure Math., 10, North-Holland, Amsterdam, 1987. [Muk94] Mukai, Shigeru. Abelian variety and spin representation, in: Proceedings of the Symposium Hodge Theory and Algebraic Geometry, Sapporo, 1994, University of Warwick, 1998, pp. 110–135. [Mum72] D. Mumford. An analytic construction of degenerating abelian varieties over com- plete rings, Compositio Math. 24 (1972), 239–272. [Nam03] Namikawa, Yoshinori. Mukai flops and derived categories. J. Reine Angew. Math. 560 (2003), 65–76. [NS17] Nicaise, J., Shinder, E. The motivic nearby fiber and degeneration of stable ratio- nality. arXiv preprint arXiv:1708.02790. [Orl92] Orlov, D. O. Projective bundles, monoidal transformations, and derived categories of coherent sheaves. Izv. Ross. Akad. Nauk Ser. Mat. 56 (1992), no. 4, 852–862; translation in Russian Acad. Sci. Izv. Math. 41 (1993), no. 1, 133–141. [Orl97] Orlov, D. O. Equivalences of derived categories and K3 surfaces. Algebraic geometry, 7. J. Math. Sci. (New York) 84 (1997), no. 5, 1361–1381. [Orl02] Orlov, D. O. Derived categories of coherent sheaves on abelian varieties and equiv- alences between them. Izv. Ross. Akad. Nauk Ser. Mat. 66 (2002), no. 3, 131–158; translation in Izv. Math. 66 (2002), no. 3, 569–594. 18 [Orl03] Orlov, D. O. Derived categories of coherent sheaves and equivalences between them. (Russian) Uspekhi Mat. Nauk 58 (2003), no. 3(351), 89–172; translation in Russian Math. Surveys 58 (2003), no. 3, 511–591. [Orl09] Orlov, Dmitri. Remarks on generators and dimensions of triangulated categories. Mosc. Math. J. 9 (2009), no. 1, 153–159. [PP81] Persson, Ulf; Pinkham, Henry. Degeneration of surfaces with trivial canonical bun- dle. Ann. of Math. (2) 113 (1981), no. 1, 45–66. [Pol96] Polishchuk, A. Symplectic biextensions and a generalization of the Fourier-Mukai transform. Math. Res. Lett. 3 (1996), no. 6, 813–828. [Ray70] Raynaud, Michel. Faisceaux amples sur les sch´emas en groupes et les espaces ho- mog`enes. Lecture Notes in Mathematics, Vol. 119 Springer-Verlag, Berlin-New York 1970. [Sch73] Schmid, Wilfried. Variation of Hodge structure: the singularities of the period mapping. Invent. Math. 22 (1973), 211–319. [W lo03] W lodarczyk, Jaros law. Toroidal varieties and the weak factorization theorem. In- vent. Math. 154 (2003), no. 2, 223–331. School of Mathematics, Sun Yat-sen University, Guangzhou 510275, P.R. China Email address: [email protected] 19
TrueTeacher: Learning Factual Consistency Evaluation with Large Language Models

Zorik GekhmanT,G,∗ Jonathan HerzigG Roee AharoniG Chen ElkindG Idan SzpektorG
T Technion - Israel Institute of Technology G Google Research
[email protected] {zorik|jherzig|roeeaharoni|chenel|szpektor}@google.com

Abstract

Factual consistency evaluation is often conducted using Natural Language Inference (NLI) models, yet these models exhibit limited success in evaluating summaries. Previous work improved such models with synthetic training data. However, the data is typically based on perturbed human-written summaries, which often differ in their characteristics from real model-generated summaries and have limited coverage of possible factual errors. Alternatively, large language models (LLMs) have recently shown promising results in directly evaluating generative tasks, but are too computationally expensive for practical use. Motivated by these limitations, we introduce TrueTeacher, a method for generating synthetic data by annotating diverse model-generated summaries using a LLM. Unlike prior work, TrueTeacher does not rely on human-written summaries, and is multilingual by nature. Experiments on the TRUE benchmark show that a student model trained using our data substantially outperforms both the state-of-the-art model with similar capacity and the LLM teacher. In a systematic study, we compare TrueTeacher to existing synthetic data generation methods and demonstrate its superiority and robustness to domain-shift. We also show that our method generalizes to multilingual scenarios. Lastly, we release our large-scale synthetic dataset (1.4M examples), generated using TrueTeacher, and a checkpoint trained on this data.1

1 Introduction

Generative summarization models are prone to generate summaries that are factually inconsistent with respect to the corresponding input documents (Goodrich et al., 2019; Kryscinski et al., 2019), limiting their applicability in real-world scenarios.

∗Work done during an internship at Google Research.
1Our dataset and model are available at: https://github.com/google-research/google-research/tree/master/true_teacher

Figure 1: A real example from our data generation process. We fine-tune summarization models with different capacities, and use them to produce a diverse set of model-generated summaries of CNN/DM articles, which we label for consistency using a 540B LLM.

Since factual consistency evaluation could be cast as a Natural Language Inference (NLI) task, NLI models are often used to evaluate consistency (Falke et al., 2019a; Maynez et al., 2020; Laban et al., 2022). However, NLI models exhibit limited success in evaluating factual consistency in summarization (Falke et al., 2019b; Kryscinski et al., 2020), since NLI datasets lack the entailment phenomena that naturally arise in abstractive summarization (Khot et al., 2018). For example, single-sentence premise-hypothesis pairs are shorter than document-summary pairs (Mishra et al., 2021; Schuster et al., 2022).

To address this domain mismatch, previous work proposed various approaches for generating synthetic training data (Kryscinski et al., 2020; Yin et al., 2021; Utama et al., 2022; Balachandran et al., 2022). The data is typically generated by perturbing human-written summaries to introduce factual inconsistencies.
While these perturbations are ef- fective, they are limited to factual error categories that can be covered by the perturbation logic. In addition, since simulating factual errors is chal- lenging, such perturbations may fail to introduce factual errors, leading to incorrect labels.2 Finally, since the synthetic summaries are based on human- written summaries, they may differ in style from real model-generated summaries, which can reduce the effectiveness of the synthetic data. An alternative approach to augmenting NLI mod- els with synthetic data, is to directly prompt large language models (LLMs) to evaluate factual consis- tency. Recently, there has been a growing evidence for the effectiveness of LLMs in evaluating gener- ative tasks (Kocmi and Federmann, 2023; Wang et al., 2023; Liu et al., 2023), including factual consistency in summarization (Chen et al., 2023). However, LLMs are still too computationally ex- pensive to be heavily used in practice. To make the best of both worlds we propose TrueTeacher, a simple and effective synthetic data generation method that leverages model-generated summaries and the reasoning abilities of LLMs (Huang and Chang, 2022). In TrueTeacher, we first train a diverse collection of summarization models with different capacities. Next, we use these mod- els to summarize each document in a given corpus (Figure 1). The resulting document-summary pairs are then annotated by prompting a LLM to predict the corresponding factual consistency label. We apply TrueTeacher using FLAN-PaLM 540B (Chung et al., 2022) to generate a large-scale syn- thetic dataset, which is used to train a student model. Experiments on the summarization sub- set of the TRUE benchmark (Honovich et al., 2022) show that augmenting existing NLI data with TrueTeacher data improves a state-of-the-art model’s ROC-AUC from 82.7 to 87.8, while main- taining similar model capacity. The resulting model even outperforms its LLM teacher, despite the latter having a ×50 larger capacity. We also compare TrueTeacher to existing syn- thetic data generation methods. To this end, we design a systematic study to re-evaluate existing methods with a "fair comparison" in a challeng- ing setting. Our results indicate that existing ap- proaches fail to generalize to documents derived from a distribution different from the one used for 2As we also demonstrate in §4.3. synthetic data generation. In contrast, TrueTeacher demonstrates robustness by successfully generaliz- ing to documents from new domains. Finally, we apply TrueTeacher to generate multilingual synthetic data. While existing data generation methods are often limited to English (Utama et al., 2022; Balachandran et al., 2022), TrueTeacher can use a multilingual LLM. Results on the mFACE dataset (Aharoni et al., 2022), show improvements on 35 out of 45 languages when us- ing our method. This demonstrates the usefulness of multilingual synthetic data and the effectiveness of TrueTeacher in generating such data. To summarize, this work includes the following contributions: • We introduce TrueTeacher, a synthetic data generation approach based on annotating model-generated summaries with LLMs, and demonstrate its effectiveness and robustness. • We evaluate FLAN-PaLM 540B on the task of factual consistency evaluation and show that its knowledge can be distilled into a signifi- cantly smaller model using our method. 
• We conduct a systematic study, re-evaluating existing synthetic data generation methods for the task in an apples-to-apples comparison and identify their limitations.

• We perform the first experiment in generating multilingual synthetic data for factual consistency, and demonstrate its usefulness.

• We release a large-scale dataset comprised of 1.4 million TrueTeacher examples, and verify its quality with human evaluation. We additionally release a state-of-the-art consistency evaluation model trained on this data.1

2 TrueTeacher

In this section we describe TrueTeacher, our approach for generating synthetic examples for the task of factual consistency evaluation in summarization. Our main motivation is to use factual inconsistencies that occur in real model-generated summaries, instead of relying on perturbed human-written summaries. To this end, we generate a diverse set of summaries using generative summarization models of different capacities, and leverage a LLM to label them for factual consistency. Some of the generated summaries are expected to contain factual errors, and we hypothesize
We train a col- lection of generative summarization models, use them to summarize documents and label the resulting sum- maries for factual consistency using a LLM. that a strong-performing LLM can generalize to the task and label them with sufficient quality to be use- ful for training. The usage of model-generated sum- maries not only yields more realistic texts, but also allows to potentially include rare errors, which can be harder to incorporate with perturbation logic. Our data generation process is illustrated in Figure 2. First, we train a variety of summa- rization models (upper diagram). We use a col- lection of one or more summarization training sets T = {sd1, sd2, . . . , sdn} and different pre- trained LM s = {lm1, lm2, . . . , lmm} to fine- tune a collection of summarization models SM = {sm1, sm2, . . . , smk}, where k = n × m.3 Using different pretrained LMs allows to diversify the expected consistency errors, e.g., errors made by large or small models. The choice of summariza- tion training sets allows to control for the nature of the resulting summaries, e.g., focusing on abstra- tive training sets to increase output abstractiveness. Next, we obtain model-generated summaries and annotate them (lower diagram). We choose a docu- ments corpus D = {d1, d2, . . . , dr} and use all the summarization models in SM to summarize all the documents in D, resulting in a collection of model- generated output summaries O = {s1,1, . . . sr,k}, where si,j is the summary of document di gener- ated by summarization model smj. TrueTeacher 3We note that the pretrained LM s here refer to the mod- els that we are fine tuning for summarization, and they are different from the LLM that we use as the teacher. Summaries Source # Consistent # Inconsistent T5-11B T5-3B T5-large T5-base T5-small Total 233,815 229,097 195,681 161,177 88,129 907,899 39,423 45,662 81,986 118,480 190,012 475,563 Table 1: Our generated dataset statistics. Extensive implementation details about our FLAN- PaLM usage are provided in §A.1 and §A.2. Applying TrueTeacher in this setup resulted in ∼1.4M synthetic training examples (Table 1), which we use to train a student model for factual consistency evaluation.8 In §4, we provide evi- dence for the dataset’s quality through human eval- uation (§4.4), its usefulness for improving NLI models in a challenging setting (§4.1), and its supe- riority over other existing synthetic datasets (§4.2). In early experiments, we also explored data fil- tering based on prompting FLAN-PaLM for self- verification (details in §A.5). This resulted in an increase in the labeling accuracy. Yet, surprisingly, training the student model on the filtered data did not improve performance in comparison to train- ing on the full dataset.9 Thus, for simplicity, we conduct experiments using the full dataset. 3.2 Evaluation To compare between consistency evaluation mod- els, we use the TRUE benchmark (Honovich et al., 2022), focusing on its summarization subset: MNBM (Maynez et al., 2020), FRANK (Pagnoni et al., 2021), SummEval (Fabbri et al., 2020), QAGS-X and QAGS-C (Wang et al., 2020). For additional details about these datasets, we refer the reader to Honovich et al. (2022). Following Honovich et al., we use ROC-AUC in a binary clas- sification setting as our evaluation metric. 
3.3 Baselines We compare the performance of factual consistency evaluation models trained on TrueTeacher data, against the top performing models on the TRUE benchmark: QuestEval (Scialom et al., 2021), Q2 (Honovich et al., 2021), SUMMACZS (Laban et al., 2022), T5-11B fine tuned on ANLI (Honovich 8Implementation details for our trained models are in §A.3. 9This could be attributed to the high-quality of the initial labels and the student model’s robustness to noise. et al., 2022), WeCheck (Wu et al., 2023), and the Ensemble from Honovich et al. (2022).10 We also compare TrueTeacher data generation mechanism to existing methods for synthetic data generation. We consider the following approaches: DocNLI (Yin et al., 2021). Reformatted NLI, question answering and summarization datasets, in- cluding the CNN/DM corpus. The summarization- based positive examples are based on concatenated gold summaries. The negative examples are then generated using word/entity replacements. FactCC (Kryscinski et al., 2020). The docu- ments are from CNN/DM. The consistent sum- maries are randomly sampled sentences from the document, which are optionally injected with noise or paraphrased. The inconsistent summaries are ob- tained by rule-based transformations, such as sen- tence negation and entity/pronoun/number swaps. FactEdit (Balachandran et al., 2022). The posi- tive examples are based on gold summaries from CNN/DM. For the negative examples, an infilling model is trained using sentences from the docu- ments, employing the OpenIE framework (Banko et al., 2007) to mask predicates and arguments. Each predicate and argument phrase in the sum- mary is then iterativelly masked and infilled with the model’s lower order beam candidates. Falsesum (Utama et al., 2022). The positive examples are based on gold summaries from CNN/DM. For the negative examples, predicates and arguments are detected in the document and the summary using the OpenIE (Banko et al., 2007) framework. Randomly selected predicates and ar- guments from the summary are then masked and infilled using predicates and arguments from the document, or by "hallucinating" new content. For this purpose a dedicated infilling model is trained. 4 Experiments and Analysis Our main experiments are in §4.1 and §4.2, followed by various analyses and ablations in §4.3, §4.4, §4.5 and §4.6. We design our experiments to address the following research questions (RQs): • RQ1: What is the performance of FLAN-PaLM 540B in factual consistency evaluation in sum- marization? Is it a good choice for a teacher? 10We discuss WeCheck in §6, and refer the reader to Hon- ovich et al. (2022) for a detailed description of other baselines. MNBM QAGS-X FRANK SummEval QAGS-C Average QuestEval (Scialom et al., 2021) Q2 (Honovich et al., 2021) SUMMACZS (Laban et al., 2022) T5-11B w. ANLI (Honovich et al., 2022) WeCheck (Wu et al., 2023) Ensemble (Honovich et al., 2022) FLAN-PaLM 540B (Chung et al., 2022) T5-11B w. ANLI + TrueTeacher full 65.3 68.7 71.3 77.9 83.0 76.6 76.0 78.1 56.3 70.9 78.1 83.8 81.4 85.8 88.1 89.4 84.0 87.8 89.1 82.1 88.1 91.2 91.4 93.6 70.1 78.8 81.7 80.5 79.8 82.9 83.7 88.5 64.2 83.5 80.9 89.4 82.6 87.7 85.2 89.4 68.0 77.9 80.2 82.7 83.0 84.8 84.9 87.8 Table 2: ROC-AUC results on the summarization subset of the TRUE benchmark (Honovich et al., 2022). • RQ2: Can TrueTeacher facilitate training of a competitive model w.r.t. state-of-the-art models? 
• RQ3: What is the quality of the data gener- ated using TrueTeacher compared to existing syn- thetic data generation methods? We address RQ1 and RQ2 in §4.1. To address RQ1, we evaluate FLAN-PaLM 540B against com- petitive models for factual consistency evaluation. To address RQ2, we use our full dataset from §3.1 to train our best-performing model, and evaluate it in the exact same setting. Finally, RQ3 is ad- dressed in §4.2, where we conduct a systematic study, comparing existing methods to TrueTeacher, while controlling for factors such as the synthetic data size and the documents used for data synthesis. 4.1 Main Results on the TRUE Benchmark We address RQ1 by evaluating FLAN-PaLM 540B on the task and present the results in Table 2. FLAN-PaLM 540B achieves an impressive perfor- mance, with an average ROC-AUC of 84.9 com- pared to 83.0 of the best single-model baseline, and performs on-par with the Ensemble. This demon- strates the chosen LLM’s capability for the task, and its potential as a teacher for smaller models. To address RQ2, we fine-tune T5-11B (Raffel et al., 2020) over our full dataset (§3.1) mixed with ANLI (Nie et al., 2020). Table 2 shows that including TrueTeacher data in the training set, substantially improves the strong-performing T5-11B w. ANLI baseline from an average ROC- AUC of 82.7 to 87.8 (+5.1), while maintaining exactly the same model capacity. This strong result demonstrates the high effectiveness of TrueTeacher in a challenging setup. Notably, our model sets the new state-of-the-art result on the benchmark, outperforming the ×50 times larger LLM that we used as the teacher (84.9 → 87.8). This can be attributed to large-scale knowledge distillation on a specific task, while the LLM is trained to per- form many tasks. Additionally, the smaller model is trained on target-domain data (documents and model-generated summaries) which can further im- prove performance (Gururangan et al., 2020). 4.2 Re-evaluating Synthetic Data Generation Methods – A Study Previous studies on synthetic data generation have used different experimental setups, making it dif- ficult to compare their results. In this section, we design a systematic study to re-evaluate existing methods in a standardized setup. We first discuss our study design choices followed by the results. Previous work has demonstrated that synthetic data can improve NLI-based models. However, they typically used relatively small-capacity mod- els, whereas Honovich et al. (2022) recently demon- strated significant performance gains by scaling up to T5-11B fine-tuned on ANLI. We therefore adopt this competitive baseline, to which we add syn- thetic data from each method. For ablation, we include variants trained solely on synthetic data (without ANLI), and also repeat our study using the smaller-capacity T5-base model. To preform a fair comparison, we restrict the number of examples from each evaluated method to 100k, randomly sampled with balanced labels. To evaluate domain-shift robustness, we fur- ther restrict the synthetic training examples to ones that were generated only based on CNN/DM docu- ments,11 and then consider the XSum-based evalu- ation sets as out-of-domain.12 11Some methods are based exclusively on CNN/DM while others use additional datasets, more details in §3.3. 12SummEval and QAGS-C are based on documents from CNN/DM, MNBM and QAGS-X use documents from XSum, and FRANK has documents from both CNN/DM and XSum. 
We split FRANK to FRANK-C and FRANK-X which contain its CNN/DN based and XSum based subsets respectively. Training data ANLI FactEdit FactEdit + ANLI DocNLI DocNLI + ANLI FactCC FactCC + ANLI Falsesum Falsesum + ANLI TrueTeacher TrueTeacher + ANLI ANLI FactEdit FactEdit + ANLI DocNLI DocNLI + ANLI FactCC FactCC + ANLI Falsesum Falsesum + ANLI TrueTeacher TrueTeacher + ANLI B 1 1 - 5 T e s a b - 5 T Average scores QAGS-C SummEval FRANK-C FRANK FRANK-X QAGS-X MNBM In-domain Out-of-domain CNN/DM-based XSUM-based TRUE 83.4 87.8 88.9 89.1 87.8 83.1 84.7 90.3 90.7 84.9 88.4 74.9 61.4 68.7 71.4 75.2 74.0 72.8 80.9 82.9 77.3 81.9 74.2 77.0 78.9 72.9 72.0 79.0 83.3 85.4 85.8 85.0 85.8 63.7 59.4 60.0 66.5 66.7 72.7 73.2 74.2 73.4 73.6 78.0 85.6 77.2 81.1 83.0 81.9 81.6 84.7 85.8 87.0 88.8 89.6 73.1 59.4 62.2 66.7 74.4 78.7 78.8 82.0 83.3 79.1 81.4 90.7 83.7 88.0 89.2 88.2 84.1 89.5 89.8 91.6 93.6 93.9 81.3 73.6 78.5 77.9 84.9 83.2 83.2 86.4 86.5 88.0 89.3 93.2 76.0 86.1 92.4 93.7 67.5 89.6 84.5 90.5 94.4 93.9 80.6 51.9 73.6 81.0 83.3 71.9 66.8 71.6 72.6 82.6 86.4 88.0 69.4 76.2 83.8 84.2 72.7 82.9 70.8 75.2 86.5 87.8 77.2 48.0 72.2 75.2 78.7 71.0 71.5 65.0 66.0 79.9 81.9 73.9 53.1 59.8 67.0 68.0 55.0 71.5 53.9 60.5 76.1 76.3 77.0 58.4 75.5 71.6 74.8 62.7 63.2 53.1 58.7 78.3 78.5 81.1 85.0 82.0 80.7 (-0.4) 83.0 (+1.9) 81.7 (+0.6) 80.6 (-0.5) 81.2 (+0.1) 84.2 (+3.1) 87.2 (+6.1) 87.8 (+6.7) 86.2 (+5.1) 87.9 (+6.8) 66.2 (-18.8) 74.0 (-11.0) 81.1 (-3.9) 82.0 (-3.0) 65.1 (-19.9) 81.3 (-3.7) 69.7 (-15.3) 75.4 (-9.6) 85.7 (+0.7) 86.0 (+1.0) 74.2 (-7.8) 78.4 (-1.6) 80.4 (-1.6) 80.0 (-2.0) 74.8 (-7.2) 82.4 (+0.4) 78.0 (-4.0) 80.8 (-1.2) 85.2 (+3.2) 86.4 (+6.4) 70.6 78.3 74.8 60.1 (-10.5) 63.6 (-7.0) 68.2 (-2.4) 72.1 (+1.5) 75.3 (+4.7) 74.9 (+4.3) 79.0 (+8.4) 79.9 (+9.3) 76.7 (+6.1) 80.4 (+9.8) 52.8 (-25.5) 73.8 (-4.5) 75.9 (-2.4) 78.9 (+0.6) 68.5 (-9.8) 67.2 (-11.1) 63.2 (-15.1) 65.8 (-12.5) 80.3 (+2.0) 82.3 (+4.0) 60.2 (-14.6) 71.0 (-3.8) 72.5 (-2.3) 76.1 (+1.3) 72.7 (-2.1) 72.8 (-2.0) 71.9 (-2.9) 73.5 (-1.3) 79.4 (+4.6) 81.9 (+7.1) Table 3: ROC-AUC results on TRUE comparing different synthetic data generation methods. For each model size, average scores are compared to the corresponding ANLI-only baseline (difference is listed in parentheses). Table 3 presents the results of our study. We cal- culate three average scores: for in-domain test sets based on CNN/DM documents, for out-of-domain test sets based on XSum documents, and for the original datasets from TRUE. In-Domain Results Most methods outperform the corresponding ANLI-only baseline, demonstrat- ing the usefulness of synthetic data. Predictably, all methods improve with larger models and a comple- mentary effect is often observed when mixing syn- thetic data with ANLI. The best results are obtained by mixing ANLI with Falsesum or TrueTeacher data and using T5-11B, with a substantial improve- ment over the corresponding ANLI-only baseline (in-domain score increase from 81.1 to 87.9). Out-of-domain Results While most methods perform well in-domain, their performance drops significantly on the out-of-domain test sets. Most of the evaluated methods underperform the corre- sponding ANLI-only baseline with similar model capacity. For some methods, performance dete- riorates dramatically; e.g. Falsesum – despite its impressive in-domain performance, its out-of- domain score falls significantly below the ANLI- only baseline. This suggests that some methods overfit to documents from the distribution used to generate the synthetic data. 
Based on this find- ing, we encourage future research to prioritize out- of-domain evaluation. Interestingly, even though TrueTeacher’s relative improvement is smaller com- pared to the in-domain setup, it is still the only method with higher out-of-domain score compared to the corresponding ANLI-only baseline. This demonstrates the robustness of TrueTeacher to do- main shift, which may be due to the use of model- generated summaries that increase the variability of the resulting synthetic data. Overall Results on TRUE Due to the poor out- of-domain performance of the existing methods, TrueTeacher is the only method that consistently outperforms the ANLI-only baseline on the TRUE benchmark. Notably, TrueTeacher + ANLI with T5- base (81.9) performs on par with the ANLI-only baseline using T5-11B (82.0). Additionally, the TrueTeacher-based variant using T5-11B (85.2) al- ready performs on-par with the 540B LLM teacher (84.9, Table 2), even though we used only 100k syn- thetic examples in this experiment, and did not use ANLI data. When comparing TrueTeacher + ANLI with T5-11B and 100k examples (Table 3) to the equivalent variant using the full dataset (Table 2), we observe a performance increase (86.4 → 87.8), which demonstrates TrueTeacher’s scalability. We conclude that TrueTeacher yields high quality data and generalizes well for new domains, which we at- tribute to the usage of model-generated summaries. 4.3 Qualitative Analysis Figure 3 presents a case study with a randomly sam- pled document, and the corresponding inconsistent summaries generated with each of the evaluated Class #Ex. Precision Recall F1 Consistent Inconsistent 41 59 80.0 98.0 97.6 83.1 87.9 89.9 Table 4: Human evaluation results. It introduces a nuanced factual error by replacing "Los Angeles firefighters" with A firefighter and also by hallucinating new content (the text in bold red font). This case study further illustrates the challenges of perturbing texts to introduce factual inconsistencies and re-iterates the importance in using model-generated summaries. 4.4 Human Evaluation To further assess the quality of the synthetic data produced by TrueTeacher, we perform human eval- uation carried out by domain experts.13 We evalu- ate 100 examples from our dataset,14 using binary judgements based on the attribution definition from Rashkin et al. (2021). The labeling accuracy of the sampled examples from our data stands at 89%, which demonstrates its high quality. Table 4 further presents the precision, recall and F1 scores for the consistent and inconsistent classes. More details on the human evaluation are available in §A.8. 4.5 Ablating Summary Distribution and Label Correctness There are two key differences between TrueTeacher and perturbation-based synthetic data generation methods: (1) the distribution of the summaries15 and (2) the correctness of the generated labels.16 Each of these differences may lead to the better quality of TrueTeacher w.r.t the baselines. To mea- sure the impact of each difference, we isolate them in a controlled ablation study. We create 2 ab- lated variants, using Falsesum as a recent baseline method for synthetic data generation. The results are presented in Table 5. LabelAblation is an ablation created by label- ing the document-summary pairs from Falsesum’s data using FLAN-PaLM 540B.17 Comparing 1310 NLP researchers, each with at least one year of experi- ence in factual consistency evaluation. 
14We randomly sampled 50 positively and 50 negatively labeled examples from our synthetic dataset. 15Model-generated vs. human-written perturbed. 16Both methods may yield wrong labels. Perturbations might not introduce inconsistencies, as seen in §4.3, while TrueTeacher can have errors due to LLM mislabeling. 17We used the same 100k examples as Falsesum + ANLI baseline, and the same LLM prompt as in TrueTeacher. Figure 3: A case study comparing factually inconsistent summaries of the same document generated using dif- ferent methods. Content replacements are highlighted using the same color for the original and the replaced text. Added content is in bold red font. methods. FactEdit used the second gold-summary and replaced "to flooding call" with "rescue", in- troducing a grammatical error rather than a factual error, demonstrating the potential problems with using lower-beam completions as proxy for factual errors. DocNLI uses all the gold summaries con- catenated. While replacing "morning" with "night" introduces a factual error, three other edits fail to introduce factual errors, demonstrating the limi- tations of using simple word/entity replacements. FactCC used the first sentence from the article and successfully introduced factual error by an entity swap from "firetruck" to "fire engine". The para- phrase highlighted in green increases the abstrac- tiveness, but the paraphrase in orange introduces a grammatical error that is less likely to be made by a strong summarization model. The noise in- jection used by FactCC (duplicating or removing random tokens) is colored in red, but its useful- ness is questionable. Falsesum uses the first gold summary, and its perturbation model predicts the removal of "Tuesday morning" and the replacement of the "sinkhole" argument with "water", failing to introduce a factual error, since the sinkhole is referred to as "water-logged sinkhole" in the ar- ticle. Finally, TrueTeacher uses an abstractive summary generated by a real summarization model. CNN/DailyMail ID: 372f7e02e5bb17bac3a1b2260c6ac78414f97ee3Article: LOS ANGELES, California (CNN) -- Los Angeles firefighters and city crews worked for several hours Tuesday to rescue one of their own: a 22-ton firetruck that was nearly swallowed by a water-logged sinkhole. Two firefighters crawled out of the truck's windows after it sank Tuesday morning. No one was injured. The incident happened after four firefighters took the truck to the San Fernando Valley neighborhood of Valley Village, where flooding had been reported… … Gold Summaries: 1. Los Angeles firetruck nearly swallowed by sinkhole Tuesday morning.2. Firefighters in truck were responding to flooding call when incident happened.3. Two firefighters escaped truck through windows; no injuries reported.FactEditFirefighters in truck were responding rescue when incident happened .DocNLILos Angeles firetruck nearly destroyed by sinkhole Tuesday night . Firefighters in truck were responding to emergency call when it happened . Two firefighters escaped truck through windows ; no injuries reported .FactCCLOS LOS ANGELES, California ((CNN) - Los Angeles firefighters and crews worked Two on Tuesday to rescue one of their ownown: a 22-ton fire engine nearly swallowed by a sinkhole filled with waterwater.FalsesumLos Angeles firetruck nearly swallowed by water.TrueTeacherA firefighter has rescued a truck that sank in Los Angeles, causing extensive flooding. 
Variant Summary Distribution Labeling Quality T5-11B T5-Base Falsesum + ANLI TrueTeacher + ANLI Model-generated Human-written perturbed Falsesum FLAN-PaLM 540B 86.4 (+6.9%) 80.8 73.5 81.9 (+11.4%) LabelAblation SummaryAblation Human-written perturbed FLAN-PaLM 540B 85.3 (+5.6%) 85.5 (+5.8%) Model-generated Falsesum (proxy) 78.9 (+7.3%) 79.1 (+7.6%) Table 5: Average ROC-AUC on TRUE for the ablated variants. Falsesum + ANLI and TrueTeacher + ANLI are copied from Table 3 for reference. LabelAblation to Falsesum + ANLI allows us to examine the effect of using FLAN-PaLM labels instead of the original Falsesum labels, while controlling for the summaries distribution. LabelAblation outperforms Falsesum + ANLI by 5.6%, which shows that performance gains can be obtained using summaries generated with exist- ing synthetic data generation methods combined with second-stage improved labeling quality. How- ever, TrueTeacher is substantially simpler and also results in better performance. SummaryAblation is an ablation created by flip- ping labels on a random portion of TrueTeacher’s data, such that the expected labeling accuracy is similar to Falsesum (More details in §A.9). Com- paring SummaryAblation to Falsesum + ANLI al- lows us to examine the effect of changing the sum- mary distribution from human-written perturbed to model-generated, while controlling for the la- beling quality. SummaryAblation outperforms Falsesum + ANLI by 5.8%, a similar improve- ment as observed for LabelAblation (5.6%). This demonstrates that label correctness and summary distribution have a similar effect on the perfor- mance, but they also have a complimentary effect as the best performance of 86.4 ROC-AUC is ob- tained only when they are combined together. 4.6 Abstractiveness Analysis Advances in large scale pretraining (Devlin et al., 2019; Lewis et al., 2020) and the availability of rel- evant datasets (Narayan et al., 2018), enabled rapid progress in abstractive summarization, which bet- ter imitates the way humans summarize (Koh et al., 2023) and is also preferred by humans (Goyal et al., 2022). This motivates us to focus on generating abstractive synthetic summaries. We compare the abstractiveness degree of differ- ent methods using the extractive fragment coverage and density measures from Grusky et al. (2018). Following Utama et al. (2022) we multiply these Coverage ↓ Density ↓ Combined ↓ FactEdit DocNLI FactCC Falsesum TrueTeacher 0.86 0.85 0.93 0.88 0.86 2.92 15.66 8.16 2.98 2.41 2.67 15.20 7.93 2.76 2.15 Table 6: Average abstractiveness scores (lower is better), measured on a random sample of 5k examples. measures to obtain a combined score.18 Table 6 presents the abstractiveness scores, and a density plot is available in the Appendix (Figure 5). We ob- serve higher abstractiveness for model-based meth- ods (FactEdit, Falsesum and TrueTeacher), suggest- ing that rule-based methods might be less useful with the recent shift towards abstractive summariza- tion. TrueTeacher produces the most abstractive summaries with lowest combined score. 5 Multi-Lingual Data Generation for Factual Consistency Evaluation Utilizing a multilingual LLM enables a straightfor- ward application of TrueTeacher to multiple lan- guages. This contrasts with recent approaches that rely on NLP components only available for high- resource languages, e.g., information extraction (Utama et al., 2022; Balachandran et al., 2022). In this section, we examine TrueTeacher’s usefulness for multilingual factual consistency evaluation. 
We first generate multilingual synthetic data us- ing TrueTeacher. This time we train a single sum- marization model by fine tuning mT5-XXL (Xue et al., 2021) on XLSum (Hasan et al., 2021) and use it to summarize documents from WikiLingua (Ladhak et al., 2020), which we then label for con- sistency with our LLM. For the purposes of this experiment we focus on a subset of WikiLingua documents in 4 languages: English (en), French 18We provide additional technical details in §A.6. Training data # Improved languages Avg. ROC-AUC Per ex. Per lang. ANLI+XNLI +TrueTeacher en +TrueTeacher en,fe,es,ge - 32 / 45 35 / 45 73.3 75.7 77.2 71.6 73.8 75.3 Table 7: Multilingual results on the mFACE test set. (fe), Spanish (es) and German (de).19. After gener- ating the dataset for these 4 languages, we sample 100k examples, by randomly sampling 25k in each language with balanced labels (as illustrated in Ta- ble 9 in the Appendix). For ablation, we also cre- ate an English-only variant, by randomly sampling 100k English examples with balanced labels.20 We use the resulted data to train multilingual con- sistency evaluation models and evaluate them on the mFace test set (Aharoni et al., 2022), containing 3150 examples in 45 languages. As a strong base- line we follow Aharoni et al. and fine-tune mT5- XXL (Xue et al., 2021) on the ANLI (Nie et al., 2020) and XNLI (Conneau et al., 2018) datasets. We then assess whether adding our synthetic data to the training set can improve this model. Table 7 presents the results overview, full re- sults in all 45 languages are available in Table 10 (Appendix). Adding English-only summarization- based synthetic data, already improves results on 32 out of 45 languages and increases the avg. ROC- AUC from 71.6 to 73.8. Yet, using the same amount of multi-lingual examples improved the performance even more, with avg. ROC AUC of 75.3. This demonstrates the added value in generating multi-lingual synthetic examples using TrueTeacher, laying the ground for future work. 6 Related Work Previous work proposed methods for generating synthetic training data for factual consistency eval- uation, by perturbing gold summaries (Yin et al., 2021; Kryscinski et al., 2020; Balachandran et al., 2022; Utama et al., 2022; Soleimani et al., 2023).21 A key advantage of TrueTeacher, is the ability to leverage real model-generated summaries, leading to superior performance and robustness. The utility of model-generated outputs was also highlighted by Wu et al. (2023), who proposed a weakly super- 19They are the most prevalent languages in PaLM’s pre- training data (Chowdhery et al., 2022) 20Also based on WikiLingua, generated with the same pro- cess like the 25k English subset of our multilingual dataset. 21We provide extensive review of these methods in §3.3. vised consistency evaluation model that leverages probabilistic labels derived from aggregated scores of other consistency evaluation models. Our work proposes a simpler solution, that is also inherently multilingual. Another line of work for adapting NLI-based models for summarization, focuses on better pro- cessing of long texts, splitting the documents into sentences to create shorter premise-hypothesis pairs (Laban et al., 2022; Schuster et al., 2022). Recent work attempts to assess LLMs’ capability for evaluating generative tasks (Kocmi and Feder- mann, 2023; Wang et al., 2023; Liu et al., 2023). Luo et al. (2023) evaluated ChatGPT (OpenAI, 2022) speciffically on the task of factual consis- tency evaluation in summarization. 
Yet, Aiyappa et al. (2023) argued that ChatGPT’s "closed" nature risks data leakage (training-test contamination).22 Chen et al. (2023) performed a study of LLMs as factual consistency evaluators, using a variety of prompting methods. Previous work also attempted to distill knowl- edge from LLMs (West et al., 2022; Hsieh et al., 2023), as well as to leverage LLMs for data anno- tation (Wang et al., 2021; Ding et al., 2022), and synthetic data generation (Agrawal et al., 2022; Liu et al., 2022; Bitton et al., 2023). As far as we aware, our work is the first to leverage LLMs for data generation for factual consistency evaluation. 7 Conclusion We introduced TrueTeacher, a simple and highly effective method for generating synthetic data for Instead of per- factual consistency evaluation. turbation of human-written summaries like done in previous work, TrueTeacher leverages realistic model-generated summaries, which are annotated by prompting a large language model. Using our method, we generate a large-scale synthetic dataset, which we are making publicly available. Our experimental results show that this dataset substantially enhances the performance of a state-of-the-art model. In our systematic study, we compare TrueTeacher to existing approaches and further demonstrate its effectiveness and robust- ness. Our study highlights the importance of out-of- domain evaluation, which we hope will be adopted in future work. Lastly, we show that TrueTeacher generalizes well to multilingual scenarios, present- ing additional advantage over existing methods. 22While FLAN’s instruction fine-tuning data is public. 8 Limitations Noisy synthetic data TrueTeacher relies on a LLM for labeling model generated summaries. This process may result in some frequency of noisy synthetic examples for which the label is incor- rect. This can affect the overall quality of the stu- dent model that trains on this data. In our experi- ments we validated the quality of our synthetic data with human evaluation, however this should be re- examined when generating data for new domains. In addition, we experimented with different filter- ing approaches, but found that training on filtered data with higher labeling accuracy, did not improve the performance of the student model. We encour- age future work to further examine such automatic filtering. Reliance on LLMs In this work we use a 540B LLM to label 1.4M model generated summaries. This requires non-negligible resources that may not be available to the whole community. To mitigate this, we release our collected synthetic data and the corresponding model checkpoint. In addition, the decreasing inference cost of proprietary LLMs, and the availability of open-source LLMs (Touvron et al., 2023) can further assist. Effect of low-resource languages Our multilin- gual experiments (§5) focus on a subset of WikiLin- gua documents in only 4 languages: English (en), French (fe), Spanish (es) and German (de), that are the most prevalent in our LLM’s pre-training data. As can be seen in our full results (Table 9 in the Appendix), our multilingual data successfully improves low-resource languages as well. We did not fully explore the effect of adding additional languages to our synthetic data, especially low- resource ones. We believe that there is a trade- off between language coverage and labeling qual- ity. i.e, while generating the synthetic data in low- resource languages will increase language cover- age, it can lead to poor labeling quality by our LLM. 
We did not fully explore the exact sweet-spot for how many languages to include in our synthetically labeled training data, leaving this for future work. References Priyanka Agrawal, Chris Alberti, Fantine Huot, Joshua Maynez, Ji Ma, Sebastian Ruder, Kuzman Ganchev, Dipanjan Das, and Mirella Lapata. 2022. Qameleon: Multilingual QA with only 5 examples. CoRR, abs/2211.08264. Roee Aharoni, Shashi Narayan, Joshua Maynez, Jonathan Herzig, Elizabeth Clark, and Mirella Lapata. 2022. mface: Multilingual summarization with fac- tual consistency evaluation. CoRR, abs/2212.10622. Rachith Aiyappa, Jisun An, Haewoon Kwak, and Yong- Yeol Ahn. 2023. Can we trust the evaluation on chatgpt? CoRR, abs/2303.12767. Vidhisha Balachandran, Hannaneh Hajishirzi, William W. Cohen, and Yulia Tsvetkov. 2022. Correcting diverse factual errors in abstractive summarization via post-editing and language model infilling. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, EMNLP 2022, Abu Dhabi, United Arab Emirates, December 7-11, 2022, pages 9818–9830. Association for Computational Linguistics. Michele Banko, Michael J. Cafarella, Stephen Soder- land, Matthew Broadhead, and Oren Etzioni. 2007. Open information extraction from the web. In IJCAI 2007, Proceedings of the 20th International Joint Conference on Artificial Intelligence, Hyderabad, In- dia, January 6-12, 2007, pages 2670–2676. Yonatan Bitton, Shlomi Cohen-Ganor, Ido Hakimi, Yoad Lewenberg, Roee Aharoni, and Enav Weinreb. 2023. q2d: Turning questions into dialogs to teach models how to search. Shiqi Chen, Siyang Gao, and Junxian He. 2023. Eval- uating factual consistency of summaries with large language models. CoRR, abs/2305.14069. Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, Parker Schuh, Kensen Shi, Sasha Tsvyashchenko, Joshua Maynez, Abhishek Rao, Parker Barnes, Yi Tay, Noam Shazeer, Vin- odkumar Prabhakaran, Emily Reif, Nan Du, Ben Hutchinson, Reiner Pope, James Bradbury, Jacob Austin, Michael Isard, Guy Gur-Ari, Pengcheng Yin, Toju Duke, Anselm Levskaya, Sanjay Ghemawat, Sunipa Dev, Henryk Michalewski, Xavier Garcia, Vedant Misra, Kevin Robinson, Liam Fedus, Denny Zhou, Daphne Ippolito, David Luan, Hyeontaek Lim, Barret Zoph, Alexander Spiridonov, Ryan Sepassi, David Dohan, Shivani Agrawal, Mark Omernick, An- drew M. Dai, Thanumalayan Sankaranarayana Pil- lai, Marie Pellat, Aitor Lewkowycz, Erica Moreira, Rewon Child, Oleksandr Polozov, Katherine Lee, Zongwei Zhou, Xuezhi Wang, Brennan Saeta, Mark Diaz, Orhan Firat, Michele Catasta, Jason Wei, Kathy Meier-Hellstern, Douglas Eck, Jeff Dean, Slav Petrov, and Noah Fiedel. 2022. Palm: Scaling language mod- eling with pathways. CoRR, abs/2204.02311. Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Eric Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, Albert Web- son, Shixiang Shane Gu, Zhuyun Dai, Mirac Suz- gun, Xinyun Chen, Aakanksha Chowdhery, Sharan Narang, Gaurav Mishra, Adams Yu, Vincent Y. Zhao, Yanping Huang, Andrew M. Dai, Hongkun Yu, Slav Petrov, Ed H. Chi, Jeff Dean, Jacob Devlin, Adam Roberts, Denny Zhou, Quoc V. Le, and Jason Wei. 2022. Scaling instruction-finetuned language models. CoRR, abs/2210.11416. Alexis Conneau, Ruty Rinott, Guillaume Lample, Ad- ina Williams, Samuel R. Bowman, Holger Schwenk, and Veselin Stoyanov. 2018. XNLI: evaluating cross- lingual sentence representations. 
In Proceedings of the 2018 Conference on Empirical Methods in Natu- ral Language Processing, Brussels, Belgium, Octo- ber 31 - November 4, 2018, pages 2475–2485. Asso- ciation for Computational Linguistics. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers), pages 4171–4186. Association for Computational Linguistics. Bosheng Ding, Chengwei Qin, Linlin Liu, Lidong Bing, Shafiq R. Joty, and Boyang Li. 2022. Is GPT-3 a good data annotator? CoRR, abs/2212.10450. Alexander R Fabbri, Wojciech Kry´sci´nski, Bryan Mc- Cann, Caiming Xiong, Richard Socher, and Dragomir Radev. 2020. Summeval: Re-evaluating summariza- tion evaluation. arXiv preprint arXiv:2007.12626. Tobias Falke, Leonardo F. R. Ribeiro, Prasetya Ajie Utama, Ido Dagan, and Iryna Gurevych. 2019a. Ranking generated summaries by correctness: An interesting but challenging application for natural language inference. In Proceedings of the 57th Con- ference of the Association for Computational Lin- guistics, ACL 2019, Florence, Italy, July 28- August 2, 2019, Volume 1: Long Papers, pages 2214–2220. Association for Computational Linguistics. Tobias Falke, Leonardo F. R. Ribeiro, Prasetya Ajie Utama, Ido Dagan, and Iryna Gurevych. 2019b. Ranking generated summaries by correctness: An interesting but challenging application for natural language inference. In Proceedings of the 57th An- nual Meeting of the Association for Computational Linguistics, pages 2214–2220, Florence, Italy. Asso- ciation for Computational Linguistics. Ben Goodrich, Vinay Rao, Mohammad Saleh, and Pe- ter J. Liu. 2019. Assessing the factual accuracy of generated text. CoRR, abs/1905.13322. Tanya Goyal, Junyi Jessy Li, and Greg Durrett. 2022. News summarization and evaluation in the era of GPT-3. CoRR, abs/2209.12356. Max Grusky, Mor Naaman, and Yoav Artzi. 2018. Newsroom: A dataset of 1.3 million summaries with diverse extractive strategies. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2018, New Orleans, Louisiana, USA, June 1-6, 2018, Vol- ume 1 (Long Papers), pages 708–719. Association for Computational Linguistics. Suchin Gururangan, Ana Marasovi´c, Swabha Swayamdipta, Kyle Lo, Iz Beltagy, Doug Downey, and Noah A. Smith. 2020. Don’t stop pretraining: In Adapt language models to domains and tasks. Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8342–8360, Online. Association for Computational Linguistics. Tahmid Hasan, Abhik Bhattacharjee, Md. Saiful Is- lam, Kazi Samin Mubasshir, Yuan-Fang Li, Yong- Bin Kang, M. Sohel Rahman, and Rifat Shahri- yar. 2021. Xl-sum: Large-scale multilingual ab- stractive summarization for 44 languages. In Find- ings of the Association for Computational Linguis- tics: ACL/IJCNLP 2021, Online Event, August 1-6, 2021, volume ACL/IJCNLP 2021 of Findings of ACL, pages 4693–4703. Association for Computational Linguistics. Karl Moritz Hermann, Tomás Kociský, Edward Grefen- stette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. 
In Advances in Neural Information Processing Systems 28: Annual Conference on Neu- ral Information Processing Systems 2015, December 7-12, 2015, Montreal, Quebec, Canada, pages 1693– 1701. Or Honovich, Roee Aharoni, Jonathan Herzig, Hagai Taitelbaum, Doron Kukliansy, Vered Cohen, Thomas Scialom, Idan Szpektor, Avinatan Hassidim, and Yossi Matias. 2022. TRUE: re-evaluating factual consistency evaluation. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL 2022, Seattle, WA, United States, July 10-15, 2022, pages 3905–3920. Association for Computational Linguistics. Or Honovich, Leshem Choshen, Roee Aharoni, Ella Neeman, Idan Szpektor, and Omri Abend. 2021. $qˆ2$: Evaluating factual consistency in knowledge- grounded dialogues via question generation and ques- tion answering. In Proceedings of the 2021 Confer- ence on Empirical Methods in Natural Language Pro- cessing, EMNLP 2021, Virtual Event / Punta Cana, Dominican Republic, 7-11 November, 2021, pages 7856–7870. Association for Computational Linguis- tics. Cheng-Yu Hsieh, Chun-Liang Li, Chih-kuan Yeh, Hootan Nakhost, Yasuhisa Fujii, Alex Ratner, Ranjay Krishna, Chen-Yu Lee, and Tomas Pfister. 2023. Dis- tilling step-by-step! outperforming larger language models with less training data and smaller model In Findings of the Association for Compu- sizes. tational Linguistics: ACL 2023, pages 8003–8017, Toronto, Canada. Association for Computational Lin- guistics. ACL 2020, Online, July 5-10, 2020, pages 7871–7880. Association for Computational Linguistics. Jie Huang and Kevin Chen-Chuan Chang. 2022. To- wards reasoning in large language models: A survey. CoRR, abs/2212.10403. Tushar Khot, Ashish Sabharwal, and Peter Clark. 2018. Scitail: A textual entailment dataset from science question answering. In Proceedings of the Thirty- Second AAAI Conference on Artificial Intelligence, (AAAI-18), the 30th innovative Applications of Arti- ficial Intelligence (IAAI-18), and the 8th AAAI Sym- posium on Educational Advances in Artificial Intel- ligence (EAAI-18), New Orleans, Louisiana, USA, February 2-7, 2018, pages 5189–5197. AAAI Press. Tom Kocmi and Christian Federmann. 2023. Large language models are state-of-the-art evaluators of translation quality. CoRR, abs/2302.14520. Huan Yee Koh, Jiaxin Ju, Ming Liu, and Shirui Pan. 2023. An empirical survey on long document sum- marization: Datasets, models, and metrics. ACM Comput. Surv., 55(8):154:1–154:35. Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yu- taka Matsuo, and Yusuke Iwasawa. 2022. Large lan- guage models are zero-shot reasoners. In NeurIPS. Wojciech Kryscinski, Nitish Shirish Keskar, Bryan Mc- Cann, Caiming Xiong, and Richard Socher. 2019. Neural text summarization: A critical evaluation. In Proceedings of the 2019 Conference on Empir- ical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, EMNLP-IJCNLP 2019, Hong Kong, China, November 3-7, 2019, pages 540–551. Association for Computational Linguistics. Wojciech Kryscinski, Bryan McCann, Caiming Xiong, and Richard Socher. 2020. Evaluating the factual consistency of abstractive text summarization. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, EMNLP 2020, Online, November 16-20, 2020, pages 9332– 9346. Association for Computational Linguistics. Philippe Laban, Tobias Schnabel, Paul N. Bennett, and Marti A. Hearst. 2022. 
Summac: Re-visiting nli- based models for inconsistency detection in summa- rization. Trans. Assoc. Comput. Linguistics, 10:163– 177. Faisal Ladhak, Esin Durmus, Claire Cardie, and Kath- leen R. McKeown. 2020. Wikilingua: A new bench- mark dataset for cross-lingual abstractive summariza- tion. CoRR, abs/2010.03093. Alisa Liu, Swabha Swayamdipta, Noah A. Smith, and Yejin Choi. 2022. WANLI: worker and AI collabora- tion for natural language inference dataset creation. In Findings of the Association for Computational Linguistics: EMNLP 2022, Abu Dhabi, United Arab Emirates, December 7-11, 2022, pages 6826–6847. Association for Computational Linguistics. Yang Liu, Dan Iter, Yichong Xu, Shuohang Wang, Ruochen Xu, and Chenguang Zhu. 2023. G-eval: NLG evaluation using GPT-4 with better human alignment. CoRR, abs/2303.16634. Zheheng Luo, Qianqian Xie, and Sophia Ananiadou. 2023. Chatgpt as a factual inconsistency evalu- ator for abstractive text summarization. CoRR, abs/2303.15621. Aman Madaan, Niket Tandon, Prakhar Gupta, Skyler Hallinan, Luyu Gao, Sarah Wiegreffe, Uri Alon, Nouha Dziri, Shrimai Prabhumoye, Yiming Yang, Sean Welleck, Bodhisattwa Prasad Majumder, Shashank Gupta, Amir Yazdanbakhsh, and Peter Clark. 2023. Self-refine: Iterative refinement with self-feedback. CoRR, abs/2303.17651. Joshua Maynez, Shashi Narayan, Bernd Bohnet, and Ryan T. McDonald. 2020. On faithfulness and fac- tuality in abstractive summarization. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020, Online, July 5-10, 2020, pages 1906–1919. Association for Com- putational Linguistics. Anshuman Mishra, Dhruvesh Patel, Aparna Vijayaku- mar, Xiang Lorraine Li, Pavan Kapanipathi, and Kar- tik Talamadupula. 2021. Looking beyond sentence- level natural language inference for question answer- ing and text summarization. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Hu- man Language Technologies, NAACL-HLT 2021, On- line, June 6-11, 2021, pages 1322–1336. Association for Computational Linguistics. Shashi Narayan, Shay B. Cohen, and Mirella Lapata. 2018. Don’t give me the details, just the summary! topic-aware convolutional neural networks for ex- treme summarization. In Proceedings of the 2018 Conference on Empirical Methods in Natural Lan- guage Processing, Brussels, Belgium, October 31 - November 4, 2018, pages 1797–1807. Association for Computational Linguistics. Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. BART: denoising sequence-to-sequence pre-training for natural language generation, translation, and com- prehension. In Proceedings of the 58th Annual Meet- ing of the Association for Computational Linguistics, Yixin Nie, Adina Williams, Emily Dinan, Mohit Bansal, Jason Weston, and Douwe Kiela. 2020. Adversarial NLI: A new benchmark for natural language under- standing. In Proceedings of the 58th Annual Meet- ing of the Association for Computational Linguistics, ACL 2020, Online, July 5-10, 2020, pages 4885–4901. Association for Computational Linguistics. OpenAI. 2022. https://openai.com/blog/chatgpt/. Chatgpt, Artidoro Pagnoni, Vidhisha Balachandran, and Yulia Tsvetkov. 2021. Understanding factuality in abstrac- tive summarization with FRANK: A benchmark for factuality metrics. 
In Proceedings of the 2021 Con- ference of the North American Chapter of the Asso- ciation for Computational Linguistics: Human Lan- guage Technologies, NAACL-HLT 2021, Online, June 6-11, 2021, pages 4812–4829. Association for Com- putational Linguistics. Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text trans- former. J. Mach. Learn. Res., 21:140:1–140:67. Hannah Rashkin, Vitaly Nikolaev, Matthew Lamm, Michael Collins, Dipanjan Das, Slav Petrov, Gau- rav Singh Tomar, Iulia Turc, and David Reitter. 2021. Measuring attribution in natural language generation models. CoRR, abs/2112.12870. Tal Schuster, Sihao Chen, Senaka Buthpitiya, Alex Fabrikant, and Donald Metzler. 2022. Stretching sentence-pair NLI models to reason over long doc- uments and clusters. In Findings of the Association for Computational Linguistics: EMNLP 2022, Abu Dhabi, United Arab Emirates, December 7-11, 2022, pages 394–412. Association for Computational Lin- guistics. Thomas Scialom, Paul-Alexis Dray, Sylvain Lamprier, Benjamin Piwowarski, Jacopo Staiano, Alex Wang, and Patrick Gallinari. 2021. Questeval: Summariza- tion asks for fact-based evaluation. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, EMNLP 2021, Vir- tual Event / Punta Cana, Dominican Republic, 7-11 November, 2021, pages 6594–6604. Association for Computational Linguistics. Amir Soleimani, Christof Monz, and Marcel Worring. 2023. NonFactS: NonFactual summary generation for factuality evaluation in document summarization. In Findings of the Association for Computational Linguistics: ACL 2023, pages 6405–6419, Toronto, Canada. Association for Computational Linguistics. Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. 2023. Llama: Open and effi- cient foundation language models. arXiv preprint arXiv:2302.13971. Prasetya Utama, Joshua Bambrick, Nafise Sadat Moosavi, and Iryna Gurevych. 2022. Falsesum: Gen- erating document-level NLI examples for recogniz- ing factual inconsistency in summarization. In Pro- ceedings of the 2022 Conference of the North Amer- ican Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL 2022, Seattle, WA, United States, July 10-15, 2022, pages 2763–2776. Association for Computational Linguistics. Alex Wang, Kyunghyun Cho, and Mike Lewis. 2020. Asking and answering questions to evaluate the fac- tual consistency of summaries. In Proceedings of the 58th Annual Meeting of the Association for Com- putational Linguistics, ACL 2020, Online, July 5-10, 2020, pages 5008–5020. Association for Computa- tional Linguistics. Jiaan Wang, Yunlong Liang, Fandong Meng, Haoxi- ang Shi, Zhixu Li, Jinan Xu, Jianfeng Qu, and Jie Zhou. 2023. Is chatgpt a good NLG evaluator? A preliminary study. CoRR, abs/2303.04048. Shuohang Wang, Yang Liu, Yichong Xu, Chenguang Zhu, and Michael Zeng. 2021. Want to reduce la- In Findings of the beling cost? GPT-3 can help. Association for Computational Linguistics: EMNLP 2021, Virtual Event / Punta Cana, Dominican Re- public, 16-20 November, 2021, pages 4195–4205. Association for Computational Linguistics. Yixuan Weng, Minjun Zhu, Fei Xia, Bin Li, Shizhu He, Kang Liu, and Jun Zhao. 2023. Large language mod- els are better reasoners with self-verification. 
CoRR, abs/2212.09561. Peter West, Chandra Bhagavatula, Jack Hessel, Jena Hwang, Liwei Jiang, Ronan Le Bras, Ximing Lu, Sean Welleck, and Yejin Choi. 2022. Symbolic knowledge distillation: from general language mod- els to commonsense models. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Hu- man Language Technologies, pages 4602–4625, Seat- tle, United States. Association for Computational Linguistics. Wenhao Wu, Wei Li, Xinyan Xiao, Jiachen Liu, Sujian Li, and Yajuan Lv. 2023. Wecheck: Strong factual consistency checker via weakly supervised learning. Proceedings of the 61th Annual Meeting of the Asso- ciation for Computational Linguistics, ACL 2023. Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, and Colin Raffel. 2021. mt5: A massively multilingual pre-trained text-to-text transformer. In Proceedings of the 2021 Conference of the North American Chap- ter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2021, Online, June 6-11, 2021, pages 483–498. Association for Computational Linguistics. Wenpeng Yin, Dragomir R. Radev, and Caiming Xiong. 2021. Docnli: A large-scale dataset for document-level natural language inference. In Find- ings of the Association for Computational Linguis- tics: ACL/IJCNLP 2021, Online Event, August 1-6, 2021, volume ACL/IJCNLP 2021 of Findings of ACL, pages 4913–4922. Association for Computational Linguistics. A Appendix A.1 FLAN-PaLM Prompt Design To apply FLAN-PaLM for factual consistency eval- uation, we experimented with zero-shot, few-shot and chain-of-thought prompting strategies, and var- ious formats for each strategy. We chose the best performing strategy and format, based on the accu- racy on a development set.23 Table 8 presents the accuracy of each prompt type on the development set. We observed only minor performance differ- ences, and thus we opted for the simplest solution that is the zero-shot prompt. While we cannot know the exact reasons for why few-shot and chain-of- thought did not improve performance, we can offer potential explanations. (1) Since the model was fine-tuned on NLI datasets, it is able to effectively generalize to factual consistency evaluation, mak- ing further demonstrations via few-shot prompting unnecessary in this case. (2) The performance with the zero-shot prompt is already notably high (89%, §4.4) and thus our particular LLM is less likely to benefit from chain-of-thought prompting. (3) It could be the case that only a few reasoning steps are needed to evaluate consistency in our particular setup and thus chain-of-thought is not necessarily better in this case. Below, we describe our top-performing zero- shot, few-shot and chain-of-thought prompts. Zero-shot Prompt Since FLAN-PaLM was in- struction fine-tuned on NLI, we designed our prompt to resemble an NLI prompt (e.g. using "premise" and "hypothesis" instead of "document" and "summary"). Our final prompt is as follows: Premise: {document} Hypothesis: {summary} Can the hypothesis be inferred from the premise? Answer using "Yes" or "No" only. "consistent" Few-shot Prompt We use two few-shot examples, one and one "inconsistent". We randomly sample these examples from the development set examples shorter than 200 words.23 We limit ourselves to two short examples since summarization examples can include long documents, and thus few-shot may lead to too long context length. 
Our final prompt is as follows: 23For development set we use the FactCC dataset (Kryscin- ski et al., 2020) with 1,431 examples containing summaries of documents from CNN/DailyMail, manually annotated for factual correctness. Following (Utama et al., 2022), we merge the dev and test sets. Premise: (CNN) Desperate migrants from Africa and the Middle East keep heading to Europe, with 978 res- cued Friday in the Mediterranean Sea, the Italian Coast Guard said Saturday via Twitter. The migrants were picked up 30 miles off the coast of Libya, said European Parliament member Matteo Salvini, the leader of Italy’s far-right Northern League. In the first three months of 2015, Italy registered more than 10,000 migrants arriv- ing, the International Organization for Migration said, and about 2,000 were rescued at sea during the first weekend of April in the Channel of Sicily. Most mi- grants recorded this year come from countries in West Africa as well as Somalia and Syria, the IMO said. They use Libya as a country of transit. At least 480 migrants have died while crossing the Mediterranean since the beginning of the year, often because of bad weather and overcrowded vessels used by smugglers, the IMO said. Sometimes the captains and crews abandon the ships, leaving passengers to fend for themselves. At this time last year, there were fewer than 50 deaths reported, the IMO said. Most of the migrants are asylum seekers, vic- tims of trafficking or violence, unaccompanied children and pregnant women. Hypothesis: the migrants were picked up 30 miles off the coast of libya. Can the hypothesis be inferred from the premise? An- swer using "Yes" or "No" only. Answer: Yes Premise: (CNN) A nuclear submarine being repaired at a Russian shipyard has caught on fire, according to a law enforcement source speaking to Russia’s state-run news agency ITAR-Tass. "The submarine is in a dry dock," Tass reports, citing the source, and there is no ammunition on board. "The rubber insulation between the submarine’s light and pressure hull is on fire," Tass reported. Russia’s RIA Novosti news agency says insu- lation caught on fire as welding work was being done on the submarine. Tass reported that the fire began on a sub in the Zvyozdochka shipyard in northwestern Russia. Zvyozdochka spokesman Yevgeny Gladyshev told the news agency that the sub had been undergoing repairs since November 2013. "Nuclear fuel from the sub’s re- actor has been unloaded," he reportedly said. "There are no armaments or chemically active, dangerous sub- stances, fissionable materials on it," Gladyshev said to Tass. "The enterprise’s personnel left the premises when the submarine caught fire, no one has been injured. The fire presents no threat to people and the shipyard." Hypothesis: "the rubber insulation between the subma- rine’s light and pressure hull is on fire," russia’s ria novosti news agency says. Can the hypothesis be inferred from the premise? An- swer using "Yes" or "No" only. Answer: No Premise: {document} Hypothesis: {summary} Can the hypothesis be inferred from the premise? An- swer using "Yes" or "No" only. Answer: Chain-of-thought Prompt Following Kojima et al. (2022) we append "Let’s think step by step" to the prompt to facilitate a step-by-step rea- soning before predicting the label. Our final prompt is as follows: Premise: {document} Hypothesis: {summary} Q: Can the hypothesis be inferred from the premise? Answer using "Yes" or "No" only. 
A: Let's think step by step

This prompt successfully unlocked a step-by-step reasoning. Below is FLAN-PaLM's response format, where {answer} is either "yes" or "no":

{reasoning steps}. So, the answer is {answer}.

Table 8: FLAN-PaLM accuracy on the development set23 using different prompting strategies.

Prompt type         Dev accuracy
zero-shot           93.6
few-shot            93.2
chain-of-thought    93.8

Example input:

Premise: (CNN) Georgia Southern University was in mourning Thursday after five nursing students were killed the day before in a multivehicle wreck near Savannah. Caitlyn Baggett, Morgan Bass, Emily Clark, Abbie Deloach and Catherine (McKay) Pittman – all juniors – were killed in the Wednesday morning crash as they were traveling to a hospital in Savannah, according to the school website. Fellow nursing students Brittney McDaniel and Megan Richards were injured as was another person, who was not identified by the Georgia State Patrol. The young women were on their way to finish their first set of clinical rotations ... ... ...
Hypothesis: georgia southern university was in mourning after five nursing students died.
Q: Can the hypothesis be inferred from the premise? Answer using "Yes" or "No" only.
A: Let's think step by step

The output for this example is:

Georgia Southern University was in mourning Thursday after five nursing students were killed the day before in a multivehicle wreck near Savannah. So, the answer is yes.

A.2 Inference with FLAN-PaLM

We used the zero-shot prompt (see §A.1). The vast majority of FLAN-PaLM's responses were either "Yes" or "No", and a tiny fraction of the responses were "It's impossible to say". During the labeling phase, we let FLAN-PaLM generate the output (predict mode), and label as "consistent" if the generated output is "Yes" and "inconsistent" in case the output is "No". We discard the "It's impossible to say" examples. In order to measure ROC-AUC in a binary classification setting, we compute the model's probability of generating "Yes" (score mode) and use it as the example-level factual consistency score.

A.3 Fine tuning T5

Table 9: Our multilingual dataset statistics.

Language   ISO 639-1   consistent   inconsistent
English    en          12,500       12,500
Spanish    es          12,500       12,500
French     fr          12,500       12,500
German     de          12,500       12,500
total                  50,000       50,000

We fine tune our T5 models for factual consistency evaluation using the following input format:

Premise: {document} Hypothesis: {summary}

The model is trained to predict "1" if the summary is factually consistent and "0" otherwise. We use a learning rate of 10−4 and a batch size of 32. During training, we use a maximum input length of 512 tokens and truncate the premise if needed.24 During inference we use a maximum input length of 2048 tokens. We train for a maximum of 20 epochs, evaluate a checkpoint every 1k steps and choose the checkpoint with the best ROC-AUC on a development set.23 In our study we make sure to use the same training regime for all baselines. The ANLI-only results in Table 3 are from our experiments, while in Table 2 we use the results reported in previous work. For the summarization models we fine tune the corresponding T5 models on the XSum training set (Narayan et al., 2018) in a similar fashion and use the ROUGE score on the XSum development set as a stopping criterion.

24In early experiments we saw that training with a longer maximum input length resulted in comparable performance.
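To make the fine-tuning input/target convention of §A.3 concrete, below is a minimal illustrative sketch (not the original training code); the helper name build_t5_example and the word-level premise truncation are our own stand-ins for the tokenizer-based 512-token truncation described above.

def build_t5_example(document: str, summary: str, is_consistent: bool,
                     max_premise_words: int = 400) -> dict:
    """Format one factual-consistency example in the NLI-style layout of A.3.

    The target is the string "1" for a factually consistent summary and "0"
    otherwise. Word-level truncation of the premise is a rough stand-in for
    the 512-token truncation applied with the real tokenizer.
    """
    premise = " ".join(document.split()[:max_premise_words])
    source = f"Premise: {premise} Hypothesis: {summary}"
    target = "1" if is_consistent else "0"
    return {"input": source, "target": target}


if __name__ == "__main__":
    doc = "A nuclear submarine being repaired at a Russian shipyard has caught fire."
    summ = "A submarine caught fire at a Russian shipyard."
    print(build_t5_example(doc, summ, is_consistent=True))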
A.4 Additional Details About Our Dataset

As mentioned in §3.1, we create the dataset based on documents from CNN/DailyMail (Hermann et al., 2015). We do not use the gold summaries, and we only use examples from the training set. In our experiments with the full dataset (§4.1), we balance the labels by randomly sampling 475,563 positive examples (see Table 1).

A.5 Data Filtering with Self-verification

As mentioned in §3 we also explored data filtering based on prompting FLAN-PaLM for self-verification. Our process is based on 3 steps. (1) Detect potential examples in our dataset that are likely to be labeled incorrectly by the LLM. (2) Prompt the LLM to self-verify its earlier prediction and filter out examples that the model is uncertain of. This leads to a smaller dataset with improved labeling accuracy. (3) Train the factual consistency evaluation model on the filtered dataset. This approach is based on 2 observations:

1. In early experiments, we saw that our LLM has extremely high precision for the inconsistent class. This can also be seen in our human evaluation (Table 4). This means that almost all the errors occur when the LLM predicts that the summary is consistent. Following this, we only consider filtering examples classified as consistent by the LLM.

2. Inspired by the work of Weng et al. (2023) and Madaan et al. (2023), we use a self-verification prompt. If the LLM classified the summary as consistent, we prompt it again and ask it for its certainty. If the answer is "Yes" (i.e. it is consistent with the original reasoning path), we keep the example, otherwise we filter it out.

This process is illustrated in Figure 4.

Figure 4: Self-verification prompting. If the LLM classified the summary as consistent, we prompt it again and ask it for its certainty. If the answer is "Yes" (consistent with the original reasoning), we keep the example, otherwise we filter it out.

The self-verification prompt is as follows:

Premise: {document} Hypothesis: {summary} Are you sure that the summary can be inferred from the document? Answer using "Yes" or "No" only.

This approach filtered out 15% of the dataset. When we qualitatively analyzed the filtered examples, it seems that the majority of the filtered examples indeed had a wrong label, and that applying this filtering mechanism increases the labeling accuracy by approximately 5%.

While this filtering mechanism results in higher labeling accuracy, we did not observe a performance gain when filtering the training data in this way. For TrueTeacher + ANLI with T5-11B (on a sample of 100k examples) we got an average of 86 ROC-AUC on TRUE using the filtered data, slightly below the 86.4 using the unfiltered data (Table 3). As mentioned in Footnote 9, we attribute this to the fact that the labeling accuracy is high to begin with (89%, §4.4) and that the model is likely robust to some amount of labeling noise. Following this, for simplicity, our official method does not use filtering.
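For concreteness, the following is a small sketch of the two-step labeling-and-filtering procedure of §A.5; ask_llm is a placeholder for a call to FLAN-PaLM (or any LLM) rather than a real API, and the simple "Yes"/"No" string matching is an assumption made for illustration.

from typing import Callable, Tuple

FIRST_PASS = ('Premise: {document}\nHypothesis: {summary}\n'
              'Can the hypothesis be inferred from the premise? '
              'Answer using "Yes" or "No" only.')
SELF_VERIFY = ('Premise: {document}\nHypothesis: {summary}\n'
               'Are you sure that the summary can be inferred from the document? '
               'Answer using "Yes" or "No" only.')


def label_and_filter(document: str, summary: str,
                     ask_llm: Callable[[str], str]) -> Tuple[str, bool]:
    """Return (label, keep) following the A.5 procedure.

    ask_llm is assumed to return the raw text generated by the LLM ("Yes" or
    "No"). Summaries labeled inconsistent are always kept (the high-precision
    class), while "consistent" labels are kept only if the self-verification
    prompt also answers "Yes".
    """
    first = ask_llm(FIRST_PASS.format(document=document, summary=summary))
    if first.strip().lower().startswith("no"):
        return "inconsistent", True
    second = ask_llm(SELF_VERIFY.format(document=document, summary=summary))
    keep = second.strip().lower().startswith("yes")
    return "consistent", keep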
A.6 Abstractiveness Analysis: Additional Details

As our backbone metrics we use the Extractive Fragment Coverage and Density measures defined by Grusky et al. (2018). Coverage measures the percentage of words in the summary that are part of an extractive fragment with the article, quantifying the extent to which a summary is derivative of a text. Density measures the average length of the extractive fragment to which each word in the summary belongs, quantifying how well the word sequence of a summary can be described as a series of extractions. Our Combined score is obtained by multiplying the Coverage and the Density scores, similar to Utama et al. (2022). To further illustrate the differences in the abstractiveness of different methods, we include a visualization of the density of the combined abstractiveness score in Figure 5.

Figure 5: Visualization of the density of the combined abstractiveness score. The plot is actually measuring the extractiveness degree, so lower x-values mean higher abstractiveness. [The figure compares FactEdit, DocNLI, FactCC, Falsesum and our data; x-axis: extractiveness, y-axis: density.]

A.7 Using the mFace dataset

In §5 we report results on the mFace dataset (Aharoni et al., 2022). Aharoni et al. performed large scale human evaluation of summaries of documents from the XLSum corpus (Hasan et al., 2021), produced by different summarization models. Each summary was rated for quality, attribution and informativeness. We use the attribution scores in our work. The attribution evaluation is based on the attribution definition provided in Rashkin et al. (2021), with the participants asked "Is all the information in the summary fully attributable to the article?". In our work we use the average attribution score (between 0 and 1) and treat summaries as factually consistent if the score is larger than 0.5. We focus on the test split of XLSum containing 3,150 examples in 45 languages (i.e., 70 examples in each language). In §5 we refer to Table 7 with the results overview, and we provide the full results for all languages in Table 10.

Table 10: ROC-AUC results on the mFace test set.

Language            ANLI+XNLI   +100K en   +100K en/es/de/fr
amharic             63.1        67.2       68.6
arabic              87.8        89.0       87.7
azerbaijani         59.6        68.6       65.5
bengali             90.4        94.3       98.5
burmese             59.0        64.5       57.9
chinese simp.       87.6        86.4       89.9
chinese trad.       82.5        82.6       83.2
english             80.2        74.7       80.0
french              91.9        94.1       97.1
gujarati            50.8        52.0       51.5
hausa               69.5        67.7       73.7
hindi               72.2        79.9       86.5
igbo                62.2        62.8       75.7
indonesian          77.6        84.1       85.8
japanese            97.7        98.9       99.6
kirundi             83.5        89.3       90.4
korean              87.3        82.3       89.9
kyrgyz              70.1        77.4       79.0
marathi             75.2        78.7       73.6
nepali              55.2        59.1       57.2
oromo               81.2        83.7       83.3
pashto              56.4        68.2       67.7
persian             43.5        42.3       45.8
pidgin              70.0        81.4       77.1
portuguese          79.6        79.5       79.0
punjabi             77.7        81.5       78.2
russian             88.8        85.1       81.2
scottish gaelic     59.0        58.8       63.1
serbian cyrillic    84.2        79.3       85.5
serbian latin       39.7        42.2       43.6
sinhala             72.9        74.9       76.1
somali              85.1        88.6       86.6
spanish             80.7        85.9       89.1
swahili             88.1        89.2       92.2
tamil               63.9        69.8       66.0
telugu              55.9        62.3       60.4
thai                78.8        83.8       86.8
tigrinya            79.9        82.9       86.1
turkish             87.0        86.6       86.6
ukrainian           55.5        67.0       65.9
urdu                69.0        63.8       75.3
uzbek               54.6        59.3       58.8
vietnamese          89.8        84.4       88.1
welsh               83.0        83.4       83.9
yoruba              69.0        69.0       77.2
# wins              5           15         25
# > ANLI+XNLI       -           32         35
Per lang. avg.      73.3        75.7       77.2
Per example avg.    71.6        73.8       75.3

A.8 Human Evaluation

We instructed the participants to review the document and its corresponding summary, and to evaluate the summary based on the attribution definition provided by Rashkin et al. (2021), using binary judgements. To avoid a common confusion between factual inconsistency and contradiction, we also provided the following instruction:

In this task you will evaluate the factual consistency of a system-generated summary. The system's goal is to summarize the original source document, while remaining truthful to it. Your goal is to evaluate whether the system-generated summary is consistent w.r.t. the source document. Summary will be considered consistent if all of the information in the summary can be verified from the source document (i.e., for the summary to be inconsistent, the document does not necessarily need to contradict it, it can also fail to support some facts).

In an early experiment, we found that using crowd workers without domain expertise and substantial time investments resulted in extremely low-quality ratings. Following this, all our raters were NLP researchers, each with at least one year of specific experience in the task of factual consistency evaluation, with significant time allocation and no more than 10 examples per rater.25 These steps ensured high quality ratings.

A.9 Adding noise to TrueTeacher

In §4.5 we create SummaryAblation by flipping labels of a random portion of TrueTeacher's data, such that the expected labeling accuracy is similar to Falsesum. Falsesum's labeling method is coupled with the data generation, thus we need an approximation for its labeling quality. We estimate Falsesum's labeling accuracy as 83.5%, according to Utama et al. (2022)'s human evaluation (we average the Intrinsic and Extrinsic results), while ours is 89% (§4.4). So to mimic Falsesum's quality we flipped TrueTeacher's labels in order to add an additional 5.5% errors.

25We found that it is sufficient to use one rater per example (unlike in our experiments with the crowd workers).
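Returning to the abstractiveness metrics of §A.6, the sketch below is an illustrative re-implementation (not the authors' code) of Extractive Fragment Coverage and Density in the spirit of Grusky et al. (2018), together with the Combined score obtained by multiplying the two; the greedy fragment matching and whitespace tokenization are simplifying assumptions.

def extractive_fragments(article_tokens, summary_tokens):
    """Greedily collect maximal shared token runs (extractive fragments)."""
    fragments, i = [], 0
    n, m = len(summary_tokens), len(article_tokens)
    while i < n:
        best = []
        j = 0
        while j < m:
            if summary_tokens[i] == article_tokens[j]:
                k = 0
                while i + k < n and j + k < m and summary_tokens[i + k] == article_tokens[j + k]:
                    k += 1
                if k > len(best):
                    best = summary_tokens[i:i + k]
                j += max(k, 1)
            else:
                j += 1
        if best:
            fragments.append(best)
            i += len(best)
        else:
            i += 1
    return fragments


def coverage_density_combined(article: str, summary: str):
    a, s = article.lower().split(), summary.lower().split()
    frags = extractive_fragments(a, s)
    matched = sum(len(f) for f in frags)
    coverage = matched / max(len(s), 1)                          # fraction of copied summary words
    density = sum(len(f) ** 2 for f in frags) / max(len(s), 1)   # avg fragment length per summary word
    return coverage, density, coverage * density                 # Combined score of A.6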
synthetic_cpt
2
Dynamic_Sparse_No_Training_Training-Free_Fine-tuning_for_Sparse_LLMs.pdf
Published as a conference paper at ICLR 2024

DYNAMIC SPARSE NO TRAINING ○: TRAINING-FREE FINE-TUNING FOR SPARSE LLMS

Yuxin Zhang1†, Lirui Zhao1†, Mingbao Lin2, Yunyun Sun3, Yiwu Yao3, Xingjia Han3, Shiwei Liu4,5,6, Rongrong Ji1,7‡∗, Jared Tanner4

1Key Laboratory of Multimedia Trusted Perception and Efficient Computing, Ministry of Education of China, Xiamen University  2Tencent Youtu Lab  3Huawei Technologies  4University of Oxford  5University of Texas at Austin  6Eindhoven University of Technology  7Institute of Artificial Intelligence, Xiamen University

arXiv:2310.08915v3 [cs.AI] 26 Feb 2024

ABSTRACT

The ever-increasing large language models (LLMs), though opening a potential path for the upcoming artificial general intelligence, sadly place a daunting obstacle on the way towards their on-device deployment. As one of the most well-established pre-LLM approaches for reducing model complexity, network pruning appears to lag behind in the era of LLMs, due mostly to its costly fine-tuning (or re-training) necessity under the massive volumes of model parameters and training data. To close this industry-academia gap, we introduce Dynamic Sparse No Training (DS○T1), a training-free fine-tuning approach that slightly updates sparse LLMs without the expensive backpropagation and any weight updates. Inspired by Dynamic Sparse Training, DS○T minimizes the reconstruction error between the dense and sparse LLMs, in the fashion of performing iterative weight pruning-and-growing on top of sparse LLMs. To accomplish this purpose, DS○T particularly takes into account the anticipated reduction in reconstruction error for pruning and growing, as well as the variance w.r.t. different input data for growing each weight. This practice can be executed efficiently in linear time since it obviates the need for backpropagation when fine-tuning LLMs. Extensive experiments on LLaMA-V1/V2, Vicuna, and OPT across various benchmarks demonstrate the effectiveness of DS○T in enhancing the performance of sparse LLMs, especially at high sparsity levels. For instance, DS○T is able to outperform the state-of-the-art Wanda by 26.79 perplexity at 70% sparsity with LLaMA-7B. Our paper offers fresh insights into how to fine-tune sparse LLMs in an efficient training-free manner and opens new avenues to scale the great potential of sparsity to LLMs. Codes are available at https://github.com/zyxxmu/DSnoT.

1 INTRODUCTION

Large language models (LLMs) (Zhang et al., 2022a; Touvron et al., 2023a; Brown et al., 2020) have recently emerged as the new favorite in various domains of natural language processing (NLP) (Wei et al., 2022b;a; Bubeck et al., 2023). Nevertheless, LLMs face a significant constraint: their extensive parameterization and computational demands present substantial challenges in terms of storage and deployment. For example, the GPT-175B model (Brown et al., 2020) eats up 320G of memory to load its parameters in FP16 precision, requiring at least five A100-80G GPUs for inference (Frantar & Alistarh, 2023). In response to this issue, there has been a surge of interest in compressing LLMs, as it holds the promise of LLMs while remarkably reducing memory usage and computational costs.
To date, the majority of current effort for LLM compression falls into quantization (Yao et al., 2022; Lin et al., 2023; Frantar et al., 2022; Dettmers et al., 2023; 2022; Xiao et al., 2023; Shao et al., 2024; Ma et al., 2024), which compresses LLMs by diminishing the number of bits employed to represent weights or hidden states. ∗†Equal contribution ‡Corresponding author: [email protected] 1Pronounced “DS No T”. 1 Published as a conference paper at ICLR 2024 Figure 1: Perplexity on WikiText-2 (left) and running time (right) of different methods for pruning LLaMA-V1 model family at 60% sparsity rate. Without any training, DS○T consistently improves the performance of sparse LLMs, all within a linear time spectrum. On the other hand, network pruning (LeCun et al., 1989; Han et al., 2015; Mocanu et al., 2018), a technique that removes superfluous weights to create a sparse and lightweight model, has received relatively little attention (Frantar & Alistarh, 2023; Sun et al., 2023). The plausible reason is that, network pruning usually appreciates at least one, usually many, iterations of fine-tuning or re-training to guarantee top performance (Frankle & Carbin, 2019; Yin et al., 2023). This fine-tuning step would cause a significant amount of compute and memory footprints due to the colossal model size and massive training data of modern LLMs, which even unnerves large corporations, let alone individual researchers. Two previous arts have explored the possibility to scale pruning to billion-level LLMs without any fine-tuning. SparseGPT (Frantar & Alistarh, 2023) formulates LLM pruning as a layer-wise weight reconstruction problem, where the target falls into mitigating the output discrepancy, w.r.t., recon- struction error, between dense and sparse LLMs. To solve the row-Hessian challenge, i.e., the need for calculating the expensive inversion of a huge matrix for each row individually, SparseGPT itera- tively applies OBS (Hassibi et al., 1993) to individually prune and updates weights in a column-wise manner, ultimately reaching the same optimal solution as applying the closed-form regression recon- struction. Wanda (Sun et al., 2023) proposes a new pruning metric that takes both weight magnitude and their corresponding input activations into consideration, performing on part with SparseGPT without the need for the expensive second-order information. The intuition behind Wanda lies in the existence of emergent outlier feature dimensions in large-scale LLMs which are significantly larger than typical features and meanwhile are essential for the optimal performance of LLMs (Dettmers et al., 2022). While these two approaches enable LLM pruning without performing fine-tuning, their performance is still far from satisfactory, e.g., starting to lose performance at 20% sparsity with LLaMA-30B. Therefore, it is imperative to enable fine-tuning for sparse LLMs to fully unlock the potential of sparsity to escalate the affordability of LLMs. In a parallel vein, Dynamic Sparse Training (DST), as outlined in previous research (Mocanu et al., 2018; Liu et al., 2019; Evci et al., 2020), has garnered considerable attention recently due to its significant saving potentials in the context of neural network training. Instead of training an entire network, DST selectively updates and maintains a subset of the network throughout the training pro- cess, while allowing the sparse network topology to dynamically evolve via a weight operation (Mo- canu et al., 2018). 
Given its demonstrated efficacy in achieving efficient training, DST seems to be a promising candidate for efficient LLMs fine-tuning. However, it is essential to note that DST in- trinsically requires the training of subnetworks via backpropagation, and the effectiveness of mask adaptation highly relies on a sufficient number of weight updates (Liu et al., 2021). Moreover, prior studies have indicated its failure when employed for fine-tuning small-scale BERT-level language models (Liu et al., 2023). Fortunately, it is noteworthy that the pruning-and-growing step employed in DST solely stands as a training-free methodology, enabling sparse mask adaptation based on certain weight status, e.g., magnitude (Mocanu et al., 2018). This offers an alternative perspective for addressing the aforemen- tioned challenge: While fine-tuning sparse LLMs through backpropagation can result in substantial computational overhead, we can explore the possibility of iteratively updating sparse mask in a training-free fashion as a viable alternative. Based on this intuition, we introduce a training-free 2 Published as a conference paper at ICLR 2024 fine-tuning approach – Dynamic Sparse No Training (DS○T). This approach empowers the fur- ther refinement of sparse LLMs without any weight updates. To facilitate mask adaptation in favor of the sparse reconstruction problem, we propose new criteria for mask pruning and growing, by considering both the expectation and variance of the reconstruction error reduction when recovering a specific weight. It is worth emphasizing that the DS○T functions independently of the need for computationally intensive operations, such as gradient or Hessian matrices. Instead, it exclusively relies on a singular matrix multiplication operation to assess the reconstruction error. We conduct comprehensive experiments to evaluate the effectiveness of DS○T with a variety of LLMs, including LLaMa-V1 (Touvron et al., 2023a) and LLaMa-V2 (Zhang et al., 2022a), Vi- cuna (Chiang et al., 2023), and OPT families (Zhang et al., 2022a), from 7 billion to 70 billion parameters. Our results demonstrate that DS○T consistently improves the performance of sparse LLMs by a good margin, especially at high sparsity levels > 50%. For instance, DS○T is able to improve the performance over Magnitude pruning, SparseGPT, and Wanda by 1.1e6, 4.31, and 1.87 perplexity with OPT-13B on WikiText-2 at 60% sparsity only using 7.3s on a single NVIDIA A100 GPU. Our work provides fresh insights in efficient sparse LLM fine-tune without weight updates and we hope to encourage more research in exploring benefits of sparsity in LLMs. 2 RELATED WORK Network Sparsification. The process of eliminating redundant weights, known as network sparsi- fication or network pruning, has served as a practical strategy to diminish the complexity of deep neural networks over the past decades (LeCun et al., 1989; Han et al., 2015). Despite the substantial body of literature, network pruning can be roughly classified based on the granularity of sparsity and the dependency of the pre-trained dense models. I. Granularity of Sparsity: The granular- ity of sparsity varies from coarse grains to fine grains. The coarse-grained granularity can be a group of weights (Gray et al., 2017; Ding et al., 2017), a complete neuron (Jiang et al., 2018); a filters/channels (Li et al., 2017), or an attention head (Voita et al., 2019), etc. 
On the other hand, fine-grained granularity eliminates the least important weights based on the selected criteria, regard- less of where they are (Gale et al., 2019). The advantage of coarse-grained sparsity is its pronounced acceleration effect, which yet typically suffers from larger performance loss. Fine-grained sparsity enjoys performance superiority compared to other more structured forms of sparsity but receives limited support in common hardware. Nonetheless, recent advancements of dedicated fine-grained sparse patterns, such as N:M sparsity (Zhou et al., 2021; Zhang et al., 2022b), can be effectively accelerated. As such, this paper focuses on fine-grained network pruning. II. Dependency of Pre- trained Networks: In parallel, sparsification techniques can be grouped into dense-to-sparse, and sparse-to-sparse methods based on the necessity of an over-parameterized dense network. The for- mer entails embarking from a pre-trained dense model and discovering a sparse network (Han et al., 2015; Wen et al., 2016; Molchanov et al., 2017; Gale et al., 2019; Kurtic et al., 2022), usually fol- lowed by a retraining process to recover the optimal accuracy. On the other hand, sparse-to-sparse methods aim to train sparse neural networks from scratch, omitting any preliminary steps involving dense pre-training (Mocanu et al., 2018; Lee et al., 2019; Evci et al., 2020; Wang et al., 2020; Liu et al., 2021). Among them, Dynamic Sparse Training (DST) (Mocanu et al., 2018; Evci et al., 2020; Liu et al., 2021) stands out and receives upsurging interest due to its promise in saving both training and inference phases. In contrast to the conventional practices of pre-training followed by pruning, DST distinguishes itself by commencing with a randomly initialized sparse neural net- work. During a single training run, it dynamically adjusts the sparse network topology by such as pruning-and-growing, without the need for pre-training, while maintaining moderate training costs by, for example, keeping the similar sparsity ratios across all varying masks (Mostafa & Wang, 2019; Dettmers & Zettlemoyer, 2019; Yuan et al., 2021; Jayakumar et al., 2020). While the crux of this paper focuses on the first category, i.e., pruning a pre-trained LLM model, our proposed method is mainly inspired by the pruning-and-growing utilized in DST to iteratively refine the binary masks in a training-free manner, even though we do not conduct weight training as such. Another line of research, akin to our approach, demonstrates the existence of “supermasks” within randomly initialized network (Zhou et al., 2019; Ramanujan et al., 2020; Huang et al., 2022) or pre-trained networks (Mallya et al., 2018; Wortsman et al., 2020; Zhang et al., 2023), exhibiting the capacity to achieve commendable performance solely by seeking binary masks. However, it is imperative to note that these methods heavily rely on backpropagation, which is ill-suited for LLMs. 3 Published as a conference paper at ICLR 2024 Pruning of LLMs. Compared to the well-established promise of pruning in pre-LLM small-scale models, the advancement of pruning in the context of LLMs appears to exhibit relatively modest progress. Firstly, traditional pruning generally requires at least one iteration of re-training to recover performance. Considering the substantial model size and massive datasets associated with LLMs, the prospect of conducting such resource-intensive re-training becomes a formidable challenge. 
To mitigate the above challenge, researchers have introduced pruning algorithms specifically devised for LLM compression. Ma et al. (2023) explored structured sparse LLMs by applying Taylor pruning (Molchanov et al., 2017) to remove entire weight rows, followed by fine-tuning with the parameter-efficient fine-tuning (PEFT) technique LoRA (Hu et al., 2021). However, the fine-tuning phase still demands a considerable amount of data, while the performance suffers a significant degradation, attributed primarily to the coarse-grained level of sparsity. Recent research endeavours have evolved towards the direction of unstructured pruning in one-shot without fine-tuning, demonstrating significant progress. SparseGPT (Frantar & Alistarh, 2023) incorporates the Hessian inverse for pruning and subsequent residual weight updates, whereas Wanda (Sun et al., 2023) directly arrives at a sparse LLM model by a criterion depicted by the multiplication of the absolute values of weights and their activations, with the aim to preserve the outliers (Dettmers et al., 2022) that emerge in LLMs. DS○T serves as an orthogonal perspective and can be organically integrated on top of them.

3 DYNAMIC SPARSE NO TRAINING – DS○T

Preliminary. LLM pruning entails the removal of a certain proportion of pre-trained weights to obtain a sparse LLM, with the objective of achieving minimal discrepancy between the output of the sparse and dense models (Hassibi et al., 1993). Solving this problem can be very arduous given the immense scale of LLMs. Therefore, it is more practical to formalize LLM pruning as a layer-wise reconstruction problem (Hubara et al., 2021; Frantar & Alistarh, 2023). Denote the weights of one dense LLM layer as W ∈ R^{C_out×C_in}, where C_out and C_in stand for the number of output and input channels respectively. Supposing we have N calibration samples, the input activation can be represented as A ∈ R^{C_in×(N·L)}, with L being the sequence length. Pruning can be viewed as devising a binary mask M ∈ {0, 1}^{C_out×C_in} to indicate whether weights are removed or not. Hence, the problem of LLM pruning given a specific pruning rate p can be formalized as:

\min_{\mathbf{M},\mathbf{W}} \Big\| \underbrace{\mathbf{W} * \mathbf{A} - (\mathbf{M} \odot \mathbf{W}) * \mathbf{A}}_{\Delta} \Big\|_2, \quad \text{s.t.} \;\; 1 - \frac{\|\mathbf{M}\|_0}{C_{out} \cdot C_{in}} = p, \tag{1}

where ∗, ⊙, ‖·‖2 denote matrix multiplication, dot product operation, and the ℓ2 norm, respectively. Note that we refer to ∆ ∈ R^{C_out×(N·L)} as the reconstruction error for ease of the following text.

Dynamic Sparse No Training. The problem defined in Eq. (1) can be addressed from two complementary perspectives. Firstly, it can be resolved through the initialization of sparse networks, i.e., devising criteria to prune weights that exhibit minimal impact on model output. For instance, SparseGPT (Frantar & Alistarh, 2023) employs second-order Hessian inverses, while Wanda (Sun et al., 2023) considers products of weight and activation norm as the guide for weight removal. Secondly, for the obtained sparse networks, the remaining weights can be naturally fine-tuned to further compensate for the reconstruction error (Han et al., 2015). Unfortunately, this requires substantial training resources, which is not practical given the large volumes of LLMs. Therefore, SparseGPT adjusts the remaining weights via an iterative OBS update (Hassibi & Stork, 1992), which as a consequence remarkably reduces the computing demands.

Figure 2: Framework of DS○T.
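As a concrete illustration of Eq. (1), the following minimal PyTorch sketch computes the layer-wise reconstruction error ∆ for a given weight matrix, binary mask, and calibration activations; the shapes and the random demo inputs are assumptions used for illustration only.

import torch


def reconstruction_error(W: torch.Tensor, M: torch.Tensor, A: torch.Tensor) -> torch.Tensor:
    """Layer-wise reconstruction error Delta of Eq. (1).

    W, M: (C_out, C_in) dense weights and binary mask; A: (C_in, N*L) calibration
    activations. Returns Delta with shape (C_out, N*L): the difference between the
    dense output W @ A and the sparse output (M * W) @ A.
    """
    return W @ A - (M * W) @ A


if __name__ == "__main__":
    torch.manual_seed(0)
    W = torch.randn(8, 16)
    M = (torch.rand_like(W) > 0.6).float()   # ~60% sparsity mask, for illustration
    A = torch.randn(16, 32)
    delta = reconstruction_error(W, M, A)
    print(delta.shape, delta.abs().mean().item())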
In this work, our focus is on the second part, i.e., how to efficiently reduce the reconstruction error of a given pruned sparse network with respect to its dense counterpart. Instead of fully fine-tuning (Han et al., 2015) or partially updating the pruned LLMs (Frantar & Alistarh, 2023) to recover performance, we introduce an ultra-efficient yet effective alternative that refines the sparse mask after pruning based on its contribution to the reconstruction error. Our approach is inspired by the pruning-and-growing operation used in Dynamic Sparse Training (Mocanu et al., 2018; Evci et al., 2020). DST incorporates the processes of weight pruning and weight growing within the framework of sparse network training, contributing to the discovery of improved sparse topologies. Note that this pruning-and-growing operation solely serves as a training-free approach that is able to adapt sparse masks towards a desirable perspective, e.g., loss minimization. Based on this insight, we propose DS○T, a training-free fine-tuning method for sparse LLMs that strips weight updating from DST and keeps the pruning-and-growing, by converting the optimization objective to the reconstruction error of each weight row. We isolate pruning-and-growing from network training, and formulate it as an iterative approach to progressively optimize sparse masks towards the desirable ones achieving the minimal reconstruction error represented by Eq. (1).

Specifically, DS○T starts with a sparse LLM which can be pruned by any existing criteria (Jaiswal et al., 2023; Sun et al., 2023; Frantar & Alistarh, 2023). Then, it performs iterative weight growing and pruning by looking at the reconstruction error as defined in Eq. (1), with especially-designed criteria to decrease the output discrepancy between sparse LLMs and their dense counterparts. The framework of DS○T is illustrated in Figure 2 and its main parts are described in detail below.

Algorithm 1: Pseudocode of DS○T.
Input: A sparse layer with weight W ⊙ M, maximum cycle T, update threshold ϵ.
Workflow of DS○T:
Initialize reconstruction error ∆ via Eq. (1)
for r = 1 to C_out do
    for t = 1 to T do
        Obtain the growing index i via Eq. (2).
        Obtain the pruning index j via Eq. (3).
        M_{r,i} = 1
        M_{r,j} = 0
        Update reconstruction error ∆_r via Eq. (1).
        if ∆_r < ϵ then break
return Fine-tuned sparse weights W ⊙ M.

Growing Criterion. As each output neuron is computed independently, we use one weight row W_r and the corresponding mask M_r for illustration. Given the sparse weight row M_r ⊙ W_r, we attempt to revive the pruned weight that leads to the largest decrease of ∆_r across different input activations. Therefore, our growing criterion considers both the expectation and variance of the reconstruction error change when recovering a weight. In particular, the index i of the revived weight is derived as follows:

i = \begin{cases} \arg\max_{k} \; \neg\mathbf{M}_{r,k} \cdot \mathbf{W}_{r,k} \cdot \mathbb{E}[\mathbf{A}_r] / \mathrm{Var}(\mathbf{A}_r), & \text{if } \mathbb{E}[\Delta_r] > 0, \\ \arg\min_{k} \; \neg\mathbf{M}_{r,k} \cdot \mathbf{W}_{r,k} \cdot \mathbb{E}[\mathbf{A}_r] / \mathrm{Var}(\mathbf{A}_r), & \text{otherwise,} \end{cases} \tag{2}

where E(·) and Var(·) stand for the expectation and variance of the given inputs across the N × L different tokens. To explain, E[A_r] · W_r represents the expected influence of weight growing on ∆_r. Thus, based on the sign of the reconstruction error ∆_r, we can determine which weight should be restored so as to decrease ∆_r. Furthermore, we introduce the variance of the input activation to achieve a more robust revival.
This is intuitive because, if the influence of a weight on ∆_r exhibits high variance across different inputs, restoring it may not result in a stable error reduction.

Pruning Criterion. After choosing the revived weight, we need to select another weight for pruning in order to maintain a fixed sparsity rate. However, the circumstances here are distinct: if we prune weights based on the impact of the reconstruction error change as per Eq. (2), there is a risk of removing weights that significantly influence the output. This concern becomes especially critical when pruning LLMs due to the presence of emergent large-magnitude features within them (Dettmers et al., 2022; Wei et al., 2022a; Schaeffer et al., 2023). To alleviate this, we utilize a transformed version of the Wanda metric (Sun et al., 2023). In addition to its standard criterion for pruning weights, we mandate that the selected weights should also contribute positively towards the reduction of the reconstruction error when being pruned. This helps preserve critical weights from removal without compromising the stable decrease of the reconstruction error during the training-free fine-tuning process. Therefore, the pruning index j is obtained as follows:

j = \begin{cases} \arg\min_{k,\; \mathbf{M}_{r,k}\cdot\mathbf{W}_{r,k}\cdot\mathbb{E}[\mathbf{A}_r] < 0} \; \mathbf{M}_{r,k} \cdot |\mathbf{W}_{r,k}| \cdot \|\mathbf{A}_r\|_2, & \text{if } \mathbb{E}[\Delta_r] > 0, \\ \arg\min_{k,\; \mathbf{M}_{r,k}\cdot\mathbf{W}_{r,k}\cdot\mathbb{E}[\mathbf{A}_r] > 0} \; \mathbf{M}_{r,k} \cdot |\mathbf{W}_{r,k}| \cdot \|\mathbf{A}_r\|_2, & \text{otherwise.} \end{cases} \tag{3}

Table 1: WikiText-2 perplexity comparison for pruning LLMs at 60% sparsity rate.

Method      LLaMA-V1-7B  LLaMA-V1-13B  LLaMA-V1-30B  LLaMA-V1-65B  LLaMA-V2-7B  LLaMA-V2-13B  LLaMA-V2-70B  Vicuna-13B  OPT-13B
Dense       5.68         5.09          4.10          3.56          5.47         4.88          3.32          5.94        10.12
Magnitude   5.6e2        2.3e2         15.97         8.18          6.9e3        10.11         13.35         14.39       1.1e6
w. DS○T     66.70        30.71         10.81         7.37          40.01        9.41          6.77          12.02       2.4e2
SparseGPT   10.41        8.43          6.81          5.83          10.14        7.88          5.10          10.02       21.23
w. DS○T     9.65         7.73          6.69          5.64          9.67         7.57          5.07          9.38        16.92
Wanda       10.69        8.75          6.56          5.90          10.79        8.40          5.25          9.54        15.88
w. DS○T     10.22        8.46          6.44          5.75          10.59        8.18          5.20          9.18        14.01

Workflow. Given the criteria depicted above, the workflow of DS○T is outlined in Algorithm 1. In particular, it iteratively performs weight growing and pruning with respect to Eq. (2) and Eq. (3), with the reconstruction error updated until it reaches a pre-defined threshold. Meanwhile, we set a maximum pruning-and-growing cycle T to prevent certain rows from being unable to reach the settled threshold ϵ.

Remark. It is noteworthy that Algorithm 1 outlines the processing of each row in a sequential manner, primarily for the sake of simplicity. However, it is imperative to acknowledge that each row can, in fact, undergo parallel processing by employing a binary indicator to assess whether a particular row has satisfied the termination condition. Furthermore, the DS○T process eliminates the necessity for resource-intensive procedures such as backpropagation or the computation of gradient and Hessian matrices. Instead, it relies solely on a single matrix multiplication to assess the reconstruction error, a task that can be executed efficiently on GPUs. Subsequently, during each iteration of the DS○T process, the only operation is to update the reconstruction error through straightforward addition and subtraction operations during the pruning-and-growing process. This approach effectively circumvents the introduction of additional algorithmic complexity. In summary, DS○T preserves the simplicity associated with pruning LLMs, akin to the approaches employed in Wanda and Magnitude pruning.
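To make the workflow tangible, below is a simplified, self-contained PyTorch sketch of the per-row pruning-and-growing loop of Algorithm 1 for a single linear layer. The stopping test on the mean absolute error, the skipping of rows without an admissible swap, and the small variance epsilon are our own simplifications rather than details taken from the paper, and the officially released implementation may differ.

import torch


def dsnot_layer(W, M, A, max_cycle=50, eps=0.1):
    """Training-free mask refinement for one linear layer (a sketch of Algorithm 1).

    W: (C_out, C_in) dense weights; M: (C_out, C_in) binary mask (1 = kept);
    A: (C_in, N*L) calibration activations. Returns the refined mask. Per-row
    sparsity is preserved because every growing step is paired with a pruning step.
    """
    M = M.clone().float()
    mean_a = A.mean(dim=1)                      # E[A] per input channel
    var_a = A.var(dim=1) + 1e-8                 # Var(A) per input channel (epsilon is an assumption)
    norm_a = A.norm(dim=1)                      # ||A||_2 per input channel
    delta = (W * (1 - M)) @ A                   # Eq. (1): dense output minus sparse output

    for r in range(W.shape[0]):
        for _ in range(max_cycle):
            if delta[r].abs().mean() < eps:     # simplified stopping test (assumption)
                break
            if (M[r] == 0).sum() == 0:          # nothing to grow in a fully dense row
                break
            err = delta[r].mean()               # sign of E[Delta_r] steers both criteria
            # Growing criterion (Eq. (2)): revive the pruned weight expected to push
            # Delta_r toward zero, discounted by the variance of its input channel.
            grow_score = W[r] * mean_a / var_a
            fill = float("-inf") if err > 0 else float("inf")
            masked_grow = torch.where(M[r] == 0, grow_score, torch.full_like(grow_score, fill))
            i = torch.argmax(masked_grow) if err > 0 else torch.argmin(masked_grow)
            # Pruning criterion (Eq. (3)): among kept weights whose removal also moves
            # E[Delta_r] in the right direction, drop the one with the smallest Wanda score.
            sign_ok = (W[r] * mean_a < 0) if err > 0 else (W[r] * mean_a > 0)
            candidates = (M[r] == 1) & sign_ok
            if not candidates.any():            # no admissible swap for this row (assumption)
                break
            prune_score = W[r].abs() * norm_a
            masked_prune = torch.where(candidates, prune_score, torch.full_like(prune_score, float("inf")))
            j = torch.argmin(masked_prune)
            # Apply the swap and update Delta_r incrementally: reviving weight i removes
            # its contribution from Delta_r, while pruning weight j adds its own back.
            M[r, i], M[r, j] = 1.0, 0.0
            delta[r] += -W[r, i] * A[i] + W[r, j] * A[j]
    return M

In practice, the calibration activations A for each layer can be collected with forward hooks over the calibration segments, and the refined mask is then applied as W ⊙ M without modifying the weights themselves.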
4 EXPERIMENTAL RESULTS

4.1 SETTINGS

Implementation details. The implementation details of our proposed DS○T are presented as follows, mostly conforming to the existing setups (Frantar & Alistarh, 2023; Sun et al., 2023). In terms of the pruning configuration, we adhere to SparseGPT (Frantar & Alistarh, 2023), where a uniform sparsity is imposed for all layers with the first embedding layer and the final classification head skipped. Meanwhile, the calibration data consists of 128 segments, each with 2048 tokens. These segments are randomly selected from the first shard of the C4 dataset (Raffel et al., 2020). For the hyper-parameter settings, we set the maximum cycle T = 50 and the update threshold ϵ = 0.1 in all experiments. Given sparse LLMs, we apply DS○T to fine-tune each layer in a progressive manner. We implement DS○T in PyTorch (Paszke et al., 2019) and use the HuggingFace Transformers library (Wolf et al., 2019) for handling models and datasets. All pruning experiments are conducted on NVIDIA A100 GPUs with 80GB of memory.

Baselines. We principally work with the LLaMA-V1 (Touvron et al., 2023a), LLaMA-V2 (Touvron et al., 2023b), Vicuna (Chiang et al., 2023), and OPT families (Zhang et al., 2022a), from 7 billion to 70 billion parameters, which are among the most powerful open-source Large Language Models (LLMs) in the field today. We run DS○T on sparse LLMs pruned by various methods including (1) Magnitude-based pruning (Han et al., 2015), which discards weights based on their magnitudes; (2) SparseGPT (Frantar & Alistarh, 2023), which utilizes second-order Hessian inverses to ascertain unimportant weights; (3) Wanda (Sun et al., 2023), which removes weights with the smallest magnitudes multiplied by the corresponding input activation norms.

Evaluation. In accordance with prior studies (Frantar et al., 2022; Dettmers et al., 2023; Yao et al., 2022; Frantar & Alistarh, 2023), we assess the performance of pruned models by calculating the perplexity of language generation experiments on separate validation sets derived from WikiText-2 (Merity et al., 2016). While perplexity has served as a stable and robust indicator of the generative performance of models (Dettmers & Zettlemoyer, 2023), we also examined the zero-shot capabilities of pruned models. In detail, we report the accuracy on six zero-shot tasks including PIQA (Bisk et al., 2020), StoryCloze (Mostafazadeh et al., 2017), ARC Easy and Challenge (Clark et al., 2018), HellaSwag (Zellers et al., 2019) and OpenBookQA (Mihaylov et al., 2018). We implement the lm-eval-harness (Gao et al., 2021) for the execution of all zero-shot tasks, with the report including both the accuracy results on each benchmark and the overall average accuracy.

Table 2: WikiText-2 perplexity performance of DS○T for fine-tuning sparse LLaMA-V1-7B/65B pruned by the Wanda metric at varying sparsity rates.

                         LLaMA-V1-7B                                 LLaMA-V1-65B
Sparsity    50%    60%    70%     80%      90%        50%    60%    70%     80%      90%
Wanda       7.26   10.69  88.84   4.80e3   6.41e5     4.57   5.90   15.24   2.06e3   3.21e4
w. DS○T     7.12   10.22  62.05   4.12e3   8.43e4     4.54   5.75   12.93   1.82e3   2.09e4
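For reference, a generic sketch of the WikiText-2 perplexity evaluation described above is given below, using non-overlapping 2048-token windows; it illustrates the standard protocol with HuggingFace tools and is not the authors' exact evaluation script.

import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer


@torch.no_grad()
def wikitext2_perplexity(model_name: str, seqlen: int = 2048, device: str = "cuda") -> float:
    """Perplexity on the WikiText-2 test set over non-overlapping windows (illustrative)."""
    tok = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16).to(device)
    model.eval()
    test = load_dataset("wikitext", "wikitext-2-raw-v1", split="test")
    ids = tok("\n\n".join(test["text"]), return_tensors="pt").input_ids.to(device)
    nlls, n_windows = [], ids.shape[1] // seqlen
    for k in range(n_windows):
        window = ids[:, k * seqlen:(k + 1) * seqlen]
        loss = model(window, labels=window).loss        # mean token-level negative log-likelihood
        nlls.append(loss.float() * seqlen)
    return torch.exp(torch.stack(nlls).sum() / (n_windows * seqlen)).item()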
4.2 LANGUAGE MODELING

Quantitative results. The results for fine-tuning sparse LLM models at a uniform sparsity rate of 60% are presented in Table 1. Irrespective of the datasets used for evaluation, DS○T consistently delivers performance improvements for sparse LLMs with their original sizes varying from 7B to 70B. For instance, when pruning LLaMA-V1 with 7B parameters, DS○T is able to enhance the performance of Magnitude (Jaiswal et al., 2023), SparseGPT (Frantar & Alistarh, 2023), and Wanda (Sun et al., 2023) by 4.94e2, 0.76, and 0.47 perplexity on the WikiText-2 validation sets, respectively. It is worth noting that, without any weight updating, DS○T consistently demonstrates better performance than SparseGPT, which requires expensive second-order Hessian inverses to update the sparse model. For larger models, the efficacy of DS○T still holds, with a performance gain from 13.35 to 6.77 perplexity when fine-tuning sparse LLaMA-V2-70B obtained by magnitude pruning (Han et al., 2015). These findings suggest DS○T's versatility, being adaptable to boost the performance of sparse LLMs with different parameter budgets.

Varying Sparsity Rates. We further investigate the efficacy of DS○T when fine-tuning sparse LLMs with varying pruning rates. Table 2 shows that DS○T offers effective performance enhancement across various pruning methods at different sparsity levels. Particularly, this improvement becomes increasingly evident as the sparsity level grows.

Computing efficiency. We further demonstrate the efficiency of DS○T. Following Wanda, we only report the total pruning time and exclude the forward pass process shared by all methods. Table 3 compares the quantitative wall-clock overhead evaluated on NVIDIA A100 GPUs. It is indeed encouraging to observe that, as a fine-tuning approach, DS○T maintains a computing time comparable to Wanda, while demonstrating significantly higher efficiency compared to SparseGPT.

Table 3: Time overhead (in seconds) for pruning the LLaMA-V1 model family.

Method        7B     13B    30B    65B
SparseGPT     209    337    721    1285
Wanda         0.3    0.5    1.1    1.9
Wanda+DS○T    4.3    7.4    15.7   23.7

Table 4: Comparison with LoRA fine-tuning using 50% sparse LLaMA-7B.

Method        Time Cost   Perplexity
Wanda+LoRA    4h          6.87
Wanda+DS○T    4.3s        7.12

Table 5: WikiText-2 perplexity comparison for pruning the LLaMA-V1 model family with the N:M pattern.

Method      Sparsity   7B      13B     30B     65B
Dense       -          5.68    5.09    4.10    3.56
SparseGPT   4:8        8.61    7.40    6.17    5.38
w. DS○T     4:8        8.32    7.05    6.10    5.12
Wanda       4:8        8.57    7.40    5.97    5.30
w. DS○T     4:8        8.45    7.25    5.91    5.26
SparseGPT   2:4        11.00   9.11    7.16    6.28
w. DS○T     2:4        10.03   8.36    6.82    5.80
Wanda       2:4        11.53   9.58    6.90    6.25
w. DS○T     2:4        10.89   9.05    6.76    6.14
Comparison with LoRA Fine-tuning. To further demonstrate the ultra efficiency of DS○T in terms of fine-tuning, we also compare DS○T with the parameter-efficient fine-tuning (PEFT) method LoRA (Hu et al., 2021). Table 4 presents a comparison of the time and performance of both methods in fine-tuning sparse LLaMA-7B. LoRA leverages the complete C4 dataset for a 5-hour fine-tuning and achieved a perplexity of 6.84. In stark contrast, DS○T only requires a brief duration of 4.3s and 128 samples to deliver a comparable performance, 7.12 perplexity. Taking into consideration the additional parameter burden incorporated by LoRA, the efficiency and practicality of DS○T hold.

N:M Fine-grained Sparsity. Compared with unstructured sparsity, N:M fine-grained sparsity offers more practical speedup on the NVIDIA Ampere sparse tensor core (Nvidia, 2020). Thus, we also evaluate the effectiveness of DS○T on N:M fine-grained sparsity. Given the unique pattern of N:M sparsity, which stipulates N non-zero components within M consecutive weights, our implementation of DS○T involves a restriction on the position of the pruning-and-growing weights. In particular, we select the pruned weight within the same block as the revived weight, thus the N:M characteristic is still maintained after fine-tuning. Table 5 lists the results for pruning the LLaMA-V1 model family at the 2:4 and 4:8 sparse patterns. Interestingly, even with the aforementioned extra restriction, DS○T achieves a more significant performance improvement compared to previous methods. For instance, when pruning LLaMA-V1 with 7B parameters, DS○T achieves a perplexity of 10.89, enhancing Wanda (11.53) by a noticeable 0.64 ppl. Similar findings hold for other models and sparse patterns. These results highlight the effectiveness of DS○T in boosting the performance of sparse LLMs, even with more complex sparsity constraints.

4.3 ZERO-SHOT TASKS

Following (Frantar & Alistarh, 2023; Sun et al., 2023), we also provide the accuracy of the LLaMA-V1 model family pruned at 50% sparsity rate on downstream zero-shot tasks. Averaging the accuracy over all tasks suggests DS○T's efficacy for enhancing sparse LLMs of any size. Particularly, DS○T improves the average accuracy of SparseGPT by 1.6% when pruning LLaMA-V1-7B (52.7% for DS○T and 51.1% for SparseGPT). For task-wise performance, DS○T is beneficial on all tasks, while there is no fixed superiority for fine-tuning models obtained by different pruning methods. This phenomenon may evidence the reported relatively noisy evaluation results from these zero-shot experiments (Dettmers et al., 2022). However, the advantages of consistent performance improvement and efficiency of DS○T for zero-shot tasks are obvious.

Table 6: Zero-shot accuracy comparison for pruning the LLaMA-V1 model family at 60% sparsity rate.

Params  Method      PIQA   HellaSwag  StoryCloze  ARC-e  ARC-c  OBQA   Mean
7B      Dense       78.7   56.9       76.8        75.3   41.8   34.0   60.6
7B      SparseGPT   73.1   44.8       71.5        62.6   30.2   24.4   51.1
7B      w. DS○T     73.7   47.2       72.3        62.8   30.9   29.4   52.7
7B      Wanda       73.0   43.6       69.7        62.8   30.3   25.0   50.7
7B      w. DS○T     73.2   43.7       70.0        63.6   30.8   25.8   51.2
13B     Dense       79.1   59.9       78.4        77.4   46.5   33.2   62.4
13B     SparseGPT   75.6   49.0       74.8        68.4   36.2   27.6   55.2
13B     w. DS○T     75.8   51.5       75.8        69.8   36.3   28.8   56.3
13B     Wanda       74.9   48.9       74.5        68.9   34.9   27.6   54.9
13B     w. DS○T     75.0   49.1       75.1        69.2   35.4   28.0   55.3
30B     Dense       81.1   63.3       79.1        80.4   52.9   36.0   65.4
30B     SparseGPT   76.8   55.0       78.4        74.7   43.3   32.2   60.1
30B     w. DS○T     77.3   58.0       78.8        74.8   45.6   32.8   61.2
30B     Wanda       77.7   56.7       79.1        76.2   46.5   31.6   61.3
30B     w. DS○T     78.1   56.7       79.7        76.8   46.6   32.6   61.7
65B     Dense       81.2   64.6       80.2        81.3   52.9   38.2   66.4
65B     SparseGPT   79.6   58.3       80.5        77.4   46.6   33.4   62.6
65B     w. DS○T     79.9   59.8       80.4        78.1   46.9   34.6   63.3
65B     Wanda       79.9   58.9       80.6        78.2   47.1   34.8   63.3
65B     w. DS○T     80.9   59.6       80.2        78.2   47.7   36.0   63.7

Figure 3: (left) Effect of the update schedule (T, ϵ) and (right) number of calibration sequences.
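To illustrate the block-local restriction used in the N:M experiments above, the small helper below picks the weight to prune inside the same 1×M block as the revived weight; the function name and the score convention (lower score is pruned first) are our own illustrative choices, not details from the paper.

import torch


def swap_within_block(mask_row: torch.Tensor, grow_idx: int,
                      prune_score_row: torch.Tensor, m: int = 4) -> int:
    """Pick the weight to prune inside the same 1xM block as the revived weight.

    mask_row: binary mask of one weight row; grow_idx: index chosen by the growing
    criterion; prune_score_row: per-weight pruning scores (lower = prune first).
    Assumes a valid N:M mask, so every block contains at least one kept weight;
    pruning inside the block keeps the N:M pattern intact after the swap.
    """
    start = (grow_idx // m) * m
    block = torch.arange(start, start + m)
    kept = block[mask_row[start:start + m] == 1]     # only currently kept weights may be pruned
    return int(kept[torch.argmin(prune_score_row[kept])])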
4.4 PERFORMANCE ANALYSIS Next, we investigate the influence of the components within DS○T, unfolds as its update schedule, pruning-and-growing criteria, and robustness to calibration samples. All experimental setups are based on the LLaMA-7B model pruned by the Wanda metric (Sun et al., 2023) with 60% sparsity. Update schedule. In Figure 3 (left), we examine the performance of DS○T under different hyper- parameter setting for the update schedule, including the maximum cycle C and stop threshold ϵ. The best performance is obtained with 50 cycles and 0.1 updating threshold. To analyze, smaller C and larger ϵ both lead to an insufficient procedure for the decrease in reconstruction error. In contrast, running DS○T without termination conditions also resulted in poor performance, most likely due to over-fitting of calibration data. Robustness to calibration samples. In Figure 3 (right), we show the performance of pruning meth- ods with varying numbers of sampled sequences for calibration. As can be observed, SparseGPT suffers serious performance degradation when calibration samples are limited, mostly due to the difficulty in estimating Hessian inverses in such cases. Fortunately, DS○T consistently the perfor- mance of SparseGPT, even if only very few samples are given. These results further highlight the robustness of DS○T for mitigating the reconstruction error. Growing Table 7: Effect of the pruning and growing criteria. Pruning-and-growing criteria. We further investigate the influence on criteria for prune and grow in Table 7. Note that when we transfer Eq. (2) to the prune criteria, the elec- tion of extreme values is also correspond- ingly reversed. As for the prune criterion, it can be seen that pruning weights that could bring the most reduction in reconstruction error actually led to a significant perfor- mance decrease. This indicates that while pursuing the reduction of reconstruction error, it is also essential to keep weights that exhibit an extremely large influence on the output, e.g., weights within outlier channel. On the other hand, our proposed criteria based on the expectation and variance of the reconstruction error reduction achieved the best results among all growing criteria. |Wr,k| · ||Ar||2 Eq. (2) Eq. (3) |Wr,k| · ||Ar||2 Eq. (3) 10.49 10.61 10.37 10.72 11.24 10.52 10.27 10.84 10.22 Pruning Eq. (2) 5 CONCLUSION In this work, we introduce DS○T, a training-free fine-tuning approach that enhances the perfor- mance of sparse LLMs without the expensive backpropagation or any weight updates. Taking in- spiration from the success of sparse training in the pre-LLM pruning age, DS○T adapts iterative weights growing and pruning in a sparse LLM, with a transferred target for minimizing the recon- struction error between dense and sparse LLMs outputs. To furnish guidance in the selection of weights to be pruned and grown, we introduce novel criteria that take into account the expectation and variance of the reconstruction error reduction by growing each weight concerning different in- puts. Extensive experiments on pruning representative LLMs across various language benchmarks demonstrate the efficiency and effectiveness of DS○T in boosting the performance of sparse LLMs. 9 Published as a conference paper at ICLR 2024 ACKNOWLEDGEMENT This work was supported by National Key R&D Program of China (No.2022ZD0118202), the Na- tional Science Fund for Distinguished Young Scholars (No.62025603), the National Natural Sci- ence Foundation of China (No. U21B2037, No. U22B2051, No. 
62176222, No. 62176223, No. 62176226, No. 62072386, No. 62072387, No. 62072389, No. 62002305 and No. 62272401), and the Natural Science Foundation of Fujian Province of China (No.2021J01002, No.2022J06001). REFERENCES Yonatan Bisk, Rowan Zellers, Jianfeng Gao, Yejin Choi, et al. Piqa: Reasoning about physical com- monsense in natural language. In Proceedings of the AAAI conference on artificial intelligence (AAAI), volume 34, pp. 7432–7439, 2020. Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. Advances in neural information processing systems (NeurIPs), 33:1877–1901, 2020. S´ebastien Bubeck, Varun Chandrasekaran, Ronen Eldan, Johannes Gehrke, Eric Horvitz, Ece Ka- mar, Peter Lee, Yin Tat Lee, Yuanzhi Li, Scott Lundberg, et al. Sparks of artificial general intelligence: Early experiments with gpt-4. arXiv preprint arXiv:2303.12712, 2023. Wei-Lin Chiang, Zhuohan Li, Zi Lin, Ying Sheng, Zhanghao Wu, Hao Zhang, Lianmin Zheng, Siyuan Zhuang, Yonghao Zhuang, Joseph E Gonzalez, et al. Vicuna: An open-source chatbot impressing gpt-4 with 90%* chatgpt quality. See https://vicuna. lmsys. org (accessed 14 April 2023), 2023. Peter Clark, Isaac Cowhey, Oren Etzioni, Tushar Khot, Ashish Sabharwal, Carissa Schoenick, and Oyvind Tafjord. Think you have solved question answering? try arc, the ai2 reasoning challenge. arXiv preprint arXiv:1803.05457, 2018. Tim Dettmers and Luke Zettlemoyer. Sparse networks from scratch: Faster training without losing performance. arXiv preprint arXiv:1907.04840, 2019. Tim Dettmers and Luke Zettlemoyer. The case for 4-bit precision: k-bit inference scaling laws. In International Conference on Machine Learning (ICML), pp. 7750–7774. PMLR, 2023. Tim Dettmers, Mike Lewis, Younes Belkada, and Luke Zettlemoyer. Llm. int8 (): 8-bit matrix multiplication for transformers at scale. Advances in Neural Information Processing Systems (NeurIPs), 2022. Tim Dettmers, Ruslan Svirschevski, Vage Egiazarian, Denis Kuznedelev, Elias Frantar, Saleh Ashk- boos, Alexander Borzunov, Torsten Hoefler, and Dan Alistarh. Spqr: A sparse-quantized repre- sentation for near-lossless llm weight compression. arXiv preprint arXiv:2306.03078, 2023. Caiwen Ding, Siyu Liao, Yanzhi Wang, Zhe Li, Ning Liu, Youwei Zhuo, Chao Wang, Xuehai Qian, Yu Bai, Geng Yuan, et al. Circnn: accelerating and compressing deep neural networks using In Proceedings of the 50th Annual IEEE/ACM International block-circulant weight matrices. Symposium on Microarchitecture, pp. 395–408, 2017. Utku Evci, Trevor Gale, Jacob Menick, Pablo Samuel Castro, and Erich Elsen. Rigging the lottery: Making all tickets winners. In International Conference on Machine Learning (ICML), pp. 2943– 2952, 2020. Jonathan Frankle and Michael Carbin. The lottery ticket hypothesis: Finding sparse, trainable neural networks. In International Conference on Learning Representations (ICLR), 2019. Elias Frantar and Dan Alistarh. Massive language models can be accurately pruned in one-shot. In International Conference on Machine Learning (ICML), 2023. 10 Published as a conference paper at ICLR 2024 Elias Frantar, Saleh Ashkboos, Torsten Hoefler, and Dan Alistarh. Gptq: Accurate post-training In International Conference on Learning compression for generative pretrained transformers. Representations (ICLR), 2022. Trevor Gale, Erich Elsen, and Sara Hooker. The state of sparsity in deep neural networks. 
arXiv preprint arXiv:1902.09574, 2019. Trevor Gale, Matei Zaharia, Cliff Young, and Erich Elsen. Sparse gpu kernels for deep learning. In International Conference for High Performance Computing, Networking, Storage and Analysis, pp. 1–14, 2020. Leo Gao, Jonathan Tow, Stella Biderman, Sid Black, Anthony DiPofi, Charles Foster, Laurence Golding, Jeffrey Hsu, Kyle McDonell, Niklas Muennighoff, et al. A framework for few-shot language model evaluation. Version v0. 0.1. Sept, 2021. Scott Gray, Alec Radford, and Diederik P Kingma. Gpu kernels for block-sparse weights. arXiv preprint arXiv:1711.09224, 3:2, 2017. Song Han, Jeff Pool, John Tran, and William Dally. Learning both weights and connections for efficient neural network. In Advances in Neural Information Processing Systems (NeurIPS), pp. 1135–1143, 2015. Babak Hassibi and David Stork. Second order derivatives for network pruning: Optimal brain surgeon. In Advances in Neural Information Processing Systems (NeurIPS), pp. 164–171, 1992. Babak Hassibi, David G Stork, and Gregory J Wolff. Optimal brain surgeon and general network pruning. In IEEE international conference on neural networks, pp. 293–299. IEEE, 1993. Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, arXiv preprint and Weizhu Chen. Lora: Low-rank adaptation of large language models. arXiv:2106.09685, 2021. Tianjin Huang, Tianlong Chen, Meng Fang, Vlado Menkovski, Jiaxu Zhao, Lu Yin, Yulong Pei, Decebal Constantin Mocanu, Zhangyang Wang, Mykola Pechenizkiy, et al. You can have bet- ter graph neural networks by not training weights at all: Finding untrained gnns tickets. arXiv preprint arXiv:2211.15335, 2022. Itay Hubara, Brian Chmiel, Moshe Island, Ron Banner, Joseph Naor, and Daniel Soudry. Accel- erated sparse neural training: A provable and efficient method to find n: m transposable masks. Advances in Neural Information Processing Systems (NeurIPs), 34:21099–21111, 2021. Ajay Jaiswal, Shiwei Liu, Tianlong Chen, and Zhangyang Wang. The emergence of essential spar- sity in large pre-trained models: The weights that matter. arXiv preprint arXiv:2306.03805, 2023. Siddhant Jayakumar, Razvan Pascanu, Jack Rae, Simon Osindero, and Erich Elsen. Top-kast: Top- k always sparse training. Advances in Neural Information Processing Systems (NeurIPs), 33: 20744–20754, 2020. Chunhui Jiang, Guiying Li, Chao Qian, and Ke Tang. Efficient dnn neuron pruning by minimizing layer-wise nonlinear reconstruction error. In IJCAI, volume 2018, pp. 2–2, 2018. Eldar Kurtic, Daniel Campos, Tuan Nguyen, Elias Frantar, Mark Kurtz, Benjamin Fineran, Michael Goin, and Dan Alistarh. The optimal bert surgeon: Scalable and accurate second-order pruning for large language models. arXiv preprint arXiv:2203.07259, 2022. Yann LeCun, John Denker, and Sara Solla. Optimal brain damage. In Advances in Neural Informa- tion Processing Systems (NeurIPS), pp. 598–605, 1989. Namhoon Lee, Thalaiyasingam Ajanthan, and Philip Torr. Snip: Single-shot network pruning based on connection sensitivity. In International Conference on Learning Representations (ICLR), 2019. Hao Li, Asim Kadav, Igor Durdanovic, Hanan Samet, and Hans Peter Graf. Pruning filters for efficient convnets. In International Conference on Learning Representations (ICLR), 2017. 11 Published as a conference paper at ICLR 2024 Ji Lin, Jiaming Tang, Haotian Tang, Shang Yang, Xingyu Dang, and Song Han. Awq: arXiv preprint Activation-aware weight quantization for llm compression and acceleration. arXiv:2306.00978, 2023. 
S Liu, DC Mocanu, ARR Matavalam, Y Pei, and M Pechenizkiy. Sparse evolutionary deep learn- ing with over one million artificial neurons on commodity hardware. arxiv. arXiv preprint arXiv:1901.09181, 2019. Shiwei Liu, Lu Yin, Decebal Constantin Mocanu, and Mykola Pechenizkiy. Do we actually need dense over-parameterization? in-time over-parameterization in sparse training. In International Conference on Machine Learning, pp. 6989–7000. PMLR, 2021. Shiwei Liu, Tianlong Chen, Zhenyu Zhang, Xuxi Chen, Tianjin Huang, Ajay Jaiswal, and Zhangyang Wang. Sparsity may cry: Let us fail (current) sparse neural networks together! arXiv preprint arXiv:2303.02141, 2023. Xinyin Ma, Gongfan Fang, and Xinchao Wang. Llm-pruner: On the structural pruning of large language models. arXiv preprint arXiv:2305.11627, 2023. Yuexiao Ma, Huixia Li, Xiawu Zheng, Feng Ling, Xuefeng Xiao, Rui Wang, Shilei Wen, Fei Chao, and Rongrong Ji. Affinequant: Affine transformation quantization for large language models. In International Conference on Learning Representations (ICLR), 2024. Arun Mallya, Dillon Davis, and Svetlana Lazebnik. Piggyback: Adapting a single network to mul- tiple tasks by learning to mask weights. In Proceedings of the European conference on computer vision (ECCV), pp. 67–82, 2018. Stephen Merity, Caiming Xiong, James Bradbury, and Richard Socher. Pointer sentinel mixture models. arXiv preprint arXiv:1609.07843, 2016. Todor Mihaylov, Peter Clark, Tushar Khot, and Ashish Sabharwal. Can a suit of armor conduct electricity? a new dataset for open book question answering. arXiv preprint arXiv:1809.02789, 2018. Decebal Constantin Mocanu, Elena Mocanu, Peter Stone, Phuong H Nguyen, Madeleine Gibescu, and Antonio Liotta. Scalable training of artificial neural networks with adaptive sparse connec- tivity inspired by network science. Nature Communications, 9:1–12, 2018. Pavlo Molchanov, Stephen Tyree, Tero Karras, Timo Aila, and Jan Kautz. Pruning convolutional neural networks for resource efficient inference. In International Conference on Learning Repre- sentations (ICLR), 2017. Hesham Mostafa and Xin Wang. Parameter efficient training of deep convolutional neural net- works by dynamic sparse reparameterization. In International Conference on Machine Learning (ICML), pp. 4646–4655, 2019. Nasrin Mostafazadeh, Michael Roth, Annie Louis, Nathanael Chambers, and James Allen. Lsdsem 2017 shared task: The story cloze test. In Proceedings of the 2nd Workshop on Linking Models of Lexical, Sentential and Discourse-level Semantics, pp. 46–51, 2017. Nvidia. Nvidia a100 tensor core gpu architecture, 2020. www.nvidia.com/content/dam/enzz/Solutions/Data-Center/ nvidia-ampere-architecture-whitepaper.pdf. https:// Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, et al. Pytorch: An imperative style, high-performance deep learning library. In Advances in Neural Information Processing Systems (NeurIPS), pp. 8026–8037, 2019. Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research, 21(1):5485–5551, 2020. 12 Published as a conference paper at ICLR 2024 Vivek Ramanujan, Mitchell Wortsman, Aniruddha Kembhavi, Ali Farhadi, and Mohammad Raste- gari. What’s hidden in a randomly weighted neural network? 
In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 11893–11902, 2020. Rylan Schaeffer, Brando Miranda, and Sanmi Koyejo. Are emergent abilities of large language models a mirage? arXiv preprint arXiv:2304.15004, 2023. Wenqi Shao, Mengzhao Chen, Zhaoyang Zhang, Peng Xu, Lirui Zhao, Zhiqian Li, Kaipeng Zhang, Peng Gao, Yu Qiao, and Ping Luo. Omniquant: Omnidirectionally calibrated quantization for large language models. In International Conference on Learning Representations (ICLR), 2024. Mingjie Sun, Zhuang Liu, Anna Bair, and J Zico Kolter. A simple and effective pruning approach for large language models. arXiv preprint arXiv:2306.11695, 2023. Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timoth´ee Lacroix, Baptiste Rozi`ere, Naman Goyal, Eric Hambro, Faisal Azhar, et al. Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971, 2023a. Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Niko- lay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. Llama 2: Open founda- tion and fine-tuned chat models. arXiv preprint arXiv:2307.09288, 2023b. Elena Voita, David Talbot, Fedor Moiseev, Rico Sennrich, and Ivan Titov. Analyzing multi-head self-attention: Specialized heads do the heavy lifting, the rest can be pruned. arXiv preprint arXiv:1905.09418, 2019. Chaoqi Wang, Guodong Zhang, and Roger Grosse. Picking winning tickets before training by preserving gradient flow. In International Conference on Learning Representations (ICLR), 2020. Jason Wei, Yi Tay, Rishi Bommasani, Colin Raffel, Barret Zoph, Sebastian Borgeaud, Dani Yo- gatama, Maarten Bosma, Denny Zhou, Donald Metzler, et al. Emergent abilities of large language models. Transactions on Machine Learning Research, 2022a. Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems (NeurIPs), 35:24824–24837, 2022b. Wei Wen, Chunpeng Wu, Yandan Wang, Yiran Chen, and Hai Li. Learning structured sparsity in deep neural networks. Advances in Neural Information Processing Systems (NeurIPs), 29, 2016. Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, R´emi Louf, Morgan Funtowicz, et al. Huggingface’s transformers: State-of-the-art natural language processing. arXiv preprint arXiv:1910.03771, 2019. Mitchell Wortsman, Vivek Ramanujan, Rosanne Liu, Aniruddha Kembhavi, Mohammad Rastegari, Jason Yosinski, and Ali Farhadi. Supermasks in superposition. Advances in Neural Information Processing Systems (NeurIPs), 33:15173–15184, 2020. Guangxuan Xiao, Ji Lin, Mickael Seznec, Hao Wu, Julien Demouth, and Song Han. Smoothquant: In International Accurate and efficient post-training quantization for large language models. Conference on Machine Learning (ICML), pp. 38087–38099. PMLR, 2023. Zhewei Yao, Reza Yazdani Aminabadi, Minjia Zhang, Xiaoxia Wu, Conglong Li, and Yuxiong He. Zeroquant: Efficient and affordable post-training quantization for large-scale transformers. Advances in Neural Information Processing Systems (NeurIPs), 35:27168–27183, 2022. Lu Yin, Shiwei Liu, Meng Fang, Tianjin Huang, Vlado Menkovski, and Mykola Pechenizkiy. Lot- tery pools: Winning more by interpolating tickets without increasing training or inference cost. 
In Proceedings of the AAAI Conference on Artificial Intelligence (AAAI), volume 37, pp. 10945– 10953, 2023. Geng Yuan, Xiaolong Ma, Wei Niu, Zhengang Li, Zhenglun Kong, Ning Liu, Yifan Gong, Zheng Zhan, Chaoyang He, Qing Jin, et al. Mest: Accurate and fast memory-economic sparse training framework on the edge. Advances in Neural Information Processing Systems (NeurIPs), 34: 20838–20850, 2021. 13 Published as a conference paper at ICLR 2024 Rowan Zellers, Ari Holtzman, Yonatan Bisk, Ali Farhadi, and Yejin Choi. Hellaswag: Can a ma- chine really finish your sentence? arXiv preprint arXiv:1905.07830, 2019. Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen, Christo- pher Dewan, Mona Diab, Xian Li, Xi Victoria Lin, et al. Opt: Open pre-trained transformer language models. arXiv preprint arXiv:2205.01068, 2022a. Yuxin Zhang, Mingbao Lin, Zhihang Lin, Yiting Luo, Ke Li, Fei Chao, Yongjian Wu, and Rongrong Ji. Learning best combination for efficient n: M sparsity. In Advances in Neural Information Processing Systems (NeurIPS), 2022b. Yuxin Zhang, Mingbao Lin, Fei Chao, Yan Wang, Ke Li, Yunhang Shen, Yongjian Wu, and Ron- grong Ji. Lottery jackpots exist in pre-trained models. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2023. Aojun Zhou, Yukun Ma, Junnan Zhu, Jianbo Liu, Zhijie Zhang, Kun Yuan, Wenxiu Sun, and Hong- sheng Li. Learning n: M fine-grained structured sparse neural networks from scratch. In Interna- tional Conference on Learning Representations (ICLR), 2021. Hattie Zhou, Janice Lan, Rosanne Liu, and Jason Yosinski. Deconstructing lottery tickets: zeros, signs, and the supermask. In Advances in Neural Information Processing Systems (NeurIPS), pp. 3597–3607, 2019. 14 Published as a conference paper at ICLR 2024 A APPENDIX A.1 COMPLEMENTARY EXPERIMENTAL RESULTS In this section, we supplement the main paper with more experimental outcomes, including a wider spectrum of results at varying sparsity rates, robustness analysis under random seeds. Varying Sparsity Rates. This part delivers extended results of DS○T when fine-tuning sparse LLMs at alternating sparsity rates as a supplement to Section ??. The performance of various LLMs with sparsity rates oscillating between 10% and 90%, are presented in Table 8. Beneficial enhance- ments are consistently observable at all examined sparsity levels when employing DS○T, with the significance of improvements escalating concurrently with the increase in sparsity. It is notewor- thy that the acceleration resultant from unstructured sparsity comes into play predominantly at high sparsity levels (exceeding 60%) Gale et al. (2020), thereby accentuating the indispensable efficacy of DS○T. Table 8: WikiText-2 perplexity performance for fine-tuning LLMs at varying sparsity rates. Model Method 10% 20% 30% 40% 50% 60% 70% 80% 90% LLaMA-V1-7B Wanda LLaMA-V1-7B w. DS○T LLaMA-V1-13B Wanda LLaMA-V1-13B w. DS○T LLaMA-V2-7B Wanda LLaMA-V2-7B w. DS○T LLaMA-V2-13B Wanda LLaMA-V2-13B w. DS○T 5.70 5.68 5.10 5.09 5.49 5.48 4.91 4.89 5.82 5.73 5.13 5.11 5.59 5.49 4.99 4.91 6.00 5.89 5.25 5.05 5.74 5.65 5.13 5.01 6.39 6.28 5.51 5.29 6.06 5.85 5.37 5.25 7.26 7.12 6.15 6.08 6.92 6.81 7.88 7.57 10.69 10.22 8.75 8.46 10.79 10.59 8.30 8.13 OPT-13B OPT-13B Wanda w. 
DS○T 10.13 10.12 10.09 10.08 10.12 10.11 10.63 10.41 11.92 11.28 15.88 14.01 88.84 62.05 55.89 43.31 75.01 53.12 46.05 33.19 55.07 45.10 4.80e3 4.12e3 3.66e3 1.12e3 2.36e3 1.12e3 1.06e3 2.59e2 13722 8.43e3 6.41e5 8.43e4 1.54e6 1.95e5 7.87e3 2.35e3 1.22e5 3.49e4 7.61e5 2.33e5 Robustness Analysis. We further perform a robustness analysis of DS○T. Given that the results in Table 1 is evaluated under a fixed calibration set, Table 9 show the results with different calibration sets under 5 random seeds. The variance across random seeds is very low, suggesting the stability of DS○T, corroborating its efficacy as a tool in fine-tuning sparse LLMs. Table 9: WikiText validation perplexity for pruning LLaMA-V1 and LLaMA-V2 models at 60% sparsity. We report the mean and standard deviation under 5 random seeds. Method Dense SparseGPT w. DS○T LLaMA-V1 LLaMA-V2 7B 13B 7B 13B 5.68 (±0.00) 5.09 (±0.00) 5.47 (±0.00) 4.88 (±0.00) 10.42(±0.04) 9.64(±0.03) 8.43(±0.02) 7.73(±0.02) 10.14 (±0.03) 9.68(±0.03) 7.88(±0.01) 7.57(±0.01) Wanda w. DS○T(±0.01) 10.69(±0.01) 10.22(±0.01) 8.75(±0.01) 8.46(±0.01) 10.79(±0.01) 10.59(±0.01) 8.40(±0.01) 8.18(±0.01) 15
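To make the update schedule analyzed in Section 4.4 concrete, below is a minimal sketch (written for this text, not taken from the authors' released code) of a DS○T-style prune-and-grow cycle for a single weight row. The maximum cycle count C and the stop threshold ϵ play the roles discussed above; the growing and pruning criteria are deliberately simplified to the mean reconstruction error over the calibration inputs, whereas the paper's criteria additionally use its variance and sign.

```python
import torch

@torch.no_grad()
def dsnot_style_row_update(w, mask, X, max_cycles=50, eps=0.1):
    """Illustrative prune-and-grow loop for one weight row.

    w    : (d,)  dense weights of the row
    mask : (d,)  float 0/1 sparsity mask, updated at constant sparsity
    X    : (n, d) calibration activations feeding this row
    """
    contrib = X * w                      # (n, d) contribution of each weight per sample
    delta = contrib.mean(dim=0)          # (d,) expected contribution of each weight
    for _ in range(max_cycles):          # maximum cycle C
        # mean reconstruction error of the sparse row output w.r.t. the dense one
        err = (contrib * mask).sum(-1).mean() - contrib.sum(-1).mean()
        if err.abs() < eps:              # stop threshold epsilon
            break
        # growing: re-activate the pruned weight that pulls the error closest to zero
        grow_scores = (err + delta).abs()
        grow_scores[mask.bool()] = float("inf")
        grow_idx = grow_scores.argmin()
        # pruning: drop the active weight whose removal keeps the error closest to zero
        prune_scores = (err - delta).abs()
        prune_scores[~mask.bool()] = float("inf")
        prune_idx = prune_scores.argmin()
        mask[grow_idx], mask[prune_idx] = 1.0, 0.0   # sparsity level stays unchanged
    return mask
```

Because one weight is grown for every weight that is pruned, the overall sparsity of the layer is preserved, which matches the training-free setting described in the conclusion.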
synthetic_cpt
3
kNN-Adapter_Efficient_Domain_Adaptation_for_Black-Box_Language_Models.pdf
kNN-BOX: A Unified Framework for Nearest Neighbor Generation Wenhao Zhu∗, Qianfeng Zhao∗, Yunzhe Lv∗, Shujian Huang, Siheng Zhao, Sizhe Liu, Jiajun Chen National Key Laboratory for Novel Software Technology, Nanjing University, China {zhuwh, qianfeng, lvyz, zhaosh, liusz}@smail.nju.edu.cn, {huangsj, chenjj}@nju.edu.cn 3 2 0 2 b e F 7 2 ] L C . s c [ 1 v 4 7 5 3 1 . 2 0 3 2 : v i X r a Abstract Augmenting the base neural model with a token-level symbolic datastore is a novel gen- eration paradigm and has achieved promis- ing results in machine translation (MT). In this paper, we introduce a unified frame- work kNN-BOX, which enables quick de- velopment and interactive analysis for this novel paradigm. kNN-BOX decomposes the datastore-augmentation approach into three modules: datastore, retriever and combiner, thus putting diverse kNN generation meth- ods into a unified way. Currently, kNN-BOX has provided implementation of seven popular kNN-MT variants, covering research from per- formance enhancement to efficiency optimiza- tion. It is easy for users to reproduce these existing work or customize their own models. Besides, users can interact with their kNN gen- eration systems with kNN-BOX to better un- derstand the underlying inference process in In experiment section, we a visualized way. apply kNN-BOX for machine translation and three other seq2seq generation tasks, namely, text simplification, paraphrase generation and question generation. Experiment results show that augmenting the base neural model with kNN-BOX leads to a large performance im- provement in all these tasks. The code and doc- ument of kNN-BOX is available at https: //github.com/NJUNLP/knn-box. 1 Introduction Equipping the base neural model with a symbolic datastore is a novel paradigm for enhancing genera- tion quality. Khandelwal et al. apply this paradigm in machine translation, known as kNN-MT, and achieves promising results, especially in MT do- main adaptation and multilingual MT. Afterwards, the following work keep optimizing this approach, making it a more mature methodology, e.g., dynam- ically deciding the usage of retrieval results (Zheng *Equal Contribution. kNN-BOX decomposes the datastore- Figure 1: augmentation approach into three modules, namely, datastore, retriever and combiner, thus putting diverse kNN generation methods into a unified way. et al., 2021), building a light and explainable datas- tore (Zhu et al., 2022). However, we notice that these kNN generation methods are implemented with diverse codebases, e.g., Fairseq1, Transformers2 and JoeyNMT 3, which hinders fair comparison between these meth- ods and potential fusion of latest research advances. Interpretability is another remaining issue in kNN generation research, as the community can still not well understand why kNN generation works. More in-depth analysis needs to be conducted for this novel paradigm. In this paper, we introduce a unified framework kNN-BOX for nearest neighbor generation, which supports quick development and interactive anal- ysis. Our framework decomposes the datastore- augmentation approach into three modules: datas- tore, retriever and combiner, thus putting diverse 1https://github.com/facebookresearch/ fairseq 2https://github.com/huggingface/ transformers 3https://github.com/joeynmt/joeynmt kNN generation methods into a unified way (Fig. 1). 
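To make this decomposition concrete, the sketch below shows one way the three interfaces could look in PyTorch-style Python. The class and method names are illustrative only and do not reproduce the actual kNN-BOX API; they merely mirror the division of responsibilities between datastore, retriever and combiner.

```python
import torch

class Datastore:
    """Holds (hidden-state key, target-token value) pairs extracted offline."""
    def __init__(self):
        self.keys, self.values = [], []

    def add(self, keys: torch.Tensor, values: torch.Tensor) -> None:
        self.keys.append(keys)        # (n, hidden_dim) decoder states
        self.values.append(values)    # (n,) target-token ids

    def search(self, queries: torch.Tensor, k: int):
        keys = torch.cat(self.keys)   # brute-force search; real systems use an ANN index
        values = torch.cat(self.values)
        dists = torch.cdist(queries, keys) ** 2              # squared L2 distances
        dists, idx = dists.topk(k, largest=False, dim=-1)
        return dists, values[idx]

class Retriever:
    """Queries the datastore with the current decoder hidden states."""
    def __init__(self, datastore: Datastore, k: int = 8):
        self.datastore, self.k = datastore, k

    def retrieve(self, hidden: torch.Tensor):
        return self.datastore.search(hidden, self.k)

class Combiner:
    """Turns retrieval results into p_knn and interpolates it with p_nmt."""
    def combine(self, dists, values, p_nmt: torch.Tensor) -> torch.Tensor:
        raise NotImplementedError     # e.g. the vanilla rule of Eqs. (2)-(3) in Section 2
```

A concrete kNN variant typically swaps in its own implementation of exactly one of these pieces while reusing the other two.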
Up till now, kNN-BOX has released implementation of seven popular kNN-MT models, covering research from performance enhancement (Khandelwal et al., 2021; Jiang et al., 2021; Zheng et al., 2021; Jiang et al., 2022) to efficiency optimization (Martins et al., 2022; Wang et al., 2022; Zhu et al., 2022), which can help users to quickly reproduce existing work and make fair comparisons between them. Moreover, users can easily fuse advanced models with kNN-BOX, for example, jointly using a better combiner and a lighter datastore, to achieve the best of two worlds.

Another useful feature of kNN-BOX is supporting visualized interactive analysis. Via our provided web page, users can interact with their local kNN model and observe its inference process, e.g. the content and distribution of its retrieval results (Fig. 3). We hope kNN-BOX can help the community to better understand the interpretability problems in kNN generation, e.g., why it works.

We conduct experiments on benchmark machine translation datasets. Experiment results show that kNN-BOX is a reliable platform for model reproduction and development. In addition, we take a step further and apply kNN-BOX to three other seq2seq tasks, i.e., text simplification, paraphrase generation and question generation. Experiment results show that augmenting the base neural model with kNN-BOX is also beneficial in these tasks, which shows the great potential of nearest neighbor generation and the wide usage of our kNN-BOX toolkit.

2 Background: kNN-MT

Before introducing kNN-BOX, we recap the kNN-MT approach in this section. Generally, the kNN-MT framework aims at memorizing the translation knowledge in a parallel corpus C into a datastore D and using it to augment the NMT model M during inference.

Memorizing Knowledge into Datastore. To extract translation knowledge, a translation pair (X, Y) is fed into M for teacher-forcing decoding. At time step t, the continuous representation of the translation context (X, Y_{<t}), i.e. the hidden state h_t from the last decoder layer, is taken as the key:

h_t = M(X, Y_{<t})

and the target token y_t is taken as the value. Each key-value pair explicitly memorizes the translation knowledge: generating the value token at the decoder hidden state key. With a single forward pass over the entire corpus, the full datastore D can be constructed:

D = \{ (h_t, y_t) \mid \forall y_t \in Y, (X, Y) \in C \}   (1)

Generating with Memorized Knowledge. The constructed datastore is then combined with the base NMT model as an augmentation memory. During inference, the NMT model retrieves related knowledge from the datastore to adjust its own translation prediction.

Specifically, the NMT model uses the contextualized representation of the test translation context (X, Y_{<t}) to query the datastore for nearest neighbor representations and the corresponding target tokens N_k = \{ (h_j, y_j) \}_{j=1}^{k}. The retrieved entries are then converted to a distribution over the vocabulary:

p_{knn}(y \mid X, Y_{<t}) \propto \sum_{(h_j, y_j) \in N_k} \mathbb{1}(y = y_j) \cdot s(h_t, h_j)   (2)

where s measures the similarity between h_t and h_j:

s(h_t, h_j) = \exp\left[ \frac{-d(h_t, h_j)}{T} \right]

Here, d denotes the squared L2 distance and T is the temperature. In the end, the output distributions of the NMT model and the symbolic datastore are interpolated with the weight λ:

p(y \mid X, Y_{<t}) = \lambda \cdot p_{knn}(y \mid X, Y_{<t}) + (1 - \lambda) \cdot p_{nmt}(y \mid X, Y_{<t})   (3)

Recent Advances in kNN-MT. To make kNN-MT more effective, efficient and explainable, various methods have been devised. Zheng et al. (2021) and Jiang et al.
(2022) propose to dynamically de- cide the usage of retrieval results to exclude poten- tial noise in nearest neighbors. Jiang et al. (2021) explore the setting of multi-domain adaptation and remedy the catastrophic forgetting problem. In- spired by He et al. (2021), Martins et al. (2022) introduce three ways to improve the efficiency of kNN-MT, i.e. dimension reduction, datastore prun- ing and adaptive retrieval. Later, Wang et al. (2022) propose to reduce dimension and prune datastore with a learnable network. Recently, Zhu et al. (2022) explore the interpretability issue in kNN- MT and builds a light and more explainable datas- tore according to the capability of the NMT model. 3 Unified Framework: kNN-BOX This sections describes how we design and im- plement kNN-BOX, and introduce how users run kNN-BOX for developing kNN generation models and interacting with the deployed model visually. 3.1 Design and Implementation We develop kNN-BOX based on the most widely- used generation framework Fairseq, thus making it easy to apply kNN-BOX for other generation tasks. The overall workflow of kNN-BOX is illustrated in Figure 2. For better compatibility and extensi- bility, we decompose the datastore-augmentation Datastore, approach into three modules: Retriever and Combiner, where each mod- ule has its own function: • Datastore: save generation knowledge as key-values pairs (Eq. 1). • Retriever: retrieve nearest neighbors from the datastore during inference. • Combiner: convert retrieval results to a dis- tribution (Eq. 2) and interpolate the output distribution of the NMT model and symbolic datastore (Eq. 3). This design enables diverse kNN models to be implemented in a unified way. For a specific kNN variant, it usually makes a modification on one of the three modules, compared to vanilla kNN generation model. Therefore, users can customize the corresponding module and quickly develop the desired kNN model. Supporting visual interactive analysis is another useful feature of kNN-MT. By saving intermediate computation results, we enable kNN-BOX to visu- alize the inference process. We hope this feature will help users to better understand their own kNN- MT model’s strengths and weaknesses, instead of using it as a black box. 3.2 Usage Reproducing Existing Work Until now, kNN- BOX has released implementation of seven popular kNN-MT models 4, covering research from per- formance enhancement to efficiency optimization. 4They are vanilla kNN-MT (Khandelwal et al., 2021), Adaptive kNN-MT (Zheng et al., 2021), Smoothed kNN- MT (Jiang et al., 2021), Robust kNN-MT (Jiang et al., 2022), PCK kNN-MT (Wang et al., 2022), Efficient kNN-MT (Mar- tins et al., 2022), PLAC kNN-MT (Zhu et al., 2022). Figure 2: Overall workflow of augmenting the base neural model with kNN-BOX. Besides, kNN-BOX has also provided the corre- sponding shell scripts to run them, enabling users to quickly reproduce existing work. Detailed guid- ance can be found in README.md. Developing New Models kNN-BOX is designed not only for reproduce existing work, but also for developing new models on new tasks. For each module, users can pick one of its implementation from kNN-BOX or customize their own version, and combine three modules together to build a new kNN model. In this process, only few lines of codes needs to be added, which will save users a lot of time. 
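To make the Combiner's job explicit, here is a minimal PyTorch sketch of the vanilla combination rule from Eqs. (2)-(3). The softmax normalizes the similarity scores so that p_knn is a proper distribution, and the temperature T and weight λ are illustrative defaults rather than values taken from the kNN-BOX code.

```python
import torch

def vanilla_combine(p_nmt, distances, values, T=10.0, lam=0.5):
    """p_nmt: (batch, vocab); distances, values: (batch, k) from the retriever."""
    weights = torch.softmax(-distances / T, dim=-1)   # normalized s(h_t, h_j)
    p_knn = torch.zeros_like(p_nmt)
    p_knn.scatter_add_(-1, values, weights)           # Eq. (2): sum weights per token
    return lam * p_knn + (1.0 - lam) * p_nmt          # Eq. (3): interpolation
```

Variants such as the adaptive or robust combiner essentially replace the fixed λ (and the blanket use of all k neighbors) with values predicted from the retrieval results.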
More importantly, this implementa- tion fashion enables users to easily build a fused model, e.g., combining the most explainable data- store (PLACDatstore) with the strongest com- biner (RobustCombiner). To perform gener- ation tasks other than machine translation, users only need to switch the training corpus to build a task-specific datastore. Visualizing Translation Process By running our provided script to launch a web page (shown in Fig. 3), users can interact with their kNN model visually. In the upper input window, user can type in text and tune generation hyperparameters in the upper-left panel. Then the generation results will be displayed. Taking kNN-MT as an example, af- ter clicking a word in the translation, users can see the translation probability given by both NMT model and kNN-MT model. Moreover, detailed information of the retrieved datastore entries will be displayed in the bottom panel. By selecting on a certain nearest neighbor point, users can see the corresponding value token, translation context and Figure 3: A screenshot of interactive interface provided by kNN-BOX, where users can interact with their own kNN model and analyze its inference process visually. The upper panel allows users to type in text and tune hyperparameters. The middle panel displays the generation result and prediction distribution of each decoding step. The bottom panel shows the relative distribution of query and retrieval results, and more detailed information of each nearest neighbor. query-key distance. Overall, the visualization page can help user to interact with their kNN generation model and explore its inner working process. 4 Experiments To evaluate the effectiveness of kNN-BOX, we con- duct experiments on machine translation and three other seq2seq tasks. We introduce experimental set- tings in Section 4.1 and reports experiment results in Section 4.2. 4.1 Experimental Settings Dataset For machine translation, we adopt four German-English OPUS datasets 5 (Medical, Law, IT and Koran) (Tiedemann, 2012), which are used in almost all kNN-MT work. We use TED dataset 6 (Qi et al., 2018) to evaluate kNN-BOX on multi- lingual machine translation 7. Moreover, we con- duct experiments on two text simplification dataset: Newsela-Auto 8 and Wiki-Auto 9 (Jiang et al., 2020), a paraphrase generation dataset: QQP 10, and a question generation dataset: Quasar-T 11 (Dhingra et al., 2017) to demonstrate effectiveness of kNN-BOX on these generation tasks. 
6https://github.com/neulab/ word-embeddings-for-nmt 7We evaluate English-centric translation performance on ten languages: Cs, Da, De, Es, Fr, It, Nl, Pl, Pt, Sv 8https://newsela.com/data/ 9https://github.com/chaojiang06/ wiki-auto/tree/master/wiki-auto/ACL2020/ 10https://www.kaggle.com/c/ quora-question-pairs 5https://opus.nlpl.eu/ 11https://github.com/bdhingra/quasar Model Reference Law Medical IT Koran Scale↓ BLEU↑ Scale↓ BLEU↑ Scale↓ BLEU↑ Scale↓ BLEU↑ Base Neural Model Ng et al., 2019 Vanilla kNN-MT Adaptive kNN-MT Smoothed kNN-MT Robust kNN-MT PCK kNN-MT Efficient kNN-MT PLAC kNN-MT Khandelwal et al., 2021 Zheng et al., 2021 Jiang et al., 2021 Jiang et al., 2022 Wang et al., 2022 Martins et al., 2022 Zhu et al., 2022 - 100% 100% 100% 100% 90% 57% 55% 45.5 61.3 62.9 63.3 63.6 62.8 59.9 62.8 100% 100% 100% 100% 100% 90% 58% 55% 40.0 54.1 56.1 56.8 57.1 56.4 52.3 56.2 - 100% 100% 100% 100% 90% 63% 60% 38.4 45.6 47.2 47.7 48.6 47.4 44.9 47.0 - 100% 100% 100% 100% 90% 66% 75% 16.3 20.4 20.3 19.9 20.5 19.4 19.9 19.9 Table 1: Some works implemented by kNN-BOX. Scale refers to the relative datastore size compared to a full datastore that covers all target language token occurrences in the parallel corpus. Smaller scale means a lighter datastore and higher BLEU means better translation quality. Directions Model Avg. Cs Da De Es Fr It Nl Pl Pt Sv En → X X → En M2M-100 29.1 + kNN-BOX 32.6 M2M-100 33.4 + kNN-BOX 37.7 20.7 22.3 27.5 31.3 36.2 40.2 40.0 44.5 26.7 29.5 31.8 37.1 35.1 39.2 36.6 42.0 33.7 38.7 35.1 40.4 29.8 33.5 33.4 38.4 27.7 31.9 31.9 36.2 15.6 17.9 21.1 24.9 31.9 37.1 38.9 41.8 33.7 36.0 37.3 41.0 Table 2: Effect of augmenting M2M100 with kNN-BOX on multilingual TED dataset. For brevity, we only show the effect of applying Robust kNN with kNN-BOX. “En → X” and “X → En” denotes translating English into other languages and translating other languages into English respectively. Base Neural Model On OPUS dataset, we fol- low previous kNN-MT work and use the winner model of WMT’19 De-En news translation task (Ng et al., 2019) as the base model. On multilingual TED dataset, we use M2M100 (Fan et al., 2021) as the base model, which is a many-to-many mul- tilingual translation model. On the rest of dataset, Transformer (Vaswani et al., 2017) is used as the base model. Metric We use BLEU score calculated by sacre- bleu12 to evaluate the generation quality for all tasks except text simplification, where we use SARI score (Xu et al., 2016) calculated by easse13 to evaluate simplification quality. 4.2 Main Results kNN-BOX can help user to quickly augment the base NMT model with kNN methods. By running our provided shell scripts, users can quickly reproduce existing kNN-MT models. Table 1 show the translation performance of these mod- els on OPUS dataset. We see that augmenting the base neural machine translation model with a data- store brings significant performance enhancement. 12https://github.com/mjpost/sacrebleu 13https://github.com/feralvam/easse Among these methods, Robust kNN-MT achieves the highest BLEU scores, and PLAC kNN-MT builds a lightest datastore while maintaining trans- lation performance. Table 2 reports experiment results on TED dataset. We can see that applying kNN-BOX brings large performance improvement on all translation directions. Besides, we also carefully compare the repro- duced results with the results produced by the orig- inal implementation. 
We find that two groups of results are well-aligned (shown in Appendix A), demonstrating that kNN-BOX is reliable platform for reproducing kNN-MT models. kNN-BOX shows great potential in other seq2seq generation tasks as well Apart from machine translation task, we further evaluate kNN- BOX on three other seq2seq tasks: text simplifi- cation, paraphrase generation and question gen- eration. Experiment results are shown in Table 3. Augmenting the base neural model with kNN-BOX brings performance enhancement in all three tasks. The performance improvement on three tasks is up to 2.4 SARI, 1.1 BLEU and 6.1 BLEU respec- tively, which shows the great potential of studying datastore-augmentation in generation tasks and the wide usage of our toolkit. Task Dataset Metric Base Model kNN-BOX Text Simplification Paraphrase Generation Question Generation Wiki-Auto Newsela-Auto QQP Quasar-T SARI SARI BLEU BLEU 38.6 35.8 28.4 9.6 39.4 38.2 29.5 15.7 Table 3: The performance of applying kNN-BOX on three other seq2seq tasks: text simplification, paraphrase generation and question generation. Here, we apply the vanilla kNN generation method for augmentation. Bold text indicates the higher score across two models. Augmenting base neural models in these tasks with kNN-BOX also bring large performance improvement. Datastore Retriever Combiner Scale↓ BLEU↑ BasicDatastore PCKDatastore BasicRetriever BasicRetriever EfficientDatastore BasicRetriever EfficientDatastore BasicRetriever BasicRetriever BasicRetriever PLACDatastore PLACDatastore BasicCombiner AdaptiveCombiner AdaptiveCombiner RobustCombiner AdaptiveCombiner RobustCombiner 100% 90% 57% 57% 55% 55% 61.3 62.8 61.5 61.8 62.8 63.7 Table 4: Effect of fusing advanced datastore and combiner. Smaller scale means a lighter datastore and higher BLEU means better translation quality. kNN-BOX accelerates the fusion of lasted re- search advances A potential drawback of imple- menting kNN-MT with diverse codebases is hin- dering the fusion of lasted research advances. With kNN-BOX, research advances on Datastore, Combiner and Retriever can be fused con- veniently. Table 4 shows the performance of par- tial mixed models on OPUS-Law dataset, where we jointly use different datastore and combiner. We can see that using PLACDatastore and RobustCombiner together achieve strong trans- lation performance with a much smaller datastore. 5 Conclusion and Future Work This paper introduces kNN-BOX, an open-sourced toolkit for nearest neighbor generation. kNN- BOX decomposes datastore-augmented approach into three decoupled modules: Datastore, Retriever and Combiner, thus putting di- verse kNN generation methods into a unified way. kNN-BOX provides implementation of several kNN-MT models, covering research from perfor- mance enhancement and efficiency optimization, which can help users to quickly reproduce existing work. kNN-BOX also enjoys great extensibility, which can be used to develop new models and be applied for new generation tasks. More importantly, kNN-BOX supports users to interact with their de- ployed model in a visualized way, which enables in-depth analysis on the inner working process of the model. In experiment section, we show that kNN-BOX can not only be applied for enhancing neural machine translation model, but also for en- hancing neural generation model in other seq2seq tasks, like text simplification, paraphrase genera- tion and question generation. 
In the future, we will keep update this toolkit to provide implementation of more retrieve-and- generate methods and optimize the framework to make it more user-friendly, and explore the possi- bility to apply kNN-BOX for long-range sequence generation, e.g., document-level machine transla- tion. References Bhuwan Dhingra, Kathryn Mazaitis, and William W Cohen. 2017. Quasar: Datasets for question an- arXiv preprint swering by search and reading. arXiv:1707.03904. Angela Fan, Shruti Bhosale, Holger Schwenk, Zhiyi Ma, Ahmed El-Kishky, Siddharth Goyal, Mandeep Baines, Onur Celebi, Guillaume Wenzek, Vishrav Chaudhary, et al. 2021. Beyond english-centric mul- tilingual machine translation. The Journal of Ma- chine Learning Research, 22(1):4839–4886. Junxian He, Graham Neubig, and Taylor Berg- Kirkpatrick. 2021. Efficient nearest neighbor lan- guage models. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP). Chao Jiang, Mounica Maddela, Wuwei Lan, Yang Zhong, and Wei Xu. 2020. Neural CRF model for In Pro- sentence alignment in text simplification. ceedings of the 58th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 7943– 7960, Online. Association for Computational Lin- guistics. Hui Jiang, Ziyao Lu, Fandong Meng, Chulun Zhou, Jie Zhou, Degen Huang, and Jinsong Su. 2022. Towards robust k-nearest-neighbor machine transla- In Proceedings of the 2022 Conference on tion. Empirical Methods in Natural Language Processing, pages 5468–5477, Abu Dhabi, United Arab Emi- rates. Association for Computational Linguistics. Qingnan Jiang, Mingxuan Wang, Jun Cao, Shanbo Cheng, Shujian Huang, and Lei Li. 2021. Learning kernel-smoothed machine translation with retrieved In Proceedings of the Conference on examples. Empirical Methods in Natural Language Processing (EMNLP). Urvashi Khandelwal, Angela Fan, Dan Jurafsky, Luke Zettlemoyer, and Mike Lewis. 2021. Nearest neigh- In International Confer- bor machine translation. ence on Learning Representations (ICLR). Pedro Martins, Zita Marinho, and Andre Martins. 2022. Efficient machine translation domain adaptation. In Proceedings of the Workshop on Semiparametric Methods in NLP: Decoupling Logic from Knowl- edge. Nathan Ng, Kyra Yee, Alexei Baevski, Myle Ott, Michael Auli, and Sergey Edunov. 2019. Facebook FAIR’s WMT19 news translation task submission. In Proceedings of the Conference on Machine Trans- lation (WMT). Ye Qi, Devendra Sachan, Matthieu Felix, Sarguna Pad- manabhan, and Graham Neubig. 2018. When and why are pre-trained word embeddings useful for neu- ral machine translation? In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers). Jörg Tiedemann. 2012. Parallel data, tools and inter- In Proceedings of the Eighth In- faces in OPUS. ternational Conference on Language Resources and Evaluation (LREC). Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Pro- cessing Systems (NeurIPS). Dexin Wang, Kai Fan, Boxing Chen, and Deyi Xiong. Efficient cluster-based k-nearest-neighbor 2022. machine translation. In Proceedings of the Annual Meeting of the Association for Computational Lin- guistics (ACL). Wei Xu, Courtney Napoles, Ellie Pavlick, Quanze Chen, and Chris Callison-Burch. 2016. 
Optimizing statistical machine translation for text simplification. Transactions of the Association for Computational Linguistics, 4:401–415. Xin Zheng, Zhirui Zhang, Junliang Guo, Shujian Huang, Boxing Chen, Weihua Luo, and Jiajun Chen. 2021. Adaptive nearest neighbor machine transla- tion. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL). Wenhao Zhu, Shujian Huang, Yunzhe Lv, Xin Zheng, and Jiajun Chen. 2022. What knowledge is needed? towards explainable memory for knn-mt domain adaptation. arXiv preprint arXiv:2211.04052. A Performance Alignment between kNN-BOX’s implementation and original implementation Table 5 compares the reproduced results with kNN- BOX and the results produced by the original im- plementation, where the same base neural model and the same dataset is used. Comparison results show that there is only a minor gap between two groups of results, demonstrating that the reliability of kNN-BOX. Model Law Medical IT Koran Base NMT14 (cid:44)→ kNN-BOX Vanilla kNN-MT 15 (cid:44)→ kNN-BOX Adaptive kNN-MT 16 (cid:44)→ kNN-BOX PCK kNN-MT 17 (cid:44)→ kNN-BOX Robust kNN-MT 18 (cid:44)→ kNN-BOX 45.5 45.5 61.3 61.3 62.9 62.9 63.1 62.8 63.8 63.6 40.0 40.0 54.1 54.1 56.6 56.1 56.5 56.4 57.0 57.1 38.4 38.4 45.6 45.6 47.6 47.2 47.9 47.4 48.7 48.6 16.3 16.3 20.4 20.4 20.6 20.3 19.7 19.4 20.8 20.5 Table 5: BLEU scores of original implementation and kNN-BOX’s implementation. “(cid:44)→ kNN-BOX’ denotes the reults reproduced using our framework. 14https://github.com/facebookresearch/ fairseq 15https://github.com/urvashik/knnmt 16https://github.com/zhengxxn/ adaptive-knn-mt 17https://github.com/tjunlp-lab/PCKMT 18https://github.com/DeepLearnXMU/ Robust-knn-mt
synthetic_cpt
2
Finding_needles_in_a_haystack_Sampling_Structurally-diverse_Training_Sets_from_Synthetic_Data_for_Compositional_Generalization.pdf
Haystack: A Panoptic Scene Graph Dataset to Evaluate Rare Predicate Classes Julian Lorenz Florian Barthel Daniel Kienzle Rainer Lienhart University of Augsburg Augsburg, Germany {julian.lorenz,florian.barthel,daniel.kienzle,rainer.lienhart}@uni-a.de 3 2 0 2 p e S 5 ] V C . s c [ 1 v 6 8 2 2 0 . 9 0 3 2 : v i X r a Abstract Current scene graph datasets suffer from strong long-tail distributions of their predicate classes. Due to a very low number of some predicate classes in the test sets, no reliable metrics can be retrieved for the rarest classes. We construct a new panoptic scene graph dataset and a set of metrics that are designed as a benchmark for the predictive performance especially on rare predicate classes. To construct the new dataset, we propose a model-assisted annotation pipeline that efficiently finds rare predicate classes that are hidden in a large set of images like needles in a haystack. Contrary to prior scene graph datasets, Haystack con- tains explicit negative annotations, i.e. annotations that a given relation does not have a certain predicate class. Neg- ative annotations are helpful especially in the field of scene graph generation and open up a whole new set of possibili- ties to improve current scene graph generation models. Haystack is 100% compatible with existing panop- tic scene graph datasets and can easily be integrated with existing evaluation pipelines. Our dataset and code can be found here: https://lorjul.github.io/ haystack/. It includes annotation files and simple to use scripts and utilities, to help with integrating our dataset in existing work. 1. Introduction In scene graph generation, models are trained to de- tect and classify interactions between objects in an image. These interactions are called relations and are composed of three components: subject, predicate, and object. Ex- isting methods have improved over the last years but are still struggling with the long-tail distribution of the pred- icate classes in scene graph datasets [1, 18] and therefore perform worse on rare predicates. Much research is conducted to find methods that can tackle the long-tail problem of scene graph datasets. Al- though these methods can reduce the performance gap be- tween head and tail classes, they are still limited by the Figure 1. Schematic comparison of the different annotation struc- tures for our Haystack dataset and the PSG dataset. Our dataset prefers more annotations for rare predicates over full annotations of an image. Additionally, our dataset contains explicit negative annotations which must be implicitly derived for PSG. lack of available relations with tail predicates in existing datasets. For example, due to very small test sets, exist- ing methods cannot be reliably evaluated on rare predicates. Additionally, commonly used metrics from the Recall@k family can only provide insights on an image-level, without paying too much attention on a per relation basis. We define a new set of metrics that can evaluate rela- tions individually and provide substantial new information about existing methods. Our metrics can grade the model’s understanding of a specific predicate as well as influences between predicates before they are ranked for the final in- ference output. However, our metrics require reliable annotations, in- cluding negative annotations. Negative annotations show which predicates are not part of a specific relation. 
These annotations are not explicitly given for current scene graph datasets, preventing in-depth analysis on current test sets. To address this issue, we construct a new panoptic scene graph dataset that includes explicit negative annotations for rare predicate classes. Because existing test sets are rather lacking for rare predicate classes, we decide to create a new test dataset from scratch. Contrary to most prior scene graph datasets, our dataset is not a subset of Visual Genome [5] but SA-1B [4]. Therefore, our dataset can simply be used in addition to existing training and evaluation pipelines without having to deal with overlapping datasets. We use an efficient annotation pipeline that is designed to get many annotations for rare predicate classes as fast as ABCDABCDPSGReal WorldABCDHaystackEEETrue NegativeFalse NegativeTrue PositiveExplicitImplicit possible. Therefore, we first identify two main problems why existing scene graph datasets struggle with tail classes: 1. Annotators are tasked to annotate one image after the other without prioritizing images with potential rare predicates. 2. Annotators look at the image as a whole and have a set of predicates to choose from and tend to select more basic predicates instead the more informative but rare predicates. We address these problems using a model-assisted an- notation pipeline that searches through a large amount of 11 million images from the SA-1B [4] dataset and retrieves promising candidates for manual annotation. Our dataset is directly compatible with existing panop- tic scene graph datasets and can be easily integrated with existing evaluation pipelines. Our main contributions are: 1. An active learning inspired annotation pipeline that can be used to efficiently build scene graph datasets with a focus on rare predicate classes. We use model- assisted proposals to find rare predicate classes in a large set of unlabeled images. 2. With our pipeline, we build the Haystack scene graph dataset that contains about 25,000 relations with rare predicate classes for more than 11,300 images. It in- cludes negative annotations and can be used for better model evaluation on rare predicate classes. 3. A set of metrics that provide more in-depth insights into results on rare predicates and which are used to compare existing approaches. 2. Related Works 2.1. Scene Graph Datasets One of the first large scene graph datasets used for scene graph generation is Visual Genome [5]. It contains more than 100,000 images but has some ill-suited properties, e.g. 33,877 different object classes and 40,480 different predi- cate classes. These classes are mostly raw labels by annota- tors with only very slight data post processing. To improve this, Xu et al. took Visual Genome and constructed the com- monly used VG-150 [12] variant, keeping only the most fre- quent 50 predicate classes and 150 object classes. Although this variant drastically reduced the number of different pred- icate classes to the most relevant ones, many predicates are still redundant. Yang et al. identified these issues and created the PSG [14] dataset. It is based on the intersection of images from Visual Genome and COCO and contains 48749 images with panoptic segmentation masks and a total of 56 predicate classes. The authors tackled the issues of prior scene graph Figure 2. Example images with annotations that were missed in the PSG ground truth. datasets and decided to use a completely new set of pred- icate classes. 
They focused on a less redundant predicate vocabulary that can still be used to concisely represent the given scene as thorough as possible. Annotators were then given the fixed set of predicates and encouraged to use more informative predicates whenever applicable. Additionally, the authors made sure that not only salient regions of an image were covered with relation annotations. Still, predicate classes in the PSG dataset follow a long- tail distribution like prior datasets. However, compared to Visual Genome, PSG contains more reliable annotations and we decide to choose this dataset as the training set for our new test set. Annotating scene graphs extensively is very difficult. Al- though annotations for PSG are much more complete com- pared to Visual Genome, there are still a lot of images with many missing annotations. See figure 2 for example images where annotations were missed. Our annotation pipeline reuses the predicate classes from the PSG dataset but adds a whole new set of images, con- taining mostly rare predicate classes. Generating exhaustive scene graph datasets is a near impossible task and we decide to go a different route with the Haystack dataset. Instead of focusing only on positive annotations, we include negative annotations as well (figure 1). We will argue in section 3.3 why this dataset structure is superior when evaluating indi- vidual rare predicates classes. 2.2. Scene Graph Generation with Long-Tail Data ”over” right, would already achieve a Recall@k of 0.42. Both Visual Genome and derivatives like PSG suffer from a long-tail distribution of the predicate classes. In the case of PSG, the 3 most frequent predicate classes ”on”, ”beside”, and ”over” make up 52% of all available predi- cate labels in the dataset. More than a quarter of all predi- cate classes have less than 100 annotations in the dataset. This is a known problem in the field of scene graph generation and has been approached for example using re- sampling [2], reweighting [13], or predicate grouping [3]. Zhang et al. proposed a method to automatically relabel ex- isting datasets during training and convert less informative annotations to more informative ones on the fly. Zhou et al. built on this work and developed a model that works with panoptic scene graph datasets [17]. However, all of these methods have to evaluate on the same lacking test sets with very few samples for rare pred- icate classes. With our dataset, they can be evaluated on a more reliable test set. 2.3. Metrics for Scene Graph Generation In scene graph datasets, ground truth annotations are in- complete, making it difficult to apply arbitrary metrics from other fields in machine learning. To use standard metrics like accuracy, positive labels and negative labels are re- quired, too. However, scene graph datasets only contain positive annotations for the underlying relations. This is usually not a problem for classification tasks because nor- mally, there exists exactly one label per data sample. For scene graph datasets, the situation is different. Here, rela- tions can have zero, one, or even multiple predicate classes assigned. Consequently, if a predicate class is missing from a relation in the ground truth, it doesn’t necessarily mean that the predicate is not suited for that relation. Quite to the contrary, many images from current scene graph datasets contain images with many missed annotations. Therefore, the lack of a predicate class can only serve as a guess for a negative annotation. 
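A small example of why explicit negatives matter: with Haystack-style labels, a per-predicate score can be computed over exactly those relation-predicate pairs that were verified by an annotator, instead of treating every missing label as a negative. The function below is an illustrative sketch, not code from the Haystack toolkit; it assumes annotations stored as (relation id, predicate id, label) triples with label 1 for verified positives and 0 for verified negatives.

```python
def per_predicate_accuracy(annotations, scores, threshold=0.5):
    """annotations: list of (relation_id, predicate_id, label) with label in {0, 1};
    scores: dict mapping (relation_id, predicate_id) -> model confidence in [0, 1]."""
    correct, total = {}, {}
    for rel_id, pred_id, label in annotations:        # only explicitly labelled pairs
        decision = int(scores[(rel_id, pred_id)] >= threshold)
        correct[pred_id] = correct.get(pred_id, 0) + int(decision == label)
        total[pred_id] = total.get(pred_id, 0) + 1
    return {p: correct[p] / total[p] for p in total}

# Example: one verified positive and one verified negative for a hypothetical predicate 7.
anns = [(0, 7, 1), (1, 7, 0)]
scores = {(0, 7): 0.81, (1, 7): 0.35}
print(per_predicate_accuracy(anns, scores))           # {7: 1.0}
```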
To cope with this problem, most work on scene graph generation uses Recall@k [8] or a variant of it. Recall@k is calculated at an image level. Starting from a model out- put tensor that contains one row per possible relation and one column for each available predicate class, the rows are ranked by their most confident predicate score. Next, given the set of ground truth relations, we can check how many re- lations are covered by the top k ranked predicates from the model output tensor and calculate a ratio for the image. Re- call@k is the average over all these ratios. The Recall@k metric gives insight into how good the model is at filter- ing the most relevant relations on an image. However, its main disadvantage is that it favors frequent predicate classes over rare ones. On PSG, a hypothetical model that would only get all relations with the predicates ”on”, ”beside”, or A metric that is better suited to analyze the performance with the long-tail distribution in mind is the mean Re- call@k, a variant of Recall@k that first calculates individ- ual scores for every available predicate class and averages the values afterwards. This way, every predicate class has the same influence towards the final metric score. Another variant is the ”No Graph Constraint Recall@k” [9, 15] that allows multiple predicates per relation for the ranking. Metrics from the Recall@k family provide insights into how good the tested model is at ranking relevant relations on an image. 3. Methods Traditionally, scene graph datasets are annotated on a per-image basis [5, 14]. The annotator is tasked to annotate as many relations between objects as possible on a given im- age. To ensure a good quality of the annotations, annotators are encouraged to use more informative predicate classes whenever possible. However, this only works to a certain degree and the annotators must have a good overview over all available predicate classes to choose correctly. There- fore, we must change two fundamental steps in the annota- tion process to shift the focus to the tail classes: First, images must be sorted by the estimated chance to find rare predicates. We use a model-assisted approach for this task. Second, annotators should not have to keep the whole set of available predicates in mind. If given the choice, annota- tors tend to use more broader predicate classes like ”on” or ”beside” [16]. Therefore, we essentially reduce the annota- tion task to a binary one and use the proposal model to only show relations that are expected to have a given predicate class. In this case, the annotator only has to know about one predicate and is less likely to make any errors. 3.1. Annotation Pipeline An overview of our annotation pipeline can be seen in figure 3. From a large image database, we first ex- tract objects together with their segmentation masks using MaskDINO [6]. Next, we use these masks as ground truth data for inference with a pretrained scene graph generation model. We use this model as a proposal algorithm to se- lect relations that are likely to contain rare predicate classes. Starting from there, annotators are tasked to verify the var- ious predicates and label the proposed relations as either correct or incorrect. Contrary to prior scene graph datasets, we publish negative annotations, too, which opens the doors to a whole new set of training and evaluation techniques. To increase the diversity of our dataset, we cluster all available images in distinct groups and sample from them uniformly. 
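For reference, the image-level Recall@k described in Section 2.3 can be written down in a few lines. This is a simplified sketch written for this text: matching against the ground truth is reduced to shared relation indices, which hides the box/mask matching that a full panoptic evaluation performs.

```python
import numpy as np

def recall_at_k(pred_scores, gt_relations, k=20):
    """Simplified image-level Recall@k.

    pred_scores:  (num_relations, num_predicates) array of model confidences,
                  one row per candidate subject-object pair.
    gt_relations: set of (relation_index, predicate_index) ground-truth pairs.
    """
    best_pred = pred_scores.argmax(axis=1)      # most confident predicate per row
    best_score = pred_scores.max(axis=1)
    top_rows = np.argsort(-best_score)[:k]      # rank rows by that confidence
    top_pairs = {(int(r), int(best_pred[r])) for r in top_rows}
    hits = sum(1 for gt in gt_relations if gt in top_pairs)
    return hits / max(len(gt_relations), 1)

# Mean Recall@k computes this ratio separately per predicate class and then averages
# over classes, so that rare predicates contribute as much as frequent ones.
```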
Source images. Instead of extending existing scene graph datasets, we decide to start from scratch and introduce a completely new set of images for our scene graph dataset. To increase the chance of finding rare predicates, we process all images from the SA-1B [4] dataset. SA-1B contains more than 11 million high-quality images from different domains. We iterate over all available images and filter them depending on a proposal algorithm that we will explain later.

Figure 3. Overview of our annotation pipeline for Haystack. The pipeline is designed to find rare predicates in a very large set of images (the SA-1B dataset). We first cluster the images to increase the diversity and then calculate segmentation masks for each image. These segmentation masks are compatible with the PSG dataset. Next, we apply a scene graph model on the set of images and let it predict scores for all possible relations on the images. The scores are ranked and annotated by hand. In contrast to existing scene graph datasets, this allows us to publish negative annotations as well, which can be used later for an improved training. (Pipeline stages shown in the figure: SA-1B images → feature extractor and clustering (DINOv2 + k-Means) → image clusters → segmentation model (MaskDINO) → segmented objects → proposal model (e.g. VCTree) → ranked relations → human feedback → labelled relations and quality annotations → retraining of the proposal model.)

Increase diversity. Because we use a trained neural network to propose new relations, there will be a tendency toward a certain group of images. This would reduce the diversity of our dataset. However, generating a diverse dataset is important to improve the robustness of trained models. Thus, we first cluster the images from SA-1B based on features from DINOv2 [10]. DINOv2 is trained without supervision on a large set of images and produces features that can be used for further processing even without requiring a retraining of the backbone. We compute features with the ViT-L checkpoint. Next, we apply k-Means and put the images into 50 disjoint clusters. The number of clusters was empirically selected by iteratively changing the number of clusters on a smaller set of images. 50 clusters is a convenient trade-off that produces diverse clusters that still contain varying images inside them. Some example images for the first 5 clusters are shown in figure 4. Not all images from SA-1B are suitable candidates for our scene graph dataset, e.g., logos or portraits. We manually inspect example images for each cluster and decide whether to exclude all images from a cluster. We use the remaining clusters as pools for our proposal algorithm.

To propose new relations for annotation, we first sample uniformly from the set of clusters, and for each selected cluster, we apply the model-guided proposal algorithm to rank the most promising candidates for manual annotation. With the combination of both clusters and network-guided proposals, we make sure that we generate relevant relation candidates which are based on diverse images.

Figure 4. Example images of the first 5 clusters. Each row is a separate cluster. The clusters were calculated using k-Means with features from DINOv2 [10]. For some clusters, the contained images were not suitable for our task, like the fourth cluster above.

Segmentation masks. The images from SA-1B are not compatible with PSG because they are lacking the required panoptic segmentation masks. There are segmentation masks available, but they were extracted with SAM [4] and don't resemble the 133 thing and stuff classes from PSG. Annotating the missing segmentation masks by hand would be very inefficient and error-prone. Therefore, we use MaskDINO [6], trained on the object classes of PSG, and collect predictions for the full SA-1B dataset. MaskDINO is a foundation model capable of object detection and segmentation. It achieves state-of-the-art results on COCO instance segmentation, and indeed the vast majority of the returned segmentation masks for our task are almost pixel-perfect and suitable for further processing. Masks that are not good enough will be filtered later in our annotation pipeline.

Predicate renaming. We observe that annotators who are not familiar with the PSG dataset have difficulty applying the selected predicate class definitions. This is due to misunderstandings of the predicate classes when translated to other languages. To ensure that our test set is 100% aligned with the definitions from PSG, we decide to rename the existing predicate classes for our annotators. For every predicate class, we select a set of images that contain at least one relation with the given predicate. Next, without showing the actual list of predicate classes from PSG, we let the annotators decide on a predicate label for the given set of relations. Annotators are free to describe the relation in their own words, however they feel it fits best. Afterwards, we check whether the proposed new predicate name does indeed describe the predicate class. During this process, annotators for example renamed the predicate class "playing" to "engaged in activity using". This new label makes the difference between the PSG predicates "playing" and "playing with" much more evident for our annotators. For the final dataset, we convert the renamed labels back to the original ones.

Annotation interface. To add relation labels for the given images and their PSG-compatible segmentation masks, annotators could just label one image after the other and select all visible relations until enough data is available. However, there are two disadvantages to this approach: First, it is very inefficient to provide extensive annotations for each image. Our pre-processed images contain on average 16 objects per image, which would result in about 240 possible relations per image. But second and more importantly, annotators would not focus on rare predicate classes if they annotated the image as a whole.

We actively prevent this phenomenon by fixing the predicate and showing potential relation candidates one after the other. The annotator can label the relation with one of three choices (top row in figure 5):

1. Positive annotation: the fixed predicate does in fact fit the proposed relation.
2. Negative annotation: the fixed predicate does not fit the proposed relation, but another predicate would fit. To speed up the annotation process, the annotator does not have to label the correct positive predicate.
3. No relation: there is no predicate that would fit the proposed relation. This is a shortcut to applying option 2 for all predicates.

Regardless of the outcome, we store both positive and negative annotations for later use. Additionally, annotators can decide to skip a proposed image if they are not sure about the annotation.

Figure 5. A screenshot of our annotation interface. The annotator is given a fixed predicate to label, in this case "jumping from". On every image, one subject and one object are highlighted in different colors. The annotator can choose to classify the proposal as correct/incorrect. Additionally, if the segmentation mask has errors, there are two buttons to exclude the respective subject or object from further proposals. In this example, the annotator would click on "correct".

Model-assisted proposals. We use a scene graph generation model to propose probable relation candidates to the annotators. Because our pipeline does not depend on a specific choice of model, we choose the top-performing model VCTree [11] from the PSG paper [14]. We train it on the original PSG dataset, then calculate all possible relations between all available objects in all available images from the selected cluster and calculate a score for each predicate class. The score is normalized with the softmax function to prevent the model from focusing too much on certain images. Given these scores, we can rank the processed relation candidates. It is worth noting that most scene graph models contain a dedicated output for "no relation". In order to focus on proposals that are likely to show a relation, we rank our proposals by dividing the predicted score for a predicate by the predicted score for "no relation". Hence, relations with high "no relation" scores are shown less frequently.

Additional annotation options. Although MaskDINO provides impressive segmentation masks for our dataset, the masks are not always correct. In this case, annotators have the choice to mark objects as faulty and exclude them from further processing (bottom row in figure 5). We will not use these objects for training or evaluation.

Filter nonsense. To improve relation selection, we filter subject-predicate-object triplets unlikely to be viable for our new dataset. For instance, annotations like "table-drinking-water" are eliminated. We use PSG statistics to count how often a subject appears with a predicate, regardless of the object. Subject-predicate combinations with one or zero samples in PSG are considered noise and excluded. The same applies to predicate-object combinations. Although this reduces dataset diversity, our proposal algorithm can still suggest never-before-seen subject-predicate-object triplets if either subject-predicate or predicate-object pairs exist in PSG. For example, PSG contains relations with "dog-eating" and "eating-banana" but not "dog-eating-banana". Note that out-of-set triplets are less likely due to using proposals from PSG-pretrained models.

Retrain the proposal model. Finally, we use the annotations we have collected to retrain the proposal model during the annotation process. Once the new proposal model is trained, we use it as an improved proposal algorithm.

3.2. Dataset Properties

Our Haystack dataset is designed to contain as many rare predicates as possible, to provide a reliable test set for rare predicate classes. The dataset can be easily combined with existing scene graph datasets, such as PSG. To ensure compatibility, we reuse the predicate classes from PSG, but focus on the tail classes.

Haystack contains more than 25,000 relation annotations on a total of more than 11,300 images. Using our annotation pipeline, annotators were able to find 9% positive annotations out of all proposed annotations for rare predicate classes in total (see figure 8 for per-predicate ratios). Figure 7 shows a list of all positive annotations in our dataset compared with the PSG test set.
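As a concrete illustration of the model-assisted proposal step described in Section 3.1, the sketch below ranks candidate subject-object pairs for one fixed target predicate by its softmax-normalized score divided by the "no relation" score. The array layout and the position of the "no relation" output are assumptions of this sketch, not part of the released pipeline.

```python
# Rank relation candidates for a fixed predicate, as described in
# "Model-assisted proposals": softmax over predicate scores, then sort by
# score(predicate) / score("no relation"). Assumed layout: `scores` has one row
# per subject-object pair and one column per predicate class, column 0 being
# the dedicated "no relation" output.
import numpy as np

def rank_candidates(scores: np.ndarray, target_pred: int, no_rel: int = 0) -> np.ndarray:
    z = scores - scores.max(axis=1, keepdims=True)            # numerically stable softmax
    probs = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    ratio = probs[:, target_pred] / np.maximum(probs[:, no_rel], 1e-12)
    return np.argsort(-ratio)                                  # most promising pairs first

# Usage sketch: show annotators the 100 best "cooking" candidates of one cluster.
# order = rank_candidates(pair_scores, target_pred=PREDICATES.index("cooking"))[:100]
```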
Contrary to PSG, we annotate on a per-predicate basis instead of a per-image one. Consequently, the annotation density per image is lower compared to PSG. But at the same time, Haystack contains more rare predicate classes than PSG. For example, Haystack contains more than 10 times more relations with the predicate "cooking" or "climbing". See figure 6 for the sample size increase on the rarest predicate classes.

For every image that contains at least one annotated relation, we provide the respective segmentation mask with the same resolution as SA-1B, that is, 1500 pixels for the shorter edge. Additionally, we provide an annotation file that uses the same file format as PSG and is 100% compatible. Haystack can be effortlessly integrated into current PSG-based scene graph pipelines by appending our annotations to the existing JSON annotation files. We provide a small utility script in our repository that facilitates this step even further.

Figure 6. Relative dataset size of Haystack compared to the PSG test set (log scaled).

Figure 7. Distribution over all positive annotations in our dataset and the PSG test set. Our dataset contains more labels for almost all rare predicates when compared with the PSG test set.

Figure 8. Percentage of how many annotations per predicate are positive in the dataset. Some predicates are easier to find because the used model proposals are better. For example, "riding" can be easily retrieved.

3.3. Evaluation with the Haystack Dataset

Previous work usually evaluates its results using the Recall@k (R@k) and Mean Recall@k (mR@k) metrics. R@k ranks all relation predictions for an image and returns the ratio of how many ground truth annotations are covered by the top k ranked predictions. The score is then averaged over all images. This design choice is required because only positive annotations are available in previous datasets. If a missing annotation between a subject and an object would imply a negative annotation, the model output would be compared with many false negative ground truth values. For the final R@k score, predicates are essentially competing with each other for the top ranked positions.

R@k scores two different aspects at the same time: the model's capability of recognizing predicates for different relations and of ranking them by relevance for the final output. This makes sense for final evaluation but does not provide fine-grained insights into a model's performance. Therefore, we define three metrics that evaluate the two aspects separately.

A fundamental requirement of the proposed metrics is the availability of negative annotations. For prior scene graph datasets, these could be derived from positive annotations or subject-object pairs that have no annotation. However, as mentioned in section 2.1, relying on implicit negative annotations does not provide a reliable base for metrics. With the Haystack dataset, explicit negative annotations become available, and our new metrics can be calculated without the risk of noisy ground truths due to implicit negative annotations.

Our metrics can be used to analyze different aspects of model performance. A usual inference task for scene graph generation is to process an input image and return a list of all visible relations in the image. Using R@k, we can calculate a score that represents how successfully a model can achieve this task. However, R@k only looks at the bigger picture.

We design the Predicate ROC-AUC (P-AUC) score as the ROC-AUC over individual predicate scores. More precisely, to calculate the P-AUC for a fixed predicate class p, we first collect all relations that have a positive or negative ground truth annotation for p. Next, we calculate the corresponding predictions for each relation and only look at the scores that relate to p. We now have a list of confidences and a list of labels and can calculate the ROC-AUC. The ROC-AUC has some beneficial properties for our task: it is invariant to scale and transformation and can, therefore, score any predicate regardless of the average confidence. This is important because many predicate classes like "carrying" or "pushing" are often predicted with very low scores compared to other predicates. P-AUC describes the model's capability to decide whether a predicate class is applicable to a relation, regardless of the predictions for other predicates.

To understand how the predicate scores interfere with each other, we define two displacement metrics: Predicate Dominance Overestimation (PDO) scores how much a predicate displaces other predicates, whereas Predicate Discrimination Disadvantage (PDD) determines how much a predicate is displaced by other predicates. Both metrics are defined for a fixed predicate p. Let n be the number of possible predicate classes. Each ground truth annotation of a relation can be represented as a vector l ∈ {0, 1, −1}^n, with l_p = 0 if there is no annotation for this relation, l_p = 1 if p would be a correct predicate class for the relation, and l_p = −1 if not.
For every relation l, a model ranks the predicate classes by their confidence scores (a low rank corresponds to a high confidence): r ∈ [0, n − 1]^n ⊂ N^n. Let R_p be the set of annotated relations for predicate p: R_p = {(l, r) | l_p ≠ 0}. We construct the set P_p of all positively annotated relations and the set T_{p,k} of relations for which p was predicted with a score among the top k predictions. Note that we set the fraction in equation (3) to 1 if the denominator is 0.

T_{p,k} = {(l, r) ∈ R_p | r_p < k} ⊂ R_p                                    (1)
P_p = {(l, r) ∈ R_p | l_p = 1} ⊂ R_p                                        (2)
PDO_p := 1 − (1 / (n − 1)) · Σ_{k=1}^{n−1} |T_{p,k} ∩ P_p| / |T_{p,k}|      (3)
PDD_p := 1 − (1 / (n − 1)) · Σ_{k=1}^{n−1} |T_{p,k} ∩ P_p| / |P_p|          (4)

A high PDD score results when the predicate appears rarely in the top scores but would have been expected to be there, i.e., when it is displaced by other predicates. A high PDO score appears when a predicate is too often in the top scores but is not expected there, i.e., it displaces other predicates. Note that PDD and PDO go hand in hand and should always be evaluated together.

PDD and PDO are defined using recall and precision scores, respectively, and are therefore robust against unbalanced labels. A metric susceptible to the positive-negative ratio would return skewed results because the Haystack dataset contains varying negative ratios depending on the predicate class.

4. Experiments

We evaluate the top 3 performing predicate classification models [7, 11, 15] (the ResNet-101 variant) from the PSG paper [14] and report our proposed metrics on the Haystack dataset. We cannot evaluate our metrics on the PSG test set because then we would rely on implicit negative annotations that inevitably perturb the metrics.

In table 1, we report metric scores for selected rare predicate classes that are present in the Haystack dataset. Additionally, we show the R@50 score on the PSG test set for reference. Finally, we report the mean value over all predicate classes for each metric. For most rare predicate classes, R@50 returns the lowest possible value of 0.0, making it virtually impossible to derive any interesting information from it. In contrast, our methods report different values even for predicate classes that are difficult to predict.

The correlation between P-AUC and R@50 is about 0.22, indicating that the two metrics are indeed looking at different aspects of the model output. Some predicates like "eating" or "playing with" have very low scores on R@50 but very high scores on P-AUC. This indicates that such predicates are understood by the model but rarely make it to the top 50 predicates. For "playing with", this could for example happen where many people are playing together or interactions between other objects in the image are deemed more important. Values for PDO are expected to be low for rare predicate classes, which usually don't displace other predicates.
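To make the three metrics concrete, the following is a minimal reference sketch that follows Eqs. (1)–(4) and the P-AUC description directly. The input layout (per-relation labels l_p, confidences, and ranks r_p for a single predicate p) is an assumption of this sketch, and at least one positive and one negative annotation for p are assumed.

```python
# P-AUC, PDO and PDD for one predicate p, following the definitions above.
# labels: l_p in {1, -1} for every annotated relation in R_p
# scores: the model's confidence for p on those relations
# ranks:  the rank r_p of p among the n predicate classes (0 = most confident)
import numpy as np
from sklearn.metrics import roc_auc_score

def predicate_metrics(labels, scores, ranks, n_classes):
    labels, scores, ranks = map(np.asarray, (labels, scores, ranks))
    p_auc = roc_auc_score((labels == 1).astype(int), scores)   # Predicate ROC-AUC
    positives = labels == 1                                     # P_p (assumed non-empty)
    pdo_terms, pdd_terms = [], []
    for k in range(1, n_classes):
        in_top_k = ranks < k                                    # T_{p,k}
        hits = np.sum(in_top_k & positives)                     # |T_{p,k} ∩ P_p|
        pdo_terms.append(hits / in_top_k.sum() if in_top_k.sum() else 1.0)  # Eq. (3) convention
        pdd_terms.append(hits / positives.sum())
    pdo = 1.0 - float(np.mean(pdo_terms))                       # Eq. (3)
    pdd = 1.0 - float(np.mean(pdd_terms))                       # Eq. (4)
    return p_auc, pdo, pdd
```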
Predicate on back of going down painted on lying on jumping over guiding eating drinking catching playing with chasing climbing cleaning pushing pulling opening cooking throwing slicing about to hit kicking swinging entering exiting enclosing leaning on mean VCTree GPSNet MOTIFS P-AUC ↑ 0.48 0.89 0.66 0.68 0.51 0.53 0.90 0.49 0.46 0.91 0.42 0.67 0.66 0.51 0.49 0.63 0.62 0.51 0.50 0.70 0.67 0.83 0.57 0.49 0.99 0.62 0.63 PDD ↓ 0.27 0.24 0.19 0.67 0.53 0.44 0.19 0.50 0.74 0.44 0.72 0.84 0.77 0.65 0.33 0.90 0.83 0.62 0.45 0.26 0.85 0.46 0.45 0.70 0.05 0.15 0.51 PDO ↓ R@50 ↑ 0.62 0.59 0.48 0.35 0.73 0.69 0.33 0.77 0.48 0.51 0.71 0.58 0.61 0.64 0.44 0.51 0.48 0.72 0.85 0.49 0.45 0.23 0.55 0.63 0.70 0.74 0.57 0.00 0.00 0.02 0.22 0.00 0.00 0.02 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.05 0.00 0.00 0.00 0.00 0.67 0.25 0.08 0.00 0.00 0.03 0.00 0.05 P-AUC ↑ 0.30 0.61 0.61 0.59 0.51 0.61 0.64 0.49 0.39 0.80 0.40 0.72 0.67 0.57 0.43 0.43 0.59 0.42 0.50 0.73 0.59 0.73 0.61 0.49 0.90 0.78 0.58 PDD ↓ 0.37 0.38 0.21 0.40 0.50 0.47 0.24 0.52 0.65 0.39 0.53 0.94 0.84 0.61 0.27 0.92 0.77 0.58 0.43 0.33 0.69 0.62 0.45 0.73 0.12 0.13 0.50 PDO ↓ R@50 ↑ 0.53 0.65 0.52 0.54 0.71 0.69 0.56 0.78 0.64 0.73 0.60 0.35 0.39 0.49 0.51 0.40 0.39 0.63 0.83 0.38 0.56 0.35 0.47 0.55 0.62 0.71 0.56 0.03 0.07 0.00 0.19 0.00 0.00 0.04 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.07 0.00 0.00 0.00 0.00 0.70 0.00 0.16 0.00 0.00 0.03 0.00 0.05 P-AUC ↑ 0.64 0.38 0.60 0.34 0.61 0.54 0.56 0.59 0.35 0.83 0.23 0.68 0.64 0.64 0.43 0.60 0.63 0.57 0.37 0.64 0.65 0.76 0.52 0.58 0.72 0.68 0.57 PDD ↓ 0.27 0.57 0.24 0.45 0.61 0.72 0.23 0.61 0.87 0.60 0.63 0.68 0.91 0.70 0.63 0.84 0.84 0.58 0.72 0.35 0.84 0.73 0.43 0.42 0.11 0.18 0.57 PDO ↓ R@50 ↑ 0.58 0.66 0.51 0.54 0.66 0.62 0.52 0.76 0.60 0.42 0.63 0.65 0.67 0.49 0.50 0.54 0.54 0.63 0.76 0.56 0.62 0.46 0.55 0.73 0.72 0.77 0.60 0.00 0.07 0.00 0.26 0.00 0.00 0.09 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.05 0.00 0.00 0.08 0.00 0.74 0.25 0.16 0.00 0.00 0.07 0.00 0.07 Table 1. Metric results for three different scene graph generation models, evaluated on the Haystack dataset for the predicate classification task. We add the Recall@50 metric, calculated on the PSG test set for reference. For Predicate Discrimination Disadvantage (PDD) and Predicate Dominance Overestimation (PDO), lower scores are better. For Predicate ROC-AUC (P-AUC) and Recall@k (R@50), higher scores are better. The bottom row is the average over all rows above and represents a unified score for the whole dataset. The highest ranked predicate with PDO is ”slicing”, which makes sense because subjects and objects that are in a ”slic- ing” relation with each other usually don’t have many alter- native predicates that would make sense. This information could not be derived from the R@50 metric. In general, VCTree performs best when evaluated with a standard mR@50 compared to the other two methods and performs best on our three metrics as well. It has the highest Predicate ROC-AUC score, indicating the best understand- ing of rare predicate classes and the lowest PDD and PDO scores, which demonstrate that the model is more suitable at deciding between predicates within a relation. The gap to GPSNet is small though and GPSNet could be improved by focusing more on the predicates independently. With the P-AUC, we can see that existing models are indeed capable of understanding rare predicate classes like ”playing with”. 
The R@50 metric does not provide this kind of information and only tells us that the predicates lose against other predicates on the image. However, with the PDD and PDO metrics, we can detect that this problem already occurs at a relation level. Future scene graph model architectures should take this into account and improve their predicate ranking on individual relations. Existing models appear to already have a fundamental understanding of the individual predicate classes.

5. Conclusion

We presented the Haystack scene graph dataset and showed how our annotation pipeline is specifically designed to assist existing scene graph datasets with rare predicate classes. We use a model-assisted approach to streamline the annotation process and collect as many rare predicate annotations as quickly as possible. The Haystack dataset enables us to develop new scene graph metrics that are tweaked towards deeper relation-level insights into model predictions, with a focus on rare predicate classes. With reliable negative annotations available, many metrics from other fields in computer vision and statistics can be applied to the scene graph context. In the future, we will continue our research in this direction and increase the size and quality of our dataset. We are excited to see how other authors will integrate explicit negatives from our dataset into their work.

References

[13] Shaotian Yan, Chen Shen, Zhongming Jin, Jianqiang Huang, Rongxin Jiang, Yaowu Chen, and Xian-Sheng Hua. PCPL: Predicate-correlation perception learning for unbiased scene graph generation. In Proceedings of the 28th ACM International Conference on Multimedia, MM '20, pages 265–273. Association for Computing Machinery. 3
[14] Jingkang Yang, Yi Zhe Ang, Zujin Guo, Kaiyang Zhou, Wayne Zhang, and Ziwei Liu. Panoptic scene graph generation. 2, 3, 5, 7
[15] Rowan Zellers, Mark Yatskar, Sam Thomson, and Yejin Choi. Neural motifs: Scene graph parsing with global context. In Conference on Computer Vision and Pattern Recognition, 2018. 3, 7
[16] Ao Zhang, Yuan Yao, Qianyu Chen, Wei Ji, Zhiyuan Liu, Maosong Sun, and Tat-Seng Chua. Fine-grained scene graph generation with data transfer. 3
[17] Zijian Zhou, Miaojing Shi, and Holger Caesar. HiLo: Exploiting high low frequency relations for unbiased panoptic scene graph generation. 3
[18] Guangming Zhu, Liang Zhang, Youliang Jiang, Yixuan Dang, Haoran Hou, Peiyi Shen, Mingtao Feng, Xia Zhao, Qiguang Miao, Syed Afaq Ali Shah, and Bennamoun. Scene graph generation: A comprehensive survey. ArXiv, abs/2201.00443, 2022. 1
[1] Xiaojun Chang, Pengzhen Ren, Pengfei Xu, Zhihui Li, Xiaojiang Chen, and Alex Hauptmann. A comprehensive survey of scene graphs: Generation and application. IEEE Transactions on Pattern Analysis and Machine Intelligence, 45(1):1–26, Jan 2023. 1
[2] Alakh Desai, Tz-Ying Wu, Subarna Tripathi, and Nuno Vasconcelos. Learning of visual relations: The devil is in the tails. pages 15404–15413. 3
[3] Xingning Dong, Tian Gan, Xuemeng Song, Jianlong Wu, Yuan Cheng, and Liqiang Nie. Stacked hybrid-attention and group collaborative learning for unbiased scene graph generation. pages 19427–19436, 2022. 3
[4] Alexander Kirillov, Eric Mintun, Nikhila Ravi, Hanzi Mao, Chloe Rolland, Laura Gustafson, Tete Xiao, Spencer Whitehead, Alexander C. Berg, Wan-Yen Lo, Piotr Dollár, and Ross Girshick. Segment Anything, Apr. 2023. arXiv:2304.02643 [cs].
1, 2, 4 [5] Ranjay Krishna, Yuke Zhu, Oliver Groth, Justin Johnson, Kenji Hata, Joshua Kravitz, Stephanie Chen, Yannis Kalan- tidis, Li-Jia Li, David A. Shamma, Michael S. Bernstein, and Li Fei-Fei. Visual genome: Connecting language and vision using crowdsourced dense image annotations. 123(1):32–73, 2017. 1, 2, 3 [6] Feng Li, Hao Zhang, Huaizhe Xu, Shilong Liu, Lei Zhang, Lionel M Ni, and Heung-Yeung Shum. Mask DINO: To- wards a Unified Transformer-Based Framework for Object Detection and Segmentation. 3, 4 [7] Xin Lin, Changxing Ding, Jinquan Zeng, and Dacheng Tao. Gps-net: Graph property sensing network for scene graph generation. 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 3743–3752, 2020. 7 [8] Cewu Lu, Ranjay Krishna, Michael Bernstein, and Li Fei- Fei. Visual relationship detection with language priors. In European Conference on Computer Vision, 2016. 3 [9] Alejandro Newell and Jia Deng. Pixels to graphs by associa- tive embedding. Advances in Neural Information Process- ing Systems, 2017-December:2172–2181, 2017. 31st An- nual Conference on Neural Information Processing Systems, NIPS 2017 ; Conference date: 04-12-2017 Through 09-12- 2017. 3 [10] Maxime Oquab, Timoth´ee Darcet, Th´eo Moutakanni, Huy Vo, Marc Szafraniec, Vasil Khalidov, Pierre Fernandez, Daniel Haziza, Francisco Massa, Alaaeldin El-Nouby, Mah- moud Assran, Nicolas Ballas, Wojciech Galuba, Russell Howes, Po-Yao Huang, Shang-Wen Li, Ishan Misra, Michael Rabbat, Vasu Sharma, Gabriel Synnaeve, Hu Xu, Herv´e Je- gou, Julien Mairal, Patrick Labatut, Armand Joulin, and Pi- otr Bojanowski. DINOv2: Learning robust visual features without supervision. 4 [11] Kaihua Tang, Hanwang Zhang, Baoyuan Wu, Wenhan Luo, and Wei Liu. Learning to compose dynamic tree structures for visual contexts. In 2019 IEEE/CVF Conference on Com- puter Vision and Pattern Recognition (CVPR), pages 6612– 6621. ISSN: 2575-7075. 5, 7 [12] Danfei Xu, Yuke Zhu, Christopher B. Choy, and Li Fei-Fei. Scene graph generation by iterative message passing. 2
synthetic_cpt
1
Large_language_model_based_framework_for_automated_extraction_of_genetic_interactions_from_unstructured_data.pdf
GAMEDX: GENERATIVE AI-BASED MEDICAL ENTITY DATA EXTRACTOR USING LARGE LANGUAGE MODELS 4 2 0 2 y a M 1 3 ] L C . s c [ 1 v 5 8 5 0 2 . 5 0 4 2 : v i X r a Mohammed-Khalil Ghali, Abdelrahman Farrag, Hajar Sakai, Hicham El Baz, Yu Jin, Sarah Lam School of Systems Science and Industrial Engineering State University of New York at Binghamton Binghamton, NY, USA mghali1, afarrag1, hsakai1, helbaz1, yjin, [email protected] ABSTRACT In the rapidly evolving field of healthcare and beyond, the integration of generative AI in Electronic Health Records (EHRs) represents a pivotal advancement, addressing a critical gap in current information extraction techniques. This paper introduces GAMedX, a Named Entity Recognition (NER) approach utilizing Large Language Models (LLMs) to efficiently extract entities from medical narratives and unstructured text generated throughout various phases of the patient hospital visit. By addressing the significant challenge of processing unstructured medical text, GAMedX leverages the capabilities of generative AI and LLMs for improved data extraction. Employing a unified approach, the methodology integrates open-source LLMs for NER, utilizing chained prompts and Pydantic schemas for structured output to navigate the complexities of specialized medical jargon. The findings reveal significant ROUGE F1 score on one of the evaluation datasets with an accuracy of 98%. This innovation enhances entity extraction, offering a scalable, cost-effective solution for automated forms filling from unstructured data. As a result, GAMedX streamlines the processing of unstructured narratives, and sets a new standard in NER applications, contributing significantly to theoretical and practical advancements beyond the medical technology sphere. Keywords Named Entity Recognition · Large Language Models · Generative Ai · Medical Data Extraction · Prompts Engineering 1 Introduction The integration of Artificial Intelligence (AI) in healthcare, particularly through Electronic Health Records (EHRs), marks a significant advancement in medical technology. This progression is essential for enhancing healthcare delivery and improving patient outcomes, aiming at efficiently extracting and analyzing patient information from EHRs, which contain a blend of structured data such as coded diagnoses and medications and unstructured data, including clinical narratives and notes. While structured data entry in EHRs offers numerous benefits and is increasingly prevalent, its practical use by clinicians remains limited due to the added documentation burden [1]. Consequently, healthcare providers often prefer documenting patient information through clinical narratives [2]. These narratives, rich in detailed patient information, are crucial for enhancing the accuracy of diagnostic and prognostic models [3, 4]. However, the free-text format of these narratives poses a significant challenge: they are not readily amenable to computational analysis, which typically requires structured data. This challenge is further compounded by the intrinsic complexities of clinical text, including irregularities like ambiguous medical jargon and nonstandard phrase structures. Despite the powerful capabilities of Natural Language Processing (NLP) to comprehend medical language in healthcare settings [5], such irregularities make it difficult for standard NLP tools to perform effectively when applied to clinical text, which necessitates domain-specific expertise for accurate annotation [6]. 
However, the integration of Large Language Models (LLMs) into the healthcare sector is not without its constraints, particularly due to the confidentiality requirements governing clinical information. These requirements significantly restrict the availability and utilization of public datasets, which are essential for training and fine-tuning LLMs. This constraint is further compounded by the need for secure and compliant IT system integration in healthcare [7]. The sensitivity of patient data requires robust security measures to prevent unauthorized access and ensure data privacy [8]. Additionally, healthcare IT systems often involve complex, diverse software ecosystems, requiring LLMs to be adaptable and interoperable with various existing platforms and data formats [9]. This results in a limited supply and restricted distribution of these resources, leading to the creation of clinical NLP datasets that are limited and institution-specific [10]. Each healthcare institution tends to possess unique, domain-specific data that is distinct from data held by other institutions. Consequently, this situation gives rise to a collection of diverse and institution-specific datasets, complicating the development of broadly applicable NLP tools in the healthcare field. These models that do not integrate these elements are generally restrained to tasks where labels are naturally generated in the course of clinical practice, such as the prediction of International Classification of Diseases Codes [11] or mortality risk assessments [12]. Figure 1: Example of a patient-doctor dialogue with annotated data elements for NER, highlighting the extraction of patient names, medications, symptoms, conditions, and precautions. In response to these challenges, there is an emerging trend and a pressing need to develop AI-enabled, next-generation Generative Pretrained Transformer (GPT) or LLMs specifically tailored for the healthcare industry [13]. These advanced models not only provide accurate and error-free medical information but also address the ethical, legal, and practical concerns inherent in their deployment in sensitive healthcare environments. This paper proposes developing GAMedX, an advanced Named Entity Recognition (NER) system utilizing LLMs. The system is designed to precisely extract essential information from medical conversations and dictations. The intended outcome is a marked improvement in the efficiency of completing structured healthcare forms, with an emphasis on reliability, consistency, and a seamless operational workflow. GAMedX is extended beyond the technological integration, including critical real-world impacts such as accuracy, processing speed, user satisfaction, compliance, and the smooth integration of the solution into existing healthcare systems. This paper is structured as follows: Section 2 reviews related literature on the integration of LLMs in the healthcare sector for information extraction, with a focus on prompt engineering and pretrained LLMs for clinical NLP applications. Section 3 details the proposed novel model. Section 4 presents the evaluation results of this model. Finally, the conclusion and directions for future work are discussed in Section 5. 2 Literature review The integration of AI in the healthcare domain has been significantly advanced by developments in NLP. Most NLP solutions leverage deep learning models [14] based on neural network (NN) architectures, a rapidly evolving area within machine learning. 
Initially, Convolutional Neural Networks (CNNs) [15] and Recurrent Neural Networks (RNNs) [16] were employed in early deep learning applications for NLP. They struggled with processing long-term dependencies and contextual information in large text sequences [17]. However, transformer architectures, notably the Bidirectional 2 Encoder Representations from Transformers (BERT) [18], have recently set a new standard in NLP. These models are distinguished by their self-attention mechanism [17], which efficiently processes the relative significance of each word in a sentence, enhancing understanding of context and relationships within the text. This capability has led to transformers overperforming other models in various NLP tasks. For instance, in NER [19, 20], key entities in the text were identified and categorized, such as names of people, organizations, or locations; relation extraction transformers [21, 22, 23] discern and extract relationships between entities within a text; sentence similarity tasks [24, 25, 26] involve evaluating the degree of similarity or relatedness between two sentences; natural language inference [27, 28] is about determining the logical relationship between a pair of sentences, such as whether one implies, contradicts, or is neutral to the other; question answering [29, 30] these models comprehend a given text and accurately respond to questions based on that text, demonstrating a deep understanding of content and context. The healthcare and medical sectors are facing the challenge of streamlining medical documentation processes, which are essential but also labour-intensive and time-consuming. Addressing this issue has increased interest in LLMs to develop improved NER systems. These systems are designed not only to extract and interpret information from medical dialogues and dictations accurately but also to efficiently summarize this information. However, general transformer models such as GPT often struggle to extract accurate information due to their training on more general datasets rather than specialized healthcare data. Additionally, LLMs have been shown to have significant drawbacks, such as generating misinformation, falsifying data, and contributing to plagiarism [31]. The challenges extend further when implementing NER across different languages, each with its unique linguistic features and complexities. For example, Chinese medical text NER faces unique challenges, such as intricate terminology, variable entity lengths, and context-dependent entity classifications [32]. These concerns are particularly severe in the healthcare context, where the accuracy of information is paramount. Therefore, based on the literature, the development of such systems may follow one of three approaches: the first involves building LLMs that are specifically trained on healthcare data; the other two approaches involve adapting pre-existing models to clinical text through prompt engineering techniques or tuning specific layers, which can help guide the models to better understand and process medical-specific language and terms. Figure 2: Overview of LLM development methods – Pre-Training on diverse sources, Fine-Tuning, and Prompting. Training transformer models specifically for the healthcare sector is essential due to the significant differences in syntax and vocabulary between clinical text and the text typically used in general NLP. 
Clinical text is replete with specialized terminology and unique sentence structures that general language models may not recognize or understand properly [33]. Furthermore, the vastness of human language, with its nearly limitless combinations of words, sentences, meanings, and syntax, necessitates models that can comprehend and generate language with a high degree of accuracy [34]. The process of training these transformers typically occurs in two stages. The first is language model pretraining, where the model learns from a large corpus of unlabeled text, gaining an understanding of language through self-supervised learning objectives. The second stage is fine-tuning, where the model is refined to perform specific tasks using labeled training data. This process, known as transfer learning, allows the application of a model trained on one task to be adapted for another, leveraging the broad linguistic knowledge it has acquired. Recent studies have highlighted the superiority of large transformer models trained on massive text corpora over their predecessors, noting their enhanced language understanding and generation abilities. The significance of transformer models has directed research into extensive models, such as the GPT-3 [35], which boasts 175 billion parameters and is trained on over 400 billion words 3 of text, demonstrating remarkable performance. In the biomedical domain, specialized models like BioBERT [36] and PubMedBERT [36], each with 110 million parameters, have been trained on PubMed’s biomedical literature to better capture the language used in that field. NVIDIA has also developed the BioMegatron models, ranging from 345 million to 1.2 billion parameters [37], using an extensive collection of text derived from PubMed. However, scaling transformer models in the clinical domain has been limited. This is partly due to the sensitive nature of clinical narratives, which often contain Protected Health Information, and the substantial computational resources required to train larger models. For instance, ClinicalBERT [38], a substantial model with 110 million parameters, was developed using the MIMIC-III dataset, which includes 0.5 billion words of clinical narratives and stands as one of the largest models tailored for the clinical domain. Yet, recent advancements have led to the emergence of GatorTron [39], which represents a significant evolution in clinical language models. Trained with 8.9 billion parameters on a corpus exceeding 90 billion words, including 82 billion from de-identified clinical texts, GatorTron showed significant improvements in model scale and performance on clinical NLP tasks. However, this approach is burdened by substantial challenges. The development and training of LLMs demand extensive computational resources, necessitating significant financial investments [40]. Additionally, for effective training, LLMs require vast amounts of data, often meaning that healthcare institutions must collaborate and share data. This aspect poses a particular challenge in the healthcare sector, where strict data privacy regulations and institutional data protection agreements are in place. Such constraints can lead to the creation of data silos, impeding the free exchange of information necessary for training these models [40]. Furthermore, not every hospital or healthcare institution has the financial capacity to either rent these pretrained models or invest in training their own model. 
Prompt-based learning is a technique where a pre-trained model is adapted to perform specific tasks by using carefully constructed text prompts. These prompts are designed to provide context and instruction, guiding the model in its response. Prompts are crucial in leveraging the model’s extensive pre-existing language understanding for new, specific applications. The process of prompt-based learning typically involves two primary steps: designing the prompt involves creating a clear, concise textual instruction that explains the task to the model. The prompt sets the context and often includes specific keywords to guide the model’s response in the desired direction, as well as in-context examples, in which prompts are supplemented with in-context examples. These examples serve as demonstrations of the task, showing the model the type of input, it will receive and the expected format of its response. The inclusion of examples varies; it could range from none “zero-shot”, relying only on the prompt’s guidance, to several “few-shot”, providing a more comprehensive context. These approaches allow the LLMs to be quickly adapted to a wide range of general- domain tasks with little to no specific prior training [41, 42, 43]. Prompt-based learning has shown significant progress in classification tasks such as multiple-choice questions [44], demonstrating its adaptability even in complex scenarios like coreference resolution. In these instances, models are prompted to simplify the task by choosing between two potential antecedents for a pronoun or confirming the correctness of a single antecedent [45]. This method’s effectiveness relies heavily on the generation of a comprehensive list of potential antecedents, necessitating additional tools or multiple queries, potentially increasing computational demands. However, the literature reveals challenges in effectively dealing with information extraction tasks. These challenges include the extraction of multiple interconnected concepts essential for understanding patient information [46]. Furthermore, there are issues with overlapped and nested concepts, where a single concept might be categorized under multiple labels or linked to various relations, complicating annotations [47]. Efficiently extracting relations is challenging, as enumerating all concept combinations before classification leads to a skewed positive-negative ratio, given that only a few combinations have actual relations [48]. Additionally, the adaptability of concept and relation extraction methodologies across different institutional settings raises concerns about the portability and generalizability of the model, underscoring the need for strategies that can accommodate the diversity of clinical documentation practices [49, 50]. To tackle the issue of effectively extracting clinical concepts and their relations, a unified prompt-based Machine Reading Comprehension (MRC) architecture is proposed, utilizing state-of-the-art transformer models [51]. The study benchmarks its MRC models against existing deep learning models on two key datasets from the National NLP Clinical Challenges (n2c2) [52]: one focusing on medications and adverse drug events from 2018 [53] and the other on the relations of social determinants of health (SDoH) from 2022 [54]. However, the practice of human-designed, discrete prompts necessitates prior domain knowledge, limiting the method’s generalizability across different NLP tasks without undergoing a similar process of "prompt-based learning" for each new task. 
The study’s reliance on datasets derived from the publicly available Medical Information Mart for Intensive Care III (MIMIC-III) database raises concerns about data leakage, especially when utilizing models pre-trained on this same dataset. In addition, the approach is dependent on privately pre-trained models, which are not accessible for external validation or replication, further compounding these limitations. An approach was proposed to address this challenge by accessing the underlying model and using labeled data for training the extraction layer [55]. However, this requirement can pose significant limitations in scenarios where such 4 access is restricted or where labeled data is scarce or expensive to obtain. Another challenge is related to the output generated by the LLMs, such as those observed with InstructGPT [56], which, despite incorporating extraction examples into its training, fails to produce structured outputs. This limitation is significant as it necessitates additional steps to convert the model’s text-based responses into a structured form that can be readily analyzed or integrated into existing healthcare systems. A handcrafted guided prompt design was proposed aiming at a unified output structure of LLMs for diverse clinical information extraction tasks, demonstrating versatility across zero- and few-shot learning scenarios [57]. Despite its advancements, the model faces limitations, including difficulties in precisely matching detailed clinical schemas at the token level, a tendency to generate non-trivial answers even when none are required, and restrictions imposed by data use policies that limit the scope of training and evaluation datasets. Moreover, the model’s primary reliance on English-language texts and data from dictated notes may not capture the full diversity of clinical documentation practices, posing challenges for its application across different linguistic and healthcare contexts. To conclude the literature review, it is crucial to note that existing methodologies for extracting medical information predominantly involve training or fine-tuning LLMs to align with specific testing data requirements. A common strategy includes fine-tuning models; however, this often relies on proprietary, non-free, pre-trained medical LLMs and adopts a non-unified model approach [58]. Such practices may limit accessibility, scalability, and the generalizability of solutions across different healthcare settings due to the proprietary nature of the models and the tailored approach to model training. In this study, GAMedX introduces an innovative wrapping approach designed to achieve a unified structure format for a NER system, focusing on the comprehensive understanding of multiple interconnected concepts to enhance patient information processing. GAMedX, leveraging open-source LLMs, offers a straightforward, cost-efficient solution. It is aimed at optimizing hospital resources and services, directing attention and funds more effectively towards improving patient health outcomes. GAMedX stands as a testament to the potential of harnessing advanced NLP technologies to not only advance clinical data processing but also to significantly contribute to the overall efficiency of healthcare delivery. 3 Data For the experiments conducted with the proposed approach, two datasets are considered. Each dataset contains textual data generated subsequent to medical encounters. Additionally, these datasets’ annotations are Named Entity Recognition (NER) task-oriented. 
The first dataset is a competition dataset that was synthetically generated by Prediction Guard. The competition is the Data 4 Good Challenge 2023 organized by Purdue University and during which we clinched the 1st place in the technical component consisting of using open-source LLMs for information extraction. The second dataset is extracted from Vaccine Adverse Event Reporting System (VAERS) database where the patient’s personal information is protected by being de-identified. As privacy is a key concern in healthcare and keeping the patient’s Protected Health Information (PHI) safe, it is important to mention that both datasets used for experiments are HIPAA compliant either because they were synthetically generated or a de-identification process was involved. 3.1 Dataset 1: Medical Transcripts (Data 4 Good Challenge) During a medical appointment, a conversation takes place between a patient and their provider. The medical concerns discussed are either recorded during the appointment or summarized and dictated by the provider afterward. As a result, multiple audio files are compiled. These files contain different patient’s medical information and are afterward transcribed into text data. The transcripts form a gold mine, however, being raw data, they end up being both challenging and time-consuming to retrieve information from. The information to be retrieved was highlighted in the example described in Figure 1. This dataset was provided by Prediction Guard during the Data 4 Good challenge organized in Fall 2023. There are 2001 transcripts in this dataset and each of them corresponds to six labels. 3.2 Dataset 2: Vaccine Adverse Event Reporting System (VAERS) Jointly managed by both the US Centers for Disease Control and Prevention (CDC) and the U.S. Food and Drug Admin- istration (FDA), the Vaccine Adverse Event Reporting System (VAERS) is set up for post-vaccination administration adverse events collection and analysis [59]. As a result, uncommon patterns of adverse events can be identified and therefore indicate any potential safety issues with a specific vaccine. Therefore, this database provides an efficient tool to ensure a continued safety while administering vaccine by enabling early warning signals identifications and contributing to the public health decisions regarding vaccination recommendations. Additionally, this data is regularly updated and consists of a general comprehensive list of definitions, descriptions, and abbreviations. Similar to [60], 91 annotated safety reports are considered where the narratives are symptoms descriptions. The adverse events to extract 5 Table 1: Tasks and Datasets Description Task Dataset Description Example Extraction Medical Textual Data Information Extraction Medical Transcripts (Data 4 Good Competition) Vaccine Adverse Event Reporting System (VAERS) Given a medical transcript, extract six patient-related information Given a medical report, extract the post- vaccination adverse events "During my visit with Ilana Bellinger, an 85-year-old patient who presented [. . . ]" "Pt started feeling dizzy immediately after vaccine was given [. . . ]" - Name: Ilana Bellinger - Age: 85 Dizziness are those related to a nervous system disorder in instances of Guillain-Barre Syndrome (GBS) associated with influenza vaccinations, as reported in the VAERS data. 4 Methodology A unified approach leveraging open-source pre-trained LLMs is proposed in this paper to extract structured information from healthcare textual data. 
Given the challenging nature of information extraction from unstructured data, an advanced Named Entity Recognition (NER) process was developed using a LLM wrapper. The goal is to automate redundant and time-consuming medical documentation and form-filling while ensuring a reliable system capable of being seamlessly integrated into the healthcare facility infrastructure. All this is, however, subject to one additional key consideration consisting of only harnessing open-source LLMs to ensure that no additional cost would be required to for smoothly run the process. The elaborated unified framework is summarized in Figure 3, based on which experiments were conducted and evaluations were carried out. 4.1 Loading and Preprocessing Data The first step consists of obtaining the dataset subject to experiments. In each dataset Dk, the document di is either a medical transcript or report and the output Oi consists of the extracted information: Dk = {(d1, O1), (d2, O2), . . . , (dm, Om)}, (1) where k = {1,2} refers to the dataset considered, and m the dataset size. The textual data compromises transcripts of patient-doctor conversations and dictations as well as medical reports. All documents can be either in ‘.txt’ or ‘.json’ format. Once loaded, they are transformed by using LangChain’s “Recursive text Splitter” in order to keep the semantically related chunks together while enabling batch processing imposed by the token limit and resorted to for computational efficiency. Moreover, if the document considered is not in English, it is translated into English: where Ti refers to the translated text and P the preprocessing operation. Ti = P (di), Bi = (cid:88) Ti, (2) (3) j where Bi refers to the batch resulting from the “Recursive text Splitter” and j = {1, . . . , J} with J being the number of batches. 6 X d e M A G : y g o l o d o h t e m d e p o l e v e d e h t r o f t c a r t s b a l a c i h p a r G : 3 e r u g i F 7 4.2 Prompt Crafting & Pydantic Schema In addition to the general prompt designed to extract the patient’s information or post-vaccination adverse events, a Pydantic Schema is established. It consists of defining the data type and format of the information to be retrieved. This ensures that the extracted data conforms to what is expected: where E(Ti) refers to the prompt engineering and pydantic schema process. E(Ti) = “[Task Description][Query: Ti]”, (4) Figure 4: Example of the prompt used 4.3 Pre-trained Open-Source LLMs Used In this research the focus is to leverage open-source LLMs for healthcare-based information extraction. The selected models are Mistral 7B and Gemma 7B. Both these models are open models and therefore offer to developers the freedom to customize them and tailor them to specific downstream tasks. Additionally, being open source promotes both innovation and transparency and leverages AI democratization. For each of the LLMs considered, an API call is invoked with the complete prompt to extract the information intended: Hi = F (E(Ti)), (5) where F represents the LLM API key call. 4.3.1 Mistral 7B Mistral 7B was released by Mistral AI under the Apache 2.0 license, permitting its use without restrictions [61]. It was designed to compete with larger language models in terms of performance with only 7.3 billion parameters. As a result, it surpasses Llama 2 13B on all benchmark, Llama 1 34B on multiple ones, while approaching Code Llama 7B on code-related tasks. 
Its architecture achieves faster inference thanks to Grouped-Query Attention (GQA) and can deal with longer sequences while minimizing the cost because of the Sliding Window Attention (SWA) integration [61]. Because this model is freely available, it is widely used by researchers and developers to build AI powered tools in a cost-effective manner. Moreover, Mistral 7B has demonstrated its good performance across various tasks such as Common Sense Reasoning, Arithmetic Reasoning, and Code Generation [61], allowing it to be a potential candidate to be used to advance AI applications. 8 4.3.2 Gemma 7B Gemma 7B is part of Google’s Gemma family of LLMs. It is characterized with being lightweight and based on the same technology used for the Gemini models [62]. With only 7B parameters, this model introduces state-of-the-art AI capabilities. Its development is as the heart of Google’s responsible AI development strategy. Therefore, the pre-training was claimed to be conducted safely on well curated data, with a robust and transparent evaluation [65]. Gemma 7B demonstrated superior performance in multiple benchmarks such as Massive Multitask Language Understanding (MMLU) and HellaSwag and this shows its problem-solving ability [65]. Its capabilities are versatile, excelling in reasoning, math, and code tasks [62]. Gemma’s technical report [62] compares three open LLMs (LLaMA 2, Mistral, and Gemma) using multiple benchmark datasets, based on that, Figure 4 was deduced. Figure 5: Benchmarks comparison of the LLMs used The benchmark datasets reported measure the ability of LLMs to conduct a variety of tasks. MMLU is used to evaluate the problem-solving skills, while HellaSwag is appropriate for common sense reasoning testing when it comes for generating the end of a story. Social Interaction QA (SIQA), on one hand, is reserved for evaluating the ability of the LLM to understand people’s social interactions and on the other hand OpenBookQA (OBQA) is suitable for measuring the LLM skill in carrying out advanced Question-Answering. This subset of comparison based on [62] shows that both Mistral 7B and Gemma 7B are very competitive across multiple datasets and tasks. 4.4 In-Context Learning In order to align the knowledge embedded in each of the pre-trained LLMs with the context provided in the prompt without explicitly re-training or fine-tuning on specific datasets for specific tasks, in-context learning was applied. This technique allows the adaptation of the LLM to the downstream tasks, in the case of this research, Medical Transcripts Information Extraction and Medical Reports Post-Vaccination Adverse Events Retrieval. As a result, a more efficient yet not computationally expensive LLM-application is enabled. Few-shots learning was conducted by providing examples in the prompt and allowing the model to have a concrete idea about the expected output. In the literature, few-shot learning commonly involves a range of examples varying between two and five. Based on this, multiple experiments were conducted resulting in choosing a three-shots learning for both Mistral 7B and Gemma 7B. The prompt is updated to the following in the case of in-context learning: E(Ti) = “[Task Description][Examples][Query: Ti]”, (6) 9 The experiments included from zero-shot to five-shots learning. However, only one-shot and few-shots learning are reported based on the results performance: • One-shot: Only one basic example is provided in the prompt. • Few-shots: Two harder examples are added to the previous prompt. 
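To tie Sections 4.1–4.4 together, the following is a minimal sketch of the wrapper: chunk a transcript with LangChain's recursive splitter, assemble the task description, the Pydantic schema, and the in-context examples into one prompt, and validate the model's answer against the schema. The field names, the example shot, and the call_llm() stand-in for the hosted Mistral 7B / Gemma 7B endpoint are illustrative assumptions, not the authors' exact implementation.

```python
# Sketch of the GAMedX-style extraction wrapper under the assumptions stated above.
from langchain.text_splitter import RecursiveCharacterTextSplitter
from pydantic import BaseModel, ValidationError

class PatientRecord(BaseModel):          # assumed subset of the six target fields
    name: str
    age: int
    symptoms: list[str]

TASK = "Extract the patient information from the transcript and answer with JSON only."
SHOTS = [                                 # three such examples are used in the few-shot setting
    ("During my visit with Ilana Bellinger, an 85-year-old patient who presented ...",
     '{"name": "Ilana Bellinger", "age": 85, "symptoms": []}'),
]

def build_prompt(chunk: str) -> str:
    demos = "\n\n".join(f"Transcript: {t}\nJSON: {j}" for t, j in SHOTS)
    schema = PatientRecord.model_json_schema()        # Pydantic schema guides the output format
    return f"{TASK}\nSchema: {schema}\n\n{demos}\n\nTranscript: {chunk}\nJSON:"

def extract(transcript: str, call_llm) -> list[PatientRecord]:
    splitter = RecursiveCharacterTextSplitter(chunk_size=3000, chunk_overlap=200)
    records = []
    for chunk in splitter.split_text(transcript):
        raw = call_llm(build_prompt(chunk), temperature=0.1, max_tokens=1000)
        try:
            records.append(PatientRecord.model_validate_json(raw))
        except ValidationError:
            pass                           # malformed outputs are discarded or retried
    return records
```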
4.5 Performance Evaluation Post-processing is conducted to get the extracted information in the desired format. Following, two types of evaluations are carried out. The first evaluation is a Quantitative Analysis using ROUGE-1 F1 and ROUGE-L F1. ROUGE (Recall-Oriented Understudy for Gisting Evaluation) include a set of metrics commonly used to evaluate automatic text summarization by measuring the overlap on n-grams between the actual information extracted and the LLM output. In this paper, the focus is on two of its variants: ROUGE-1 and ROUGE-L where the first focuses on unigrams and the second on the longest common subsequence. ROUGE-1 F1 and ROUGE-L F1 computations rely on the corresponding Precisions and Recalls. Table 2: Metrics and Formulas Metric Formula ROUGE-1 Precision Number of overlapping unigrams Total number of unigrams in the LLM output ROUGE-1 Recall Number of overlapping unigrams Total number of unigrams in the actual information extracted ROUGE-1 F1 ROUGE-L Precision 2×(ROUGE-1 Precision×ROUGE-1 Recall) ROUGE-1 Precision+ROUGE-1 Recall Length of the longest common subsequence Total number of unigrams in the LLM output ROUGE-L Recall Length of the longest common subsequence Total number of unigrams in the actual information extracted ROUGE-L F1 2×(ROUGE-L Precision×ROUGE-L Recall) ROUGE-L Precision+ROUGE-L Recall However, for the VAERS dataset, conducting a Quantitative Analysis was revealed to not be enough. As a result, a Semantic Analysis is carried out. Two embedding models are used to plot the LLM outputs, and the corresponding actual information extracted using t-distributed Stochastic Neighbor Embedding (t-SNE). t-SNE is a dimensionality reduction technique [63] particularly suited for high dimensional data visualization in a low dimensional space. The process is characterized by preserving the local structure of the data allowing efficient clusters reveal and complex patterns identification. Which is not the case of another popular dimension reduction technique, the Principal Component Analysis (PCA). Additionally, t-SNE is also capable of handling non-linear data contrary to PCA. The textual data is vectorized using two models. On one hand, the first model is BAAI General Embedding (BGE). It was introduced by Beijing Academy of Artificial Intelligence (BAAI) and is characterized by high-dimensional embeddings, versatility, and scalability. It is a cutting-edge approach that stood out by capturing the semantic nuances. On the other hand, the second model considered is Instruct Embedding. Since its training was done on diverse datasets and tasks, resulting in task-oriented representations where a contextual understanding is improved. The choice of two different models was mainly to avoid any bias that might be related to a particular model. Another contrast of the LLM output and actual information extracted is done using the computation of Cosine Similarity. This metric is insightful since it allows the understanding of the semantic similarity in addition to visualize the relationships within high-dimensional data representing, in our case the LLM output and the actual information extracted. 5 Results The prompt was carefully defined, and the in-context learning examples were meticulously selected for each of the datasets considered in this research. The experiments were conducted using both open-source LLMs, Mistral 7B and Gemma 7B. 
5 Results
The prompt was carefully defined, and the in-context learning examples were meticulously selected for each of the datasets considered in this research. The experiments were conducted using both open-source LLMs, Mistral 7B and Gemma 7B. For all cases, the temperature was set to 0.1 in order to limit randomness and reduce the LLM's creativity, while the maximum number of tokens was fixed at 1000. Given the nature of the output, ROUGE-1 F1 and ROUGE-L F1 were picked to evaluate the performance of our proposed approach. Table 3 summarizes the resulting performance metrics.
Table 3: Results Summary (ROUGE-1 F1 / ROUGE-L F1)
Mistral One Shot: Competition Dataset 97% / 98%, VAERS Dataset 58% / 57%
Mistral Few Shots: Competition Dataset 98% / 98%, VAERS Dataset 63% / 62%
Gemma One Shot: Competition Dataset 97% / 97%, VAERS Dataset 60% / 59%
Gemma Few Shots: Competition Dataset 98% / 98%, VAERS Dataset 63% / 62%
5.1 Quantitative Analysis
5.1.1 Competition Dataset
ROUGE-1 F1 and ROUGE-L F1 scores are significantly high across all models and experiments, nearly reaching the perfect score of 1. This indicates excellent performance in capturing the essence of the input content. Moreover, there is a consistent pattern of F1 scores across both the few-shot and one-shot experiments for each model, suggesting that the methodology is robust regardless of the amount of initial data provided or the number of in-context learning examples included in the prompt. These results reinforce the conclusion that the methodology employed is highly effective for key information extraction in healthcare text analysis and is generalizable across different LLMs.
5.1.2 VAERS Dataset
In comparison to the Competition dataset, the VAERS dataset shows noticeably lower scores, ranging from 0.57 to 0.63 for both ROUGE-1 and ROUGE-L F1. This suggests that the VAERS dataset poses a tougher challenge for named entity recognition tasks, likely due to the complexity of medical terminology, diverse entity types, or less consistency in how entities are mentioned. Similar to what was observed in the Competition dataset, employing few-shot learning leads to performance improvements for both models, although these improvements are not very significant. This indicates that providing a small number of targeted examples helps the models adapt to the specific characteristics of the VAERS dataset, but it does not completely solve all the challenges.
5.2 Semantic Analysis (Case of VAERS Dataset)
Although ROUGE-1 and ROUGE-L F1 scores are commonly used to assess summarization performance, they have limitations. In particular, they may not fully capture the semantic richness and contextual alignment of model outputs. This becomes apparent when examining the low ROUGE scores achieved on the VAERS dataset, where models are challenged to translate everyday language into technical medical terms. ROUGE scores, as seen in the previous analysis table, rely heavily on the exact match of word sequences between the reference summaries and model-generated ones. However, in the context of the VAERS dataset, where reference answers contain specialized medical terminology, these scores might not accurately reflect the model's ability to comprehend and express the nuanced meanings of symptoms described using different terms in the transcripts. To address this concern, we conducted an additional analysis for the VAERS dataset using two different embedding models: BGE and Instruct Embeddings. This approach allows us to capture the semantic relationships between the reference summaries and the model's responses more effectively. The analysis is carried out by plotting t-SNE plots for each textual dataset, where the ground truth is contrasted with the outputs of each LLM strategy, all in their embedding formats.
t-SNE was used to reduce the dimensionality of the embedding space so that it could be visualized in 2D. Furthermore, in an attempt to reduce the bias that might result from a single embedding technique, these assessments relied on two embedding techniques: BGE and Instruct Embeddings. Additionally, given that these two embedding techniques highlight different underlying structures of the data, with BGE capturing contextual information while Instruct forms clusters based on specific tasks or instructions, nuanced insights can be deduced, and different angles for visualizing similarities and differences within the data points can be analysed.
5.2.1 One-Shot Learning
The t-SNE plots in Figure 6 for both the Mistral and Gemma One-Shot models reveal a moderate level of clustering between the ground truth and our proposed methodology's output under both embedding models, indicating a certain degree of semantic understanding captured by both models. Meanwhile, there is also a small dispersion in the clusters, suggesting instances where the models' interpretations diverge from the medical terminology employed in the ground truth. Nevertheless, the cosine similarity graphs support the assertion that both models provide relevant answers. Similarly, both models demonstrate overlapping clusters for ground truth and model answers with both types of embeddings.
Figure 6: Figures 6.a and 6.b: t-SNE plot of ground truth and model answers for Mistral and Gemma using the one-shot prompt.
Figure 7: Figures 7.a and 7.b: Cosine similarity plot for Gemma and Mistral models using the one-shot prompt.
The cosine similarity graphs in Figure 7 also affirm the models' capability to grasp the semantic meaning of the symptoms.
5.2.2 Few-Shot Learning
In the Mistral Few-Shot model, the clustering of ground truth and model answers appears more compact, particularly for Instruct Embeddings. This suggests that the model may have an improved ability to match medical terminology when provided with additional examples. However, there are still areas of the plot where embeddings do not overlap entirely, indicating some semantic disparities. Nonetheless, the cosine similarity between the two vectors, using both embeddings, shows significant results, affirming the model's capability to produce answers closely aligned with the ground truth. Similarly, the t-SNE visualization for the Gemma Few-Shot model also indicates a trend towards tighter clustering between the ground truth and model answers. This underscores the notion that with an increased number of examples, the model becomes more adept at capturing the nuanced language of the ground truth.
Figure 8: Figures 8.a and 8.b: t-SNE plot of ground truth and model answers for Mistral and Gemma using the few-shot prompt.
Figure 9: Figures 9.a and 9.b: Cosine similarity plot for Gemma and Mistral models using the few-shot prompt.
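A minimal sketch of how this semantic analysis could be reproduced is given below: texts are embedded, each model answer is compared to its reference by cosine similarity, and all embeddings are projected to 2D with t-SNE. The sentence-transformers/scikit-learn stack, the BGE checkpoint name, the t-SNE perplexity, and the example phrases are assumptions for illustration; the paper does not specify its exact implementation, and the Instruct Embeddings pass would be handled analogously with a second encoder.

```python
# Sketch of the semantic analysis: embed ground-truth and model answers,
# compute pairwise cosine similarity, and project the embeddings with t-SNE.
# The checkpoint "BAAI/bge-small-en-v1.5" is an assumed example (the paper
# names the BGE family but not an exact model), and the texts are hypothetical.
import numpy as np
import matplotlib.pyplot as plt
from sentence_transformers import SentenceTransformer
from sklearn.manifold import TSNE
from sklearn.metrics.pairwise import cosine_similarity

ground_truth = ["pyrexia and arthralgia", "injection site erythema", "anaphylaxis"]
model_answers = ["fever and joint pain", "redness where the shot was given",
                 "severe allergic reaction"]

encoder = SentenceTransformer("BAAI/bge-small-en-v1.5")  # assumed checkpoint
gt_emb = encoder.encode(ground_truth)
ans_emb = encoder.encode(model_answers)

# Cosine similarity of each answer with its corresponding reference.
sims = cosine_similarity(ans_emb, gt_emb).diagonal()
print({a: round(float(s), 3) for a, s in zip(model_answers, sims)})

# 2D t-SNE of all embeddings; perplexity must stay below the sample count.
all_emb = np.vstack([gt_emb, ans_emb])
coords = TSNE(n_components=2, perplexity=2, random_state=0).fit_transform(all_emb)
n = len(ground_truth)
plt.scatter(coords[:n, 0], coords[:n, 1], label="ground truth")
plt.scatter(coords[n:, 0], coords[n:, 1], label="model answers")
plt.legend()
plt.savefig("tsne_semantic_analysis.png")
```

With realistic dataset sizes, a larger perplexity and separate plots per embedding model (as in Figures 6 to 9) would be the natural extension of this sketch.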
6 Conclusions and Future Work
To explore leveraging open-source LLMs for information extraction from medical textual data, we have demonstrated the effectiveness of GAMedX, a prompt-engineering-based unified approach in which a pydantic schema is employed. GAMedX is an LLM-agnostic approach; for experimentation purposes, Mistral 7B and Gemma 7B were used. Additionally, to prove the versatility of the approach when it comes to extracting information from unstructured textual healthcare data, two datasets were used for retrieving various patient-related information in a structured and reliable manner, enabling the potential use of GAMedX in various healthcare applications. The experiments are followed by a comprehensive analysis of the approach's performance, showing its robustness across different contexts. The evaluation comprises two parts: the first consists of a quantitative analysis through ROUGE scores, carried out for both datasets, while the second is a semantic analysis conducted for the VAERS dataset using t-SNE with both BGE and Instruct Embeddings. Both analyses highlight the significant potential of LLMs in enabling useful and accurate information extraction from medical textual data. Moving forward, we aim to refine our proposed approach by exploring other open-source LLMs, with a particular focus on expanding our methodology to identify which LLM integration corresponds most conveniently to which type of healthcare textual data. Furthermore, investigating other NLP tasks relevant to healthcare (e.g., sentiment analysis) presents an exciting avenue for future research. GAMedX applications and potential enhancements not only contribute academically to the research on open-source LLM applications, but also constitute a practical tool able to automate multiple auxiliary tasks in healthcare, and therefore save healthcare professionals' time for those who need it the most: the patients.
References
[1] Ruth A Bush, Cynthia Kuelbs, Julie Ryu, Wen Jiang, and George Chiang. Structured data entry in the electronic medical record: perspectives of pediatric specialty physicians and surgeons. Journal of medical systems, 41:1–8, 2017.
[2] Stéphane M Meystre, Guergana K Savova, Karin C Kipper-Schuler, and John F Hurdle. Extracting information from textual documents in the electronic health record: a review of recent research. Yearbook of medical informatics, 17(01):128–144, 2008.
[3] Huiying Liang, Brian Y Tsui, Hao Ni, Carolina CS Valentim, Sally L Baxter, Guangjian Liu, Wenjia Cai, Daniel S Kermany, Xin Sun, Jiancong Chen, et al. Evaluation and accurate diagnoses of pediatric diseases using artificial intelligence. Nature medicine, 25(3):433–438, 2019.
[4] Jie Yang, John W Lian, Yen-Po Harvey Chin, Liqin Wang, Anna Lian, George F Murphy, and Li Zhou. Assessing the prognostic significance of tumor-infiltrating lymphocytes in patients with melanoma using pathologic features identified by natural language processing. JAMA Network Open, 4(9):e2126337–e2126337, 2021.
[5] Prakash M Nadkarni, Lucila Ohno-Machado, and Wendy W Chapman. Natural language processing: an introduction. Journal of the American Medical Informatics Association, 18(5):544–551, 2011.
[6] Jiaping Zheng, Wendy W Chapman, Rebecca S Crowley, and Guergana K Savova. Coreference resolution: A review of general methodologies and applications in the clinical domain. Journal of biomedical informatics, 44(6):1113–1122, 2011.
[7] Brian C Drolet, Jayson S Marwaha, Brad Hyatt, Phillip E Blazar, and Scott D Lifchez. Electronic communication of protected health information: privacy, security, and hipaa compliance. The Journal of hand surgery, 42(6):411–416, 2017.
[8] Sandeep Reddy. Evaluating large language models for use in healthcare: A framework for translational value assessment. Informatics in Medicine Unlocked, page 101304, 2023.
[9] Rui Yang, Ting Fang Tan, Wei Lu, Arun James Thirunavukarasu, Daniel Shu Wei Ting, and Nan Liu. Large language models in health care: Development, applications, and challenges. Health Care Science, 2(4):255–263, 2023.
[10] Fei Xia and Meliha Yetisgen-Yildiz.
Clinical corpus annotation: challenges and strategies. In Proceedings of the third workshop on building and evaluating resources for biomedical text mining (BioTxtM’2012) in conjunction with the international conference on language resources and evaluation (LREC), Istanbul, Turkey, pages 21–27, 2012. [11] Zachariah Zhang, Jingshu Liu, and Narges Razavian. Bert-xml: Large scale automated icd coding using bert pretraining. arXiv preprint arXiv:2006.03685, 2020. [12] Yuqi Si and Kirk Roberts. Deep patient representation of clinical notes via multi-task learning for mortality prediction. AMIA Summits on Translational Science Proceedings, 2019:779, 2019. [13] Chiranjib Chakraborty, Manojit Bhattacharya, and Sang-Soo Lee. Need an ai-enabled, next-generation, advanced chatgpt or large language models (llms) for error-free and accurate medical information. Annals of Biomedical Engineering, 52(2):134–135, 2024. [14] Yann LeCun, Yoshua Bengio, and Geoffrey Hinton. Deep learning. nature, 521(7553):436–444, 2015. [15] Ronan Collobert, Jason Weston, Léon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. Natural language processing (almost) from scratch. Journal of machine learning research, 12:2493–2537, 2011. [16] Guillaume Lample, Miguel Ballesteros, Sandeep Subramanian, Kazuya Kawakami, and Chris Dyer. Neural architectures for named entity recognition. arXiv preprint arXiv:1603.01360, 2016. [17] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. Advances in neural information processing systems, 30, 2017. [18] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805, 2018. [19] Juntao Yu, Bernd Bohnet, and Massimo Poesio. Named entity recognition as dependency parsing. arXiv preprint arXiv:2005.07150, 2020. 14 [20] Ikuya Yamada, Akari Asai, Hiroyuki Shindo, Hideaki Takeda, and Yuji Matsumoto. Luke: Deep contextualized entity representations with entity-aware self-attention. arXiv preprint arXiv:2010.01057, 2020. [21] Shengfei Lyu and Huanhuan Chen. Relation classification with entity type restriction. arXiv preprint arXiv:2105.08393, 2021. [22] Deming Ye, Yankai Lin, Peng Li, and Maosong Sun. Packed levitated marker for entity and relation extraction. arXiv preprint arXiv:2109.06067, 2021. [23] Benfeng Xu, Quan Wang, Yajuan Lyu, Yong Zhu, and Zhendong Mao. Entity structure within and throughout: Modeling mention dependencies for document-level relation extraction. In Proceedings of the AAAI conference on artificial intelligence, volume 35, pages 14149–14157, 2021. [24] Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. Exploring the limits of transfer learning with a unified text-to-text transformer. Journal of machine learning research, 21(140):1–67, 2020. [25] Haoming Jiang, Pengcheng He, Weizhu Chen, Xiaodong Liu, Jianfeng Gao, and Tuo Zhao. Smart: Robust and efficient fine-tuning for pre-trained natural language models through principled regularized optimization. arXiv preprint arXiv:1911.03437, 2019. [26] Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Russ R Salakhutdinov, and Quoc V Le. Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems, 32, 2019. 
[27] Zhuosheng Zhang, Yuwei Wu, Hai Zhao, Zuchao Li, Shuailiang Zhang, Xi Zhou, and Xiang Zhou. Semantics- In Proceedings of the AAAI Conference on Artificial Intelligence, aware bert for language understanding. volume 34, pages 9628–9635, 2020. [28] Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. Albert: A lite bert for self-supervised learning of language representations. arXiv preprint arXiv:1909.11942, 2019. [29] Zhuosheng Zhang, Junjie Yang, and Hai Zhao. Retrospective reader for machine reading comprehension. In Proceedings of the AAAI conference on artificial intelligence, volume 35, pages 14506–14514, 2021. [30] Siddhant Garg, Thuy Vu, and Alessandro Moschitti. Tanda: Transfer and adapt pre-trained transformer models for answer sentence selection. In Proceedings of the AAAI conference on artificial intelligence, volume 34, pages 7780–7788, 2020. [31] Arun James Thirunavukarasu, Darren Shu Jeng Ting, Kabilan Elangovan, Laura Gutierrez, Ting Fang Tan, and Daniel Shu Wei Ting. Large language models in medicine. Nature medicine, 29(8):1930–1940, 2023. [32] Mengyuan Zhang, Jin Wang, and Xuejie Zhang. Using a pre-trained language model for medical named entity extraction in chinese clinic text. In 2020 IEEE 10th International Conference on Electronics Information and Emergency Communication (ICEIEC), pages 312–317. IEEE, 2020. [33] Stephen Wu, Kirk Roberts, Surabhi Datta, Jingcheng Du, Zongcheng Ji, Yuqi Si, Sarvesh Soni, Qiong Wang, Qiang Wei, Yang Xiang, et al. Deep learning in clinical natural language processing: a methodical review. Journal of the American Medical Informatics Association, 27(3):457–470, 2020. [34] Rishi Bommasani, Drew A Hudson, Ehsan Adeli, Russ Altman, Simran Arora, Sydney von Arx, Michael S Bernstein, Jeannette Bohg, Antoine Bosselut, Emma Brunskill, et al. On the opportunities and risks of foundation models. arXiv preprint arXiv:2108.07258, 2021. [35] Luciano Floridi and Massimo Chiriatti. Gpt-3: Its nature, scope, limits, and consequences. Minds and Machines, 30:681–694, 2020. [36] Yu Gu, Tinn Robert, Cheng Hao, Lucas Michael, Usuyama Naoto, Liu Xiaodong, Naumann Tristan, Gao Jianfeng, and Poon Hoifung. Domain-specific language model pretraining for biomedical natural language processing. ACM Transactions on Computing for Healthcare (HEALTH), 3(1):1–23, 2021. [37] Hoo-Chang Shin, Yang Zhang, Evelina Bakhturina, Raul Puri, Mostofa Patwary, Mohammad Shoeybi, and Raghav Mani. BioMegatron: Larger biomedical domain language model. In Bonnie Webber, Trevor Cohn, Yulan He, and Yang Liu, editors, Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 4700–4706, Online, November 2020. Association for Computational Linguistics. [38] Emily Alsentzer, John Murphy, William Boag, Wei-Hung Weng, Di Jindi, Tristan Naumann, and Matthew McDermott. Publicly available clinical BERT embeddings. In Anna Rumshisky, Kirk Roberts, Steven Bethard, and Tristan Naumann, editors, Proceedings of the 2nd Clinical Natural Language Processing Workshop, pages 72–78, Minneapolis, Minnesota, USA, June 2019. Association for Computational Linguistics. 15 [39] Xi Yang, Aokun Chen, Nima PourNejatian, Hoo Chang Shin, Kaleb E Smith, Christopher Parisien, Colin Compas, Cheryl Martin, Anthony B Costa, Mona G Flores, et al. A large language model for electronic health records. NPJ digital medicine, 5(1):194, 2022. [40] Anmol Arora and Ananya Arora. The promise of large language models in health care. 
The Lancet, 401(10377):641, 2023. [41] Ben Mann, N Ryder, M Subbiah, J Kaplan, P Dhariwal, A Neelakantan, P Shyam, G Sastry, A Askell, S Agarwal, et al. Language models are few-shot learners. arXiv preprint arXiv:2005.14165, 2020. [42] Pengfei Liu, Weizhe Yuan, Jinlan Fu, Zhengbao Jiang, Hiroaki Hayashi, and Graham Neubig. Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. ACM Computing Surveys, 55(9):1–35, 2023. [43] Victor Sanh, Albert Webson, Colin Raffel, Stephen H Bach, Lintang Sutawika, Zaid Alyafeai, Antoine Chaffin, Arnaud Stiegler, Teven Le Scao, Arun Raja, et al. Multitask prompted training enables zero-shot task generalization. arXiv preprint arXiv:2110.08207, 2021. [44] Swaroop Mishra, Daniel Khashabi, Chitta Baral, Yejin Choi, and Hannaneh Hajishirzi. Reframing instructional prompts to gptk’s language. arXiv preprint arXiv:2109.07830, 2021. [45] Xiaohan Yang, Eduardo Peynetti, Vasco Meerman, and Chris Tanner. What gpt knows about who is who. arXiv preprint arXiv:2205.07407, 2022. [46] Andy T Liu, Wei Xiao, Henghui Zhu, Dejiao Zhang, Shang-Wen Li, and Andrew Arnold. Qaner: Prompting question answering models for few-shot named entity recognition. arXiv preprint arXiv:2203.01543, 2022. [47] Xi Yang, Jiang Bian, Ruogu Fang, Ragnhildur I Bjarnadottir, William R Hogan, and Yonghui Wu. Identifying relations of medications with adverse drug events using recurrent convolutional neural networks and gradient boosting. Journal of the American Medical Informatics Association, 27(1):65–72, 2020. [48] Xi Yang, Jiang Bian, Yan Gong, William R Hogan, and Yonghui Wu. Madex: a system for detecting medications, adverse drug events, and their relations from clinical notes. Drug safety, 42:123–133, 2019. [49] Xi Yang, Tianchen Lyu, Qian Li, Chih-Yin Lee, Jiang Bian, William R Hogan, and Yonghui Wu. A study of deep learning methods for de-identification of clinical notes in cross-institute settings. BMC medical informatics and decision making, 19:1–9, 2019. [50] Jeffrey P Ferraro, Ye Ye, Per H Gesteland, Peter J Haug, Fuchiang Tsui, Gregory F Cooper, Rudy Van Bree, Thomas Ginter, Andrew J Nowalk, and Michael Wagner. The effects of natural language processing on cross-institutional portability of influenza case detection for disease surveillance. Applied clinical informatics, 8(02):560–580, 2017. [51] Cheng Peng, Xi Yang, Zehao Yu, Jiang Bian, William R Hogan, and Yonghui Wu. Clinical concept and relation extraction using prompt-based machine reading comprehension. Journal of the American Medical Informatics Association, 30(9):1486–1493, 2023. [52] Aokun Chen, Zehao Yu, Xi Yang, Yi Guo, Jiang Bian, and Yonghui Wu. Contextualized medication information extraction using transformer-based deep learning architectures. Journal of biomedical informatics, 142:104370, 2023. [53] Sam Henry, Kevin Buchan, Michele Filannino, Amber Stubbs, and Ozlem Uzuner. 2018 n2c2 shared task on adverse drug events and medication extraction in electronic health records. Journal of the American Medical Informatics Association, 27(1):3–12, 2020. [54] Kevin Lybarger, Meliha Yetisgen, and Özlem Uzuner. The 2022 n2c2/uw shared task on extracting social determinants of health. Journal of the American Medical Informatics Association, 30(8):1367–1378, 2023. [55] Xiaoya Li, Jingrong Feng, Yuxian Meng, Qinghong Han, Fei Wu, and Jiwei Li. A unified mrc framework for named entity recognition. arXiv preprint arXiv:1910.11476, 2019. 
[56] Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. Training language models to follow instructions with human feedback. Advances in neural information processing systems, 35:27730–27744, 2022. [57] Monica Agrawal, Stefan Hegselmann, Hunter Lang, Yoon Kim, and David Sontag. Large language models are few-shot clinical information extractors. arXiv preprint arXiv:2205.12689, 2022. [58] Awais Ahmed, Xiaoyang Zeng, Rui Xi, Mengshu Hou, and Syed Attique Shah. Med-prompt: A novel prompt engineering framework for medicine prediction on free-text clinical notes. Journal of King Saud University- Computer and Information Sciences, 36(2):101933, 2024. [59] US Centers for Disease Control and Prevention. VAERS - About Us. https://vaers.hhs.gov/about.html. Accessed May 15, 2024. 16 [60] Yiming Li, Jianfu Li, Jianping He, and Cui Tao. Ae-gpt: Using large language models to extract adverse events from surveillance reports-a use case with influenza vaccine adverse events. Plos one, 19(3):e0300919, 2024. [61] Albert Q Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lucile Saulnier, et al. Mistral 7b. arXiv preprint arXiv:2310.06825, 2023. [62] Gemma Team, Thomas Mesnard, Cassidy Hardin, Robert Dadashi, Surya Bhupatiraju, Shreya Pathak, Laurent Sifre, Morgane Rivière, Mihir Sanjay Kale, Juliette Love, et al. Gemma: Open models based on gemini research and technology. arXiv preprint arXiv:2403.08295, 2024. [63] Laurens Van der Maaten and Geoffrey Hinton. Visualizing data using t-sne. Journal of machine learning research, 9(11), 2008. 17
synthetic_cpt
3
SCAR_Efficient_Instruction-Tuning_for_Large_Language_Models_via_Style_Consistency-Aware_Response_Ranking.pdf
Entanglement Oscillations from Many-Body Quantum Scars Nicholas O’Dea∗ and Adithya Sriram∗ Department of Physics, Stanford University, Stanford, CA 94305, USA Quantum scars are nonthermal eigenstates that prevent thermalization of initial states with weight on the scars. When the scar states are equally spaced in energy, superpositions of scars show oscillating local observables that can be detected in experiments. However, we note that scarred models in the literature show fundamentally different scar entanglement dynamics: some show entanglement oscillations while others are completely frozen. We explain this freezing through a no-go theorem which we apply to more than a dozen scarred models in the literature. We also discuss a method for evading the no-go theorem by deforming the scarred models with frozen scar entanglement dynamics. Introduction — The advent of quantum simulators and new experimental platforms has motivated the study of different types of many-body quantum phenomena. These platforms have also uncovered surprises, such as models with non-thermal behavior [1] like periodic re- vivals of observables in special initial states otherwise ex- pected to rapidly thermalize. The state-dependent non- thermal dynamics in this Rydberg atom experiment were identified [2] with the existence of approximate “scar” states with outlier local expectation values in the simu- lated PXP model, prompting a flurry of theoretical in- vestigation into this thermalization-evading phenomenon (see [3–5] for literature reviews) and experimental realiza- tions of PXP and other scarred models [6–12]. Because of their outlier local expectation values, scar states vio- late the eigenstate thermalization hypothesis [13–19] in a weak sense and hence allow initial states with weight on the scar states to evade thermalization. When the scar states are equally split in energy (i.e. form a “scar tower”), superpositions of scars will show periodic motion that can be probed through oscillations in observables. The special initial state in the PXP model also shows non-monotonic entanglement dynamics [2, 20], probed experimentally in another Rydberg atom experi- ment [21]. Though analysis of the PXP model is compli- cated by the fact that its scar states are inexact, defor- mations of the PXP model that improve the lifetimes of the approximate scars show persistent, periodic oscilla- tions in entanglement [22–24]. Investigations into many models of exact scars were motivated because of the PXP model. However, we argue that oscillations in entangle- ment entropy starting from quenches to superpositions of scars are in fact forbidden in many models of exact scars – any superposition of scars in these models has all measures of entanglement frozen in time. This frozen entanglement is not a defect of these models relative to models that show scar entanglement dynamics, but in- stead demonstrates differences in the phenomenology of scarred models. Namely, we prove that scar entanglement dynamics cannot occur whenever the scars’ energies can be repli- cated by an operator H1 that is a sum of single-site op- erators. This occurs quite commonly; see the first 14 entries of Table I. We also show how deformations of these models by unitary rotations can induce entangle- ment oscillations. Our results help elucidate one aspect of the model and experiments which unleashed this flurry of research into quantum many body scars. 
No-go theorem for entanglement dynamics — Consider a set of states {|ϕn⟩} and a Hamiltonian H that has these states as eigenstates; H |ϕn⟩ = En |ϕn⟩. Suppose there exists a sum of single-site operators H1 = ∑j hj that also reproduces these energies, H1 |ϕn⟩ = En |ϕn⟩. Then all superpositions of these states ∑n cn |ϕn⟩ will have time-independent entanglement under the dynamics generated by H.1 To see this, note that a state |ψ⟩ = ∑n cn |ϕn⟩ will evolve under H as
|ψ(t)⟩ = e−iHt |ψ⟩ = ∑n cn e−iEnt |ϕn⟩ = e−iH1t |ψ⟩ .   (1)
However, by the definition of H1, e−iH1t = ⊗j e−ihj t; that is, e−iH1t decomposes into a product of single-site unitaries and cannot change the entanglement of any state. Since |ψ(t)⟩ = e−iH1t |ψ⟩, the entanglement of |ψ(t)⟩ is time-independent. In particular, all entanglement monotones of |ψ(t)⟩ cannot change with time, including the von Neumann and Renyi entanglement entropies across any cut. It also suffices for H1 to reproduce the energies up to a constant energy shift Ẽ (i.e. H1 |ϕn⟩ = (En + Ẽ) |ϕn⟩), as a global phase cannot change entanglement. We will freely omit such constant shifts when exhibiting H1 for various models in the following. While the theorem above is perfectly general, we will specialize to scar states |ϕn⟩ and scarred Hamiltonians H in the following.
∗ Nicholas O'Dea and Adithya Sriram contributed equally to this work.
1 The value of the entanglement may depend on the cn, but it will be independent of time under the dynamics generated by H.
In Table I, we collect more than a dozen families of scarred models that lack entanglement
The state ecR† |ϕ0,0⟩ was proposed in [26] as a possible initial state. However, it is a superposition of the states |ϕn,0⟩, and the choice of H1 = ∆ (cid:80)L i reproduces the energies of these states. This forbids entanglement dy- namics under H starting in any superposition of these states, including ecR† |ϕ0,0⟩. We can find choices of H1 satisfying the conditions of the no-go theorem more gen- erally: “cuts” along the pyramid consisting of |ϕn,m⟩ with m fixed have H1 = ∆ (cid:80)L i=1 σz i , while cuts consisting of |ϕn,m⟩ with n fixed have H1 = (∆ − 2J) (cid:80)L i=1 σz i=1 σz i . However, consider a different cut of scar states with fixed n + m (with 0 < n + m < L); such states are within a fixed (cid:80) i is the only candidate operator for H1, as it is the only sum of single- site operators that has such scar states as eigenstates. i symmetry sector. (cid:80) i σz i σz 2 Model Spin-1 XY [25, 27], q- deformed XY [28] Bond-bimagnon [27, 29] Spin-1/2 Casimir [22], DMI model [30] maximal including Entanglement Oscillations No, by H1 ∝ (cid:80) i Sz i . No, by H1 ∝ (cid:80) No, by H1 ∝ (cid:80) i Sz i . i Sz i . Non-maximal [28] Casimir No, by H1 ∝ (cid:80) i Sz i . Onsager algebra clock model [31] No, by H1 ∝ (cid:80) i Sz i . Kagome XXZ [32] Rainbow tower [35] Perturbed Potts [34] Topological tower [36] Motif Hamiltonians [33] No, by H1 ∝ (cid:80) No, by H1 ∝ (cid:80) No, by H1 ∝ (cid:80) No, by H1 ∝ (cid:80) No, by H1 ∝ (cid:80) No, by H1 ∝ (cid:80) Multi-π-magnon [37] Infinite EPR tower [38] No, by H1 ∝ (cid:80) No, by H1 = (cid:80) Maximal SU (3) Casimir [28, 39] priate constants a and b. No, by H1 ∝ (cid:80) j,σ nj,σ. i Sz i . i Sz i . i Sz i . i Sz i . i Sz i . i Sz i . i(ni ⊗ I + I ⊗ ni). i aSz i + b(Sz Hubbard Generalized Models [30, 40–43] i )2 for appro- Domain Wall Conserv- ing Model [25, 26] Spin-1 AKLT [25, 34, 44, 45] Yes, but requires using both ladder op- erators (see Examples), else no by H1 ∝ (cid:80) i Sz i . Yes, but requires using both ladder op- erators (see Examples and Appendix A), else no by H1 ∝ (cid:80) i Sz i . Some entangle- ment cuts are frozen. Correlated-hopping Bose-Hubbard (H3 [46]) Yes, but some entanglement cuts are frozen. in TABLE I. Non-exhaustive table of models with exact, equally spaced scars taken from the literature. The domain-wall con- serving model, AKLT, and maximal SU (3) Casimir models have “pyramids” of scars generated by two scar creation op- erators. We list the H1 that by the no-go theorem will prevent entanglement oscillations in any superpositions of the scars, whenever such an operator exists. i σz However, for fixed n + m, (cid:80) i is also fixed and cannot reproduce the eigenvalues of H. Superpositions of such states within the same (cid:80)L i=1 σz i sector thus evade the no- go theorem and indeed show entanglement oscillations in Fig. 1. Generalization — There is a useful generalization of the no-go theorem with both weaker assumptions and weaker consequences. For ease, we will first restrict to the special case of spin chains. Suppose there is an Hk which is a sum of commuting k-body terms (with the k sites contiguous) such that H and Hk share the same eigenval- ues on the scar subspace. Suppose the entanglement mea- sure of interest is the entanglement entropy between con- tiguous regions A and B. Then time-evolving a superpo- sition of scars under H will have a bound on entropy oscil- lations of the form maxt S(t)−mint S(t) ≤ 2(k−1) log(2). 
This is because only those terms of Hk that straddle the cut between the regions will be able to contribute to changes in the entanglement entropy. For the domain wall conserving model, H2 = (cid:80) i+1 satisfies the conditions above, and hence the amplitude of entan- i + Jσz i ∆σz i σz glement oscillations in time is at most 2 log(2) starting in any superposition of the |ϕn,m⟩. In higher dimensions d > 1, more terms in Hk can straddle the boundary between A and B, giving area-law oscillations that are bounded by O((k − 1)d) times the size of the boundary between A and B. Scars from Cartan subalgebra of onsite symmetries — The literature on constructing scar towers from symme- tries often explicitly uses such operators H1 to generate H’s with equal energy splittings between scars, directly making such H’s unable to show scar entanglement dy- namics. For example, one of the constructions in [28] uses the Cartan subalgebra of an explicitly broken non-abelian symmetry to split the scars in energy; such terms yield the spectrum-generating part HSG of the scarred Hamil- tonian H (H|ϕn⟩ = HSG|ϕn⟩). If the symmetry is on- site, these operators HSG are sums of single-site terms, meaning that HSG satisfies the conditions on H1 in the no-go theorem. Scarred Hamiltonians provided by this construction using on-site symmetries cannot generate entanglement dynamics on superpositions of scars, rul- ing out entanglement dynamics in a vast set of models. Even when an on-site symmetry is q-deformed [28] so that some of the symmetry generators are no longer on- site, the analogue of the Cartan subalgebra used to split the scars in energy is still spanned by sums of single-site operators. Further connections to the literature — Our no-go theorem works for wide classes of models by inspection, but we note that a proof of a conjecture in [47] will fur- ther strengthen our no-go theorem for several families of scars. Consider a model H with scar states |ϕn⟩ ∝ (S+)n| ↓↓ ... ↓⟩ such that H |ϕn⟩ = (E0 + n∆) |ϕn⟩ for some E0 and ∆ (i.e. H splits the scars linearly in energy). For such a model, H1 = ∆ (cid:80) j satisfies the conditions of the no- go theorem and prevents scar entanglement dynamics. j Sz Remarkably, conjecture III.2 of [47] holds that all local Hamiltonians with these scars as eigenstates indeed split the scars linearly in energy. If the conjecture holds, this means our no-go theorem immediately forbids scar en- tanglement dynamics in all local Hamiltonians with the scars |ϕn⟩ ∝ (S+)n| ↓↓ ... ↓⟩. This conjecture is moti- vated by results on commutant algebras and is expected to hold in other models with scars from symmetries (see table I in [47]). We also want to highlight the work [48] on “broken unitaries,” which presents a related picture of replacing the time-evolution operator e−iHt with a simpler one on [48] aims to replace e−iHt with a the scar subspace. product of unitaries built of terms or sums of terms of H (e.g. the AKLT time evolution reduces to sequen- tial time evolution generated by the odd and even terms e−iHAKLTt = e−iHoddte−iHevent on the scar states). In the current work, we make more radical replacements where in the case of AKLT, the action of e−iHt on the e.g. scar states is reproduced by e−iSz tott in the main text and 3 e−2iHevent in Appendix A; Sz tot is not a term of HAKLT. Oscillations in deformed models — Our second result concerns a method to evade our no-go theorem: entangle- ment dynamics may be induced via unitary deformations. 
Consider a Hamiltonian H which hosts a set of scars |ϕn⟩; suppose that there exists an H1 which by the no-go the- orem forbids entanglement dynamics. Deform H by a unitary U ; H → ˜H = U HU † and |ϕn⟩ → | ˜ϕn⟩ = U |ϕn⟩. However, H1 → U H1U † will generically no longer be a sum of single-site operators, and there will generically not be any sum of single site operators that reproduces the spectrum of ˜H on | ˜ϕn⟩. As a result, the conditions of the no-go theorem will no longer be satisfied. There is another way to think about this deformation. n cn | ˜ϕn⟩ under ˜H and the n cn |ϕn⟩ under H are related The time evolution of | ˜ψ⟩ = (cid:80) time evolution of |ψ⟩ = (cid:80) by | ˜ψ(t)⟩ ≡ e−i ˜Ht | ˜ψ⟩ = U e−iHt |ψ⟩ (4) Under the assumptions on H, the state e−iHt |ψ⟩ will have static entanglement. On the other hand, U e−iHt |ψ⟩ can have time-varying entanglement: U will generically change the entanglement of e−iHt |ψ⟩ at different times t by different amounts. Furthermore, suppose H’s scars form an equally spaced tower and so e−iHt |ψ⟩ has some period T . By Eq. 4, | ˜ψ(t)⟩ will also be invariant under t → t + T . As a con- sequence, the entanglement dynamics of | ˜ψ(t)⟩ are also invariant under t → t + T and hence have a period T /n for some integer n. Since the entanglement dynamics are nontrivial and periodic, they will show oscillations and non-monotonicity in time. As an example, we consider the scarred model intro- duced in [30]: HDMI = L (cid:88) (cid:20) J1⃗σi · ⃗σi+1 + J2⃗σi · ⃗σi+2 + hσz i i (cid:21) + Dˆz · (⃗σi × ⃗σi+1) (5) i σz with scars corresponding to the k = 0 magnon states |ϕn⟩ ∝ (S+)n |↓ ... ↓⟩. We call this model “DMI” because of its Dzyaloshinkskii-Moriya interaction ˆz · (⃗σi × ⃗σi+1). The no-go theorem trivially applies with the choice of H1 = h (cid:80) i , and so superpositions of the scars can’t show entanglement dynamics under HDMI. The state |+⟩⊗L = ⊗L (|↑⟩j + |↓⟩j) is a super- position of all the scars and evolves to ⊗L (|↑⟩j + e−2iht |↓⟩j) (up to a global phase) under the dynamics. The entanglement is static in time (the state is unentan- gled at all times), and the state returns to itself after a period of T = π h . choose (cid:80)L deformation . For e−iht We (cid:16) 1√ 2 1√ 2 the j=1 i=1 (cid:17) i π 4 i σx exp entanglement when it acts on ⊗L i=1 σx i+1 unitary U = ̸= 1, −1, U induces (|↑⟩j + e−2iht |↓⟩j). 1√ 2 j=1 4 i σz FIG. 1. Half-chain entanglement dynamics from scar pyramid states in the DWC model in Eq. 3. The half-chain entangle- ment SL/2(t) is plotted for initial states that are uniform su- perpositions of states within three slices of the scar pyramid. Only the vertical slice which has constant (cid:80) σz i but is split in energy by (cid:80) σz i+1 exhibits oscillations. The inset shows the scar pyramid (based on that in [25]) and the respective states involved in the superpositions that we time evolve. The y-axis of the inset is the number NDW of domain walls in the state, and the x axis is n+m (see discussion surrounding Eq. 3). We use L = 8 and Hamiltonian parameters λ = 1, ∆ = 0.1 and J = 1. The results are analogous if random superpositions of scars rather than uniform superpositions are used for the initial state. We chose U to preserve the easily preparable scar super- position |+⟩⊗L, so this state will remain a superposition of the scars in the deformed model and hence a good initial state. The half-chain entanglement entropy is straightforward to calculate and oscillates between 0 and max(SL/2(t)) = 2 log(2) with a period of π 2h . 
The deformations U come with a cost. For U ’s that can be represented as quantum circuits with local gates, the range of terms in ˜H grows linearly in the depth of U , and so only short-depth U will give physical Hamil- tonians ˜H. If U is given by e−iϵH ′ for some local Hamil- tonian H ′ built of non-commuting terms, then even for small ϵ, the terms in ˜H will be quasilocal with exponen- tially decaying tails. U will also deform the scar states; a finite-depth U will only change their entanglement by an O(1) amount, so if the scars are distinguished by log- law entanglement [26, 27, 45, 49, 50], then the scars will still have logarithmic entanglement after deforming by U . Note that if U is not chosen to preserve the initial state, then the new initial state will in general have a different baseline entanglement entropy. In Fig. 2(b), we show entanglement oscillations of an L = 8 DMI model after a transformation by depth D = 1, 4, 8 local random unitary circuits. As the cir- cuit depth increases, the minimum and average values of the entanglement increase, but the extent maxt SL/2(t)− mint SL/2(t) of the oscillations is not monotonically grow- ing in circuit depth and in fact becomes quite small at FIG. 2. Entanglement dynamics in deformed DMI models ˜H = U HDMIU † for several choices of U . The initial state is L, corresponding to a superposition of ˜H’s scars. For U |+⟩⊗ HDMI in Eq. 5, we use L = 8, J1 = 1, J2 = −0.6, D = 0.4 and i π h = 0.1. (a) For U = exp , the half-chain 4 entanglement shows clear oscillations. The entanglement of the unrotated system H = HDMI is frozen. (b) U is a ran- dom circuit of depth D. High depth circuits increase the total entanglement towards that of a Haar state, but need not in- (c) crease the magnitude of the entanglement oscillations. Oscillations after rotation by unitary U in eq. 6. i σx i σx (cid:80)L i+1 (cid:16) (cid:17) large depth. This may be intuitively understood by the fact that as circuit depth increases, U becomes closer to a Haar-random unitary and U e−iHt|ψ⟩ will look like a Haar-random state. The entanglement entropy of Haar- random states shows only very weak fluctuations about the average value, so despite always having large entan- glement, the periodic fluctuations in time will be small. Indeed, when the circuit depth is equal to L, we observe that the entanglement periodically fluctuates about the value of the entanglement entropy for a Haar random state.2 It is reasonable to expect that only fine-tuned unitaries U can lead to large-magnitude oscillations in the entanglement entropy. A particularly striking example of this may be seen through the following unitary: U = (cid:18) |+⟩⊗L ⟨+|⊗L 0 (cid:19) . 0 UHaar (6) This unitary preserves the initial condition but is Haar random on the remainder of the Hilbert space. Oscilla- tions due to this unitary are shown in Fig. 2(c). Over the course of the time evolution, the state nearly reaches the entanglement of a Haar random state. When the revival 2 Strictly speaking, for a local Haar circuit of depth equal to L, the state is not yet Haar distributed. We expect that fluctuations in entanglement will in fact keep shrinking with circuit depth ecL depth [51]. However, the fluctuations are already up to a ∼ strikingly small. 0246810t1.01.21.41.61.82.0SL/2(t)012345678n+m02468NDWR†P†050100150200250t02log2SL/2(t)(a)H˜H050100t012SL/2(t)(b)D148050100t012(c) 5 to the initial state occurs, the entanglement drops all the way down to 0. 
As a result, these dynamics essentially os- cillate between a product state and a Haar random state. We share this example to show the power of choosing a unitary U that preserves the simple product initial state of H; note that this example is not physically realizable, as the resulting deformed Hamiltonian ˜H will not be a sum of local operators. Summary and outlook — We have presented a framework for understanding entanglement dynamics in scarred models, giving both a no-go theorem and a method to evade the no-go theorem. The no-go theorem is easy to apply; if there’s a simple Hamiltonian (sum of single-site operators) H1 that reproduces the scars’ energies in some scarred Hamiltonian H, then H can’t generate scar entanglement dynamics. This H1 is some- times (e.g. DMI) but not always (e.g. AKLT) a term in the Hamiltonian H. The no-go theorem applies to a large number of models with exact scars, freezing the entanglement of superpositions of scars. On the other hand, our method for evading the no-go theorem means that scarred Hamiltonians with scar en- tanglement oscillations are at least as numerous as those without; deforming a model without scar entanglement dynamics will generically give one with such dynamics. However, there is a corresponding cost in that the range of the terms in the deformed model will generically be larger than that of the undeformed model, and the terms themselves may possibly be less natural. This points to a two-step construction: deform a model without scar en- tanglement dynamics and then truncate the range of the terms. Such a model will generically lose perfect revivals but may maintain non-monotonicity of the entanglement (the entanglement may show oscillations on top of a lin- ear growth). This would be in line with the phenomenol- ogy seen in the PXP model in the original Rydberg atom experiment; indeed, PXP has been understood [22] as a truncation of an unknown but approximately constructed quasilocal Hamiltonian with exact scars. Acknowledgements — Nicholas O’Dea thanks Wen Wei Ho for a discussion of entanglement oscillations that formed the seed for this work, and he thanks Sanjay Moudgalya for pointing out interesting connections to related literature. We thank Vedika Khemani for encour- aging us to publish this work. Nicholas O’Dea and Adithya Sriram acknowledge fund- ing through Vedika Khemani’s Packard Fellowship in Sci- ence and Engineering and her award from the US De- partment of Energy, Office of Science, Basic Energy Sci- ences, under Early Career Award Nos. DE-SC0021111. Adithya Sriram also acknowledges support from the Na- tional Science Foundation Graduate Research Fellowship. [1] H. Bernien, S. Schwartz, A. Keesling, H. Levine, A. Om- ran, H. Pichler, S. Choi, A. S. Zibrov, M. Endres, M. Greiner, et al., Probing many-body dynamics on a 51-atom quantum simulator, Nature 551, 579 (2017). [2] C. J. Turner, A. A. Michailidis, D. A. Abanin, M. Serbyn, and Z. Papi´c, Weak ergodicity breaking from quantum many-body scars, Nature Physics 14, 745–749 (2018). [3] M. Serbyn, D. A. Abanin, and Z. Papi´c, Quantum many- body scars and weak breaking of ergodicity, Nature Physics 17, 675 (2021). [4] S. Moudgalya, B. A. Bernevig, and N. Regnault, Quan- tum many-body scars and hilbert space fragmentation: a review of exact results, Reports on Progress in Physics 85, 086501 (2022). [5] A. Chandran, T. Iadecola, V. Khemani, and R. Moess- ner, Quantum many-body scars: A quasiparticle perspec- tive, Annual Review of Condensed Matter Physics 14, 443 (2023). 
[6] W. Kao, K.-Y. Li, K.-Y. Lin, S. Gopalakrishnan, and B. L. Lev, Topological pumping of a 1d dipolar gas into strongly correlated prethermal states, Science 371, 296–300 (2021). [7] D. Bluvstein, A. Omran, H. Levine, A. Keesling, G. Se- meghini, S. Ebadi, T. T. Wang, A. A. Michailidis, N. Maskara, W. W. Ho, et al., Controlling quantum many-body dynamics in driven rydberg atom arrays, Sci- ence 371, 1355 (2021). [8] P. N. Jepsen, Y. K. E. Lee, H. Lin, I. Dimitrova, Y. Mar- galit, W. W. Ho, and W. Ketterle, Long-lived phan- tom helix states in heisenberg quantum magnets, Nature Physics 18, 899–904 (2022). [9] I.-C. Chen, B. Burdick, Y. Yao, P. P. Orth, and T. Iadecola, Error-mitigated simulation of quantum many-body scars on quantum computers with pulse-level control, Phys. Rev. Res. 4, 043027 (2022). [10] G.-X. Su, H. Sun, A. Hudomal, J.-Y. Desaules, Z.-Y. Zhou, B. Yang, J. C. Halimeh, Z.-S. Yuan, Z. Papi´c, and J.-W. Pan, Observation of many-body scarring in a bose- hubbard quantum simulator, Phys. Rev. Res. 5, 023010 (2023). [11] P. Zhang, H. Dong, Y. Gao, L. Zhao, J. Hao, J.-Y. De- saules, Q. Guo, J. Chen, J. Deng, B. Liu, et al., Many- body hilbert space scarring on a superconducting proces- sor, Nature Physics 19, 120 (2023). [12] K. Yang, Y. Zhang, K.-Y. Li, K.-Y. Lin, S. Gopalakr- ishnan, M. Rigol, and B. L. Lev, Phantom energy in the nonlinear response of a quantum many-body scar state, Science 385, 1063 (2024). [13] R. V. Jensen and R. Shankar, Statistical behavior in de- terministic quantum systems with few degrees of free- dom, Phys. Rev. Lett. 54, 1879 (1985). [14] J. M. Deutsch, Quantum statistical mechanics in a closed system, Phys. Rev. A 43, 2046 (1991). [15] M. Srednicki, Chaos and quantum thermalization, Phys. Rev. E 50, 888 (1994). [16] M. Rigol, V. Dunjko, and M. Olshanii, Thermalization and its mechanism for generic isolated quantum systems, Nature 452, 854 (2008). [17] L. D’Alessio, Y. Kafri, A. Polkovnikov, and M. Rigol, From quantum chaos and eigenstate thermalization to statistical mechanics and thermodynamics, Advances in Physics 65, 239 (2016). [18] R. Nandkishore and D. A. Huse, Many-body localization and thermalization in quantum statistical mechanics, Annual Review of Condensed Matter Physics 6, 15–38 (2015). [19] D. A. Abanin, E. Altman, I. Bloch, and M. Ser- byn, Colloquium : Many-body localization, thermaliza- tion, and entanglement, Reviews of Modern Physics 91, 10.1103/revmodphys.91.021001 (2019). [20] W. W. Ho, S. Choi, H. Pichler, and M. D. Lukin, Pe- riodic orbits, entanglement, and quantum many-body scars in constrained models: Matrix product state ap- proach, Phys. Rev. Lett. 122, 040603 (2019). [21] D. Bluvstein, H. Levine, G. Semeghini, T. T. Wang, S. Ebadi, M. Kalinowski, A. Keesling, N. Maskara, H. Pichler, M. Greiner, V. Vuleti´c, and M. D. Lukin, A quantum processor based on coherent transport of en- tangled atom arrays, Nature 604, 451–456 (2022). [22] S. Choi, C. J. Turner, H. Pichler, W. W. Ho, A. A. Michailidis, Z. Papi´c, M. Serbyn, M. D. Lukin, and D. A. Abanin, Emergent su(2) dynamics and perfect quantum many-body scars, Phys. Rev. Lett. 122, 220603 (2019). [23] A. A. Michailidis, C. J. Turner, Z. Papi´c, D. A. Abanin, and M. Serbyn, Slow quantum thermalization and many- body revivals from mixed phase space, Phys. Rev. X 10, 011055 (2020). [24] M. Ljubotina, B. Roos, D. A. Abanin, and M. Serbyn, Optimal steering of matrix product states and quantum many-body scars, PRX Quantum 3, 030343 (2022). [25] D. K. Mark, C.-J. Lin, and O. I. 
Motrunich, Unified struc- ture for exact towers of scar states in the affleck-kennedy- lieb-tasaki and other models, Phys. Rev. B 101, 195131 (2020). [26] T. Iadecola and M. Schecter, Quantum many-body scar states with emergent kinetic constraints and finite- entanglement revivals, Phys. Rev. B 101, 024306 (2020). [27] M. Schecter and T. Iadecola, Weak ergodicity breaking and quantum many-body scars in spin-1 xy magnets, Phys. Rev. Lett. 123, 147201 (2019). [28] N. O’Dea, F. Burnell, A. Chandran, and V. Khemani, From tunnels to towers: Quantum scars from lie algebras and q-deformed lie algebras, Phys. Rev. Res. 2, 043305 (2020). [29] S. Chattopadhyay, H. Pichler, M. D. Lukin, and W. W. Ho, Quantum many-body scars from virtual entangled pairs, Phys. Rev. B 101, 174308 (2020). [30] D. K. Mark and O. I. Motrunich, η-pairing states as true scars in an extended hubbard model, Phys. Rev. B 102, 075132 (2020). [31] N. Shibata, N. Yoshioka, and H. Katsura, Onsager’s scars in disordered spin chains, Phys. Rev. Lett. 124, 180604 (2020). [32] K. Lee, R. Melendrez, A. Pal, and H. J. Changlani, Exact three-colored quantum scars from geometric frustration, Phys. Rev. B 101, 241111 (2020). [33] E. Chertkov and B. K. Clark, Motif magnetism and quan- tum many-body scars, Phys. Rev. B 104, 104410 (2021). [34] S. Moudgalya, E. O’Brien, B. A. Bernevig, P. Fendley, and N. Regnault, Large classes of quantum scarred hamil- tonians from matrix product states, Phys. Rev. B 102, 085120 (2020). [35] C. M. Langlett, Z.-C. Yang, J. Wildeboer, A. V. Gor- shkov, T. Iadecola, and S. Xu, Rainbow scars: From area to volume law, Phys. Rev. B 105, L060301 (2022). [36] J. Ren, C. Liang, and C. Fang, Deformed sym- 6 metry structures and quantum many-body scar sub- spaces, Physical Review Research 4, 10.1103/physrevre- search.4.013155 (2022). [37] L.-H. Tang, N. O’Dea, and A. Chandran, Multi- magnon quantum many-body scars from tensor oper- ators, Physical Review Research 4, 10.1103/physrevre- search.4.043006 (2022). [38] J. Wildeboer, C. M. Langlett, Z.-C. Yang, A. V. Gor- shkov, T. Iadecola, and S. Xu, Quantum many-body scars from einstein-podolsky-rosen states in bilayer systems, Physical Review B 106, 10.1103/physrevb.106.205142 (2022). [39] J. Ren, C. Liang, and C. Fang, Quasisymmetry groups and many-body scar dynamics, Phys. Rev. Lett. 126, 120604 (2021). [40] O. Vafek, N. Regnault, and B. A. Bernevig, Entangle- ment of exact excited eigenstates of the Hubbard model in arbitrary dimension, SciPost Phys. 3, 043 (2017). [41] S. Moudgalya, N. Regnault, and B. A. Bernevig, η- pairing in hubbard models: From spectrum generating algebras to quantum many-body scars, Phys. Rev. B 102, 085140 (2020). [42] C. N. Yang, η pairing and off-diagonal long-range order in a hubbard model, Phys. Rev. Lett. 63, 2144 (1989). [43] K. Pakrouski, P. N. Pallegar, F. K. Popov, and I. R. Klebanov, Many-body scars as a group invariant sector of hilbert space, Phys. Rev. Lett. 125, 230602 (2020). [44] S. Moudgalya, S. Rachel, B. A. Bernevig, and N. Reg- nault, Exact excited states of nonintegrable models, Phys. Rev. B 98, 235155 (2018). [45] S. Moudgalya, N. Regnault, and B. A. Bernevig, Entan- glement of exact excited states of affleck-kennedy-lieb- tasaki models: Exact results, many-body scars, and vio- lation of the strong eigenstate thermalization hypothesis, Phys. Rev. B 98, 235156 (2018). [46] A. Hudomal, I. Vasi´c, N. Regnault, and Z. 
Papić, Quantum scars of bosons with correlated hopping, Communications Physics 3, 10.1038/s42005-020-0364-9 (2020). [47] S. Moudgalya and O. I. Motrunich, Exhaustive characterization of quantum many-body scars using commutant algebras (2023), arXiv:2209.03377 [cond-mat.str-el]. [48] P.-G. Rozon and K. Agarwal, Broken unitary picture of dynamics in quantum many-body scars, Phys. Rev. Res. 6, 023041 (2024). [49] C. J. Turner, A. A. Michailidis, D. A. Abanin, M. Serbyn, and Z. Papić, Quantum scarred eigenstates in a Rydberg atom chain: Entanglement, breakdown of thermalization, and stability to perturbations, Phys. Rev. B 98, 155134 (2018). [50] A. M. Alhambra, A. Anshu, and H. Wilming, Revivals imply quantum many-body scars, Phys. Rev. B 101, 205107 (2020). [51] J. Cotler, N. Hunter-Jones, and D. Ranard, Fluctuations of subsystem entropies at late times, Phys. Rev. A 105, 022416 (2022).

Appendix A: More details about AKLT

In the main text, we showed that superpositions of the AKLT scar states (Q†)n |g⟩ do not show entanglement dynamics under the AKLT Hamiltonian (or any other Hamiltonian that splits these scars equally in energy). In this appendix, we discuss a larger set of scar states in the AKLT model to illustrate both the breakdown of the no-go theorem and simple extensions of the no-go theorem that still partially restrict the entanglement dynamics. By virtue of the SO(3) symmetry of the AKLT model, all rotations of the scars (Q†)n |g⟩ are also eigenstates. The scar states (Q†)n |g⟩ have total spin s = 2n and Σ_i S^z_i = 2n, so the states |ϕm,n⟩ = (S−)m(Q†)n |g⟩ for 0 ≤ m ≤ 4n are all eigenstates of the AKLT model with energy 2n. This broader class of states can evade the no-go theorem, as H1 ∝ Σ_i S^z_i will fail to distinguish states with fixed Σ_i S^z_i (i.e. states with the same value of 2n − m), as shown in Fig. 3. However, there are still useful constraints that we can make in the spirit of the no-go theorem. First, decompose HAKLT into its odd and even bonds: Heven = Σ_{i=1}^{L/2} P^(2)_{2i,2i+1}, Hodd = Σ_{i=1}^{L/2} P^(2)_{2i−1,2i}. (A1)

FIG. 3. Half-chain entanglement entropy in the L = 10 spin-1 AKLT model starting from a random superposition of scar states (S−)m(Q†)n |g⟩. Note that the magnitude of the oscillations is orders of magnitude smaller than the mean; the no-go theorem is satisfied by H1 = Σ_i S^z_i.

From appendix C of [28], the alternating AKLT model annihilates the scars (Q†)n |g⟩; (Heven − H0)(Q†)n |g⟩ = 0. Accordingly, 2Heven(Q†)n |g⟩ = 2H0(Q†)n |g⟩ = HAKLT(Q†)n |g⟩. Since Heven and Hodd share the same SO(3) symmetry as HAKLT, this equality holds for all the states related by SO(3) symmetry: 2Heven(S−)m(Q†)n |g⟩ = 2H0(S−)m(Q†)n |g⟩ = HAKLT(S−)m(Q†)n |g⟩. In particular, we see that 2Heven (likewise 2Hodd) satisfies the conditions of the generalization of the no-go theorem in the main text: it is a sum of commuting two-body operators, and so the entanglement entropy of a contiguous region will have oscillations bounded by at most max_t S(t) − min_t S(t) ≤ 2 log(2). We can say more. The entanglement between any even-length contiguous region of sites and the rest of the system will be unchanging. For example, suppose the region A of interest was a contiguous region of size r between sites 2j and 2(j + r) − 1. Note that U(t) = e^{−i HAKLT t} reduces to e^{−2i Heven t} on the space spanned by (S−)m(Q†)n |g⟩, and e^{−2i Heven t} factorizes between A and its complement as no operators in Heven straddle the boundaries between them. Thus, any state that is a superposition of the scars (S−)m(Q†)n |g⟩ will have static entanglement between any such region A and its complement. Using Hodd, a similar statement holds for regions A between 2j and 2(j + r), so all even-length contiguous regions have frozen entanglement with their complements. Finally, we note that even when considering an odd-length region of sites to avoid the freezing described above, the amplitude of the entanglement oscillations appears to be strikingly small compared to the mean entanglement (Fig. 3). The AKLT model thus furnishes an example where, despite evading no-go theorems and having entanglement oscillations, the magnitude of the oscillations is quite small.
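For reference, the factorization step above can be written compactly as a sketch in LaTeX using the appendix's notation (this restatement is not taken verbatim from the paper; the grouping assumes, as stated in the text, that no bond of Heven straddles the boundary of A, and uses that the even-bond terms act on disjoint pairs of sites and hence commute):

\begin{align*}
e^{-2 i H_{\mathrm{even}} t}
  &= \prod_{i=1}^{L/2} e^{-2 i P^{(2)}_{2i,2i+1} t}
   = \underbrace{\Big(\prod_{(2i,2i+1)\subseteq A} e^{-2 i P^{(2)}_{2i,2i+1} t}\Big)}_{U_A(t)}
     \underbrace{\Big(\prod_{(2i,2i+1)\subseteq \bar{A}} e^{-2 i P^{(2)}_{2i,2i+1} t}\Big)}_{U_{\bar{A}}(t)}, \\
S_A\big(U(t)\,|\psi\rangle\big)
  &= S_A\big(U_A(t)\,U_{\bar{A}}(t)\,|\psi\rangle\big)
   = S_A\big(|\psi\rangle\big)
  \quad \text{for any } |\psi\rangle \in \mathrm{span}\big\{(S^-)^m (Q^\dagger)^n |g\rangle\big\},
\end{align*}

since U_A(t) and U_Ā(t) are supported entirely inside A and its complement, respectively, and local unitaries on either side of the cut leave the entanglement entropy across the cut unchanged.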
synthetic_cpt
6
On_Domain-Specific_Post-Training_for_Multimodal_Large_Language_Models.pdf
arXiv:2009.03042v1 [math.AC] 7 Sep 2020 Maximal non valuative domains Rahul Kumar1 & Atul Gaur2 Department of Mathematics University of Delhi, Delhi, India. E-Mail: [email protected]; [email protected] (1 The author was supported by the SRF grant from UGC India, Sr. No. 2061440976. 2 The author was supported by the MATRICS grant from DST-SERB India, No. MTR/2018/000707.) Abstract The notion of maximal non valuative domain is introduced and characterized. An integral domain R is called a maximal non valuative domain if R is not a valuative domain but every proper overring of R is a valuative domain. Maximal non valuative domains have at most four maximal ideals. Various properties of maximal non valuative domains are discussed. Conditions are given under which pseudo-valuation domains and maximal non pseudo-valuation domains are maximal non valuative domains. Mathematics Subject Classification: Primary 13G05, 13B02, Secondary 13B22, 13B30, 13A15. Keywords: Maximal non valuative domain, valuative domain, valuation domain, pseudo-valuation domain, Bézout domain. 1 Introduction Our work is motivated by [3]. An integral domain R with the quotient field qf(R) is said to be a valuative domain, see [3], if for each x ∈ qf(R), either R ⊆ R[x] or R ⊆ R[x−1] has no intermediate ring. In this paper, we introduce the concept of maximal non valuative domains. Let R ⊂ T be an extension of integral domains. Then we say that R is a maximal non valuative subring of T if R is not a valuative domain but each subring of T containing R properly is a valuative domain. Moreover, if T = qf(R), then R is said to be a maximal non valuative domain. In this paper, we discuss various properties of maximal non valuative domains and characterize the same in terms of Bézout domains. All rings considered below are integral domains. By an overring of R, we mean a subring of the quotient field of R containing R. A ring with a unique maximal ideal is called a local ring. The symbol ⊆ is used for inclusion and ⊂ is used for proper inclusion. Throughout this paper, qf(R) denotes the quotient field of an integral domain R, and R′ denotes the integral closure of R in qf(R). For a ring R, dim(R) denotes the Krull dimension of R. In this paper, we show that if R is a maximal non valuative subring of T, then T is an overring of R, see Theorem 2.1. If R is a maximal non valuative domain, then R′ is a Prüfer domain, see Corollary 2.3. Moreover, if R is not integrally closed, then R has at most three maximal ideals, the set of non-maximal prime ideals of R is linearly ordered by inclusion, and there is at most one maximal ideal of R that does not contain all non-maximal prime ideals of R, see Theorem 2.7; and if R is integrally closed, then R has at least two and at most four maximal ideals, there are exactly two non-maximal prime ideals that are not comparable in case R has exactly two maximal ideals, otherwise the set of non-maximal prime ideals of R is linearly ordered by inclusion, and there are at most two maximal ideals of R that do not contain all non-maximal prime ideals of R, see Proposition 2.8. We characterize integrally closed maximal non valuative domains in terms of Bézout domains, see Theorem 2.9. Finally, we characterize local non integrally closed maximal non valuative domains R. We also discuss the cases where either R is a pseudo-valuation domain or a maximal non pseudo-valuation subring of R′.
For any ring R, Spec(R) denotes the set of all prime ideals of R; Max(R) denotes the set of all maximal ideals of R. As usual, |X| denotes the cardinality of a set X. 2 Results A ring extension R ⊆ T is said to be residually algebraic if for any prime ideal Q of T, T/Q is algebraic over R/(Q ∩ R), see [4, 1]. Moreover, if R ⊆ S is residually algebraic, for any subring S of T containing R, then (R, T) is said to be a residually algebraic pair, see [1]. In our first theorem, we list some properties of an extension of integral domains in which every intermediate ring is a valuative domain. Theorem 2.1. Let R ⊂ T be a ring extension of integral domains. If each subring of T properly containing R is a valuative domain, then the following hold: (i) R ⊂ T is an algebraic extension. (ii) (R, T) is a residually algebraic pair. (iii) If R is not a field, then T is an overring of R. Proof. (i) If possible, suppose that R ⊂ T is not an algebraic extension. Then there exists an element t ∈ T \ R which is transcendental over R. Take S = R[t3, t5]. Then S is a subring of T containing R properly. Therefore, S is a valuative domain. Clearly, t ∈ qf(S). Thus, either S ⊆ S[t] or S ⊆ S[t−1] has no intermediate ring. Now, note that S ⊂ R[t + t2, t3, t5] ⊂ S[t] = R[t] and S ⊂ R[t−1 + t2, t3, t5] ⊂ S[t−1] = R[t−1, t3, t5], which is a contradiction. Hence, R ⊂ T is an algebraic extension. (ii) Let S be a subring of T properly containing R and Q be a prime ideal of S. If possible, suppose that R/(Q ∩ R) ⊂ S/Q is not an algebraic extension. Then there exists an element t̄ ∈ S/Q that is not algebraic over R/(Q ∩ R). Take S′ = (R/(Q ∩ R))[t̄]. Then S′ = S′′/(Q ∩ S′′) for some subring S′′ of S containing R properly. Therefore, S′′ is a valuative domain. Thus, by [3, Theorem 2.2(i)], S′′ has at most three maximal ideals and hence S′ has at most three maximal ideals, which is a contradiction. (iii) Let K = qf(R). If possible, suppose that T ⊈ K. Choose t ∈ T \ K. Then t is algebraic over R, by part (i). Therefore, α = tr is integral over R for some non zero r ∈ R. Clearly, α ∉ K. Let n = [K(α) : K]. Then {1, α, α2, . . . , αn−1} is a basis of K(α) over K. Let β be any non zero, non unit of R. Take S = R + αβ3R + α2β3R + · · · + αn−1β3R. Then S is a subring of T properly containing R. Therefore, S is a valuative domain. Note that qf(S) = K(α). Now, if αβ−1 ∈ S, then αβ−1 = r0 + αβ3r1 + α2β3r2 + · · · + αn−1β3rn−1 for some r0, r1, . . . , rn−1 ∈ R. It follows that β−1 = β3r1 ∈ R, a contradiction. Also, if α−1β ∈ S, then α−1β = r0 + αβ3r1 + α2β3r2 + · · · + αn−1β3rn−1 for some r0, r1, . . . , rn−1 ∈ R. It follows that β = r0α + α2β3r1 + α3β3r2 + · · · + αnβ3rn−1. Let αn = Σ_{i=0}^{n−1} αixi for some x0, x1, . . . , xn−1 ∈ R. Consequently, we have β = r0α + α2β3r1 + α3β3r2 + · · · + αn−1β3rn−2 + β3rn−1(Σ_{i=0}^{n−1} αixi). It follows that β = β3rn−1x0, that is, 1 = β2rn−1x0, a contradiction. Thus, αβ−1, α−1β ∈ K(α) \ S. Since S is a valuative domain, either S ⊂ S[αβ−1] or S ⊂ S[α−1β] has no intermediate ring. Assume that S ⊂ S[αβ−1] has no intermediate ring. It follows that either S = S[α] or S[α] = S[αβ−1]. Now, if S = S[α], then α ∈ S. Therefore, α = r0 + αβ3r1 + α2β3r2 + · · · + αn−1β3rn−1 for some r0, r1, . . . , rn−1 ∈ R. Consequently, we have 1 = β3r1, which is a contradiction. Thus, we have S[α] = S[αβ−1], that is, αβ−1 ∈ S[α].
This gives αβ−1 = (Σ_{i=0}^{n−1} αiβ3r0i) + (Σ_{i=0}^{n−1} αiβ3r1i)α + · · · + (Σ_{i=0}^{n−1} αiβ3r(n−1)i)αn−1. Now, using αn = Σ_{i=0}^{n−1} αixi, we get β−1 = β3r ∈ R for some non zero r ∈ R, a contradiction. Thus, we may assume that S ⊂ S[α−1β] has no intermediate ring. It follows that either S = S[α−1β2] or S[α−1β2] = S[α−1β]. If the former holds, then α−1β2 ∈ S. Therefore, α−1β2 = r0 + αβ3r1 + α2β3r2 + · · · + αn−1β3rn−1 for some r0, r1, . . . , rn−1 ∈ R. This gives β2 = (r0 + αβ3r1 + α2β3r2 + · · · + αn−1β3rn−1)α. Consequently, we have β2 = β3r for some non zero r ∈ R, that is, 1 = βr, a contradiction. Finally, we assume that S[α−1β2] = S[α−1β], that is, α−1β ∈ S[α−1β2]. This gives α−1β = (Σ_{i=0}^{n−1} αiβ3r0i) + (Σ_{i=0}^{n−1} αiβ3r1i)α−1β2 + · · · + (Σ_{i=0}^{n−1} αiβ3rmi)α−mβ2m, that is, αm−1β = (Σ_{i=0}^{n−1} αiβ3r0i)αm + (Σ_{i=0}^{n−1} αiβ3r1i)αm−1β2 + · · · + (Σ_{i=0}^{n−1} αiβ3rmi)β2m. On comparing the coefficient of αm−1, we conclude that β is a multiple of β3, which is again a contradiction. Therefore, T ⊆ K. We now define the maximal non valuative subrings of an integral domain formally. Definition 2.2. Let R be a proper subring of an integral domain T. Then R is said to be a maximal non valuative subring of T if R is not a valuative domain but every subring of T properly containing R is a valuative domain. A domain R is said to be a maximal non valuative domain if R is a maximal non valuative subring of its quotient field qf(R). The next corollary shows that the integral closure of a maximal non valuative domain is a Prüfer domain. Corollary 2.3. Let R be a maximal non valuative domain. Then R′ is a Prüfer domain. Moreover, if R′ is local, then R′ is a valuation domain. Proof. Note that (R, qf(R)) is a residually algebraic pair, by Theorem 2.1. Now, the result follows from [1, Corollary 2.8]. An integral domain R is called an i-domain if for each overring T of R, the canonical contraction map Spec(T) → Spec(R) is injective, see [10]. The next corollary is a direct consequence of Corollary 2.3 and [10, Corollary 2.15]. Corollary 2.4. Let R be a maximal non valuative domain. If R′ is local, then R is an i-domain. Now, in the next proposition we discuss the impact of localization on maximal non valuative subrings of a domain. Proposition 2.5. Let R be a maximal non valuative subring of an integral domain T and N be a multiplicatively closed subset of R. Then either N−1R is a valuative domain or N−1R is a maximal non valuative subring of N−1T. Proof. If N−1R is valuative, then we are done. Now, assume that N−1R is not valuative. Let S′ be a subring of N−1T containing N−1R properly. Then S′ = N−1S for some subring S of T properly containing R. Now, by [3, Proposition 2.4], S′ is valuative as S is valuative by assumption. Thus, N−1R is a maximal non valuative subring of N−1T. As a consequence of Proposition 2.5, the requirement of the ring to be local in Corollary 2.4 can be dropped. Corollary 2.6. Let R be a maximal non valuative domain. If R is integrally closed, then R is an i-domain. Proof. As i-domain is a local property, it is enough to show that RP is an i-domain for all prime ideals P of R. Let P be a prime ideal of R. Then by Proposition 2.5, either RP is a valuative domain or RP is a maximal non valuative domain. If RP is a valuative domain, then RP is an i-domain, by [3, Corollary 3.3]. Otherwise RP is a maximal non valuative domain.
As RP is a local integrally closed domain, the result now follows from Corollary 2.4. In the next theorem, we list some properties of maximal non valuative domains which are not integrally closed. Theorem 2.7. Let R be a maximal non valuative domain. If R is not integrally closed, then the following statements hold: (i) |Max(R)| ≤ 3. (ii) The set of non-maximal prime ideals of R is linearly ordered by inclusion. (iii) There is at most one maximal ideal of R that does not contain all non- maximal prime ideals of R. 6 Rahul Kumar and Atul Gaur Proof. Note that R is a maximal non valuative subring of R′. In particular, R′ is a valuative domain. Consequently, the statements (i), (ii), and (iii) hold for R′, by [3, Theorem 2.2]. As R ⊂ R′ is an integral extension of domains, we conclude that the statements (i), (ii), and (iii) hold for R as well. We now present some properties of maximal non valuative domains which are integrally closed. Proposition 2.8. Let R be a maximal non valuative domain. If R is inte- grally closed, then the following statements hold: (i) 2 ≤ |Max(R)| ≤ 4. (ii) If |Max(R)| = 2, then there are exactly two non-maximal prime ideals of R that are not comparable. Otherwise, the set of non-maximal prime ideals of R is linearly ordered by inclusion. (iii) There are at most two maximal ideals of R that do not contain all non- maximal prime ideals of R. Proof. (i) Note that if R has more than four maximal ideals, then R has a proper overring with four maximal ideals, which is a contradiction by [3, Theorem 2.2(i)]. It follows that |Max(R)| ≤ 4. Now, if R is local, then R is a valuation domain, by Corollary 2.3, a contradiction. (ii) First, assume that M and N are the only maximal ideals of R. Then RM and RN are valuative as R is maximal non valuative. It follows that RM and RN are valuation domains, by [3, Proposition 3.1]. Thus, R is a B´ezout domain with exactly two maximal ideals. Since R is not a valuative domain, M and N do not contains each non-maximal prime ideal of R, by [3, Theorem 3.7]. Consequently, there are at least two non- maximal prime ideals of R that are not comparable. Since RM and RN are valuation domains, there are exactly two non-maximal prime ideals of R that are not comparable. Now, let M1, M2 and M3 be any three maximal ideals of R. Then RMi is a valuative domain for i = 1, 2, 3. It follows that RMi is a valuation domain for i = 1, 2, 3, by [3, Proposition 3.1]. If possible, suppose that P and Q are any two incomparable non-maximal prime ideals of R. Without loss of generality, we may assume that P ⊂ M1 but P 6⊆ M2 and Q ⊂ M2 but Q 6⊆ M1. Since R is maximal non valuative, RM1 ∩ RM2 is a valuative domain, which is a contradiction, by [3, Theorem 3.7]. Thus, the set of non-maximal prime ideals of R is linearly ordered by inclusion. (iii) If |Max(R)| = 2, then nothing to prove. Now, assume that R has exactly three maximal ideals, say M1, M2, and M3. Let Pi be a non-maximal 7 prime ideal of R such that Pi 6⊂ Mi for i = 1, 2, 3. Also by part (ii), we may assume that P1 ⊆ P2 ⊆ P3. Then P3 is a non-maximal prime ideal of R that is not contained in any maximal ideal of R, a contradiction. Finally, assume that R has exactly four maximal ideals, say M1, M2, M3, and M4. Let Pi be a non-maximal prime ideal of R such that Pi 6⊂ Mi for i = 1, 2, 3. Again, by part (ii), we may assume that P1 ⊆ P2 ⊆ P3. Then P3 ⊂ M4. Note that S = RM1 ∩ RM2 ∩ RM4 is a valuative domain. 
Thus, by [3, Theorem 2.2], at most one maximal ideal of S does not contain each non-maximal prime ideal of S, a contradiction. Therefore, there are at most two maximal ideals of R that do not contain all non-maximal prime ideals of R. In the next theorem, we present a necessary and sufficient condition for an integrally closed domain to be a maximal non valuative domain. Theorem 2.9. Let R be an integrally closed domain. Then the following statements are equivalent: (1) R is a maximal non valuative domain. (2) Exactly one of the following holds: (i) R is a B´ezout domain with exactly two maximal ideals and exactly two non-maximal prime ideals of R are not comparable. (ii) R is a B´ezout domain with exactly three maximal ideals and exactly two maximal ideals of R do not contain exactly one non-maximal prime ideal of R whereas the third maximal ideal of R contains all non- maximal prime ideal of R. (iii) R is a B´ezout domain with exactly four maximal ideals and at most one maximal ideal of R does not contain all non-maximal prime ideals of R. Proof. (1) ⇒ (2) Note that R is a Pr¨ufer domain, by Corollary 2.3. Also, by Proposition 2.8(i), we have 2 ≤ |Max(R)| ≤ 4. It follows that R is a B´ezout domain. Now, assume that |Max(R)| = 2, then (i) follows from Proposition 2.8(ii). Also, if |Max(R)| = 3, then exactly two maximal ideals of R do not contain at least one non-maximal prime ideal of R, by Proposition 2.8(iii) and [3, Theorem 3.7]. Now, assume that M, N and U are maximal ideals of R. If possible, assume that P, Q are non-maximal prime ideals of R such that only U contains both of them. Since RU is a valuation domain, either P ⊂ Q or Q ⊂ P . Without loss of generality, assume that P ⊂ Q. Then by [3, Theorem 3.7], RM ∩ RN ∩ RQ is not a valuative domain, a contradiction as R is maximal non valuative. Thus, M and N do not contain exactly one non-maximal prime ideal of R. Finally, assume that |Max(R)| = 4. Then by 8 Rahul Kumar and Atul Gaur Proposition 2.8(iii), there are at most two maximal ideals of R that do not contain all non-maximal prime ideals of R. If possible, assume that there are exactly two maximal ideals of R that do not contain all non-maximal prime ideals of R. Let M1, M2, M3 and M4 be maximal ideals of R, where M1 and M2 do not contain all non-maximal prime ideals of R. Let P1, P2 be non-maximal prime ideals of R such that P1 6⊂ M1 and P2 6⊂ M2. Moreover, by Proposition 2.8(ii), we may assume that P1 ⊆ P2. Then P2 6⊂ M1. Note that P2 ⊂ M3. Since R is a maximal non valuative domain, S = RM1 ∩RM2 ∩RM3 is a valuative domain, which is a contradiction, by [3, Theorem 3.7]. (2) ⇒ (1) Suppose (i) holds. Then R is not valuative, by [3, Theorem 2.2]. Now, let M, N be maximal ideals of R and P, Q be incomparable non-maximal prime ideals of R. Since R is a Pr¨ufer domain, RM , RN are valuation domains. Thus, we may assume that P ⊂ M, P 6⊂ N and Q ⊂ N, Q 6⊂ M. Now, by [3, Corollary 3.9], it is enough to show that RM ∩ RU and RV ∩ RN are valuative domains for some arbitrary non-maximal prime ideals U, V of R. First, we claim that RM ∩ RU is a valuative domain. If U ⊂ M, then we are done. Therefore, we may assume that U 6⊂ M and so U ⊂ N. It follows that P 6⊆ U. Now if U ⊂ P , then again we are done. Otherwise U = Q, by assumption. Then by [3, Theorem 3.7], RM ∩ RQ is a valuative domain. Similarly, we can prove that RV ∩ RN is a valuative domain. Now, assume that (ii) holds. Then R is not valuative, by [3, Theorem 3.7]. Let S be a proper overring of R. 
Then S = ∩P ∈X RP for some subset X of Spec(R). Let M, N, and U be maximal ideals of R where M and N do not contain exactly one non-maximal prime ideal of R, whereas U contains all non-maximal prime ideals of R. Now, the following cases arise: Case (i): Let M, N, U ∈ X. Then S = R, a contradiction. Case (ii): Let U ∈ X. Then S = RU is a valuation domain (and so is valuative) as R is a B´ezout domain. Case (iii): Let M ∈ X but N, U /∈ X. Then RM ∩ RU is a subring of S contains R properly. Note that RM ∩ RU is valuative, by [3, Theorem 3.7]. It follows that S is valuative, by [3, Corollary 3.9]. Similarly, if N ∈ X but M, U /∈ X, then we are done. Case (iv): Let M, N ∈ X but U /∈ X. Then S = RM ∩ RN ∩ (∩P ∈X1RP ), where X1 = X \ {M, N}. Without loss of generality, we may assume that X1 does not contains any prime ideal that is either contained in M or N. By [3, Corollary 3.9], we may assume that X1 is non empty. We claim that X1 is a singleton set. If possible, suppose that P, Q ∈ X1. By assumption, we may assume that P ⊂ M, P 6⊂ N and Q ⊂ N, Q 6⊂ M. Since P, Q ⊂ U and RU is a valuation domain, P and Q are comparable, which contradicts our assumption. Thus, we may assume that X1 = {P }. Now, by [3, Theorem 3.7], S = RM ∩ RN ∩ RP is a valuative domain. Case (v): Let M 6∈ X, N 6∈ X, and U 6∈ X. Then S contains a valuation 9 domain RU properly and hence S is a valuation domain. Finally, assume that (iii) holds. Then again by [3, Theorem 3.7], R is not valuative. Let S be a proper overring of R. Then S = ∩P ∈XRP for some subset X of Spec(R). Now, the following cases arise: Case (i): Let all four maximal ideals be in X. Then R = S, which is a contradiction. Case (ii): Let there be more than one maximal ideals in X. Then S is a B´ezout domain with at most three maximal ideals and at most one maximal ideal of S does not contain all non-maximal prime ideals of S. Thus, S is valuative, by [3, Theorem 3.7]. Case (iii): Let there be at most one maximal ideal in X. Subcase (i): Let all the maximal ideals of R contain all non-maximal prime ideals of R. Then S′ = RM is a subring of S contains R properly, where M is a maximal ideal of R. Thus, S′ is a valuation domain and hence S is a valuation domain. Subcase (ii): Let N be the maximal ideal that does not contain all non- maximal prime ideals of R. If N /∈ X, then again S is a valuation domain. Now, assume that N ∈ X. Then take S′′ = RM ∩ RN , where M is a maximal ideal of R other than N. Note that S′′ is a subring of S contains R properly. Now, S′′ is valuative, by [3, Theorem 3.7]. Thus, by [3, Corollary 3.9], S is a valuative domain. Hence, R is a maximal non valuative domain. Corollary 2.10. Let R be a finite dimensional B´ezout domain. Assume that dim(R) = n. Then the following statements hold: (1) If R is a maximal non valuative domain, then |Spec(R)| = n + |Max(R)|, where 2 ≤ |Max(R)| ≤ 4. (2) R is a maximal non valuative domain if and only if exactly one of the following holds: (i) |Spec(R)| = n + 2, R has exactly two maximal ideals with height n, and exactly two non-maximal prime ideals of R are not comparable. (ii) |Spec(R)| = n+3, R has exactly three maximal ideals with exactly two maximal ideals of R do not contain exactly one non-maximal prime ideal of R, and exactly one maximal ideal have height n. (iii) |Spec(R)| = n + 4, R has exactly four maximal ideals, and at least three of these maximal ideals have height n. Proof. By Proposition 2.8 and Theorem 2.9, the result holds. 
A ring extension R ⊂ T is said to be a minimal extension or R is said to be a maximal subring of R, if there is no ring between R and T , see [5, 9]. Moreover, R ⊂ T is a pointwise minimal extension, if R ⊂ R[x] is minimal 10 Rahul Kumar and Atul Gaur for each x ∈ T \ R, see [3]. Our next theorem gives a necessary and sufficient condition for a domain R to be a maximal non valuative subring of R′ provided R′ is local. Theorem 2.11. Let R be a ring such that R′ is local. Then R is a maximal non valuative subring of R′ if and only if the following statements hold: (i) R′ is a valuation domain. (ii) R ⊂ R′ is not a pointwise minimal extension. (iii) for each ring S such that R ⊂ S ⊆ R′, we have either S = R′ or S ⊂ R′ is a pointwise minimal extension. Proof. Let R be a maximal non valuative subring of R′. Then R′ is a valuative domain and so is a valuation domain, by [3, Proposition 3.1]. Also, R ⊂ R′ is not a pointwise minimal extension, by [3, Proposition 5.1]. Let S be a ring such that R ⊂ S ⊆ R′. Then S is a valuative domain. Thus, by [3, Proposition 5.1], either S = R′ or S ⊂ R′ is a pointwise minimal extension. Conversely, assume that (i), (ii), and (iii) hold. Then R is not a valuative domain by (ii) and [3, Proposition 5.1]. Also, every proper overring of R contained in R′ is a valuative domain, by (i), (iii), and [3, Proposition 5.1]. Recall from [7] that a domain R is said to be a pseudo-valuation domain if for any prime ideal P of R and any x, y in the quotient field of R such that xy ∈ P , then either x ∈ P or y ∈ P . Every PVD R admits a canonically associated valuation overring V , in which every prime ideal of R is also a prime ideal of V and both R and V are local domains with the same maximal ideal, see [7, Theorem 2.7]. In the next theorem, we give several equivalent conditions for a pseudo-valuation domain to be a maximal non valuative domain. Theorem 2.12. Let (R, M) be a pseudo-valuation domain, with canonically associated valuation overring (V, M). Assume that K := R/M, L := R′/M, and F := V /M. Then the following conditions are equivalent: (i) R is a maximal non valuative subring of V ; (ii) R is a maximal non valuative domain; (iii) R′ = V , R ⊂ R′ is not a pointwise minimal extension, and for each ring S such that R ⊂ S ⊆ R′, we have either S = R′ or S ⊂ R′ is a pointwise minimal extension; (iv) L = F , K ⊂ L is not a pointwise minimal extension, and for each ring S such that K ⊂ S ⊆ L, we have either S = L or S ⊂ L is a pointwise minimal extension. 11 Proof. Note that (ii) ⇒ (i) holds trivially by definition. For (i) ⇒ (ii), assume that (i) holds. Let T be a proper overring of R. Then either T ⊆ V or V ⊂ T , by [2, Lemma 1.3]. If T ⊆ V , then T is valuative. If V ⊂ T , then T is a valuation domain. Thus, (ii) holds. Note that (i) ⇔ (iii) follows from Theorem 2.11. Finally it is easy to see (iii) ⇔ (iv). After pseudo-valuation domain the natural question is when maximal non pseudo-valuation ring is a maximal non valuative domain. This we address in the next theorem. Recall from [8] that a maximal non pseudo-valuation subring of a domain S is a proper subring R of S that is not a pseudo-valuation ring but each subring of S properly containing R is pseudo-valuation. Moreover, a ring T is called the unique minimal overring of R in S if R ⊂ T and any intermediate ring A between R and S not equal to R contains T , see [8]. Theorem 2.13. Let R be a maximal non pseudo-valuation subring of R′. Then the following are equivalent: (i) R is a maximal non valuative subring of R′. 
(ii) R is not valuative, R′ is a valuation domain, and R has a unique minimal overring S in R′ that is valuative. Proof. Let R be a maximal non valuative subring of R′. Then R′ is valuation, by Theorem 2.11. It follows that R has a unique minimal overring, say S in R′, by [8, Theorem 6]. Note that S is valuative as R is a maximal non valuative subring of R′. Conversely, assume that (ii) holds. Let T be a proper overring of R in R′. Then S ⊆ T , by assumption. Since R is a maximal non pseudo-valuation subring of R′, S is a pseudo-valuation domain. It follows that T is valuative, by [3, Corollary 5.4]. Thus, R is a maximal non valuative subring of R′. A proper overring T of a domain R is called the unique minimal overring of R if any proper overring of R contains T , see [6]. Theorem 2.14. Let R be a maximal non pseudo-valuation subring of qf(R). If R is local, then the following are equivalent: (i) R is a maximal non valuative domain. (ii) R is not valuative and R has a unique minimal overring S that is a valuative pseudo-valuation domain with associated valuation overring R′. Proof. Let R be a maximal non valuative domain. Then R is not integrally closed, by Proposition 2.8. Now, (ii) follows from [8, Theorem 4]. Conversely, assume that (ii) holds. Let T be a proper overring of R. Then S ⊆ T , by assumption. It follows that T is valuative, by [3, Corollary 5.4]. Thus, R is a maximal non valuative domain. 12 References Rahul Kumar and Atul Gaur [1] A. Ayache and A. Jaballah, Residually algebraic pairs of rings, Math. Z. 225 (1997) 49-65. [2] M. Ben Nasr and N. Jarboui, Maximal non-Jaffard subrings of a field, Publ. Math. 4 (2000) 157-175. [3] P. J. Cahen, D. E. Dobbs and T. G. Lucas, Valuative domains, J. Algebra Appl. 9 (2010) 43-72. [4] D.E. Dobbs and M, Fontana, Universally incomparable ring homomor- phisms, Bull. Austrl. Math. Soc. 29(3) (1984) 289-302. [5] D. Ferrand and J.-P. Olivier, Homomorphismes minimaux d’anneaux, J. Algebra 16 (1970) 461-471. [6] R. Gilmer and W. Heinzer, Intersections of quotient rings of an integral domain, J. Math. Kyoto Univ. 7 (1967) 133149. [7] J. R. Hedstrom and E. G. Houston, Pseudo-valuation domains, Pacific J. Math. 75(1) (1978) 137147. [8] N. Jarboui and S. Trabelsi, Some results about proper overrings of pseudo-valuation domains, J. Algebra Appl. 15(5) (2016) 1650099. [9] M. L. Modica, Maximal subrings, Ph.D. Dissertation, University of Chicago (1975). [10] I. J. Papick, Topologically defined classes of going-down domains, Trans. Amer. Math. Soc. 219 (1976) 1-37.
synthetic_cpt
4
SyntheT2C_Generating_Synthetic_Data_for_Fine-Tuning_Large_Language_Models_on_the_Text2Cypher_Task.pdf
SyntheT2C: Generating Synthetic Data for Fine-Tuning Large Language Models on the Text2Cypher Task Zijie Zhong1, Linqing Zhong2, Zhaoze Sun2, Qingyun Jin3, Zengchang Qin3, *, and Xiaofan Zhang1, * 1Shanghai Artificial Intelligence Laboratory 2Sino-French Engineer School, Beihang University 3School of Automation Science and Electrical Engineering, Beihang University *Corresponding authors 4 2 0 2 n u J 5 1 ] I A . s c [ 1 v 0 1 7 0 1 . 6 0 4 2 : v i X r a Abstract Integrating Large Language Models (LLMs) with (KG) existing Knowledge Graph databases presents a promising avenue for enhancing LLMs’ efficacy and mitigating their “hallucinations”. Given that most KGs reside in graph databases accessible solely through specialized query languages (e.g., Cypher), there exists a critical need to bridge the divide between LLMs and KG databases by automating the translation of natural language into Cypher queries (commonly termed the “Text2Cypher” task). Prior efforts tried to bolster LLMs’ proficiency in Cypher generation through Supervised Fine-Tuning. However, these explorations are hindered by the lack of annotated datasets of Query-Cypher pairs, resulting from the labor-intensive and domain-specific nature of annotating such datasets. In this study, we propose SyntheT2C, a methodology for constructing a synthetic Query-Cypher pair dataset, comprising two distinct pipelines: (1) LLM-based prompting and (2) template-filling. SyntheT2C facilitates the generation of extensive Query-Cypher pairs with values sampled from an underlying Neo4j graph database. Subsequently, SyntheT2C is applied to two medical databases, culminating in the creation of a synthetic dataset, MedT2C. Comprehensive experiments demonstrate that the MedT2C dataset effectively enhances the performance of backbone LLMs on the Text2Cypher task. Both the SyntheT2C codebase and the MedT2C dataset will be released soon. 1 Introduction (KGs) Knowledge Graphs constitute vital reservoirs of information within the Retrieval- Augmented Generation (RAG) paradigm (Lewis et al., 2020) of Large Language Models (LLMs). Distinguished from other information sources, KGs boast structured and meticulously curated rendering them conducive to seamless data, 1 Figure 1: SyntheT2C builds synthetic data with two pipelines to SFT LLMs so that their performance on Text2Cypher task is enhanced. updates and rectifications. Such attributes position KGs as pivotal instruments for mitigating issues of knowledge cutoff and “hallucinations” within LLMs. Notably, KGs have long served as a core in numerous knowledge-intensive products and applications (Kertkeidkachorn et al., 2023; Cui et al., 2024; Xu et al., 2020). With the advent of LLMs, many researchers have focused on synergizing KGs with LLMs following the RAG framework. The inherent fidelity and adaptability of KGs make them practical assets for deployment in production environments, and also catapult KGs to the forefront of academic research. While KGs represent invaluable repositories of reference information, their efficient utilization re- mains a formidable challenge. Early methodolo- gies involved direct extraction of triplets from KGs, subsequently integrating these text-form triplets directly into the prompts of LLMs (Fatemi et al., 2023). However, this approach often fails to con- currently preserve both semantic and structural nu- ances inherent within the KG. An alternative ap- proach involves querying existing graph databases just like human users, promising accurate and inter- pretable results. 
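As a concrete illustration of this route, a natural-language question can be answered by running an LLM-generated Cypher query against the graph database with the official Neo4j Python driver. The sketch below is illustrative only: the question, the node label "Disease", the relationship "HAS_SYMPTOM", the connection URI, and the credentials are hypothetical placeholders in the spirit of a medical KG, not queries or schemas taken from this paper's databases.

from neo4j import GraphDatabase

# Hypothetical Text2Cypher pair: a natural-language question and the Cypher an LLM might produce.
question = "Which symptoms are associated with diabetes?"
generated_cypher = (
    "MATCH (d:Disease {name: 'diabetes'})-[:HAS_SYMPTOM]->(s:Symptom) "
    "RETURN s.name AS symptom"
)

# Placeholder connection details; a real deployment would use its own URI and credentials.
driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))
with driver.session() as session:
    result = session.run(generated_cypher)   # raises an exception if the Cypher is not executable
    symptoms = [record["symptom"] for record in result]
print(f"{question} -> {symptoms}")
driver.close()

If the model emits malformed or non-executable Cypher, the call to session.run simply fails, which is precisely the failure mode addressed in the rest of this paper.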
Nonetheless, the primary impedi- ment lies in the LLM’s ability to formulate correct and executable queries. To address this limitation, numerous query generation tools or methodologies (Zhang et al., 2022; Abdelaziz et al., 2021; Shen et al., 2023) are proposed, aiming to translate hu- man users’ natural language queries into query lan- guages. This task assumes paramount importance for LLM development for two pivotal reasons: (1) it empowers LLMs to consistently produce reli- able queries, thereby augmenting their utilization of existing KG databases to address knowledge deficits; (2) it facilitates human interaction with KG databases through natural language, substan- tially lowering the barrier to entry for KG database utilization. Among the spectrum of query gen- eration research, the sub-task of translating natu- ral language into the Cypher (Francis et al., 2018) query language for Neo4j (Neo4j, 2012) databases stands out as a prominent research focus. This prominence is attributed to two key factors. Firstly, Neo4j is a widely adopted solution for construct- ing KG databases, positioning Cypher as an essen- tial tool for accessing these extensive repositories. Secondly, Cypher is a query language specifically designed for querying graph structures, offering significantly faster performance than other query languages, such as SQL, when processing graph data. Consequently, our work centers on this sub- task, commonly termed as “Text2Cypher” (T2C). A similar task to the Text2Cypher task is the “Text2SQL” task, wherein researchers endeavor to translate natural language sentences into SQL queries. Leveraging manually annotated datasets like SPIDER (Yu et al., 2019), numerous method- ologies have emerged, including SpCQL (Guo et al., 2022) and SQLNet (Xu et al., 2017). Con- versely, scant attention has been directed towards the Text2Cypher task. Existing approaches typi- cally resort to decomposing the original query into smaller components and translating each part sep- arately. For instance, R3-NL2SQL (Zhou et al., 2023b) partitions the query generation process into CRUD keywords prediction, clause selection, and object type identification. Despite the success of these methods, adapting them to a specific KG database demands substantial extra effort. With the rise of LLMs, using LLMs for Cypher query gener- ation appears promising. Notably, to the best of our knowledge, no endeavors have explored the poten- tial application of LLMs to the Text2Cypher task. Our work aims to bridge this gap in the literature. The Cypher writing performance of vanilla LLMs is not satisfactory. To improve it, we em- ploy SFT, which necessitates a dataset of Question- Query pairs. However, creating such a dataset is challenging as it requires both domain-specific knowledge of the KG’s content and expertise in Cypher’s syntax. Consequently, there is cur- rently no annotated dataset for the Text2Cypher task. To overcome this obstacle, we introduce SyntheT2C, a method designed to produce high- quality synthetic Question-Cypher pairs through two distinct pipelines: LLM-based prompting and template-filling (as shown in Figure 1). The LLM- based prompting pipeline aims to generate Cypher queries with greater semantic flexibility, while the template-filling pipeline focuses on producing syn- tactically complex Cypher queries. The generated Question-Query pairs undergo rigorous automated and manual validation, before being used to fine- tune backbone LLMs. 
The performance of Cypher generation is evaluated with a manually annotated evaluation dataset, complemented by a qualitative assessment using GPT as a judge. Additionally, we conduct a scalability test by fine-tuning the LLMs with larger synthetic datasets, which demonstrates that the synthetic data generated using our method does not collapse into simple patterns, thereby es- tablishing the robustness of our approach for larger- scale applications. SyntheT2C is tested with two medical databases: the LHY database and the Hetionet database (de- tails in Section 4.1). The generated synthetic dataset, “MedT2C”, will be made public. In conclusion, our main contributions are: (1) We propose the SyntheT2C framework con- taining two pipelines to build synthetic datasets with any Neo4j database. Our method can gener- ate Cypher that are both grammatically correct and syntactically diverse, facilitating the construction of SFT datasets. (2) We test and validate the effectiveness and scalability of the synthetic dataset generated with SyntheT2C. The LLMs after fine-tuning show im- proved Cypher writing abilities. (3) We opensource a synthetic dataset MedT2C of optimal size, ready to be used for SFT. 2 Related works 2.1 Knowledge Graph and graph database In recent years, KGs have emerged as fundamental resources for organizing, representing, and query- ing vast amounts of interconnected information or domain-specific knowledge. These graphs find ap- plications across various domains, including but not limited to, healthcare (Cui et al., 2024; Abu- Salih et al., 2022), finance (Elhammadi et al., 2020; 2 Kertkeidkachorn et al., 2023), and e-commerce (Xu et al., 2020). In the realm of Natural Language Processing and Artificial Intelligence (AI), KGs serve as invaluable sources of context and factual knowledge, enabling systems to reason, infer, and generate responses with enhanced accuracy and coherence. To handle the processing of graph-structured data, a series of graph databases were invented, including Neo4j (Neo4j, 2012), NebulaGraph (Wu et al., 2022), and Amazon Neptune (Bebee et al., 2018). Among them, our work focuses on the Neo4j database (Neo4j, 2012), a widely used graph database management system that excels in model- ing and querying highly interconnected data. Neo4j database employs a powerful query language called Cypher for expressing complex graph patterns and retrieving specific data subsets. 2.2 Large Language Models LLMs are advanced AI models that have been trained on vast amounts of text data to understand and generate human-like language. After the recent breakthrough marked by the release of InstructGPT (Ouyang et al., 2022) by OpenAI, a series of LLMs are released, featuring different advantages and drawbacks, e.g., the series of GPT models (Brown et al., 2020; OpenAI, 2023) by OpenAI, Llama (Meta, 2024) by Meta, Qwen (Bai et al., 2023) by Alibaba Cloud, InternLM (Cai et al., 2024b) by Shanghai AI Lab, etc. LLMs can comprehend and generate text across a wide range of topics and writing styles. Recent researches highlight their ability to utilize external existing tools like calcula- tor, search engine, or databases (Patil et al., 2023; Nakano et al., 2022; Cai et al., 2024a; Qin et al., 2023). This ability is usually abstracted as “Func- tion calling”, and many of its implementations in- volve generating codes or queries with LLMs to interact with external tools. 
2.3 Code generation Code Generation is the process of automatically producing executable code from a higher-level rep- resentation or natural language. With the advent of LLMs, code generation has experienced a signifi- cant advancement. LLMs can now be trained on vast amounts of code and programming-related text materials, enabling them to understand and gener- ate code snippets based on given requirements (e.g., Codex (Chen et al., 2021), Polycoder (Xu et al., 2022), and Code Llama (Rozière et al., 2024)). By leveraging the contextual understanding (Dong et al., 2023) and language capabilities of LLMs, code generation becomes more efficient, accurate, and adaptable. Code generation with LLM is not only useful in helping developers to write codes but also in providing a powerful “language” for LLM to interact with other tools: LLMs can be tuned to output executable codes or queries to manipulate external resources. This is the fundamental idea for research in “Function Calling” and Multi-Agent Systems. Current code generation methods rely on two methods for evaluation: either with automatic metrics calculated with an annotated evaluation dataset (Papineni et al., 2002; Lin, 2004; Banerjee and Lavie, 2005; Evtikhiev et al., 2023; Zhou et al., 2023a) or with comparison by a judge (human or powerful LLM like GPT-4) (Zheng et al., 2023). Both evaluation methods are used in our work. 3 Methodology 3.1 Preliminaries The goal of the Text2Cypher task is to automati- cally translate a query q written in natural language to corresponding Cypher query c. With the pro- posed pipelines P1 and P2, a synthetic dataset S is built to fine-tune the backbone LLM L. The syn- thetic data is generated and validated with a Neo4j database B and a series of automatic validators V = [V1, V2, ..., V5]. The synthetic dataset after all the validations is denoted as Sv. Using Sv, L is fine-tuned into Lf t . The Cypher queries generated by L (resp. Lf t) are noted as c1 (resp. c2). 3.2 Synthetic dataset generation Generating the synthetic dataset is not trivial be- cause synthetic data usually has difficulty in bal- ancing grammatical correctness, semantic correct- ness, node coverage, edge coverage, and Cypher complexity. As a result, we propose a method of generation with two pipelines, as illustrated in Fig- ure 2). The LLM-based prompting pipeline (P1), emphasizes semantic variety, while the template- filling pipeline (P2), focuses on syntactic complex- ity. By employing these complementary pipelines, we aim to produce a synthetic dataset that captures the nuanced balance of linguistic, semantic, and structural properties. 3.2.1 LLM-based prompting pipeline This pipeline adopts an idea similar to Knowledge Distillation: to SFT a weaker LLM, we could use 3 Figure 2: Workflow of two pipelines inside SyntheT2C. the Cyphers generated by stronger LLMs. While off-the-shelf LLMs are typically not optimized for Cypher query generation, as Cypher likely con- stitutes only a small fraction of their pre-training data, they have nonetheless demonstrated strong in-context learning capabilities (Dong et al., 2023). Therefore, half of S is built by few-shot prompting GPT-4o (OpenAI, 2023). To further simplify the task of generation and to ensure a higher quality of the generated data, we split the whole genera- tion task into (1) extracting information from the database; (2) determining the question categories; and (3) generating the Cyphers for each category with extracted information. 
The workflow for the LLM-based prompting method is delineated in Figure 2 (upper part, P1). Initially, we commence by extracting metadata from the KG stored in the Neo4j database B. This extraction includes sampling example nodes and edges to construct few-shot prompts, along with capturing the schema of the database to facilitate the generation of grounded Cyphers. An illustra- tive instance of extracted metadata is provided in Appendix A. Subsequently, this metadata serves as a foundational component in all ensuing prompts, ensuring the generation of executable Cyphers. Be- fore initiating the Cypher generation process, a preliminary step involves prompting the LLM to propose potential question categories, thereby mit- igating the risk of redundant outputs. The back- bone LLM undergoes multiple iterations to propose these question categories, as detailed in the prompt showcased in Appendix B.1. These proposed cate- gories are then consolidated to eliminate duplicates, as instructed in the prompt outlined in Appendix B.3. After the deduplication, GPT-4o is prompted to generate synthetic Question-Cypher pairs with the prompt outlined in Appendix B.2. In our exper- iment, we fix a list of 12 categories (referred to as categories ) to facilitate the comparison. 3.2.2 Template-filling pipeline The second pipeline of Cypher generation adopts the template-filling method, a classic approach in code generation known for its flexible output and potentially complex syntax. We introduce this pipeline as a complement to the first one, leverag- ing manually crafted templates to generate Cyphers with more advanced syntax, thereby enabling back- bone L to solve complicated questions. In this pipeline, depicted in Figure 2 (lower part, P2), numerous templates are initially manually au- thored. Subsequently, actual values from different fields are sampled from the Neo4j database B to populate these templates, resulting in the genera- tion of complete executable Cypher queries. One such template is illustrated in Figure 4. In this example, the subschema is introduced to manage cases where the entire database cannot be loaded at once, necessitating the selection and injection of only the relevant subgraph into the prompt. The variables label_i and prop_j rep- resent the randomly sampled names of nodes and their attributes. These templates are initially crafted taking inspiration from Cypher Generator (Onofrei, 2024), then enriched and verified by the authors. Once these templates are established, synthetic Cyphers with complex syntax can be effortlessly generated. However, it is important to note that crafting and validating these templates require con- siderable time and effort. 3.3 Quality validation To ensure the quality of the generated synthetic Question-Cypher pairs before their application in SFT, it’s imperative to conduct thorough valida- tion to prevent the “garbage in, garbage out” sce- nario. However, manually scrutinizing thousands of Cypher queries is arduous and time-consuming. In response, a suite of automatic validators has been implemented to alleviate the burden of manual in- 4 Figure 3: Illustration of the automatic validators. 
def prompter(label_1, prop_1, prop_2): subschema = get_subgraph_schema(jschema, [label_1], 2, True) message = { "prompt": "Convert the following question into a Cypher query using the provided graph schema!", "question": f"""Find all {prop_1} for {label_1} that have {prop_2} after January 1, 2020!""", "schema": f"Graph schema: {subschema}", "cypher": f"MATCH (n:{label_1}) WHERE date(n.{prop_2}) > date('2020-01-01') RETURN n.{prop_1}" } return message Figure 4: Example template in Template-filling pipeline. spection. In the end, the Cyphers that pass through these automated validators undergo a final round of meticulous manual validation by researchers. 3.3.1 Automatic validation We propose five automatic validators: the Gram- matical Validator, Semantic Validator, Entity Val- idator, Schema Validator, and Coherence Validator, each playing a crucial role in ensuring the integrity of the generated synthetic data. These validators’ fundamental concepts are illustrated in Figure 3. The LLM used in the validators is GPT-3.5-Turbo. The Grammatical Validator validates the syn- tax correctness of each Cypher in S by executing them in the deployed graph database B. If a Cypher is executed without encountering any “Error/Excep- tions”, it is deemed to have passed this validation. The design of Semantic Validator is inspired by the research in machine translation (Hoang et al., 2018). This validator utilizes an LLM to translate the generated Cypher back into a natural language question. It then computes the semantic similarity between the translated question and the original question. If the similarity score exceeds a prede- fined threshold, the Cypher passes validation. We 5 also implement an alternative version of the Seman- tic Validator, where the LLM assesses semantic similarity directly. Both versions produce coherent validation results, with the latter being adopted for efficiency in subsequent experiments. The prompt used in this validator is presented in Appendix C.1. The Entity Validator assesses the coverage of entities in the generated Cyphers. The entities in the original question q are extracted via Named Entity Recognition (NER) using the spaCy (Honni- bal and Montani, 2017) model en_core_web_sm . Entities in the generated Cypher c are parsed and extracted using Regular Expressions. A successful validation requires 100% coverage of q’s entities in c. English entities are first transformed into lemmas using spaCy for fuzzy matching. Subsequently, the Schema Validator ensures the correctness of relations in the generated Cyphers. Relations in c are extracted via Regular Expres- sions and validated against the schema of B. A Cypher passes this validation only when all con- tained relations are valid edges. Lastly, the Coherence Validator executes the Cypher against B and evaluates the coherence be- tween the execution results and the original ques- tion with LLM, using the prompt presented in Ap- pendix C.2. In the end, only Cyphers that have passed all validations proceed to manual validation. 3.3.2 Manual validation Each Cypher checked by the validators is randomly assigned to two researchers, who independently assess its quality. If both researchers provide a unanimous judgment, their consensus is adopted. In cases of divergent opinions, a third researcher is brought in for further review. The final validation outcome for such Cyphers is determined through a majority vote among the three researchers. 
4 Experiments 4.1 LHY and Hetionet Graph databases Throughout our experiment, we employed two Neo4j databases housing general medical knowl- edge in graph form: the LHY Medical Knowledge Database (referred to as “LHY”) and the Hetionet Medical Knowledge Database (referred to as “Het- ionet”). Both databases are publicly accessible, differing primarily in language: the data within the LHY database is presented in Chinese, whereas Hetionet is written in English. The LHY Database (Liu, 2018) serves as the backend database for a Medical Question- Answering system. This database comprises com- prehensive medical knowledge, encompassing a wide array of diseases, symptoms, drugs, and re- lated information. Its content is sourced from med- ical websites, meticulously cleaned, reorganized, and stored within a Neo4j database. There are about 44k entities and 300k relations in it. Hetionet (Himmelstein et al., 2017) is an open and free-to-use database of biomedical knowledge resource implementing “hetnet” model. Aggre- gating insights from 29 public databases, Het- ionet boasts a knowledge network spanning various fields, encompassing a wide array of entities, in- cluding genes, compounds, anatomical structures, diseases, symptoms, side effects, etc. There are ap- proximately 47k entities and 2.2 million relations in the Hetionet database. The detailed statistics of both databases are pre- sented in Appendix D. 4.2 Evaluation dataset and metrics We utilize a dataset comprising 300 manually an- notated and verified samples to evaluate our experi- ments. This dataset includes 150 questions anno- tated based on the Hetionet and LHY databases, respectively. Take Hetionet as an example, for ev- ery category among the 12 categories generated in Section 3.2.1, we employ GPT-3.5-Turbo to gen- erate 10 new questions, forming 120 “in-domain” questions. Additionally, we introduce 3 unseen cat- egories and generate 10 new questions for each new category, totaling 30 “out-of-domain” questions. For each of the 300 questions, the authors write a ground-truth Cypher query, which is then executed against the two databases to get the ground-truth execution results. This annotated dataset allows us to evaluate two aspects of LLMs’ Cypher generating performance: (1) Cypher quality, which is crucial if the gen- erated Cypher is integrated into larger systems; (2) Execution result accuracy, to gauge the quality of the output for end users. 4.2.1 Evaluation of Cypher quality The backbone LLMs, both pre-SFT and post-SFT, are tasked with generating Cyphers for the 300 questions in the evaluation dataset. Using GPT-4o (OpenAI, 2023), we determine the superior Cypher from the two provided versions. For each pair of Cyphers, we conduct two evaluations by varying the order of presenting the Cypher queries in the prompt to mitigate order-induced bias. If evalu- ations of both orders yield identical results, this judgment is accepted as the final outcome; other- wise, it is deemed a “Draw”. 4.2.2 Evaluation of execution result accuracy The generated Cyphers c2 are executed on database B to get execution results resgen. Then the ac- curacy (acc) is calculated with the ground-truth execution results resgt like this: acc = #(resgen ∩ resgt) #(resgen) , (1) where #(.) calculates the cardinality of a set. 4.3 Experiment setup 4.3.1 Cypher LLMs Extensive experiments are conducted with four LLMs, including open/closed-source models. For open-source models, we evaluate Llama3, Qwen2 and InternLM2. For closed-sources model, we test GPT. 
The exact versions of the backbone LLMs are listed in Appendix E. 4.3.2 Supervised Fine-Tuning We utilize Low-Rank Adaptation (LoRA) to fine- tune the vanilla LLMs. Specifically, the open- source models are trained for 6 epochs with a lin- ear scheduler, starting at a learning rate of 1e-6. AdamW is used as the optimizer, and the training batch size is 6. The fine-tuning of GPT is facilitated by its official API. The experiments on all LLMs, are conducted on Nvidia GeForce 4090 GPU. All the experiments totaled about 1100 GPU hours. 4.4 Supervised Fine-Tuning experiments The backbone LLMs are fine-tuned with the MedT2C dataset, comprising 750 samples gen- erated with the two pipelines and two Neo4j 6 Figure 5: Result of Supervised Fine-Tuning each LLMs with MedT2C. Accuracy annotations marked in white box. databases, totaling 3000 samples. The MedT2C dataset contains high-quality Question-Cypher pairs that passed all the automatic validations as well as the manual validation. In Appendix F we report the passing rate of each validator as a guide for further improvement of MedT2C’s data quality. A list of LLMs including GPT, Llama, Qwen, and InternLM are fine-tuned using MedT2C. We evaluated the change in Cypher writing perfor- mance of these LLMs, and the results are shown in Figure 5. The results show that MedT2C helps the LLM to produce more Cypher queries that are on par with or better than the human annotated ones. In Figure 5, the win rates are calculated in com- parison with the ground-truth Cyphers. We fur- ther conduct an experiment to compare directly the c1 and c2 generated with the same LLM with GPT-4o, using the prompt presented in Appendix G. The comparison results are shown in Figure 6. From these results, we can conclude that while the improvement may appear minor when com- paring against the ground-truth Cyphers, a visible enhancement in Cypher quality is evident when comparing to the Cyphers generated by the pre-SFT model. We explain this difference as follows: the human annotations have a far higher quality than the Cyphers generated by vanilla LLMs. Therefore, even though the LLMs are enhanced after SFT, their output is still inferior to the human-annotated Cyphers, which is why the evaluation results in Figure 5 seem largely unchanged. 4.5 Scaling experiments In this section, we test the scalability of our pipeline for generating synthetic data. We rerun the data generation pipelines to create scaled versions that are 1/16, 1/4, 4, 8, and 16 times the size of the orig- inal MedT2C. Vanilla LLMs are then fine-tuned with these scaled datasets. The results are reported in Figure 7. These results demonstrate that, up to the size of the MedT2C dataset, increasing the size of the synthetic dataset leads to better per- formance, especially in terms of Cypher Quality. Figure 6: Impact of SFT on each LLM. The Cypher gen- erated with pre-SFT and post-SFT LLMs are compared directly with GPT-4o. However, once the size exceeds that of MedT2C, further increasing the dataset size results in either marginal improvements or decreases. Based on this experiment, we determine the optimal size for the published MedT2C dataset (highlighted in red), as it balances efficiency and performance. Figure 7: Plots of scaling test’s results. 4.6 Ablation experiments To evaluate the efficacy of each component intro- duced, we conduct a series of ablation experiments. 7 First, we test the pipelines by running SFT ex- periments using only the data generated by each pipeline individually. 
We then verify the effec- tiveness of each automatic validator by evaluating them in isolation, using only one validator at a time. Since each component is designed to be modular and independent, we adopt this mode of ablation, rather than removing the components one by one from the complete setting, to emphasize the incre- ment brought by each component separately. For both ablation tests, the backbone LLM is fixed as Llama3. The dataset size is set to be the same as MedT2C (3000 in total). The experiments results are reported in Table 1 and Table 2 respectively. Here the Cypher Quality is calculated with respect to ground-truth Cyphers. Settings Pre-SFT LHY-LLM LHY-Temp. Hetionet-LLM Hetionet-Temp. All (MedT2C) Cypher Quality Result Acc. 38.67%(–) 41.67%(+3.00) 34.67%(-4.00) 42.83%(+4.16) 36.00%(-2.67) 44.00%(+5.33) 27.83%(–) 27.86%(+0.03) 26.54%(-1.29) 33.09%(+5.26) 26.68%(-1.15) 39.65%(+11.82) Table 1: Results of pipeline ablation test. As presented in Table 1, when we use only the data generated by the template-filling pipeline to SFT the Llama3 model, the model’s perfor- mance actually declined after SFT. This can be attributed to the design of the template-filling pipeline, which emphasizes generating syntacti- cally complex Cypher queries. When SFT is per- formed using only this data, the backbone LLM tends to produce unnecessarily complicated Cypher queries (e.g., breaking one query into two and then merging them). While this ability to write more complex Cypher queries is not directly reflected in the evaluation metrics, as the “hard questions” requiring advanced syntactic knowledge constitute only a small portion of the evaluation dataset, such data can enhance the LLM’s generalization capac- ity when combined with data from the LLM-based prompting pipeline. Settings Pre-SFT No validator ✓Grammar V. ✓Semantic V. ✓Entity V. ✓Schema V. ✓Coherence V. All (MedT2C) Cypher Quality Result Acc. 38.67%(–) 38.34%(-0.33) 38.34%(-0.33) 43.67%(+5.00) 40.00%(+1.33) 42.00%(+3.33) 41.33%(+2.66) 44.00%(+5.33) 27.83%(–) 27.96%(+0.13) 28.95%(+1.12) 31.65%(+3.82) 28.03%(+0.20) 26.11%(-1.72) 32.05%(+4.22) 39.65%(+11.82) Table 2: Results of validator ablation test. 8 As shown in Table 2, each individual validator contributes some improvement, either in terms of Cypher quality or the accuracy of the execution re- sults. Notably, the combination of all five validators yields the most significant increase in performance. This can be attributed to the validators’ collective ability to mitigate the majority of the bugs in the SFT dataset, thereby enhancing the overall quality of the generated Cypher queries. 5 Limitations The primary limitation of our work is the challenge in writing the templates. While the templates are designed to be independent from the base Neo4j databases, some adaptation work is still necessary when applying them to new Neo4j databases. Addi- tionally, writing new templates is time-consuming, making the expansion of the current template li- brary difficult. Furthermore, a significant number of the generated Cypher queries are directly filtered out during the construction of MedT2C, resulting in a waste of resources. Developing methods to quickly fix Cyphers that do not satisfy all the vali- dation criteria, instead of simply regenerating more Cyphers, could help reduce the carbon footprint of SyntheT2C. 
6 Potential risks

Even though SyntheT2C is designed to automatically generate synthetic datasets, its usage requires close monitoring and manual validation to prevent the inadvertent inclusion of private or sensitive information. Additionally, the post-SFT LLMs should be used with caution: despite the improvement in their Cypher generation performance, there remains a slight risk of producing embedded Cyphers that could lead to issues such as out-of-memory errors when executed.

7 Conclusion

We present SyntheT2C, a comprehensive framework for generating synthetic data used to supervise the fine-tuning of various LLMs on the Text2Cypher task. Our approach encompasses dataset construction, data validation, and SFT evaluation, providing a reference framework for future research in the Cypher-related field. Additionally, our findings confirm the effectiveness of synthetic data, suggesting that similar techniques can address problems where annotation is difficult or insufficient. Finally, we will also open-source the MedT2C dataset, aiming to contribute to technical advancements in related topics.
A Example of extracted KG information

Here we present the information (metadata) extracted from the KG database "Hetionet" in Figure 8. We store the metadata of the KG, including the node properties, the relationship properties, and the valid relationships. This information is integrated into the following prompts to ensure that the LLM outputs correct Cyphers. In the other prompts, this metadata is referred to as schema.

B Prompts for LLM-based prompting pipeline

In this appendix, we present all the prompts we used in the LLM-based prompting pipeline.

B.1 Prompt to propose categories of questions

In Figure 9 we show the prompt used to propose candidate categories of questions. We decided to first generate categories of questions instead of generating the questions directly because this practice helps reduce duplicated questions.

B.2 Prompt to generate questions for each category

The prompt presented in Figure 11 is used to generate questions in natural language for each proposed category. This prompt includes few-shot examples to help ensure the output Cypher follows the format requirements.

B.3 Prompt to merge categories of questions

The prompt presented in Figure 10 is used to merge the previously generated categories. The merged and de-duplicated list of categories is then stored and will be referred to as category in later prompts.

C Prompts used in automatic validators

C.1 Prompt of Semantic Validator

Here we present the prompt used in the Semantic Validator in Figure 12. The schema mentioned in this prompt is the metadata presented in Appendix A (Figure 8). The example represents the few-shot examples written by the authors; here we show the English example for the Hetionet database in Figure 13. Lastly, the json_object in the prompt contains the question and the Cypher query to be evaluated.

C.2 Prompt of Coherence Validator

In this appendix, we present the prompt used in the Coherence Validator in Figure 14. Similar to the other prompts, we provide few-shot examples in this prompt. The question and results in the prompt are the original question and execution results used as the input for this validation.

D Important statistics of the LHY and the Hetionet databases

Here we present the important statistics of the LHY database in Table 3 and Table 4, including examples of the nodes and entities inside this database. The examples in both tables are translated from Chinese to English. Similarly, the important statistics of the Hetionet database, with examples of nodes and entities, are grouped in Table 5 and Table 6.

E Exact versions of the backbone LLMs

The exact versions of the LLMs used in our experiments are listed in Table 7. Except for GPT-3.5-Turbo, the backbone LLMs are deployed locally using the versions available on HuggingFace.

F Passing rate of MedT2C for each automatic validator

The passing rate of the MedT2C dataset for each automatic validator is reported in Table 8. The LLM used in the Semantic Validator and the Coherence Validator is GPT-3.5-Turbo. These two validators are not run on the LLM-based prompting pipeline because this pipeline uses GPT-4o. Given that GPT-4o is more powerful than GPT-3.5-Turbo, it is not accurate to evaluate its output with GPT-3.5-Turbo, nor with GPT-4o itself. Besides, note that the passing rate of the Coherence Validator is especially low compared to the other passing rates. This is because, for the Coherence Validator specifically, samples that failed any one of the previous validators are judged as False directly, to save calls to the GPT API. Therefore, the passing rate of the Coherence Validator reported here is lower than the actual one, but this does not affect the "All passed" ratio.
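The prompts in Appendices A–C are templates whose placeholders (such as {schema}, {category}, and {k}) are filled in before the LLM is called. The snippet below is a minimal sketch of that instantiation step; the abbreviated template text and helper names are our own illustration, not the exact implementation.

```python
# Sketch of instantiating the question-generation template of Figure 11.
# The template is abbreviated; schema_text would hold the metadata of Figure 8.
QUESTION_GEN_TEMPLATE = (
    "You are an experienced Cypher developer and English-speaking doctor ...\n"
    "Generate {k} questions and their corresponding Cypher statements about the "
    "Neo4j graph database with the following schema:\n{schema}\n"
    "The questions should cover {category} and should be phrased in a natural "
    "conversational manner.\n"
)

def build_question_gen_prompt(schema: str, category: str, k: int = 10) -> str:
    """Fill in the placeholders for one proposed question category."""
    return QUESTION_GEN_TEMPLATE.format(k=k, schema=schema, category=category)

if __name__ == "__main__":
    schema_text = open("hetionet_schema.txt").read()  # hypothetical dump of the KG metadata
    print(build_question_gen_prompt(schema_text, "single-node lookup questions", k=5))
```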
G Prompts used for Cypher quality evaluation

We use GPT-4o to judge the quality of two versions of Cypher queries corresponding to the same set of questions written in natural language. The prompt used for this part is shown in Figure 15. We provide different aspects of evaluation and ask GPT-4o to give detailed reasons when evaluating, because these techniques bring more accurate evaluation results in practice.

Node properties are the following:
Disease {easy_get: STRING, cure_lasttime: STRING, cured_prob: STRING, name: STRING, desc: STRING, prevent: STRING, cure_way: LIST, cause: STRING, cure_department: LIST}, Drug {name: STRING}, Food {name: STRING}, Check {name: STRING}, Department {name: STRING}, Producer {name: STRING}, Symptom {name: STRING}
Relationship properties are the following:
recommand_eat {name: STRING}, no_eat {name: STRING}, do_eat {name: STRING}, belongs_to {name: STRING}, common_drug {name: STRING}, drugs_of {name: STRING}, recommand_drug {name: STRING}, need_check {name: STRING}, has_symptom {name: STRING}, acompany_with {name: STRING}
The relationships are the following:
(:Disease)-[:belongs_to]->(:Department), (:Disease)-[:common_drug]->(:Drug), (:Disease)-[:recommand_drug]->(:Drug), (:Disease)-[:need_check]->(:Check), (:Disease)-[:has_symptom]->(:Symptom), (:Disease)-[:acompany_with]->(:Disease), (:Disease)-[:recommand_eat]->(:Food), (:Disease)-[:no_eat]->(:Food), (:Disease)-[:do_eat]->(:Food), (:Department)-[:belongs_to]->(:Department), (:Producer)-[:drugs_of]->(:Drug)

Figure 8: The metadata extracted from the Hetionet database.

You are an experienced and useful Python and Neo4j/Cypher developer. I have a knowledge graph for which I would like to generate interesting questions that span 12 categories (or types) about the graph. They should cover single-node questions, two or three more nodes, relationships, and paths. Please suggest 12 categories together with their short descriptions. Here is the graph schema: {schema}

Figure 9: The prompt used to generate categories of questions.

You are an experienced doctor and you have a knowledge graph for which you would like to generate interesting questions of 12 categories. Here are some candidate categories: {categories_list}. You should merge similar categories and remove the duplicates. Finally, give me a short description of each category.

Figure 10: The prompt used to merge proposed categories.

You are an experienced Cypher developer and English-speaking doctor and a helpful assistant designed to output JSON. Generate {k} questions and their corresponding Cypher statements about the Neo4j graph database with the following schema: {schema} The questions should cover {category} and should be phrased in a natural conversational manner. Make the questions diverse and interesting. Make sure to use the latest Cypher version and that all the queries are working Cypher queries for the provided graph. You may add values for the node attributes as needed. Do not add any comments, do not label or number the questions.
Here are some examples of the Question-Cypher pairs to be generated:
"question": "What are the diseases that commonly accompany 'Depression'?",
"cypher": "MATCH (d1:Disease {{name:'Depression'}}) -[:acompany_with]-> (d2:Disease) RETURN d2.name"
"question": "Can you list diseases that commonly accompany 'Cancer'?",
"cypher": "MATCH (d1:Disease {{name:'Cancer'}}) -[:acompany_with]-> (d2:Disease) RETURN d2.name",
Now it's your turn to generate the question and Cypher pairs:

Figure 11: The prompt used to generate questions.

Ent. Type   | # Ent.  | Examples
Check       | 3,353   | Bronchography
Department  | 54      | Department of Plastic and Reconstructive Surgery
Disease     | 8,807   | Thrombosed Vasculitis
Drug        | 3,828   | Jingwanhong Hemorrhoid Cream
Food        | 4,870   | Tomato and Vegetable Beef Ball Soup
Producer    | 17,201  | Tongyi Pharmaceutical Penicillin V Potassium Tablets
Symptom     | 5,998   | Hypertrophy of breast tissue
Total       | 44,111  | /

Table 3: Entities in LHY Database.

Rel. Type       | # Rel.   | Examples
belongs_to      | 8,844    | <Gynaecology, belongs_to, Obstetrics and Gynaecology>
common_drug     | 14,649   | <Yang Qiang, common_drug, Phentolamine mesylate dispersible tablets>
do_eat          | 22,238   | <Thoracic spine fracture, do_eat, Blackfish>
drugs_of        | 17,315   | <Penicillin V Potassium Tablets, drugs_of, Tongyi Pharmaceuticals Penicillin V potassium tablets>
need_check      | 39,422   | <Unilateral emphysema, need_check, Bronchography>
no_eat          | 22,247   | <Lip disease, no_eat, Almonds>
recommend_drug  | 59,467   | <Mixed hemorrhoids, recommend_drug, Jingwanhong Hemorrhoid Cream>
recommend_eat   | 40,221   | <Synovial effusion, recommend_eat, Beef Ball Soup with Tomato and Vegetable Punch>
has_symptom     | 5,998    | <Early Breast Cancer, has_symptom, Hypertrophy of breast tissue>
accompany_with  | 12,029   | <Valvular insufficiency of the traffic veins of the lower extremities, accompany_with, Thromboembolic vasculitis>
Total           | 294,149  | /

Table 4: Relations in LHY Database.

Ent. Type            | # Ent.  | Examples
Anatomy              | 402     | Digestive System
Biological_process   | 11,381  | Protein Sialylation
Cellular_component   | 1,391   | Meiotic Spindle
Compound             | 1,552   | Mannitol
Disease              | 137     | Hypertension
Gene                 | 20,945  | STRIP2
Molecular_function   | 2,884   | Vitamin Transporter Activity
Pathway              | 1,822   | Glycolysis
Pharmacologic_class  | 345     | Decreased Blood Pressure
Side_effect          | 5,734   | Subileus
Symptom              | 438     | Ageusia
Total                | 47,031  | /

Table 5: Entities in Hetionet Database.
Rel. Type                              | # Rel.     | Examples
Anatomy–downregulates–Gene             | 102,240    | <Bronchus, downregulates, GRIA2>
Anatomy–expresses–Gene                 | 526,407    | <Myocardium, expresses, EFHD1>
Anatomy–upregulates–Gene               | 97,848     | <Adipose tissue, upregulates, PARM1>
Compound–binds–Gene                    | 11,571     | <Sildenafil, binds, CYP3A4>
Compound–causes–Side_Effect            | 138,944    | <Ciprofloxacin, causes, Visual Disturbance>
Compound–downregulates–Gene            | 21,102     | <Tacrolimus, downregulates, UBE2C>
Compound–palliates–Disease             | 390        | <Fluvoxamine, palliates, Panic Disorder>
Compound–resembles–Compound            | 6,486      | <Clotrimazole, resembles, Bifonazole>
Compound–treats–Disease                | 755        | <Reserpine, treats, Hypertension>
Compound–upregulates–Gene              | 18,756     | <Estriol, upregulates, KLHL9>
Disease–associates–Gene                | 12,623     | <Parkinson's Disease, associates, HTR7>
Disease–downregulates–Gene             | 7,623      | <Schizophrenia, downregulates, MLST8>
Disease–localizes–Anatomy              | 3,602      | <Migraine, localizes, Brain>
Disease–presents–Symptom               | 3,357      | <Lung Cancer, presents, Constipation>
Disease–resembles–Disease              | 543        | <Bone Cancer, resembles, Head and Neck Cancer>
Disease–upregulates–Gene               | 7,731      | <Malaria, upregulates, JAK2>
Gene–covaries–Gene                     | 61,690     | <IMP3, covaries, OR8U8>
Gene–interacts–Gene                    | 147,164    | <TRIM27, interacts, MED21>
Gene–participates–Biological_Process   | 559,504    | <ABCA1, participates, Lipid Homeostasis>
Gene–participates–Cellular_Component   | 73,566     | <KLHL14, participates, Neuronal Cell Body>
Gene–participates–Molecular_Function   | 97,222     | <TOP2B, participates, ATPase Activity>
Gene–participates–Pathway              | 84,372     | <GGT5, participates, Metabolism>
Gene–regulates–Gene                    | 265,672    | <BCCIP, regulates, HLTF>
Pharmacologic_Class–includes–Compound  | 1,029      | <Allergens, includes, Benzocaine>
Total                                  | 2,250,197  | /

Table 6: Relations in Hetionet Database.

LLM name   | LLM version        | LLM site
GPT        | gpt-3.5-turbo-16k  | https://platform.openai.com/docs/models/gpt-3.5-turbo
Llama3     | Meta-Llama-3-8B    | https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct
InternLM2  | internlm2-7B       | https://huggingface.co/internlm/internlm2-base-7b
Qwen2      | Qwen2-7B           | https://huggingface.co/Qwen/Qwen2-7B

Table 7: Versions of the backbone LLMs.

Database  | Pipeline             | Grammatical Validator | Semantic Validator | Entity Validator | Schema Validator | Coherence Validator | All passed
LHY       | LLM-based prompting  | 99.69%                | N/A                | 99.62%           | 82.77%           | N/A                 | 83.87%
LHY       | Template-filling     | 99.87%                | 92.34%             | 100%             | 99.87%           | 28.59%              | 27.21%
Hetionet  | LLM-based prompting  | 96.08%                | N/A                | 99.08%           | 61.69%           | N/A                 | 64.79%
Hetionet  | Template-filling     | 100%                  | 91.81%             | 99.52%           | 100%             | 38.15%              | 36.66%

Table 8: MedT2C's passing rates of each automatic validator.

You are an experienced Cypher developer, English Master, and a helpful assistant that helps me to verify whether the cypher is coherent with the question! You will be given a JSON object containing a question and a cypher query. You should first take a look at the schema provided below. The schema is the graph database on which the cypher queries will be run. The schema: {schema} You must organize your answer step by step and in the end, you should make your judgment. Here are three areas that you should pay attention to: 1. whether the output of cypher is coherent with the question, which means that the output of cypher must contain the information that the question asks. 2. If the question points out a piece of key information, you should check whether this key information is pointed out in the cypher. For example, if the question provides a piece of exact information such as the exact name of the disease, this information can not be inconsistent in the cypher. If there is no exact key information, you can skip this area. 3.
whether this cypher answers the question provided in the JSON object. You should simulate the cypher step by step according to the schema provided. Then you should judge whether this cypher is in line with the question. You should make your judgment according to these three areas. If there are no problems in these three areas in the cypher, you must answer with 'True'. Otherwise, you should answer with 'False'. Here are some example JSON objects: {example} Now it's your turn to answer! Here is the JSON object you should evaluate: {json_object} Now evaluate carefully the JSON object and provide your answer step by step. Figure 12: The prompt used in Semantic validator. 16 <|Example 1|> { "question": "Which diseases belong to the 'Psychiatry' department?", "cypher": "MATCH (d:Disease)-[:belongs_to]->(dept:Department) WHERE dept.name = 'Neurology' RETURN d.name" }, <|Answer 1|> The cypher is not in line with the question because the question is to find the diseases in the 'Psychiatry' department but the department name in the cypher is 'Neurology' department. Since the key information is inconsistent, I would mark this JSON object as False. <|Example 2|> { "question": "Which foods should be avoided for the disease 'Brain tumor'?", "cypher": "MATCH (d1:Disease {name:'Brain tumor'})-[:no_eat]->(d2:Food) RETURN d2.name" } <|Answer 2|> Firstly, the output of cypher contains the key information 'the food' asked by the question. Secondly, the key information 'Brain tumor' provided in the question is contained in the cypher. Finally, the logic of cypher is exactly similar to the question. So, I think this JSON object is True. <|Example 3|> { "question": "What pathways do the genes 'BRCA1' and 'BRCA2' participate in?", "cypher": "MATCH (g:Gene)-[:PARTICIPATES_GpPW]->(:Pathway) WHERE g.name IN ['BRCA1', 'BRCA2'] RETURN g.name" } <|Answer 3|> There are two errors. Firstly, as the question asks for pathways but the output of cypher is the name of the gene, the output of the cypher is inconsistent with the question. Secondly, the question is to find the pathway that both the genes 'BRCA1' and 'BRCA2' participate in. But the cypher matches the pathways that 'BRCA1' or 'BRCA2' participates in. The logic 'AND' and 'OR' are totally different. Therefore, I think this JSON object is False. Figure 13: The English few-shot examples used in the Semantic Validator. 17 You are an experienced medical assistant who has mastered English and medical knowledge. You will be given a question and the responses given by the doctor. The doctor is very professional, he gives direct responses. But he sometimes misunderstands the problem. Your task is to check if the results are coherent with the question by analyzing the category. For example, if the question asks for food and the answer is food, in this case, it is relevant because the category is the same. Even if the foods don't seem to be directly related, you can not deny them because the doctor is professional. But if the question asks for food, the doctor gives the response on sports. You should point out this error because the category is different. As a medical assistant, you just need to pay attention to whether the category of the answer corresponds to the category that the question asks. You don't need to think about the reasonableness of the answer. Answer with 'True' if the category is the same. Otherwise, answer with 'False'. You need to carefully explain your answer. 
Here are some examples of questions and results:
<Example 1>
Question: Find out the diseases associated with the 'Oncology' department.
Responses by the doctor: Breast cancer, Pancreatic cancer, Colon cancer
Your reply: Breast cancer, pancreatic cancer, and colon cancer belong to the Oncology department. And the question asks for diseases. So I think it is relevant, and my answer is True.
<Example 2>
Question: Which foods should be avoided for the disease 'Coeliac disease'?
Responses by the doctor: Swimming, Running, Biking, Walking
Your reply: The responses are sports. But this question asks for food. So I think it is not relevant, my answer is False.
Now it's your turn to verify if the responses are relevant to the question. Remember! You just need to pay attention to whether the answer corresponds to the question. You don't need to think about the reasonableness of the answer.
Question: {question}
Responses by the doctor: {results}
Your reply:

Figure 14: The prompt used in Coherence Validator.

You are an expert in medical field and Cypher query language. You are asked to evaluate the quality of the Cypher queries generated by 2 models for the same question. You will be first given the question written in natural language. Then you will be given the Cypher queries generated by 2 models. Your task is to compare the quality of these two Cyphers and select the better one. You should consider the following aspects when selecting the better Cypher:
1. Syntactical correctness: whether the Cypher query is syntactically correct;
2. Semantic correctness: whether the Cypher query can correctly answer the question;
3. Readability: whether the Cypher query is easy to read and understand;
4. Efficiency: whether the Cypher query is efficient in terms of time and space complexity;
5. Conciseness: whether the Cypher query is concise and clear;
6. Completeness: whether the Cypher query can cover all the necessary information in the database.
You should select the better Cypher query based on these aspects. Output your selected Cypher as well as your reasons.
Here is the question:
{
  "question": "{{ question }}"
}
Here are the outputs of the models:
[
  {
    "number": "1",
    "cypher": "{{ cypher_1 }}"
  },
  {
    "number": "2",
    "cypher": "{{ cypher_2 }}"
  }
]
Your output should be in the following format, DO NOT output anything other than this JSON object:
{
  "better_cypher": "1",
  "reason": "reasons why 1 is selected"
}
Now select the better Cypher and give your reasons:

Figure 15: The prompt used in Cypher quality evaluation.
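As a rough illustration of how the Figure 15 prompt could be driven, the sketch below asks GPT-4o to compare two candidate Cyphers for one question and parses its JSON verdict. The client setup and field names follow the prompt above, but the code is an assumed sketch rather than the authors' evaluation script; in practice one would also randomize which Cypher is shown as number 1 to reduce position bias, as done here.

```python
# Sketch of pairwise Cypher judging with GPT-4o using the Figure 15 prompt.
# JUDGE_PROMPT is assumed to hold the full template with {{ question }},
# {{ cypher_1 }}, and {{ cypher_2 }} placeholders.
import json
import random
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def judge_pair(judge_prompt: str, question: str, cypher_a: str, cypher_b: str) -> str:
    """Return 'a' or 'b' for the Cypher preferred by the judge."""
    pair = [("a", cypher_a), ("b", cypher_b)]
    random.shuffle(pair)  # mitigate position bias between slots 1 and 2
    prompt = (judge_prompt
              .replace("{{ question }}", question)
              .replace("{{ cypher_1 }}", pair[0][1])
              .replace("{{ cypher_2 }}", pair[1][1]))
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
        response_format={"type": "json_object"},  # the prompt demands a JSON verdict
        temperature=0,
    )
    verdict = json.loads(resp.choices[0].message.content)
    return pair[int(verdict["better_cypher"]) - 1][0]
```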
synthetic_cpt
4
MAF_Multi-Aspect_Feedback_for_Improving_Reasoning_in_Large_Language_Models.pdf
0 2 0 2 b e F 2 1 ] P A . t a t s [ 1 v 8 4 0 5 0 . 2 0 0 2 : v i X r a A powerful MAF-neutral allele-based test for case-control association studies M. A. Jonkera, J. Pecankab aRadboud Institute for Health Sciences, Radboud University Medical Center, Nijmegen, Netherlands bDepartment of Biomedical Data Sciences, Leiden University Medical Center, Leiden, Netherlands Corresponding author: M. A. Jonker, [email protected] December 1, 2018 Abstract In a case-control study aimed at locating autosomal disease variants for a disease of interest, association between markers and the disease status is often tested by comparing the marker minor allele frequencies (MAFs) between cases and controls. For most com- mon allele-based tests the statistical power is highly dependent on the actual values of these MAFs, where associated markers with low MAFs have less power to be detected compared to associated markers with high MAFs. Therefore, the popular strategy of selecting mark- ers for follow-up studies based primarily on their p-values is likely to preferentially select markers with high MAFs. We propose a new test which does not favor markers with high MAFs and improves the power for markers with low to moderate MAFs without sacrificing performance for markers with high MAFs and is therefore superior to most existing tests in this regard. An explicit formula for the asymptotic power function of the proposed test is derived theoretically, which allows for fast and easy computation of the corresponding p-values. The performance of the proposed test is compared with several existing tests both in the asymptotic and the finite sample size settings. Keywords: Case-control study, efficient allele-based test, linkage disequilibrium (LD), power- ful test, p-values, minor allele frequency (MAF) 1 1 Introduction When locating dichotomous trait loci (such as disease variants) at autosomal chromosomes, as- sociation studies of genetic markers are typically conducted using the case-control study design. Over the years, a fair number of genetic association tests suitable for such studies have been proposed [1, 2]. For autosomal markers the native test would be based on genotypic informa- tion, however, tests contrasting the observed marker allele frequencies in the samples of cases and controls are often preferentially used due to their beneficial properties such as an ability to reliably recover signals even under deviations from additivity of allelic effects (e.g. under a dominance or recessive model). Among the existing tests of this type, probably the best known example is the binomial test of equality of allele frequencies in the samples of cases and con- trols, henceforth called the allele-based test (ABT). Other popular alternatives are the chi-square test for association, the Fisher exact test, the logistic regression model (LRM) score test, and the Cochran-Armitage trend test (CATT) [1, 3, 2]. The last of these has the advantage of being appli- cable even when the assumption of Hardy-Weinberg equilibrium is violated, while the score test stands out due to its abilities to adjust for potential confounders and to model multiple markers (including interactions) simultaneously. By definition, a marker is associated with a disease, or more generally with a dichotomous trait, if it is in linkage disequilibrium (LD) with one of its causal genetic variants [2]. 
For most existing tests, including those mentioned above, the power to detect a marker is highly depen- dent on the degree of LD between the marker and the causal variant. Typically, the stronger the LD the smaller the p-value of the test. However, the p-values also depend on the marker allele frequencies; among markers that are in LD with the same causal variant, markers with high mi- nor allele frequencies (MAFs) are typically much more likely to be detected than markers with low MAFs. Consequently, the strategy of selecting individual markers for follow-up studies pri- marily using the p-values from the existing tests is biased towards selecting markers with high MAFs. The same holds for most alternative strategies for prioritizing markers for follow-up studies that have been proposed in the literature such as ranking markers using the Bayes fac- tor [4, 5], the likelihood ratio signal [6], the frequentist factor [7], or PrPES [8] as the signal measures. A comparison between these strategies and the strategy of ranking markers using the p-values of various allele-based tests and the CATT found that all of the considered strategies re- sulted in highly similar ordering of markers and the markers with the smallest p-values obtained from the ABT tended to be top-ranked by the other methods as well, and vice-versa [6]. In fact, some of the alternative strategies exhibited a tendency to disfavor markers with small MAFs to an even stronger degree than the ABT p-value based ranking. In this paper we propose a novel test which can be viewed as an adjustment of the standard 2 ABT for testing association in case-control studies, which reduces the preferential treatment of markers with high MAFs. We show that the new test has equivalent or superior power compared to the commonly used tests, and the power superiority occurs particularly in situations with low to moderate marker MAFs. We also show how the new test can be made robust against devia- tions from Hardy-Weinberg equilibrium, an important practical concern. We derive an explicit formula for the test’s asymptotic power function, thus allowing for fast and easy computation of the test’s p-values. A comparison is made with the (asymptotic) power function for the standard ABT, the CATT, the chi-square test and the LRM score test in the absence of confounders. In addition to the asymptotic perspective, we also investigate via simulation the power performance of the new test in a finite sample size setting. Finally, we apply the new test to a major depression disorder case-control data set. 2 Methods 2.1 Setting In this paper we define causal variant, or simply variant, to mean a causal genetic locus (e.g. SNP) and marker to mean an observed genetic locus, which may or may not be in LD with a causal variant. For the disease of interest there may be multiple causal variants. The goal is to identify the markers that are in LD with any of the causal variants for the given disease of interest. For simplicity of notation, we assume that there is only one causal variant. In Section 2.7 we briefly discuss the situation with multiple causal variants. The case-control status with respect to a given disease of interest for a random individual from a specific population is denoted by A whenever the individual is a case (i.e. is affected by the disease, thus also called unaffected ) and by U for a control (also called unaffected ). Furthermore, the fraction of the cases in the total population is denoted by π. 
1 + pA 1 + pU 2 = pA Suppose the causal variant is biallelic with alleles A1 and A2. Denote the corresponding allele frequencies in the total population as p1 and p2 = 1 − p1 and the frequencies of Ai only among the controls (unaffected) and cases (affected) as pU i and pA i , respectively. Note that, trivially, it holds pU 2 = 1. If a variant exhibits more than two alleles, it can still be treated as biallelic by re-defining one of the alleles, say A2, to denote any allele that is not A1. Further denote the fraction of the cases among the individuals with genotype (Ai, A j) at the causal variant by πi j. Since genotypes are non-ordered, it is assumed that π12 = π21. Further it is assumed that all markers are also biallelic. For a given marker, the two alleles are denoted by M1 and M2 with q1 and q2 the corresponding frequencies in the total population. Similarly denote the frequencies of M1 and M2 only among the controls (“unaffected”) as qU 2 and 1 and qA only among the cases (“affected”) as qA 2 , respectively. Note that, again trivially, it holds 1 and qU 3 1 + qU qU 2 = qA 1 + qA 2 = 1. In the following sections until Section 2.6 it is assumed that the marker alleles are in Hardy- Weinberg equilibrium (HWE). In Section 2.6 we present adjustments of our test (defined in (2) below) aimed at situations where the assumption of HWE is violated. Additionally, throughout the paper it is assumed that genotyping errors can be neglected and that the samples of cases and controls are random and independent selections from the cases and controls in the given population of interest. The total sample consists of N independent individuals of which there are R cases and S controls, where R and S are assumed to be fixed and non-random. In other words, for biallelic markers we observe a total of 2R and 2S alleles for the cases and the controls, respectively. Let R0 and R1 denote the observed counts of genotypes (M1, M1) and (M1, M2) among the cases, respectively, and let S0 and S1 denote the corresponding genotype counts among the controls. We then estimate the frequencies of the allele M1 among the cases and the control by ˆqA 2 (2R0 + 1 = 1 R1)/R and ˆqU 2 (2S0 + S1)/S, respectively. We denote the estimates of the complementary frequencies as ˆqA 1 = 1 1 and ˆqU 2 = 1 − ˆqU 1 . 2 = 1 − ˆqA 2.2 Novel test statistic Inspired by the binomial test of equality of allele frequencies of cases and controls (ABT) for testing H0 : qU 1 , we propose a novel test statistic for testing H0 against H1. We define the statistic as 1 versus H1 : qU 1 = qA 1 (cid:54)= qA W ˆπ = √ m( ˆqU 1 − ˆqA 1 ) (cid:112) ˆq1, ˆπ ˆq2, ˆπ , (1) (2) where m = 2Nλ (1 − λ ) with λ = R/N (i.e. the fraction of the cases in the sample) and ˆqi, ˆπ = ˆπ ˆqA i + (1 − ˆπ) ˆqU i , i = 1, 2, with ˆπ denoting an estimate of the disease prevalence π. This latter estimate cannot be obtained from the case-control data and thus additional external information is required for the estimation. For many diseases suitable estimates of the population prevalence are readily available from literature or other sources such as national registries (see also Section 2.4). √ Assuming that ˆπ is an asymptotically consistent estimator of π, the denominator in W ˆπ q1q2 as the number of observations in the case-control sample and converges in probability to used for estimating π increase to infinity. Consequently, by Slutsky’s lemma and the central limit theorem, it follows that W ˆπ is asymptotically standard normally distributed under the null hypothesis H0. 
In other words, rejecting H0 whenever |W ˆπ | > ξα/2, where ξα/2 is the upper α/2- quantile of the standard normal distribution, yields a test of H0 against H1 with an asymptotic level of significance of α. 4 Motivation The motivation for the new statistics comes from the following equality, which we derive in Appendix A, that reads qU 1 − qA 1√ q1q2 = ∆ pU 1 − pA 1 √ p1 p2 , (3) √ where ∆ is a common measure for the degree of LD between a marker and a causal variant, de- p1 p2q1q2, where Di j = P(AiM j) − piq j and P(AiM j) denotes the frequency fined as ∆ := D11/ of the joint haplotype (Ai, M j) at the causal variant and the marker in the total population [9]. The equality (3) shows how the relative difference between the allele frequencies among the cases and the controls at the causal variant (the quotient on the right-hand side of (3)) is passed on to the neighboring markers through the multiplication by ∆. An immediate consequence of (3) is this. If the marker allele frequencies among the controls and among the cases are unequal (qU 1 (cid:54)= 0), then ∆ must be non-zero, and vice versa [10, 11]. Since the goal of an asso- ciation analysis is to find markers for which ∆ (cid:54)= 0, it follows that testing the null hypothesis H0 : ∆ = 0 against the alternative hypothesis H1 : ∆ (cid:54)= 0 is equivalent to testing H0 : qU 1 = qA 1 against H1 : qU 1 . Since typically only marker data is available, the equation (3) naturally suggests to use a test statistic that is of the form of the left-hand side of (3). Hence the new statistic W ˆπ . 1 (cid:54)= qA 1 − qA 2.3 Asymptotic power functions: A comparison In this section we present an (asymptotic) power comparison of W ˆπ and several commonly used tests of equality of allele frequencies as well as the classical chi-square test statistic denoted as Tχ 2 [3]. A commonly used frequency-based tests utilize the statistic T defined as T = (cid:113) √ m( ˆqU 1 − ˆqA 1 ) 2 + (1 − λ ) ˆqA 1 ˆqA 2 . λ ˆqU 1 ˆqU Under the null hypothesis of no association T is asymptotically standard normally distributed. In addition to T , two other tests of association are popularly used. Namely the Cochran- Armitage trend test [2], for which we denote the statistic by TCATT, and the LRM score test where the observed minor allele count is the independent variable [1]. Their powers are compared with that of W ˆπ using a theoretical argument. For the sake of brevity, in this paper we only focus on the additive model. However, our in- vestigation (not shown) has indicated that the presented conclusions remain qualitatively true for other genetic models including the dominant and the recessive models as well as other parameter settings. 5 Power comparison between W ˆπ and T : theory It is easy to see that the statistics T and W ˆπ are closely linked. It is straightforward to show that W ˆπ = ˆQ−1 ˆπ T , where ˆQ ˆπ = (cid:112) ˆq1, ˆπ ˆq2, ˆπ 1 ˆqU 2 + (1 − λ ) ˆqA 1 ˆqA 2 . (cid:113) λ ˆqU 1 qU 2 + (1 − λ )qA Assuming that ˆπ is asymptotically consistent and that the fraction of the cases λ is fixed, ˆQ ˆπ converges in probability to Q, where Q2 = q1q2/(λ qU 2 ), as m and as all of the sample sizes underlying ˆπ go to infinity. Under the alternative hypothesis it holds Q (cid:54)= 1, thus T and W ˆπ do not have equal power. However, they do have the same level since under the null 1 = q1). In fact, under the null hypothesis ˆQ ˆπ converges hypothesis it holds Q = 1 (since qU in probability to 1 irrespective of the asymptotic consistency of ˆπ. 
More specifically, if ˆπ is replaced in ˆQ ˆπ by any value δ ∈ (0, 1), the resulting fraction ˆQδ still converges in probability to 1, meaning that for any δ ∈ (0, 1) in place of ˆπ the corresponding test is valid (see Section 2.4 for further discussion). 1 = qA 1 qA Further investigating the link between W ˆπ and T , an application of the central limit theorem and Slutsky’s lemma yields that for m and the numbers of observations underlying ˆπ all going to infinity it holds √ √ √ W ˆπ = m( ˆqU 1 − ˆqA 1 ) (cid:112) ˆq1, ˆπ ˆq2, ˆπ = m(qU √ 1 − qA 1 ) q1q2 + Op(1) = m∆(pU √ 1 − pA 1 ) p1 p2 + Op(1), where Op(1) denotes a term that is bounded in probability. Note that the last equality follows √ from (3). Consequently, with B = (pU p1 p2, it holds that 1 − pA 1 )/ W ˆπ = √ m∆B + Op(1) and T = √ m∆BQ + Op(1). (4) √ √ √ m∆B and In other words, for large m the power functions of the tests based on W ˆπ and T are respectively m∆BQ. Note that there are three types of quantities at play governed by the terms m is sample-specific and is the same for every marker. ∆, on the other hand, here. The term expresses the degree of LD between the marker and the causal variant and is therefore marker- specific, and so is the term Q. Finally, B is specific to the causal variant and is therefore the same for all markers that are in LD with the same causal variant. In terms of power, the asymptotic approximations in (4) show that for each marker the p- values based on T are weighted by their sample allele frequencies via Q, where Q (cid:54)= 1 under In the case of W ˆπ , however, the term Q is absent, which means the alternative hypothesis. that there is no frequency-based weighing and thus the corresponding p-values are much more 6 comparable over markers with different allele frequencies, especially if these markers are in LD with the same causal variant and thus have the same value for B. 1 qA 1 qU 1 and qA 2 , then qU 2 < q1q2 < qA Suppose that the minor allele M1 is positively correlated with the risk allele at the variant. 1 < q1 < qA Then, M1 will be enriched among the cases, and thus qU If 1 < 1 qA 2 , because the function p → p(1 − p) is concave and symmetric around 1 2 . Recall that λ equals the fraction of cases. It holds Q ≤ 1 if and only if λ ≤ λ0, with λ0 = (qA 2 ), at which point W ˆπ is more powerful than T . In most practical situations λ = 1 2 (more controls than cases). Although, in practice there might be markers for which λ > λ0, this will not be common. For M1 negatively correlated with the risk allele at the variant, the power ordering between W ˆπ and T is reversed. However, settings with strong or even mild negative correlations between the minor allele M1 and the minor risk allele at the causal variant are not generally possible. 1 qU 2 (balanced design) or λ < 1 2 − q1q2)/(qA 2 < q2 < qU 2 . 2 − qU 1 qA 1 qA Power comparison of W ˆπ with T and Tχ 2: numerical results In the top row plots in Figure 1 we provide a numerical comparison of W ˆπ with T and Tχ 2 in terms of their power performances. The asymptotic power functions of W ˆπ (continuous line), T (dashed line) and Tχ 2 (dotted line) are shown as a function of q1 (left plot) and of ∆ (right plot). The number of cases and controls was put at R = S = 10, 000 and the significance level was set to α = 5 × 10−8. Notice that in both plots the power functions of T and Tχ 2 almost completely overlap, which means that the two statistics have almost identical power. 
Moreover, the power functions of T and Tχ 2 lie fully below that of W ˆπ , which shows W ˆπ to be more powerful than both T and Tχ 2 in the considered setting. In terms of MAF, W ˆπ is the superior performer for a majority of values. Unsurprisingly, the degree of superiority of W ˆπ weakens with increasing q1 until the ordering flips for MAF near 0.5, when the statistics T and Tχ 2 both become (slightly) more powerful than W ˆπ . In the bottom row plots in Figure 1 the power functions for W ˆπ (continuous lines) and T (dashed lines) are given for the same setting as before except here the design is unbalanced with the number of cases and controls set equal to R = 6000 and S = 16, 000 (left) and to R = 16, 000 and S = 6000 (right). It shows the power of T to be dependent on the fraction of the cases in the sample. Clearly, the more unbalanced in favor of the controls the design is the more the corresponding test favors markers with large MAFs. As discussed in the theoretical part, the power function based on W ˆπ is constant as a function of q1, while the power functions of T and Tχ 2 increase with q1. These properties drive the behavior of these statistics for various MAFs. It explains why T and Tχ 2 both favor markers with large MAFs at the cost of those with smaller MAFs, and why W ˆπ does not exhibit such behavior. A direct consequence of these properties is that the p-values based on W ˆπ are much 7 more comparable across markers with different MAFs. Figure 2 (right) shows the power functions for a more prevalent disease. A qualitatively similar behavior has been observed under a number of alternative settings (results not shown). Power comparison of W ˆπ with TCATT and the LRM score test It has been shown that for the additive model and under the assumption of HWE, the LRM score test (with the observed minor allele count as independent variable) is equivalent to the CATT [2]. Consequently, in this setting any test that is more powerful than the CATT is also more powerful than the score test, and vice-versa. In other words, it is sufficient to compare the powers of the test based on W ˆπ and the CATT test. Under HWE, for the CATT test statistic under the additive model (TCATT(1/2)) it holds T 2 = TCATT 2(1/2) ˆq1,p ˆq2,p 2 + (1 − λ ) ˆqA 1 ˆqA 2 λ ˆqU 1 ˆqU , with ˆq1,p and ˆq2,p the pooled sample estimators for the q1 and q2 allele frequencies [2]. This in turn yields W 2 ˆπ = TCATT 2(1/2) q1,pq2,p q1,π q2,π + oP(1), (5) with oP(1) denoting a term that converges in probability to zero. Under the null hypothesis of no association, the fraction term in (5) equals 1, meaning that the tests based on Wπ and TCATT(1/2) have the same asymptotic level of significance. Under the alternative hypothesis, assuming that the minor allele M1 is positively correlated with the causal variant (i.e. the sample of cases is enriched with carriers of the risk alleles at the causal variants) and the prevalence of cases is higher in the pooled sample than in the population, W ˆπ is more powerful than TCATT(1/2). This is because then the fraction term in (5) is expected to exceed one, which leads to q1,p > q1,π and q1,pq2,p > q1,π q2,π . Moreover, under this setting W ˆπ is also more powerful than the LRM score test. Take-away message of the comparisons The theoretical and the numerical results presented in this section show that under HWE and for the additive model, the test based on W ˆπ is, under many relevant situations, more powerful than the test based on T , TCATT, Tχ 2 and the LRM score test. 
Moreover, the power functions for W ˆπ are constant, indicating that the test does not favor markers with high MAFs, contrary to the other test considered. 8 Figure 1: Power functions for W ˆπ (continuous lines), T (dashed lines), Tχ 2 (dotted line). Additive model with p1 = 0.03, π11 = 0.10, π22 = 0.02, π12 = 0.06. Top row: R = S = 10, 000 (balanced design). Bottom row left: R = 6000, S = 16, 000 (unbalanced design). Bottom row right: R = 16, 000, S = 6000 (unbalanced design). 9 0.00.10.20.30.40.50.00.20.40.60.81.0q1, Delta=0.20, p1=0.03power0.080.100.120.140.160.180.200.00.20.40.60.81.0Delta, q1= 0.05, p1=0.03power0.00.10.20.30.40.50.00.20.40.60.81.0q1, Delta=0.20, p1=0.03power0.00.10.20.30.40.50.00.20.40.60.81.0q1, Delta=0.20, p1=0.03power 2.4 Robustness of W ˆπ against misspecification of π As mentioned, W ˆπ relies on an external source for an accurate estimate of the population preva- lence π which cannot be directly derived from the case-control data at hand. Fortunately, the information on disease prevalence often can be acquired from literature or relevant national reg- istries (e.g. disease prevalences in the Netherlands are published by the National Institute for Public Health and the Environment). If no reliable estimate of π can be obtained, a reasonable value can be guessed by relevant experts. Nonetheless, even if good estimates are available, it is relevant to study the robustness of the performance of W ˆπ with respect to the quality of the esti- mate of π. For a fixed δ ∈ (0, 1) define the test statistic Wδ to be equal to W ˆπ evaluated at ˆπ = δ . Under the null hypothesis, where qU 1 , the denominator of Wδ converges in probability to √ q1q2 irrespective of the value of δ . Consequently, the type I error of the test based on W ˆπ is insensitive to the quality of the prevalence estimate. However, the power of the test is dependent on the estimate for the prevalence. In Figure 2, on the left, the asymptotic power functions of T and Wδ =π , Wδ =0.05, Wδ =0.1, Wδ =0.2, Wδ =0.3 as functions of q1 are plotted. The value of π was set at π = 0.0224, while the other parameters were set at ∆ = 0.20, p1 = 0.03. The figure shows that for δ equal to or near π the power functions are more or less constant with respect to the MAF, while for values of δ far from π the power functions do vary with the MAF (they increase). For values of δ < π (underestimation of the prevalence) the power function of the test is slightly above that for δ = π, although the difference is small and diminishes with increasing allele frequency q1. 1 = qA In Figure 2 (right) the asymptotic power functions correspond to a setting of a more common disease, namely p1 = 0.2, π11 = 0.40, π12 = 0.10, π22 = 0.25, which yields π = 0.16, and R = S = 4000. The power curves are for the test-statistic Wδ with δ = 0.01, δ = 0.05, δ = 0.10, δ = π, δ = 0.20 (ordered top to bottom) and for T (dotted line). The plot shows a flat power function for Wδ =π , a slightly decreasing function for δ < π and a slightly increasing function for δ > π. It also shows the robustness of the power of Wδ against minor misspecification of π for both overestimated and underestimated π. 2.5 Simulation study: Type I error and power for finite samples By their design, the p-values of the considered tests are derived using the asymptotic normality of the underlying test statistics W ˆπ and T . In this section we study the finite sample behavior of W ˆπ , including its robustness against departures from HWE. 
In an applied setting, while other factors such as the MAF also play a role, it is the sample size that is the primary driving factor of the accuracy of the asymptotic normal approximation underlying the p-values of the tests. The primary goal of the simulation study is to investigate the type I error behavior in a finite 10 Figure 2: Left plot: Power functions of T (dotted line) and Wδ =0.01, Wδ =π , Wδ =0.05, Wδ =0.1, Wδ =0.2, Wδ =0.3 (solid lines, ordered top to bottom) as a function of q1, with p1 = 0.03, π = 0.0224, π11 = 0.10, π12 = 0.06, π22 = 0.02. Right plot: Power functions of T (dotted line) and Wδ =0.01, Wδ =0.05, Wδ =0.1, Wδ =π , Wδ =0.2, Wδ =0.3 (solid lines, ordered top to bottom) as a function of q1, with p1 = 0.2, π = 0.16, π11 = 0.40, π12 = 0.10, π22 = 0.25. 11 0.020.040.060.080.100.40.50.60.70.80.91.0q1, Delta=0.20, p1=0.03power0.020.040.060.080.100.00.20.40.60.81.0q1, Delta=0.20, p1=0.2power sample setting for a variety of MAFs ranging between 0.03 and 0.5, which was the range of MAFs observed in the major depression disorder data set analyzed in Section 3. In a typical GWAS a whole range of MAFs are present, which means that an appropriate measure of the expected type I error, and the one used in our simulation study, is the weighted average of the observed type I errors over the entire range of MAFs with weights equal to the expected relative representation of each MAF in the study. For each MAF we simulated the marker alleles for R cases and for S controls with ∆ = 0 In terms of the ratio of cases to controls (i.e. under the null hypothesis of no association). we considered two scenarios, namely the balanced design with equal numbers of cases and controls (R = S) and an unbalanced design with the number of controls twice the number of cases (S = 2R). We chose to focus on a setting with an excess of controls as it is typically easier to find individuals from the control population. The selected parameter values can be seen in Table 2.5. Under each parameter setting we simulated 5 billion data sets and for each of them we calculated the statistics T , Wδ =0.05, Wδ =0.1, Wδ =0.2, Wδ =0.3 and Wδ =0.4. The tests were performed using the asymptotic standard normal approximation at the significance level α = 5 × 10−8, a value that is typically used in GWAS. The observed type I error for each statistic and each selected MAF was calculated. Note that the number of simulated data sets (billions) had to be very high given the low level of significance, which in turn had to be set low in order to emulate a GWAS setting. The overall type I error estimate was computed as a weighted average of the (estimated) type I errors at a dense grid of MAF values with weights based on the expected relative frequencies of each MAF. Given that in the real-life data set analyzed in Section 3 the observed distribution of the MAFs was very close to uniform between 0.03 and 0.5, it was therefore deemed sufficient to calculate the overall type I error as a simple (i.e. unweighted) mean of the individual simulated error rates at the grid covering the interval from 0.03 to 0.5 (with steps of 0.005). Type I error for finite samples The results of the simulation studies for the type I error are presented in Table 2.5. It shows the ratios of the observed type I errors and the significance level α for fixed values of MAF (in all but the last column of the table). 
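For readers who want to reproduce the flavor of this experiment, the sketch below estimates the same kind of type-I-error ratio by Monte Carlo, averaging rejection rates over a uniform MAF grid as described above. It is deliberately scaled down: the paper's α = 5 × 10⁻⁸ with billions of replicates is not casually reproducible, so a much larger α and far fewer replicates are used (which also means the far-tail inflation discussed below will not be visible). The statistic plugged in is a generic allele-frequency-difference z-test, used only as a stand-in for T and W π̂, whose exact definitions appear earlier in the paper (outside this excerpt).

```python
# Scaled-down sketch of the type I error simulation described above.
# Caveats: alpha and the number of replicates are far smaller than in the paper,
# and `allele_test_z` is a generic allele-frequency-difference z-statistic used
# as a stand-in for the statistics T and W defined earlier in the paper.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def allele_test_z(case_minor, control_minor, R, S):
    """Two-sample z-statistic for the difference in minor-allele frequencies
    (2R and 2S alleles observed among cases and controls, HWE assumed)."""
    qa, qu = case_minor / (2 * R), control_minor / (2 * S)
    qp = (case_minor + control_minor) / (2 * R + 2 * S)
    se = np.sqrt(qp * (1 - qp) * (1 / (2 * R) + 1 / (2 * S)))
    return (qu - qa) / se

def type1_error_ratio(statistic, R, S, alpha=1e-3, n_rep=200_000):
    """Unweighted mean rejection rate over a uniform MAF grid, divided by alpha,
    mirroring the averaging scheme used in the paper (two-sided test assumed)."""
    z_crit = norm.ppf(1 - alpha / 2)
    maf_grid = np.arange(0.03, 0.5001, 0.005)
    rates = []
    for q1 in maf_grid:
        # Under the null, cases and controls share the same allele frequency q1.
        case_minor = rng.binomial(2 * R, q1, size=n_rep)
        control_minor = rng.binomial(2 * S, q1, size=n_rep)
        z = statistic(case_minor, control_minor, R, S)
        rates.append(np.mean(np.abs(z) > z_crit))
    return np.mean(rates) / alpha   # ratio of observed error to alpha, as in Table 1

print("balanced,   R = S = 5000      :", type1_error_ratio(allele_test_z, 5000, 5000))
print("unbalanced, R = 3000, S = 6000:", type1_error_ratio(allele_test_z, 3000, 6000))
```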
Given the small value of α, we are in fact verifying the accuracy of the far-tail asymptotic normal approximation of the true distributions of the test statistics. The estimates of the expected overall type I error in a GWAS are given in the last column of the table. The simulation results show that for T the observed type I error is slightly inflated for the unbalanced design, while for the balanced designs the statistic performs quite well. The table 12 also shows that the statistic W ˆπ exhibits a slightly inflated overall type I error, although it is worth noting that the inflation is considerably stronger for markers with MAFs below 0.05 and small values of δ and it steadily decreases with increasing sample size. This behavior appears to be a consequence of the low accuracy of the normal asymptotic approximation in the far tails of the distribution. Seeing that the results show a decreasing trend of the inflation with sample size, a remedy would be an increase of the underlying sample size. Crucially, despite the sub-optimal behavior of W ˆπ for the very small MAFs, it needs to be stressed that the power gains achieved by W ˆπ relative to the commonly used tests are not solely or even primarily due to the inflated type I error, since with growing sample size the type I error inflation vanishes while the superior power performance remains. With R = S = 50, 000 the inflation for δ = 0.05 is essentially gone. Besides the increased sample size, an alternative remedy of the type I error inflation is to use a larger value for δ when calculating Wδ . In other words, aim to ”overestimate” the population prevalence of cases if π is small. This can be an especially effective solution if used only for markers with low MAF (e.g. below 0.1). Unsurprisingly of course, this ”overestimation” approach does come at a price in terms of decreased power. A further alternative option is to obtain the p-values for W ˆπ using a permutation approach. This can be done either for all markers or only for the markers for which the type I error is expected to be inflated (typically those with small MAF). Power (type II error) for finite samples Besides the type I error investigation, we also compared the power performances of the various tests in a finite sample size setting. We simulated data under a number of parameter combinations replicating each test 5 million times under each scenario. The significance level was again set at 5 × 10−8 and the empirical power of each test was calculated as the fraction of time the test rejected the null at this level of significance separately for each scenario. The analysis showed that the finite sample empirical power functions are very similar to the asymptotic power functions (plots not shown). 2.6 Adjustments under Hardy-Weinberg Disequilibrium (HWD) Many existing tests, including ours so far, implicitely rely on the validity of the assumption of Hardy-Weinberg equilibrium (HWE). In applications where such assumption is expected not to be appropriate, the usual approach is to rely on the Cochran-Armitage trend test (CATT) and its robustness against departures from HWE. Advantageously, our newly proposed test statistic W ˆπ as well as T can both be robustified against departures from HWE. In [2, 12] an adjusted test statistic THWD is described. It is found by replacing the estimated 13 Table 1: Type I errors divided by α = 5 × 10−8. The column total shows the weighted average of type I errors over the various MAFs. 
0.03 0.05 0.10 0.20 0.30 0.40 0.50 total q1 R = S = 5000 T 0.97 Wδ =0.05 6.95 Wδ =0.10 5.41 Wδ =0.20 3.01 Wδ =0.30 1.87 Wδ =0.40 1.16 2R = S = 6000 T 2.25 Wδ =0.05 5.83 Wδ =0.10 4.24 Wδ =0.20 2.27 Wδ =0.30 1.25 Wδ =0.40 0.84 R = S = 10, 000 T 0.91 Wδ =0.05 3.73 Wδ =0.10 3.02 Wδ =0.20 2.05 Wδ =0.30 1.44 Wδ =0.40 1.07 R = S = 20, 000 T 1.02 Wδ =0.05 2.04 Wδ =0.10 1.79 Wδ =0.20 1.37 Wδ =0.30 1.10 Wδ =0.40 1.02 1.02 3.88 3.17 2.21 1.40 1.11 1.84 3.28 2.58 1.61 1.12 0.94 0.93 2.31 1.99 1.51 1.19 1.02 1.05 1.74 1.58 1.34 1.16 1.06 1.04 2.08 1.84 1.46 1.21 1.03 1.32 1.79 1.60 1.23 1.02 0.95 1.00 1.58 1.43 1.22 1.06 0.98 0.92 1.22 1.14 1.03 0.99 0.95 1.03 1.35 1.24 1.13 1.07 1.02 1.10 1.22 1.16 1.05 1.03 0.98 1.14 1.27 1.20 1.18 1.15 1.12 1.02 1.07 1.05 1.04 1.02 1.02 14 1.06 1.16 1.16 1.12 1.08 1.06 1.06 1.14 1.07 0.98 0.98 0.97 0.92 0.97 0.96 0.93 0.91 0.92 1.06 1.10 1.09 1.08 1.08 1.05 1.02 1.03 1.00 1.00 0.99 0.97 0.94 0.98 0.98 0.96 0.92 0.93 1.15 1.11 1.12 1.12 1.11 1.12 1.12 1.09 1.09 1.08 1.09 1.10 0.93 0.93 0.93 0.93 0.93 0.93 0.95 0.91 0.90 0.89 0.89 0.89 1.10 1.09 1.08 1.06 1.05 1.05 1.06 1.06 1.06 1.06 1.06 1.06 1.02 1.62 1.47 1.25 1.10 1.02 1.16 1.46 1.32 1.10 0.99 0.95 1.05 1.33 1.25 1.15 1.07 1.04 1.04 1.17 1.14 1.09 1.06 1.04 11 and qA 1 qU 2 + qA 2 and qA 11 − (qA 1 qA 1 )2, where qU 2 in the denominators of T by suitable estimators of qU products qU 1 )2 1 qA and qA 11 denote the frequency of the genotype (M1, M1) among the controls and among the cases, respectively. The adjustment follows from expressions for the variances of ˆqU 1 derived without the assumption of HWE. The unknown frequencies in these expressions are estimated using the corresponding sample frequencies. The asymptotic normality of the (adjusted) test statistic THWD under the null hypothesis follows by the central limit theorem. Conveniently, the test statistic W ˆπ can be adjusted in an analogous way. To that end we define 1 and ˆqA 11 − (qU 2 + qU 1 qU WHWD = (cid:113) √ m( ˆqU 1 − ˆqA 1 ) , ˆq1, ˆπ ˆq2, ˆπ + ( ˆq11, ˆπ − ˆq2 1, ˆπ ) where ˆq11, ˆπ is an estimate of q11 defined analogously to ˆq1, ˆπ of (2). We also performed simulations that compared the performance of the adjusted test statistic WHWD with the test TCATT(1/2) for different values of δ , to investigate the combination of both the robustness of the test in case of a misspecified population prevalence and deviation from HWE. The results (Appendix B) show that in the considered settings the test based on WHWD is slightly more powerful than the CATT. 2.7 Multiple causal variants Until this point we focused on testing for association between markers and a single causal vari- ant. We showed that the p-values of W ˆπ can be used for identification of markers in strong LD with the causal variant in a way that does not preferentially select markers with high MAF, a property that is rooted in the equation (4), where the term B on the right-hand side is the same for all considered markers. Unfortunately, the argument only applies to the situations with a single variant, given that the term B is causal-variant specific. Since the power function and the p-value of W ˆπ strongly depends on the value of B, only the p-values of markers that are in LD with the same causal variant can be directly used as measures of the degree of LD with one of the causal variants. In practice, this means that one needs to be careful when comparing p-values of markers that are located far apart on the genome, especially if they are located on different chromosomes. 
This holds for all tests mentioned in this paper. For markers that are in LD with multiple causal variants it is in general very complicated to quantify the corresponding effect towards the p-values of all of these tests. 15 3 Application The newly proposed test based on W ˆπ was applied to a case-control data set to identify the ge- nomic regions that confer risk for and protection against major depressive disorder (MDD). To this day, the efforts to identify such regions have not been very successful [13, 14]. While this might be partially due to a lack of consensus on the exact definition of the condition (MDD) it- self, the possibility that the disease is influenced by many genetic loci each with a small marginal effect could be an even better explanation for the lack of success. Then, in order to detect these loci, a more sensitive statistical test appears to be needed. We believe that our novel test statistic might be able to at least partially answer that call. The analyzed MDD data set was primarily collected using two databases collected in the the individuals affected by MDD, came primarily from the Netherlands [13]. The cases, i.e. purpose-specific Netherlands Study of Depression and Anxiety (NESDA) database, while the controls came from the Netherlands Twin Registry (NTR), a database containing primarily data about twin siblings and their parents. In order to achieve sufficient independence among the controls, for each pedigree from NTR a single individual was randomly selected, thereby making all individuals in the data set (biologically) unrelated and thus statistically independent. The total number of cases and controls equaled 2306 and 1027, respectively. In the analysis, only markers for which the sum of the minor allele frequency among the cases and the controls was at least 0.02 were included, resulting in a data set with over 600K markers. The population prevalence of MDD in the relevant population is accurately estimated as ˆπ = 0.15 [13], which is the value we used in the test statistic W ˆπ . For all markers in the database the p-values corresponding to W ˆπ and T were computed using their asymptotic distribution. Figure 3 shows a scatter-plot of the p-values for the two test statistics. The markers were divided into three categories based on their MAFs in the sample of controls. The three plots show the degree of dissimilarity of the two statistics. The observed general pattern is that for high MAF the p-values of the two tests are highly similar and with decreasing MAF they become increasingly dissimilar. The analysis yielded several markers with p-values below the threshold α = 5.0 × 10−8. With the exception of chromosome 15, all other chromosomes had no more than a single sig- nificant marker. On chromosome 15 we identified 6 markers with p-values below α. These were the markers with RS numbers rs10152733 (MAFs 0.0732 and 0.0245 among the cases and controls, respectively), rs3784362 (0.0212/0.00734), rs1463912 (0.0249/0.00831), rs7168666 (0.0216/0.00734), rs1820416 (0.0210/0.00734), rs4777166 (0.0202/0.00642). Moreover, two of these markers, namely rs10152733 and rs1463912, were also identified as significant by the statistic T . Figure 6 shows a Manhattan plot of the − log(p-value) based on W ˆπ=0.15 for chro- 16 Figure 3: Scatter-plots of the p-values for the test statistics T against those based on W0.15. From left to right: ˆqU 1 ∈ (0.10, 0.25], ˆqU 1 ∈ (0.25, 0.50]. 1 ≤ 0.10, ˆqU mosome 15. [13] contains the results of 10 GWAS studies for MDD. 
The data for these studies were collected in relation to various conditions, not solely MDD. In addition to MDD, these included recurrent MDD, alcoholism or nicotine dependence, and others. Some of the studies showed significant association between genomic regions and MDD, however, none of these were repli- cated by an independent study. Among the markers identified by our analysis none have been found by one of the studies in [13]. In [14] a genome-wide association meta-analysis based in 135,458 cases and 344,901 controls was conducted. They identified 44 loci. None of these were replicated by our analysis. 4 Discussion Various strategies for selecting markers for follow-up studies using the case-control framework have been proposed in the literature. As discussed in this paper, these include the allele-based test of association which assesses the difference of marker allele frequencies between the cases and the controls, the LRM score test, the chi-square test for association, and the Cochran- Armitage trend test. An often observed shortcoming of the existing strategies is their preference for markers with high MAFs at the expense of markers with low MAFs. In this paper a novel allele-frequency-based test statistic for finding association between ge- netic markers and a disease of interest is proposed. A competitive advantage of the statistic is that it does not favor markers with high MAFs. In light of the known and suspected impor- 17 llllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllll0e+001e−052e−053e−054e−055e−050.000000.000050.000100.00015p−values Wp−values Tllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllll0.000150.000150.000100.000100.000050.000050.000000.000000e+000e+001e−051e−052e−052e−053e−053e−054e−054e−055e−055e−05p−values Wp−values Wp−values Tp−values T Figure 4: Manhattan plot of the − log(p-value) based on W ˆπ=0.15 for chromosome 15. tance of rare alleles, this means that our new test is much more suitable to be used to asses the association of genetic markers with a disease based on the observed p-values. An additional advantage of the new test is that the statistic can be efficiently computed using basic summary statistics of the case-control sample. We derived the asymptotic power func- tion of the test, which allows for efficient computation of the associated p-values, an important strength especially compared to approaches that rely on permutation schemes in order to obtain the p-values. We studied the power performance of the newly proposed test and compared it with a number of commonly used alternative tests under numerous scenarios. The obtained results were favor- able for the new test. It was observed that compared to the existing tests the new test possesses superior power for markers with low MAF. This behavior is unsurprising in light of the fact that the power functions of the new tests are (nearly) constant for various marker allele frequencies, while the power functions of the competing tests generally decline with decreasing MAF. The calculation of the newly proposed test statistic requires the estimation of the prevalence of the disease π. This value cannot be directly obtained from the sampled case-control data alone and the estimation requires external data. 
This, however, is not a major obstacle for the usage of the test since for many diseases suitable estimates of population prevalence are readily available from sources such as national registries. Furthermore, we showed that the novel statistic is fairly robust against misspecification of the prevalence parameter, which means that even when an accurate estimate of the prevalence is not available for the population of interest, an inaccurate (over-)estimate (e.g. based on a related population) can be used without substantially harming the power of the resulting test. 18 02468SNPs chromosome 15−log(p−value) Besides power, we also studied the type I error of the new test in the context of a finite sample setting. The simulations give evidence that the type I error of the new test is inflated for small MAFs and low prevalence π. The specific degree of inflation depends on the underlying sample size. However, we observed that the inflation decreases with increasing sample size and therefore cannot be the reason for the observed power gains of the new test. Moreover, our simulation showed that the overall type I error for the new test is expected to be only slightly inflated in the context of a genome-wide study with a broad range of MAF values. For many traits, only a small proportion of the variability in the population can be explained by causal variants that have been identified so far [15]. One possible explanation for this ”miss- ing heritability” is the presence of low-frequency variants with relatively strong effect on disease risk. Indeed, rare variants found by re-sequencing have already been described to affect complex diseases [16]. Given the properties of our test statistics, and in the light of the current interest in detecting association between complex phenotypes and low-frequency variants and locating causal variants with small minor allele frequencies [17], we believe that the novel method pre- sented in this paper could prove to be a very useful addition to the landscape of methods available for tackling these important problems of genetics. Acknowledgement We express gratitude to the Netherlands Twin Registry (NTR) and de Nederlandse Studie naar Depressie en Angst (NESDA) for making the data available for application of the theory in this paper. The data was collected with support from NWO (904-61-090; 904-61-193; 480- 04-004; 400-05-717; SPI 56-464-14192), Center for Medical Systems Biology (NWO Ge- nomics); the EU (EU/QLRT-2001-01254); Geestkracht program of ZonMW (10-000-1002), Neuroscience Campus Amsterdam (NCA) and the EMGO+ institute; and institutes involved in NESDA (VU University Medical Centre, Leiden University Medical Centre, GGZinGeest, Rivierduinen, University Medical Centre Groningen, GGZ Lentis, GGZ Friesland, GGZ Dren- the); the Genetic Association Information Network (GAIN); ARRA grants 1RC2 MH089951-01 and 1RC2MH089995-01; FP7-HEALTH-F4-2007-201413; European Research Council (ERC- 230374). Conflict of interest statement The authors have declared no conflict of interest. 19 References [1] D.J. Balding. A tutorial on statistical methods for population association studies. Nat Rev Genet, 7:781–791, 2006. [2] G. Zheng, Y. Yang, X. Zhu, and R. Elston, editors. Analysis of Genetic Association Studies. Springer, 2012. [3] P. D. Sasieni. From genotypes to genes: Doubling the sample size. Biometrics, 53:1253– 1261, 1997. [4] J. A. Wakefield. Reporting and interpretation in genomewide association studies. Int J Epidemiology, 37:641–53, 2008. [5] J. A. Wakefield. 
Bayes factors for genome-wide association studies: comparison with p-values. Genet Epidemiol, 33:79–86, 2009. [6] U. Stromberg, J. Bjork, P. Vineis, K. Broberg, and E. Zeggini. Ranking of genome-wide association scan signals by different measures. Int J Epidemiology, 38:1364–1373, 2009. [7] S. Wacholder, S. Chanock, M. Garcia-Closas, L. El-ghormli, and N. Rothman. Assessing the probability that a positive report is false: an approach for molecular epidemiology studies. J Nat Cancer Inst, 96:434–41, 2004. [8] U. Stromberg, J. Bjork, K. Broberg, F. Mertens, and P. Vineis. Selection of influential genetic markers among a large number of candidates based on effect estimation rather than hypothesis testing: an approach for genome-wide association studies. Epidemiology, 19: 302–8, 2008. [9] B. Devlin and N. Risch. A comparison of linkage disequilibrium measures for fine-scale mapping. Genomics, 29:311–322, 1995. [10] L. Kruglyak. Prospects for whole-genome linkage disequilibrium mapping of common disease genes. Nature genetics, 22:139–144, 1999. [11] J. K. Pritchard and M. Przeworski. Linkage disequilibrium in humans: Models and data. AJHG, 69:1–14, 2001. [12] D. J. Schaid and S. J. Jacobsen. Biased tests of association: comparison of allele frequen- cies when departing from Hardy-Weinberg proportions. American Journal of Epidemiol- ogy, 149(8):706–711, 1999. 20 [13] D. I. Boomsma, G. Willemsen, and et al. Genome-wide association of major depression: description of samples for the gain major depressive disorder study: NTR and NESDA biobank projects. Eur J Hum Gen, 16:335–342, 2008. [14] N.R. Wray, S. Ripke, et al., and the Major Depressive Disorder Working Group of the Psychiatric Genomics Consortium. Genome-wide association analyses identify 44 risk variants and refine the genetic architecture of major depression. Nat Genetics, 50:668– 681, 2018. [15] T. A. Manolio, F. S. Collins, and et al. Finding the missing heritability of complex diseases. Nature, 461:747–753, 2009. [16] N. J. Schork, S. S. Murray, K. A. Frazer, and E. J. Topol. Common vs. rare allele hypotheses for complex diseases. Current Opinion in Genetics and Development, 19:212–219, 2009. [17] P. C. Sham and S. M. Purcell. Statistical power and significance testing in large-scale genetic studies. Nature Reviews Genetics, 15:335–346, 2014. [18] I. Foppa and D. Spiegelman. Power and sample size calculations for case-control studies of gene-environment interactions with a polytomous exposure variable. Am J Epidemiol, 146(7):596–604, 1997. Appendix A: derivation of the equality (3) In this section we formulate two lemmas which together constitute the proof of equality (3). We note that throughout Appendix A (and only there) we assume that the genotypes are ordered. For simplicity of notation, without loss of generality, we assume that the total number of causal variants equals two. Lemma 4.1. Let there be two causal variants and a marker of interest. Denote the genotypes at the two causal variants as (Ai, A j) and (Bi, B j) with i, j = 1, 2, respectively. Suppose that the marker is in linkage disequilibrium with the first causal variant and in linkage equilibrium with the second causal variant. Then, P(MiM j|A) = qi j + D11i j P(MiM j|U) = qi j + D11i j π11 − π12 π π12 − π11 1 − π + D22i j + D22i j π22 − π12 π π12 − π22 1 − π , , where Di jkl = P(AiA jMkMl) − P(AiA j)P(MkMl), and P(AiA jMkMl) equals the probability that a random individual from the total population has haplotypes (Ai, Mk) and (A j, Ml). 
Moreover, if HWE holds, then also Di jkl = P(AiMk)P(A jMl) − pi p jqkql. 21 Proof. Write P(MiM j|A) = ∑ k,l,m,n = ∑ k,l,m,n P(MiM j|AkAlBmBn)P(AkAlBmBn|A) P(MiM j|AkAl)P(AkAlBmBn|A) P(MiM j|AkAl)P(AkAl|A) = ∑ k,l = qi j + P(MiM j|A1A1) + P(MiM j|A2A2) p11π11 π p22π22 π + (p12P(MiM j|A1A2) + p21P(MiM j|A2A1)) π12 π − qi j(p11π11 + (p12 + p21)π12 + p22π22) π = qi j + D11i jπ11 π + (D12i j + D21i j)π12 π + D22i jπ22 π = qi j + D11i j π11 − π12 π + D22i j π22 − π12 π , where the last equality follows from D12i j + D21i j = −(D11i j + D22i j), since ∑2,2 expression for P(MiM j|U) is found analogously. The assertion requiring HWE is trivial. k,l=1 Dkli j = 0. The Lemma 4.2. The frequencies of the allele Mk among the cases and the controls satisfy qA k = P(Mk|A) = qk + p1D1k √ = qk + qU k = P(Mk|U) = qk + p1D1k π11 − π12 π p1 p2q1q2∆1k π12 − π11 1 − π − p2D1k (cid:16) p1 π11 − π12 π − p2D1k π22 − π12 π − p2 π12 − π22 1 − π π22 − π12 π √ = qk + p1 p2q1q2∆1k p1 (cid:16) π12 − π11 1 − π − p2 π12 − π22 1 − π (6) (cid:17) (cid:17) , √ where ∆ik = Dik/ (Ai, Mk)-haplotype frequency in the total population. p1 p2q1q2, Dik = P(AiMk) − piqk for i, k = 1, 2, with P(AiMk) denoting the Proof. Define ¯k = 3 − k. Then qA k = 1 2 P(MkM¯k|A) + = qk + D11k¯k + (cid:16) 1 2 1 2 1 2 π11 − π12 π D11¯kk + D11kk P(M¯kMk|A) + P(MkMk|A) (cid:17) π11 − π12 π π22 − π12 π + p2D2k . = qk + p1D1k + (cid:16) 1 2 D22k¯k + 1 2 D22¯kk + D22kk (cid:17) π22 − π12 π Since D1k = −D2k, the above expression further equals the right-hand side of (6). The expression for qU k is found analogously. 22 A crucial consequence of Lemma 4.2 is the equality 1 − qA qU 1 = √ p1 p2q1q2∆11 p1(π12 − π11) + p2(π22 − π12) π(1 − π) . In the case that the marker is in fact the causal variant (i.e. ∆11 = 1, p1 = q1 and p2 = q2), we get 1 − pA pU 1 = p1 p2 p1(π12 − π11) + p2(π22 − π12) π(1 − π) . Combining the two displays immediately yields (3) 5 Appendix B: Comparison of WHWD and TCATT(1/2) First we focus on the type I error behavior. We simulated data under the null hypothesis of no association with 2000 cases and 2000 controls. The MAF of the marker was again varied over a broad range of values and the Wright’s inbreeding coefficient was alternatively set to 0.1 and 0.2. Using the significance threshold of α = 0.05, we repeated one million times the simulation of data and hypothesis testing. The observed type I error rates were all close to α, like it should be (results not shown). Next, we performed simulations to compare the power of the TCATT(1/2) and WHWD statistics under deviations of Hardy Weinberg equilibrium and misspecified population prevalence. The allele frequency p1 was set equal to 0.03 and the Wright’s inbreeding coefficient for the causal variant was alternatively set to 0.1 and 0.2. Given the non-zero value of ∆ (∆ = 0.10), the alleles at the marker were simulated to also be in HWD. We set R = S = 4000 and π11 = 0.10, π12 = 0.06 and π22 = 0.02. The observed power functions are plotted in Figure 5, which shows a slight power superiority of the test based on WHWD over the Cochran-Armitage test. Furthermore, the power of the test WHWD as a function of the MAF q1 is constant, once again illustrating how even the robustified version of the new test does not unjustly prefer markers with high MAF. 23 Figure 5: Power functions for the test based on WHWD (continued lines, for δ = π, δ = 0.05, δ = 0.20 ordered top to bottom) and TCATT(1/2) (dashed lines), for F equal to 0.1 (left plot) and 0.2 (right plot) as a function q1. 
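As a complement to the Appendix B description, the sketch below shows one way to draw genotypes under Hardy-Weinberg disequilibrium governed by Wright's inbreeding coefficient F. The parametrization is the standard inbreeding model and is an assumption on our part, since the authors' data-generating code is not reproduced in the paper; the group size of 4000 and F = 0.2 echo the settings quoted above, while q1 = 0.10 is an arbitrary point in the MAF range considered.

```python
# Minimal sketch of drawing genotypes under Hardy-Weinberg disequilibrium using
# Wright's inbreeding coefficient F, in the spirit of the Appendix B simulations.
# The standard inbreeding parametrization below is an assumption; it is not taken
# from the paper.
import numpy as np

def genotype_probs(q1, F):
    """Genotype frequencies (M1M1, M1M2, M2M2) for minor-allele frequency q1
    under the inbreeding model; F = 0 recovers Hardy-Weinberg equilibrium."""
    het = 2 * q1 * (1 - q1) * (1 - F)
    return np.array([q1**2 + F * q1 * (1 - q1), het, (1 - q1)**2 + F * q1 * (1 - q1)])

def sample_minor_allele_counts(n, q1, F, rng):
    """Draw per-individual minor-allele counts (2, 1 or 0) for n individuals."""
    return rng.choice([2, 1, 0], size=n, p=genotype_probs(q1, F))

rng = np.random.default_rng(1)
counts = sample_minor_allele_counts(4000, q1=0.10, F=0.2, rng=rng)
# Observed heterozygosity should fall visibly below the HWE value 2*q1*(1-q1):
print("observed het:", np.mean(counts == 1), "  HWE het:", 2 * 0.10 * 0.90)
```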
synthetic_cpt
1
Aligning_CodeLLMs_with_Direct_Preference_Optimization.pdf
Aligning CodeLLMs with Direct Preference Optimization Yibo Miao1, Bofei Gao2, Shanghaoran Quan3, Junyang Lin3, Daoguang Zan4, Jiaheng Liu3, Jian Yang3, Tianyu Liu3*, Zhijie Deng1† 1Shanghai Jiao Tong University 2Peking University 3Alibaba Group 4Institute of Software, Chinese Academy of Sciences {miaoyibo, zhijied}@sjtu.edu.cn, [email protected] Abstract The last year has witnessed the rapid progress of large language models (LLMs) across di- verse domains. Among them, CodeLLMs have garnered particular attention because they can not only assist in completing various program- ming tasks but also represent the decision- making and logical reasoning capabilities of LLMs. However, current CodeLLMs mainly fo- cus on pre-training and supervised fine-tuning scenarios, leaving the alignment stage, which is important for post-training LLMs, under- explored. This work first identifies that the commonly used PPO algorithm may be subop- timal for the alignment of CodeLLM because the involved reward rules are routinely coarse- grained and potentially flawed. We then advo- cate addressing this using the DPO algorithm. Based on only preference data pairs, DPO can render the model rank data automatically, giv- ing rise to a fine-grained rewarding pattern more robust than human intervention. We also contribute a pipeline for collecting preference pairs for DPO on CodeLLMs. Studies show that our method significantly improves the per- formance of existing CodeLLMs on bench- marks such as MBPP and HumanEval. 1 Introduction The past few years have witnessed the rapid develop- ment of large language models (LLMs) (Touvron et al., 2023; Chowdhery et al., 2023; Achiam et al., 2023). LLMs have quickly been used in specific domains like medicine (Thirunavukarasu et al., 2023), laws (Sun, 2023), finance (Yang et al., 2023), etc. LLMs designed for solving coding tasks, referred to as CodeLLMs, are particularly noteworthy due to their potential to auto- mate and streamline programming, including bug detec- tion and code generation, thereby enhancing productiv- ity (Li et al., 2023; Wei et al., 2023; Guo et al., 2024). Current research on CodeLLMs primarily focuses on the accumulation of extensive code-related cor- pora for pre-training, as well as the collection of di- verse instruction-following datasets for supervised fine- tuning (Roziere et al., 2023; Li et al., 2023; Hui et al., *Project lead. †Corresponding author. 2024). Given that the alignment plays an important role in improving the reasoning ability of LLMs (OpenAI, 2024b), seminal works (Le et al., 2022; Liu et al., 2023; Dou et al., 2024) have proposed to enhance CodeLLMs through Proximal Policy Optimization (PPO) (Schul- man et al., 2017). However, we argue that they suffer from limitations in the definition of reward functions. For example, they commonly assign a fixed reward of -1 to any code snippet containing syntax errors that can- not be compiled, irrespective of the error count. As a result, code snippets with a single syntax error are treated no differently from those riddled with multiple errors. This makes the reward model fail to capture the nuanced preference distinctions among various code snippets and hence cannot efficiently guide the align- ment of CodeLLMs. This work proposes to align the CodeLLMs with the help of Direct Preference Optimization (DPO) (Rafailov et al., 2023). Unlike PPO, DPO does not explicitly define a reward model to capture preference, but alter- natively uses model likelihood to represent that. 
By learning by comparing data pairs, DPO can automati- cally acquire the fine-grained differentiation between samples from coarse rewarding signals. Ideally, after training, a code snippet with few errors can be assigned a higher reward than that containing more errors. Com- pared to defining fine-grained, hand-crafted reward rules for better PPO, our DPO approach enjoys higher flexibil- ity and can reduce the risk of reward hacking associated with flawed hand-crafted reward rules. Given that using data pairs from other models for DPO can be sub-optimal and even lead to model degra- dation (Yan et al., 2024; Lai et al., 2024), we propose to construct on-policy preference data for DPO train- ing, which distinguishes us from Weyssow et al. (2024). Specifically, we introduce external code executors to provide feedback for ranking code generations. An overview of this is depicted in Fig. 2. Empirically, our method has demonstrably increased the performance of CodeQwen1.5 7B on MBPP (Austin et al., 2021) and HumanEval (Chen et al., 2021), enhancing the scores from 0.783 to 0.804 and 0.829 to 0.878, respectively. 2 Methodology 2.1 Issues of PPO for CodeLLMs Let πθ(y|x) denote a large language model (LLM), which generates response y given the user instruction x. 4 2 0 2 t c O 4 2 ] I A . s c [ 1 v 5 8 5 8 1 . 0 1 4 2 : v i X r a Figure 1: Two cases for illustration of the reward difference of different responses given by the DPO reward and rule-based reward. When given a code question, response 1 and response 2 are two responses that have logic errors but the two responses are not the same. Reward difference means the reward of response 1 minus that of response 2. The rule-based reward assigns the same reward to different responses while DPO recognizes the reward difference between the different responses. following rewarding rule: r (x, y) =    +1, −0.3, −0.6, −1, if y passed all unit tests if y failed any unit test if y happened runtime error if y happened compile error. (2) Namely, the reward score is assigned based on the actual excitation state of the model response y. Despite being widely adopted (Liu et al., 2023; Sho- jaee et al., 2023; Dou et al., 2024), the above rule can be too coarse and the reward space is very sparse. However, the exploration of complex tasks like coding in an envi- ronment characterized by sparse reward is particularly challenging. For instance, one response may contain plenty of syntax errors, while another may contain only one, yet both the two responses would receive identical rewards. This may cause considerable confusion for the model to align. Ideally, we expect the reward model to assign differentiated scores that accurately reflect the varying levels of quality among the responses. 2.2 DPO for CodeLLMs Instead of exploring designing more fine-grained re- warding rules in a costly trial-and-error manner, this paper proposes to utilize DPO (Rafailov et al., 2023) to align CodeLLMs more efficiently and reliably. Tech- nically, DPO operates on pre-collected preference data pairs and solves the following problem: L = − (cid:88) log σ (x,y+,y−) (cid:20) β log πθ(y+|x) π0(y+|x) − β log πθ(y−|x) π0(y−|x) (cid:21) , (3) Figure 2: The pipeline of using execution feedback from the code executor to construct preference dataset. Demonstrated by Achiam et al. (2023), PPO (Schulman et al., 2017) algorithm has been the most powerful al- gorithm to align LLMs for performance enhancement. 
Concretely, PPO maximizes the following objective: max θ Jr(θ) = max θ Ey∼πθ(·|x) (cid:88) x (cid:20) r(x, y) − β log (cid:21) , πθ(y|x) π0(y|x) (1) where r is the reward function,and β is a hyperparameter to control the degree of deviation between the policy model πθ and the reference model π0. The effectiveness of PPO is closely linked to the quality of the reward function r. For coding tasks, we can naturally utilize the execution feedback provided by an external code executor to characterize r, based on specific rules. For example, Le et al. (2022) define the Response 1:class Solution:def sumOfDigits(self, nums: List[int]) -> int:return int(sum(map(int, str(min(nums)))) % 2 == 1)Response 2:class Solution:def sumOfDigits(self, nums: List[int]) -> int:return min(nums)%2==1DPO rewarddifference:Rule-based rewarddifference:1.04690Fine-grained rewardCoarse rewardQuestion:Given an integer array nums, return 0 if the sum of the digits of the minimum integer in nums is odd, or 1 otherwise.Response 1:class Solution:def isSameAfterReversals(self, num: int) -> bool: return not num or num % 10Response 2:class Solution:def isSameAfterReversals(self, num: int) -> bool:if num == 0: return Truereturn num % 10DPO rewarddifference:Rule-based rewarddifference:0.73760Fine-grained rewardCoarse rewardQuestion:Reversing an integer means to reverse all its digits. Given an integer num, reverse num to get reversed1, then reverse reversed1 to get reversed2. Return true if reversed2 equals num. Otherwise return false.Code ExecutorYesNoChosen ResponseRejected ResponseQueryUnit test examplesCodeLLMResponsesPass all tests?Internet Model CodeQwen1.5 7B + RFT + DPO DeepSeek-Coder 6.7B + RFT + DPO CodeLlama-Instruct 7B + RFT + DPO MBPP 0.783 0.767 0.804 0.743 0.717 0.765 0.548 0.556 0.571 MBPP+ 0.667 0.667 0.688 0.651 0.635 0.677 0.458 0.455 0.466 HumanEval 0.829 0.793 0.878 0.768 0.768 0.787 0.390 0.360 0.427 HumanEval+ 0.774 0.738 0.829 0.707 0.701 0.732 0.323 0.305 0.348 Table 1: The pass@1 perfomance of the three CodeLLMs on MBPP, MBPP+, HumanEval and HumanEval+ benchmarks. +RFT/+DPO means the CodeLLM is trained using the RFT/DPO algorithm. The bold indicates the optimal value. Model CodeQwen1.5 7B on-policy DPO off-policy DPO DeepSeek-Coder 6.7B on-policy DPO off-policy DPO MBPP 0.783 0.804 0.799 0.743 0.765 0.735 MBPP+ 0.667 0.688 0.683 0.651 0.677 0.640 HumanEval 0.829 0.878 0.860 0.768 0.787 0.774 HumanEval+ 0.774 0.829 0.799 0.707 0.732 0.695 Table 2: The pass@1 performance of CodeLLMs after on-policy and off-policy DPO training on several benchmarks. "on-policy DPO" means the data used for training is generated by the policy model itself. "off-policy DPO" means the data used for training is generated by another model. The bold indicates the optimal value. where σ denotes the sigmoid function, and y+ and y− denote the preferred (i.e., chosen) response and the less preferred (i.e., rejected) one corresponding to the input x respectively. The term within the sigmoid function (i.e., subtraction between log-ratios) corresponds to an im- plicit reward difference learned by the model (Rafailov et al., 2023). We can observe that the learning of DPO does not hinge on the exact values of the reward function r but only needs the preference data pairs, which can be eas- ily constructed given a coarse rewarding rule. Besides, thanks to the learning-to-rank pattern, DPO can form a fine-grained characterization of the preference differ- ence after witnessing adequate, various data pairs. Fig. 1 provides an example of this. 
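To make the objective in Eq. (3) concrete, a minimal PyTorch rendering is given below. It assumes that the summed log-probability of each whole response under the policy and under the frozen reference model has already been computed; the tensor names and the toy numbers are ours, and β = 0.1 matches the value reported later in Appendix B. This is a sketch of the loss, not the authors' training code.

```python
# Minimal sketch of the DPO loss in Eq. (3), assuming each tensor holds the summed
# log-probability of a whole response under the respective model. Illustration only.
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    """Average DPO loss over a batch of (chosen, rejected) preference pairs."""
    # Implicit rewards: beta * log(pi_theta / pi_0) for each response.
    chosen_reward = beta * (policy_chosen_logp - ref_chosen_logp)
    rejected_reward = beta * (policy_rejected_logp - ref_rejected_logp)
    # -log sigmoid of the reward margin; minimizing this pushes the margin up.
    loss = -F.logsigmoid(chosen_reward - rejected_reward)
    return loss.mean(), (chosen_reward - rejected_reward).detach()

# Toy usage with made-up log-probabilities for a batch of three pairs.
pol_c = torch.tensor([-12.3, -45.0, -30.1])
pol_r = torch.tensor([-14.8, -44.2, -33.0])
ref_c = torch.tensor([-12.9, -45.5, -31.0])
ref_r = torch.tensor([-13.1, -44.0, -32.2])
loss, margins = dpo_loss(pol_c, pol_r, ref_c, ref_r)
print(loss.item(), margins)  # margins play the role of the implicit reward differences
```

In practice each per-response log-probability is obtained by summing token-level log-probabilities over the response tokens only, with the reference model kept frozen; the returned margins are exactly the implicit reward differences of the kind visualized in Fig. 1.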
In particular, for the example in Fig. 1 we collected two incorrect responses for a certain coding problem, and we expect the two different responses to be assigned different rewards. As shown, DPO can differentiate between the two responses while the hand-crafted rules assign the same reward to them. Details for preference pair construction. We construct a set of (chosen, rejected) pairs for the training of DPO on CodeLLMs. As illustrated in Fig. 2, we first aggregate a substantial number of queries (i.e., x) and unit test examples from the Internet, including competitive programming platforms like LeetCode (https://leetcode.com) and Codeforces (https://codeforces.com). Subsequently, each query is input into the CodeLLM to generate eight distinct responses, from which we extract code snippets. The code and the unit test examples are then submitted to the code executor for evaluation. We assess whether the output of the code executor aligns with the ground truth delineated in the unit test examples. If the output is consistent with the ground truth, the response is classified as chosen; otherwise, it is deemed rejected. For each query, we randomly select a single pair of chosen and rejected responses, yielding roughly 3,000 triples (x, y+, y−) to support the training of DPO. 3 Experiments 3.1 Experimental Details CodeLLMs of concern. We selected CodeLLMs that have outstanding performance and have garnered wide attention from the research community as the SFT models for further alignment. Specifically, we choose CodeQwen1.5-Instruct 7B (Bai et al., 2023), CodeLlama-Instruct 7B (Roziere et al., 2023), and DeepSeek-Coder-Instruct 6.7B (Guo et al., 2024) as the target models to conduct our experiments. Benchmarks and evaluation. In order to accurately assess the coding capabilities of CodeLLMs, we selected the most commonly used MBPP (Mostly Basic Programming Problems) (Austin et al., 2021) and HumanEval (Chen et al., 2021) benchmarks. HumanEval comprises 164 Python problems, validated through test cases to evaluate the code produced by CodeLLMs in a zero-shot setting. Similarly, MBPP features 500 problems assessed in a few-shot setting. Further, MBPP+ and HumanEval+ by Liu et al. (2024) add dozens of times more unit test cases to the original MBPP and HumanEval benchmarks, which better reflects the ability of the CodeLLMs to truly understand and solve the coding questions. We also report the performance of CodeLLMs on these two benchmarks. To ensure stable and fair comparisons, for each benchmark we utilize the greedy decoding strategy, letting the CodeLLMs generate responses to the input code questions. We report the pass@1 performance across the benchmarks to illustrate the coding capabilities of the CodeLLMs. 3.2 Main Results To validate the effect of DPO training, we compare the performance of CodeLLMs that use rejection sampling fine-tuning (RFT) (Zelikman et al., 2022; Yuan et al., 2023) and DPO for training, respectively. Specifically, we use the chosen responses from the constructed preference dataset for RFT training, and both the chosen and rejected responses from the constructed preference dataset for DPO training. All training hyperparameters can be found in Appendix B. As illustrated in Table 1, while the RFT algorithm does enhance the model's performance, its improvements are not as significant as those realized through the DPO algorithm.
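The preference-pair construction described above (eight sampled completions per query, execution against the unit tests, passing completions labeled chosen and failing ones rejected) can be sketched as follows. The runner below is a bare-bones stand-in for the code executor: it simply executes the candidate code together with assert-style tests in a fresh subprocess with a timeout, without the sandboxing a real pipeline would need, and `generate_candidates` is a hypothetical placeholder for sampling from the CodeLLM.

```python
# Sketch of the preference-pair construction described above. `generate_candidates`
# is a hypothetical placeholder for sampling n completions from the CodeLLM, and the
# subprocess call is a bare-bones stand-in for a properly sandboxed code executor.
import os
import random
import subprocess
import sys
import tempfile

def passes_tests(code: str, test_code: str, timeout: float = 10.0) -> bool:
    """Run the candidate code followed by assert-based unit tests in a fresh process."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code + "\n\n" + test_code + "\n")
        path = f.name
    try:
        result = subprocess.run([sys.executable, path], capture_output=True, timeout=timeout)
        return result.returncode == 0
    except subprocess.TimeoutExpired:
        return False
    finally:
        os.unlink(path)

def build_preference_pair(query, test_code, generate_candidates, n_samples=8):
    """Return one {prompt, chosen, rejected} record for a query, or None if impossible."""
    candidates = generate_candidates(query, n_samples)
    results = [(c, passes_tests(c, test_code)) for c in candidates]
    chosen = [c for c, ok in results if ok]
    rejected = [c for c, ok in results if not ok]
    if chosen and rejected:
        return {"prompt": query,
                "chosen": random.choice(chosen),
                "rejected": random.choice(rejected)}
    return None
```

Applying this to every collected query and keeping at most one pair per query yields a set of (x, y+, y−) triples of the kind reported above; the RFT and DPO runs compared in Table 1 are trained on the chosen responses and on the full chosen/rejected pairs, respectively.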
This discrepancy arises because, during the optimiza- tion process with the DPO algorithm, the model is able to learn from rejected responses, enabling it to avoid generating undesirable patterns during inference and ultimately reduce errors in the generated code snippets. 3.3 On-policy DPO vs. Off-policy DPO Recently, there have been some researchers who also applied DPO in the field of CodeLLM. Weyssow et al. (2024) leverage the Magicoder Evol-Instruct dataset (Wei et al., 2023) as the source of queries. For each query, they randomly select four LLMs from four- teen LLMs to answer the coding problem and leverage ChatGPT to rate the four responses. Then they can get the preference dataset according to the rate from Chat- GPT and the constructed dataset will be used for DPO training. However, using data generated by other mod- els for DPO training is an off-policy mode. We want to emphasize that whether the preference data for DPO training is on-policy or not is very important. As ana- lyzed by (Lai et al., 2024; Yan et al., 2024), off-policy training may lead to the degradation of the model to be optimized. Therefore, directly using the responses generated by other models for DPO may not be a good choice. The recommended choice is to construct the on-policy dataset using the policy model. To validate the argument, we set up our experi- ments as follows: We first used CodeQwen1.5 7B and DeepSeek-Coder 6.7B to construct preference datasets, respectively. In the DPO training stage, we exchange the training data between Deepseek-Coder and Code- Qwen1.5. In other words, we use data generated by one model to train another model. The performances of CodeLLMs after DPO training are shown in Table 2. As shown, using off-policy data leads to sub-optimal results. Notably, when we use preference data gener- ated by CodeQwen1.5 to train the DeepSeek-Coder, the performance after training is even worse than the origi- nal instruct model, which indicates that using off-policy data for DPO training might be harmful to the model. 4 Related Works 4.1 CodeLLMs With the rapid development of LLMs, domain mod- els specifically designed for the field of coding are continuously emerging, providing great convenience for humans. Several pre-trained LLMs have demon- strated significant potential for code generation, in- cluding Santacoder (Allal et al., 2023), CodeGPT (Lu et al., 2021), etc. Moreover, CodeLLMs that under- went fine-tuning demonstrated more competitive perfor- mance, with standout models including Starcoder (Li et al., 2023), CodeLlama (Roziere et al., 2023), Wizard- coder (Luo et al., 2023), DeepSeek-Coder (Guo et al., 2024), and Qwen-Coder (Hui et al., 2024). However, when compared with the state-of-the-art CodeLLMs such as Claude-3.5-Sonnet (Anthropic, 2024) and GPT- 4o (OpenAI, 2024a), the capabilities of these models lag significantly behind. A reason is that these CodeLLMs heavily rely on pre-train and SFT, either lacking align- ment or not performing well. 4.2 Execution Feedback from Code Executor Reinforcement learning from human feedback has proven to be effective (Achiam et al., 2023), but it is highly labor-intensive. Fortunately, the unique nature of coding tasks allows us to leverage execution feedback from a code executor to evaluate whether the code gen- erated by CodeLLMs meets the problem’s requirements. For example, some works use execution feedback from the code executor in inference time, for instance, Zhong et al. 
(2024) leverage the feedback signal from the code executor to help locate the bug in the code snippet, and iteratively refine the generated code until it passes all test examples. Some works leverage the execution feed- back from the code executor to provide a reward signal according to certain rules (Liu et al., 2023; Dou et al., 2024) and then PPO is applied to align the CodeLLMs for better performance. In this paper, we propose an alternative approach to leveraging execution feedback: utilizing it to construct data for DPO training. 5 Conclusion In this paper, we highlight that current CodeLLMs pri- marily focus on the pre-training and supervised fine- tuning stages, while neglecting the potential of the align- ment stage. The existing works on using PPO to align CodeLLMs may suffer from the issue of coarse reward definition. Therefore, we propose an approach to fur- ther enhance the ability of CodeLLMs by leveraging the execution feedback from the code executor to con- struct a preference dataset for DPO training. Moreover, we have also demonstrated that, for coding tasks, com- pared to off-policy DPO, it is more beneficial to adopt on-policy DPO. In conclusion, this work proposes a practical method that can be directly used to improve the coding performance of models. 6 Limitations One limitation of this work is that the relatively small number of coding problems available on the internet restricts us to constructing a limited set of preference data pairs for DPO training. Due to this constraint, we were unable to investigate the impact of the size of the training data on the model’s final performance during the alignment phase. Future work can be done by exploring synthesizing coding questions for data augmentation. References Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al. 2023. Gpt-4 technical report. arXiv preprint arXiv:2303.08774. Loubna Ben Allal, Raymond Li, Denis Kocetkov, Chenghao Mou, Christopher Akiki, Carlos Munoz Ferrandis, Niklas Muennighoff, Mayank Mishra, Alex Gu, Manan Dey, et al. 2023. Santacoder: don’t reach for the stars! arXiv preprint arXiv:2301.03988. Anthropic. 2024. Introducing claude. https://www. anthropic.com/claude. Jacob Austin, Augustus Odena, Maxwell Nye, Maarten Bosma, Henryk Michalewski, David Dohan, Ellen Jiang, Carrie Cai, Michael Terry, Quoc Le, et al. 2021. Program synthesis with large language models. arXiv preprint arXiv:2108.07732. Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, et al. 2023. Qwen technical report. arXiv preprint arXiv:2309.16609. Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde De Oliveira Pinto, Jared Ka- plan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, et al. 2021. Evaluating large language models trained on code. arXiv preprint arXiv:2107.03374. Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebas- tian Gehrmann, et al. 2023. Palm: Scaling language modeling with pathways. Journal of Machine Learn- ing Research, 24(240):1–113. Shihan Dou, Yan Liu, Haoxiang Jia, Limao Xiong, Enyu Zhou, Junjie Shan, Caishuang Huang, Wei Shen, Xiaoran Fan, Zhiheng Xi, et al. 2024. Step- coder: Improve code generation with reinforcement learning from compiler feedback. arXiv preprint arXiv:2402.01391. 
Daya Guo, Qihao Zhu, Dejian Yang, Zhenda Xie, Kai Dong, Wentao Zhang, Guanting Chen, Xiao Bi, Yu Wu, YK Li, et al. 2024. Deepseek-coder: When the large language model meets programming– arXiv preprint the rise of code intelligence. arXiv:2401.14196. Binyuan Hui, Jian Yang, Zeyu Cui, Jiaxi Yang, Day- iheng Liu, Lei Zhang, Tianyu Liu, Jiajun Zhang, Bowen Yu, Kai Dang, et al. 2024. Qwen2. 5-coder technical report. arXiv preprint arXiv:2409.12186. Xin Lai, Zhuotao Tian, Yukang Chen, Senqiao Yang, Xi- angru Peng, and Jiaya Jia. 2024. Step-dpo: Step-wise preference optimization for long-chain reasoning of llms. arXiv:2406.18629. Hung Le, Yue Wang, Akhilesh Deepak Gotmare, Silvio Savarese, and Steven Chu Hong Hoi. 2022. Coderl: Mastering code generation through pretrained models and deep reinforcement learning. Advances in Neural Information Processing Systems, 35:21314–21328. Raymond Li, Loubna Ben Allal, Yangtian Zi, Niklas Muennighoff, Denis Kocetkov, Chenghao Mou, Marc Marone, Christopher Akiki, Jia Li, Jenny Chim, et al. 2023. Starcoder: may the source be with you! arXiv preprint arXiv:2305.06161. Jiate Liu, Yiqin Zhu, Kaiwen Xiao, Qiang Fu, Xiao Han, Wei Yang, and Deheng Ye. 2023. Rltf: Reinforce- ment learning from unit test feedback. arXiv preprint arXiv:2307.04349. Jiawei Liu, Chunqiu Steven Xia, Yuyao Wang, and Ling- ming Zhang. 2024. Is your code generated by chatgpt really correct? rigorous evaluation of large language models for code generation. Advances in Neural In- formation Processing Systems, 36. Shuai Lu, Daya Guo, Shuo Ren, Junjie Huang, Alexey Svyatkovskiy, Ambrosio Blanco, Colin Clement, Dawn Drain, Daxin Jiang, Duyu Tang, et al. 2021. Codexglue: A machine learning benchmark dataset arXiv for code understanding and generation. preprint arXiv:2102.04664. Ziyang Luo, Can Xu, Pu Zhao, Qingfeng Sun, Xi- ubo Geng, Wenxiang Hu, Chongyang Tao, Jing Ma, Qingwei Lin, and Daxin Jiang. 2023. Wizardcoder: Empowering code large language models with evol- instruct. arXiv preprint arXiv:2306.08568. OpenAI. 2024a. Hello gpt-4o. https://openai. com/index/hello-gpt-4o. OpenAI. 2024b. Introducing openai o1. https:// openai.com/o1. Rafael Rafailov, Archit Sharma, Eric Mitchell, Stefano Ermon, Christopher D Manning, and Chelsea Finn. 2023. Direct preference optimization: Your language model is secretly a reward model. arXiv preprint arXiv:2305.18290. Hyperparameter Training epochs Learning rate Learning rate schedule warmup ratio batch size Value 1 5e-6 cosine 0.05 16 Table 3: Hyperparameters of RFT. Hyperparameter Training epochs Learning rate Learning rate schedule warmup ratio batch size beta Value 1 5e-6 cosine 0.05 16 0.1 Table 4: Hyperparameters of DPO. A The Underlying Reward of DPO The DPO algorithm drives that if πθ can maximize the Eq. (1), then the underlying reward can be given by: r(x, y) = β log πθ(y | x) π0(y | x) + C(x), (4) where C : X → R is a scalar function. πθ is the policy model and π0 is the reference model. B Hyperparameter Setting The hyperparameters used for RFT and DPO are shown in Table 3 and Table 4, respectively. Baptiste Roziere, Jonas Gehring, Fabian Gloeckle, Sten Sootla, Itai Gat, Xiaoqing Ellen Tan, Yossi Adi, Jingyu Liu, Romain Sauvestre, Tal Remez, et al. 2023. Code llama: Open foundation models for code. arXiv preprint arXiv:2308.12950. John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. 2017. Proxi- mal policy optimization algorithms. arXiv preprint arXiv:1707.06347. 
Parshin Shojaee, Aneesh Jain, Sindhu Tipirneni, and Chandan K Reddy. 2023. Execution-based code gen- eration using deep reinforcement learning. arXiv preprint arXiv:2301.13816. Zhongxiang Sun. 2023. A short survey of viewing large language models in legal aspect. arXiv preprint arXiv:2303.09136. Arun James Thirunavukarasu, Darren Shu Jeng Ting, Kabilan Elangovan, Laura Gutierrez, Ting Fang Tan, and Daniel Shu Wei Ting. 2023. Large language models in medicine. Nature medicine, 29(8):1930– 1940. Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bash- lykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhos- ale, et al. 2023. Llama 2: Open foundation and fine- tuned chat models. arXiv preprint arXiv:2307.09288. Yuxiang Wei, Zhe Wang, Jiawei Liu, Yifeng Ding, and Lingming Zhang. 2023. Magicoder: Source code is all you need. arXiv preprint arXiv:2312.02120. Martin Weyssow, Aton Kamanda, and Houari Sahraoui. 2024. Codeultrafeedback: An llm-as-a-judge dataset for aligning large language models to coding prefer- ences. arXiv preprint arXiv:2403.09032. Yuzi Yan, Yibo Miao, Jialian Li, Yipin Zhang, Jian Xie, Zhijie Deng, and Dong Yan. 2024. 3d-properties: Identifying challenges in dpo and charting a path forward. arXiv preprint arXiv:2406.07327. Hongyang Yang, Xiao-Yang Liu, and Christina Dan Wang. 2023. Fingpt: Open-source financial large language models. arXiv preprint arXiv:2306.06031. Zheng Yuan, Hongyi Yuan, Chengpeng Li, Guanting Dong, Chuanqi Tan, and Chang Zhou. 2023. Scal- ing relationship on learning mathematical reason- arXiv preprint ing with large language models. arXiv:2308.01825. Eric Zelikman, Yuhuai Wu, Jesse Mu, and Noah D Goodman. 2022. Star: Self-taught reasoner boot- strapping reasoning with reasoning. In Proceedings of the 36th International Conference on Neural Infor- mation Processing Systems, pages 15476–15488. Li Zhong, Zilong Wang, and Jingbo Shang. 2024. De- bug like a human: A large language model debugger via verifying runtime execution step by step. In Find- ings of the Association for Computational Linguistics ACL 2024, pages 851–870.
synthetic_cpt
2
Fine-grained_Pluggable_Gradient_Ascent_for_Knowledge_Unlearning_in_Language_Models.pdf
9 0 0 2 n u J 1 1 ] V C . h t a m [ 1 v 1 8 0 2 . 6 0 9 0 : v i X r a CONTINUITY PROPERTIES OF FINELY PLURISUBHARMONIC FUNCTIONS AND PLURIPOLARITY SAID EL MARZGUIOUI AND JAN WIEGERINCK Abstract. We prove that every bounded finely plurisubharmonic func- tion can be locally (in the pluri-fine topology) written as the differ- ence of two usual plurisubharmonic functions. As a consequence finely plurisubharmonic functions are continuous with respect to the pluri-fine topology. Moreover we show that −∞ sets of finely plurisubharmonic functions are pluripolar, hence graphs of finely holomorphic functions are pluripolar. 1. Introduction The fine topology on an open set Ω ⊂ Rn is the coarsest topology that makes all subharmonic functions on Ω continuous. A finely subharmonic function is defined on a fine domain, it is upper semi-continuous with re- spect to the fine topology, and satisfies an appropriate modification of the mean value inequality. Fuglede [9] proved the following three properties that firmly connect fine potential theory to classical potential theory: finely sub- harmonic functions are finely continuous (so there is no super-fine topology), all finely polar sets are in fact ordinary polar sets, and finely subharmonic functions can be uniformly approximated by subharmonic functions on suit- able compact fine neighborhoods of any point in their domain of definition. Another continuity result is what Fuglede calls the Brelot Property, i.e. a finely subharmonic function is continuous on a suitable fine neighborhood of any given point in its domain, [14, page 284], see also [11, Lemma 1]. Similarly, the pluri-fine topology on Ω ⊂ Cn is the coarsest topology that In [8] we makes all plurisubharmonic (PSH) functions on Ω continuous. introduced finely plurisubharmonic functions as plurifinely upper semicon- tinuous functions, of which the restriction to complex lines is finely subhar- monic. We will prove the analogs of two of the results mentioned above. Bounded finely plurisubharmonic functions can locally be written as dif- ferences of ordinary PSH functions (cf. Section 3), hence finely plurisub- harmonic functions are pluri-finely continuous. We also prove a weak form 2000 Mathematics Subject Classification. 32U15, 32U05, 30G12, 31C40. 1 2 SAID EL MARZGUIOUI AND JAN WIEGERINCK of the Brelot Property. Next, finely pluripolar sets are shown to be plu- ripolar. This answers natural questions posed e.g. by [6]. As a corollary we obtain that zero sets of finely holomorphic functions of several complex variables are pluripolar sets. Partial results in this direction were obtained in [4, 5, 8]. A final consequence is Theorem 4.5 concerning the pluripolar hull of certain pluripolar sets. The pluri-fine topology was introduced in [13], and studied in e.g., [3, 2, 7, 8]. In the rest of the paper we will qualify notions referring to the pluri- fine topology by the prefix “F ”, to distinguish them from those pertaining to the Euclidean topology. Thus a compact F -neighborhood U of z will be a Euclidean compact set U that is a neighborhood of z in the pluri-fine topology. 2. Finely plurisubharmonic and holomorphic functions There are several ways to generalize the concepts of plurisubharmonic and of holomorphic functions to the setting of the plurifine topology. See e.g., [6, 8], and in particular [15] where the different concepts are studied and compared. Definition 2.1. Let Ω be an F -open subset of Cn. 
A function f on Ω is called F -plurisubharmonic if f is F -upper semicontinuous on Ω and if the restriction of f to any complex line L is finely subharmonic or ≡ −∞ on any F -connected component of Ω ∩ L. A subset E of Cn is called F -pluripolar if for every point z ∈ E there is an F -open subset U ⊂ Cn and an F -plurisubharmonic function (6≡ −∞) f on U such that E ∩ U ⊂ {f = −∞}. Denote by H(K) the uniform closure on K of the algebra of holomorphic functions in neighborhoods of K. Definition 2.2. Let U ⊆ Cn be F -open. A function f : U −→ C is said to be F -holomorphic if every point of U has a compact F -neighborhood K ⊆ U such that the restriction f |K belongs to H(K). Remark 2.3. The functions defined in Definition 2.1 are called weakly F - PSH functions in [15], whereas the functions in Definition 2.2 are called strongly F -holomorphic functions. In [15] strongly F -PSH functions (via approximation) and weakly F -holomorphic functions (via holomorphy on complex lines) are defined and it is shown that the strong properties imply the weak ones. The original definition of finely subharmonic functions involves sweeping- out of measures. If one wants to avoid this concept, one can use the next theorem as an alternative definition. FINELY PLURISUBHARMONIC FUNCTIONS AND PLURIPOLARITY 3 Theorem 2.4 (Fuglede [10, 12]). A function ϕ defined in an F -open set U ⊆ C is finely subharmonic if and only if every point of U has a com- pact F -neighborhood K ⊂ U such that ϕ|K is the uniform limit of usual subharmonic functions ϕn defined in Euclidean neighborhoods Wn of K. Recall also the following property, cf. [2], which will be used in the proof of Theorem 4.1 and its corollary. Theorem 2.5. (Quasi-Lindel¨of property) An arbitrary union of F -open subsets of Cn differs from a suitable countable subunion by at most a pluri- polar set. 3. Continuity of Finely PSH Functions Theorem 3.1. Let f be a bounded F -plurisubharmonic function in a bounded F -open subset U of Cn. Every point z ∈ U then has an F -neighborhood O ⊂ U such that f is representable in O as the difference between two locally bounded plurisubharmonic functions defined on some usual neighbor- hood of z. In particular f is F -continuous. Proof. We may assume that −1 < f < 0 and that U is relatively compact in the unit ball B(0, 1). Let V ⊂ U be a compact F -neighborhood of z0. Since the complement ∁V of V is pluri-thin at z0, there exist 0 < r < 1 and a plurisubharmonic function ϕ on B(z0, r) such that (3.1) ϕ(z) < ϕ(z0). lim sup z→z0,z∈∁V Without loss of generality we may suppose that ϕ is negative in B(z0, r) and (3.2) Hence (3.3) ϕ(z) = −1 if z ∈ B(z0, r)\V and ϕ(z0) = − 1 2 . f (z) + λϕ(z) ≤ −λ for any z ∈ U ∩ B(z0, r)\V and λ > 0. Now define a function uλ on B(z0, r) as follows (3.4) uλ(z) = max{−λ, f (z) + λϕ(z)} if z ∈ U ∩ B(z0, r), −λ if z ∈ B(z0, r)\V . ( This definition makes sense because [U ∩ B(z0, r)] and the two definitions of uλ agree on U ∩ B(z0, r)\V in view of (3.3). [B(z0, r)\V ] = B(z0, r), Clearly, uλ is F -plurisubharmonic in U ∩B(z0, r) and in B(z0, r)\V , hence in all B(z0, r) in view of the sheaf property, cf. [8]. Since uλ is bounded in B(z0, r), it follows from [9, Theorem 9.8] that uλ is subharmonic on each complex line where it is defined. It is a well known result that a bounded S 4 SAID EL MARZGUIOUI AND JAN WIEGERINCK function which is subharmonic on each complex line where it is defined, In other words uλ is plurisubharmonic in is plurisubharmonic, cf. [17]. B(z0, r). 
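For clarity, the piecewise definition of uλ in display (3.4) above can be written as the single display

\[
  u_{\lambda}(z) =
  \begin{cases}
    \max\{-\lambda,\ f(z) + \lambda\varphi(z)\}, & z \in U \cap B(z_0, r),\\
    -\lambda, & z \in B(z_0, r) \setminus V.
  \end{cases}
\]

The two branches agree on U ∩ B(z0, r)\V by (3.3), so uλ is well defined on all of B(z0, r).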
Since ϕ(z0) = − 1 2 , the set O = {ϕ > −3/4} is an F -neighborhood of z0, and because ϕ = −1 on B(z0, r)\V , it is clear that O ⊂ V ⊂ U. Observe now that −4 ≤ f (z) + 4ϕ(z), for every z ∈ O. Hence (3.5) f (z) = u4(z) − 4ϕ(z), for every z ∈ O. We have shown that f is F -continuous on a neighborhood of each point in (cid:3) its domain, hence f is F -continuous. The proof is inspired by [9, page 88-90]. Corollary 3.2. Every F -plurisubharmonic function is F -continuous. Proof. Let f be F -plurisubharmonic in an F -open subset Ω of Cn. Let d < c ∈ R. The set Ωc = {f < c} is F -open. The function max{f, d} is bounded F -PSH on Ωc, hence F -continuous. Therefore the set {d < f < c} (cid:3) is F -open, and we conclude that f is F -continuous. The following result gives a partial analog to the Brelot property. We re- call the definition of the relative extremal function orpluriharmonic measure of a subset E of an open set Ω, cp. [1, 16] (3.6) U = UE,Ω = sup{ψ ∈ PSH−Ω : ψ ≤ −1 onE}. It is well known that the upper semi-continuous regularization of U, i.e. U ∗(z) = lim supΩ∋v→z U(v) is plurisubharmonic in Ω. Theorem 3.3. (Quasi-Brelot property) Let f be a plurisubharmonic func- tion in the unit ball B ⊂ Cn. Then there exists a pluripolar set E ⊂ B such that for every z ∈ B \ E we can find an F -neighborhood Oz ⊂ B of z such that f is continuous in the usual sense in Oz Proof. Without loss of generality we may assume that f is continuous near the boundary of B. By the quasi-continuity theorem (cf. [16, Theorem 3.5.5] and the remark that follows it, see also [1]) we can select a sequence of relatively compact open subset ωn of B such that the Monge-Amp`ere capacity C(ωn, B) < 1 n , and f is continuous on B \ ωn. Denote by ˜ωn the F -closure of ωn. The pluriharmonic measure U ∗ ωn,B is equal to the pluriharmonic measure ˜ωn,B, because for a PSH function ϕ the set {ϕ ≤ −1} is F -closed, thus U ∗ ϕ|ωn ≤ −1 ⇒ ϕ|˜ωn ≤ −1. Now, using [16, Proposition 4.7.2] (3.7) C(ωn, B) = C ∗(ωn, B) = (ddcU ∗ ωn,B)n = ZΩ ZΩ (ddcU ∗ ˜ωn,B)n = C ∗(˜ωn, B). FINELY PLURISUBHARMONIC FUNCTIONS AND PLURIPOLARITY 5 n ˜ωn. By (3.7), C ∗(E, B) ≤ C ∗(˜ωn, B) ≤ 1 n, for every n. Hence Let E = E is a pluripolar subset of B. T Let z 6∈ E. Then there exists N such that z 6∈ ˜ωN . Clearly, the set B \ ˜ωN is an F -neighborhood of z. Since f is continuous on B \ ωN , it is (cid:3) also continuous on the smaller set B \ ˜ωN (⊂ B \ ωN ). Remark 3.4. The above Quasi-Brelot property holds also for F -plurisub- harmonic functions, in view of Theorem 3.1. 4. F -Pluripolar Sets and Pluripolar Hulls In this section we prove that F -pluripolar sets are pluripolar and apply this to pluripolar hulls. Theorem 4.1. Let f : U −→ [−∞, +∞[ be an F -plurisubharmonic func- tion (6≡ −∞) on an F -open and F -connected subset U of Cn. Then the set {z ∈ U : f (z) = −∞} is a pluripolar subset of Cn Proof of Theorem 4.1. We may assume that f < 0. Let z0 ∈ U, which we can assume relatively compact in B(0, 1). We begin by showing that z0 admits an F -neighborhood Wz0 such that {f = −∞} ∩ Wz0 is pluripolar. If z0 is a Euclidean interior point of U, then f is PSH on a neighborhood of z0 and there is nothing to prove. If not we proceed as in the proof of Theorem 3.1. Thus, let V ⊂ U be a compact F -neighborhood of z0, and ϕ a negative PSH function on B(z0, r) such that (4.1) ϕ(z) = −1 if z ∈ B(z0, r)\V and ϕ(z0) = − 1 2 . Let Φ = UB(z0,r)\V,B(z0,r) be the pluriharmonic measure defined in (3.6). By (4.1), we get ϕ ≤ Φ ≤ Φ∗. In particular − 1 2 ≤ Φ∗(z0). 
Let fn = 1 n max(f, −n). Then −1 ≤ fn < 0. We define functions vn(z) on B(z0, r) as follows. (4.2) vn(z) = max{−1, 1 −1 4fn(z) + Φ∗(z)} if z ∈ U ∩ B(z0, r), if z ∈ B(z0, r)\V . ( Since vn is analogous to the function uλ in (3.4), the argument in the proof of Theorem 3.1 shows that vn ∈ PSH(B(z0, r)). Now for z ∈ U such that f (z) 6= −∞ the sequence fn(z) increases to 0. Thus {vn} is an increasing sequence of PSH-functions. Let lim vn = ψ. The upper semi-continuous regularization ψ∗ of ψ is plurisubharmonic in B(z0, r). It is a result of [1], see also Theorem 4.6.3 in [16], that the set E = {ψ 6= ψ∗} is a pluripolar subset of B(z0, r). 6 SAID EL MARZGUIOUI AND JAN WIEGERINCK We claim that ψ∗ = Φ∗ on B(z0, r). Indeed, ψ ≤ ψ∗ ≤ Φ∗ because the vn belong to the defining family (3.6) for Φ. Now observe that ψ = Φ∗ on B(z0, r) \ {f = −∞}, because vn = Φ∗ = −1 on B(z0, r)\V . Hence {ψ∗ 6= Φ∗} ⊂ B(z0, r) ∩ {f = −∞}. (4.3) Clearly, the set {ψ∗ 6= Φ∗} is F -open. In view of Theorem 5.2 in [8] it must be empty because it is contained in the −∞-set of a finely plurisubharmonic function. Let z ∈ {Φ∗ > − 2 vn and the claim that 3 } ∩ {f = −∞}. Then it follows from the definition of ψ(z) = − 1 4 + Φ∗(z) = − 1 4 + ψ∗(z). Thus z ∈ E. Now {Φ∗ > − 2 3} is an F -neighborhood of z0. The conclusion is that every point z ∈ U has an F -neighborhood Wz ⊂ U such that Wz ∩{f = −∞} is a pluripolar set. ( If f (z) 6= −∞ we could have chosen Wz such that Wz ∩ {f = −∞} = ∅.) By the Quasi-Lindel¨of property, cf. Theorem 2.5 there is a sequence {zn}n≥1 ⊂ U and a pluripolar subset P of U such that (4.4) Hence (4.5) U = ∪nOzn ∪ P. {f = −∞} ⊂ (∪nOzn ∩ {f = −∞}) ∪ P. This completes the proof since a countable union of pluripolar sets is pluri- (cid:3) polar. Remark 4.2. Corollary 3.2 and Theorem 4.1 give affirmative answers to two questions in [6]. A weaker formulation of Theorem 4.1, but perhaps more useful, is as follows. Corollary 4.3. Let f : U −→ [−∞, +∞[ be a function defined in an F -domain U ⊂ Cn. Suppose that every point z ∈ U has a compact F - neighborhood Kz ⊂ U such that f |Kz is the decreasing limit of usual plurisub- harmonic functions in Euclidean neighborhoods of Kz. Then either f ≡ −∞ or the set {f = −∞} is pluripolar subset of U. As a byproduct we get the following corollary which recovers and gener- alizes the main result in [4] to functions of several variables. Corollary 4.4. Let h : U −→ C be an F -holomorphic function on an F - open subset U of Cn. Then the zero set of h is pluripolar. In particular, the graph of h is also pluripolar. FINELY PLURISUBHARMONIC FUNCTIONS AND PLURIPOLARITY 7 Proof of Corollary 4.4. Let a ∈ U. Definition 2.2 gives us a compact F - neighborhood K of a in U, and a sequences (hn)n≥0, of holomorphic func- tions defined in Euclidean neighborhoods of K such that hn|K −→ h|K, uniformly. For k ∈ N we define vn,k = max(log |hn|, −k) and vk = max(log |h|, −k). Clearly, vn,k converges uniformly on K to vk as n → ∞. Accordingly, vk is F -plurisubharmonic on the F -interior K ′ of K. Since vk is decreasing, the limit function log |h| is F -plurisubharmonic in K ′. Theorem 4.1 shows that the set K ′ ∩ {h = 0} is pluripolar. The corollary follows now by application (cid:3) of the Quasi-Lindel¨of property. The pluripolar hull E∗ Ω of a pluripolar set E relative to an open set Ω is defined as follows. E∗ Ω = {z ∈ Ω : u(z) = −∞}, where the intersection is taken over all plurisubharmonic functions defined in Ω which are equal to −∞ on E. \ The next theorem improves on Theorem 6.4 in [8]. 
Theorem 4.5. Let U ⊂ Cn be an F -domain, and let h be F -holomorphic in U. Denote by Γh(U) the graph of h over U, and let E be a non-pluripolar subset of U. Then Γh(U) ⊂ (Γh(E))∗ Cn+1. Proof. By Corollary 4.4 the set Γh(E) is pluripolar subset of Cn+1. Let ϕ be a plurisubharmonic function in Cn+1 with ϕ 6≡ −∞ and ϕ(z, h(z)) = −∞, for every z ∈ E. The same arguments as in the proof of Lemma 3.1 in [4] show that the function z 7→ ϕ(z, h(z)) is F -plurisubharmonic in U. Since E is not pluripolar, it follows from Theorem 3.1 that ϕ(z, h(z)) = −∞ (cid:3) everywhere in U. Hence Γh(U) ⊂ (Γh(E))∗ Cn+1. 5. Some further questions Question 1 Let f be an F -plurisubharmonic function defined in an F - open set U ⊆ C2. Suppose that for each point z ∈ U there is a compact F -neighbourhood Kz such that f is continuous (in the usual sense) on Kz. Is it true that f |Kz is the uniform limit of usual plurisubharmonic functions ϕn defined in Euclidean neighborhoods Wn of Kz?. Question 2 It is also interesting to figure out whether the assumption in the above question is automatically fulfilled. This would be the Brelot property for F -plurisubharmonic function. Many other questions remain open. For example, we do not know the answer to the following. Question 3 Is this concept of an F -plurisubharmonic function biholo- morphically invariant? 8 SAID EL MARZGUIOUI AND JAN WIEGERINCK References [1] Bedford, E. and Taylor, B. A: A new capacity for plurisubharmonic functions, Acta [2] Bedford, E. and Taylor, B. A.: Fine topology, Silov boundary and (ddc)n, J. Funct. Math. 149 (1982), 1–40. Anal. 72 (1987), 225–251. [3] Bedford, E.: Survey of pluripotential theory, Several complex variables: Proceedings of the Mittag-Leffler Inst. 1987-1988 (J-E. Fornæss, ed), Math. Notes 38, Princeton University Press, Princeton, NJ, 1993. [4] Edigarian, E. El Marzguioui, S. and Wiegerinck, J.: The image of a finely holomor- phic map is pluripolar, arXiv:math/0701136 [5] Edlund, T. and J¨oricke, B.: The pluripolar hull of a graph and fine analytic contin- uation, Ark. Mat.44 (2006), no. 1, 39–60. [6] El Kadiri, M.: Fonctions finement plurisousharmoniques et topologie plurifine. Rend. Accad. Naz. Sci. XLMem. Mat. Appl. (5) 27, (2003) 77–88 . [7] El Marzguioui, S. Wiegerinck, J.: The pluri-fine topology is locally connected. Po- tential Anal. 25 (2006), no. 3, 283–288. [8] El Marzguioui, S., Wiegerinck, J.: Connectedness in the plurifine topology, Func- tional Analysis and Complex Analysis, Istanbul 2007, 105–115, Contemp. Math., 481, Amer. Math. Soc., Providence, RI, 2009. [9] Fuglede, B.: Finely harmonic functions, Springer Lecture Notes in Mathematics, 289, Berlin-Heidelberg-New York, 1972. [10] Fuglede, B.: Fonctions harmoniques et fonctions finement harmonique,Ann. Inst. Fourier. Grenoble 24.4 (1974) 77–91. [11] Fuglede, B.: Finely harmonic mappings and finely holomorphic functions, Ann. Acad. Sci. Fennicæ 2 (1976), 113–127. [12] Fuglede, B.: Localisation in fine potential theory and uniform approximation by subharmonic functions, J. Functional Anal. 49 (1982), 57–72. [13] Fuglede, B.: Fonctions finement holomorphes de plusieurs variables - un essai. Sem- inaire d‘Analyse P. Lelong P. Dolbeault - H. Skoda, Springer - Verlag, Lecture Notes in Math. 1198 (1986), 133–145. [14] Fuglede, B.: Finely holomorphic functions- a survey, Revue Roumaine Math. Pures Appl. 33 (1988), 283–295. 
[15] Fuglede, B.: Concepts of plurifinely plurisubharmonic and plurifinely holomorphic functions, preprint (2009).
[16] Klimek, M.: Pluripotential Theory, London Mathematical Society Monographs, 6, Clarendon Press, Oxford, 1991.
[17] Lelong, P.: Les fonctions plurisousharmoniques, Ann. Sci. École Norm. Sup. 62 (1945), 301–328.
KdV Institute for Mathematics, Universiteit van Amsterdam, Postbus 94248, 1090 GE Amsterdam, The Netherlands
E-mail address: [email protected]
KdV Institute for Mathematics, Universiteit van Amsterdam, Postbus 94248, 1090 GE Amsterdam, The Netherlands
E-mail address: [email protected]
synthetic_cpt
1
ToolQA_A_Dataset_for_LLM_Question_Answering_with_External_Tools.pdf
3 2 0 2 n u J 3 2 ] L C . s c [ 1 v 4 0 3 3 1 . 6 0 3 2 : v i X r a ToolQA: A Dataset for LLM Question Answering with External Tools Yuchen Zhuang∗, Yue Yu∗, Kuan Wang∗, Haotian Sun, Chao Zhang College of Computing, Georgia Institute of Technology, Atlanta GA {yczhuang, yueyu, kuanwang, haotian.sun, chaozhang}@gatech.edu Abstract Large Language Models (LLMs) have demonstrated impressive performance in various NLP tasks, but they still suffer from challenges such as hallucination and weak numerical reasoning. To overcome these challenges, external tools can be used to enhance LLMs’ question-answering abilities. However, current evaluation methods do not distinguish between questions that can be answered using LLMs’ internal knowledge and those that require external information through tool use. To address this issue, we introduce a new dataset called ToolQA, which is designed to faithfully evaluate LLMs’ ability to use external tools for question answering. Our development of ToolQA involved a scalable, automated process for dataset curation, along with 13 specialized tools designed for interaction with external knowledge in order to answer questions. Importantly, we strive to minimize the overlap between our benchmark data and LLMs’ pre-training data, enabling a more precise evaluation of LLMs’ tool-use reasoning abilities. We conducted an in-depth diagnosis of existing tool-use LLMs to highlight their strengths, weaknesses, and potential improvements. Our findings set a new benchmark for evaluating LLMs and suggest new directions for future advancements. Our data and code are freely available for the broader scientific community on GitHub 2. 1 Introduction Large Language Models (LLMs) have demonstrated superior performance in a myriad of NLP tasks [3, 7, 37, 36, 47, 54]. These models have captured vast amounts of knowledge from enormous and diverse corpora during pre-training. After instruction fine-tuning [8, 38, 1], they have demonstrated impressive capabilities in information-seeking question answering [57, 23]. Despite their remarkable performance, LLMs face several challenges. For example, they are susceptible to hallucinations— generating plausible yet ungrounded information—which can mislead users and affect content integrity [58, 17, 4]. Additionally, they exhibit weaknesses in numerical reasoning, an essential skill in numerous real-life applications [12, 31, 35, 25, 43, 11]. These limitations highlight the need for techniques that can enhance LLMs’ question-answering abilities. Recent research has shown that these issues can be mitigated by augmenting LLMs with external tools, such as retrieval augmentation [50, 15], math tools [48, 66, 28], and code interpreters [11, 55]. For example, a Wolfram math plugin can enhance numerical reasoning [60], and a verified database can mitigate hallucinations by providing up-to-date fact-checked knowledge [42]. However, existing evaluation methodologies struggle to distinguish whether the model is simply recalling pre-trained information or truly utilizing external tools for problem-solving [32]. This challenge arises, in part, because the external data used for evaluation may have already been exposed to LLMs during the pre-training phase [45]. This exposure can lead to a biased evaluation of LLMs’ tool-use abilities, as the models could just use their ingrained knowledge and their reasoning abilities, bypassing the use of external tools. 
As a result, these evaluations cannot accurately reflect the true competency of the ∗These authors contributed equally to this work. 2https://github.com/night-chen/ToolQA Preprint. Under review. Figure 1: Pre-trained on vast range of corpus, LLMs possess extensive knowledge, which may overlap with evaluation data. This overlap poses a significant challenge to current evaluation methods, as it becomes difficult to discern whether the model is merely recalling pre-trained information or genuinely employing external tools for problem-solving. models. We need a fair and explicit way to check if LLMs are really good at problem-solving with tools or if they are just using their memorized information. To fill this gap, we introduce ToolQA, a question answering (QA) benchmark to evaluate LLMs’ ability in using external tools for answering questions. ToolQA comprises data from 8 domains and defines 13 types of tools to acquire information from external reference corpora. Each instance in ToolQA consists of a question, an answer, reference corpora, and a list of available tools. ToolQA is unique in that all its questions can be answered only by using appropriate tools to obtain information from the reference corpus. This minimizes the possibility of LLMs answering questions by merely recalling their internal knowledge, and allows for faithfully evaluating LLMs’ abilities in using tools. ToolQA is curated with an automated three-phase process: (1) The first phase, Reference Data Collection, involves gathering various types of public corpora including text, tables, and graphs from different domains. These corpora have no overlap with the LLM pre-training data and will serve as reference corpora for tool-based question answering. (2) The second phase is Human-guided Question Generation with LLMs. In this phase, we generate questions that can only be answered by using tools over the reference corpora. Our approach is a template-based question generation process, which includes human-guided template generation, template validation, and question instantiation with tool attributes. (3) The third phase is Programmatic Answer Generation. This phase produces accurate answers for the generated questions. To ensure answer correctness, we implement operators corresponding to the tools and obtain answers from the reference corpora programmatically. Our three-phase procedure ensures that we generate questions that can only be answered using external knowledge, along with their precise answers. Additionally, the process is highly efficient and requires minimal human labeling efforts. We conducted experiments using both standard LLMs and tool-augmented LLMs to answer questions in ToolQA. Our findings indicate that ChatGPT and Chain-of-thoughts prompting [57], which rely solely on their internal knowledge, have low success rates of approximately 5% for easy questions and 2% for hard questions. In contrast, tool-augmented LLMs such as Chameleon [28] and ReAct [66] perform better by leveraging external tools. For easy questions, the best performance achieved by tool-augmented LLMs is 43.15%, while for hard questions, the best performance drops to 8.2%. Our results and error analysis demonstrate that ToolQA is a challenging benchmark for existing tool-augmented LLM methods, especially for its hard questions that require more complex reasoning about tool composition. 2 Related Work 2.1 Knowledge-Augmented LLMs Several prior works aim to enhance LLMs with explicit external knowledge. 
Specifically, one line of research focus on retrieval-augmented language models [50, 2, 15, 24, 27, 70, 30, 63], where they use sparse [46] or dense retrieval [20, 14] to extract relevant knowledge from the corpus. These works mainly focus on leveraging free text, without considering multiple types of tools for task solving. On the other hand, Program-of-Thought [5], PAL [11], MathPrompt [13], and Code4Struct [55] 2 O'Neal was drafted by the Orlando Magic with the first overall pick in the 1992 NBA draft. He quickly became one of the best centers in the league…Kobe Bryant was drafted by the Charlotte Hornets with the 13th pick of the 1996 draft, but his draft rights were immediately traded to the Los Angeles Lakers… Jordan joined the Bulls in 1984 as the third overall draft pick and quickly emerged as a league star, entertaining crowds with his prolific scoring…Pre-trainCorpusQuestion: What team did Kobe Bryant start his NBA career with?…RetrieveKobe Bryant was drafted by the Charlotte Hornets with the 13th pick of the 1996 draft, but his draft rights were immediately traded to the Los Angeles Lakers…LLM with implicit knowledgeReasoning with retrieval Directly inputUsing tools or only memorizing?Los Angeles LakersLos Angeles Lakers Figure 2: ToolQA, aiming to faithfully evaluate LLMs’ abilities to use external tools, curates data through three phases: (a) Reference Data Collection; (b) Human-Guided Question Generation; and (c) Programmatic Answer Generation. apply code-based tools to enhance LLMs’ abilities in question answering with a focus on tabular and math-related tasks. Several additional works [48, 28, 49] expand the scope of tool utilization by incorporating different types of basic tools (e.g. calculator, calendar, machine translation) to solve complex reasoning tasks. ART [39], ReAct [66], and Reflexion [51] leverage large language models (LLMs) to auto-generate intermediate reasoning steps as well as actions, thereby improving interpretability and problem-solving abilities in diverse decision-making tasks. In addition, several works have extended this line of learning paradigm to other modalities [64, 61] and other domains [18]. A detailed comparison between existing tool-use LLMs can be found in Appendix A. 2.2 Benchmarks on Tool-Augmented LLMs Earlier tool-augmented LLMs primarily assess single tool usage based on downstream task perfor- mance across existing benchmarks. For example, there are works that study how text retrievers augment LLMs’ performance on open-domain question-answering [19, 65], fact-checking [53], and timely information benchmarks [6, 21, 68, 10]. Besides, the mathematical reasoning abilities of exter- nal calculators and Python interpreters are evaluated using computation-intensive QA datasets [9, 29]. However, these evaluation benchmarks may not faithfully reflect the extent to which models leverage external tools, as some questions could still be correctly answered solely using the internal knowl- edge of the LLMs. ToolQA attempts to mitigate these issues by selecting data from out-of-scope sources that have not been memorized by LLMs. Concurrent with our work, there are several recent benchmarks for evaluating LLMs’ ability in using multiple tools for solving challenging tasks, in- cluding API-Bank [26], APIBench [41], and ToolBench [44, 62]. They mainly focus on constructing high-quality tool chains for LLM fine-tuning and evaluating API call trace accuracy against a fixed ground truth trace. 
In contrast, ToolQA is unique in that it focuses on the open-ended use of tools for question-answering, rather than benchmarking the intermediate process of tool use. Specifically, ToolQA creates tool-based question-answer pairs and assesses whether LLMs can arrive at the correct answer, regardless of the tool chains used. 3 ToolQA Dataset 3.1 Dataset Details We curate the ToolQA benchmark to evaluate LLMs’ capability in leveraging external tools for question answering. ToolQA consists of data from 8 distinct domains, each instance being a tuple — (question, answer, reference corpora, and tools). The reference corpora are external knowledge sources that can be queried, which can be a text corpus, a tabular database, or a graph. To enable 3 • General Knowledge• Out-Dated Information• Publicly Available DataInternal KnowledgeMost Recent DataProfessional AbilitiesPrivate/Commercial DataExternal Knowledge(a)Reference Data CollectionQuestionQuestionExternal Knowledge(b) Human-Guided Question GenerationDataQuestion TemplatesFlight Data Question Templates:• Did the flight from {Origin}to {Dest} on {Date} get cancelled or diverted? (External Knowledge)• What was the flight distance for the flight from {Origin}to {Dest}on {Date}?(Internal Knowledge)• Which product on {FlightNumber} has the highest price? (Not Mentioned) ... ...(c) Programmatic Answer GenerationQ: Did…{Origin}to {Dest} on {Date}…diverted? LAXITHCLTSFOATLMDW......10/15/2201/09/2205/25/22...A:defquestion_gen(table_row):Origin= table_row["Origin"]Dest= table_row["Dest"]FlightDate= table_row["FlightDate"]...returnquestion,answer obtaining information from the reference corpora, we have developed 13 tools for text retrieval, database operations, code interpretation, mathematical computations, and more. The questions are designed to simulate real-world information-seeking inquiries. However, they cannot be answered directly with LLMs’ internal knowledge, but instead require LLMs to obtain information from the reference corpora via tool use. Table 1 shows the detailed statistics of ToolQA. To reduce human efforts in generating faithful question-answer pairs to evaluate LLMs’ tool-use capabilities, we propose an automatic three-phase process (Figure 2): (1) We first select data from public sources that are unmemorized by LLMs during Reference Data Collection; (2) We adopt Human-Guided Question Generation to steer LLMs to generate valid questions according to pre- defined templates; (3) We produce accurate answers for the generated questions with Programmatic Answer Generation. We detail the three-phase generation process in the following. 3.2 Reference Data and Tools To evaluate LLMs’ ability in using external tools for question answering, it is crucial to ensure that they cannot directly answer the questions with their internal knowledge. To this end, we collect reference corpora that meet the following criteria (Figure 2(a)): 1) The reference corpora should ideally not overlap with the LLM’s pre-training data; 2) The reference corpora should contain context-sensitive facts for generating questions that cannot be directly answered solely based on LLMs’ internal knowledge and reasoning abilities; 3) LLMs should be able to obtain all the necessary information from the reference corpora to correctly answer the questions. Based on these criteria, we define 6 contextual dimensions: temporal, spatial, social, scientific, mathematical, and personal. 
We collect reference corpora that can yield context-specific questions along one or more of the 6 dimensions. Specifically: 1) Along the temporal dimension, we collect the Flights and Coffee corpora, which contain the latest information that is out of the temporal scope of the LLM’s pre-training data. 2) Along the spatial dimension, we collect Yelp and Airbnb, which are two non-text corpora that can yield questions with spatial contexts. 3) Along the mathematical dimension, we collect the questions from GSM8K that ChatGPT cannot answer correctly with its own mathematical reasoning ability; 4) SciREX emphasizes detailed model performances from the scientific domain [16], where GPT family models can easily hallucinate [36]. 5) To incorporate personal data and avoid privacy issues, we synthesize the personal Agenda corpus with ChatGPT with virtual names and events. 6) In addition, we also select data from the most recent DBLP database and create graphs between authors and papers, where social relational knowledge cannot be understood by LLMs currently. Further details can be found in Appendix B. To obtain information from these reference corpora, we design 13 tools that are available to the LLMs (Table 2). These tools are designed as follows: • Text: AgendaRetriever and SciREXRetreiver are text retrieval tools. They can retrieve relevant information to a given query from the (synthesized) personal agenda corpus and scientific corpus. • Database: Database Loader loads data from the local tabular Database. Data Filter can filter the database according to a set of conditions, each of which is composed of a column name, a relation, and a pre-determined value (e.g., “Date=2022-10-15”). Get Value returns all the values under a certain column in the database. Table 1: Dataset Statistics of ToolQA. Context Topic External Knowledge Easy Hard Format Size # Templates # Questions # Templates # Questions Temporal Spatial Flight Coffee Yelp Airbnb Tabular Database Tabular Database 4078318 5746 Tabular Database Tabular Database 150346 102599 Mathematical GSM8K Professional Ability - Social DBLP Graph 553320 Scientific SciREX Pure-Text Corpus 438 Personal Agenda Pure-Text Corpus 10000 SUM - - - 10 8 11 10 - 10 1 5 55 100 100 100 100 100 100 100 100 800 10 13 10 10 - 10 4 5 62 100 130 100 100 - 100 100 100 730 4 Table 2: Different tools in ToolQA. Tool Types # Tools Tools Text Tools Database Tools Math Tools Graph Tools Code Tools System Tools 2 3 1 4 2 1 Agenda Retriever, SciREX Retriever Database Loader, Data Filter, Get Value WolframAlpha Calculator Graph Loader, Neighbour Checker, Node Checker, Edge Checker Python Interpreter, SQL Interpreter Finish • Math: Calculator is a mathematical tool that treats the input string as a formula and calculates the corresponding result. We use the WolframAlpha API portal as the calculator 3, which can perform both simple computations (e.g., add, subtraction, multiplication) and complicated operations (e.g., averaging, finding maximum values). • Graph: Graph Loader loads the graph from local files for future operations. Neighbour Checker lists all the neighbors of the query node in the graph. Node Checker and Edge Checker return the detailed attribute information of the query node and edge, respectively. • Code: The SQL Interpreter and the Python Interpreter are responsible for interpreting and executing SQL commands and Python code, respectively. They can receive and transform data from other tools, serving as bridges between different tools and the LLM. 
• System: Finish parses the feedback from execution and returns the answer to finish the task. 3.3 Human-Guided Question Generation The question generation phase aims to generate questions that can be answered by using the available tools over the reference corpora. There are two straightforward strategies to generate questions: 1) letting human experts come up with questions about reference corpora, or 2) relying solely on LLMs to generate questions about the reference corpora. However, both strategies have their drawbacks. While human experts can produce high-quality questions, the entire process is labor-intensive, time- consuming, and hard to scale. Depending solely on LLMs may generate unanswerable questions or hallucinate information that does not exist in the reference data. Besides, some of the LLM-generated questions are too easy and can be directly answered with only LLMs’ internal knowledge. To address these challenges, we propose a human-guided LLM generation approach that uses question templates to bridge human guidance and automatic LLM generation [59, 69]. We first ask ChatGPT to generate candidate question templates from reference data, using prompts such as “Generate some template questions based on the given information and provide the corresponding answers.”. The responses obtained are arrays containing potential question templates. We then perform manual validation to select the templates that cannot be answered with LLMs’ internal knowledge but become answerable with the reference corpora. We provide a comprehensive list of both easy and hard question templates for different reference data in Appendix C and Appendix D. After the high-quality question templates are manually selected, we sample values from the reference data to automatically fill into the templates to generate concrete questions. For example, given the template “Did the flight from {Origin} to {Dest} on {Date} get canceled or diverted?”, we can sample the values “LAX”, “MDW”, “01/09/22” from the reference Flight tabular data and fill into the template to form a question: “Did the flight from LAX to MDW on 01/09/22 get canceled or diverted?” Depending on the difficulty of the questions, we classify them into two classes — easy and hard. Easy questions primarily focus on extracting a single piece of information from external knowledge, thus requiring fewer tools to involve in the solution. Conversely, hard questions require complex operations (e.g., average) and reasoning (e.g., comparison) over multiple information pieces drawn from the reference corpora, requiring more tools and complex reasoning among them. 3.4 Programmatic Answer Generation Our final step is to create accurate answers for the generated questions. To guarantee the validity of these responses, we implement 1) operators, which are functions corresponding to the predefined tools; and 2) tool chains, which are schemas for composing different operators for different question templates. For each question, as we know the true arguments filled into the question template, we can 3https://products.wolframalpha.com/api 5 Table 3: Success rates on easy questions. Flight Coffee Agenda Yelp DBLP SciREX GSM8K Airbnb Average ChatGPT CoT Chameleon ReAct (GPT-3) ReAct (GPT-3.5) 2.0 1.0 30.0 61.0 48.0 0.0 1.0 9.0 90.0 81.0 0.0 0.0 4.0 29.0 24.0 15.0 9.0 8.0 77.0 64.0 0.0 0.0 3.0 28.0 23.0 2.0 0.0 0.0 3.0 2.0 26.0 30.0 27.0 32.0 23.0 0.0 0.0 4.0 25.0 29.0 5.6 5.1 10.6 43.1 36.8 Table 4: Success rate on hard questions. 
Flight Coffee Agenda Yelp Airbnb DBLP SciREX Average ChatGPT CoT Chameleon ReAct (GPT-3) ReAct (GPT-3.5) 2.0 0.0 3.0 3.0 5.0 2.3 0.8 2.3 10.8 17.7 1.0 0.0 0.0 0.0 7.0 0.0 1.0 0.0 3.0 8.0 2.0 0.0 0.0 0.0 7.0 4.0 3.0 8.0 19.0 5.0 3.0 5.0 0.0 0.0 8.0 2.0 1.4 1.9 5.1 8.2 run the tool chains with the corresponding arguments to programmatically extract answers from the reference data. This process enables automatic generation correct answers to questions, even for those questions that involve multi-step reasoning. Figure 2(c) demonstrates this generation process. When answering a generated question with sampled values “Did the flight from LAX to MDW on 01/09/22 get canceled or diverted?”, we write Python codes to implement the operators over the reference data, including database loader, data filter, and get-value function. Then, the programmatic pipeline runs a tool chain of these operators to automatically generate the correct answer (details in Appendix E). 4 Experiments 4.1 Baselines We evaluate the performance of the following methods on ToolQA, covering both standard LLMs and tool-augmented LLMs: (1) ChatGPT [37]: We directly feed the question into OpenAI’s ChatGPT model (gpt-3.5-turbo) and obtain its response as the final answer. (2) CoT [57, 23]: We use chain-of-thoughts prompting for ChatGPT, adding the prompt "Let’s think step by step:" after the question to leverage LLMs’ reasoning ability for question answering. (3) Chameleon [28] is a recent method that uses LLMs as a controller to use multiple tools for solving subtasks and has shown promising results in reasoning and QA tasks. When running Chameleon on ToolQA, we set the tool pool to our defined tools in § 3.1. (4) ReAct [66] integrates reasoning with tool use by prompting LLMs to generate interleaved verbal reasoning traces and tool calls. This integration has been shown effective in enhancing LLMs’ problem-solving capabilities. We instantiate two versions of ReAct using gpt-3.5-turbo and text-davinci-003. Different from the existing works that mainly provide task-level few-shot exemplars, we provide tool-level demonstrations. We used 8 demonstrations about how to use tools for QA, ensuring that each tool in the pool is covered at least once by the demonstrations. Such tool-level demonstrations provide a concise tutorial to the LLMs for tool use, covering all tool uses with the LLM context limit. Details about the demonstrations and our prompts are included in Appendix F. To assess the performance of methods on the ToolQA benchmark, we normalize both the ground-truth answers and the model predictions to ensure uniformity in format. Success rates are then computed based on the exact match between these normalized answers. We evaluate the model’s ability against the generated question-answer pairs in an open-ended manner, focusing on whether the model can arrive at the correct answer, regardless of the used tool chains. 4.2 Results Comparing Different Tool-Use LLMs. Table 3 and 4 shows the results of different methods on the easy and hard questions. ChatGPT and CoT achieve very poor success rates (< 10) on both easy and hard questions across different tasks. This is expected as the questions in ToolQA cannot be answered solely based on LLMs’ internal knowledge and reasoning. Chameleon achieves slightly better performance, with 10.6% and 1.9% success rates on easy and hard questions, respectively. 
This is because Chameleon incorporates tool descriptions and integrates human-induced orderings of these tools in its context, enabling it to comprehend and compose different tools for QA. However, Chameleon cannot take feedback from the execution trace, thus often suffering from infeasible 6 (a) Incorrect tool calls of ReAct on ToolQA. (b) Confusion matrix of questions from dif- ferent resources in ToolQA. Figure 3: Analysis of incorrect tool calls and incorrect data sources made by ReAct on ToolQA. actions or omitted arguments in its generated plans. ReAct is the best-performing model. It can use observations in the execution trace to generate its next action, allowing it to iteratively refine its tool use chain and obtain better success rates. Easy vs. Hard Questions. Comparing Table 3 and 4, we observe that all the baselines perform much worse on hard questions. The best method achieves an average success rate of 43.13% on easy questions, while that number drops to 8.24% on hard questions. As mentioned in § 3, the hard questions in ToolQA require more tool calls and more complicated compositions. Current tool- augmented LLMs struggle with answering such hard questions, which requires further development of techniques to improve their ability to reason about the task and generate plans for tool use. GPT-3 vs. GPT3.5. 4 Comparing the different versions of ReAct, we observe that the ReAct (GPT-3) outperforms ReAct (GPT-3.5) on easy questions, yet it shows inferior performance on hard questions. Our hypothesis is that for easy questions, it is more important to learn and follow the format of the tool calls in the context, which GPT-3 is stronger at. For hard questions, the better reasoning and code understanding abilities of GPT-3.5 enables it to come up with “innovative” solutions that never appear in the context, leading to higher success rates. An example can be referred to in § 5.3. 5 Result Analysis and Discussion We analyze the drawbacks and possible improvements of existing tool-augmented LLMs, taking the best-performed ReAct (GPT-3.5) model on the hard questions of ToolQA as an example. 5.1 Main Error Type I: Argument Errors By performing comprehensive error analysis, we found that the most common error type when asking LLMs to use tools for QA is argument error — LLMs calling the tools with wrong arguments. For ReAct, this error type makes 44.56% and 48.23% out of the 377 and 436 error cases on easy and hard questions respectively, as shown in Figure 3(a). Interestingly, ReAct shows different argument error patterns on easy and hard questions. On easy questions, it tends to make more mistakes on database-related tools. For example, the model commits a total of 120 errors when calling LoadDB, FilterDB, and GetValue tools for easy questions, while this number reduces to 95 for hard questions. On the other hand, when dealing with code-related tools (e.g., SQLInterpreter and PythonInterpreter), ReAct makes nearly 10x more errors for hard questions than for easy ones. This phenomenon is likely because the solution logic for hard questions is often more complex and cannot be fully inferred from the context alone. Consequently, the LLMs tend to rely on their understanding of code and programming concepts to tackle these intricate questions. In contrast, for easy questions, the LLMs tend to follow the patterns provided in the context, attempting to combine different database operations to arrive at a solution. 
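To make the notion of an argument error concrete, the following sketch checks a bracketed ReAct-style action such as FilterDB[Date>=2019-10-31, Date<=2021-07-23] (the format visible in the traces of Figure 4) against a table schema before execution. This is an illustrative sketch rather than part of the released ToolQA code; the Coffee column names and the helper function are assumptions made for the example.

import re

# Hypothetical schema for the Coffee table; the exact column names are an
# assumption made for illustration only.
COFFEE_COLUMNS = {"Date", "Open", "High", "Low", "Close", "Volume"}

# Actions follow the Tool[arguments] pattern used in the ReAct traces above.
ACTION_RE = re.compile(r"^(?P<tool>\w+)\[(?P<args>.*)\]$")
CONDITION_RE = re.compile(r"^\s*(?P<col>\w+)\s*(?P<rel><=|>=|=|<|>)\s*(?P<val>.+?)\s*$")

def check_filterdb_action(action: str, columns: set) -> list:
    """Return a list of argument problems found in a FilterDB action string."""
    match = ACTION_RE.match(action.strip())
    if match is None:
        return [f"action is not in Tool[args] form: {action!r}"]
    if match.group("tool") != "FilterDB":
        return [f"expected FilterDB, got {match.group('tool')!r}"]
    problems = []
    for condition in match.group("args").split(","):
        cond = CONDITION_RE.match(condition)
        if cond is None:
            problems.append(f"malformed condition: {condition.strip()!r}")
        elif cond.group("col") not in columns:
            problems.append(f"unknown column: {cond.group('col')!r}")
    return problems

# A well-formed call passes; a call on a non-existent column is flagged.
print(check_filterdb_action("FilterDB[Date>=2019-10-31, Date<=2021-07-23]", COFFEE_COLUMNS))
print(check_filterdb_action("FilterDB[Price>100]", COFFEE_COLUMNS))

Running the two example calls prints an empty list for the well-formed action and flags the unknown column in the second.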
5.2 Main Error Type II: Incorrect Data Source We have conducted an investigation into the data sources preferred by LLMs when answering questions. We found that LLMs also have difficulties in identifying the proper reference corpora answer the questions. This behavior is graphically represented as a confusion matrix in Figure 3(b). Upon examining the figure, it is apparent that for target reference corpora like Flight, Coffee, Airbnb, 4GPT-4 was not included in the evaluation as we have no access to its API. 7 LoadDBFilterDBSQLPythonGetValueCalculaterRetrieveScirexCheckNeighboursCheckEdge050100150265511239222110282103111112029# Wrong Callseasyhardflightscoffeedblpairbnbyelpscirexagendaflightscoffeedblpairbnbyelpscirexagenda050100150200 Figure 4: An example of innovation and hallucination when answering hard questions on Coffee data. Actions and observations shrouded in pink are incorrect, whereas those in green are correct. Terms highlighted in yellow signify hallucinations produced by ReAct (GPT-3.5). (a) Easy questions. (b) Hard questions. Figure 5: Error analysis of ReAct on ToolQA. and Yelp that contain temporal information, LLMs are more likely to query the Agenda corpus for answering questions. Similarly, given that the SciREX knowledge corpora and DBLP graph are both in the scientific domain, LLMs tend to be confused about which source to query when answering scientific questions. 5.3 Main Error Type III: Innovation and Hallucination For in-context tool-augmented LLMs, it is typical to include descriptions and use-case examples of each tool in the prompt. However, as the problem complexity increases with the number of tools, it becomes challenging to encompass all possible instances of compositional tool use as few-shot exemplars. Consequently, it is vital for LLMs to uncover logical relationships among different tools, which have never been encompassed in the human-provided exemplars, to solve challenging tasks — a process we refer to as "innovation." However, these innovative behaviors are a double-edged sword as they are often accompanied by hallucinations. Figure 4 illustrates this phenomenon with a case study, where LLMs answer hard questions with reference Coffee data. Given the context length constraint, the few-shot exemplar only showcases the basic usage of database operations and the SQL interpreter. For the hard question in Figure 4(a), ReAct (GPT-3) strictly follows the operations displayed in the context, leading to failure. On the contrary, ReAct (GPT-3.5) innovatively identifies the SQL interpreter as a possible alternative to database operations, especially when the latter fails repeatedly. However, such innovations can oftentimes lead to hallucinations. As shown in Figure 4(b), when answering another hard question from the Coffee data, ReAct (GPT-3.5) opts to hallucinate certain observations (highlighted in yellow) that are non-existent in the feedback from tool execution. 8 Coffee-HardQuestion: What was the average coffee price from 2019-10-31 to 2021-07-23?Correct Answer: 120.9ReAct(GPT-3)Thought 1: This is a question related to coffee. 
We need to load the coffee database.Action 1: LoadDB[coffee]Observation 1: We have successfully loaded the coffee database...Thought 2: We need to filter the information related to the query.Action 2: FilterDB[Date>=2019-10-31, Date<=2021-07-23]Observation 2: We have successfully filtered the data (435 rows).Thought 3: We then obtain all the Close values of filtered records.Action 3: GetValue[Close]Observation 3: 101.95, 104.0, 103.65, …, 193.65, 189.0 (435 values)Too Long Context, AbortedReAct(GPT-3.5)...Thought 3: We need to calculate the average coffee price from the filtered data.Action 3: Calculate[AVG(Close)]Observation 3: (data not available)Thought 4: We need to retrieve the result of the previous calculation.Action 4: GetValue[AVG(Close)]Observation 4: The value you are querying does not exist. Please modify it.Thought 5: Oops, I made a mistake in the SQL query. Let me correct it.Action 5: SQLInterpreter[SELECT AVG(Close) FROM coffee.coffee_dataWHERE Date >= '2019-10-31' AND Date <= '2021-07-23']Observation 5: AVG(Close): 120.94999999999989Thought 6: After calculation, we know that the average coffee price from 2019-10-31 to 2021-07-23 is 120.9.Action 6: Finish[120.9]Observation 6: Answer is CORRECTCoffee-HardQuestion: How much did the coffee price change from 2006-11-27 to 2016-06-03?Correct Answer: 10.5ReAct(GPT-3.5)...Thought 6: We need to modify the filtering query to the correct format by adding quotes around the date.Action 6: FilterDB[Date='2016-06-03']Observation 6: The filtering query Date='2016-06-03' is incorrect. Please modify the condition.Thought 7: We have successfully filtered the data (1 rows).Action 7: GetValue[Close]Observation 7: 118.0Thought 8: We need to subtract the price of coffee on 2006-11-27 from the price of coffee on 2016-06-03.Action 8: Calculate[118.0-94.25]Observation 8: 23.75Thought 9: After calculation, we know that the coffee price changed by 23.75.Action 9: Finish[23.75]Observation 9: Answer is INCORRECT(a)Innovation(b)Hallucination44.56% Argument Error9.02% Infeasible Actions4.77% Hallucination18.04% Incorrect Data Source10.88% Too Long Context6.90% Mis-understanding5.84% Low-Quality Retrieval48.23% Argument Error9.15% Infeasible Actions7.48% Hallucination7.48% Incorrect Data Source16.63% Too Long Context0.83% Mis-understanding10.19% Low-Quality Retrieval 5.4 Other Error Types We manually go through and count all the errors made by the ReAct (GPT-3.5) model and show the errors on both easy and hard questions in Figure 5. In addition to the aforementioned 3 main error types, there are 4 error types that frequently occur: • Infeasible Actions: The execution of tool calls are infeasible in the environment, often involving new tools that do not exist in the pre-defined tool pool. • Too Long Context: The encoding of interaction history, observations, and tool-use plans exceed the length limitation of GPT family models, resulting in runtime errors; • Mis-understanding: The LLMs cannot understand the observations obtained from external interaction and fail to determine the next steps or generate answers; • Low-Quality Retrieval: This error occurs when the retrieval model fails to extract the relevant information from text corpora, indicating insufficient external knowledge for LLMs to answer questions accurately. Comparing these error types on easy and hard questions, we find that the overall distribution is similar, though there is a slightly higher rate of hallucination and long-context errors when answering hard questions. 
This can be attributed to the complexity of hard questions, which often require composing more tools for question answering. 6 Conclusion We have developed ToolQA, a dataset that assesses the ability of Large Language Models (LLMs) in using external tools for solving complex problems. ToolQA is curated by an automated three- phase process for reference data collection, template-based question generation, and programmatic answer generation. This pipeline is general and can be expanded to incorporate any area of external knowledge of interest. We tested both standard LLMs and tool-augmented LLMs on ToolQA. Our analysis showed that even the strongest baseline achieved limited performance on the hard questions of ToolQA. Our study also found that current tool-augmented LLMs tend to make errors such as incorrect tool calls and using incorrect data sources. These issues could potentially be addressed by fine-tuning using a collection of tool-use corpora with publicly accessible LLMs. In the future, we are interested in include collecting high-quality, diverse data for fine-tuning, as well as assessing the performance of fine-tuned tool-augmented LLMs on ToolQA. 9 References [1] Y. Bai, A. Jones, K. Ndousse, A. Askell, A. Chen, N. DasSarma, D. Drain, S. Fort, D. Ganguli, T. Henighan, et al. Training a helpful and harmless assistant with reinforcement learning from human feedback. arXiv preprint arXiv:2204.05862, 2022. [2] S. Borgeaud, A. Mensch, J. Hoffmann, T. Cai, E. Rutherford, K. Millican, G. B. Van Den Driess- che, J.-B. Lespiau, B. Damoc, A. Clark, et al. Improving language models by retrieving from trillions of tokens. In International conference on machine learning, pages 2206–2240. PMLR, 2022. [3] T. Brown, B. Mann, N. Ryder, M. Subbiah, J. D. Kaplan, P. Dhariwal, A. Neelakantan, P. Shyam, G. Sastry, A. Askell, et al. Language models are few-shot learners. Advances in neural information processing systems, 33:1877–1901, 2020. [4] S. Bubeck, V. Chandrasekaran, R. Eldan, J. Gehrke, E. Horvitz, E. Kamar, P. Lee, Y. T. Lee, Y. Li, S. Lundberg, et al. Sparks of artificial general intelligence: Early experiments with gpt-4. arXiv preprint arXiv:2303.12712, 2023. [5] W. Chen, X. Ma, X. Wang, and W. W. Cohen. Program of thoughts prompting: Disentangling computation from reasoning for numerical reasoning tasks, 2022. [6] W. Chen, X. Wang, and W. Y. Wang. A dataset for answering time-sensitive questions. In Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2), 2021. [7] A. Chowdhery, S. Narang, J. Devlin, M. Bosma, G. Mishra, A. Roberts, P. Barham, H. W. Chung, C. Sutton, S. Gehrmann, et al. Palm: Scaling language modeling with pathways. arXiv preprint arXiv:2204.02311, 2022. [8] H. W. Chung, L. Hou, S. Longpre, B. Zoph, Y. Tay, W. Fedus, E. Li, X. Wang, M. De- hghani, S. Brahma, et al. Scaling instruction-finetuned language models. arXiv preprint arXiv:2210.11416, 2022. [9] K. Cobbe, V. Kosaraju, M. Bavarian, M. Chen, H. Jun, L. Kaiser, M. Plappert, J. Tworek, J. Hilton, R. Nakano, et al. Training verifiers to solve math word problems. arXiv preprint arXiv:2110.14168, 2021. [10] B. Dhingra, J. R. Cole, J. M. Eisenschlos, D. Gillick, J. Eisenstein, and W. W. Cohen. Time- aware language models as temporal knowledge bases. Transactions of the Association for Computational Linguistics, 10:257–273, 2022. [11] L. Gao, A. Madaan, S. Zhou, U. Alon, P. Liu, Y. Yang, J. Callan, and G. Neubig. Pal: Program- aided language models. 
arXiv preprint arXiv:2211.10435, 2022. [12] D. Hendrycks, C. Burns, S. Kadavath, A. Arora, S. Basart, E. Tang, D. Song, and J. Steinhardt. Measuring mathematical problem solving with the math dataset. NeurIPS, 2021. [13] S. Imani, L. Du, and H. Shrivastava. Mathprompter: Mathematical reasoning using large language models. arXiv preprint arXiv:2303.05398, 2023. [14] G. Izacard, M. Caron, L. Hosseini, S. Riedel, P. Bojanowski, A. Joulin, and E. Grave. To- wards unsupervised dense information retrieval with contrastive learning. arXiv preprint arXiv:2112.09118, 2021. [15] G. Izacard, P. Lewis, M. Lomeli, L. Hosseini, F. Petroni, T. Schick, J. Dwivedi-Yu, A. Joulin, S. Riedel, and E. Grave. Few-shot learning with retrieval augmented language models. arXiv preprint arXiv:2208.03299, 2022. [16] S. Jain, M. van Zuylen, H. Hajishirzi, and I. Beltagy. SciREX: A challenge dataset for document- level information extraction. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7506–7516, Online, July 2020. Association for Computational Linguistics. [17] Z. Ji, N. Lee, R. Frieske, T. Yu, D. Su, Y. Xu, E. Ishii, Y. J. Bang, A. Madotto, and P. Fung. Survey of hallucination in natural language generation. ACM Computing Surveys, 55(12):1–38, 2023. [18] Q. Jin, Y. Yang, Q. Chen, and Z. Lu. Genegpt: Augmenting large language models with domain tools for improved access to biomedical information. ArXiv, 2023. 10 [19] M. Joshi, E. Choi, D. Weld, and L. Zettlemoyer. TriviaQA: A large scale distantly supervised challenge dataset for reading comprehension. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1601–1611, Vancouver, Canada, July 2017. Association for Computational Linguistics. [20] V. Karpukhin, B. Oguz, S. Min, P. Lewis, L. Wu, S. Edunov, D. Chen, and W.-t. Yih. Dense passage retrieval for open-domain question answering. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6769–6781, 2020. [21] J. Kasai, K. Sakaguchi, Y. Takahashi, R. L. Bras, A. Asai, X. Yu, D. Radev, N. A. Smith, Y. Choi, and K. Inui. Realtime qa: What’s the answer right now? arXiv preprint arXiv:2207.13332, 2022. [22] G. Kim, P. Baldi, and S. McAleer. Language models can solve computer tasks, 2023. [23] T. Kojima, S. S. Gu, M. Reid, Y. Matsuo, and Y. Iwasawa. Large language models are zero-shot reasoners. In A. H. Oh, A. Agarwal, D. Belgrave, and K. Cho, editors, Advances in Neural Information Processing Systems, 2022. [24] P. Lewis, E. Perez, A. Piktus, F. Petroni, V. Karpukhin, N. Goyal, H. Küttler, M. Lewis, W.-t. Yih, T. Rocktäschel, et al. Retrieval-augmented generation for knowledge-intensive nlp tasks. Advances in Neural Information Processing Systems, 33:9459–9474, 2020. [25] A. Lewkowycz, A. Andreassen, D. Dohan, E. Dyer, H. Michalewski, V. Ramasesh, A. Slone, C. Anil, I. Schlag, T. Gutman-Solo, et al. Solving quantitative reasoning problems with language models. arXiv preprint arXiv:2206.14858, 2022. [26] M. Li, F. Song, B. Yu, H. Yu, Z. Li, F. Huang, and Y. Li. Api-bank: A benchmark for tool-augmented llms, 2023. [27] B. Y. Lin, K. Tan, C. S. Miller, B. Tian, and X. Ren. Unsupervised cross-task generalization via retrieval augmentation. In Advances in Neural Information Processing Systems, 2022. [28] P. Lu, B. Peng, H. Cheng, M. Galley, K.-W. Chang, Y. N. Wu, S.-C. Zhu, and J. Gao. 
Chameleon: Plug-and-play compositional reasoning with large language models. arXiv preprint arXiv:2304.09842, 2023. [29] P. Lu, L. Qiu, K.-W. Chang, Y. N. Wu, S.-C. Zhu, T. Rajpurohit, P. Clark, and A. Kalyan. Dynamic prompt learning via policy gradient for semi-structured mathematical reasoning. arXiv preprint arXiv:2209.14610, 2022. [30] S. Lu, N. Duan, H. Han, D. Guo, S.-w. Hwang, and A. Svyatkovskiy. Reacc: A retrieval- augmented code completion framework. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 6227–6240, 2022. [31] A. Madaan and A. Yazdanbakhsh. Text and patterns: For effective chain of thought, it takes two to tango. arXiv preprint arXiv:2209.07686, 2022. [32] A. Mallen, A. Asai, V. Zhong, R. Das, H. Hajishirzi, and D. Khashabi. When not to trust language models: Investigating effectiveness and limitations of parametric and non-parametric memories. arXiv preprint arXiv:2212.10511, 2022. [33] S. Mishra, M. Finlayson, P. Lu, L. Tang, S. Welleck, C. Baral, T. Rajpurohit, O. Tafjord, A. Sabharwal, P. Clark, et al. Lila: A unified benchmark for mathematical reasoning. arXiv preprint arXiv:2210.17517, 2022. [34] R. Nakano, J. Hilton, S. Balaji, J. Wu, L. Ouyang, C. Kim, C. Hesse, S. Jain, V. Kosaraju, W. Saunders, et al. Webgpt: Browser-assisted question-answering with human feedback. arXiv preprint arXiv:2112.09332, 2021. [35] R. Nogueira, Z. Jiang, and J. Lin. Investigating the limitations of transformers with simple arithmetic tasks. arXiv preprint arXiv:2102.13019, 2021. [36] OpenAI. Gpt-4 technical report. arXiv, 2023. [37] OpenAI. Introducing chatgpt, 2023. [38] L. Ouyang, J. Wu, X. Jiang, D. Almeida, C. Wainwright, P. Mishkin, C. Zhang, S. Agarwal, K. Slama, A. Ray, et al. Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems, 35:27730–27744, 2022. 11 [39] B. Paranjape, S. Lundberg, S. Singh, H. Hajishirzi, L. Zettlemoyer, and M. T. Ribeiro. Art: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:2303.09014, 2023. [40] A. Parisi, Y. Zhao, and N. Fiedel. Talm: Tool augmented language models. arXiv preprint arXiv:2205.12255, 2022. [41] S. G. Patil, T. Zhang, X. Wang, and J. E. Gonzalez. Gorilla: Large language model connected with massive apis. arXiv preprint arXiv:2305.15334, 2023. [42] B. Peng, M. Galley, P. He, H. Cheng, Y. Xie, Y. Hu, Q. Huang, L. Liden, Z. Yu, W. Chen, et al. Check your facts and try again: Improving large language models with external knowledge and automated feedback. arXiv preprint arXiv:2302.12813, 2023. [43] J. Qian, H. Wang, Z. Li, S. Li, and X. Yan. Limitations of language models in arithmetic and symbolic induction. arXiv preprint arXiv:2208.05051, 2022. [44] Y. Qin, S. Hu, Y. Lin, W. Chen, N. Ding, G. Cui, Z. Zeng, Y. Huang, C. Xiao, C. Han, Y. R. Fung, Y. Su, H. Wang, C. Qian, R. Tian, K. Zhu, S. Liang, X. Shen, B. Xu, Z. Zhang, Y. Ye, B. Li, Z. Tang, J. Yi, Y. Zhu, Z. Dai, L. Yan, X. Cong, Y. Lu, W. Zhao, Y. Huang, J. Yan, X. Han, X. Sun, D. Li, J. Phang, C. Yang, T. Wu, H. Ji, Z. Liu, and M. Sun. Tool learning with foundation models, 2023. [45] A. Radford, J. Wu, R. Child, D. Luan, D. Amodei, and I. Sutskever. Language models are unsupervised multitask learners. OpenAI blog, 1(8):9, 2019. [46] S. Robertson, H. Zaragoza, et al. The probabilistic relevance framework: Bm25 and beyond. Foundations and Trends® in Information Retrieval, 3(4):333–389, 2009. [47] T. L. 
Scao, A. Fan, C. Akiki, E. Pavlick, S. Ili´c, D. Hesslow, R. Castagné, A. S. Luccioni, F. Yvon, M. Gallé, et al. Bloom: A 176b-parameter open-access multilingual language model. arXiv preprint arXiv:2211.05100, 2022. [48] T. Schick, J. Dwivedi-Yu, R. Dessì, R. Raileanu, M. Lomeli, L. Zettlemoyer, N. Cancedda, and T. Scialom. Toolformer: Language models can teach themselves to use tools. arXiv preprint arXiv:2302.04761, 2023. [49] Y. Shen, K. Song, X. Tan, D. Li, W. Lu, and Y. Zhuang. Hugginggpt: Solving ai tasks with chatgpt and its friends in huggingface. arXiv preprint arXiv:2303.17580, 2023. [50] W. Shi, S. Min, M. Yasunaga, M. Seo, R. James, M. Lewis, L. Zettlemoyer, and W.-t. Yih. Replug: Retrieval-augmented black-box language models. arXiv preprint arXiv:2301.12652, 2023. [51] N. Shinn, B. Labash, and A. Gopinath. Reflexion: an autonomous agent with dynamic memory and self-reflection. arXiv preprint arXiv:2303.11366, 2023. [52] H. Sun, Y. Zhuang, L. Kong, B. Dai, and C. Zhang. Adaplanner: Adaptive planning from feedback with language models, 2023. [53] J. Thorne, A. Vlachos, C. Christodoulopoulos, and A. Mittal. FEVER: a large-scale dataset for fact extraction and VERification. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 809–819, New Orleans, Louisiana, June 2018. Association for Computational Linguistics. [54] H. Touvron, T. Lavril, G. Izacard, X. Martinet, M.-A. Lachaux, T. Lacroix, B. Rozière, N. Goyal, E. Hambro, F. Azhar, et al. Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971, 2023. [55] X. Wang, S. Li, and H. Ji. Code4struct: Code generation for few-shot structured prediction from natural language. arXiv preprint arXiv:2210.12810, 2022. [56] Z. Wang, S. Cai, A. Liu, X. Ma, and Y. Liang. Describe, explain, plan and select: Interactive planning with large language models enables open-world multi-task agents, 2023. [57] J. Wei, X. Wang, D. Schuurmans, M. Bosma, B. Ichter, F. Xia, E. Chi, Q. Le, and D. Zhou. Chain- of-thought prompting elicits reasoning in large language models. arXiv, page 2201.11903v6, 2022. 12 [58] L. Weidinger, J. Mellor, M. Rauh, C. Griffin, J. Uesato, P.-S. Huang, M. Cheng, M. Glaese, B. Balle, A. Kasirzadeh, et al. Ethical and social risks of harm from language models. arXiv preprint arXiv:2112.04359, 2021. [59] S. Wiegreffe, J. Hessel, S. Swayamdipta, M. Riedl, and Y. Choi. Reframing human-ai collab- oration for generating free-text explanations. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 632–658, 2022. [60] S. Wolfram. Wolfram|Alpha as the Way to Bring Computational Knowledge Superpowers to ChatGPT. Stephen Wolfram Writings, 2023. [61] C. Wu, S. Yin, W. Qi, X. Wang, Z. Tang, and N. Duan. Visual chatgpt: Talking, drawing and editing with visual foundation models. arXiv preprint arXiv:2303.04671, 2023. [62] Q. Xu, F. Hong, B. Li, C. Hu, Z. Chen, and J. Zhang. On the tool manipulation capability of open-source large language models. arXiv preprint arXiv:2305.16504, 2023. [63] R. Xu, Y. Yu, J. C. Ho, and C. Yang. Weakly-supervised scientific document classification via retrieval-augmented multi-stage training. arXiv preprint arXiv:2306.07193, 2023. [64] Z. Yang, L. Li, J. Wang, K. Lin, E. Azarnasab, F. Ahmed, Z. Liu, C. Liu, M. Zeng, and L. Wang. 
Mm-react: Prompting chatgpt for multimodal reasoning and action. arXiv preprint arXiv:2303.11381, 2023.
[65] Z. Yang, P. Qi, S. Zhang, Y. Bengio, W. Cohen, R. Salakhutdinov, and C. D. Manning. HotpotQA: A dataset for diverse, explainable multi-hop question answering. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2369–2380, Brussels, Belgium, Oct.-Nov. 2018. Association for Computational Linguistics.
[66] S. Yao, J. Zhao, D. Yu, N. Du, I. Shafran, K. R. Narasimhan, and Y. Cao. React: Synergizing reasoning and acting in language models. In The Eleventh International Conference on Learning Representations, 2023.
[67] J. Zhang. Graph-toolformer: To empower llms with graph reasoning ability via prompt augmented by chatgpt, 2023.
[68] M. Zhang and E. Choi. SituatedQA: Incorporating extra-linguistic contexts into QA. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 7371–7387, Online and Punta Cana, Dominican Republic, Nov. 2021. Association for Computational Linguistics.
[69] R. Zhang, Y. Yu, P. Shetty, L. Song, and C. Zhang. Prboost: Prompt-based rule discovery and boosting for interactive weakly-supervised learning. arXiv preprint arXiv:2203.09735, 2022.
[70] Y. Zhuang, Y. Li, J. Zhang, Y. Yu, Y. Mou, X. Chen, L. Song, and C. Zhang. ReSel: N-ary relation extraction from scientific text and tables by learning to retrieve and select. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 730–744, Abu Dhabi, United Arab Emirates, Dec. 2022. Association for Computational Linguistics.

A Additional Related Works

Table 5: A comparison of methods that leverage LLMs for Tool-use.

Methods | Tool Numbers | Tool Categories | # Tool/Task | Reasoning | Instruction Type | Task
Single-Tool Methods
CoT [57] | - | - | 1 | Generation | Prompting | QA
Lila [33] | 1 | math/code | 1 | Generation | Prompting | MathQA
Program-of-Thought [5] | 1 | code | 1 | Generation | Prompting | TabQA
Code4Struct [55] | 1 | code | 1 | Generation | Prompting | Event Extraction
PAL [11] | 1 | code | 1 | Generation | Prompting | MathQA
MathPrompt [13] | 1 | code | 1 | Generation | Prompting | MathQA
ToolFormer [48] | 5 | Basic | 1 | Generation | PR & FT | QA
GraphToolFormer [67] | 5 | Graph | 1 | Human Info | PR & FT | Graph
Talm [40] | 1 | Basic | 1 | Generation | PR & FT | QA
Multi-Tool Methods
WebGPT [34] | 10 | Web Operation | >1 | Feedback | Fine-tuning | QA
HuggingGPT [49] | >10 | Vision | >1 | Human Info | Prompting | VQA
Chameleon [28] | >10 | code, nlp, cv | >1 | Human Info | Prompting | ScienceQA, TabQA
GeneGPT [18] | 38 | NCBI APIs | >1 | Generation | Prompting | Gene Tasks
ART [39] | 8 | code/math/retriever | >1 | Human Feedback | Prompting | BigBench
ReAct [66] | 3 | retriever | >1 | Feedback | PR & FT | QA, AlfWorld, WebShop
MM-ReAct [64] | >10 | vision | >1 | Feedback | Prompting | CV tasks
Visual ChatGPT [61] | >10 | vision | >1 | Feedback | Prompting | CV tasks

We list the state-of-the-art related works in tool-augmented LLMs in Table 5. All of them can be categorized into two groups: (1) single-tool methods, which focus on making a single API call perfect in the solution; and (2) multi-tool methods, which emphasize studying how to compose different tools together to solve a challenging problem. ToolQA is more suitable for evaluating the second category, testing the logical reasoning required to chain different tools. Additionally, there exist other notable contributions [56, 22, 52] within the realm of decision-making that specifically emphasize the planning capabilities of expansive language models.
These endeavors can be regarded as methods affiliated with tools, wherein the actions within generated plans are analogous to distinct tools utilized for specific purposes. B Data Sources B.1 Different Data Source Introduction • Flight Status (2022-2023)5 contains almost all flight information of airlines between 2022 and 2023, which is too contemporary for LLMs’ internal knowledge. • Daily Coffee Price (2000-2022)6 contains the daily price of coffee, ranging from 2000 to 2022, where the information is too contemporary and detailed for LLMs’ internal knowledge. • Yelp Business Data7 is a subset of Yelp’s business data across 8 metropolitan areas in the USA and Canada, where the information is too detailed for LLMs’ internal knowledge. • Airbnb Open Data8 is a subset of Airbnb activities in New York, where the information is too detailed for LLMs’ internal knowledge. • DBLP Citation Network (V14)9 constructs the graph based on the records after 2020. The author-author and paper-paper relations are formulated as two separate graphs. • GSM8k10 is a dataset of 8.5K high-quality linguistically diverse grade school math word problems. We sample the questions from the error cases made by ChatGPT on the original dataset to make sure that the questions cannot be easily handled with its internal knowledge. • SciREX11 is a challenging dataset for document-level information extraction based on a collection of full-length machine-learning scientific papers. 5https://www.kaggle.com/datasets/robikscube/flight-delay-dataset-20182022?select= Combined_Flights_2022.csv 6https://www.kaggle.com/datasets/psycon/daily-coffee-price 7https://www.kaggle.com/datasets/yelp-dataset/yelp-dataset?select=yelp_academic_ dataset_business.json 8https://www.kaggle.com/datasets/arianazmoudeh/airbnbopendata 9https://www.aminer.org/citation 10https://github.com/openai/grade-school-math 11https://github.com/allenai/SciREX 14 • Agenda is our own synthetic dataset to model the real-world personal agenda data. To avoid the privacy issue, we first create names, events, and dates with ChatGPT and then randomly compose them to form 10000 different records. To create a pure-text personal agenda corpus, we feed each of the records into ChatGPT, containing generated agenda for virtual characters. More Details can be seen in Appendix B.2. B.2 Generation Details of Agenda Dataset As mentioned in § 3.2, personal or private data serves as a significant external knowledge source. There exist applications that have been designed with plugins and external tools specifically querying this type of data, such as AI personal assistants on daily agenda. Nevertheless, we recognize that this data often intersects with sensitive areas, and hence, privacy concerns are paramount. To address these issues, we automatically synthesize a personal agenda corpus. This not only ensures that the large language models (LLMs) have not been previously exposed to the data but also eliminates any possibility of them inadvertently memorizing the information within their internal knowledge. In the synthetically generated personal agenda corpus, each entry follows the pattern: "NAME performs EVENT at TIME on DATE", incorporating key elements such as names, events, dates, and time slots. To begin, we employ ChatGPT to virtually generate these elements. More precisely, we create 100 unique names, 10000 distinctive events each associated with corresponding time slots within a day, and span all possible dates from 01/01/2022 through 12/31/2022. 
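The assembly of these elements into agenda records is described next; as a rough illustration of that step, the following minimal Python sketch (ours, not the authors' released code) composes a few placeholder elements into records. The name, event, and date pools below are stand-ins for the 100 ChatGPT-generated names and 10,000 generated events mentioned above.

import random

# Illustrative pools; the real corpus uses ChatGPT-generated names and events.
names = ["Stephen", "Maria", "Chao"]
events = [("Opera performance", "7:00 PM", "9:00 PM", "Lyric Opera House"),
          ("Yoga class", "10:30 AM", "11:30 AM", "Yoga Studio Downtown")]
dates = [f"{m:02d}/{d:02d}/2022" for m in range(1, 13) for d in range(1, 29)]

records = set()
for event, start, end, location in events:
    # For every event-time pair, randomly pick a name and a date, as described in B.2.
    name = random.choice(names)
    date = random.choice(dates)
    records.add((name, date, event, start, end, location))

# Each record is later verbalized by ChatGPT with the <Agenda_Gen> prompt in Appendix F.2.
for name, date, event, start, end, location in sorted(records):
    print(f"{name} attends {event} at {location} from {start} to {end} on {date}.")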
Following this, we commence the random assembly of these generated elements to formulate personal agenda entries. For every event- time pair generated, we randomly select from the pool of 100 names and possible dates to construct each record. This process yields a total of 9,494 unique personal agenda entries. To transform this corpus into an accessible external database for model querying, we transcribe each record into a comprehensible natural language description. Prompts designed for agenda data generation are listed in Appendix F.2. C Easy Question Templates C.1 Flights We design the following 10 templates: • What was the departure time of the {CARRIER}{NUMBER} flight from {ORIGIN} to {DEST} on {ORIGIN}? • Was the flight {CARRIER}{NUMBER} from {ORIGIN} to {DEST} cancelled on {ORIGIN}? • What is the flight number of the {AIRLINE} flight from {ORIGIN} to {DEST} on {ORIGIN}? • How long was the different between the CRS-recorded departure time and actual departure time of the {CARRIER}{NUMBER} flight from {ORIGIN} to {DEST} on {ORIGIN}? • How long did {CARRIER}{NUMBER} delay when arrival on {DEST}? • How many extra minutes did the {CARRIER}{NUMBER} flight take from {ORIGIN} to {DEST} on {ORIGIN}? • What was the local arrival time of the {CARRIER}{NUMBER} flight from {ORIGIN} to {DEST} on {ORIGIN}? • What was the CRS-recorded arrival time of the {CARRIER}{NUMBER} flight from {ORIGIN} to {DEST} on {ORIGIN}? • How long was the flight {CARRIER}{NUMBER} from {ORIGIN} to {DEST} on {ORIGIN}? • How many minutes did the {CARRIER}{NUMBER} flight take to taxi in on {DATE}? C.2 Coffee We design the following 8 templates: • What was the daily coffee price opening on {DATE}? • What was the lowest coffee price on {DATE}? • What was the highest coffee price on {DATE}? • What was the daily coffee price closing on {DATE}? • What was the trading volume of coffee on {DATE}? 15 • What was the percentage change in coffee price on {DATE}, based on the difference between the opening and closing prices? • Was {DATE} a bearish or bullish day for coffee price? • What was the range of coffee price on {DATE}, based on the difference between the high and low prices? C.3 Yelp We design the following 11 templates for the Yelp dataset: • What is the address of {NAME} in the area of postal code {POSTAL-CODE}? • What city is {NAME} located in {STATE}? • What state is {NAME} located in? • What is the postal code of {NAME} in the area with postal code {POSTAL-CODE}, {CITY}, {STATE}? • What is the star rating of {NAME} in the area with postal code {POSTAL-CODE}, {CITY}, {STATE}? • How many reviews does {NAME} receive in the area with postal code {POSTAL-CODE}, {CITY}, {STATE}, received? • Is {NAME} still open in the area with postal code {POSTAL-CODE}, {CITY}, {STATE}? • Does {NAME} require appointment in the area with postal code {POSTAL-CODE}, {CITY}, {STATE}? • What are the hours of operation for {NAME} in the area with postal code {POSTAL-CODE}, {CITY}, {STATE}? • What categories does {NAME} belong to, in the area with postal code {POSTAL-CODE}, {CITY}, {STATE}? • What are the coordinates of {NAME} in the area with postal code {POSTAL-CODE}, {CITY}, {STATE}? C.4 Airbnb We design the following 10 templates for easy questions on Airbnb dataset: • What is the host’s name for {NAME} in {NEIGHBOURHOOD}? • How many days are {NAME} (id: {ID}) available during a year (365 days)? • What is the room type of {NAME} (id: {ID}) in {NEIGHBOURHOOD}? • What is the price of {NAME} (id: {ID}) in {NEIGHBOURHOOD}? 
• What is the minimum number of nights for {NAME} (id: {ID}) in {NEIGHBOURHOOD}? • When did {NAME} (id: {ID}) in {NEIGHBOURHOOD} constructed? • How many reviews does {NAME} (id: {ID}) in {NEIGHBOURHOOD} have? • What is the last review date for {NAME} (id: {ID}) in {NEIGHBOURHOOD}? • What is the review rate number for {NAME} (id: {ID}) in {NEIGHBOURHOOD}? • What is the average number of reviews per month for {NAME} (id: {ID}) in {NEIGHBOURHOOD}? C.5 SciREX We design the following 1 templates for easy questions on SciREX dataset: • What is the corresponding {METRIC} score of the {METHOD} method on {DATASET} dataset for {TASK} task? C.6 Agenda We design the following 5 templates for easy questions on Agenda dataset: • What did {NAME} do from {START-TIME} to {END-TIME} on {DATE}? • Where did {EVENT} that {NAME} attended take place on {DATE}? • When did {NAME} attend {EVENT} on {DATE}? • How long did {NAME} attend {EVENT} on {DATE}? • Who attended {EVENT} between {START-TIME} and {END-TIME} on {DATE} in {LOCATION}? 16 C.7 DBLP We design the following 10 templates for easy questions on DBLP dataset: • Who are the authors of {TITLE}? • What organization is {AUTHOR} from? • How many pages is {TITLE}? • How many papers did {TITLE} cite in the DBLP citation network? • How many papers did papers in the DBLP citation network cite {TITLE}? • How many collaborators does {AUTHOR} have in the DBLP citation network? • How many papers did {AUTHOR} and {AUTHOR} write together in the DBLP citation network? • What papers did {AUTHOR} write in the DBLP citation network? • How many papers did {AUTHOR} write in the DBLP citation network? • What venue did {AUTHOR} and {AUTHOR} collaborate most in the DBLP citation network? C.8 GSM8K The questions are randomly sampled from the ChatGPT errors in GSM8K dataset without following some templates. Thus, we cannot offer any question templates for GSM8K. D Hard Question Templates D.1 Flights • What percentage of the flights from {ORIGIN} were delayed on {FLIGHTDATE}? • What is the average delay time of all the flights that departed from {ORIGIN} on {FLIGHTDATE}? • How many flights were diverted on {FLIGHTDATE}? • How many flights with a distance greater than 500 miles on {FLIGHTDATE}? • What is the average airtime of the flights from {ORIGIN} to {DEST} host by {AIRLINE}? • How many flights from {ORIGIN} to {DEST} host by {AIRLINE}? • What is the average flight time of {CARRIER}{NUMBER}? • What is the fastest flight from {ORIGIN} to {DEST} on {FLIGHTDATE}? • What is the average speed of {CARRIER}{NUMBER} from {ORIGIN} to {DEST}? • What is the total number of flights operated by {AIRLINE} on {FLIGHTDATE}? D.2 Coffee • What was the highest coffee price from {START-DATE} to {END-DATE}? • What was the lowest coffee price from {START-DATE} to {END-DATE}? • What was the average coffee price from {START-DATE} to {END-DATE}? • How much did the coffee price change from {START-DATE} to {END-DATE}? • What was the percentage change in coffee price on {DATE} compared to the previous day? • On which date from {START-DATE} to {END-DATE} was the difference between the highest and lowest coffee prices the greatest? • What was the average daily volume of coffee traded from {START-DATE} to {END-DATE}? • On which date from {START-DATE} to {END-DATE} did the coffee price have the highest increase compared to the previous day? • How many times from {START-DATE} to {END-DATE} did the coffee price increase compared to the previous day? 
• What was the percentage increase in coffee price from {START-DATE} to {END-DATE}? • What was the coffee price range from {START-DATE} to {END-DATE}? 17 D.3 Yelp We design the following 10 templates for hard questions in Yelp Dataset. • How many {CATEGORY} businesses are there in {CITY}, {STATE}? • How many businesses are there in {POSTALCODE} area of {CITY}, {STATE}? • Which {CATEGORY} business has the highest star rating in {CITY}, {STATE}? • Which {CATEGORY} business has the highest review count in {CITY}, {STATE}?" • What is the average review counts of businesses within a 5-mile radius from {NAME}? • Which is the nearest {CATEGORY} business to {NAME}? • Can you recommend a {CATEGORY} business with the highest star rating within a 5-mile radius of {ADDRESS}? • How many businesses are not open currently in {CITY}? • What is the average star rating of {CATEGORY} businesses in {CITY}? • Which region has most bussinesses in {CITY}, {STATE}? D.4 Airbnb We design the following 10 templates for hard questions on Airbnb dataset. • What is the total price at least if you want to stay at {NAME} in {NEIGHBOURHOOD} for {NUMBER} nights? • How many airbnbs are there in {NEIGHBOURHOOD}? • What is the average price of airbnbs in {NEIGHBOURHOOD}? • What is the average review rates within 5 miles from {NAME} in {NEIGHBOURHOOD}? • How much proporion of airbnbs in {NEIGHBOURHOOD} have a flexible cancellation policy? • How much does it cost per night to stay at the most expensive entire home/apt in {NEIGHBOURHOOD}? • How many airbnbs are there in {NEIGHBOURHOOD} that have a review rate higher than 4? • Can you recommend me a hotel room with the lowest price in {NEIGHBOURHOOD}? • Can you recommend me a private room with the highest review rate that can host at least 2 people in {NEIGHBOURHOOD}? • Can you recommend a shared room with the lowest price within 10 miles from {LONGITUDE} longitude and {LATITUDE} latitude? D.5 SciREX We design the following 4 templates for hard questions on SciREX dataset: • What is the corresponding {METRIC} score of the {METHOD} method on {DATASET} dataset for {TASK} task? • On which dataset does the {METHOD} method achieve the highest {METRIC} score for {TASK} task? • Which method achieves the highest {METRIC} score on {DATASET} dataset for {TASK} task? • On what metrics is the {METHOD} method evaluated on {DATASET} dataset for {TASK} task? • Which datasets is {METHOD} method evaluated on for {TASK} task? D.6 Agenda We design the following 5 templates for hard questions on Agenda dataset: • How many events happen on {DATE} in the agenda table? • Who is unavailable between {START-TIME} and {END-TIME} on {DATE} in the agenda table? • When should I schedule a meeting with {NAME} from 9:00 AM to 6:00 PM on {DATE} in the agenda table? • What events does {NAME} have on {DATE} in the agenda table? • How many dates in the agenda table have {NAME} scheduled? 18 D.7 DBLP We design the following 10 templates for hard questions on DBLP dataset: • What keywords does {AUTHOR} focus on most in the DBLP citation network? • How many people does {AUTHOR-1} need to know at least to know {AUTHOR-2} in the DBLP citation network? • How many common collaborators does {AUTHOR-1} have with {AUTHOR-2}? • Which is the most cited paper written by {AUTHOR} in the DBLP citation network? • Which collaborator does {AUTHOR} have the most citations with in the DBLP citation net- work? • Which venue does {AUTHOR} publish the most papers in the DBLP citation network? 
• How many accumulated citations do papers collaborated by {AUTHOR-1} and {AUTHOR-2} have in the DBLP citation network?
• How many papers in all do {AUTHOR} and his/her collaborators have in the DBLP citation network?
• Who collaborated with {AUTHOR} most in the DBLP citation network?
• What institutions participated in the study of {TITLE} in the DBLP citation network?

E Code Examples of Programmatic Answer Generation

Below is an example of programmatic answer generation. The example code answers the question "What percentage of the flights from {ORIGIN} were delayed on {FLIGHTDATE}?". More details of the programmatic answers can be seen in the public code.

def solution(data, flightdate, origin):
    # Count all flights that departed from the origin airport on the given date.
    num_total = len(data.loc[(data["FlightDate"] == flightdate) & (data["Origin"] == origin)])
    # Count the flights among them that are marked as cancelled.
    num_cancelled = len(data.loc[(data["FlightDate"] == flightdate) & (data["Origin"] == origin) & (data["Cancelled"] == True)])
    if num_cancelled > 0:
        question = "What percentage of the flights from {} were delayed on {}?".format(origin, flightdate)
        answer = "{:.1f}".format(num_cancelled / num_total * 100) + "%"

F Additional Implementation Details

F.1 Implementation Details

All experiments are conducted on CPU: Intel(R) Core(TM) i7-5930K CPU @ 3.50GHz and GPU: NVIDIA GeForce RTX A5000 GPUs, using Python 3.8, Huggingface 4.6.0, and PyTorch 1.10. We keep the parameters top_p = 1.0 and temperature t = 1.0 for calling ChatGPT APIs [37] for the question generation part.

F.2 Prompts

F.2.1 Prompts for Agenda Data Generation

The prompt used for virtual name generation:

<Agenda_Name_Gen> Prompt
You are an AI assistant to answer questions. Can you list 100 English Names?

The prompt used for virtual event generation:

<Agenda_Events_Gen> Prompt
You are an AI assistant for text generation. Generate 100 detailed agenda events, including the event, start time, end time, and location. Please make the events as diverse as possible and make sure these events can happen in real life. Make sure the location is a detailed name that may exist in real life. Make sure the dates are selected from 2022/01/01 to 2023/01/01.
Example:
Doctor's appointment - 9:00 AM - 11:00 AM - ABC Medical Center
Yoga class - 10:30 AM - 11:30 AM - Yoga Studio Downtown
Generate 100 more detailed agendas that do not conflict with the previous ones.

The prompt used to convert the agenda records into natural language descriptions:

<Agenda_Gen> Prompt
Please use natural language to describe the event in the agenda with the following information:
Name: NAME
Date: DATE
Event: EVENT
Start Time: START-TIME
End Time: END-TIME
Location: LOCATION

F.2.2 Prompts for Methods

The prompts used in ReAct [66]:

<ReAct> Prompt
Question: How many extra minutes did the DL1575 flight take from ATL to MCO on 2022-01-12?
Thought 1: This is a question related to flights. We need to load the flights database.
Action 1: LoadDB[flights] Observation 1: We have successfully loaded the flights database, including the following columns: FlightDate, Airline, Origin, Dest, Cancelled, Diverted, CRSDepTime, DepTime, DepDelayMinutes, DepDelay, ArrTime, ArrDelayMinutes, AirTime, CRSElapsedTime, ActualElapsedTime, Distance, Year, Quarter, Month, DayOfWeek, Marketing_Airline_Network, Operated_or_Branded_Code_Share_Partners, DayofMonth, DOT_ID_Marketing_Airline, IATA_Code_Marketing_Airline, Flight_Number_Marketing_Airline, Operating_Airline, DOT_ID_Operating_Airline, IATA_Code_Operating_Airline, Tail_Number, Flight_Number_Operating_Airline, OriginAirportID, OriginAirportSeqID, OriginCityMarketID, OriginCityName, OriginState, OriginStateFips, OriginStateName, OriginWac, DestAirportID, DestAirportSeqID, DestCityMarketID, DestCityName, DestState, DestStateFips, DestStateName, DestWac, DepDel15, DepartureDelayGroups, DepTimeBlk, TaxiOut, WheelsOff, WheelsOn, TaxiIn, CRSArrTime, ArrDelay, ArrDel15, ArrivalDelayGroups, ArrTimeBlk, DistanceGroup, DivAirportLandings. Thought 2: We need to filter the information related to the query. Action 2: FilterDB[Flight_Number_Marketing_Airline=1575, FlightDate=2022-01-12, Origin=ATL, Dest=MCO] Observation 2: We have successfully filtered the data (1 row). Thought 3: We then need to know the departure delayed time. Action 3: GetValue[DepDelay] Observation 3: -7.0 Thought 4: We then need to know the arrival delayed time. Action 4: GetValue[ArrDelay] Observation 4: -17.0 Thought 5: To compute the extra minutes, we need to subtract the departure delayed time from the arrival delayed time. Action 5: Calculate[(-17)-(-7)] Observation 5: -10 20 Thought 6: After calculation, we know that the answer is -10. Action 6: Finish[-10] Question: Michael had 58 golf balls. On tuesday, he lost 23 golf balls. On wednesday, he lost 2 more. How many golf balls did he have at the end of wednesday? Thought 1: Maybe I can write a python program to solve this problem. Action 1: PythonInterpreter[# solution in Python: def solution(): # Michael had 58 golf balls. On tuesday, he lost 23 golf balls. On wednesday, # he lost 2 more. How many golf balls did he have at the end of wednesday? golf_balls_initial = 58 golf_balls_lost_tuesday = 23 golf_balls_lost_wednesday = 2 golf_balls_left = golf_balls_initial - golf_balls_lost_tuesday - \ golf_balls_lost_wednesday result = golf_balls_left return result] Observation 1: 33 Thought 2: After execution of Python Code, we know that the answer is 33. Action 2: Finish[33] Question: What is the corresponding Mean_IoU score of the FRRN method on Cityscapes dataset for Semantic_Segmentation task? Thought 1: The question is asking some performance information about Mean_IoU score of the FRRN method on Cityscapes dataset for Semantic_Segmentation task, we need to retrieve some useful information from the scirex database. Action 1: RetrieveScirex[Mean_IoU score of the FRRN method on Cityscapes dataset for Semantic_Segmentation task] Observation 1: The other stream undergoes a sequence of pooling operations to obtain robust features for recognition . The two streams are coupled at the full image resolution using residuals . Without additional processing steps and without pretraining , our approach achieves an intersection - over - union score of 71.8 % on the Cityscapes dataset . Thought 2: As the approach achieves an intersection - over - union score of 71.8 % on the Cityscapes dataset, the answer should be 71.8%. 
Action 2: Finish[71.8%] Question: When was the paper Learning the Principle of Least Action with Reinforcement Learning. published? Thought 1: The question is asking some basic information of a dblp paper. We need to load the DBLP graph. Action 1: LoadGraph[dblp] Observation 1: DBLP graph is loaded. Thought 2: The question is asking the published date of a paper, we need to check the node from the PaperNet in DBLP graph. Action 2: NodeCheck[PaperNet, Learning the Principle of Least Action with Reinforcement Learning.] Observation 2: {’title’: ’Learning the Principle of Least Action with Reinforcement Learning.’, ’year’: 2021, ’venue’: {’raw’: ’AAAI Spring Symposium - MLPS’}, ’n_citation’: 0, ’keywords’: [], ’doc_type’: ’Conference’, ’page_start’: ’’, ’page_end’: ’’} Thought 3: The published date of the paper is 2021. Action 3: Finish[2021] Question: How many collaborators does Chao Zhang have in the DBLP graph? Thought 1: The question is asking the number of collaborators of a person, we need load the DBLP graph. Action 1: LoadGraph[dblp] Observation 1: DBLP graph is loaded. Thought 2: The question is asking the collaborators of a person, we need to check the neighboring nodes from the AuthorNet in DBLP graph. Action 2: NeighbourCheck[AuthorNet, Chao Zhang] Observation 2: [’YUHUI YUAN’, ’Rao Fu’, ’Lang Huang’, ’Weihong Lin’, ’X Chen’, ’Jingdong Wang’] 21 Thought 3: The number of collaborators of Chao Zhang is 6. Action 3: Finish[6] Question: How many papers does Chao Zhang and Weihong Lin have in common in the DBLP graph? Thought 1: The question is asking the number of common papers of two persons, we need load the DBLP graph. Action 1: LoadGraph[dblp] Observation 1: DBLP graph is loaded. Thought 2: The question is asking the common papers of two persons, we need to check the edges between them from the PaperNet in DBLP graph. Action 2: EdgeCheck[PaperNet, Chao Zhang, Weihong Lin] Observation 2: {’weight’: 1, ’papers’: [’HRFormer: High-Resolution Vision Transformer for Dense Predict.’], ’n_citation’: [95]} Thought 3: The number of common papers of Chao Zhang and Weihong Lin is 1. Question: Where did Stephen’s Opera performance take place? Thought 1: The question is asking the location of Stephen’s Opera performance from agenda. Action 1: RetrieveAgenda[Stephen’s Opera performance] Observation 1: On January 29, 2022, there will be an opera performance at the Lyric Opera House, featuring Stephen. The show will start at 7:00 PM and end at 9:00 PM. It promises to be a wonderful evening of beautiful music and powerful performances in a stunning venue. Come and experience the magic of opera at its finest! Thought 2: The event happened in Lyric Opera. Action 2: Finish[Lyric Opera] Question: What was the trading volume of coffee on 2000-01-14? Thought 1: Maybe I can write a SQL query to solve this problem. Action 1: SQLInterpreter(SELECT Volume FROM coffee.coffee_data WHERE Date = ’2000-01-14’; Observation 1: Volume: 10115 Thought 2: The volume of coffee on 2000-01-14 is 10115. Action 2: Finish[10115] The prompts used in Chameleon [28]: <Chameleon> Prompt You need to act as a policy model, that given a question and a modular set, determines the sequence of modules that can be executed sequentially can solve the question. The modules are defined as follows: - Calculate[formula]: This module calculates a given formula and returns the result. It takes in a mathematical formula and returns the calculated result. 
Normally, we only consider using "Calculate" when the question involves mathematical computations. - RetrieveAgenda[keyword]: This module retrieves an agenda related to a specific keyword and returns it. It takes in a keyword and returns the corresponding agenda. Normally, we only consider using "RetrieveAgenda" when the question is about specific actions or tasks related to a topic. - RetrieveScirex[keyword]: This module retrieves paragraphs from machine learning papers related to the specified keyword and returns them. It takes in a keyword and returns the relevant paragraphs. Normally, we only consider using "RetrieveScirex" when the question involves understanding specific concepts in machine learning. - LoadDB[DBName]: This module loads a database specified by the database name and returns the loaded database. It takes in a database name and returns the corresponding database. The DBName can be one of the following: flights/ coffee/airbnb/yelp. Normally, we only consider using "LoadDB" when the 22 question requires data from a specific structured dataset. - FilterDB[column_name, relation, value]: This module filters a database by a specified column name, relation, and value, and then returns the filtered database. It takes in a column name, a relation, and a value, and returns the filtered database. Normally, we only consider using "FilterDB" when the question requires a specific subset of data from a structured dataset. - GetValue[column_name]: This module returns the value of a specified column in a database. It takes in a column name and returns its value. Normally, we only consider using "GetValue" when the question requires a specific piece of data from a structured dataset. - LoadGraph[GraphName]: This module loads a graph specified by the graph name and returns the loaded graph. It takes in a graph name and returns the corresponding graph. Normally, we only consider using "LoadGraph" when the question involves understanding or navigating specific graph structures. - NeighbourCheck[GraphName, Node]: This module lists the neighbors of a specified node in a graph and returns the neighbors. It takes in a graph name and a node, and returns the node’s neighbors. Normally, we only consider using "NeighbourCheck" when the question involves understanding relationships in a graph structure. - NodeCheck[GraphName, Node]: This module returns the detailed attribute information of a specified node in a graph. It takes in a graph name and a node, and returns the node’s attributes. Normally, we only consider using "NodeCheck" when the question requires information about a specific entity in a graph. - EdgeCheck[GraphName, Node1, Node2]: This module returns the detailed attribute information of the edge between two specified nodes in a graph. It takes in a graph name and two nodes, and returns the attributes of the edge between them. Normally, we only consider using "EdgeCheck" when the question involves understanding the relationship between two entities in a graph. - SQLInterpreter[SQL]: This module interprets a SQL query and returns the result. It takes in a SQL query and returns the result of the query. Normally, we only consider using "SQLInterpreter" when the question requires data manipulation and extraction from a structured dataset. - PythonInterpreter[Python]: This module interprets Python code and returns the result. It takes in Python code and returns the result of the code execution. 
Normally, we only consider using "PythonInterpreter" when the question requires complex computations or custom data manipulation. - Finish[answer]: This module returns the final answer and finishes the task. This module is the final module in the sequence that encapsulates the result of all previous modules. Below are some examples that map the problem to the modules. Question: How many extra minutes did the DL1575 flight take from ATL to MCO on 2022-01-12? Modules: ["LoadDB[flights]", "FilterDB[Flight_Number_Marketing_Airline=1575, FlightDate=2022-01-12, Origin=ATL, Dest=MCO]", "GetValue[DepDelay]", "GetValue[ArrDelay]", "Calculate[(-17)-(-7)]", "Finish[-10]"] Question: Michael had 58 golf balls. On tuesday, he lost 23 golf balls. On wednesday, he lost 2 more. How many golf balls did he have at the end of wednesday? Modules: ["PythonInterpreter[# solution in Python:\n\ndef solution():\n # Michael had 58 golf balls. On tuesday, he lost 23 golf balls. On wednesday, he lost 2 more. How many golf balls did he have at the end of wednesday?\n 23 golf_balls_initial = 58\n golf_balls_lost_tuesday = 23\n golf_balls_lost_wednesday = 2\n golf_balls_left = golf_balls_initial - golf_balls_lost_tuesday - golf_balls_lost_wednesday\n result = golf_balls_left\n return result]", "Finish[33]"] Question: What is the corresponding Mean_IoU score of the FRRN method on Cityscapes dataset for Semantic_Segmentation task? Modules: ["ScirexRetrieve[Mean_IoU score of the FRRN method on Cityscapes dataset for Semantic_Segmentation task]", "Finish[71.8%]"] Question: When was the paper Learning the Principle of Least Action with Reinforcement Learning. published? Modules: ["LoadGraph[dblp]", "NodeCheck[PaperNet, Learning the Principle of Least Action with Reinforcement Learning.]", "Finish[2021]"] Question: How many collaborators does Chao Zhang have in the DBLP graph? Modules: ["LoadGraph[dblp]", "NeighbourCheck[AuthorNet, Chao Zhang]", "Finish[6]"] Question: How many papers does Chao Zhang and Weihong Lin have in common in the DBLP graph? Modules: ["LoadGraph[dblp]", "EdgeCheck[PaperNet, Chao Zhang, Weihong Lin]", "Finish[1]"] Question: Where did Stephen’s Opera performance take place? Modules: ["AgendaRetrieve[Stephen’s Opera performance]", "Finish[Lyric Opera]"] Question: What was the trading volume of coffee on 2000-01-14? Modules: ["SQLInterpreter[SELECT Volume FROM coffee.coffee_data WHERE Date = ’2000-01-14’]", "Finish[10115]"] Now, you need to act as a policy model, that given a question and a modular set, determines the sequence of modules that can be executed sequentially can solve the question. G Key Information of ToolQA G.1 Dataset Documentations The dataset is provided in jsonl format. Each task corresponds to two files: easy and hard (e.g., “flight-easy.jsonl” and “flight-hard.jsonl”, etc.). Each data point contains the following fields: • qid: the unique identifier for the question-answer pair; • question: the question to query; • answer: the corresponding ground-truth answer to question. G.2 Intended Uses ToolQA is intended for researchers in machine learning and related fields to innovate novel methods for tool-augmented large language models (LLMs). We also aim to help developers to test their plugins on our dataset. G.3 Hosting and Maintenance Plan ToolQA codebase is hosted and version-tracked via GitHub. It will be permanently available under the link https://github.com/night-chen/ToolQA. The download link of all the datasets can be found in the GitHub repository. 
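For readers who want to inspect the released files, the following minimal sketch (ours, not part of the benchmark code) reads one task file in the jsonl layout described in Appendix G.1; the directory prefix in the usage comment is an assumption, while the file name follows the naming pattern given above.

import json

def load_toolqa(path):
    # Each non-empty line holds one record with the fields listed in G.1:
    # qid, question, and the ground-truth answer.
    records = []
    with open(path, "r", encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line:
                records.append(json.loads(line))
    return records

# Example usage (directory assumed for illustration):
# questions = load_toolqa("data/flight-easy.jsonl")
# print(questions[0]["qid"], questions[0]["question"], questions[0]["answer"])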
ToolQA is a community-driven and open-source initiative. We are committed, and have the resources, to maintain and actively develop ToolQA in the future. We plan to grow ToolQA to include more tasks, tools, and baseline methods. We welcome external contributors.

G.4 Licensing

We license our work under Apache 2.0 (https://www.apache.org/licenses/LICENSE-2.0). All the datasets will be publicly released through the aforementioned GitHub link.

G.5 Limitation

Tool-augmented LLMs are a popular and rapidly developing research direction that many researchers are focusing on. ToolQA will keep developing and will include more tasks, data, tools, and methods in the future.
synthetic_cpt
2
Simplifying_CLIP_Unleashing_the_Power_of_Large-Scale_Models_on_Consumer-level_Computers.pdf
Nonlinear Multi-Carrier System with Signal Clipping: Measurement, Analysis, and Optimization

Yuyang Du, Graduate Student Member, IEEE, Liang Hao, Member, IEEE, Yiming Lei, Member, IEEE, Qun Yang, Student Member, IEEE, Shiqi Xu, Student Member, IEEE

Y. Du and Y. Lei are with the School of Electronics, Peking University, Beijing, China. H. Liang is with 2012 Laboratory, Huawei Technologies Co., Ltd, Beijing, China. Q. Yang is with the Department of Information Engineering, The Chinese University of Hong Kong, Shatian, Hong Kong SAR. S. Xu is with the School of Electronics and Information Technology, Sun Yat-sen University, Guangdong, China. An early version of this paper [1] has been presented at IEEE VTC2023.

Abstract—Signal clipping is a well-established method employed in orthogonal frequency division multiplexing (OFDM) systems to mitigate the peak-to-average power ratio (PAPR). The technique is widespread in electronic devices with limited power or resource capabilities due to its high efficiency and low complexity. While clipping effectively diminishes nonlinear distortion stemming from power amplifiers (PAs), it introduces additional distortion known as clipping distortion. The optimization of system performance, considering both clipping distortion and the nonlinearity of PAs, remains an unresolved challenge due to the intricate modeling of PAs. In this paper, we undertake an analysis of PA nonlinearity utilizing the Bessel-Fourier PA (BFPA) model and simplify its power expression through inter-modulation product (IMP) analysis. We mathematically derive expressions for the receiver signal-to-noise ratio (SNR) and the system symbol error rate (SER) of nonlinear clipped OFDM systems. By means of these derivations, we explore the optimal system configuration required to achieve the lower bound of SER in practical OFDM systems, taking into account both PA nonlinearity and clipping distortion. The results and methodologies presented in this paper contribute to an improved comprehension of system-level optimization in nonlinear OFDM systems employing clipping technology.

Index Terms—Signal Clipping, OFDM, Nonlinear Distortion, Power Amplifier

I. INTRODUCTION

Power amplifiers (PAs) exhibit nonlinear characteristics when faced with high-power input signals, which is commonly referred to as PA nonlinearity [1]–[8]. The rapid progress of multiple-input and multiple-output (MIMO) technology, coupled with the growing number of subcarriers, has resulted in a significant rise in the peak-to-average power ratio (PAPR) of orthogonal frequency division multiplexing (OFDM) devices. This increase in PAPR poses a major challenge in practical OFDM systems, as it can lead to reduced power efficiency, degraded system performance, and potential distortion in the transmitted signals. The PA's nonlinearity has become a vital concern, as it may cause significant performance degradation when faced with high PAPR [9], [10]. To ensure system linearity, designers are compelled to reduce the input power of the amplifier, albeit at the cost of energy efficiency [11]. Reducing PAPR, as a result, has emerged as a critical problem in OFDM systems and has attracted significant research attention.

Several established PAPR reduction algorithms [12]–[15] have been extensively studied. However, these conventional al-
gorithms, such as partial transmit sequence (PTS) and selected mapping (SLM), involve complex signal processing, making them unsuitable for resource-limited or battery-driven devices, which is common in sensors and Internet of Things networks. Signal clipping offers a less complex PAPR reduction approach [16]–[18]. In OFDM systems, clipping is realized in the time-domain signals to limit their magnitude to a predetermined threshold. This approach effectively limits the PAPR of OFDM signals within an upper bound with relatively small computation requirements and is therefore very popular in resource-constrained OFDM devices [19]. However, as a nonlinear operation, signal clipping introduces additional frequency components that never appear in the original signal, bringing both out-of-band radiation and in-band distortion to the original signal. Although radiation spilling beyond the target bandwidth can be restricted by filtering, how to deal with the in-band distortion is still a challenging problem. Prior research has made efforts to investigate and mitigate the in-band distortion caused by signal clipping. In [20], a novel Bayesian approach was implemented to recover the clipped signals of an OFDM system, aiming to minimize the impact of in-band distortion at the receiver side. Another technique, proposed in a separate publication [21], is known as the repeated clipping and filtering (RCF) method. The RCF technique focuses on reducing distortion on individual tones of the OFDM signal. Researchers have also explored optimization techniques to enhance the performance of signal clipping for PAPR reduction in OFDM systems. In a couple of studies [22], [23], the authors optimized both the clipping and filtering stages to achieve improved PAPR reduction while simulta- neously mitigating in-band distortion. Alternative approaches have been investigated to tackle the computational complexity associated with clipped OFDM systems. For example, [24] applies convolutional neural networks (CNN) to reduce com- putational complexity while maintaining PAPR reduction per- formance. Additionally, compressed sensing techniques were employed in another study [25] to recover the signal clipping noise in clipped OFDM signals. However, the joint impact of PA nonlinearity and clipping distortion has received little attention in previous studies. Intuitively, the choice of clipping level introduces a trade- off between PA nonlinearity and clipping noise: a higher clipping level reduces PA nonlinearity but increases clipping noise, while lower clipping levels decrease clipping noise but amplify nonlinear distortion. A previous research [26] tried to understand the problem via simulations. However, the complex settings in real-world engineering scenarios necessitate a the- oretical analysis of the trade-off between these two distortion sources. Otherwise, with simulations only, we are unable to comprehensively understand the problem, let alone optimize the system setting to achieve the best performance. In light of this, the primary objective of this paper is to thoroughly investigate the trade-off between clipping distortion and PA nonlinearity in practical OFDM systems and make optimizations accordingly. By addressing this trade-off, we aim to enhance the overall system performance and mitigate the adverse effects of both clipping distortion and PA nonlin- earity. 
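To make this trade-off concrete before the formal analysis, the following minimal numpy sketch (our illustration, not part of the paper) clips a randomly generated OFDM symbol at two different levels and reports the resulting PAPR and relative clipping-noise power; the 64-subcarrier QPSK setup and the clipping ratios are assumptions chosen only for demonstration.

import numpy as np

rng = np.random.default_rng(0)
N = 64  # assumed number of subcarriers

# Random QPSK symbols on each subcarrier, then an IFFT gives the time-domain OFDM symbol.
X = (rng.choice([-1, 1], N) + 1j * rng.choice([-1, 1], N)) / np.sqrt(2)
x = np.fft.ifft(X) * np.sqrt(N)  # unit average power

def clip(x, eta):
    # Clip the magnitude at eta times the RMS amplitude while keeping the phase
    # (the clipping rule analyzed later in Section II).
    a = eta * np.sqrt(np.mean(np.abs(x) ** 2))
    return np.where(np.abs(x) > a, a * np.exp(1j * np.angle(x)), x)

for eta in (1.2, 2.0):  # assumed clipping ratios
    y = clip(x, eta)
    papr = 10 * np.log10(np.max(np.abs(y) ** 2) / np.mean(np.abs(y) ** 2))
    d_clip = np.mean(np.abs(y - x) ** 2) / np.mean(np.abs(x) ** 2)
    print(f"eta={eta}: PAPR={papr:.2f} dB, relative clipping-noise power={d_clip:.4f}")

Running this shows the expected behavior: the smaller clipping ratio yields a lower PAPR but a larger clipping-noise power, which is exactly the tension that the remainder of the paper quantifies analytically.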
The contributions of this research can be summarized as follows: • To simplify the representation of PA nonlinearity, we utilize the Bessel-Fourier PA (BFPA) model. Through an analysis of the inter-modulation product (IMP) in the PA’s output, we derive the signal-to-noise ratio (SNR) for the studied system, where both nonlinear PAs and signal clipping are taken into consideration. • Our optimization problem is approached from three dis- tinct scenarios. In the first scenario, we consider a known power level for the PA’s nonlinear distortion. By deriving the optimal clipping level, we aim to minimize the symbol error rate (SER). Moving on to the second scenario, we assume the power of clipping distortion is known. In this case, we derive the optimal operating point for the PA that minimizes the SER. Lastly, we address the scenario where both PA nonlinearity and clipping distortion are unknown variables. We prove the existence of a global minimum SER and derive the optimal signal clipping level and PA operating point. Besides, we obtain a closed- form expression for the global SER lower bound, which enhances our understanding of the system’s performance. • We thoroughly examine the impact of various system parameters on both the SER and total degradation (TD) in the presence of signal clipping noise and PA nonlinearity. Our investigation offers a comprehensive analysis of how these system parameters contribute to the overall performance degradation. This paper is organized as follows. Section II presents the framework of the studied nonlinear clipped OFDM system. Section III-A builds the BFPA model, Section III-B analyzes the PA nonlinear distortion, and Section III-C derives the closed-form expression of SER. Then in Section IV, SER optimizations with different constraints are presented in detail. Section V gives simulation results and explains our experimen- tal observations. Section VI concludes this paper. This paper uses the following notation conventions. we letters, e.g., M. And we denote matrices by bold capital denote the element of a matrix and the conjugate of the element by [M]i, j and [M] i, j∗, respectively. We represent the Frobenius norm of a matrix by |H|F . We denote the expectation operator and the variance operator by E(·) and V ar(·), respectively. Furthermore, CN (m, σ2) represents a complex Gaussian random variable with a mean of m and a variance of σ2. 2 Additionally, the system we considered is comprised of NR receive antennas and NT transmit antennas. We then represent the source bit stream by vector U and denote the output of the M -order Quadrature Amplitude Modulation (M-QAM) modu- lator by matrix M. Further, the modulated symbol transmitted through the tth transmitter antenna (where t = 1, 2, ..., NT ) during the sth time slot (where s = 1, 2, ..., NS) is denoted as [M]t,s. Figure 1. The framework of the investigated OFDM system incorporates both a nonlinear PA and signal clipping. We make an assumption that the inverse fast Fourier trans- form (IFFT) module has a length NS, which is equal to the number of subcarriers. Consequently, the signal obtained after the OFDM modulator can be expressed as C = ME+ (1) where E+ represents the IFFT operation within the OFDM modulation. The (k, s) element within the matrix can be written as [27] [E]+ k,s = ej2π(s−1)(k−1)/NS We define the system input power as EU , which represents the system operating point. Additionally, we denote the input power of the signal clipping blocks as EC. 
Furthermore, at the tth transmit antenna, the sth post-IFFT symbol transmitted is denoted by [C]t,s. The expression for EU can be written as (2) NT NS(cid:80) x=1 EU = E([U]x [U]∗ x) NS NT(cid:80) t=1 NS(cid:80) s=1 = E([C]t,s [C]∗ t,s) NS (3) We then express the output signal of the clipping module as [S]t,s = (cid:40) [C]t,s , when [C]t,s ≤ η (cid:112) η EU ej∠[C]t,s, otherwise (cid:112) EU (4) where the phase of the transmitted symbol is denoted by ∠(.), and the signal clipping level is represented by η. From (4), we see that the output power of the clipped signal is upper- bounded by η2EU . Building upon Equation (4), the scaling factor β for signal II. SYSTEM MODEL power is written as, We first present the general framework of the studied system in Fig. 1. The number of subcarriers is denoted as NS. β = η (cid:90) ∞ η e−t2 dt + 1 − e−η2 (5) StreamSpatialStreamParserM-QAMModulatorM-QAMModulatorM-QAMModulatorOFDMModulatorOFDMModulatorOFDMModulatorSignalClippingSignalClippingSignalClippingPAPAPAOFDMDemodulationOFDMDemodulationOFDMDemodulationSpatialStreamDeparserM-QAMDemodulatorBit StreamUMCSTSRSRMRUHRayleighFadingChannelBit We then write the signal matrix after clipping as A. PA modeling and measurements 3 Numerous prior studies have extensively explored analytical models for nonlinear PAs in the literature. The power spectrum of PA’s nonlinear distortion has been investigated in prior works like [29]. These works also established the relationship between the PA operating point and the signal SNR at the receiver antennas. However, they often involve complex cal- culations, including integrals and partial differentials, which hinder their practical applicability for extensive analysis of nonlinear PA systems. Some works also develop simple PA behavior models with consideration of memory effect [30]– [32]. However, the primary focus of this paper is to understand the impact of PA nonlinearity and clipping distortion. We are not interested in considering the memory effect, which brings additional computation complexity. This paper utilizes a simple memoryless PA model known as the ”BFPA model” to capture the PA nonlinearity. The concept of BFPA was developed in [33], comprehensively analyzed in [34], and then became widely used in later work like [9], [27], [28], [35], [36]. With the BFPA model, we focus on the analysis of the amplitude modulation to amplitude modulation (AM/AM) characteristic of the PA. By investigating this characteristic, we aim to understand the nonlinear behavior exhibited by the PA and its impact on system performance. The equivalent PA model is derived based on memoryless characteristics extracted from extensive laboratory measurements of the input-output behaviors of a commercially available PA. Fig. 2 presents the experimental setup employed for the PA measurement in this research. S = Dclip + βC (6) where Dclip denotes the signal clipping distortion, and we have Dclip ∼ G (0, DkEU ). We write the coefficient of clipping distortion power as Dk = 1 J (cid:16) F F T {S}k − β2/ 1 − e−η2(cid:17) (7) where the over-sampling factor is written as J, and the kth output of the JN -point fast Fourier transform (FFT) is denoted by F F T {}k. The power of the signal clipping module’s output can be expressed as ES = 1 NS NT(cid:88) NS(cid:88) t=1 s=1 (cid:16) E [S]t,s [S]∗ t,s (cid:17) = DkEU + β2EU (8) As in Fig. 1, we represent the input of nonlinear PAs by S, which is the clipped output of prior modules. 
And we represent the output symbol of PA by ST , which is expressed as ST = αS + Dnon = αβC + αDclip + Dnon (9) where Dnon represents the nonlinear distortion of amplifiers and α denotes the PA’s linear gain factor. Radio-frequency signals received at the receiver antenna can be written as SR = HST + Dch (10) where the channel noise is represented by Dch, and the uncor- related elements of Dch can be written as CN (0, σ2 ch). Further, the quasi-static Rayleigh channel applied in the system is denoted by H. The output signal of the FFT module (i.e., the OFDM demodulator) is expressed as MR = SRE− = (αHS + HDnon + Dch)E− (11) where the sth post-FFT symbol obtained through the rth (r = 1, 2, ..., NR) receiver antenna is denoted by [MR]r,s, and we know (cid:2)E−(cid:3) k,s = N −1 S e−j2π(s−1)(k−1)/NS (12) As in [28], we model the PA’s nonlinear in-band distortion DnonE− as a Gaussian signal. The output of the M-QAM demodulator can be written as UR = W + αβHU (13) where the sum of system impairments is denoted by W, and the output symbol of the source bit stream is written as U. We write W as W = (αHDclip + HDnon + Dch)E− (14) III. BFPA MODEL AND SNE/SER ANALYSIS In subsection A, we present how we measure a PA chip and give the measurement results for building the PA model. And then in subsection B, we use the BA model to simplify the SNR analysis. Finally, we derive the SER performance in subsection C. Figure 2. Experimental setup for the PA measurement. We now explain our measurement as follows. We first generate a 2.4GHz single-tone signal with the signal generator. We then transmit the signal via the test PA and measure the PA’s output with a spectrum analyzer. We keep testing different signal power and recording the PA’s output power. Finally, we plot the scatter chart as in Fig. 3 and fit the AM-AM curve for the tested PA. For better illustration, we also present the ideal AM-AM curve for the tested PA by assuming that the PA is a linear one. After the measurement, we use the fitted AM-AM curve to build the BFPA model as in [34]. Given a P -order BFPA model, let us denote the pth order coefficients by bp and denote the dynamic range of the model by Pmod. 4 where Jgm denote the gth m order Bessel function of the first kind, and bp denotes the coefficients of a P -order BFPA model. Furthermore, Vp can be written as 2pπ Vp = Pmod (cid:112)β2EU /NT NR (19) where Pmod is the dynamic range of the P -order BFPA. Further considering the range of the Bessel kernel and assuming δ = 1, we have the power of a first-order IMP (i.e., the wanted signal) as Ω (cid:0)1, β2EU (cid:1) = β2EU NT NS (cid:12) (cid:12) (cid:12) (cid:12) (cid:12) P (cid:88) p=1 bp pπ Pmod 2 (cid:12) (cid:12) (cid:12) (cid:12) (cid:12) = a1β2EU NT NS where a1 = (cid:12) (cid:12) (cid:12) (cid:12) (cid:12) P (cid:88) p=1 (cid:18) bppπ Pmod 2 (cid:19)(cid:12) (cid:12) (cid:12) (cid:12) (cid:12) (20) (21) Given that the domination of PA’s nonlinear distortion comes from the third-order IMPs, we have (cid:88) Ω (cid:0)η, β2EU (cid:1) ≈ Ω (cid:0)3, β2EU (cid:1) δ=3,5... 
= (cid:18) β2EU NT NS (cid:19)3(cid:12) (cid:12) (cid:12) (cid:12) (cid:12) (cid:12) P (cid:88) p=1 bp (cid:18) pπ Pmod (cid:19)3(cid:12) 2 (cid:12) (cid:12) (cid:12) (cid:12) (cid:12) = a3β6 T N 3 N 3 S E3 U (22) (23) With a similar analysis, we have the clipping distortion as α2DkEU = Ω (1, DkEU ) = DkEU NT NS (cid:12) (cid:12) (cid:12) (cid:12) (cid:12) P (cid:88) p=1 bp pπ Pmod 2 (cid:12) (cid:12) (cid:12) (cid:12) (cid:12) = a1Dk NT NS EU (24) Now, with (16), (20), (22), and (24), we reconsider the equation in 15 and obtain the receiver SNR in a polynomial form, which is γ = a1β2 ∥H∥2 F EU ch + a1Dk σ2 NT NS EU + NS(cid:80) s=1 ϕ (3, s) · a3β6 T N 4 S N 3 E3 U (25) C. System SER From [37], it is established that the receiver SNR’s proba- bility distribution function (PDF) is given by Pe(γ) = ∥H∥2 F e−∥H∥2 F /γ · Γ(λ) where λ = E (cid:16) ∥H∥2 F (cid:17) = NT · NR (26) (27) and Γ(z) represents the Gamma function, which can be expressed as (cid:90) ∞ Γ(z) = xz−1e−xdx (28) Figure 3. The scatter chart of the measured PA and the fitted AM-AM curve of the PA. B. Polynomial Function of Receive SNR Upon examination of the right side of (11), it is evident that the power of the desired signal can be further written as α2β2 |H| F 2EU . Additionally, the power of the impairment symbol W is represented by EW . Consequently, the SNR at the receiver, following the OFDM demodulator, can be represented as where EW encompasses various sources of system distortion, including channel noise, signal clipping noise, and PA nonlin- earity. In light of these considerations, we express it as EW = σ2 ch + α2DkEU NT(cid:88) NS(cid:88) + 1 NS t=1 s=1 (cid:16)(cid:2)HDnonE−(cid:3) E t,s · (cid:2)HDnonE−(cid:3)∗ t,s (cid:17) (16) [R3-5]We now delve into the modeling of PA nonlinearity to simplify the last term in (16). In this paper, we follow the IMP analysis presented in our previous paper [36] to approximate the power of PA’s nonlinear distortion. Due to space limits, this paper does not repeat these derivation details. Instead, we present the outline of the derivation and refer our reader to [36] for detailed analysis and discussions. In general, the last term in (16) can be expressed as (cid:88) NS(cid:88) δ=3,5... s=1 ϕ(δ, s)Ω (cid:0)δ, β2EU (cid:1) (17) where ϕ(δ, s) denotes the number of δ order IMPs falling at the sth OFDM subcarrier, and it can be counted through a numerical process [35]. Additionally, Ω(η, β2EU ) is the power of an individual δ order IMP, and it can be further written as γ = α2β2 ∥H∥2 F EU EW where (15) a3 = (cid:12) (cid:12) (cid:12) (cid:12) (cid:12) P (cid:88) p=1 (cid:18) pπ Pmod bp 2 (cid:19)3(cid:12) (cid:12) (cid:12) (cid:12) (cid:12) Ω (cid:0)δ, β2EU (cid:1) = (cid:12) (cid:12) (cid:12) (cid:12) (cid:12) P (cid:88) (cid:104) p=1 bp · J NS −δ 0 (Vp) · J δ (cid:105) 1 (Vp) (cid:12) 2 (cid:12) (cid:12) (cid:12) (cid:12) (18) After being captured at the receiver antennas and processed by the spatial stream parser, received symbols are inputted 0 -25-20-15-10-505101520PA input power (dBm)-7.5-2.52.57.512.517.522.527.5PA output power (dBm)AM-AM Curve of an Ideal PAFitted AM-AM Curve of a Pratical PA Data Measurenment of a Pratical PA into the M-QAM decision block. 
As has been established in reference [38], the relationship between the receiver SNR and the symbol estimation error probability can be expressed as 4 Pw(γ) = (cid:17) π 2(cid:90) (cid:18) (cid:16)√ M − 1 √ π M sin2θ sin2θ + gQAM γ (cid:19)NR dθ 0 (cid:33)2 (cid:32) √ − 4 π M − 1 √ M π 2(cid:90) (cid:18) 0 sin2θ sin2θ + gQAM γ (cid:19)NR dθ where gQAM = 3 2(M − 1) (29) (30) Hence, the SER of a nonlinear OFDM system with clipping can be written as ∞ (cid:90) SER(γ) = Pe(γ)Pw(γ)dγ 0 (cid:32) √ M − 1 √ M π ≈ (cid:32) √ M − 1 √ M π = 2 (cid:33) π/2 ∞ (cid:90) (cid:90) (cid:16) e− 4 γ 3γ 2sin2 θ(M −1) +∥H∥2 F (cid:17) dγdθ 0 0 2 (M − 1) ∥H∥2 F 3γ (cid:33)λ (cid:18) B λ + 1 2 , (cid:32) · F1 λ, λ + 1 2 , λ + 1, − 2 (M − 1) ∥H∥2 F 3γ (cid:19) 1 2 (cid:33) where B (x, y) = (cid:82) 1 0 tx−1(1 − t)y−1dt is Beta function, and F1 (a, b, c, d) denotes the well known Gaussian hypergeomet- ric function [39]. At the end of this section, we would like to re-emphasize the complexity of the SNR expression and the SER expression presented in (25) and (31), respectively. The complexity of these two expressions is simplified from two aspects. First, we remove the lengthy expressions about memory effects, given that we mainly focus on the nonlinearity of the amplifier. Second, as we have discussed in Subsection A, the application of the BFPA model significantly reduces the complexity of our IMP analysis. These efforts simplify the expressions of SER and SNR, which facilitates the following analysis and optimization. IV. SER OPTIMIZATION Section III gives the SER of the studied system in (31), while this section further optimizes the SER in different scenarios. Before technical details, we give an outline of the fol- lowing three subsections and make the following notations. Subsection A assumes a constant η and optimizes the system operating point. The optimal SNR, the optimal, system oper- ating point, and the optimal SER performance are denoted as γopt , respectively. Subsection B assumes EU a constant EU and optimizes the signal clipping level. We denote the optimal signal clipping level, the optimal SNR, and the optimal SER as ηopt, γopt η , respectively. Subsection C undertakes the joint optimization of EU (power U , and SERopt EU η , and SERopt , Eopt 5 of the desired signal) and η (signal clipping level). The optimal signal clipping level and the optimal PA operating point are denoted by ηg and Eg U , respectively, where the superscript g represents the global optimum. The resulting optimal values for the SNR and SER are denoted as γg and SERg, respectively. A. Optimization of PA Operation Point From (31), we can derive the function for the SER with to the receiver SNR. The resulting expression is respect presented as (32) on the following page. Furthermore, with(32), we have (cid:32) F1 λ, λ + ∂ ∂γ 1 2 , λ + 1, − 2 (M − 1) ∥H∥2 F 3γ (cid:33) > 0 (33) Since ∂SER(γ)/∂γ is negative, we know SER is a decreasing function of γ. In other words, to obtain the minimum SER, we need to find the maximum possible SNR. We now look at how to find γopt EU . 
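Because (31) is expressed through the Beta function and the Gauss hypergeometric function, it can be evaluated directly with SciPy, which is convenient for sanity-checking the analysis against link-level simulations. The sketch below implements (31) as we read it from the (partly garbled) display above, so the prefactor and the 2F1 arguments should be treated as our transcription rather than a verified restatement; the values of M, λ, ‖H‖2 F and γ in the example are placeholders.

```python
import numpy as np
from scipy.special import beta as beta_fn, hyp2f1

def ser_closed_form(gamma: float, m_qam: int, h_fro_sq: float, lam: float) -> float:
    """Evaluate the closed-form SER of (31) as transcribed from the text.

    gamma     : receiver SNR from (25)
    m_qam     : QAM order M
    h_fro_sq  : squared Frobenius norm of the channel matrix, ||H||_F^2
    lam       : lambda = N_T * N_R, see (27)
    """
    sqrt_m = np.sqrt(m_qam)
    x = 2.0 * (m_qam - 1.0) * h_fro_sq / (3.0 * gamma)
    prefactor = 2.0 * (sqrt_m - 1.0) / (sqrt_m * np.pi)
    return (prefactor
            * x**lam
            * beta_fn(lam + 0.5, 0.5)
            * hyp2f1(lam, lam + 0.5, lam + 1.0, -x))

if __name__ == "__main__":
    # Placeholder numbers: 4-QAM, 2x2 MIMO, normalized channel, a few SNR values.
    lam = 2 * 2
    for gamma_db in (10.0, 20.0, 30.0):
        gamma = 10.0 ** (gamma_db / 10.0)
        ser = ser_closed_form(gamma, 4, 4.0, lam)
        print(f"gamma = {gamma_db:4.1f} dB  ->  SER approx {ser:.3e}")
```

Feeding the SNR of (25) into this function is one way the analytical SER curves shown in the later figures can be reproduced.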
With the expression in (17), we obtain the partial derivation function of γ (η, EU ) as the respect of EU as (31) ∂γ ∂EU = a1β2 ∥H∥2 F (cid:16) a1Dk NT NS (cid:16) ch − 2a3Φβ6 σ2 3NS NT 4 E3 U EU + a3β6Φ 3NS NT 4 E3 U + σ2 ch From (34), we know that ∂γ ∂EU    > 0, EU < = 0, EU = < 0, EU > NT N 4/3 S β2 NT N 4/3 S β2 NT N 4/3 S β2 (cid:115) 3 (cid:115) 3 (cid:115) 3 σ2 ch 2a3Φ σ2 ch 2a3Φ σ2 ch 2a3Φ (cid:17) (cid:17)2 (34) (35) As we can see from (35), ∂γ/∂EU turns from positive to negative with the increase of EU . Therefore, when ∂γ/∂EU = 0, γ reaches its maximum value γopt , and the corresponding EU EU can be written as Eopt U = NT NS β2 4 3 (cid:115) 3 σ2 ch 2a3Φ Introducing (36) into (17), we have γopt EU = a1β2NT N 4/3 a1N 1/3 ∥H∥2 S Dk(2a3Φ)−1/3σ2/3 F (2a3Φ)−1/3σ2/3 ch ch + 2β2σ2 ch S (36) (37) Introducing (37) to (31), the optimal SER performance can be written as SERopt EU = 2 √ M − 1 √ M π (cid:32) · B(λ + 1 2 , 1 2 (cid:32) )·F1 λ, λ + 2 (M − 1) ∥H∥2 F 3γopt EU (cid:33)λ · 1 2 , λ + 1, − 2 (M − 1) ∥H∥2 F 3γopt EU (cid:33) (38) ∂SER ∂γ = −2 √ M − 1 √ M π (cid:18) · B λ + (cid:19) 1 2 , 1 2    2λ (M − 1) ∥H∥2 F 3γ2 (cid:32) 2 (M − 1) ∥H∥2 F 3γ (cid:33)λ−1 (cid:32) · F1 λ, λ + (cid:32) + 2 (M − 1) ∥H∥2 F 3γ (cid:33)λ ∂F1 1 2 (cid:16) , λ + 1, − λ, λ + 1 (cid:33) 2 (M − 1) ∥H∥2 F 3γ 2 , λ + 1, − 2(M −1)∥H∥2 3γ F ∂γ 6 (32)    (cid:17) B. Optimization of Signal Clipping Level We see from (5) that the derivation function of power scaling factor β with respect to η should be written as We already know from subsection A that SER is a decreas- ing function of receiver SNR. Hence, we substitute (44) into (31) and obtain the optimal SER as ∂β ∂η = ∞ (cid:90) η e−t2 dt + ηe−η2 > 0 (39) SERopt η = 2 From (39), we know that the power scaling factor β is a monotone increasing function of η. We are interested in finding the optimal β so that we can do a simple mapping to obtain the optimal η. The partial derivation function for γ with respect to β2 can be calculated as ∂γ ∂β2 = (cid:16) a1DkEU NT NS 4 β6(cid:17) + σ2 (cid:16) a1DkEU NT NS ch − 2a3ΦE3 U 3NS + a3β6ΦE3 3NS NT NT U 4 + σ2 ch F EU a1 ∥H∥2 (cid:17)2 Let (40) be zero. We note that the optimal power scaling factor can be calculated through the following equation: ∂γ ∂β2 = 0 ⇔ − 2a3ΦE3 U T N 4 N 3 S β6 + a1DkEU NT NS + σ2 ch = 0 (41) Further consider 0 < β2 < 1 ,the only solution of (41) can be written as 2 3 (cid:115) 3 β2 = NSNT EU (a1DkEU + σ2 2a3Φ ch) (42) With (42), it can be proved that ∂γ ∂β2    > 0, 0 < β2 < = 0, β2 = < 0, β2 > NSNT EU (cid:115) 3 2 3 2 3 (cid:115) 3 (a1DkEU + σ2 2a3Φ ch) (a1DkEU + σ2 2a3Φ ch) (43) NSNT EU 2 3 (cid:115) 3 NSNT EU (a1DkEU + σ2 2a3Φ ch) We see from (43) that ∂γ/∂β2 changes from positive to negative with the growing up of β2. As a result, there is an optimal signal clipping level ηopt that can result in the maximum γ. With (43) and (5), we can then calculate ηopt as ηopt = ln (cid:118) (cid:117) (cid:117) (cid:117) (cid:116)1 −     (cid:118) (cid:117) (cid:117) (cid:116) NSN EU (cid:115) 3 2 3 T a1DkEU + σ2 ch 2a3Φ     (44) √ M − 1 √ M π (cid:32) (cid:32) · 2 (M − 1) ∥H∥2 F 3γopt η (cid:33)λ B(λ + 1 2 , 1 2 )· F1 λ, λ + 1 2 , λ + 1, − 2 (M − 1) ∥H∥2 F 3γopt η Then, we see that the optimal SNR can be written as γopt η = F EU a1β2 opt ∥H∥2 U + a1Dk NT NS E3 EU + σ2 ch a3β6 N 3 optΦ T N 4 S C. 
Joint Optimization (40) The derivation function of γ(η, EU ) is ∂2γ ∂β∂EU = ∂2 ∂β∂EU   a1β2 ∥H∥2 F EU + a3β6ΦE3 T N 4 N 3 S U a1DkEC NT NS + σ2 ch  = =   ∂ ∂EU a1DkEC NT NS a1(βg)2 ∥H∥2 (cid:16) a1DkEC NT NS U  a1(βg)2 ∥H∥2 F EU + a3(βg)6ΦE3 + σ2 N 3 T N 4 ch S F ΦE3 ch − 2a1a3(βg)8∥H∥2 (cid:17)2 T N 4 S F σ2 + a3(βg)6ΦE3 N 3 + σ2 ch N 3 U U T N 4 S (cid:33) (45) (46) (47)   where βg is the optimal signal power scaling factor that can result in ηg. Substituting (5) to (47) and letting = 0, we have ∂2γ ∂β∂EU Eg U = N 2 S − σ2 ch a1Dk (cid:118) (cid:117) (cid:117) (cid:117) (cid:116)1 −      (cid:118) (cid:117) (cid:117) (cid:116) ηg = ln (cid:113) σ2 ch 2a3Φ 3 a1DkNT N N 2 4 3 S S − σ2 ch (48) (49)      With (17), (31), (48) and (49), we obtain SERg and γg as (50) and (51) presented in the next page. V. SIMULATIONS AND DISCUSSIONS At the beginning of this section, we explain why the increase in the number of subcarriers makes PA’s nonlinearity a more challenging problem. Here we focus on a linear OFDM system’s complementary cumulative distribution func- tion (CCDF), which represents the probability of the system’s SERg = √ 2( M − 1) √ M π γg = (cid:32) · 2 (M − 1) ∥H∥2 F 3γg (cid:33)λ (cid:18) B λ + (cid:19) 1 2 , 1 2 (cid:32) · F1 λ, λ + 1 2 , λ + 1, − 2 (M − 1) ∥H∥2 F 3γg (cid:33) NT NS ∥H∥2 F Ω (cid:18) 1, NT N 4/3 S (cid:19) (cid:113) σ2 3 ch 2a3Φ (cid:18) N −1 S Ω 3, NT N 4/3 S (cid:113) σ2 3 ch 2a3Φ (cid:19) NS(cid:80) s=1 ϕ(3, s) + Ω (cid:16) 1, N 2 S −σ2 a1 ch (cid:17) + σ2 ch 7 (50) (51) actual PAPR exceeding a targeted PAPR threshold [40]. As we can see from Fig. 4, given the same PAPR threshold (say 9dB), OFDM systems with larger NS have significantly higher CCDFs. In this regard, with the rapid increase of NS, modern wireless communication systems have much higher CCDF than before, making the OFDM system more sensitive to PA’s nonlinearity. The above observation justifies the motivation of considering nonlinear amplifiers in this research. Figure 5. How SER changes with EU under different noise and clipping settings. Both simulated (Sim) and analytical (Ana) results consider two different assumptions: nonlinear (NL) PAs and linear (Lin) PA. Figure 4. How the system’s CCDF changes with PAPR under different subcarrier numbers. For detailed explanations about how CCDF is defined and calculated, we refer readers to Section III of [40]. The rest of this section validates our analytical results by simulating the studied OFDM system with signal clipping distortion and PA nonlinear distortion. We then study how the number of antennas affects the system’s SER performance and examine the total degradation (TD) performance of the system. In the following simulations, we set NS as 64, which cor- responds to the length of the OFDM modulator/demodulator block. The over-sampling rate of the OFDM symbols is set to 4. Furthermore, unless stated otherwise, we consider a MIMO configuration with a 2 × 2 antenna setting and employ 4-QAM as the default modulation scheme. We assume that the quasi- static Rayleigh channel is normalized, and we refer readers to [41] for implementation details of the channel. Fig. 5, Fig. 6, and Fig. 7 are presented to examine the influ- ence of the PA operating point EU on the SER performance under various system configurations while keeping η constant. These figures provide a comprehensive understanding of the relationship between EU and SER, with a comparison of simulated and analytical curves. Figure 6. 
The influence of channel noise under different EU . Here we let η = 100. Both simulated (Sim) and analytical (Ana) results consider two different assumptions: nonlinear (NL) PAs and linear (Lin) PAs.

In Fig. 5, it is evident that each nonlinear case exhibits an SER lower bound that is dependent on EU . This phenomenon can be explained as follows: when EU is not large, the PA stays in its linear region, hence its nonlinearity is negligible. Increasing EU raises the PA's input power and leads to a higher SNR and a lower SER. However, when EU is increased beyond a certain point, further increases result in very high nonlinear distortion. An SER lower bound is observed when the detrimental effect of the nonlinear distortion counteracts the amplification of the desired signal. The PA operating point that results in this SER lower bound is denoted as Eopt U . Beyond Eopt U , further increasing EU deteriorates the SER performance.

Figure 7. The influence of signal clipping level under different EU . Here we let the channel noise be −30 dB. Both simulated (Sim) and analytical (Ana) results are presented.

Figure 9. A 3D figure showing the relationship between 1) system SER, 2) signal clipping level η, and 3) PA operating point EU . Channel noise σ2 ch = −30 dB. Nonlinear PAs are considered.

From Fig. 6, we can see that the lower bound of the system SER and Eopt U are positively correlated with σ2 ch, the channel noise. That is, a higher noise level necessitates a higher EU to reach the optimal SER, but the resulting SER lower bound will be larger compared to scenarios with lower noise levels. Furthermore, Fig. 7 demonstrates a negative relationship between Eopt U , the SER lower bound, and η, the clipping level.
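The SER lower bound just described can be reproduced from the analysis alone: since (33) shows that the SER is monotonically decreasing in the receiver SNR, sweeping EU and locating the peak of the polynomial SNR (25) is equivalent to locating the SER lower bound and the operating point Eopt U . The sketch below performs such a sweep; the coefficients a1, a3, Dk and the IMP aggregate Φ are invented placeholders (in the paper they come from the fitted BFPA model and from the counting procedure of [35]), so only the qualitative shape of the curve is meaningful.

```python
import numpy as np

def snr_polynomial(e_u, beta_sq, a1, a3, d_k, phi_sum, sigma_ch_sq,
                   h_fro_sq, n_t, n_s):
    """Receiver SNR in the polynomial form of (25), as we read it."""
    signal = a1 * beta_sq * h_fro_sq * e_u
    clipping = a1 * d_k * e_u / (n_t * n_s)
    nonlinear = phi_sum * a3 * beta_sq**3 * e_u**3 / (n_t**3 * n_s**4)
    return signal / (sigma_ch_sq + clipping + nonlinear)

if __name__ == "__main__":
    # Placeholder constants; in the paper they follow from the fitted BFPA model.
    a1, a3, d_k, phi_sum = 1.0, 5e3, 1e-3, 1.2e5
    beta_sq, sigma_ch_sq = 0.8, 1e-3              # sigma_ch^2 = -30 dB
    n_t, n_s = 2, 64
    h_fro_sq = 4.0                                # E[||H||_F^2] for a 2x2 channel

    e_u_grid_db = np.linspace(-25.0, 5.0, 301)
    e_u_grid = 10.0 ** (e_u_grid_db / 10.0)
    gammas = np.array([snr_polynomial(e, beta_sq, a1, a3, d_k, phi_sum,
                                      sigma_ch_sq, h_fro_sq, n_t, n_s)
                       for e in e_u_grid])

    best = int(np.argmax(gammas))
    print(f"SNR peaks at E_U = {e_u_grid_db[best]:.1f} dB "
          f"(gamma = {10*np.log10(gammas[best]):.1f} dB); beyond this point the "
          "cubic distortion term dominates and the SER lower bound of Fig. 5 is reached.")
```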
Figure 10. How the curves of SER performance and the curves of the optimal signal clipping level ηopt change according to the system operating point EU .

Fig. 10 presents how SER and the optimal signal clipping level change with the system operating point EU under various noise conditions. It can be seen that there is a global lower bound of the system SER when the system operating point EU sweeps. Moreover, the optimal signal clipping level ηopt is a convex function, where its convex point corresponds to the system operating point. Besides, the global lower bound of the system SER is positively related to the channel noise σ2 ch. This can be explained by (51): γg is positively related to the channel noise.

Fig. 11 presents how the number of transmit/receive antennas affects the system SER when nonlinear amplifiers are considered. As the figure shows, using more transmit/receive antennas can significantly reduce the system's SER, although it comes with increased system complexity.

Fig. 12 presents an investigation into the relationship between the PA output back off (OBO) and the overall system TD. The OBO refers to the reduction in output power from the PA's maximum allowed output power, indicating the extent to which the PA operates in its nonlinear region. On the other

Figure 8. The performance of SER as a function of signal clipping level η when nonlinear PAs are considered. Both analytical (Ana) and simulated (Sim) results are presented.

Fig. 8 investigates η's impact on the SER when EU is constant. Both analytical and simulated SER curves are presented, considering the presence of significant nonlinear distortion in the PA. Notably, each curve exhibits an SER lower bound that is clipping-level dependent. When the PA operating point is sufficiently high, such as EU = 5 dB or 0 dB, increasing the clipping level initially leads to a decrease in SER. This is because signal clipping can help mitigate the nonlinear distortion of a PA by reducing the signal's PAPR. However, further increasing η beyond the optimal value can result in significant degradation of the system's SER performance because clipping noise now dominates the system's impairment.

In Fig. 9, we treat SER as a function of both the PA operating point EU and the signal clipping level η, with a consideration of nonlinear PAs. As anticipated in Section IV, we can observe a global optimal SER in the figure. Specifically, when the channel noise σ2 ch is set to −30 dB, the global optimal SER is determined to be 8.57 × 10−6, which aligns precisely with the simulated results.
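The global optimum discussed around Fig. 9 and Fig. 10 can also be located without transcribing the closed forms (48)–(49), which are difficult to read reliably from the typeset version above: evaluate the analytical SNR on a grid of (η, EU ) pairs and take the maximizer, since the SER is monotone decreasing in the SNR. The sketch below does this. The clipping-dependent quantities are assumptions on our part: β(η) is obtained by integrating (39) with β(0) = 0, which gives the familiar Bussgang attenuation factor, and Dk(η) is taken as the Bussgang-type uncorrelated clipping-noise power 1 − e−η² − β², which may differ from the paper's exact normalization of Dk. The remaining constants are the same placeholders as before.

```python
import numpy as np
from scipy.special import erfc

def beta_of_eta(eta: float) -> float:
    """Power scaling factor obtained by integrating (39) with beta(0) = 0.

    This coincides with the familiar Bussgang attenuation factor for soft
    envelope clipping of a Gaussian OFDM signal at clipping ratio eta.
    """
    return 1.0 - np.exp(-eta**2) + 0.5 * np.sqrt(np.pi) * eta * erfc(eta)

def dk_of_eta(eta: float) -> float:
    """ASSUMPTION: uncorrelated clipping-noise power from the Bussgang
    decomposition, D_k = (1 - exp(-eta^2)) - beta^2.  The paper's exact
    normalization of D_k is not reproduced here."""
    return max((1.0 - np.exp(-eta**2)) - beta_of_eta(eta) ** 2, 0.0)

def snr_polynomial(e_u, eta, a1, a3, phi_sum, sigma_ch_sq, h_fro_sq, n_t, n_s):
    """Receiver SNR (25) with eta-dependent beta and D_k plugged in."""
    b2 = beta_of_eta(eta) ** 2
    num = a1 * b2 * h_fro_sq * e_u
    den = (sigma_ch_sq
           + a1 * dk_of_eta(eta) * e_u / (n_t * n_s)
           + phi_sum * a3 * b2**3 * e_u**3 / (n_t**3 * n_s**4))
    return num / den

if __name__ == "__main__":
    a1, a3, phi_sum = 1.0, 5e3, 1.2e5            # placeholders, as before
    sigma_ch_sq, h_fro_sq, n_t, n_s = 1e-3, 4.0, 2, 64

    etas = np.linspace(0.5, 3.0, 51)
    e_us = 10.0 ** (np.linspace(-25.0, 5.0, 121) / 10.0)
    grid = np.array([[snr_polynomial(e, eta, a1, a3, phi_sum,
                                     sigma_ch_sq, h_fro_sq, n_t, n_s)
                      for e in e_us] for eta in etas])

    i, j = np.unravel_index(np.argmax(grid), grid.shape)
    print(f"grid optimum: eta_g ~ {etas[i]:.2f}, "
          f"E_U^g ~ {10*np.log10(e_us[j]):.1f} dB, "
          f"gamma_g = {10*np.log10(grid[i, j]):.1f} dB")
```

With measured PA coefficients in place of the placeholders, the maximizer of this grid plays the role of (ηg, Eg U ), and feeding γg into the closed-form SER gives the global lower bound plotted in Fig. 10.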
optimization of the clipping distortion and the PA nonlinearity to obtain the global minimum SER. This optimization process identifies the optimal signal clipping level and the optimal PA operating point. Furthermore, we examine the influence of system parameters on the SER performance under various configurations.

REFERENCES
[1] Y. Du, L. Hao, and Y. Lei, “SER analysis and joint optimization in nonlinear MIMO-OFDM systems with clipping,” in IEEE Vehicular Technology Conference (VTC-Spring), 2023.
[2] R. Yoshizawa and H. Ochiai, “Energy efficiency improvement of coded OFDM systems based on PAPR reduction,” IEEE Syst. J., vol. 11, no. 2, pp. 717–728, 2017.
[3] P. Aggarwal and V. A. Bohara, “End-to-end theoretical evaluation of a nonlinear MIMO-OFDM system in the presence of digital predistorter,” IEEE Syst. J., vol. 13, no. 3, pp. 2309–2319, 2019.
[4] M. Cherif, A. Arfaoui, R. Zayani, and R. Bouallegue, “End-to-end deep learning for multipair two-way massive MIMO with PA impairments,” IEEE Syst. J., vol. 17, no. 2, pp. 3150–3159, 2023.
[5] P. Aggarwal, A. Pradhan, and V. A. Bohara, “A downlink multiuser MIMO-OFDM system with nonideal oscillators and amplifiers: Characterization and performance analysis,” IEEE Syst. J., vol. 15, no. 1, pp. 715–726, 2021.
[6] P. Priya and D.
Sen, “Data detection with CFO uncertainty and non- linearity for mmWave MIMO-OFDM systems,” IEEE Syst. J., vol. 16, no. 3, pp. 3734–3745, 2022. [7] B. Adebisi, K. Anoh, and K. M. Rabie, “Enhanced nonlinear com- panding scheme for reducing PAPR of OFDM systems,” IEEE Syst. J., vol. 13, no. 1, pp. 65–75, 2019. [8] P. Naveen and P. Jena, “Adaptive protection scheme for microgrid with multiple point of common couplings,” IEEE Syst. J., vol. 15, no. 4, pp. 5618–5629, 2021. [9] Y. Du, L. Hao, and Y. Lei, “SER optimization in OFDM-IM systems with nonlinear power amplifiers,” IEEE Trans. Veh. Technol., pp. 1–6, 2023. [10] Y. Du, S. C. Liew, and Y. Shao, “Efficient FFT computation in IFDMA transceivers,” IEEE Trans. Wirel. Commun., pp. 1–1, 2023. [11] S. Gokceli, T. Levanen, T. Riihonen, M. Renfors, and M. Valkama, “Frequency-selective PAPR reduction for OFDM,” IEEE Trans. Veh. Technol., vol. 68, no. 6, pp. 6167–6171, 2019. [12] S. Y. Zhang and B. Shahrrava, “A SLM scheme for PAPR reduction in polar coded OFDM-IM systems without using side information,” IEEE Trans. Broadcast., pp. 1–10, 2020. [13] Seung Hee Han and Jae Hong Lee, “An overview of peak-to-average power ratio reduction techniques for multicarrier transmission,” IEEE Wireless Commun., vol. 12, no. 2, pp. 56–65, 2005. [14] J. Chen and C. Wen, “PAPR reduction of OFDM signals using cross- entropy-based tone injection schemes,” IEEE Signal Process. Lett., vol. 17, no. 8, pp. 727–730, 2010. [15] K. Bae, J. G. Andrews, and E. J. Powers, “Adaptive active constellation extension algorithm for peak-to-average ratio reduction in OFDM,” IEEE Commun. Lett., vol. 14, no. 1, pp. 39–41, 2010. [16] S. C. Thompson, J. G. Proakis, and J. R. Zeidler, “The effectiveness of signal clipping for PAPR and total degradation reduction in OFDM sys- tems,” in IEEE Global Telecommunications Conference (GLOBECOM), vol. 5, 2005. [17] H. Saeedi, M. Sharif, and F. Marvasti, “Clipping noise cancellation in OFDM systems using oversampled signal reconstruction,” IEEE Commun. Lett., vol. 6, no. 2, pp. 73–75, 2002. [18] D. Guel and J. Palicot, “Clipping formulated as an adding signal tech- nique for OFDM peak power reduction,” in IEEE Vehicular Technology Conference (VTC-Spring), 2009. [19] S. Gokceli, T. Levanen, T. Riihonen, M. Renfors, and M. Valkama, “Frequency-selective PAPR reduction for OFDM,” IEEE Trans. Veh. Technol., vol. 68, no. 6, pp. 6167–6171, 2019. [20] A. Ali, A. Al-Rabah, M. Masood, and T. Y. Al-Naffouri, “Receiver-based recovery of clipped OFDM signals for PAPR reduction: A bayesian approach,” IEEE Access, vol. 2, pp. 1213–1224, 2014. [21] J. Armstrong, “Peak-to-average power reduction for OFDM by repeated clipping and frequency domain filtering,” Electron. Lett., vol. 38, no. 5, pp. 246–247, 2002. Figure 11. How SER changes with EU under different numbers of trans- mit/receive antennas. In the figure, nTmR refers to a system with n transmit antennas plus m receive antennas. Figure 12. TD performance when various target SERs and different η values are considered. hand, TD represents the SNR loss compared to that of an ideal linear amplifier in achieving a specific level of SER. For each target SER, the TD graphs in Fig. 12 exhibit a truncation point in the low-value OBO range, with the associated SER lower bound being equal to the target SER. Notably, as the PA operates within its highly nonlinear range, the TD graphs noticeably deviate from those obtained with a linear PA. 
It is understandable that as the target SER decreases, the OBO value at which the truncation point occurs increases. This trend becomes evident when examining the TD graph for 4-QAM signals, particularly when the target SER ranges from 10−2 to 10−4. Furthermore, a decrease in the parameter η leads to an increase in the OBO value. This observation is apparent when analyzing the TD graph for 4-QAM signals with varying η values from 100 to 2.

VI. CONCLUSION

This paper studies the performance of OFDM systems with considerations of signal clipping distortion and PA nonlinearity distortion. By modeling the PA and analyzing its IMPs, we derive the system SNR and SER in a polynomial form. These derivations offer the advantage of low complexity while maintaining reasonable accuracy. Additionally, we conduct joint

[22] K. Anoh, C. Tanriover, and B. Adebisi, “On the optimization of iterative clipping and filtering for PAPR reduction in OFDM systems,” IEEE Access, vol. 5, pp. 12 004–12 013, 2017.
[23] K. Anoh, C. Tanriover, B. Adebisi, and M. Hammoudeh, “A new approach to iterative clipping and filtering PAPR reduction scheme for OFDM systems,” IEEE Access, vol. 6, pp. 17 533–17 544, 2018.
[24] I. Sohn and S. C. Kim, “Neural network based simplified clipping and filtering technique for PAPR reduction of OFDM signals,” IEEE Commun. Lett., vol. 19, no. 8, pp. 1438–1441, 2015.
[25] L. Yang, K. Song, and Y. M. Siu, “Iterative clipping noise recovery of OFDM signals based on compressed sensing,” IEEE Trans. Broadcast., vol. 63, no. 4, pp. 706–713, 2017.
[26] A. A. Eltholth, A. R. Mekhail, A. Elshirbini, M. Dessouki, and A. Abdelfattah, “Modeling the effect of clipping and power amplifier nonlinearities on OFDM systems,” Ubiquitous Comput. Commun. J., vol. 3, no. 1, pp. 54–59, 2009.
[27] L. Yiming, M. O’Droma, and J. Ye, “A practical analysis of performance optimization in OSTBC based nonlinear MIMO-OFDM systems,” IEEE Trans. Commun., vol. 62, no. 3, pp. 930–938, 2014.
[28] Y. Du, Y. Lei, and S. McGrath, “SER optimization in transparent OFDM relay systems in the presence of dual nonlinearity,” Digit. Signal Process., vol. 126, p.
103506, 2022. [29] H. Hemesi, A. Abdipour, and A. Mohammadi, “Analytical modeling of MIMO-OFDM system in the presence of nonlinear power amplifier with memory,” IEEE Trans. Commun., vol. 61, no. 1, pp. 155–163, 2013. [30] A. A. Saleh, “Frequency-independent and frequency-dependent nonlin- ear models of TWT amplifiers,” IEEE Trans Commun., vol. 29, no. 11, pp. 1715–1720, 1981. [31] P. Asbeck, H. Kobayashi, M. Iwamoto, G. Hanington, S. Nam, and L. Larson, “Augmented behavioral characterization for modeling the nonlinear response of power amplifiers,” in IEEE MTT-S International Microwave Symposium Digest, vol. 1, 2002. [32] P. Draxler, I. Langmore, T. Hung, and P. Asbeck, “Time domain characterization of power amplifiers with memory effects,” in IEEE MTT-S International Microwave Symposium Digest, vol. 2, 2003. [33] Y. Lei and M. O’Droma, “Behavioural analysis of internal mechanism of nonlinear distortion in OFDM signal systems,” in 2009 IEEE Global Telecommunications Conference, 2009, pp. 1–5. [34] M. O’Droma and L. Yiming, “A new bessel-fourier memoryless nonlin- ear power amplifier behavioral model,” IEEE Microw. Wirel. Compon. Lett., vol. 23, no. 1, pp. 25–27, 2013. [35] Y. Du, J. Chen, Y. Lei, and X. Hao, “Performance analysis of nonlinear spatial modulation multiple-input multiple-output systems,” Digit. Signal Process., vol. 115, p. 103064, 2021. [36] L. Yiming and M. O’Droma, “A novel decomposition analysis of nonlinear distortion in OFDM transmitter systems,” IEEE Trans. Signal Process., vol. 63, no. 19, pp. 5264–5273, 2015. [37] H. Zhao, Y. Gong, Y. L. Guan, and S. Li, “Performance analysis of space-time block codes in nakagami-m keyhole channels with arbitrary fading parameters,” in 2008 IEEE International Conference on Commu- nications, 2008, pp. 4090–4094. [38] A. Goldsmith, Wireless Communications. Cambridge University Press, 2005. [39] M. Abramowitz, I. A. Stegun, and R. H. Romer, “Handbook of mathe- matical functions with formulas, graphs, and mathematical tables,” 1988. [40] S. P. Yadav and S. C. Bera, “PAPR reduction using clipping and filtering technique for nonlinear communication systems,” in IEEE International Conference on Computing, Communication & Automation, 2015. [41] MathWorks, Communication [On- https://www.mathworks.com/help/comm/ref/comm. Toolbox, 2024. line]. Available: rayleighchannel-system-object.html
synthetic_cpt
1
Non-Reference_Quality_Assessment_for_Medical_Imaging_Application_to_Synthetic_Brain_MRIs.pdf
hep-th/0107226 NSF-ITP-01-74 1 0 0 2 l u J 6 2 1 v 6 2 2 7 0 1 0 / h t - p e h : v i X r a Non-Linear / Non-Commutative Non-Abelian Monopoles Koji Hashimoto∗ Institute for Theoretical Physics, University of California, Santa Barbara, CA 93106-4030 Abstract Using recently proposed non-linearly realized supersymmetry in non-Abelian 2), we derive the non-linear BPS equations in gauge theory corrected to O(α′ the background B-field for the U(2) monopoles and instantons. We show that these non-Abelian non-linear BPS equations coincide with the non-commutative anti-self-dual equations via the Seiberg-Witten map. ∗[email protected] It has been known that there are α′ corrections to the super Yang-Mills theory as a low energy effective action of superstring theories [1][2]. The low energy effective theories have been a very strong tool for analyzing the full string theory to find dualities and non- perturbative properties. However, the entire structure of the α′ corrections is still beyond our reach, although much elaborated work has been devoted to this subject [3]–[19]. To be concrete, a stack of parallel Dp-branes has a low energy effective description which is the p+1-dimensional super Yang-Mills theory accompanied with the α′ corrections, but even in the slowly-varying field approximation the complete form of the effective action has not been obtained yet. To fix this problem, recently there appeared several attempts to constrain the action by supersymmetries [17], or the equivalence [20] to non-commutative theories [10][11]. 2). Especially the paper [17] fixed all the ambiguity of the ordering and coefficients up to O(α′ In this paper, we give an evidence supporting both of these arguments of supersymmetries and non-commutative geometries, by analyzing the BPS equations. Solitons and instantons as solutions of the BPS equations in the low energy effective theory of D-branes have brane interpretations. For example, a BPS monopole in U(2) Yang-Mills-Higgs theory corresponds to a D-string suspended between two parallel D3-branes. We consider these brane configu- rations in the background B-field, and explicitly construct U(2) non-linear BPS equations for the monopoles and the instantons. For the construction we need an explicit form of the linearly/non-linearly realized supersymmetry transformations in the effective theory which was obtained in [3] and [17]. According to the equivalence observed in [20], these equations should be equivalent with the U(2) non-commutative BPS equations [21]–[26]. In this paper we shall explicitly show this equivalence∗. This fact is a supporting evidence of the super- symmetry transformation in the effective action determined in [17]. Then we shall proceed to obtain the explicit solutions to these equations and discuss the brane interpretation of them. The low energy effective action of open superstring theory with U(N) Chan-Paton factor is given by the super Yang-Mills action corrected by α′ [1][2]: L = str 1 4 − (cid:20) (Fij)2 + 1 2 2 π2α′ FijFjkFklFli − (cid:18) 1 4 (Fij)2(Fkl)2 (cid:19)(cid:21) + (fermions) + O(α′ 3). (1) The recent argument [17][18] on the ordering of the gauge fields and the fermions shows that 2 all the terms can be arranged by the symmetrized trace (str), which up to the order of α′ is compatible with the string scattering amplitudes and also the supersymmetries. We use the action in the Euclidean four-dimensional space to treat the anti-self-duality equation for both the monopoles and instantons simultaneously. 
This action is obtained via dimensional reduction with Aµ = 0 (µ = 0, 5, 6, 7, 8, 9). The normalization for the gauge symmetry ∗In the Abelian case, this equivalence was shown in [27]. 1 generators is given by tr[T AT B] = δAB, which follows the convention of [18]. The action (1) has a linearly realized supersymmetry for the gaugino [3] δǫχA = 1 2 ΓijF A ij ǫ − 1 8 π2α′ 2str(T AT BT CT D) ij F C F B h ji F D kl Γkl − 4F B ij F C jkF D kl Γil ǫ, i (2) which includes the α′ corrections to the first nontrivial order. The recent paper [17] shows that this system has another supersymmetry, non-linearly realized supersymmetry, as is expected from the fact that the action (1) describes a stuck of N D-branes which breaks half of the bulk supersymmetries. This non-linearly realized supersymmetry is given by δηχA = ηA − 1 2 π2(α′)2str(T AT BT CT D) 1 2 (cid:20) F B ij F C ij + 1 4 F B ij F C klΓijkl(cid:21) ηD, (3) where the transformation parameter η has its value only for a U(1) subgroup of U(N) [17]. We have already neglected the fermions in the right hand sides of (2) and (3). In order to compare our results with the previous literatures [23]–[25][27] we will consider σa for a = . Therefore especially the symmetrized trace of the four generators only the gauge group U(2). The normalized generators are defined as T a = 1 √2 1, 2, 3 and T 4 = 1 √2 appearing in the above supersymmetry transformations (2) and (3) is given by str(T AT AT AT A) = str(T aT aT 4T 4) = 1 2 , str(T aT aT bT b) = 1 6 (a 6= b), (4) where the upper case A runs all the generators of U(2): A = 1, 2, 3, 4. We turn on the background B-field which induces the non-commutativity on the world- ij +2Bij, volume of the D-branes. This B-field is appearing in the action (1) as F 4 due to the bulk gauge invariance of the B-field. ij → F 4 ij = F 4 For simplicity, we put πα′ = 1, which can be restored on the dimensional ground anytime. The action (1) and its symmetries (2) (3) are obtained in string theory in the approximation F ≪ 1 and the slowly-varying field approximation. We keep this in mind, and in the following we shall obtain the non-linearly-modified BPS equations, perturbatively in small B. The basic BPS equations around whose solutions we expand the fields are the anti-self-duality equations ij + ∗F (0)a F (0)a ij = 0, F (0)4 ij = 0, (5) ij = F (0)A where we have expanded the fields as F A ij + O(B), and the Hodge ∗ is defined as ∗Fij ≡ ǫijklFkl/2. These equations are obtained by considering the lowest order in α′ in (2) by requiring a half of the linearly-realized supersymmetries are preserved. The transformation parameters of the preserved supersymemtries then obey the chirality condition (1 + Γ5)ǫ = 0 2 (6) where Γ5 = Γ1234. In the following, we assume that this chirality condition for ǫ persists also to the higher order in α′ and even with the inclusion of B. This assumption will be checked by the explicit existence of the solutions. Along the argument given in [20][27]–[31], first we consider a combination of the two supersymmetries (2) and (3) which remains unbroken at the spatial infinity where F = 0. The vanishing of F gives δǫχA = BijΓijǫ + O(B3), δηχ4 = η4 + O(B2). Thus (δǫ + δη)χ4 = 0 at the infinity is equivalent with η4 = −BijΓijǫ + O(B3). 
(7) (8) Using this relation between two supersymmetry transformations, the vanishing of the super- symmetry transform of the gaugino in all the four-dimensional space leads to BPS conditions 1 2 1 2 F a ijΓijǫ = 0, F 4 ijΓijǫ − BijΓijǫ − − 1 4 1 8 str(T 4T BT CT 4) F B ij F C ij + F B ij F C (cid:20) 1 2 F B h 1 4 klΓijkl(cid:21) ij F C str(T 4T BT CT D) ij F C ji F D kl Γkl − 4F B jkF D kl Γil ǫ = 0. (10) i (9) BijΓijǫ The first one (9) gives usual anti-self-duality equation† without any correction of B. In the analysis up to this order, only the U(1) part of the gauge field obtains the first nontrivial correction of B as F 4 ij = O(B). Let us calculate the third and the fourth terms in (10). Keeping in mind that we neglect the terms of the higher order, the third term can be arranged as − 1 4 h (F (0)B ij )2 i BklΓklǫ + O(B2), (11) where we have used the anti-self-duality of F (5) and the chirality of ǫ (6). After a straight- forward calculation, the fourth term in (10) can be evaluated in the same manner and turns out to be the same as (11)‡. The term (F (0))3 is negligible because it is of the higher order. These evaluation simplifies the BPS condition (10) to ijΓijǫ − (F (0)A F 4 kl )2BijΓijǫ = 0. (12) †The α′ corrections in the linearly-realized transformation (2) are actually factored-out when the lowest order relations (5) are substituted. ‡These calculations are easily performed with the use of the block-diagonal form of the matrix B which is obtained by the space rotation without losing generality. 3 Decomposing this condition into the components, we obtain the non-linear BPS equations ij + ∗F 4 F 4 ij − π2α′ 2(Bij + ∗Bij)(F (0)A kl )2 = 0, (13) where we have restored the dimensionality. The important is to check whether the equations (13) are equivalent with the non- commutative BPS equations via the Seiberg-Witten map [20]. The non-commutative U(2) monopoles/instantons [21]–[26] satisfy the following BPS equations ij + ∗ ˆF A ˆF A kl = 0, (14) where fields with the hat indicate the ones in the non-commutative space. Substituting the Seiberg-Witten map [20] ˆFij = Fij + 1 2 θkl (cid:18) 2{Fik, Fjk} − {Ak, (Dl + ∂l)Fij} + O(θ2) (cid:19) (15) into the above non-commutative BPS equation (14) and noting that the last gauge-variant terms in (15) vanish with the use of the lowest level anti-self-duality (5), then we obtain ij + ∗F 4 F 4 ij + 1 4 (θij + ∗θij)(F (0) kl )2 = 0. Now we can use the relation [20, 27] θij = −(2πα′)2Bij (16) (17) which has been deduced from the worldsheet propagator for an open string in the approxima- tion α′B ≪ 1, then we can see the equivalence between the non-commutative BPS equations (14) and the non-linear BPS equations (13). Let us consider the specific brane configurations. (1) U(2) non-commutative monopole. In this case we perform the dimensional reduction further down to the three-dimensional space and regard the fourth gauge field A4 as a scalar field Φ. We turn on only one component of the B-field, B12 6= 0. Since we have a solution to the U(2) non-commutative BPS equation for a monopole [23][24][26], and we know the Seiberg-Witten transform of that solution to an appropreate order in α′ [27], then from the above equivalence, that transform is actually a corresponding solution to the non-linear BPS equation (13). 
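The manipulations around (11)–(12) above, and the anti-self-dual-B argument invoked for the instanton case below, rest on a single algebraic fact: on spinors obeying the chirality condition (6), contracting an anti-self-dual two-form with Γij gives zero. This is straightforward to verify numerically. The sketch below uses one explicit representation of the Euclidean gamma matrices (our own choice, fixed only by {Γi, Γj} = 2δij; the paper does not specify a representation), builds Γ5 = Γ1Γ2Γ3Γ4, and checks that BijΓij annihilates the (1 + Γ5)ǫ = 0 subspace when B12 + B34 = 0, while a self-dual B does not.

```python
import numpy as np

# Pauli matrices
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)
Z2 = np.zeros((2, 2), dtype=complex)

def block(a, b, c, d):
    return np.block([[a, b], [c, d]])

# One explicit choice of Euclidean gamma matrices with {G_i, G_j} = 2 delta_ij.
G = [block(Z2, 1j * s, -1j * s, Z2) for s in (s1, s2, s3)] + [block(Z2, I2, I2, Z2)]
G5 = G[0] @ G[1] @ G[2] @ G[3]                     # Gamma_5 = Gamma_1234

def sigma_contract(B):
    """Return B_ij Gamma_i Gamma_j summed over i, j."""
    out = np.zeros((4, 4), dtype=complex)
    for i in range(4):
        for j in range(4):
            if i != j:
                out += B[i, j] * (G[i] @ G[j])
    return out

def two_form(b12, b34):
    B = np.zeros((4, 4))
    B[0, 1], B[1, 0] = b12, -b12
    B[2, 3], B[3, 2] = b34, -b34
    return B

# Negative-chirality spinors: eigenvectors of Gamma_5 with eigenvalue -1,
# i.e. solutions of (1 + Gamma_5) eps = 0.
evals, evecs = np.linalg.eigh(G5)
minus = evecs[:, np.isclose(evals, -1.0)]

B_asd = two_form(1.0, -1.0)    # anti-self-dual: B12 + B34 = 0
B_sd = two_form(1.0, 1.0)      # self-dual, kept for contrast

print("|| B_asd . Gamma eps || =", np.linalg.norm(sigma_contract(B_asd) @ minus))
print("|| B_sd  . Gamma eps || =", np.linalg.norm(sigma_contract(B_sd) @ minus))
```

This is the same projection property that the text invokes when simplifying the third and fourth terms of (10) using the anti-self-duality (5) and the chirality condition (6).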
After diagonalization of the scalar field, the eigenvalues exhibits the configuration in which the single D-string suspended between the two parallel D3-branes is tilted [21] so that they preserve 1/4 supersymmetries in the bulk with the B-field, as shown in [27]. 4 (2) U(2) instanton. It is known that the small instanton singularity of the anti-self-dual instanton moduli space is resolved if we introduce self-dual background θ [20, 32]. However, this resolution does not occur in the case of anti-self-dual θ. This fact may be observed from the non-linear BPS equations and their solutions. First let us analyze the anti-self-dual B-field (note the relation (17)) B12 + B34 = 0. Since the equation BijΓijǫ = 0 (18) holds for ǫ which is involved with the preserved supersymmetries for the anti-self-dual gauge field configuration, the whole η terms vanish. Thus the linear BPS equation is not corrected, and so the configuration is not affected by the B-field: F + ∗F = 0. (19) This is consistent with the observation that the linear BPS equation F + ∗F = 0 may solve fully α′-corrected non-Abelian effective theory, as it is true in the case of Abelian theory [33]. Since now the self-duality is the same as the B-field orientation, we can subtract the B-field from the both sides of the above equation and then obtain (19). This result may be related to the observation in [34] that for the large instanton radius the commutative description of the non-commutative U(2) instanton [25] does not seem to have θ dependence§. From the non-commutative side, we substitute the Seiberg-Witten map to the non-commutative BPS equation (14), but then the order θ terms cancel with each other and we found the usual anti-self-dual equation (19). On the other hand, for the self-dual B-field background B12 = B34, there exists a correc- tion, which is expected from the resolution of the small instanton singularity. One can solve the non-linear BPS equation (13) using the general ansatz [20] in this background for a radial function h(r). Substituting the lowest order solution A4 i = Bijxjh(r) F (0)a ij = 4ρ2 (r2 + ρ2)2 ηaij, we obtain a differential equation for h(r) and the solution is h(r) = 16π2 ρ4(3r2 + ρ2) r4(r2 + ρ2)3 . (20) (21) (22) §For the small value of ρ the gauge fields is not slowly-varying, the D-instanton charge distribution is corrected due to the derivative corrections to the Wess-Zumino term [35], thus we may not see any relation with [34]. 5 This is the first nontrivial correction to the anti-self-dual instanton. Since in this case the small instanton singularity must be resolved, we might be able to see it by computing the instanton charge distribution with this correction, but it turns out to be very small as ∼ B2ρ8/r16 compared to the original instanton density ∼ ρ4/r8. Therefore unfortunately we cannot see the change of the instanton radius caused by the introduction of the B-field. Acknowledgments: The author would like to thank T. Hirayama and W. Taylor for useful comments. This research was supported in part by Japan Society for the Promotion of Science under the Postdoctoral Research Program (# 02482), and the National Science Foundation under Grant No. PHY99-07949. References [1] D. J. Gross and E. Witten, “Superstring Modification of Einstein Equations”, Nucl. Phys. B277 (1986) 1. [2] A. A. Tseytlin, “Vector Field Effective Action in the Open Superstring Theory”, Nucl. Phys. B276 (1986) 391; Erratum – ibid. B291 (1987) 879. [3] E. Bergshoeff, M. Rakowski and E. 
Sezgin, “Higher Derivative Super Yang-Mills theo- ries”, Phys. Lett. B185 (1987) 371. [4] Y. Kitazawa, “Effective Lagrangian for Open Superstring from Five Point Function”, Nucl. Phys. B289 (1987) 599. [5] A. A. Tseytlin, “On non-abelian generalization of Born-Infeld action in string theory”, Nucl. Phys. B501 (1997) 41, hep-th/9701125. [6] A. Hashimoto and W. Taylor IV, “Fluctuation Spectra of Tilted and Intersecting D- branes from the Born-Infeld Action”, Nucl. Phys. B503 (1997) 193, hep-th/9703217. [7] S. Gonorazky, F. A. Schaposnik and G. Silva, “Supersymmetric Non-Abelian Born- Infeld Theory”, Phys. Lett. B449 (1999) 187, hep-th/9812094. [8] F. Denef, A. Sevrin and J. Troost, “Non-Abelian Born-Infeld versus String Theory”, Nucl. Phys. B581 (2000) 135, hep-th/0002180. [9] S. V. Ketov, “N = 1 and N = 2 supersymmetric non-abelian Born-Infeld actions from superspace”, Phys. Lett. B491 (2000) 207, hep-th/0005265. 6 [10] L. Cornalba, “On the General Structure of the Non-Abelian Born-Infeld Action”, hep-th/0006018. [11] S. Terashima, “The Non-Abelian Born-Infeld Action and Noncommutative gauge the- ory”, JHEP 0007 (2000) 033, hep-th/0006058. [12] A. Refolli, N. Terzi and D. Zanon, “Non abelian N=2 supersymmetric Born Infeld action”, Phys. Lett. B486 (2000) 337, hep-th/0006067. [13] E. A. Bergshoeff, M. de Roo and A. Sevrin, “Non-abelian Born-Infeld and kappa- symmetry”, hep-th/0011018. [14] A. Sevrin, J. Troost and W. Troost, “The non-abelian Born-Infeld action at order F 6”, Nucl. Phys. B603 (2001) 389, hep-th/0101192. [15] M. Cederwall, B. E. W. Nilsson and D. Tsimpis, “The structure of maximally supersym- metric Yang-Mills theory: constraining higher-order corrections”, JHEP 0106 (2001) 034, hep-th/0102009. [16] A. Refolli, A. Santambrogio, N. Terzi and D. Zanon, “F 5 contributions to the non- abelian Born Infeld action from a supersymmetric Yang-Mills five-point function”, hep-th/0105277. [17] M. Cederwall, B. E. W. Nilsson and D. Tsimpis, “D = 10 Super-Yang-Mills at O(α′ 2)”, hep-th/0104236. [18] E. A. Bergshoeff, A. Bilal, M. de Roo and A. Sevrin, “Supersymmetric non-abelian Born-Infeld revisited”, hep-th/0105274. [19] A. Bilal, “Higher-Derivative Corrections to the Non-Abelian Born-Infeld Action”, hep-th/0106062. [20] N. Seiberg and E. Witten, “String theory and noncommutative geometry”, JHEP 9909 (032) 1999, hep-th/9908142. [21] A. Hashimoto and K. Hashimoto, “Monopoles and Dyons in Non-Commutative Geom- etry”, JHEP 9911 (1999) 005, hep-th/9909202. [22] D. Bak, “Deformed Nahm Equation and a Noncommutative BPS Monopole”, Phys. Lett. B471 (1999) 149, hep-th/9910135. [23] K. Hashimoto, H. Hata and S. Moriyama, “Brane Configuration from Monopole Solution in Non-Commutative Super Yang-Mills Theory”, JHEP 9912 (1999) 021, hep-th/9910196. 7 [24] S. Goto and H. Hata, “Noncommutative Monopole at the Second Order in θ”, Phys. Rev. D62 (2000) 085022, hep-th/0005101. [25] K. Furuuchi, “Dp-D(p+4) in Noncommutative Yang-Mills”, JHEP 0103 (2001) 033, hep-th/0010119. [26] D. J. Gross and N. Nekrasov, “Solitons in Noncommutative Gauge Theory”, JHEP 0103 (2001) 044, hep-th/0010090. [27] K. Hashimoto and T. Hirayama, “Branes and BPS Configurations of Non-Commutative /Commutative Gauge Theories”, Nucl. Phys. B587 (2000) 207, hep-th/0002090. [28] M. Marino, R. Minasian, G. Moore and A. Strominger, “Nonlinear Instantons from Supersymmetric p-Branes”, JHEP 0001 (2000) 005, hep-th/9911206. [29] S. Terashima, “Instantons in the U(1) Born-Infeld Theory and Noncommutative Gauge Theory”, Phys. Lett. 
B477 (2000) 292, hep-th/9911245. [30] S. Moriyama, “Noncommutative Monopole from Nonlinear Monopole”, Phys. Lett. B485 (2000) 278, hep-th/0003231. [31] S. Moriyama, “Noncommutative/Nonlinear BPS Equations without Zero Slope Limit”, JHEP 0008 (2000) 014, hep-th/0006056. [32] N. Nekrasov and A. Schwarz, “Instantons on noncommutative R4, and (2,0) superconfor- mal six dimensional theory”, Commun. Math. Phys. 198 (1998) 689, hep-th/9802068. [33] L. Thorlacius, “Born-Infeld String as a Boundary Conformal Field Theory”, Phys. Rev. Lett. 80 (1998) 1588, hep-th/9710181. [34] K. Hashimoto and H. Ooguri, “Seiberg-Witten Transforms of Noncommutative Soli- tons”, hep-th/0105311, to be published in Phys. Rev. D. [35] N. Wyllard, “Derivative corrections to D-brane actions with constant background fields”, Nucl. Phys. B598 (2001) 247, hep-th/0008125. 8
synthetic_cpt
2
Distilling_Knowledge_from_Reader_to_Retriever_for_Question_Answering.pdf
Published as a conference paper at ICLR 2021

DISTILLING KNOWLEDGE FROM READER TO RETRIEVER FOR QUESTION ANSWERING

Gautier Izacard1,2,3, Edouard Grave1
1Facebook AI Research, 2École normale supérieure, PSL University, 3Inria
{gizacard|egrave}@fb.com

arXiv:2012.04584v2 [cs.CL] 4 Aug 2022

ABSTRACT

The task of information retrieval is an important component of many natural language processing systems, such as open domain question answering. While traditional methods were based on hand-crafted features, continuous representations based on neural networks recently obtained competitive results. A challenge of using such methods is to obtain supervised data to train the retriever model, corresponding to pairs of query and support documents. In this paper, we propose a technique to learn retriever models for downstream tasks, inspired by knowledge distillation, and which does not require annotated pairs of query and documents. Our approach leverages attention scores of a reader model, used to solve the task based on retrieved documents, to obtain synthetic labels for the retriever. We evaluate our method on question answering, obtaining state-of-the-art results.

1 INTRODUCTION

Information retrieval is an important component for many natural language processing tasks, such as question answering (Voorhees et al., 1999) or fact checking (Thorne et al., 2018). For example, many real world question answering systems start by retrieving a set of support documents from a large source of knowledge such as Wikipedia. Then, a finer-grained model processes these documents to extract the answer. Traditionally, information retrieval systems were based on hand-crafted sparse representations of text documents, such as TF-IDF or BM25 (Jones, 1972; Robertson et al., 1995). Recently, methods based on dense vectors and machine learning have shown promising results (Karpukhin et al., 2020; Khattab et al., 2020). Deep neural networks based on pre-training, such as BERT (Devlin et al., 2019), have been used to encode documents into fixed-size representations. These representations are then queried using approximate nearest neighbors (Johnson et al., 2019). These techniques have led to improved performance on various question answering tasks.

A challenge of applying machine learning to information retrieval is to obtain training data for the retriever. To train such models, one needs pairs of queries and the corresponding list of documents that contains the information corresponding to the queries. Unfortunately, hand-labeling data to that end is time consuming, and many datasets and applications lack such annotations. An alternative approach is to resort to heuristics, or weakly supervised learning, for example by considering that all documents containing the answer are positive examples. However, these approaches suffer from the following limitations. First, frequent answers or entities might lead to false positive examples. As an example, consider the question “where was Ada Lovelace born?”. The sentence “Ada Lovelace died in 1852 in London” would be considered as a positive example, because it contains the answer “London”. A second limitation is that for some tasks, such as fact checking or long form question answering, such heuristics might not be applicable directly.

In this paper, we propose a procedure to learn retriever systems without strong supervision in the form of pairs of queries and documents. Following previous work (Chen et al., 2017), our approach uses two models: the first one retrieves documents from a large source of knowledge (the retriever), the second one processes the support documents to solve the task (the reader). Our method is inspired by knowledge distillation (Hinton et al., 2015), and uses the reader model to obtain synthetic labels to train the retriever model. More precisely, we use a sequence-to-sequence model as the reader, and use the attention activations over the input documents as synthetic labels to train the retriever. Said otherwise, we assume that attention activations are a good proxy for the relevance of
Following previous work (Chen et al., 2017), our approach uses two models: the first one retrieves documents from a large source of knowledge (the retriever), the second one processes the support documents to solve the task (the reader). Our method is inspired by knowledge distillation (Hinton et al., 2015), and uses the reader model to obtain synthetic labels to train the retriever model. More precisely, we use a sequence-to-sequence model as the reader, and use the attention activations over the input documents as synthetic labels to train the retriever. Said otherwise, we assume that attention activations are a good proxy for the relevance of 1 Published as a conference paper at ICLR 2021 documents. We then train the retriever to reproduce the ranking of documents corresponding to that metric. We make the following contributions: • First, we show that attention scores from a sequence-to-sequence reader model are a good measure of document relevance (Sec. 3.2) ; • Second, inspired by knowledge distillation, we propose to iteratively train the retriever from these activations, and compare different loss functions (Sec. 3.4) ; • Finally, we evaluate our method on three question-answering benchmarks, obtaining state- of-the-art results (Sec. 4). Our code is available at: github.com/facebookresearch/FiD. 2 RELATED WORK We briefly review information retrieval based on machine learning. We refer the reader to Manning et al. (2008) and Mitra et al. (2018) for a more exhaustive introduction to the subject. Vector space models. In traditional information retrieval systems, documents and queries are rep- resented as sparse vectors, each dimension corresponding to a different term. Different schemes have been considered to weigh the different term, the most well known being based on inverse doc- ument frequency, or term specificity (Jones, 1972). This technique was later extended, leading to the BM25 weighting scheme which is still widely used today (Robertson et al., 1995). A limita- tion of sparse representations is that the terms of the query need to match the terms of the returned documents. To overcome this, Deerwester et al. (1990) proposed to use latent semantic analysis for indexing, leading to low-dimension dense representations of documents. Neural information retrieval. Following the success of deep learning for other natural process- ing tasks, neural networks were applied to the task of information retrieval. Huang et al. (2013) proposed a deep bag-of-words model, where queries and documents were embedded independently, a technique known as bi-encoder. Documents were then ranked by using the cosine similarity with the query, and the model was trained on clickthrough data from a search engine. This technique was later extended by using convolutional neural networks (Shen et al., 2014) and recurrent neu- ral networks (Palangi et al., 2016). A limitation of independently embedding documents and query is that it does not capture fine-grained interactions between the query and documents. This lead Nogueira & Cho (2019) and Yang et al. (2019) to use a BERT model to jointly embed documents and query, a technique known as cross-encoder. End-to-end retrieval. Most of the methods described in the previous paragraph were used to re- rank a small number of documents, usually returned by a traditional IR systems. In the context of ad-hoc document retrieval, Gillick et al. (2018) showed that bi-encoder models could be com- petitive with traditional IR systems. 
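To make the bi-encoder versus cross-encoder distinction discussed above concrete, here is a minimal sketch of the two scoring strategies. The embedding function and pair scorer are toy stand-ins, not the trained models used in this line of work.

```python
import numpy as np

def bi_encoder_scores(embed, query, docs):
    """Bi-encoder: embed query and documents independently, then rank by
    cosine similarity (document embeddings can be precomputed offline)."""
    q = embed(query)
    D = np.stack([embed(d) for d in docs])
    q = q / np.linalg.norm(q)
    D = D / np.linalg.norm(D, axis=1, keepdims=True)
    return D @ q

def cross_encoder_scores(score_pair, query, docs):
    """Cross-encoder: jointly process each (query, document) pair, capturing
    fine-grained interactions, at the cost of re-encoding every pair at query time."""
    return np.array([score_pair(query, d) for d in docs])

# Toy stand-ins for learned models (purely illustrative).
def toy_embed(text):
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.random(8)

def toy_pair_scorer(q, d):
    return float(len(set(q.lower().split()) & set(d.lower().split())))

docs = ["Ada Lovelace was born in London.", "BM25 is a sparse retrieval model."]
print(bi_encoder_scores(toy_embed, "where was Ada Lovelace born", docs))
print(cross_encoder_scores(toy_pair_scorer, "where was Ada Lovelace born", docs))
```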
For open domain question-answering, Karpukhin et al. (2020) introduced dense passage retrieval (DPR), which uses dense embeddings and nearest neighbors search. More precisely, question and passage embeddings are obtained using a BERT-based bi- encoder model, which is trained on a small dataset of question and passage pairs. Then, the full knowledge source (Wikipedia) is encoded using this model, and passages are queried by computing the k-nearest neighbors of the embedding of the question. Jointly embedding the query and docu- ments makes the application of cross-encoder models intractable to large database. To address this limitation, Humeau et al. (2019) introduced the poly-encoder architecture, in which each documents is represented by multiple vectors instead of one. Similarly, Khattab et al. (2020) proposed a scoring function where each term of the query and documents is represented by a single vector. To make the method tractable, their system retrieves documents with an approximate score, which are then re-ranked with the exact one. Finally, Luan et al. (2020) conducts a theoretical and empirical study of sparse, dense and cross-attention information retrieval systems. Unsupervised learning. Closest to our work, there is growing body of work trying to learn infor- mation retrieval systems from unsupervised data. Lee et al. (2019) introduced the inverse cloze task 2 Published as a conference paper at ICLR 2021 for pre-training retrievers, which can then be fine-tuned end-to-end on question-answering tasks. This pre-training scheme was later evaluated for ad-hoc document retrieval by Chang et al. (2020). Guu et al. (2020) proposed to augment language model pre-training with a retriever module, which is trained using the masked language modeling objective. Similarly, Lewis et al. (2020a) introduced a sequence-to-sequence model that is pre-trained by generating a target text, after retrieving a set of related texts. Lewis et al. (2020b) further train the retriever obtained in Karpukhin et al. (2020) by backpropagating to the retriever the error between the generated output and the gold answer. Simultaneously to our work, Yang & Seo (2020) proposes to train a retriever with knowledge dis- tillation. The main difference with our method is the nature of the synthetic labels that are used to train the retriever. Yang & Seo (2020) uses the DPR reader, which includes a classifier that predicts which passage contains the answer, and can be seen as a cross-encoder reranker. This technique thus performs the distillation of a cross-encoder retriever to a bi-encoder retriever. In contrast, our method uses the internal attention scores of the reader, which does not require additional supervision besides pairs of question and answer. 3 METHODOLOGY Our system is composed of two modules, the retriever and the reader, following the standard pipeline for open-domain question answering. Given an input question these modules are used in a two-step process to generate an answer. First the retriever selects support passages in a large knowledge source. Then these passages are processed by the reader, along with the question, to generate an answer. For the reader module we use the Fusion-in-Decoder model (Izacard & Grave, 2020), which achieves state-of-the-art performance when combined with BM25 or DPR (Karpukhin et al., 2020). It is based on a sequence-to-sequence architecture, and is initialized from pre-trained models such as T5 or BART (Raffel et al., 2019; Lewis et al., 2019). 
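As a rough illustration of the retrieve-then-read pipeline described above, the sketch below indexes placeholder passage embeddings with FAISS and hands the retrieved passages, together with the question, to a reader callable. The random embeddings and the `reader` function are assumptions standing in for the actual DPR/BERT encoders and the Fusion-in-Decoder model; this is not the authors' released code.

```python
import numpy as np
import faiss  # nearest-neighbour search library (Johnson et al., 2019)

d = 768                                                     # embedding size (BERT base)
passage_vecs = np.random.rand(10000, d).astype("float32")   # placeholder passage embeddings
index = faiss.IndexFlatIP(d)                                # exact inner-product index, built offline
index.add(passage_vecs)

def retrieve_then_read(question, question_vec, reader, passages, k=100):
    """Two-step pipeline: nearest-neighbour retrieval of support passages,
    then the reader generates an answer from the question and passages."""
    q = question_vec.reshape(1, -1).astype("float32")
    _, ids = index.search(q, k)                  # top-k passage indices
    support = [passages[i] for i in ids[0]]
    return reader(question, support)             # e.g. a Fusion-in-Decoder reader
```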
The focus of this work is to train the retriever without strong supervision or weakly supervised learning based on heuristics. For this we propose to train the retriever by learning to approximate the attention score of the reader. The training scheme outlined here can be seen as a student-teacher pipeline, where the teacher, the reader module, produces targets which are used to train a student network, the reader. By doing so, we hope to leverage the signal extracted from the question-answer pairs by the reader. Since the goal of the retriever is to retrieve the most relevant passages, by training the retriever to estimate the reader attention scores, we implicitly make the assumption that these scores are a good proxy for the usefulness of a passage to answer the question. In this section we will first describe the Fusion-in-Decoder architecture, before elaborating on the signal which is used to train the retriever, the design of the retriever, and how this module is trained. 3.1 CROSS-ATTENTION MECHANISM First, let us briefly review the Fusion-in-Decoder model (FiD, Izacard & Grave, 2020). The under- lying architecture is a sequence-to-sequence model, composed of an encoder and a decoder. The encoder independently processes np different text inputs (sk)1≤k≤np . In the case of open-domain question answering based on Wikipedia, each input sk is the concatenation of the question q and a support passage, with special tokens question:, title: and context: added before the question, the title of the Wikipedia article and the text of each passage. The output representations of the encoder are then concatenated to form a global representation X of dimension (Pk ℓk) d, where ℓk is the length of the k-th segment and d is the dimension of the embeddings and hidden rep- resentations of the model. Then, the decoder processes this representation as a regular autoregressive model, alternating self-attention, cross-attention and feed-forward modules. × Only the cross-attention module explicitly takes as input the global output representation X of the Rd denotes the output of the previous self-attention layer of the decoder, the cross- encoder. If H attention operation consists in the following operations. First, queries Q, keys K and values V are computed by applying linear transformations: ∈ Q = WQH, K = WKX, V = WV X. 3 Published as a conference paper at ICLR 2021 Then a similarity score between the query at position i, Qi, and the key at position j, Kj, is obtained by computing the dot-product between these two elements, and normalized over the dimension: αi,j = QT i Kj, ˜αi,j = exp(αi,j ) Pm exp(αi,m) . A new representation is obtained as a sum of the values, weighted by the attention probabilities, before going through a final linear transformation Wo: Oi = WO X j ˜αi,jVi,j The operations described above are performed in parallel with different linear transformations in the case of multi-head attention. Finally a normalization layer is applied, and this pipeline is wrapped by a skip connection. See Vaswani et al. (2017) for more details on the structure of Transformers. 3.2 CROSS-ATTENTION SCORE AS A RELEVANCE MEASURE FOR PASSAGE RETRIEVAL In some sense, the attention scores α:,j involving the j-th key measures the importance of this key, and corresponding value, to compute the next representation. We hypothesize that it is good proxy to estimate the relevance of a passage — the more the tokens in a text segment are attended to, the more relevant the text segment is to answer the question. 
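The single-head cross-attention operation written out above can be sketched in a few lines of NumPy. The sketch also returns the normalized attention matrix, which is the quantity Sec. 3.2 aggregates into passage relevance scores; shapes and random weights are illustrative only.

```python
import numpy as np

def cross_attention(H, X, W_Q, W_K, W_V, W_O):
    """Single-head cross-attention.

    H: decoder states (n_out, d); X: concatenated encoder outputs (n_in, d).
    Returns the new decoder representation and the normalized attention
    matrix alpha_tilde, later aggregated into passage relevance scores.
    """
    Q, K, V = H @ W_Q.T, X @ W_K.T, X @ W_V.T
    alpha = Q @ K.T                                    # alpha[i, j] = Q_i . K_j
    alpha = alpha - alpha.max(axis=1, keepdims=True)   # numerical stability
    alpha_tilde = np.exp(alpha) / np.exp(alpha).sum(axis=1, keepdims=True)
    O = (alpha_tilde @ V) @ W_O.T
    return O, alpha_tilde

d, n_out, n_in = 8, 3, 10
rng = np.random.default_rng(0)
H, X = rng.standard_normal((n_out, d)), rng.standard_normal((n_in, d))
W_Q, W_K, W_V, W_O = (rng.standard_normal((d, d)) for _ in range(4))
O, alpha_tilde = cross_attention(H, X, W_Q, W_K, W_V, W_O)
```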
Given the reader model, an input question q and a corresponding set of support passages q = (pk)1≤k≤n, we obtain relevance scores (Gq,pk )1≤k≤n for each passage by aggregating D attention scores. In particular, the score Gq,pk is obtained by averaging the pre-attention scores α0,: over all the tokens in the input sk corresponding to the passage pk, all the layers and all the heads of the decoder. Note that the FiD decoder jointly processes the passages, and thus the score Gq,pk de- pends on the other support passages. We consider other pooling operators, such as max, to aggregate attention scores over layers, heads and tokens and empirically compare them in Sec. 5.2. Before we proceed, let us consider the following simple experiment, which is a first indication that reader attention scores are indeed a strong relevance signal. Given a question and 100 passages retrieved with DPR, our goal is to select the 10 best passages. When using the top 10 passages from DPR instead of the top 100, the performance of our reader drops from 48.2 EM to 42.9 EM. On the other hand, if we select the top 10 documents according to the attention scores, the performance only drops to 46.8 EM. 3.3 DENSE BI-ENCODER FOR PASSAGE RETRIEVAL Ideally, we would like to rank passages according to the reader cross-attention scores. In practice however, since the passages and the question need to be processed simultaneously by the reader module it is impractical to query a large knowledge source this way. Thus, we use a retriever model composed of an embedder function E that maps any text passage to a d-dimensional vector, such that the similarity score between a question q and a passage p is defined as Sθ(q, p) = E(q)T E(p)/√d. This similarity metric enables us to index all passages in the knowledge source as a preprocessing step. Then at runtime, passages with the highest similarity score with the input question are retrieved, by using an efficient similarity search library such as FAISS (Johnson et al., 2019). For the embedder we use BERT and follow DPR by considering that the encodings E(q) and E(p) are obtained by extracting the representation of the initial [CLS] token. This leads to a represen- tation of dimension d = 768 in the case of a base model. Differently from DPR, we use the same encoding function E for the questions and passages by sharing parameters. 3.4 DISTILLING THE CROSS-ATTENTION SCORE TO A BI-ENCODER In this section, we describe how to train the retriever model, based on the relevance scores obtained in Sec. 3.2. For the training objective of the retriever, we propose to minimize the KL-divergence between the output Sθ(q, p) and the score Gq,p after normalization: KL(θ, L Q ) = X q∈Q,p∈Dq ˜Gq,p(log ˜Gq,p log ˜Sθ(q, p)), − 4 Published as a conference paper at ICLR 2021 where ˜Gq,p = exp(Gq,p) Pp′∈Dq exp(Gq,p′ ) , ˜Sθ(q, p) = exp(Sθ(q, p)) Pp′∈Dq exp(Sθ(q, p′)) . In Sec. 5.1 we present results obtained when using alternatives to this training objective. We con- sider two other objectives which have been used in Dehghani et al. (2017), where BM25 is used as a teacher model to train a neural ranker. A first option consists in training the retriever with a regression approach by minimizing the mean squared error: MSE(θ, L Q ) = X (Sθ(q, p) q∈Q,p∈Dq Gq,p)2. − The second option we consider is to use a max-margin loss that explicitly penalizes inversions in the ranking estimated by the retriever: ranking(θ, L Q ) = X max (0, γ q∈Q,p1,p2∈Dq sign(Gq,p1 − − Gq,p2 )(Sθ(q, p1) Sθ(q, p2))) . 
− In words, if p1 is more relevant to answer the question q than p2 , i.e. Gq,p1 > Gq,p2 , the loss pushes the retriever score of p1 to be larger than the score of p2 by at least a margin of γ. 3.5 ITERATIVE TRAINING In this section, we explain how iterative training can be used with the student-teacher scheme de- scribed in the previous section, similarly to Khattab et al. (2020). This iterative procedure can be interpreted as using the current retriever to sample negative examples, in order to train a new re- triever. When learning a retriever with discriminative training, negative samples play an important role, and various strategies have been considered in previous work. Karpukhin et al. (2020) com- pared random sampling with using the top-k passages from BM25 which do not contain the answer and with using the positive passages from other queries. Consider that for each question, we have 0 q . We propose to use an iterative pipeline where each iteration an initial set of support documents can be described as the following 4-step process: D 1. Train the reader R using the set of support documents for each question 2. Compute aggregated attention scores (Gq,p)q∈Q,p∈D0 q with the reader R. 0 q . D 3. Train the retriever E using the scores (Gq,p)q∈Q,p∈D0 4. Retrieve top-passages with the new trained retriever E. . q This multi-step procedure can be repeated multiple times. A critical point of the training procedure is the initial set of documents corresponding to each question. In Sec. 4, we compare retrievers obtained by starting from documents obtained using BM25 or cosine similarity from a BERT model. In particular, we show that while the initial performance with BERT is low, the iterative procedure allows to greatly improve the performance of the model. 4 EXPERIMENTS In this section we evaluate the student-teacher training procedure from the previous section. We show that we obtain competitive performance without strong supervision for support documents. 4.1 EXPERIMENTAL SETTING Datasets. We perform experiments on TriviaQA (Joshi et al., 2017) and NaturalQues- tions (Kwiatkowski et al., 2019), two standard benchmarks for open-domain question answering. TriviaQA is made of questions from trivia and quiz league websites, and does not contain gold sup- port documents. NaturalQuestions contains questions corresponding to web search queries, and gold support documents from Wikipedia. Following the setting from Lee et al. (2019); Karpukhin et al. 5 Published as a conference paper at ICLR 2021 NaturalQuestions TriviaQA Iter. BERT R@20 R@100 Dev EM BM25 R@20 R@100 Dev EM 0 1 2 3 0 1 2 3 4 4.8 32.2 51.1 67.8 4.6 37.1 60.8 72.0 76.4 12.0 45.8 62.6 76.8 12.0 59.4 73.4 83.2 84.6 9.8 16.9 28.6 39.3 9.7 19.6 43.3 52.0 62.3 59.3 76.4 80.4 80.0 75.0 79.0 82.1 81.6 - 74.0 84.3 86.7 86.3 82.3 85.5 86.5 86.6 - 41.2 46.8 47.9 46.2 65.3 66.7 67.5 67.7 - Table 1: Iterative training starting with documents retrieved with BERT and BM25. Iteration 0 corresponds to the performance of the reader trained on the set of initial support documents. We report all metrics on the validation set. (2020), we use the original evaluation set as test set, and keep 10% of the training data for valida- tion. We use the Wikipedia dump from Dec. 20, 2018 for support documents, splitting articles into non-overlapping passages of 100 tokens, and applying the same preprocessing as Chen et al. (2017). 
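For concreteness, the three training objectives of Sec. 3.4 that are compared in these experiments can be written compactly for one question's candidate passages, as in the NumPy sketch below (illustrative only, up to normalization constants; not the authors' implementation).

```python
import numpy as np

def softmax(x):
    x = x - x.max()
    return np.exp(x) / np.exp(x).sum()

def kl_distillation_loss(S, G):
    """KL divergence between normalized reader scores G (teacher) and
    retriever scores S (student) for one question's passages."""
    G_t, S_t = softmax(G), softmax(S)
    return float(np.sum(G_t * (np.log(G_t) - np.log(S_t))))

def mse_loss(S, G):
    """Regression alternative: match the raw aggregated attention scores."""
    return float(np.mean((S - G) ** 2))

def max_margin_loss(S, G, gamma=0.2):
    """Penalize pairwise ranking inversions between retriever and reader."""
    loss, n = 0.0, 0
    for i in range(len(S)):
        for j in range(len(S)):
            if i != j:
                loss += max(0.0, gamma - np.sign(G[i] - G[j]) * (S[i] - S[j]))
                n += 1
    return loss / n

G = np.array([2.0, 0.5, -1.0])   # aggregated cross-attention scores (teacher)
S = np.array([1.2, 1.0, -0.5])   # retriever dot-product scores (student)
print(kl_distillation_loss(S, G), mse_loss(S, G), max_margin_loss(S, G))
```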
We also evaluate on NarrativeQuestions (Koˇcisk`y et al., 2018), using a publicly available prepro- cessed version.1 This is a reading comprehension dataset built on a corpus of books and movie scripts. For each story, questions are generated by human annotators based on a summary of the given document. We consider the full story setting, where the task is to answer questions given the entire story and not the summary used to generate question-answer pairs. Here the knowledge source is not the same for all questions: given a question the retrieval operation is performed on all passages of the associated story. These passages are obtained by dividing the story in chunks of 100 words. These stories are long documents, with an average of 60k words. While part of the documents could be processed entirely by the Fusion-in-Decoder module, it is interesting to limit the number of support passages to reduce the computational cost of the reading step. While answers in TriviaQA and NaturalQuestions are short, NarrativeQA answers are about five words long on average, with medium length answers such as ”He dismantles it and attaches it to his mother’s jeep” which answers the question ”What does Mark do with his radio station?”. Notably a significant number of answers do not correspond to spans in the story. It is thus not straightforward to train the retriever with heuristics using question-answer pairs. In our case we use the same pipeline as for TriviaQA and NaturalQuestions, demonstrating the flexibility of our approach. Evaluation. The model performance is assessed in two ways. First, following previous work such as DPR and ColbertQA, we report the top-k retrieval accuracy (R@k), which is the percentage of questions for which at least one passage of the top-k retrieved passages contains the gold answer. It is unclear how well this metric evaluates the retriever performance, since the answer can be contained in a passage without being related to the question. This is notably true for common words or entities. We also report the final end-to-end performance of the question answering system composed of the retriever and reader modules. This is the metric we are fundamentally interested in. For TriviaQA and NaturalQuestions, predicted answers are evaluated with the standard exact match metric (EM), as introduced by Rajpurkar et al. (2016). For NarrativeQA we report the metrics proposed in the original paper: ROUGE-L, BLEU-1, BLEU-4 and METEOR. 4.2 TECHNICAL DETAILS Initialization. Similarly to DPR, we initialize the retriever with the BERT base model, pretrained with uncased text. The Fusion-in-Decoder reader is initialized with the T5 base model. A critical 0 component of the iterative training procedure is the initialization of the support passages q asso- ciated with each question q. For this we consider different options. The first one is to use passages D 1https://cs.nyu.edu/˜kcho/NarrativeQA 6 Published as a conference paper at ICLR 2021 Model DPR (Karpukhin et al., 2020) RAG (Lewis et al., 2020b) ColBERT-QA (Khattab et al., 2020) Fusion-in-Decoder (T5 base) (Izacard & Grave, 2020) Fusion-in-Decoder (T5 large) (Izacard & Grave, 2020) Ours (starting from BERT, T5 base) Ours (starting from BM25, T5 base) Ours (starting from DPR, T5 base) Ours (starting from DPR, T5 large) NQ dev. test TriviaQA test dev. 
- - - - - 39.3 47.9 49.2 52.7 41.5 44.5 48.2 48.2 51.4 40.0 48.9 50.1 54.4 - - - - - 62.5 67.7 68.7 72.5 57.9 56.1 63.2 65.0 67.6 62.7 67.7 69.3 72.5 Table 2: Comparison to state-of-the-art models on NaturalQuestions and TriviaQA. retrieved using BM25. We use the implementation from Apache Lucene2 with default parameters, and tokenize questions and passages with SpaCy3. We also use passages obtained with BERT as a retriever without fine-tuning, this leads to poor initial performance. Finally in Table 2 we show that 0 initializing q with passages obtained with DPR (Karpukhin et al., 2020) outperforms the two pre- vious initializations. We train all retrievers using 100 passages. For the reader, we use 100 passages for NaturalQuestions and TriviaQA and 20 passages for NarrativeQA. D Iterative training. We apply the iterative training procedure on each dataset independently. Both the reader and the retriever are fine-tuned using the ADAM algorithm (Kingma & Ba, 2014), with a batch of size 64. The reader is trained for 10k gradient steps with a constant learning rate of 10−4, and the best model is selected based on the validation performance. The retriever is trained with a 10−5 until the performance saturates. To monitor the performance of the constant learning rate of 5 retriever during training, we measure the similarity between the reader and the retriever rankings. At each new training iteration the reader is reinitialized from T5 base, while we pursue the training of the retriever. We found that restarting from T5 base is important for the first iterations when starting with BERT documents. We have not tried to reinitialize the retriever between each iteration. More details on the hyperparameters and the training procedure are reported in Appendix A.2. · 4.3 RESULTS In Table 1, we report the performance of our approach for different number of self-training iterations. Generally, we observe that the accuracy of our system increases with the number of iterations, obtaining strong performance after a few iterations. Interestingly, while the initial performance with documents retrieved with BERT is very poor, our method still reach competitive scores on TriviaQA, and to a lesser extent, NaturalQuestions. However, a second observation is that the quality of the initial document sets plays an important role on the performance of the end system. Indeed, we observe that starting the procedure from BM25 documents, which are higher quality as indicated by the performance of the system at iteration 0, leads to stronger results than using BERT documents. An interesting research question would be to explore pre-training of the initial BERT model for retrieval, for example by using the inverse cloze task. In Table 2, we report the performance of our approach, as well as existing state-of-the-art systems on TriviaQA and NaturalQuestions. In addition to initializing our method with documents retrieved with BM25 and BERT, we also train a system by starting from DPR documents. First, we observe that our method improve the performance over the state-of-the-art, even when starting from BM25 documents. This validates our assumption that it is possible to obtain strong retrievers without the need of supervision for the documents. Second, when starting from DPR passages, our method leads to a +4.5 EM improvement on TriviaQA and +2.3 EM improvement on NaturalQuestions when the final evaluation is carried out with a large reader. 
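The iterative procedure evaluated above (Sec. 3.5, with the training details of Sec. 4.2) can be summarised by the following sketch. Every function name is a placeholder for the corresponding training or retrieval routine; in particular, `init_reader` reflects the choice to re-initialize the reader from T5 base at each iteration while the retriever keeps training.

```python
def iterative_training(questions, initial_passages, n_iters, retriever,
                       init_reader, train_reader, attention_scores,
                       train_retriever, retrieve_top_k):
    """Sketch of the 4-step loop: train reader -> aggregate attention scores
    -> train retriever on those scores -> retrieve a new passage set."""
    passages = initial_passages                                   # D_q^0: BM25, BERT or DPR
    for _ in range(n_iters):
        reader = train_reader(init_reader(), questions, passages)  # step 1
        targets = {q: attention_scores(reader, q, passages[q])     # step 2
                   for q in questions}
        retriever = train_retriever(retriever, targets)            # step 3
        passages = {q: retrieve_top_k(retriever, q, k=100)         # step 4
                    for q in questions}
    return reader, retriever
```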
In Table 3 we report retrieval results on the test set depending on the initial passages and compare to the state-of-the-art. 2lucene.apache.org 3spacy.io 7 Published as a conference paper at ICLR 2021 NaturalQuestions R@20 R@100 TriviaQA R@20 R@100 DPR (Karpukhin et al., 2020) ANCE (Xiong et al.) Starting from BERT Starting from BM25 Starting from DPR 79.4 82.1 68.8 80.0 84.3 86.0 87.9 79.5 87.7 89.3 79.4 80.3 78.4 81.4 83.6 85.0 85.3 84.4 86.5 87.7 Table 3: Retrieval performance on the test sets depending on the initial passages and comparison to the state-of-the-art. Method Best from Koˇcisk`y et al. (2018) DPR + FiD Ours starting from BM25 Ours starting from BM25 Iter. Rouge-L test dev. - - 0 1 14.5 29.7 29.9 31.6 14.0 30.8 30.3 32.0 Bleu-1 Bleu-4 Meteor dev. 20.0 33.0 34.6 34.9 test 19.1 34.0 33.7 35.3 dev. 2.23 6.7 7.1 7.6 test 2.1 6.9 6.5 7.5 dev. 4.6 10.3 10.5 11.0 test 4.4 10.8 10.4 11.1 Table 4: Performance on NarrativeQA. In Table 4, we report the performance of our method on the NarrativeQA dataset. We use the setting where the knowledge source corresponds to the whole document, and in particular, we do not use the summary. We compare our results to the best ones reported in the original paper for this setting. Similar to results obtained on NaturalQuestions and TriviaQA, we observe that training the retriever by using the attention scores of the reader leads to improvements, compared to the BM25 baseline. 5 ABLATIONS In this section, we investigate design choices regarding two key elements of our approach: the training objective and the aggregation of cross-attention scores. For all experiments, we consider a simplified experimental setting: a single training iteration is performed on NaturalQuestions, starting from BM25 passages. 5.1 TRAINING OBJECTIVES In Table 5 we report the performance of our model trained with the different training objectives described in Sec. 3.3. We observe that using the KL-divergence between the aggregated scores of the reader and the scores of the retriever outperforms the other objective functions. Method R@5 R@20 R@100 Dev EM Mean Squared Error Max-margin loss, γ = 1 Max-margin loss, γ = 0.2 Max-margin loss, γ = 0.1 KL-divergence 46.5 60.3 60.3 60.2 64.7 61.2 73.6 73.5 73.5 76.4 73.9 82.7 82.6 82.6 84.3 40.6 45.4 45.8 45.1 46.8 Table 5: Comparison of training objectives on NaturalQuestions after one iteration. We report all the metrics on the validation set. 5.2 HOW TO AGGREGATE CROSS-ATTENTION SCORES? In Section 4 the cross-attention scores α are aggregated in a specific way, in order to obtain a single scalar used to train the retriever. Formally let us denote by αi,j,k,h the cross-attention scores between 8 Published as a conference paper at ICLR 2021 token i of the output and token j of the input, for the k-th layer and h-th head. Then, the scores Gq,p for p q used in Section 4 are computed as follows: ∈ D Gq,p = mean j,k,h α0,j,k,h, where j describes the input tokens corresponding to p. In Table 6 we explore alternatives to this choice by considering different aggregation schemes. In particular, we consider (1) taking the max over the input tokens corresponding to passage p instead of the average, (2) taking the average over the output tokens instead of taking the score of the first token, (3) taking the mean over the last six layers instead of all the layers, (4) taking the max over the layers instead of the average, (5) taking the max over the heads instead of the average. 
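These aggregation schemes operate directly on the raw attention tensor. The following NumPy sketch, with toy dimensions, implements the default scheme (0) and two of the alternatives; it is illustrative rather than the exact implementation.

```python
import numpy as np

def aggregate_scores(alpha, passage_token_ids, scheme=0):
    """Aggregate cross-attention scores alpha[i, j, k, h] (output token i,
    input token j, layer k, head h) into one relevance score for the
    passage whose input tokens are passage_token_ids."""
    a = alpha[:, passage_token_ids, :, :]        # restrict to this passage
    if scheme == 0:                              # mean over j, k, h of alpha[0, j, k, h]
        return a[0].mean()
    if scheme == 1:                              # mean over k, h of max over j
        return a[0].max(axis=0).mean()
    if scheme == 2:                              # mean over all output tokens as well
        return a.mean()
    raise ValueError("unknown scheme")

rng = np.random.default_rng(0)
alpha = rng.random((5, 40, 12, 12))              # toy tensor: 5 output tokens,
passage_token_ids = np.arange(10, 20)            # 40 input tokens, 12 layers, 12 heads
print(aggregate_scores(alpha, passage_token_ids, scheme=0))
```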
We observe that the performance of our approach is relatively stable to the choice of aggregation, and that the best result is obtained by averaging, except over the output tokens where it is best to only consider the first token. Method R@5 R@20 R@100 Dev EM (0) meanj,k,h α0,j,k,h (1) meank,h maxj α0,j,k,h (2) meani,j,k,h αi,j,k,h (3) mean7≤k≤12,j,h α0,j,k,h (4) meanj,h maxk α0,j,k,h (5) meanj,k maxh α0,j,k,h 64.7 61.2 63.5 64.1 63.9 64.2 76.4 72.5 75.3 75.7 75.5 76.1 84.3 81.0 83.1 83.8 83.7 83.9 46.8 46.0 45.8 46.4 46.5 46.8 Table 6: Comparison of attention aggregation schemes on NaturalQuestions after one iteration. The index i corresponds to output tokens, j corresponds to input tokens of a given passage, h to heads and k to layers of the decoder. We report all metrics on the validation set. 6 CONCLUSION In this paper, we introduce a method to train an information retrieval module for downstream tasks, without using pairs of queries and documents as annotations. Our approach is inspired by knowl- edge distillation, where the retriever module corresponds to the student model and the reader module corresponds to the teacher model. In particular, we use the cross-attention scores, from a sequence- to-sequence reader, to obtain synthetic targets for the retriever. We compare different ways to aggre- gate the scores, as well as different training objectives to learn the retriever. We show that iteratively training the reader and the retriever leads to better performance, and obtain state-of-the-art perfor- mance on competitive question answering benchmarks. In the future, we would like to explore better pre-training strategies for the retriever module, as well as better scoring functions for the retriever. REFERENCES Wei-Cheng Chang, Felix X Yu, Yin-Wen Chang, Yiming Yang, and Sanjiv Kumar. Pre-training tasks for embedding-based large-scale retrieval. arXiv preprint arXiv:2002.03932, 2020. 3 Danqi Chen, Adam Fisch, Jason Weston, and Antoine Bordes. Reading Wikipedia to answer open- domain questions. In Proc. ACL, 2017. 1, 6 Scott Deerwester, Susan T Dumais, George W Furnas, Thomas K Landauer, and Richard Harshman. Indexing by latent semantic analysis. Journal of the American society for information science, 41 (6):391–407, 1990. 2 Mostafa Dehghani, Hamed Zamani, Aliaksei Severyn, Jaap Kamps, and W. Bruce Croft. Neural In Proceedings of the 40th International ACM SIGIR ranking models with weak supervision. Conference on Research and Development in Information Retrieval, SIGIR ’17, pp. 65–74, New York, NY, USA, 2017. Association for Computing Machinery. doi: 10.1145/3077136.3080832. URL https://doi.org/10.1145/3077136.3080832. 5 Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proc. NAACL, 2019. 1 9 Published as a conference paper at ICLR 2021 Daniel Gillick, Alessandro Presta, and Gaurav Singh Tomar. End-to-end retrieval in continuous space. arXiv preprint arXiv:1811.08008, 2018. 2 Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat, and Ming-Wei Chang. Realm: Retrieval- augmented language model pre-training. arXiv preprint arXiv:2002.08909, 2020. 3 Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. Distilling the knowledge in a neural network, 2015. 1 Po-Sen Huang, Xiaodong He, Jianfeng Gao, Li Deng, Alex Acero, and Larry Heck. Learning deep structured semantic models for web search using clickthrough data. 
In Proceedings of the 22nd ACM international conference on Information & Knowledge Management, pp. 2333–2338, 2013. 2 Samuel Humeau, Kurt Shuster, Marie-Anne Lachaux, and Jason Weston. Poly-encoders: Trans- former architectures and pre-training strategies for fast and accurate multi-sentence scoring. arXiv preprint arXiv:1905.01969, 2019. 2 Gautier Izacard and Edouard Grave. Leveraging passage retrieval with generative models for open domain question answering. arXiv preprint arXiv:2007.01282, 2020. 3, 7 Jeff Johnson, Matthijs Douze, and Herv´e J´egou. Billion-scale similarity search with gpus. IEEE Transactions on Big Data, 2019. 1, 4 Karen Sparck Jones. A statistical interpretation of term specificity and its application in retrieval. Journal of documentation, 1972. 1, 2 Mandar Joshi, Eunsol Choi, Daniel S. Weld, and Luke Zettlemoyer. Triviaqa: A large scale distantly supervised challenge dataset for reading comprehension. In Proc. ACL, 2017. 5 Vladimir Karpukhin, Barlas O˘guz, Sewon Min, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. Dense passage retrieval for open-domain question answering. arXiv preprint arXiv:2004.04906, 2020. 1, 2, 3, 5, 7, 8, 12 Omar Khattab, Christopher Potts, and Matei Zaharia. Relevance-guided supervision for openqa with colbert, 2020. 1, 2, 5, 7 Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014. 7, 12 Tom´aˇs Koˇcisk`y, Jonathan Schwarz, Phil Blunsom, Chris Dyer, Karl Moritz Hermann, G´abor Melis, and Edward Grefenstette. The NarrativeQA reading comprehension challenge. TACL, 2018. 6, 8 Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur Parikh, Chris Al- berti, Danielle Epstein, Illia Polosukhin, Matthew Kelcey, Jacob Devlin, Kenton Lee, Kristina N. Toutanova, Llion Jones, Ming-Wei Chang, Andrew Dai, Jakob Uszkoreit, Quoc Le, and Slav Petrov. Natural Questions: a benchmark for question answering research. TACL, 2019. 5 Kenton Lee, Ming-Wei Chang, and Kristina Toutanova. Latent retrieval for weakly supervised open domain question answering. In Proc. ACL, 2019. 2, 5, 12 Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov, and Luke Zettlemoyer. BART: Denoising sequence-to-sequence pre- arXiv preprint training for natural language generation, arXiv:1910.13461, 2019. 3 translation, and comprehension. Mike Lewis, Marjan Ghazvininejad, Gargi Ghosh, Armen Aghajanyan, Sida Wang, and Luke Zettle- moyer. Pre-training via paraphrasing. arXiv preprint arXiv:2006.15020, 2020a. 3 Patrick Lewis, Ethan Perez, Aleksandara Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich K¨uttler, Mike Lewis, Wen-tau Yih, Tim Rockt¨aschel, et al. Retrieval-augmented gener- ation for knowledge-intensive nlp tasks. arXiv preprint arXiv:2005.11401, 2020b. 3, 7 Ilya Loshchilov and Frank Hutter. Decoupled weight decay regularization. In International Confer- ence on Learning Representations, 2019. 12 10 Published as a conference paper at ICLR 2021 Yi Luan, Jacob Eisenstein, Kristina Toutanova, and Michael Collins. Sparse, dense, and attentional representations for text retrieval. arXiv preprint arXiv:2005.00181, 2020. 2 Christopher D Manning, Hinrich Sch¨utze, and Prabhakar Raghavan. Introduction to information retrieval. Cambridge university press, 2008. 2 Bhaskar Mitra, Nick Craswell, et al. An introduction to neural information retrieval. Foundations and Trends® in Information Retrieval, 13(1):1–126, 2018. 
2 Rodrigo Nogueira and Kyunghyun Cho. Passage re-ranking with bert. arXiv preprint arXiv:1901.04085, 2019. 2 Hamid Palangi, Li Deng, Yelong Shen, Jianfeng Gao, Xiaodong He, Jianshu Chen, Xinying Song, and Rabab Ward. Deep sentence embedding using long short-term memory networks: Analysis and application to information retrieval. IEEE/ACM Transactions on Audio, Speech, and Lan- guage Processing, 24(4):694–707, 2016. 2 Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. Exploring the limits of transfer learning with a unified text-to-text transformer. arXiv preprint arXiv:1910.10683, 2019. 3 Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. SQuAD: 100,000+ questions for machine comprehension of text. In Proc. EMNLP, 2016. 6 Stephen E Robertson, Steve Walker, Susan Jones, Micheline M Hancock-Beaulieu, Mike Gatford, et al. Okapi at TREC-3. NIST Special Publication Sp, 1995. 1, 2 Yelong Shen, Xiaodong He, Jianfeng Gao, Li Deng, and Gr´egoire Mesnil. Learning semantic rep- In Proceedings of the 23rd resentations using convolutional neural networks for web search. international conference on world wide web, pp. 373–374, 2014. 2 James Thorne, Andreas Vlachos, Christos Christodoulopoulos, and Arpit Mittal. Fever: a large-scale dataset for fact extraction and verification. arXiv preprint arXiv:1803.05355, 2018. 1 Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Ł ukasz Kaiser, and Illia Polosukhin. Attention is all you need. In Advances in Neural Infor- mation Processing Systems 30, pp. 5998–6008. 2017. 4 Ellen M Voorhees et al. The TREC-8 question answering track report. In TREC, 1999. 1 Lee Xiong, Chenyan Xiong, Ye Li, Kwok-Fung Tang, Jialin Liu, Paul Bennett, Junaid Ahmed, and Arnold Overwijk. Approximate nearest neighbor negative contrastive learning for dense text retrieval. URL https://arxiv.org/abs/2007.00808. 8 Sohee Yang and Minjoon Seo. arXiv:2010.10999, 2020. 3 Is retriever merely an approximator of reader? arXiv preprint Wei Yang, Yuqing Xie, Aileen Lin, Xingyu Li, Luchen Tan, Kun Xiong, Ming Li, and Jimmy Lin. In Proc. NAACL (Demonstra- End-to-end open-domain question answering with BERTserini. tions), 2019. 2 11 Published as a conference paper at ICLR 2021 Hyperparameter Reader-base Reader-large Retriever Number of parameters Number of heads Number of layers Hidden size Batch size Dropout Learning rate schedule Peak learning rate Gradient clipping 220M 12 24 768 64 0.1 constant 0.0001 1. 770M 16 48 1024 64 0.1 linear 0.00005 1. 110M 12 12 768 64 0.1 constant 0.00005 1. Table 7: Hyperparameters for retriever and reader training. A EXPERIMENTAL DETAILS A.1 SETTING For NaturalQuestions and TriviaQA we follow the standard open-domain question answering set- ting used in Lee et al. (2019); Karpukhin et al. (2020). In this setting the original development set is used as test set, and 10% of the training set is used for development purpose. Moreover, for NaturalQuestions, all questions with answers longer than five tokens are discarded. For TriviaQA we use the unique human-generated answer to train the reader. In this dataset part of the answers are in uppercase. We normalize uppercase answers by converting the first letter in each word to uppercase and remaining characters to lowercase using the title Python string method. For NarrativeQA, questions and answers in uppercase are converted to lowercase. 
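The answer normalization described in A.1 amounts to a one-line use of Python string methods. The sketch below reflects my reading that only fully uppercase TriviaQA answers are converted; the exact condition in the original code may differ.

```python
def normalize_answer(answer, dataset):
    """Answer normalization described in Appendix A.1 (illustrative)."""
    if dataset == "triviaqa" and answer.isupper():
        return answer.title()   # first letter of each word uppercase, rest lowercase
    if dataset == "narrativeqa":
        return answer.lower()
    return answer

print(normalize_answer("THE EIFFEL TOWER", "triviaqa"))   # "The Eiffel Tower"
```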
A.2 TRAINING For every datasets, both the reader and the retriever are fine-tuned with a dropout rate of 10%. All models at the exception of the large reader are trained using the ADAM algorithm (Kingma & Ba, 2014) with a constant learning rate of 10−4 for the base reader and 5 10−5 for the retriever. The base reader is trained for 10k gradient steps with a batch size of 64. We train the large reader with 10−5 and a the ADAMW algorithm (Loshchilov & Hutter, 2019) with a peak learning rate of 5 linear warmup for 600 gradient steps followed by a linear decrease of the learning rate for 14.4k gradient steps. · · We perform model selection on the validation performance. The retriever is trained until its perfor- mance saturates with a batch size of 64. To monitor the performance of the retriever during training, we measure the similarity between the ranking obtained with the reader score, and the ranking of the retriever. We use different metrics for this: the number of inversions between the two rankings, the proportion of passages in the retriever top-k that are also in the reader top-k and the number of passages to obtain all top-k passage of the reader. During training and at test time, each text input of the encoder is restricted to be at most 250 token long. For NaturalQuestions and TriviaQA, we use wikipedia as a knowledge source, thus for each passage there is an associated article title. Each input is composed of the concatenation of a question, title and support passage with special tokens question:, title: and context: added before the question, the title and the text of each passage. In the case of NarrativeQA, the question and each passage are concatenated to form the different inputs. A.3 INFERENCE At test time, for TriviaQA and NaturalQuestions we use greedy decoding, and Beam Search with 3 beams for NarrativeQA. 12 Published as a conference paper at ICLR 2021 Iter. 0 1 2 NaturalQuestions R@20 R@100 Dev EM TriviaQA R@20 R@100 Dev EM 77.1 80.3 82.4 84.3 86.7 87.9 46.4 47.8 48.2 78.2 81.4 83.5 84.7 86.4 87.4 65.0 67.1 68.1 Table 8: Iterative training starting with documents retrieved with DPR. Iteration 0 corresponds to the performance of the reader trained on the set of initial support documents. We report all metrics on the validation set. Contrary to results reported in Table 1, the reader model was not re-initialized between each iteration. 13
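Two of the ranking-agreement metrics used above to monitor the retriever during training can be computed as in the sketch below, here on random scores; the exact implementation used by the authors may differ.

```python
import numpy as np

def topk_overlap(reader_scores, retriever_scores, k=10):
    """Fraction of the retriever's top-k passages that also appear in the
    reader's top-k (one of the monitoring metrics of Appendix A.2)."""
    reader_top = set(np.argsort(-reader_scores)[:k])
    retriever_top = set(np.argsort(-retriever_scores)[:k])
    return len(reader_top & retriever_top) / k

def n_inversions(reader_scores, retriever_scores):
    """Number of passage pairs ranked in opposite order by the two models."""
    n = len(reader_scores)
    return sum(
        (reader_scores[i] - reader_scores[j]) * (retriever_scores[i] - retriever_scores[j]) < 0
        for i in range(n) for j in range(i + 1, n)
    )

rng = np.random.default_rng(0)
reader, retriever = rng.random(100), rng.random(100)
print(topk_overlap(reader, retriever, k=10), n_inversions(reader, retriever))
```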
synthetic_cpt
2
SayPlan_Grounding_Large_Language_Models_using_3D_Scene_Graphs_for_Scalable_Task_Planning.pdf
SayPlan: Grounding Large Language Models using 3D Scene Graphs for Scalable Robot Task Planning Krishan Rana†1, Jesse Haviland∗1,2, Sourav Garg∗3, Jad Abou-Chakra∗1, Ian Reid3, Niko S ¨underhauf1 1QUT Centre for Robotics, Queensland University of Technology 2CSIRO Data61 Robotics and Autonomous Systems Group 3University of Adelaide ∗Equal Contribution †[email protected] Abstract: Large language models (LLMs) have demonstrated impressive results in develop- ing generalist planning agents for diverse tasks. However, grounding these plans in expansive, multi-floor, and multi-room environments presents a significant chal- lenge for robotics. We introduce SayPlan, a scalable approach to LLM-based, large-scale task planning for robotics using 3D scene graph (3DSG) representa- tions. To ensure the scalability of our approach, we: (1) exploit the hierarchical nature of 3DSGs to allow LLMs to conduct a semantic search for task-relevant subgraphs from a smaller, collapsed representation of the full graph; (2) reduce the planning horizon for the LLM by integrating a classical path planner and (3) in- troduce an iterative replanning pipeline that refines the initial plan using feedback from a scene graph simulator, correcting infeasible actions and avoiding planning failures. We evaluate our approach on two large-scale environments spanning up to 3 floors and 36 rooms with 140 assets and objects and show that our approach is capable of grounding large-scale, long-horizon task plans from abstract, and nat- ural language instruction for a mobile manipulator robot to execute. We provide real robot video demonstrations on our project page sayplan.github.io. 1 Introduction “Make me a coffee and place it on my desk” – The successful execution of such a seemingly straight- forward command remains a daunting task for today’s robots. The associated challenges permeate every aspect of robotics, encompassing navigation, perception, manipulation as well as high-level task planning. Recent advances in Large Language Models (LLMs) [1, 2, 3] have led to significant progress in incorporating common sense knowledge for robotics [4, 5, 6]. This enables robots to plan complex strategies for a diverse range of tasks that require a substantial amount of background knowledge and semantic comprehension. For LLMs to be effective planners in robotics, they must be grounded in reality, that is, they must ad- here to the constraints presented by the physical environment in which the robot operates, including the available affordances, relevant predicates, and the impact of actions on the current state. Further- more, in expansive environments, the robot must additionally understand where it is, locate items of interest, as well comprehend the topological arrangement of the environment in order to plan across the necessary regions. To address this, recent works have explored the utilization of vision-based value functions [4], object detectors [7, 8], or Planning Domain Definition Language (PDDL) de- scriptions of a scene [9, 10] to ground the output of the LLM-based planner. However, these efforts are primarily confined to small-scale environments, typically single rooms with pre-encoded infor- mation on all the existing assets and objects present. The challenge lies in scaling these models. As the environment’s complexity and dimensions expand, and as more rooms and entities enter the 7th Conference on Robot Learning (CoRL 2023), Atlanta, USA. Figure 1: SayPlan Overview (top). 
SayPlan operates across two stages to ensure scalability: (left) Given a collapsed 3D scene graph and a task instruction, semantic search is conducted by the LLM to identify a suitable subgraph that contains the required items to solve the task; (right) The explored subgraph is then used by the LLM to generate a high-level task plan, where a classical path planner completes the navigational component of the plan; finally, the plan goes through an iterative re- planning process with feedback from a scene graph simulator until an executable plan is identified. Numbers on the top-left corners represent the flow of operations. scene, pre-encoding all the necessary information within the LLM’s context becomes increasingly infeasible. To this end, we present a scalable approach to ground LLM-based task planners across environments spanning multiple rooms and floors. We achieve this by exploiting the growing body of 3D scene graph (3DSG) research [11, 12, 13, 14, 15, 16]. 3DSGs capture a rich topological and hierarchically- organised semantic graph representation of an environment with the versatility to encode the nec- essary information required for task planning including object state, predicates, affordances and attributes using natural language – suitable for parsing by an LLM. We can leverage a JSON repre- sentation of this graph as input to a pre-trained LLM, however, to ensure the scalability of the plans to expansive scenes, we present three key innovations. Firstly, we present a mechanism that enables the LLM to conduct a semantic search for a task- relevant subgraph G(cid:48) by manipulating the nodes of a ‘collapsed’ 3DSG, which exposes only the top level of the full graph G, via expand and contract API function calls – thus making it feasible to plan over increasingly large-scale environments. In doing so, the LLM maintains focus on a rela- tively small, informative subgraph, G(cid:48) during planning, without exceeding its token limit. Secondly, as the horizon of the task plans across such environments tends to grow with the complexity and range of the given task instructions, there is an increasing tendency for the LLM to hallucinate or produce infeasible action sequences [17, 18, 7]. We counter this by firstly relaxing the need for the LLM to generate the navigational component of the plan, and instead leverage an existing optimal path planner such as Dijkstra [19] to connect high-level nodes generated by the LLM. Finally, to en- sure the feasibility of the proposed plan, we introduce an iterative replanning pipeline that verifies and refines the initial plan using feedback from a scene graph simulator in order to correct for any unexecutable actions, e.g., missing to open the fridge before putting something into it – thus avoid- ing planning failures due to inconsistencies, hallucinations, or violations of the physical constraints and predicates imposed by the environment. 
2 IterativeReplanningExplore Scene GraphSimulatorFeedbackPlanVerification{command: expand_node, node_name: “kitchen}Full Search SequenceScene Graph Simulator“Make Peter a coffee”InstructionPrompt3D Scene Graph [Collapsed]MemorySEMANTICSEARCHIterative ReplanningHigh-Level PlanExecutable Plan{goto: office}{access: desk}{pickup: mug}{goto: kitchen}{release: mug}{turn_on: machine}{turn_off: machine}{pickup: mug}{goto: office}{access: desk}{goto: pose13}{goto: office}{access: desk}{pickup: mug}{goto: pose18}{goto: pose21}{goto: kitchen}{release: mug}{turn_on: machine}{turn_off: machine}{pickup: mug}{goto: pose21}{goto: pose26}{goto: pose25}{goto: office}{access: desk}Path PlannerScene Graph Simulator“Make Peter a coffee”InstructionPrompt{Agent RoleEnvironment FunctionsEnvironment StateOutput FormatExample}MemoryExplored SubgraphSpecifications“Make Peter a coffee”InstructionPromptExplored SubgraphAgent RoleEnvironment FunctionsEnvironment StateOutput FormatExample | FeedbackMemorySpecificationsHigh-Level PlanITERATIVE REPLANNINGGraph API CallCollapseGraph“Make Peter a coffee”SemanticSearch12345678910Feedback: “Cannot release coffee mug here”IterativeReplanningCollapseGraph“Make Peter a coffee”SemanticSearch3D Scene Graph3D SceneGraphAgent RoleEnvironment FunctionsEnvironment StateOutput FormatExample Our approach SayPlan ensures feasible and grounded plan generation for a mobile manipulator robot operating in large-scale environments spanning multiple floors and rooms. We evaluate our framework across a range of 90 tasks organised into four levels of difficulty. These include semantic search tasks such as (“Find me something non-vegetarian.”) to interactive, long-horizon tasks with ambiguous multi-room objectives that require a significant level of common-sense reasoning (“Let’s play a prank on Niko”). These tasks are assessed in two expansive environments, including a large office floor spanning 37 rooms and 150 interactable assets and objects, and a three-storey house with 28 rooms and 112 objects. Our experiments validate SayPlan’s ability to scale task planning to large-scale environments while conserving a low token footprint. By introducing a semantic search pipeline, we can reduce full large-scale scene representations by up to 82.1% for LLM parsing and our iterative replanning pipeline allows for near-perfect executability rates, suitable for execution on a real mobile manipulator robot.1 2 Related Work Task planning in robotics aims to generate a sequence of high-level actions to achieve a goal within an environment. Conventional methods employ domain-specific languages such as PDDL [20, 21, 22] and ASP [23] together with semantic parsing [24, 25], search techniques [26, 27] and complex heuristics [28] to arrive at a solution. These methods, however, lack both the scalability to large environments as well as the task generality required when operating in the real world. Hierarchical and reinforcement learning-based alternatives [29, 30], [31] face challenges with data demands and scalability. Our work leverages the in-context learning capabilities of LLMs to generate task plans across 3D scene graphs. Tasks, in this case, can be naturally expressed using language, with the internet scale training of LLMs providing the desired knowledge for task generality, while 3D scene graphs provide the grounding necessary for large-scale environment operation. This allows for a general and scalable framework when compared to traditional non-LLM-based alternatives. 
Task planning with LLMs, that is, translating natural language prompts into task plans for robotics, is an emergent trend in the field. Earlier studies have effectively leveraged pre-trained LLMs’ in- context learning abilities to generate actionable plans for embodied agents [4, 10, 9, 8, 32, 7, 33]. A key challenge for robotics is grounding these plans within the operational environment of the robot. Prior works have explored the use of object detectors [8, 7], PDDL environment representations [10, 9, 34] or value functions [4] to achieve this grounding, however, they are predominantly constrained to single-room environments, and scale poorly with the number of objects in a scene which limits their ability to plan over multi-room or multi-floor environments. In this work, we explore the use of 3D scene graphs and the ability of LLMs to generate plans over large-scale scenes by exploiting the inherent hierarchical and semantic nature of these representations. Integrating external knowledge in LLMs has been a growing line of research combining language models with external tools to improve the reliability of their outputs. In such cases, external modules are used to provide feedback or extra information to the LLM to guide its output generation. This is achieved either through API calls to external tools [35, 36] or as textual feedback from the operating environment [37, 8]. More closely related to our work, CLAIRIFY [38] iteratively leverage com- piler error feedback to re-prompt an LLM to generate syntactically valid code. Building on these ideas, we propose an iterative plan verification process with feedback from a scene graph-based simulator to ensure all generated plans adhere to the constraints and predicates captured by the pre- constructed scene graph. This ensures the direct executability of the plan on a mobile manipulator robot, operating in the corresponding real-world environment. 3 SayPlan 3.1 Problem Formulation We aim to address the challenge of long-range task planning for an autonomous agent, such as a mobile manipulator robot, in a large-scale environment based on natural language instructions. This requires the robot to comprehend abstract and ambiguous instructions, understand the scene and generate task plans involving both navigation and manipulation of a mobile robot within an 1sayplan.github.io 3 Algorithm 1: SayPlan Given: scene graph simulator ψ, classical path planner φ, large language model LLM Inputs: prompt P, scene graph G, instruction I 1: G(cid:48) ← collapseψ(G) Stage 1: Semantic Search (cid:46) collapse scene graph (cid:46) search scene graph for all relevant items 2: while command != “terminate” do 3: 4: 5: 6: 7: command, node name ← LLM (P, G(cid:48), I) if command == “expand” then G(cid:48) ← expandψ(node name) else if command == “contract” then G(cid:48) ← contractψ(node name) Stage 2: Causal Planning 8: feedback = “ ” 9: while feedback != “success” do 10: 11: 12: 13: return full plan plan ← LLM (P, G(cid:48), I, feedback) full plan ← φ(plan, G(cid:48)) feedback ← verify_planψ(full plan) (cid:46) expand node to reveal objects and assets (cid:46) contract node if nothing relevant found (cid:46) generate a feasible plan (cid:46) high level plan (cid:46) compute optimal navigational path between nodes (cid:46) forward simulate the full plan (cid:46) executable plan environment. Existing approaches lack the ability to reason over scenes spanning multiple floors and rooms. 
Our focus is on integrating large-scale scenes into planning agents based on Language Models (LLMs) and solving the scalability challenge. We aim to tackle two key problems: 1) representing large-scale scenes within LLM token limitations, and 2) mitigating LLM hallucinations and erroneous outputs when generating long-horizon plans in large-scale environments. 3.2 Preliminaries Here, we describe the 3D scene graph represen- tation of an environment and the scene graph simulator API which we leverage throughout our approach. Scene Representation: 3D Scene Graphs (3DSG) [11, 12, 14] have recently emerged as an actionable world representation for robots [13, 15, 16, 39, 40, 41], which hierarchi- cally abstract the environment at multiple lev- els through spatial semantics and object rela- tionships while capturing relevant states, affor- dances and predicates of the entities present in the environment. Formally, a 3DSG is a hierar- chical multigraph G = (V, E) in which the set of vertices V comprises V1 ∪V2 ∪. . .∪VK, with each Vk signifying the set of vertices at a particular level of the hierarchy k. Edges stemming from a vertex v ∈ Vk may only terminate in Vk−1 ∪ Vk ∪ Vk+1, i.e. edges connect nodes within the same level, or one level higher or lower. Figure 2: Hierarchical Structure of a 3D Scene Graph. This graph consists of 4 levels. Notes that the room nodes are connected to one another via sequences of pose nodes which capture the topo- logical arrangement of a scene. asset, location: kitchen, affordances: We assume a pre-constructed 3DSG representation of a large-scale environment generated using existing techniques [15, 13, 11]. The entire 3DSG can be represented as a NetworkX Graph object [42] and text-serialised into a JSON data format that can be parsed directly by a pre- trained LLM. An example of a single asset node from the 3DSG is represented as: {name: [turn_on, coffee_machine, type: turn_off, release], state: [red, automatic], position: off, attributes: [2.34, 0.45, 2.23]} with edges between nodes captured as {kitchen↔coffee machine}. The 3DSG is organized in a hierarchical manner with four primary levels: floors, rooms, assets, and objects as shown in Figure 2. The top level contains floors, each of which branches out to several rooms. These rooms are interconnected through pose nodes to represent the environment’s topological structure. Within each room, we find assets (immovable entities) and objects (movable entities). Both asset and object nodes encode particulars including state, affordances, additional attributes such as colour or weight, and 3D pose. The graph also incorporates a dynamic agent 4 node, denoting a robot’s location within the scene. Note that this hierarchy is scalable and node levels can be adapted to capture even larger environments e.g. campuses and buildings Scene Graph Simulator ψ refers to a set of API calls for manipulating and operating over JSON for- matted 3DSGs, using the following functions: 1) collapse(G): Given a full 3DSG, this function returns an updated scene graph that exposes only the highest level within the 3DSG hierarchy e.g. floor nodes. 2) expand(node name): Returns an updated 3DSG that reveals all the nodes con- nected to node name in the level below. 3) contract(node name): Returns an updated 3DSG that hides all the nodes connected to node name in the level below. 
3.3 Approach

We present a scalable framework for grounding the generalist task planning capabilities of pre-trained LLMs in large-scale environments spanning multiple floors and rooms using 3DSG representations. Given a 3DSG G and a task instruction I defined in natural language, we can view our framework SayPlan as a high-level task planner π(a|I, G), capable of generating long-horizon plans a grounded in the environment within which a mobile manipulator robot operates. This plan is then fed to a low-level visually grounded motion planner for real-world execution. To ensure the scalability of SayPlan, two stages are introduced: Semantic Search and Iterative Replanning, which we detail below. An overview of the SayPlan pipeline is illustrated in Figure 1, with the corresponding pseudo-code given in Algorithm 1.

Semantic Search: When planning over 3DSGs using LLMs we take note of two key observations: 1) a 3DSG of a large-scale environment can grow indefinitely with the number of rooms, assets and objects it contains, making it impractical to pass as input to an LLM due to token limits, and 2) only a subset of the full 3DSG G is required to solve any given task, e.g. we don't need to know about the toothpaste in the bathroom when making a cup of coffee. To this end, the Semantic Search stage seeks to identify this smaller, task-specific subgraph G′ from the full 3DSG which only contains the entities in the environment required to solve the given task instruction.

To identify G′ from a full 3DSG, we exploit the semantic hierarchy of these representations and the reasoning capabilities of LLMs. We first collapse G to expose only its top level, e.g. the floor nodes, reducing the initial token representation of the 3DSG by ≈ 80%. The LLM manipulates this collapsed graph via expand and contract API calls in order to identify the desired subgraph for the task based on the given instruction I. This is achieved using in-context learning over a set of input-output examples (see Appendix J), and utilising chain-of-thought prompting to guide the LLM in identifying which nodes to manipulate. The chosen API call and node are executed within the scene graph simulator, and the updated 3DSG is passed back to the LLM for further exploration. If an expanded node is found to contain irrelevant entities for the task, the LLM contracts it to manage token limitations and maintain a task-specific subgraph (see Figure 3). To avoid expanding already-contracted nodes, we maintain a list of previously expanded nodes, passed as an additional Memory input to the LLM, facilitating a Markovian decision-making process and allowing SayPlan to scale to extensive search sequences without the overhead of maintaining the full interaction history [5]. The LLM autonomously proceeds to the planning phase once all necessary assets and objects are identified in the current subgraph G′. An example of the LLM-scene graph interaction during Semantic Search is provided in Appendix K.
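A minimal sketch of this search loop is shown below, assuming a hypothetical JSON reply schema with chain_of_thought, command and node_name fields and a simple call_llm wrapper; the actual prompt structure and output format used by SayPlan are given in Appendix J.

import json

def semantic_search(call_llm, sim, prompt, graph, instruction, max_steps=50):
    """Iteratively expand/contract a collapsed 3DSG until the LLM terminates."""
    subgraph = sim.collapse(graph)
    memory = []  # previously expanded node names (the Memory input to the LLM)
    for _ in range(max_steps):
        reply = json.loads(call_llm(prompt, subgraph, instruction, memory))
        # reply example: {"chain_of_thought": "...", "command": "expand",
        #                 "node_name": "kitchen"}
        command, node = reply["command"], reply.get("node_name")
        if command == "terminate":
            return subgraph            # task-relevant subgraph G' identified
        if command == "expand" and node not in memory:
            subgraph = sim.expand(node)
            memory.append(node)        # never re-expand a contracted node
        elif command == "contract":
            subgraph = sim.contract(node)
    return subgraph

Because only the current subgraph and the memory list are passed to the LLM at each step, the prompt size stays roughly constant regardless of how long the search runs.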
Iterative Replanning: Given the identified subgraph G′ and the same task instruction I from above, the LLM enters the planning stage of the pipeline. Here the LLM is tasked with generating a sequence of node-level navigational (goto(pose2)) and manipulation (pickup(coffee_mug)) actions that satisfy the given task instruction. LLMs, however, are not perfect planning agents and tend to hallucinate or produce erroneous outputs [43, 9]. This is further exacerbated when planning over large-scale environments or long-horizon tasks. We facilitate the generation of task plans by the LLM via two mechanisms. First, we shorten the LLM's planning horizon by delegating pose-level path planning to an optimal path planner, such as Dijkstra's algorithm. For example, a typical plan output such as [goto(meeting_room), goto(pose13), goto(pose14), goto(pose8), ..., goto(kitchen), access(fridge), open(fridge)] is simplified to [goto(meeting_room), goto(kitchen), access(fridge), open(fridge)]. The path planner handles finding the optimal route between high-level locations, allowing the LLM to focus on the essential manipulation components of the task. Second, we build on the self-reflection capabilities of LLMs [17] to iteratively correct their generated plans using textual, task-agnostic feedback from a scene graph simulator which evaluates whether the generated plan complies with the scene graph's predicates, states, and affordances. For instance, a pick(banana) action might fail if the robot is already holding something, if it is not in the correct location, or if the fridge was not opened beforehand. Such failures are transformed into textual feedback (e.g., "cannot pick banana"), appended to the LLM's input, and used to generate an updated, executable plan. This iterative process, involving planning, validation, and feedback integration, continues until a feasible plan is obtained. The validated plan is then passed to a low-level motion planner for robotic execution. An example of the LLM-scene graph interaction during iterative replanning is provided in Appendix L. Specific implementation details are provided in Appendix A.
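To illustrate the first mechanism, the sketch below expands a high-level plan into a pose-level route with Dijkstra's algorithm via networkx.shortest_path and wraps plan generation in the feedback loop described above. The goto(...) action syntax and the verify_plan feedback strings follow the paper's examples, but the topology graph, helper names and "success" sentinel are assumptions for illustration.

import re
import networkx as nx

def expand_navigation(plan, topo_graph, start="agent"):
    """Replace room-level goto() actions with the optimal pose-level route."""
    full_plan, current = [], start
    for action in plan:
        match = re.fullmatch(r"goto\((\w+)\)", action)
        if match:
            target = match.group(1)
            # Dijkstra shortest path over the room/pose topology
            route = nx.shortest_path(topo_graph, current, target, weight="weight")
            full_plan += [f"goto({node})" for node in route[1:]]
            current = target
        else:
            full_plan.append(action)   # manipulation actions pass through
    return full_plan

def iterative_replanning(call_llm, sim, topo_graph, prompt, subgraph,
                         instruction, max_replans=5):
    feedback = ""
    for _ in range(max_replans):
        plan = call_llm(prompt, subgraph, instruction, feedback)  # short-horizon plan
        full_plan = expand_navigation(plan, topo_graph)
        feedback = sim.verify_plan(full_plan)    # e.g. "cannot pick banana"
        if feedback == "success":
            return full_plan
    raise RuntimeError("exceeded replanning budget")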
4 Experimental Setup

We design our experiments to evaluate the 3D scene graph reasoning capabilities of LLMs, with a particular focus on high-level task planning for a mobile manipulator robot. The plans adhere to a particular embodiment consisting of a 7-degree-of-freedom robot arm with a two-fingered gripper attached to a mobile base. We use two large-scale environments, shown in Figure 4, which exhibit multiple rooms and floors across which the LLM agent has to plan. To better ablate and showcase the capabilities of SayPlan, we decouple its semantic search ability from its overall causal planning capabilities using the following two evaluation settings, as shown in Appendix C:

Semantic Search: Here, we focus on queries which test the semantic search capabilities of an LLM provided with a collapsed 3D scene graph. This requires the LLM to reason over the room and floor node names and their corresponding attributes in order to aid its search for the relevant assets and objects required to solve the given task instruction. We evaluate against a human baseline to understand how the semantic search capabilities of an LLM compare to a human's thought process. Furthermore, to gain a better understanding of the impact different LLM models have on this graph-based reasoning, we additionally compare against a variant of SayPlan using GPT-3.5.

Causal Planning: In this experiment, we evaluate the ability of SayPlan to generate feasible plans to solve a given natural language instruction. The evaluation metrics are divided into two components: 1) Correctness, which primarily validates the overall goal of the plan and its alignment with what a human would do to solve the task, and 2) Executability, which evaluates the alignment of the plan with the constraints of the scene graph environment and its ability to be executed by a mobile manipulator robot. We note here that a plan does not necessarily have to be correct in order to be executable, and vice versa. We evaluate SayPlan against two baseline methods that integrate an LLM for task planning: LLM-As-Planner, which generates the full plan sequence (including every navigation and manipulation action the robot must execute to complete a task) in an open-loop manner, and LLM+P, an ablated variant of SayPlan which incorporates only the path planner to allow for shorter-horizon plan sequences, without any iterative replanning.

5 Results

5.1 Semantic Search

Table 1: Evaluating the semantic search capabilities of GPT-4. The table shows the semantic search success rate in finding a suitable subgraph for planning.

Subtask        | Office                                      | Home
               | Human | SayPlan (GPT-3.5) | SayPlan (GPT-4) | Human | SayPlan (GPT-3.5) | SayPlan (GPT-4)
Simple Search  | 100%  | 6.6%              | 86.7%           | 100%  | 0.0%              | 86.7%
Complex Search | 100%  | 0.0%              | 73.3%           | 100%  | 0.0%              | 73.3%

We summarise the results for the semantic search evaluation in Table 1. SayPlan (GPT-3.5) consistently failed to reason over the input graph representation, hallucinating nodes to explore or stagnating at exploring the same node multiple times. SayPlan (GPT-4), in contrast, achieved 86.7% and 73.3% success in identifying the desired subgraph across the simple and complex search tasks respectively, demonstrating significantly better graph-based reasoning than GPT-3.5.

Simple Long Horizon Types of Errors Corr Exec Corr Exec Missing Action Missing Pose Wrong Action Incomplete Search Hallucinated Nodes LLM+P LLM-As-Planner SayPlan 93.3% 13.3% 33.3% 0.0% 26.7% 10.0% 93.3% 80.0% 66.7% 13.3% 20.0% 60.0% 0.17% 0.0% 93.3% 100.0% 73.3% 86.6% 0.0% 0.0% 0.0% 3.33% 0.03% 0.0% 10.0% 10.0% 6.67%

Table 3: Causal Planning Results. Left: Correctness and Executability on Simple and Long Horizon planning tasks. Right: Types of execution errors encountered when planning using LLMs. Note that SayPlan corrects the majority of the errors faced by LLM-based planners.

While, as expected, the human baseline achieved 100% on all sets of instructions, we are more interested in a qualitative assessment of the common-sense reasoning used during semantic search. More specifically, we would like to identify the similarity between the semantic search heuristics utilised by humans and those used by the underlying LLM for a given task instruction. We present the full sequence of explored nodes for both SayPlan (GPT-4) and the human baseline in Appendix F. As shown in the tables, SayPlan (GPT-4) demonstrates remarkably similar behaviour to a human's semantic and common-sense reasoning for most tasks, exploring a similar sequence of nodes given a particular instruction. For example, when asked to "find a ripe banana", the LLM first explores the kitchen followed by the next most likely location, the cafeteria. In cases where no semantics are present in the instruction, such as "find me object K31X", we note that the LLM agent is capable of conducting a breadth-first-like search across all the unexplored nodes.
This highlights the importance of meaningful node names and attributes that capture the relevant environment semantics, which the LLM can leverage to relate the query instruction to the scene for efficient search.

Figure 3: Scene Graph Token Progression During Semantic Search. This graph illustrates the scalability of our approach to large-scale 3D scene graphs. Note the importance of node contraction in maintaining a near-constant token representation of the 3DSG input.

Table 2: 3D Scene Graph Token Count. Number of tokens required for the full graph vs. the collapsed graph.

       | Full Graph (Token Count) | Collapsed Graph (Token Count) | Compression Ratio
Office | 6731                     | 878                           | 86.9%
Home   | 6598                     | 1817                          | 72.5%

An odd failure case in the simple search instructions involved negation, where the agent consistently failed when presented with questions such as "Find me an office that does not have a cabinet" or "Find me a bathroom with no toilet". Other failure cases noted across the complex search instructions included the LLM's failure to conduct simple distance-based and count-based reasoning over graph nodes. While trivial for a human, this does require the LLM agent to reason over multiple nodes simultaneously, where it tends to hallucinate or miscount connected nodes.

Scalability Analysis: We additionally analyse the scalability of SayPlan during semantic search. Table 2 illustrates the impact of exploiting the hierarchical nature of 3D scene graphs and allowing the LLM to explore the graph from a collapsed initial state. This allows for a reduction of 82.1% in the initial input tokens required to represent the Office environment and a 60.4% reduction for the Home environment. In Figure 3, we illustrate how endowing the LLM with the ability to contract explored nodes which it deems unsuitable for solving the task allows it to maintain near-constant input memory, from a token perspective, across the entire semantic search process. Note that the initial number of tokens already present represents the input prompt tokens given in Appendix J. Further ablation studies on the scalability of SayPlan to even larger 3DSGs are provided in Appendix H.
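Token counts of this kind can be measured for any JSON-serialised scene graph with a standard tokeniser. The short sketch below does this with the tiktoken library, assuming a GPT-4 encoding and hypothetical full_graph_json / collapsed_graph_json strings produced by the scene graph simulator; exact counts will depend on the serialisation format used.

import tiktoken

def count_tokens(text, model="gpt-4"):
    """Number of LLM input tokens required to represent a serialised 3DSG."""
    encoding = tiktoken.encoding_for_model(model)
    return len(encoding.encode(text))

def compression_ratio(full_graph_json, collapsed_graph_json):
    full = count_tokens(full_graph_json)
    collapsed = count_tokens(collapsed_graph_json)
    return full, collapsed, 1.0 - collapsed / full

# Example usage with hypothetical serialisations of the Office environment:
# full, collapsed, ratio = compression_ratio(full_graph_json, collapsed_graph_json)
# e.g. a full graph of ~6.7k tokens collapsing to under 1k tokens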
5.2 Causal Planning

The results for causal planning across simple and long-horizon instructions are summarised in Table 3 (left). We compared SayPlan's performance against two baselines: LLM-As-Planner and LLM+P. All three methods displayed consistent correctness on the simple planning tasks at 93%, given that this metric is primarily a function of the underlying LLM's reasoning capabilities. However, it is interesting to note that in the long-horizon tasks, both the path planner and iterative replanning play an important role in improving this correctness metric by reducing the planning horizon and allowing the LLM to reflect on its previous output.

The results illustrate that the key to ensuring the task plan's executability was iterative replanning. Both LLM-As-Planner and LLM+P exhibited poor executability, whereas SayPlan achieved near-perfect executability as a result of iterative replanning, which ensured that the generated plans were grounded to adhere to the constraints and predicates imposed by the environment. Detailed task plans and errors encountered are provided in Appendix G. We summarise these errors in Table 3 (right), which shows that plans generated with LLM+P and LLM-As-Planner entailed various types of errors limiting their executability.

LLM+P mitigated navigational path planning errors as a result of the classical path planner; however, it still suffered from errors pertaining to manipulation of the environment, namely missing actions or incorrect actions which violate environment predicates. SayPlan mitigated these errors via iterative replanning; however, in 6.67% of tasks it failed to correct for some hallucinated nodes. While we believe these errors could eventually be corrected via iterative replanning, we limited the number of replanning steps to 5 throughout all experiments. We provide an illustration of the real-world execution of a generated plan using SayPlan on a mobile manipulator robot coupled with a vision-guided motion controller [44, 45] in Appendix I.

6 Limitations

SayPlan is notably constrained by the limitations inherent in current large language models (LLMs), including biases and inaccuracies, affecting the validity of its generated plans. More specifically, SayPlan is limited by the graph-based reasoning capabilities of the underlying LLM, which fails at simple distance-based reasoning, node count-based reasoning and node negation. Future work could explore fine-tuning these models for these specific tasks or alternatively incorporate existing and more complex graph reasoning tools [46] to facilitate decision-making. Secondly, SayPlan's current framework is constrained by the need for a pre-built 3D scene graph and assumes that objects remain static post-map generation, significantly restricting its adaptability to dynamic real-world environments. Future work could explore how online scene graph SLAM systems [15] could be integrated within the SayPlan framework to account for this. Additionally, the incorporation of open-vocabulary representations within the scene graph could yield a more general scene representation as opposed to solely textual node descriptions. Lastly, a potential limitation of the current system lies in the scene graph simulator and its ability to capture the various planning failures within the environment. While this works well in the cases presented in this paper, for more complex tasks involving a diverse set of predicates and affordances, the incorporation of relevant feedback messages for each instance may become infeasible and forms an important avenue for future work in this area.

7 Conclusion

SayPlan is a natural language-driven planning framework for robotics that integrates hierarchical 3D scene graphs and LLMs to plan across large-scale environments spanning multiple floors and rooms. We ensure the scalability of our approach by exploiting the hierarchical nature of 3D scene graphs and the semantic reasoning capabilities of LLMs, enabling the agent to explore the scene graph from the highest level within the hierarchy and resulting in a significant reduction in the initial tokens required to capture larger environments. Once the relevant subgraph is explored, the LLM generates task plans for a mobile manipulator robot, and a scene graph simulator ensures that the plan is feasible and grounded in the environment via iterative replanning. The framework surpasses existing techniques in producing correct, executable plans, which a robot can then follow. Finally, we successfully translate validated plans to a real-world mobile manipulator agent which operates across multiple rooms, assets and objects in a large office environment.
SayPlan represents a step forward for general-purpose service robotics that can operate in our homes, hospitals and workplaces, laying the groundwork for future research in this field. 8 Acknowledgments The authors would like to thank Ben Burgess-Limerick for assistance with the robot hardware setup, Nishant Rana for creating the illustrations and Norman Di Palo and Michael Milford for insight- ful discussions and feedback towards this manuscript. The authors also acknowledge the ongoing support from the QUT Centre for Robotics. This work was partially supported by the Australian Government through the Australian Research Council’s Discovery Projects funding scheme (Project DP220102398) and by an Amazon Research Award to Niko S¨underhauf. References [1] L. Ouyang, J. Wu, X. Jiang, D. Almeida, C. L. Wainwright, P. Mishkin, C. Zhang, S. Agarwal, K. Slama, A. Ray, J. Schulman, J. Hilton, F. Kelton, L. E. Miller, M. Simens, A. Askell, P. Welinder, P. F. Christiano, J. Leike, and R. J. Lowe. Training language models to follow instructions with human feedback. ArXiv, abs/2203.02155, 2022. [2] T. Brown, B. Mann, N. Ryder, M. Subbiah, J. D. Kaplan, P. Dhariwal, A. Neelakantan, P. Shyam, G. Sastry, A. Askell, S. Agarwal, A. Herbert-Voss, G. Krueger, T. Henighan, R. Child, A. Ramesh, D. Ziegler, J. Wu, C. Winter, C. Hesse, M. Chen, E. Sigler, M. Litwin, S. Gray, B. Chess, J. Clark, C. Berner, S. McCandlish, A. Radford, I. Sutskever, and D. Amodei. Language models are few-shot learners. In H. Larochelle, M. Ranzato, R. Had- sell, M. Balcan, and H. Lin, editors, Advances in Neural Information Processing Systems, volume 33, pages 1877–1901. Curran Associates, Inc., 2020. [3] OpenAI. Gpt-4 technical report. ArXiv, abs/2303.08774, 2023. [4] A. Brohan, Y. Chebotar, C. Finn, K. Hausman, A. Herzog, D. Ho, J. Ibarz, A. Irpan, E. Jang, R. Julian, et al. Do As I Can, Not As I Say: Grounding language in robotic affordances. In Conference on Robot Learning, pages 287–318. PMLR, 2023. [5] N. Wake, A. Kanehira, K. Sasabuchi, J. Takamatsu, and K. Ikeuchi. Chatgpt empowered long- step robot control in various environments: A case application, 2023. [6] D. Driess, F. Xia, M. S. M. Sajjadi, C. Lynch, A. Chowdhery, B. Ichter, A. Wahid, J. Tompson, Q. Vuong, T. Yu, W. Huang, Y. Chebotar, P. Sermanet, D. Duckworth, S. Levine, V. Vanhoucke, K. Hausman, M. Toussaint, K. Greff, A. Zeng, I. Mordatch, and P. Florence. Palm-E: An embodied multimodal language model, 2023. [7] C. H. Song, J. Wu, C. Washington, B. M. Sadler, W.-L. Chao, and Y. Su. LLM-Planner: Few-shot grounded planning for embodied agents with large language models. arXiv preprint arXiv:2212.04088, 2022. [8] W. Huang, F. Xia, T. Xiao, H. Chan, J. Liang, P. Florence, A. Zeng, J. Tompson, I. Mordatch, Y. Chebotar, et al. Inner monologue: Embodied reasoning through planning with language models. arXiv preprint arXiv:2207.05608, 2022. [9] B. Liu, Y. Jiang, X. Zhang, Q. Liu, S. Zhang, J. Biswas, and P. Stone. LLM+P: Empowering large language models with optimal planning proficiency. arXiv preprint arXiv:2304.11477, 2023. [10] T. Silver, V. Hariprasad, R. S. Shuttleworth, N. Kumar, T. Lozano-P´erez, and L. P. Kaelbling. PDDL planning with pretrained large language models. In NeurIPS 2022 Foundation Models for Decision Making Workshop. [11] I. Armeni, Z.-Y. He, J. Gwak, A. R. Zamir, M. Fischer, J. Malik, and S. Savarese. 3D In Proceedings of scene graph: A structure for unified semantics, 3D space, and camera. 
the IEEE/CVF international conference on computer vision, pages 5664–5673, 2019. [12] U.-H. Kim, J.-M. Park, T.-J. Song, and J.-H. Kim. 3-D scene graph: A sparse and semantic rep- resentation of physical environments for intelligent agents. IEEE transactions on cybernetics, 50(12):4921–4933, 2019. 9 [13] A. Rosinol, A. Violette, M. Abate, N. Hughes, Y. Chang, J. Shi, A. Gupta, and L. Carlone. Kimera: From slam to spatial perception with 3D dynamic scene graphs. The International Journal of Robotics Research, 40(12-14):1510–1546, 2021. [14] P. Gay, J. Stuart, and A. Del Bue. Visual graphs from motion (vgfm): Scene understanding with object geometry reasoning. In Computer Vision–ACCV 2018: 14th Asian Conference on Computer Vision, Perth, Australia, December 2–6, 2018, Revised Selected Papers, Part III 14, pages 330–346. Springer, 2019. [15] N. Hughes, Y. Chang, and L. Carlone. Hydra: A real-time spatial perception engine for 3D scene graph construction and optimization. Robotics: Science and Systems XIV, 2022. [16] C. Agia, K. M. Jatavallabhula, M. Khodeir, O. Miksik, V. Vineet, M. Mukadam, L. Paull, and F. Shkurti. Taskography: Evaluating robot task planning over large 3D scene graphs. In Conference on Robot Learning, pages 46–58. PMLR, 2022. [17] N. Shinn, F. Cassano, B. Labash, A. Gopinath, K. Narasimhan, and S. Yao. Reflexion: Lan- guage agents with verbal reinforcement learning, 2023. [18] J. Wei, X. Wang, D. Schuurmans, M. Bosma, E. Chi, Q. Le, and D. Zhou. Chain of thought prompting elicits reasoning in large language models. arXiv preprint arXiv:2201.11903, 2022. [19] E. W. Dijkstra. A note on two problems in connexion with graphs. In Edsger Wybe Dijkstra: His Life, Work, and Legacy, pages 287–290. 2022. [20] D. McDermott, M. Ghallab, A. Howe, C. Knoblock, A. Ram, M. Veloso, D. Weld, and D. Wilkins. PDDL-the planning domain definition language. 1998. [21] M. Fox and D. Long. PDDL2. 1: An extension to PDDL for expressing temporal planning domains. Journal of artificial intelligence research, 20:61–124, 2003. [22] P. Haslum, N. Lipovetzky, D. Magazzeni, and C. Muise. An introduction to the planning do- main definition language. Synthesis Lectures on Artificial Intelligence and Machine Learning, 13(2):1–187, 2019. [23] M. Gelfond and Y. Kahl. Knowledge representation, reasoning, and the design of intelligent agents: The answer-set programming approach. Cambridge University Press, 2014. [24] S. Tellex, T. Kollar, S. Dickerson, M. R. Walter, A. G. Banerjee, S. J. Teller, and N. Roy. Understanding natural language commands for robotic navigation and mobile manipulation. Proceedings of the AAAI Conference on Artificial Intelligence, 2011. [25] J. Thomason, A. Padmakumar, J. Sinapov, N. Walker, Y. Jiang, H. Yedidsion, J. W. Hart, P. Stone, and R. J. Mooney. Jointly improving parsing and perception for natural language commands through human-robot dialog. J. Artif. Intell. Res., 67:327–374, 2020. [26] H. Kautz and B. Selman. Pushing the envelope: Planning, propositional logic, and stochastic search. In Proceedings of the national conference on artificial intelligence, pages 1194–1201, 1996. [27] B. Bonet and H. Geffner. Planning as heuristic search. Artificial Intelligence, 129(1-2):5–33, 2001. [28] M. Vallati, L. Chrpa, M. Grze´s, T. L. McCluskey, M. Roberts, S. Sanner, et al. The 2014 international planning competition: Progress and trends. AI Magazine, 36(3):90–98, 2015. [29] R. Chitnis, T. Silver, B. Kim, L. Kaelbling, and T. Lozano-Perez. 
CAMPs: Learning Context- Specific Abstractions for Efficient Planning in Factored MDPs. In Conference on Robot Learn- ing, pages 64–79. PMLR, 2021. [30] T. Silver, R. Chitnis, A. Curtis, J. B. Tenenbaum, T. Lozano-P´erez, and L. P. Kaelbling. Plan- ning with learned object importance in large problem instances using graph neural networks. In Proceedings of the AAAI conference on artificial intelligence, volume 35, pages 11962–11971, 2021. 10 [31] F. Ceola, E. Tosello, L. Tagliapietra, G. Nicola, and S. Ghidoni. Robot task planning via deep reinforcement learning: a tabletop object sorting application. In 2019 IEEE International Conference on Systems, Man and Cybernetics (SMC), pages 486–492, 2019. doi:10.1109/ SMC.2019.8914278. [32] S. Yao, J. Zhao, D. Yu, N. Du, I. Shafran, K. Narasimhan, and Y. Cao. React: Synergizing reasoning and acting in language models. arXiv preprint arXiv:2210.03629, 2022. [33] A. Zeng, A. Wong, S. Welker, K. Choromanski, F. Tombari, A. Purohit, M. Ryoo, V. Sind- hwani, J. Lee, V. Vanhoucke, et al. Socratic models: Composing zero-shot multimodal reason- ing with language. arXiv preprint arXiv:2204.00598, 2022. [34] Y. Xie, C. Yu, T. Zhu, J. Bai, Z. Gong, and H. Soh. Translating natural language to planning goals with large-language models. arXiv preprint arXiv:2302.05128, 2023. [35] B. Peng, M. Galley, P. He, H. Cheng, Y. Xie, Y. Hu, Q. Huang, L. Liden, Z. Yu, W. Chen, et al. Check your facts and try again: Improving large language models with external knowledge and automated feedback. arXiv preprint arXiv:2302.12813, 2023. [36] T. Schick, J. Dwivedi-Yu, R. Dess`ı, R. Raileanu, M. Lomeli, L. Zettlemoyer, N. Cancedda, and T. Scialom. Toolformer: Language models can teach themselves to use tools. arXiv preprint arXiv:2302.04761, 2023. [37] R. Liu, J. Wei, S. S. Gu, T.-Y. Wu, S. Vosoughi, C. Cui, D. Zhou, and A. M. Dai. Mind’s eye: Grounded language model reasoning through simulation. arXiv preprint arXiv:2210.05359, 2022. [38] M. Skreta, N. Yoshikawa, S. Arellano-Rubach, Z. Ji, L. B. Kristensen, K. Darvish, A. Aspuru- Guzik, F. Shkurti, and A. Garg. Errors are useful prompts: Instruction guided task program- ming with verifier-assisted iterative prompting. arXiv preprint arXiv:2303.14100, 2023. [39] Z. Ravichandran, L. Peng, N. Hughes, J. D. Griffith, and L. Carlone. Hierarchical represen- tations and explicit memory: Learning effective navigation policies on 3D scene graphs using graph neural networks. In 2022 International Conference on Robotics and Automation (ICRA), pages 9272–9279. IEEE, 2022. [40] A. Kurenkov, R. Mart´ın-Mart´ın, J. Ichnowski, K. Goldberg, and S. Savarese. Semantic and ge- ometric modeling with neural message passing in 3D scene graphs for hierarchical mechanical search. In 2021 IEEE International Conference on Robotics and Automation (ICRA), pages 11227–11233. IEEE, 2021. [41] S. Garg, N. S¨underhauf, F. Dayoub, D. Morrison, A. Cosgun, G. Carneiro, Q. Wu, T.-J. Chin, I. Reid, S. Gould, et al. Semantics for robotic mapping, perception and interaction: A survey. Foundations and Trends® in Robotics, 8(1–2):1–224, 2020. [42] A. A. Hagberg, D. A. Schult, and P. J. Swart. Exploring network structure, dynamics, and function using networkx. In G. Varoquaux, T. Vaught, and J. Millman, editors, Proceedings of the 7th Python in Science Conference, pages 11 – 15, Pasadena, CA USA, 2008. [43] M. Skreta, N. Yoshikawa, S. Arellano-Rubach, Z. Ji, L. B. Kristensen, K. Darvish, A. Aspuru- Guzik, F. Shkurti, and A. Garg. 
Errors are useful prompts: Instruction guided task pro- gramming with verifier-assisted iterative prompting. ArXiv, abs/2303.14100, 2023. URL https://api.semanticscholar.org/CorpusID:257757298. [44] J. Haviland, N. S¨underhauf, and P. Corke. A holistic approach to reactive mobile manipulation. IEEE Robotics and Automation Letters, 7(2):3122–3129, 2022. [45] P. Corke and J. Haviland. Not your grandmother’s toolbox–the robotics toolbox reinvented for python. In 2021 IEEE international conference on robotics and automation (ICRA), pages 11357–11363. IEEE, 2021. [46] J. Zhang. Graph-toolformer: To empower LLMs with graph reasoning ability via prompt augmented by chatgpt. arXiv preprint arXiv:2304.11116, 2023. 11 [47] S. Haddadin, S. Parusel, L. Johannsmeier, S. Golz, S. Gabl, F. Walch, M. Sabaghian, C. J¨ahne, L. Hausperger, and S. Haddadin. The franka emika robot: A reference platform for robotics research and education. IEEE Robotics and Automation Magazine, 29(2):46–64, 2022. doi: 10.1109/MRA.2021.3138382. [48] Omron. Omron LD / HD Series. URL https://www.ia.omron.com/products/ family/3664/dimension.html. [49] C. Chi, S. Feng, Y. Du, Z. Xu, E. Cousineau, B. Burchfiel, and S. Song. Diffusion policy: In Proceedings of Robotics: Science and Visuomotor policy learning via action diffusion. Systems (RSS), 2023. [50] K. Rana, A. Melnik, and N. S¨underhauf. Contrastive language, action, and state pre-training for robot learning, 2023. [51] Q-transformer: Scalable offline reinforcement learning via autoregressive q-functions. In 7th Annual Conference on Robot Learning, 2023. [52] K. Rana, M. Xu, B. Tidd, M. Milford, and N. Suenderhauf. Residual skill policies: Learning an adaptable skill-based action space for reinforcement learning for robotics. In 6th Annual Conference on Robot Learning, 2022. URL https://openreview.net/forum?id= 0nb97NQypbK. 12 A Implementation Details We utilise GPT-4 [3] as the underlying LLM agent unless otherwise stated. We follow a similar prompting structure to Wake et al. [5] as shown in Appendix J. We define the agent’s role, details pertaining to the scene graph environment, the desired output structure and a set of input-output examples which together form the static prompt used for in-context learning. This static prompt is both task- and environment-agnostic and takes up ≈3900 tokens of the LLM’s input. During semantic search, both the 3D Scene Graph and Memory components of the input prompt get updated at each step, while during iterative replanning only the Feedback component gets updated with information from the scene graph simulator. In all cases, the LLM is prompted to output a JSON object containing arguments to call the provided API functions. B Environments Figure 4: Large-scale environments used to evaluate SayPlan. The environments span multiple rooms and floors including a vast range of We evaluate SayPlan across a set of two large-scale environments spanning multiple rooms and floors as shown in Figure 4. We provide details of each of these environments below, including a breakdown of the number of entities and tokens required to represent them in the 3DSG: Office: A large-scale office floor, spanning 37 rooms and 151 assets and objects which the agent can interact with. A full and collapsed 3D scene graph representation of this environment are provided in Appendix D and E respectively. This scene graph represents a real-world office floor within which a mobile manipulator robot is present. 
This allows us to embody the plans generated using SayPlan and evaluate their feasibility in the corresponding environment. Real-world video demonstrations of a mobile manipulator robot executing the generated plan in this office environment are provided on our project site2. Home: An existing 3D scene graph from the Stanford 3D Scene Graph dataset [11] which consists of a family home environment (Klickitat) spanning 28 rooms across 3 floors and contains 112 assets and objects that the agent can interact with. A 3D visual of this environment can be viewed at the 3D Scene Graph project website3. B.1 Real World Environment Plan Execution To enable real-world execution of the task plans generated over a 3DSG, we require a corresponding 2D metric map within which we can align the posed nodes captured by the 3DSG. At each room node we assume the real robot can visually locate the appropriate assets and objects that are visible to 2sayplan.github.io 33dscenegraph.stanford.edu/Klickitat 13 Office SpaceSingle-Floor, Multi-RoomHomeMulti-Floor, Multi-Room Entity Type Number of Entities Total Number of Tokens Average Number of Tokens Room Node Asset Node Object Node Agent Node Node Edges Full Graph Collapsed Graph 37 73 78 1 218 407 105 340 1994 2539 15 1843 6731 878 9.19 27.3 32.6 15.0 8.45 16.5 8.36 Table 4: Detailed 3DSG breakdown for the Office Environment. The table summarises the num- ber of different entities present in the 3DSG, the total LLM tokens required to represent each entity group and the average number of tokens required to represent a single type of entity. Entity Type Number of Entities Total Number of Tokens Average Number of Tokens Room Node Asset Node Object Node Agent Node Node Edges Full Graph Collapsed Graph 28 52 60 1 323 464 240 231 1887 1881 15 2584 6598 1817 8.25 36.3 31.35 15 8 14.2 7.57 Table 5: Detailed 3DSG breakdown for the Home Environment. The table summarises the num- ber of different entities present in the 3DSG, the total LLM tokens required to represent each entity group and the average number of tokens required to represent a single type of entity. it within the 3DSG. The mobile manipulator robot used for the demonstration consisted of a Franka Panda 7-DoF robot manipulator [47] attached to an LD-60 Omron mobile base [48]. The robot is equipped with a LiDAR scanner to localise the robot both within the real world and the correspond- ing 3DSG. All the skills or affordances including pick, place, open and close were developed using the motion controller from [44] coupled with a RGB-D vision module for grasp detection, and a behaviour tree to manage the execution of each component including failure recovery. Future work could incorporate a range of pre-trained skills (whisking, flipping, spreading etc.) using imitation learning [49, 50] or reinforcement learning [51, 52] to increase the diversity of tasks that SayPlan is able to achieve. C Tasks Instruction Family Num Explanation Example Instruction Semantic Search Simple Search Complex Search Simple Planning Long-Horizon Planning 30 30 15 15 Queries focussed on evaluating the basic semantic search capabilities of SayPlan Abstract semantic search queries which require complex reasoning Find me a ripe banana. Find the room where people are playing board games. Causal Planning Queries which require the agent to perform search, causal reasoning and environment interaction in order to solve a task. Long Horizon planning queries requiring multiple interactive steps Refrigerate the orange left on the kitchen bench. 
Tobi spilt soda on his desk. Help him clean up. Table 6: List of evaluation task instructions. We evaluate SayPlan on 90 instructions, grouped to test various aspects of the planning capabilities across large-scale scene graphs. The full instruction set is given in Appendix C. 14 We evaluate SayPlan across 4 instruction sets which are classified to evaluate different aspects of its 3D scene graph reasoning and planning capabilities as shown in Table 6: Simple Search: Focused on evaluating the semantic search capabilities of the LLM based on queries which directly reference information in the scene graph as well as the basic graph-based reasoning capabilities of the LMM. Complex Search: Abstract semantic search queries which require complex reasoning. The infor- mation required to solve these search tasks is not readily available in the graph and has to be inferred by the underlying LLM. Simple Planning: Task planning queries which require the agent to perform graph search, causal reasoning and environment interaction in order to solve the task. Typically requires shorter horizon plans over single rooms. Long Horizon Planning: Long Horizon planning queries require multiple interactive steps. These queries evaluate SayPlan’s ability to reason over temporally extended instructions to investigate how well it scales to such regimes. Typically requires long horizon plans spanning multiple rooms. The full list of instructions used and the corresponding aspect the query evaluates are given in the following tables: C.1 Simple Search C.1.1 Office Environment Instruction Find me object K31X. Find me a carrot. Find me anything purple in the postdoc bays. Find me a ripe banana. Find me something that has a screwdriver in it. One of the offices has a poster of the Terminator. Which one is it? I printed a document but I don’t know which printer has it. Find the document. I left my headphones in one of the meeting rooms. Locate them. Find the PhD bay that has a drone in it. Find the kale that is not in the kitchen. Find me an office that does not have a cabinet. Find me an office that contains a cabinet, a desk, and a chair. Find a book that was left next to a robotic gripper. Luis gave one of his neighbours a stapler. Find the stapler. There is a meeting room with a chair but no table. Locate it. (cid:66) unguided search with no semantic cue (cid:66) semantic search based on node name (cid:66) semantic search with termination conditioned on at- tribute (cid:66) semantic search with termination conditioned on at- tribute (cid:66) unguided search with termination conditioned on children (cid:66) semantic search with termination conditioned on children (cid:66) semantic search based on parent (cid:66) semantic search based on parent (cid:66) semantic search with termination conditioned on children (cid:66) semantic search with termination conditioned on a negation predicate on parent (cid:66) semantic search with termination conditioned on a negation predicate on children (cid:66) semantic search with termination conditioned on a conjunctive query on children (cid:66) semantic search with termination conditioned on a sibling (cid:66) semantic search with termination conditioned on a sibling (cid:66) semantic search with termination conditioned on a conjunctive query with negation Table 7: Simple Search Instructions. Evaluated in Office Environment. 15 C.1.2 Home Environment Instruction Find me a FooBar. Find me a bottle of wine. Find me a plant with thorns. Find me a plant that needs watering. 
Find me a bathroom with no toilet. The baby dropped their rattle in one of the rooms. Locate it. I left my suitcase either in the bedroom or the living room. Which room is it in. Find the room with a ball in it. I forgot my book on a bed. Locate it. Find an empty vase that was left next to sink. Locate the dining room which has a table, chair and a baby monitor. Locate a chair that is not in any dining room. I need to shave. Which room has both a razor and shaving cream. Find me 2 bedrooms with pillows in them. Find me 2 bedrooms without pillows in them. (cid:66) unguided search with no semantic cue (cid:66) semantic search based on node name (cid:66) semantic search with termination conditioned on at- tribute (cid:66) semantic search with termination conditioned on at- tribute (cid:66) semantic search with termination conditioned on a negation predicate (cid:66) semantic search based on node name (cid:66) semantic search based on node name (cid:66) semantic search based on node name (cid:66) semantic search based on node name (cid:66) semantic search with termination conditioned on sib- ling (cid:66) semantic search with termination conditioned on con- juctive query (cid:66) semantic search with termination conditioned on negation predicate (cid:66) semantic search with termination conditioned on children (cid:66) semantic search with multiple returns (cid:66) semantic search with multiple returns based on nega- tion predicate Table 8: Simple Search Instructions. Evaluated in Home Environment. 16 C.2 Complex Search C.2.1 Office Environment Instruction Find object J64M. J64M should be kept at below 0 degrees Celsius. Find me something non vegetarian. Locate something sharp. Find the room where people are playing board games. Find an office of someone who is clearly a fan of Arnold Schwarzenegger. There is a postdoc that has a pet Husky. Find the desk that’s most likely theirs. One of the PhD students was given more than one complimentary T-shirts. Find his desk. Find me the office where a paper attachment device is inside an asset that is open. There is an office which has a cabinet containing exactly 3 items in it. Locate the office. There is an office which has a cabinet containing a rotten apple. The cabinet name contains an even number. Locate the office. Look for a carrot. The carrot is likely to be in a meeting room but I’m not sure. Find me a meeting room with a RealSense camera. Find the closest fire extinguisher to the manipulation lab. Find me the closest meeting room to the kitchen. Either Filipe or Tobi has my headphones. Locate it. 
(cid:66) semantic search guided by implicit world knowledge (knowledge not directly encoded in graph) (cid:66) semantic search with termination conditioned on im- plicit world knowledge (cid:66) unguided search with termination conditioned on im- plicit world knowledge (cid:66) semantic search with termination conditioned on ability to deduce context from node children using world knowledge (“board game” is not part of any node name or attribute in this graph) (cid:66) semantic search with termination conditioned on ability to deduce context from node children using world knowledge (cid:66) semantic search with termination conditioned on ability to deduce context from node children using world knowledge (cid:66) semantic search with termination conditioned on the number of children (cid:66) semantic search with termination conditioned on node descendants and their attributes (cid:66) semantic search with termination conditioned on the number of children (cid:66) semantic search guided by numerical properties (cid:66) semantic search guided by user provided bias (cid:66) semantic search that has no result (no meeting room has a realsense camera in the graph) (cid:66) search guided by node distance (cid:66) search guided by node distance (cid:66) evaluating constrained search, early termination once the two office are explored Table 9: Complex Search Instructions. Evaluated in Office Environment. 17 C.2.2 Home Environment Instruction (cid:66) semantic search guided by implicit world knowledge I need something to access ChatGPT. Where should I go? Find the livingroom that contains the most electronic devices. Find me something to eat with a lot of potassium. I left a sock in a bedroom and one in the living room. Locate them. They should match. Find me a potted plant that is most likely a cactus. Find the dining room with exactly 5 chairs. (cid:66) semantic search with termination implicitly condi- (cid:66) semantic search with termination conditioned on children with indirect information (cid:66) semantic search with termination conditioned on im- plicit world knowledge (cid:66) semantic search with multiple returns (cid:66) semantic search with termination implicitly condi- tioned on attribute Find me the bedroom closest to the home office. Find me a bedroom with an unusual amount of bowls. Which bedroom is empty. Which bathroom has the most potted plants. The kitchen is flooded. Find somewhere I can heat up my food. Find me the room which most likely belongs to a child 15 guests are arriving. Locate enough chairs to seat them. A vegetarian dinner was prepared in one of the dining rooms. Locate it. My tie is in one of the closets. Locate it. tioned on quantity of children (cid:66) semantic search with termination implicitly condi- tioned on node distance (cid:66) semantic search with termination implicitly condi- tioned on quantity of children (cid:66) semantic search with termination implicitly condi- tioned on quantity of children (cid:66) semantic search with termination implicitly condi- tioned on quantity of children (cid:66) semantic search guided by negation (cid:66) semantic search with termination conditioned on ability to deduce context from node children using world knowledge (cid:66) semantic search with termination implicitly condi- tioned on the quantity of specified node (cid:66) semantic search with selection criteria based on world knowledge (cid:66) evaluating constrained search that has no result, ter- mination after exploring closets Table 10: Complex Search Instructions. 
Evaluated in Home Environment. 18 C.3 Simple Planning Instruction Close Jason’s cabinet. Refrigerate the orange left on the kitchen bench. Take care of the dirty plate in the lunchroom. Place the printed document on Will’s desk. Peter is working hard at his desk. Get him a healthy snack. Hide one of Peter’s valuable belongings. Wipe the dusty admin shelf. There is coffee dripping on the floor. Stop it. Place Will’s drone on his desk. Move the monitor from Jason’s office to Filipe’s. My parcel just got delivered! Locate it and place it in the appropriate lab. Check if the coffee machine is working. Heat up the chicken kebab. Something is smelling in the kitchen. Dispose of it. Throw what the agent is holding in the bin. Table 11: Simple Planning Instructions. Evaluated in Office Environment. C.4 Long Horizon Planning Instruction Heat up the noodles in the fridge, and place it somewhere where I can enjoy it. Throw the rotting fruit in Dimity’s office in the correct bin. Wash all the dishes on the lunch table. Once finished, place all the clean cutlery in the drawer. Safely file away the freshly printed document in Will’s office then place the undergraduate thesis on his desk. Make Niko a coffee and place the mug on his desk. Someone has thrown items in the wrong bins. Correct this. Tobi spilt soda on his desk. Throw away the can and take him something to clean with. I want to make a sandwich. Place all the ingredients on the lunch table. A delegation of project partners is arriving soon. We want to serve them snacks and non-alcoholic drinks. Prepare everything in the largest meeting room. Use items found in the supplies room only. Serve bottled water to the attendees who are seated in meeting room 1. Each attendee can only receive a single bottle of water. Empty the dishwasher. Place all items in their correct locations Locate all 6 complimentary t-shirts given to the PhD students and place them on the shelf in admin. I’m hungry. Bring me an apple from Peter and a pepsi from Tobi. I’m at the lunch table. Let’s play a prank on Niko. Dimity might have something. There is an office which has a cabinet containing a rotten apple. The cabinet name contains an even number. Locate the office, throw away the fruit and get them a fresh apple. Table 12: Long-Horizon Planning Instructions. Evaluated in Office Environment. 19 D Full 3D Scene Graph: Office Environment Figure 5: 3D Scene Graph - Fully Expanded Office Environment. Full 3D scene graph exposing all the rooms, assets and objects available in the scene. Note that the LLM agent never sees all this information unless it chooses to expand every possible node without contraction. 
E Contracted 3D Scene Graph: Office Environment

Figure 6: 3D Scene Graph - Contracted Office Environment. Contracted 3D scene graph exposing only the highest level within the hierarchy - room nodes. This results in an 82.1% reduction in the number of tokens required to represent the scene before the semantic search phase.

F Semantic Search Evaluation Results

Full listings of the generated semantic search sequences for the evaluation instruction sets are provided in Tables 13-16.

Table 13: Simple Search Office Environment Evaluation. Sequence of Explored Nodes for Simple Search Office Environment Instructions.

Table 14: Complex Search Office Environment Evaluation. Sequence of Explored Nodes for Complex Search Office Environment Instructions.

Table 15: Simple Search Home Environment Evaluation. Sequence of Explored Nodes for Simple Search Home Environment Instructions.
Table 16: Complex Search Home Environment Evaluation. Sequence of Explored Nodes for Complex Search Home Environment Instructions.

[Table content, continued: explored-node sequences for the remaining home-environment search instructions including: "Find me the room which most likley belongs to a child."; "15 guests are arriving. Locate enough chairs to seat them."; "A vegetarian dinner was prepared in one of the dining rooms. Locate it."; "My tie is in one of the closets. Locate it."]

G Causal Planning Evaluation Results

In this section, we provide a detailed breakdown of the causal planning performance of SayPlan across the two sets of evaluation instructions. Tables 17 and 18 detail the correctness, executability and the number of iterative replanning steps it took to obtain an executable plan.

Instruction | Corr. | Exec. | No. of Replanning Iterations
Close Jason's cabinet. | ✓ | ✓ | 0
Refrigerate the orange left on the kitchen bench. | ✓ | ✓ | 0
Take care of the dirty plate in the lunchroom. | ✓ | ✓ | 0
Place the printed document on Will's desk. | ✓ | ✓ | 0
Peter is working hard at his desk. Get him a healthy snack. | ✗ | ✓ | 5
Hide one of Peter's valuable belongings. | ✓ | ✓ | 0
Wipe the dusty admin shelf. | ✓ | ✓ | 0
There is coffee dripping on the floor. Stop it. | ✓ | ✓ | 0
Place Will's drone on his desk. | ✓ | ✓ | 0
Move the monitor from Jason's office to Filipe's. | ✓ | ✓ | 0
My parcel just got delivered! Locate it and place it in the appropriate lab. | ✓ | ✓ | 0
Check if the coffee machine is working. | ✓ | ✓ | 0
Heat up the chicken kebab. | ✓ | ✓ | 1
Something is smelling in the kitchen. Dispose of it. | ✓ | ✓ | 0
Throw what the agent is holding in the bin. | ✓ | ✓ | 1

Table 17: Correctness, Executability and Number of Replanning Iterations for Simple Planning Instructions. Evaluating the performance of SayPlan on each simple planning instruction. Values indicated in red indicate that no executable plan was identified up to that number of iterative replanning steps. In this case, 5 was the maximum number of replanning steps.
Instruction | Corr. | Exec. | No. of Replanning Iterations
Heat up the noodles in the fridge, and place it somewhere where I can enjoy it. | ✓ | ✓ | 2
Throw the rotting fruit in Dimity's office in the correct bin. | ✓ | ✓ | 1
Wash all the dishes on the lunch table. Once finished, place all the clean cutlery in the drawer. | ✗ | ✓ | 2
Safely file away the freshly printed document in Will's office then place the undergraduate thesis on his desk. | ✓ | ✓ | 2
Make Niko a coffee and place the mug on his desk. | ✓ | ✓ | 0
Someone has thrown items in the wrong bins. Correct this. | ✗ | ✓ | 0
Tobi spilt soda on his desk. Throw away the can and take him something to clean with. | ✓ | ✓ | 3
I want to make a sandwich. Place all the ingredients on the lunch table. | ✓ | ✓ | 3
A delegation of project partners is arriving soon. We want to serve them snacks and non-alcoholic drinks. Prepare everything in the largest meeting room. Use items found in the supplies room only. | ✓ | ✓ | 2
Serve bottled water to the attendees who are seated in meeting room 1. Each attendee can only receive a single bottle of water. | ✓ | ✓ | 2
Empty the dishwasher. Place all items in their correct locations. | ✓ | ✓ | 2
Locate all 6 complimentary t-shirts given to the PhD students and place them on the shelf in admin. | ✓ | ✓ | 1
I'm hungry. Bring me an apple from Peter and a Pepsi from Tobi. I'm at the lunch table. | ✗ | ✗ | 5
Let's play a prank on Niko. Dimity might have something. | ✓ | ✓ | 1
There is an office which has a cabinet containing a rotten apple. The cabinet name contains an even number. Locate the office, throw away the fruit and get them a fresh apple. | ✗ | ✗ | 5

Table 18: Correctness, Executability and Number of Replanning Iterations for Long-Horizon Planning Instructions. Evaluating the performance of SayPlan on each long-horizon planning instruction. Values indicated in red indicate that no executable plan was identified up to that number of iterative replanning steps. In this case, 5 was the maximum number of replanning steps.

The full plan sequences generated by SayPlan and all the baseline methods for each of the above instructions are detailed in Table 19. Note the regions highlighted in red indicating the precise action where a plan failed.

[Table content: full task plan sequences generated by SayPlan, LLM-As-Planner and LLM+P for every simple and long-horizon planning instruction, with the failing action of each unsuccessful plan highlighted. For example, for "Close Jason's cabinet." all three methods produce goto(pose13) > goto(jasons_office) > access(cabinet5) > close(cabinet5).]

Table 19: Causal Planning Evaluation. Task planning action sequences generated for a mobile manipulator robot to follow for both the simple and long-horizon planning instruction sets.

H Scalability Ablation Study

In this study, we evaluate the ability of SayPlan and the underlying LLM to reason over larger-scale scene graphs. More specifically, as SayPlan's initial input is a collapsed 3DSG, we explore how increasing the number of nodes in this base environment impacts the ability of the LLM to attend to the relevant parts of the scene graph for both semantic search and iterative replanning.

Figure 7: Evaluating the performance of the underlying LLM's semantic search capabilities as the scale of the environment increases. For the office environment used in this study, we are primarily interested in the number of room nodes present in the collapsed form of the 3DSG.

Figure 8: Evaluating the performance of SayPlan's causal planning capabilities as the scale of the environment increases. For the office environment used in this study, we are primarily interested in the number of room nodes present in the collapsed form of the 3DSG.
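To make concrete how quickly a serialised scene graph grows with environment size, the sketch below builds a hypothetical collapsed 3DSG with a given number of room nodes and estimates its prompt length. The graph layout, node names and the characters-per-token heuristic are assumptions for illustration only (a model-specific tokenizer such as tiktoken would give exact counts); this is not the actual SayPlan environment or serialisation.

import json

# Hypothetical collapsed 3DSG: only room/pose nodes and their links are
# serialised; assets and objects stay hidden until a room is expanded.
def collapsed_3dsg(num_rooms):
    rooms = [{"id": f"room{i}"} for i in range(num_rooms)]
    poses = [{"id": f"pose{i}"} for i in range(num_rooms)]
    links = [f"room{i}<->pose{i}" for i in range(num_rooms)]
    return {"nodes": {"room": rooms, "pose": poses,
                      "agent": [{"location": "room0", "id": "agent"}]},
            "links": links}

def estimate_tokens(text):
    # Crude heuristic (~4 characters per token); replace with an exact
    # tokenizer for a specific model if precise budgeting is needed.
    return len(text) // 4

for n in (30, 60, 100, 200, 300):
    graph_text = json.dumps(collapsed_3dsg(n))
    print(n, "rooms ->", estimate_tokens(graph_text),
          "tokens (vs. an 8192-token context limit for GPT-4)")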
We note here that all the failures that occurred across both semantic search and iterative replanning were a result of the LLM's input exceeding the maximum token limit; in the case of GPT-4 this corresponded to 8192 tokens. With regard to scalability to larger environments, this is an important observation, as it indicates that the LLM's reasoning capabilities, and its ability to attend to the relevant parts of the 3DSG, are not significantly impacted by the presence of "noisy" nodes or an increasing number of nodes. One potential downside of larger environments, however, is the increased number of steps required before semantic search converges. As more semantically relevant floor or room nodes enter the scene, each one of these may be considered by the LLM for exploration.

[Figure content for Figures 7 and 8: success/failure outcomes across base environment sizes of 30, 60, 100, 200 and 300 room nodes. Figure 7 covers simple search instructions ("Find me a carrot.", "Find me a book that was left next to a robotic gripper.", "Find me a ripe banana.") and complex search instructions ("Find object J64M. J64M should be kept below 0 degree Celsius.", "Find me something non-vegetarian.", "There is postdoc who has a pet Husky. Find their desk."). Figure 8 covers simple planning instructions ("Close Jason's cabinet.", "Hide one of Peter's valuable belongings.", "Something is smelling in the kitchen. Dispose of it.") and long-horizon planning instructions ("Heat up the noodles in the fridge, and place it somewhere where I can enjoy it.", "Let's play a prank on Niko. Dimity might have something.", "Tobi spilt soda on his desk. Throw away the can and take him something to clean with.").]

I Real World Execution of a Generated Long Horizon Plan

Figure 9: Real World Execution of a Generated Long Horizon Plan. Execution of a generated and validated task plan on a real-world mobile manipulator robot.

[Figure content: for the instruction "a postdoc spilled their soda, help them clean it up", the generated plan navigates to postdoc_bay4, picks up soda_can2 from desk31, releases it at the trash_can, collects a tea_towel from the kitchen_bench, and returns it to desk31.]

J Input Prompt Structure

Input prompt passed to the LLM for SayPlan. Note that the components highlighted in violet represent static components of the prompt that remain fixed throughout both the semantic search and iterative replanning phases of SayPlan.

Agent Role: You are an excellent graph planning agent. Given a graph representation of an environment, you can explore the graph by expanding nodes to find the items of interest. You can then use this graph to generate a step-by-step task plan that the agent can follow to solve a given instruction.

Environment Functions:
goto(<pose>): Move the agent to any room node or pose node.
access(<asset>): Provide access to the set of affordances associated with an asset node and its connected objects.
pickup(<object>): Pick up an accessible object from the accessed node.
release(<object>): Release grasped object at an asset node.
turn_on/off(<object>): Toggle object at agent's node, if accessible and has affordance.
open/close(<asset>): Open/close asset at agent's node, affecting object accessibility.
done(): Call when the task is completed.
Environment State:
ontop_of(<asset>): Object is located on <asset>
inside_of(<asset>): Object is located inside <asset>
inside_hand: Object is currently being grasped by the robot/agent
closed: Asset can be opened
open: Asset can be closed or kept open
on: Asset is currently on
off: Asset is currently off
accessible: The object is not accessible if it is inside an asset and the asset state is "closed".

Environment API:
expand_node(<node>): Reveal assets/objects connected to a room/floor node.
contract_node(<node>): Hide assets/objects, reducing graph size for memory constraints.
verify_plan(): Verify generated plan in the scene graph environment.

Output Response Format:
{chain_of_thought: break your problem down into a series of intermediate reasoning steps to help you determine your next command,
reasoning: justify why the next action is important,
mode: "exploring" OR "planning",
command: {"command_name": Environment API call,
"node_name": node to perform an operation on,
"plan": task plan if in planning mode}}

Example: <see Appendix K and L>
Instruction: Natural language description of the task
3D Scene Graph: Text-serialised JSON description of a 3D scene graph
Memory: History of previously expanded nodes
Feedback: External textual feedback from scene graph simulator

Ensure the response can be parsed by Python json.loads.
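As a sketch of how such a response might be consumed, the snippet below parses one LLM turn with json.loads (as the prompt requires) and dispatches it. The simulator object and its methods are assumed stand-ins for the scene graph simulator's expand_node, contract_node and plan-verification functionality, not SayPlan's actual implementation; passing the plan explicitly to verify_plan is likewise an illustrative choice.

import json

# Hypothetical dispatcher for one LLM turn, following the Output Response
# Format above. Command names mirror the Environment API listed in the
# prompt; `simulator` is an assumed interface, not the authors' code.
def handle_llm_turn(raw_response, simulator):
    reply = json.loads(raw_response)          # prompt requires json.loads-parsable output
    mode = reply["mode"]                      # "exploring" or "planning"
    command = reply["command"]

    if mode == "exploring":
        name, node = command["command_name"], command["node_name"]
        if name == "expand_node":
            return simulator.expand_node(node)     # reveals assets/objects in a room
        if name == "contract_node":
            return simulator.contract_node(node)   # hides them again to save tokens
    elif mode == "planning":
        # Replays the proposed action sequence in the simulator and returns
        # textual feedback (e.g. "coffee mug is not accessible").
        return simulator.verify_plan(command["plan"])
    raise ValueError(f"unrecognised mode or command: {mode}, {command}")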
K Example Input-Output Interaction Between LLM and Scene Graph Simulator During Semantic Search

Instruction: make a coffee for Tom and place it in his room

3D Scene Graph: {nodes: {room: [{id: bobs_room}, {id: toms_room}, {id: jacks_room}, {id: kitchen}, {id: livingroom}], pose: [{id: pose1}, {id: pose2}, {id: pose3}, {id: pose4}, {id: pose5}], agent: [{location: bobs_room, id: agent}]}, links: [bobs_room↔pose1, bobs_room↔agent, toms_room↔pose1, toms_room↔pose2, toms_room↔pose5, jacks_room↔pose2, jacks_room↔pose3, kitchen↔pose3, kitchen↔pose4, kitchen↔pose5, livingroom↔pose4]}

Memory: []

SayPlan [Mode->Search]:
chain-of-thought: explore graph for coffee mug, coffee machine and somewhere to place the coffee -> perform common sense exploration first -> explore toms room -> explore kitchen -> if items not found then explore other nodes
reasoning: i will expand toms room
command: expand(toms_room)

Scene Graph Simulator: <Updated 3D scene graph>
Memory: [toms_room]

SayPlan [Mode->Search]:
chain-of-thought: i have found a wardrobe in tom's room -> leave this node expanded -> the coffee mug is not in his room -> still have not found the coffee machine -> kitchen might have coffee machine and coffee mug -> explore this node next
reasoning: i will expand the kitchen next
command: expand(kitchen)

Scene Graph Simulator: <Updated 3D scene graph>
Memory: [toms_room, kitchen]

SayPlan [Mode->Search]:
chain-of-thought: i've found the coffee machine in the kitchen -> the coffee mug however is not in the kitchen -> i will explore other unexpanded nodes now for the coffee mug -> jacks room, bobs room and the living room are unexplored yet -> i will try jack's room next
reasoning: i will expand jacks room next
command: expand(jacks_room)

Scene Graph Simulator: <Updated 3D scene graph>
Memory: [toms_room, kitchen, jacks_room]

SayPlan [Mode->Search]:
chain-of-thought: jacks room does not have anything relevant towards solving this task -> I should contract this node to save token memory -> i will explore other unexplored nodes after doing this
reasoning: contract jack's room to save memory
command: contract(jacks_room)

Scene Graph Simulator: <Updated 3D scene graph>
Memory: [toms_room, kitchen, jacks_room]

SayPlan [Mode->Search]:
chain-of-thought: i have found a wardrobe in tom's room and coffee machine in the kitchen -> the coffee mug is still required for the task -> explore a new room next
reasoning: i will expand bob's room as it is unexplored
command: expand(bobs_room)

Scene Graph Simulator: <Updated 3D scene graph>
Memory: [toms_room, kitchen, jacks_room, bobs_room]

SayPlan [Mode->Search]:
chain-of-thought: i have found the coffee mug, the coffee machine and tom's wardrobe on the graph -> search complete -> I will switch to planning mode -> collect coffee mug -> generate plan for making coffee -> place coffee mug on Tom's wardrobe
reasoning: I will generate a task plan using the identified subgraph

L Example Input-Output Interaction Between LLM and Scene Graph Environment During Iterative Replanning

Instruction: make a coffee for Tom and place it in his room

3D Scene Graph: {nodes: {room: [{id: bobs_room}, {id: toms_room}, {id: jacks_room}, {id: kitchen}, {id: livingroom}], pose: [{id: pose1}, {id: pose2}, {id: pose3}, {id: pose4}, {id: pose5}], agent: [{location: bobs_room, id: agent}], asset: [{room: toms_room, state: free, affordances: [release], id: bed2}, {room: toms_room, state: closed, affordances: [open, close, release], id: wardrobe2}, {room: kitchen, state: closed, affordances: [open, close, release], id: fridge}, {room: kitchen, affordances: [turn_on, turn_off], state: off, id: coffee_machine}, {room: bobs_room, state: free, affordances: [release], id: bed1}, {room: bobs_room, state: closed, affordances: [open, close, release], id: wardrobe1}], object: [{affordances: [pickup], state: inside_of(wardrobe1), attributes: "blue", id: coffee_mug}]}, links: [bobs_room↔pose1, bobs_room↔agent, bobs_room↔bed1, bobs_room↔wardrobe1, toms_room↔pose1, toms_room↔pose2, toms_room↔pose5, toms_room↔bed2, toms_room↔wardrobe2, jacks_room↔pose2, jacks_room↔pose3, kitchen↔pose3, kitchen↔pose4, kitchen↔pose5, kitchen↔fridge, kitchen↔coffee_machine, livingroom↔pose4, wardrobe1↔coffee_mug]}

Memory: [toms_room, kitchen, jacks_room, bobs_room]

SayPlan [Mode->Planning]:
chain-of-thought: "i have found the coffee mug, the coffee machine and tom's wardrobe on the graph -> search complete -> I will switch to planning mode -> collect coffee mug -> generate plan for making coffee -> place coffee mug on Tom's wardrobe"
reasoning: I will generate a task plan using the identified subgraph
plan: [goto(bobs_room) > access(wardrobe1) > pickup(coffee_mug) > goto(kitchen) > access(coffee_machine) > release(coffee_mug) > turn_on(coffee_machine) > turn_off(coffee_machine) > pickup(coffee_mug) > goto(toms_room) > access(wardrobe2) > release(coffee_mug) > done]

Path Planner: plan: [goto(pose3) > goto(pose4) > goto(pose1) > goto(bobs_room) > access(wardrobe1) > pickup(coffee_mug) > goto(pose2) > goto(pose5) > goto(kitchen) > access(coffee_machine) > release(coffee_mug) > turn_on(coffee_machine) > turn_off(coffee_machine) > pickup(coffee_mug) > goto(pose5) > goto(pose2) > goto(toms_room) > access(wardrobe2) > release(coffee_mug) > done]

Scene Graph Simulator (Feedback): "coffee mug is not accessible"
Memory: [toms_room, kitchen, jacks_room, bobs_room]
SayPlan [Mode->Planning]:
chain-of-thought: "the coffee mug was not accessible -> the wardrobe it is in is currently closed -> I update the plan to open the wardrobe first -> generate plan for making coffee -> place coffee mug on Tom's wardrobe"
reasoning: I will generate a task plan using the identified subgraph and the feedback from the simulator
plan: [goto(bobs_room) > access(wardrobe1) > open(wardrobe1) > pickup(coffee_mug) > goto(kitchen) > access(coffee_machine) > release(coffee_mug) > turn_on(coffee_machine) > turn_off(coffee_machine) > pickup(coffee_mug) > goto(toms_room) > access(wardrobe2) > release(coffee_mug) > done]

Path Planner: plan: [goto(pose3) > goto(pose4) > goto(pose1) > goto(bobs_room) > access(wardrobe1) > open(wardrobe1) > pickup(coffee_mug) > goto(pose2) > goto(pose5) > goto(kitchen) > access(coffee_machine) > release(coffee_mug) > turn_on(coffee_machine) > turn_off(coffee_machine) > pickup(coffee_mug) > goto(pose5) > goto(pose2) > goto(toms_room) > access(wardrobe2) > release(coffee_mug) > done]

Scene Graph Simulator: Plan Verified
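The interaction pattern illustrated in Appendices K and L can be summarised as a search-then-plan loop with verification feedback. The sketch below is a minimal illustration that folds semantic search and iterative replanning into one loop for brevity; llm, simulator and path_planner are assumed interfaces rather than the authors' code, and the replanning budget of 5 follows the evaluation setting above.

# Minimal sketch of the SayPlan-style loop shown in Appendices K and L.
def say_plan(instruction, scene_graph, llm, simulator, path_planner, max_replans=5):
    memory, feedback, replans = [], "", 0
    while replans <= max_replans:
        turn = llm(instruction=instruction, scene_graph=scene_graph,
                   memory=memory, feedback=feedback)
        if turn["mode"] == "exploring":
            # expand_node / contract_node update the graph the LLM sees next
            scene_graph = simulator.apply(turn["command"])
            memory.append(turn["command"]["node_name"])
            continue
        # planning mode: interleave pose-level navigation, then verify
        full_plan = path_planner.expand(turn["plan"])
        feedback = simulator.verify_plan(full_plan)   # e.g. "coffee mug is not accessible"
        if feedback == "Plan Verified":
            return full_plan
        replans += 1
    return None  # no executable plan found within the replanning budget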
synthetic_cpt
1
Pretrained_Language_Models_for_Semantics-Aware_Data_Harmonisation_of_Observational_Clinical_Studies_in_the_Era_of_Big_Data.pdf
Explicit Pairwise Word Interaction Modeling Improves Pretrained Transformers for English Semantic Similarity Tasks

Yinan Zhang, Raphael Tang, and Jimmy Lin
David R. Cheriton School of Computer Science
University of Waterloo

Abstract

In English semantic similarity tasks, classic word embedding-based approaches explicitly model pairwise "interactions" between the word representations of a sentence pair. Transformer-based pretrained language models disregard this notion, instead modeling pairwise word interactions globally and implicitly through their self-attention mechanism. In this paper, we hypothesize that introducing an explicit, constrained pairwise word interaction mechanism to pretrained language models improves their effectiveness on semantic similarity tasks. We validate our hypothesis using BERT on four tasks in semantic textual similarity and answer sentence selection. We demonstrate consistent improvements in quality by adding an explicit pairwise word interaction module to BERT.

1 Introduction

A substantial body of literature in the field of natural language processing is devoted to the architectural design of word embedding-based neural networks. Over the years, painstaking progress has been made toward developing the most effective network components. Important advancements include hierarchical attention (Yang et al., 2016), multi-perspective convolutions (He et al., 2015), and tree-structured networks (Tai et al., 2015).

With the rise of the transformer-based pretrained language models, however, many of these components have been all but forgotten. Nowadays, the dominant paradigm is to pretrain a transformer (Vaswani et al., 2017) on large text corpora, then fine-tune on a broad range of downstream single-sentence and sentence-pair tasks alike. Prominent examples include BERT (Devlin et al., 2019) and XLNet (Yang et al., 2019), which currently represent the state of the art across many natural language understanding tasks.

Self-evidently, these models dispense with much of the age-old wisdom that has so well guided the design of neural networks in the past. Perhaps, that's the beauty of it all: a simple, universal architecture just "works." However, it certainly begs the following question: what neural architectural design choices can we use from the past?

In this paper, we precisely explore this question in the context of semantic similarity modeling for English. For this task, one important component is the very deep pairwise word interaction (VDPWI) module, first introduced in He and Lin (2016), which serves as a template for many succeeding works (Lan and Xu, 2018). Conceptually, they propose to explicitly compute pairwise distance matrices for the distinct word representations of the two sentences. The matrices are then fed into a convolutional neural network, which treats semantic similarity modeling as a pattern recognition problem. Clearly, transformers lack such an explicit mechanism, instead modeling pairwise word interactions in an unconstrained, implicit manner through self-attention.

We take the anachronistic position that the pairwise word interaction module is still useful. Concretely, we hypothesize that appending this module to pretrained transformers increases their effectiveness in semantic similarity modeling—we argue that this module is more than a historical artifact.
Using BERT (Devlin et al., 2019), a pretrained transformer-based language model, we validate our hypothesis on four tasks in semantic textual similarity and answer sentence selection.

Our core contribution is that, to the best of our knowledge, we are the first to explore whether incorporating the pairwise word interaction module improves pretrained transformers for semantic similarity modeling. We consistently improve the effectiveness of BERT on all four tasks by adding an explicit pairwise word interaction module.

Figure 1: An illustration of BERT for sentence-pair tasks, taken from Devlin et al. (2019).

2 Background and Related Work

Presently, the predominant approach to many NLP tasks is to first train an expressive language model (LM) on large text corpora, then fine-tune it on downstream task-specific data. One of the pioneers of this approach, Peters et al. (2018) pretrain their bidirectional long short-term memory network (BiLSTM; Hochreiter and Schmidhuber, 1997), called ELMo, on the Billion Word Corpus (Chelba et al., 2014). Then, for each task-specific neural network, they use the contextualized LM embeddings in place of the usual GloVe- or word2vec-based word embeddings (Pennington et al., 2014; Mikolov et al., 2013), fine-tuning the entire model end-to-end. Using this method, they achieve state of the art across question answering, sentiment classification, and textual entailment.

Pretrained transformers. Recent transformer-based pretrained language models (Vaswani et al., 2017) disregard the task-specific neural network altogether. Instead, the language model is the downstream model. Devlin et al. (2019) are the first to espouse this approach, calling their bidirectional transformer-based model BERT. They pretrain BERT using a cloze and next sentence prediction task on Wikipedia and BooksCorpus (Zhu et al., 2015), then swap out the LM output layer with a task-specific one at fine-tuning time.

Concretely, during fine-tuning, a word-tokenized sentence pair s1 = {w11, . . . , wn1} and s2 = {w12, . . . , wm2} is first encoded as [CLS] ⊕ s1 ⊕ [SEP] ⊕ s2 ⊕ [SEP], where ⊕ denotes concatenation, and [CLS] and [SEP] are special class and separator tokens. Next, BERT ingests the input into a sequence of layers composed of nonlinear positionwise operators and multiheaded self-attention mechanisms, matching the transformer model—see Vaswani et al. (2017) for specific details. Crucially, the pairwise word interaction modeling occurs in the self-attention mechanisms, defined as

Attn(Q, K, V) = softmax(c QK^T) V
SelfAttn(X^(l)) = Attn(W_q X^(l), W_k X^(l), W_v X^(l))

where c is a scaling constant, W_q, W_k, W_v ∈ R^{d×h} are linear operators, and X^(l) ∈ R^{h×L} is the stacked word representations at layer l across an input of length L. A minor point is that, for multiheaded attention, there are h/d attention heads (d divides h), the output representations of which are concatenated. The key point is that this mechanism models pairwise context in a global and unconstrained manner; that is, any pair of words—even among the same sentence or the same word itself—is free to attend to each other.

Finally, for classification tasks, BERT passes the final representation of the [CLS] token through a softmax layer across the classes—see Figure 1. The entire model is fine-tuned end-to-end.
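To make the formulation above concrete, the following is a minimal PyTorch sketch of single-head scaled dot-product self-attention over a length-L input. It takes c = 1/sqrt(d), the usual choice (the paper only states that c is a scaling constant), and it is an illustration of the mechanism rather than BERT's actual multi-head implementation.

import torch
import torch.nn.functional as F

def self_attention(X, W_q, W_k, W_v):
    """Single-head scaled dot-product self-attention.
    X: (h, L) stacked word representations; W_*: (d, h) projections.
    Follows Attn(Q, K, V) = softmax(c QK^T) V, up to the column-major layout used here."""
    Q, K, V = W_q @ X, W_k @ X, W_v @ X         # each (d, L)
    d = Q.size(0)
    scores = (Q.t() @ K) / d ** 0.5             # (L, L): every position attends to every position
    weights = F.softmax(scores, dim=-1)         # rows sum to 1
    return V @ weights.t()                      # (d, L) contextualised outputs

# toy usage: hidden size h = 8, head size d = 4, sequence length L = 5
h, d, L = 8, 4, 5
X = torch.randn(h, L)
W_q, W_k, W_v = torch.randn(d, h), torch.randn(d, h), torch.randn(d, h)
out = self_attention(X, W_q, W_k, W_v)          # shape (4, 5)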
3 Our Approach

Given a tokenized sentence pair s1 and s2, He and Lin (2016) first embed each word using shallow GloVe word embeddings (Pennington et al., 2014), pretrained on Wikipedia and GigaWord-5. They then use BiLSTMs for modeling the context of the input sentences, obtaining forward and backward context vectors u^f_{1:|s1|}, u^b_{1:|s1|} and v^f_{1:|s2|}, v^b_{1:|s2|} for s1 and s2, respectively—the superscript indicates directionality: f for forward and b for backward.

Pairwise interaction layer. From these context vectors, the distances between all context vectors across both sentences are computed to obtain a similarity cube (SimCube) of size R^{4×k×|s1|×|s2|}, where k is the length of the similarity vector:

SimCube[1, :, i, j] = coU(u^f_i, v^f_j)
SimCube[2, :, i, j] = coU(u^b_i, v^b_j)
SimCube[3, :, i, j] = coU(u^f_i + u^b_i, v^f_j + v^b_j)
SimCube[4, :, i, j] = coU(u^f_i ⊕ u^b_i, v^f_j ⊕ v^b_j)

He and Lin (2016) define the comparison unit (coU) as coU(u, v) = [δ(u, v), ||u − v||_2, u · v], where δ denotes the cosine distance between two vectors. The similarity cube is finally reshaped into R^{4k×|s1|×|s2|}. To reduce the effects of unimportant interactions, He and Lin (2016) further apply a pairwise focus function and reduce their corresponding magnitudes by a factor of ten.

Classification. The problem is then converted to a pattern recognition one, where a 19-layer convolutional neural network models the patterns of strong pairwise interactions in the similarity cube. A final softmax layer is used for classification.
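Under the definitions above, a minimal sketch of the pairwise interaction layer might look as follows, with k = 3 comparison features and the pairwise focus function omitted; tensor shapes and helper names are illustrative rather than the authors' implementation.

import torch

def coU(u, v):
    """Comparison unit with k = 3 features: cosine distance, L2 distance, dot product."""
    dot = (u * v).sum(dim=-1)
    cos = dot / (u.norm(dim=-1) * v.norm(dim=-1) + 1e-8)
    l2 = (u - v).norm(dim=-1)
    return torch.stack([1.0 - cos, l2, dot], dim=-1)       # (..., 3)

def similarity_cube(u_f, u_b, v_f, v_b):
    """Build the 4 x k x |s1| x |s2| cube from BiLSTM context vectors.
    u_*: (|s1|, h); v_*: (|s2|, h)."""
    ui = lambda x: x.unsqueeze(1)                           # (|s1|, 1, h), broadcasts over j
    vj = lambda x: x.unsqueeze(0)                           # (1, |s2|, h), broadcasts over i
    planes = [
        coU(ui(u_f), vj(v_f)),                              # forward vs. forward
        coU(ui(u_b), vj(v_b)),                              # backward vs. backward
        coU(ui(u_f + u_b), vj(v_f + v_b)),                  # summed directions
        coU(ui(torch.cat([u_f, u_b], dim=-1)),              # concatenated directions
            vj(torch.cat([v_f, v_b], dim=-1))),
    ]
    return torch.stack(planes).permute(0, 3, 1, 2)          # (4, 3, |s1|, |s2|)

# toy usage with hidden size h = 4
s1_len, s2_len, h = 6, 5, 4
cube = similarity_cube(torch.randn(s1_len, h), torch.randn(s1_len, h),
                       torch.randn(s2_len, h), torch.randn(s2_len, h))
assert cube.shape == (4, 3, s1_len, s2_len)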
3.1 BERT with VDPWI

We use the same procedure as He and Lin (2016) for word interaction modeling, except that we feed sentence input pairs to BERT (Devlin et al., 2019) for context modeling as the first step. The contextualized embeddings from BERT are used in the downstream model for constructing the similarity cube, and the entire model is fine-tuned end-to-end.

Sentence encoding schemes. We also explore the effectiveness of different encoding methods, as well as the contribution of the BiLSTMs in our experimental settings:

• Joint vs. separate encoding: we jointly or separately encode the sentence pair for BERT.
• Removing the BiLSTM: we experiment with keeping or removing the BiLSTM.

In the first scheme, for joint encoding, we concatenate the tokens from the two sentences and use the regular [SEP] token to mark the end of the first sentence. For separate encoding, we feed the sentences to BERT one at a time, so the two sentences do not interact with each other before the explicit interaction modeling.

In the second scheme, our motivation for removing the BiLSTM is that pretrained transformers already provide deep contextualized word embeddings, so further context modeling may be unnecessary—we may need to perform explicit pairwise word interaction modeling only. Note that, since different forward and backward context vectors exist only with the BiLSTM, the SimCube without BiLSTMs is in R^{k×|s1|×|s2|}.

We represent separate and joint encoding for BERTBASE by appending "SEP" or "JOINT", respectively, to the subscript of the model name. We indicate the removal of the BiLSTM by appending "− BiLSTM" to the name.

4 Experimental Setup

We run our experiments on machines with two Titan V GPUs and CUDA v10.0. Our models are implemented in PyTorch v1.2.0.

4.1 Datasets

We conduct experiments on two question-answering (QA) datasets and two semantic similarity datasets, all in English:

WikiQA (Yang et al., 2015) comprises question–answer pairs from Bing query logs. We follow their preprocessing procedure to filter out questions with no correct candidate answer sentences, after which 12K binary-labeled pairs are left.

TrecQA (Wang et al., 2007) is an open-domain QA dataset from information retrieval conferences, consisting of 56K question–answer pairs.

STS-B. The Semantic Textual Similarity Benchmark (STS-B; Cer et al., 2017) contains sentence pairs drawn from news headlines, video and image captions, and natural language inference data. Human annotators assign to each pair a similarity score between one and five, inclusive.

SICK (Marelli et al., 2014) consists of 10K sentence pairs originally from Task 1 of the SemEval 2014 competition. A similarity score between one and five, inclusive, is provided for each pair.

SICK and STS-B are evaluated using Pearson's r and Spearman's ρ, and TrecQA and WikiQA using mean average precision (MAP) and mean reciprocal rank (MRR).

4.2 Training and Hyperparameters

For fine-tuning BERT, we follow a similar procedure to Devlin et al. (2019). Specifically, we perform grid search across the learning rate in {5, 4, 3, 2} × 10^-5 and the number of epochs in {5, 4, 3, 2}, choosing the configuration with the best development set scores. Following the original setup, we use the Adam optimizer (Kingma and Ba, 2014) with a batch size of 32. For our experiments on SICK and STS-B, which use noncategorical scores, we minimize the Kullback–Leibler divergence, while we use the NLL loss on WikiQA and TrecQA, which are classification tasks; these objective functions are standard on these datasets (He and Lin, 2016).

For training the pairwise word interaction model, following He and Lin (2016), we use the RMSProp optimizer (Tieleman and Hinton, 2012) with a batch size of 8. To tune the hyperparameters on the development set, we run random search across learning rates in the interval [5 × 10^-5, 5 × 10^-4] and number of epochs between 3 and 15, inclusive.
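The two training objectives named above can be sketched as follows. The mapping from a scalar similarity score to a distribution over the integer scores is a common construction in prior work (e.g., Tai et al., 2015) that this paper does not spell out, so it is an assumption here rather than the authors' exact setup.

import torch
import torch.nn.functional as F

# For STS-B and SICK, a scalar gold score y in [1, 5] is mapped onto a
# distribution over the integer scores 1..5 (an assumed construction) and
# the model's log-probabilities are trained with KL divergence.
def score_to_distribution(y, num_classes=5):
    target = torch.zeros(num_classes)
    lower = int(y)                    # e.g. y = 3.6 -> 0.4 mass on score 3, 0.6 on score 4
    if lower >= num_classes:
        target[num_classes - 1] = 1.0
    else:
        target[lower - 1] = lower + 1 - y
        target[lower] = y - lower
    return target

def similarity_loss(log_probs, gold_scores):
    # log_probs: (batch, 5) log-probabilities over integer similarity scores
    targets = torch.stack([score_to_distribution(float(y)) for y in gold_scores])
    return F.kl_div(log_probs, targets, reduction="batchmean")

def relevance_loss(log_probs, labels):
    # WikiQA / TrecQA are binary answer-selection tasks: plain NLL loss
    return F.nll_loss(log_probs, labels)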
# | Model | STS-B r/ρ | WikiQA MAP/MRR | TrecQA MAP/MRR | SICK r/ρ
1 | PWIM (Liu et al., 2019a) | 74.4/71.8 | 70.9/72.3 | 75.9/82.2 | 87.1/80.9
2 | BERTBASE (Devlin et al., 2019) | 84.7/83.9 | 76.3/77.6 | 81.2/86.2 | 87.9/82.3
3 | BERTBASE, SEP + PWIM | 84.7/83.9 | 70.5/71.6 | 69.2/72.4 | 88.0/83.6
4 | BERTBASE, JOINT + PWIM | 84.7/83.9 | 76.6/78.0 | 83.7/87.9 | 88.5/83.8
5 | BERTBASE, SEP + PWIM − BiLSTM | 85.2/84.0 | 70.6/72.0 | 68.7/72.5 | 88.5/83.7
6 | BERTBASE, JOINT + PWIM − BiLSTM | 85.0/83.7 | 73.0/74.5 | 82.7/87.5 | 88.8/84.0

Table 1: Test results on different datasets. Best results are bolded; second best underlined.

5 Results

We present our results in Table 1. The original VDPWI model results (first row) for WikiQA, TrecQA, and SICK are copied from Liu et al. (2019a), while we train their model on STS-B, which they do not use. The second row is the result from directly fine-tuning BERT on the four datasets. We report our BERT with VDPWI results in rows 3–6.

5.1 Model Quality

For all four datasets, we find that adding explicit PWI modeling improves the effectiveness of BERT in the original joint encoding scheme—see rows 2 and 4, where we observe an average improvement of 0.9 points. The one-sided Wilcoxon signed-rank (WSR) test reveals that this difference is statistically significant (p < 0.05).

Although no single setting achieves the best result on all datasets—i.e., the best numbers appear in different rows in the table—two of our methods (rows 4 and 6) consistently improve upon the original BERT (row 2). Differences between BERT (row 2) and BERT with VDPWI without the BiLSTM (row 6) are not statistically significant according to the one-sided WSR test (p > 0.05).

5.2 Encoding Scheme Analysis

For joint versus separate sentence encoding schemes, we observe that, on all but STS-B, joint encoding achieves better results than separate encoding—see rows 3 and 5, which represent the separate encoding scheme, and rows 4 and 6, which represent the joint scheme. With or without the BiLSTM, we find that separate encoding results in a degenerate solution on TrecQA, where the model underperforms the original nonpretrained model (row 1)—the gap between separate and joint encoding can be up to 14 points. Adjusting for multiple comparisons using the Holm–Bonferroni correction, one-sided WSR tests reveal significant differences (p < 0.05) between all four separate–joint encoding pairs, except for the jointly encoded BERT with VDPWI (row 4) and the separately encoded BERT with the BiLSTM-removed VDPWI (row 5; p > 0.05). We conclude that, to avoid potentially degenerate solutions, jointly encoding the sentences is necessary.

For the BiLSTM ablation experiments, we do not find a detectably significant difference in keeping or removing the BiLSTM according to the two-sided WSR test (p > 0.05), corrected using the Holm–Bonferroni method. Additionally, the magnitudes of the differences in the results are minor—compare rows 3 and 5, and 4 and 6. We conclude that incorporating the BiLSTM may not be entirely necessary; the pairwise interaction layer and convolutional classifier stack suffices.

6 Conclusions and Future Work

We explore incorporating explicit pairwise word interaction modeling into BERT, a pretrained transformer-based language model. We demonstrate its effectiveness on four tasks in English semantic similarity modeling. We find consistent improvements in quality across all datasets. One line of future work involves applying other neural network modules within and on top of pretrained language models. Another obvious extension to this work is to examine other pretrained transformers, such as RoBERTa (Liu et al., 2019b) and XLNet (Yang et al., 2019).

Acknowledgments

This research was supported by the Natural Sciences and Engineering Research Council (NSERC) of Canada, and enabled by computational resources provided by Compute Ontario and Compute Canada.

References

Daniel Cer, Mona Diab, Eneko Agirre, Inigo Lopez-Gazpio, and Lucia Specia. 2017. SemEval-2017 task 1: Semantic textual similarity multilingual and crosslingual focused evaluation. In Proceedings of the 11th International Workshop on Semantic Evaluation.

Ciprian Chelba, Tomas Mikolov, Mike Schuster, Qi Ge, Thorsten Brants, Phillipp Koehn, and Tony Robinson. 2014. One billion word benchmark for measuring progress in statistical language modeling. In Fifteenth Annual Conference of the International Speech Communication Association.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies.

Hua He, Kevin Gimpel, and Jimmy Lin. 2015. Multi-perspective sentence similarity modeling with convolutional neural networks. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing.

Hua He and Jimmy Lin. 2016. Pairwise word interaction modeling with deep neural networks for semantic similarity measurement. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies.
2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies.

Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural Computation.

Diederik P. Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv:1412.6980.

Wuwei Lan and Wei Xu. 2018. Neural network models for paraphrase identification, semantic textual similarity, natural language inference, and question answering. In Proceedings of the 27th International Conference on Computational Linguistics.

Linqing Liu, Wei Yang, Jinfeng Rao, Raphael Tang, and Jimmy Lin. 2019a. Incorporating contextual and syntactic structures improves semantic similarity modeling. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing.

Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019b. RoBERTa: A robustly optimized BERT pretraining approach. arXiv:1907.11692.

Marco Marelli, Luisa Bentivogli, Marco Baroni, Raffaella Bernardi, Stefano Menini, and Roberto Zamparelli. 2014. SemEval-2014 task 1: Evaluation of compositional distributional semantic models on full sentences through semantic relatedness and textual entailment. In Proceedings of the 8th International Workshop on Semantic Evaluation.

Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S. Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems.

Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. GloVe: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing.

Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies.

Kai Sheng Tai, Richard Socher, and Christopher D. Manning. 2015. Improved semantic representations from tree-structured long short-term memory networks. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing.

Tijmen Tieleman and Geoffrey Hinton. 2012. Lecture 6.5-RMSProp, Coursera: Neural networks for machine learning. Technical Report.

Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems.

Mengqiu Wang, Noah A. Smith, and Teruko Mitamura. 2007. What is the Jeopardy model? A quasi-synchronous grammar for QA. In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning.

Yi Yang, Wen-tau Yih, and Christopher Meek. 2015. WikiQA: A challenge dataset for open-domain question answering. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing.

Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, and Quoc V. Le. 2019. XLNet: Generalized autoregressive pretraining for language understanding. arXiv:1906.08237.
Zichao Yang, Diyi Yang, Chris Dyer, Xiaodong He, Alex Smola, and Eduard Hovy. 2016. Hierarchical attention networks for document classification. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies. Yukun Zhu, Ryan Kiros, Rich Zemel, Ruslan Salakhut- dinov, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. 2015. Aligning books and movies: Towards story-like visual explanations by watching movies and reading books. In Proceedings of the IEEE In- ternational Conference on Computer Vision.
The General sampling theorem, Compressed sensing and a method of image sampling and reconstruction with sampling rates close to the theoretical limit

L. Yaroslavsky
School of Electrical Engineering, Tel Aviv University, Tel Aviv, Israel
E-mail: [email protected]

Abstract
The article addresses the problem of image sampling with minimal possible sampling rates and reviews the recent advances in sampling theory and methods: modern formulations of the sampling theorems, potentials and limitations of Compressed sensing methods and a practical method of image sampling and reconstruction with sampling rates close to the theoretical minimum.

Keywords: Sampling, Sampling theory, Sampling theorem, Sampling rate, Compressive sensing, Underdetermined inverse problems

1. Introduction

Sampling is the very first step in digital imaging. The fundamental part of its theoretical base is sampling theory. The origins of the sampling theory date back to the 1920s-1940s, to the classical publications by H. Nyquist, V. A. Kotelnikov and C. Shannon ([1], [2], [3]). The classical theory is based on the concept of band-limited signals, i.e. signals whose Fourier spectrum is non-zero only within a finite frequency interval in the signal Fourier domain. In the last two decades, the needs of the development of digital imaging engineering inspired new advances in the sampling theory associated with the notions of signal spectrum sparsity, Compressed sensing, and methods of signal sampling with sampling rates close to the theoretical limits ([4], [5], [6], [7], [8], [9], [10], [11], [12], [13], [14], [15]). The article represents a brief review of recent publications on these advances.

Section 2 presents classical and modern formulations of the sampling theorem. In Section 3 the General sampling theorem is formulated and the minimal sampling rate is evaluated using the concept of signal sub-band decomposition. Section 4 presents an alternative derivation of the General sampling theorem based on a discrete signal model and the Discrete sampling theorem. In Section 5 the ubiquitous compressibility of digital images acquired by conventional imaging devices is discussed. Section 6 briefly reviews potentials and limitations of the Compressed sensing methods advanced as a solution to the problem of minimization of the signal sampling rates. In Section 7 a new method of image sampling and reconstruction is outlined that allows reaching sampling rates close to the theoretical minimum. In the concluding Section 8 some practical issues of implementation of this method and its possible applications for solving other imaging problems are discussed.

2. The classical sampling theorem

The classical Kotelnikov-Shannon 1D sampling theorem ([2], [3]) states that band-limited signals can be precisely reconstructed from their samples taken with sampling interval Δ = 1/F one from another, where [-F/2, F/2] is the interval in the frequency domain of the signal Fourier Transform that contains the entire signal spectrum. 1D band-limited signals with Fourier spectrum concentrated within a bounded interval [-F/2, F/2] around zero frequency are called baseband signals. 1D band-limited signals with Fourier spectrum concentrated within the intervals [f_0 - F/2, f_0 + F/2] and [-f_0 - F/2, -f_0 + F/2] around a non-zero frequency f_0, called the carrier frequency, are called passband signals.
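As a concrete illustration of the classical theorem, the short sketch below (not taken from the paper) reconstructs a baseband test signal from uniformly spaced samples by sinc interpolation; the bandwidth, window length and test frequencies are arbitrary choices made for the example.

```python
# Minimal numerical illustration of the classical 1D sampling theorem: a baseband signal
# whose spectrum lies inside [-F/2, F/2] is recovered from samples taken at interval
# dx = 1/F by sinc interpolation (exact only for an infinite sample set; here truncated).
import numpy as np

F = 8.0                 # total bandwidth: spectrum confined to [-F/2, F/2]
dx = 1.0 / F            # sampling interval prescribed by the theorem
n = np.arange(-64, 65)  # finite window of sample indices

def signal(x):
    # band-limited test signal: two sinusoids below the half-bandwidth F/2 = 4
    return np.sin(2 * np.pi * 1.5 * x) + 0.5 * np.cos(2 * np.pi * 3.0 * x)

samples = signal(n * dx)

def reconstruct(x):
    # a(x) = sum_k a(k*dx) * sinc((x - k*dx)/dx); np.sinc is the normalized sinc
    return np.sum(samples * np.sinc((x - n * dx) / dx))

x_test = np.linspace(-2.0, 2.0, 11)
rec = np.array([reconstruct(x) for x in x_test])
print(np.max(np.abs(rec - signal(x_test))))  # small, limited only by the truncated sum
```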
According to the properties of the can be regarded Fourier transform, a passband signal ( )x as a result of modulation of a baseband signal by a ( )x aPB ]2 ,2 F − − + + f 0 0 0 aBB sinusoidal signal of frequency 0f : a PB ( ) x = a BB ( ) x ( sin π 02 )xf ( 1) The classical sampling theorem can be straightforwardly applied to passband signals if, before sampling, passband signals are converted into the corresponding baseband signals by multiplying, or demodulating, them by a sinusoidal signal of the carrier frequency 0f and subsequent ideal low-pass filtering of the demodulation result within the baseband [ ,2 F F− ]2 . y x x F ,2 by a F ] [ ;2 − ,2 )y rectangular ] )2 F For 2D signals, such as images, the classical sampling theorem states that signals with Fourier spectrum band- limited say interval, [ ( F , in the frequency domain − ( f , x reconstructed from their samples taken with sampling ]y intervals [ F one from another at nodes of the rectangular sampling lattice in signal Cartesian coordinates ( of the signal Fourier Transform can be precisely )yx, =∆ x =∆ y F 1 1 . f , x y In reality no band-limited signals exist and only approximate reconstruction of signals from their sampled representation is possible. In view of this, the classical sampling theorem should be reformulated in terms of signal band-limited approximation with a given mean square error (MSE): x ∆∆ , Signal )yxa ( , sampled with sampling intervals ( )y over the rectangular sampling lattice can be reconstructed from its samples with no distortions caused by spectra aliasing due to sampling and with MSE equal to the signal energy (integral of signal spectrum square module) outside interval the [ −=Ω called the signal rectangular 21, frequency ]y −∆ x 21; 21, 21 y ∆ rect ∆ ∆ x sampling base band, iff signal sampling and reconstruction are carried out using sampling and reconstruction devices with frequency responses ( samp Fr )( f x , f )y and 2 f x , f )y , correspondingly, оf the ideal low-pass ( rec Fr )( filters : ( samp Fr )( f x , f y ) 1 ∆∆= ,0      x ( , f x , f y ) Ω∈ rect ; y otherwise ( rec )( Fr f x , f y ) = ( ,1   ,0  f x , f ) Ω∈ y rect otherwise ( 2) 3. The General sampling theorem. A formulation based on the concept of signal sub-band decomposition The classical 2D sampling theorem implies that the minimal signal sampling rate, i.e. the minimal number of samples per unit of the signal area sufficient for signal reconstruction of with a given MSE equals the area ∆×∆= x x F × F 1 1 y y the 2D rectangular band-limiting frequency interval rectΩ that contains signal frequency components, which all together reproduce the signal with this MSE. Generally, the signal frequency interval of the MSEΩ minimal area that contains the signal largest frequency components, which reproduce the signal with a given MSE, may have an arbitrary shape. We will call this frequency interval "signal spectrum MSE-defined (MSED-) zone". The General sampling theorem extends the above statement on the signal minimal sampling rate to the general case of arbitrary signals and states: The minimal number of samples per unit of signal area sufficient for signal reconstruction from them with a given MSE equals the area of the signal Fourier spectrum MSED- zone. The theorem can be proved using the concept of signal sub- sub-band band decomposition decomposition, image In is decomposed into a sum ([16], )yxa , ( [17],). 
the ( , yxa ) ∑≅ k k ( )( a , yx ) ( 3) { ( )( a k }yx ) , of a certain number K components with spectra of rectangular shapes that all together approximate the image spectrum as it is illustrated in Figure 1. Each of the components can be, according to the classical sampling theorem for band-limited and passband signals, reconstructed from its corresponding samples taken with sampling rate }kSR equal to its corresponding area { . Hence the } { ( ) k S , f x f y sampling overall reconstruction of all K signal sub-bands amounts to sufficient KSR rate for precise SR K = K ∑ k 1 = SR k = K ( )∑ S k f x , k 1 = f y ( 4) obtained by sampling signal sub-band components consists of samples of signal sub-band components rather than of samples of the signal itself. 4. The General sampling theorem. A formulation based on a discrete signal model Sampling is a special case of signal discretization methods ([ 16]). In general, discrete representation of signals is obtained as a set of coefficients of signal expansion over a set of discretization basis functions and signal reconstruction from its discrete representation is performed using reconstruction basis functions reciprocal to the discretization ones. Consider this general case using a discrete model. Let NA be a vector of N samples{ } kka = 0 ,..., N 1 − of a discrete signal, NΦ be an NN × orthonormal transform matrix N ϕ=Φ { }kr ( ) , k = ,...,1,0 N − 1 , r = ,...,1,0 N − 1 (6) (7) ( 8) samples and NΓ be a vector of }krϕ ( ) { composed of basis functions signal transform coefficients { }rγ such that:     1 − ϕγ r r   ( ) k   =ΓΦ= NN { } a ∑ A = . N N = 0 k r ~ Select a subset R ~ { }R ~ ∈r approximation and define a “ ( NA )BS of K transform coefficients indices NK of ”-bounded spectrum (BS-) to the signal NA as: A BS N = a BS k     that ~~ ϕγ ~ r r = ∑ ~~ ∈Rr   ( ) k   ~ N a KofN }1 BS ~ k Φ= −N ,..,1,0 ∑ NK < available ~ =Γ⋅ K . These available K is a K -size subset of are only ~ of this signal, where K Assume }{ } K { BS ka ~ ~ ∈k indices { }k ~ signal samples define a system of K equations: 1 − ϕγ ~ ~ r r from the set { ,{ } K ~ ~ k ∈ ,     where  ~ ( )  k   KK × sub-transform matrix is composed of ~ ( )kr ~ ~ϕ of the basis functions with indices { }R ~ ∈k for signal sample indices is a vector composed of the corresponding subset { }r~γ of the signal transform coefficients. One can find these coefficients by inverting matrix ~ { } γ =Γ ~ K r KofNΦ ~ 1 − A KofN samples KofNΦ , and ~ ∈r ~ KΓ (10) Φ= (9) ~ K K = 0 ⋅ r provided that matrix 1−Φ KofN inverse to the matrix KofNΦ Figure 1. An example of an image spectrum MSED-zone (upper) and its sub-band decomposition (bottom). Carrier frequencies ( ) for the passband components are indicated only for the fourth component in order to not overburden the image. ( ) 4 0 , x ( )4 y 0 f f In the limit, when ∞→K , the sub-band components cover the entire area fS , x f the overall sampling rate amounts to y of the image spectrum and therefore lim K ∞→ SR K = S , yx lim K ∞→ K ∑ k 1 = S ( k f x ) , f y = S f x , f y ( 5) i.e. to the area fS , x f y occupied by the signal Fourier spectrum MSED-zone. Note that, in distinction from the conventional sampling, the signal sampled representation 3 Potential candidates of signal transforms with good energy compaction capability that can be used for obtaining signal bounded spectrum approximations are Discrete Fourier Transform (DFT), Discrete Cosine Transform (DCT), Walsh Transform and Wavelet Transforms. 
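The discrete formulation above reduces reconstruction to linear algebra: if a signal of N samples has only K nonzero transform coefficients with known indices, then K available samples determine those coefficients through a K-by-K subsystem, as in Eqs. (9)-(10). The sketch below illustrates this with an orthonormal DCT; it is an illustration rather than the author's code, and the chosen coefficient indices and sample positions are arbitrary examples that happen to give an invertible sub-transform matrix, as the theorem requires.

```python
# Sketch of the discrete sampling theorem: recover a length-N signal with exactly K
# nonzero DCT coefficients (at known indices R) from K samples at arbitrary positions.
import numpy as np

def dct_matrix(N):
    # orthonormal DCT-II basis: Phi[r, k] = c_r * cos(pi * (k + 0.5) * r / N)
    k = np.arange(N)
    Phi = np.cos(np.pi * (k[None, :] + 0.5) * k[:, None] / N) * np.sqrt(2.0 / N)
    Phi[0, :] = np.sqrt(1.0 / N)
    return Phi

N, rng = 64, np.random.default_rng(0)
Phi = dct_matrix(N)                       # rows are basis functions; a = Phi.T @ gamma

R = np.array([0, 3, 7, 12, 20])           # indices of the K nonzero spectral coefficients
gamma = np.zeros(N)
gamma[R] = rng.standard_normal(R.size)
a = Phi.T @ gamma                         # the "bounded spectrum" signal of N samples

pos = np.array([2, 11, 23, 40, 57])       # K sample positions (example choice)
sub = Phi[np.ix_(R, pos)].T               # K x K matrix of basis functions at those positions
gamma_hat = np.linalg.solve(sub, a[pos])  # recover the K coefficients from K samples

gamma_full = np.zeros(N)
gamma_full[R] = gamma_hat
a_hat = Phi.T @ gamma_full                # reconstruct all N signal samples
print(np.max(np.abs(a_hat - a)))          # exact up to floating-point round-off
```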
DFT and DCT admit arbitrary positions of the available signal samples, whereas Walsh Transform and Wavelet Transforms impose on positioning signal samples certain limitation ([18]). In the limit, when the number of samples N in our discrete model tends to the infinity, the model transmutes to a continuous one, discrete Fourier transform transmutes to the integral Fourier transform and the Discrete Sampling Theorem for the DFT transmutes to the above-formulated General Sampling Theorem. 5. The ubiquitous compressibility of images sampled over the standard rectangular sampling lattices the standard Sampling images over the uniform rectangular sampling lattices assumed by the classical 2D sampling theorem is accepted as imaging engineering for in designing image scanners, digital cameras and image display devices. It also is assumed by default by image processing software. This by the necessity implies that in order to avoid image distortions additional to those that define the image spectrum MSED-zone, image MSED-zones must be inscribed into the sampling base-band rectangle defined by the image sampling rate. Therefore sampling images over the rectangular sampling lattices always requires sampling rates that exceed the minimal one defined by the area of the image spectrum MSED-zone, i.e. images sampled in the standard way are always oversampled. This causes the need in image compression for their storage and transmission. The degree of the over-sampling, or the oversampling redundancy, is inverse to the ratio of the MSED-zone area to the area of the image sampling rectangular baseband. This ratio is called the image spectrum sparsity. Image spectrum sparsity is illustrated in Figure 2, which presents an example of a test image and its Fourier spectrum centred at zero spatial frequencies. Highlighted in the spectrum is the image spectrum MSED-zone that contains image spectral components, which reconstruct the image with MSE set, in this particular example, to be equal to that of the image JPEG compression by Matlab means. Image sampling baseband in this spectrum is the entire area of the image spectrum. Spectrum sparsity of this image is 0.31. Experimental experience evidences that spectra sparsities of sampled natural images lie usually in the range 0.1-0.4. exists. The latter is conditioned by positions { ~ ∈k of available signal samples and by the selection of the subset ~ { }R of transform basis functions. }K ~ Found in this way transform coefficients together with the rest of the coefficients set to zero can be used for obtaining a bounded spectrum (BS-) approximation BS NA to the complete signal NA with the mean square error: MSE = A N ˆ A − N 2 = N 1 − ∑ k = 0 a k − a BS k 2 = 2 γ ~ r (11) ∑ ~ Rr ∉ This error can be minimized by an appropriate selection of K basis functions of the sub-transform KofNΦ . In order to do so, one should know the energy compaction ordering of the basis functions of the transform NΦ , i.e. the order of basis functions, in which the energy (squared module) of signal representation coefficients decays with their indices. If, in addition, one knows a transform that is capable of the best energy compaction the smallest number of into transform coefficients, one can, by choosing this transform, secure the best minimum mean square error bounded spectrum approximation of the signal { }ka for the given subset { }BS ka ~ of its samples. 
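Because the transform is orthonormal, the approximation error of Eq. (11) equals the energy of the discarded coefficients, so the size of the MSED-zone, and hence the spectrum sparsity, can be estimated simply by sorting coefficient energies. A hedged sketch of such a measurement is shown below; the test "image" is a synthetic stand-in rather than the paper's test images, and scipy's dctn is assumed as the sparsifying transform.

```python
# Sketch: estimate image spectrum sparsity as the fraction of the largest DCT
# coefficients that must be kept so that the discarded energy stays within a target MSE.
import numpy as np
from scipy.fft import dctn

def spectrum_sparsity(image, target_rmse):
    image = image.astype(np.float64)
    spec = dctn(image, norm="ortho")                 # orthonormal 2D DCT
    energy = np.sort(np.abs(spec).ravel() ** 2)      # coefficient energies, ascending
    discarded = np.cumsum(energy)                    # energy lost if smallest ones are dropped
    budget = image.size * target_rmse ** 2           # N * MSE allowed by Eq. (11)
    n_dropped = np.searchsorted(discarded, budget)
    n_kept = image.size - n_dropped
    return n_kept / image.size                       # fraction of the sampling baseband occupied

rng = np.random.default_rng(1)
test = rng.standard_normal((64, 64)).cumsum(0).cumsum(1)  # smooth-ish synthetic stand-in image
print(spectrum_sparsity(test, target_rmse=0.5))
```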
The subset MSEΩ of indices { }r~ of the largest transform coefficients, which reconstruct the signal with a given MSE is the above-mentioned signal spectrum MSE Defined (MSED-) zone. With the above reasoning, on can formulate in two statements the following Discrete Sampling Theorem ([14], [ 15], [ 16], [17]): Statement 1. For any discrete signal of N samples defined samples, its bounded spectrum, in terms of a by its NK ≤ NΦ , approximation can be obtained with certain transform mean square error defined by Eq. 11 provided positions of 1−Φ KofN inverse that corresponds to the the samples secure the existence of the matrix to the sub-transform matrix KofNΦ spectrum bounding. The approximation error can be minimized by using a transform with the best energy compaction capability. NK ≤ NΦ ( Statement 2. Any signal of N samples that is known to have non-zero transform coefficients for a certain only NΦ -transform “bounded spectrum” signal) transform can be precisely reconstructed from exactly K its samples provided positions of the samples secure the existence of the 1−Φ KofN inverse to the sub-transform matrix KofNΦ matrix that corresponds to the spectrum bounding. 4 indefinite number of solutions unless there is an a priori restriction on signals to be recovered that allows choosing from all possible solutions the only one that satisfies this restriction. In this 0L norm in spectral restriction is formulated in terms of domain of a certain image transform, i.e. in terms of the amount of signal non-zero transform coefficients. the Compressed sensing approach, NK < According to the theory of the Compressed sensing, if an image of N samples is known to have only non- zero transform coefficients in the domain of a certain "sparsifying" transform, it can be precisely reconstructed KM > from a certain number measurements by means of 0L norm in the image transform minimization of the domain ([ 4], [ 5[ 5]). This, in particular, means that signal sampling rate, in distinction from the conventional sampling according to the classical sampling theorem, does not depend on the signal highest frequency and, in particular, that the signal sampling rate can be lower than twice the signal highest frequency, i.e. a kind of a "sub-Nyquist" sampling with aliasing is admissible. False spectra aliasing components within the sampling base band caused by the "sub-Nyquist" signal sampling can not, in principle, be filtered out by signal linear filtering. The minimization of 0L norm of image spectra suggested by the Compressed sensing approach implements a kind of a nonlinear filtering for separating true signal spectrum components from the false aliasing ones. The capability of the Compressed sensing approach of reconstructing signals sampled with aliasing can be demystified using the following simple model of sampling and reconstruction of signals composed from a known number of sinusoidal components ([ 13]). NK < KM > Let N be a number of signal samples, be a number of its sinusoidal components and let the signal be sub-sampled at arbitrarily chosen points. It is required to precisely reconstruct all N signal samples from these M available samples. This would be achieved if one could determine amplitudes and frequencies of the signal sinusoidal components. Figure 3 and Figure 4 illustrate, utilizing results of a computer simulation, how and when this can be done. 
K in 512 = N ,5 A test signal presented in Figure 3, 1st plot from the top, = has five sinusoidal components ( ) seen as the signal Discrete Cosine five Kronecker deltas Transform spectrum (2nd plot from the top in Figure 3). When this signal is sub-sampled, in its spectrum (3rd plot in Figure 3) a lot of false aliasing spectral components appear. 76=M In the shown example the signal is sub-sampled at random positions. In this particular example the signal's five true spectral components exceed the aliasing ones and are easily detectable by finding positions of the given number 5=K the largest spectral components. Figure 2. An example of a test image (upper) and its Fourier spectrum (bottom) centred at its DC component. Highlighted (by red in e-version) in the spectrum are the largest spectral components sufficient for image reconstruction with the same MSE as that of its JPEG compression (the image spectrum MSED-zone). 6. Compressed sensing: potentials and limitations The ubiquitous compressibility of images acquiered by the conventional methods raises a very natural question: "is it possible to just directly measure the minimal amount of data and avoide the need in image compression? " This question was first posed by the inventors of the Compressed sensing approach (known also under the name “Compressed sampling”) as a solution to this problem ([ 4], [ 5], [ 6]). The Compressed sensing approach considers signal sampling and reconstruction as an underdetermined inverse problem of recovering a signal of N samples from a fewer of measurements ([ 4],[ 5], [ 6]). Because number the signal recovering problem is underdetermined, it has an NK < 5 signal maximum vs. the number of reconstruction iterations (4-th plot in Figure 3) and the plot of DCT spectrum of the reconstructed by the iterative algorithm signal (5-th plot) illustrate this process and demonstrate that practically precise reconstruction of the signal is achieved. Obviously, when the signal sub-sampling rate is not sufficiently high and spectrum aliasing is severe, reliable detection of the signal true spectral components in the spectrum of the sampled signal and, hence, signal reconstruction become impossible. This case is illustrated in Figure 4 by the results of the same experiment and with the same parameters as in Figure 3, but for another realization of positions of signal samples. Figure 3. Reconstruction of a test signal sampled with aliasing. From top to bottom: a test signal composed of five sinusoidal components; test signal DCT spectrum; DCT spectrum of the sub- sampled signal; plot of the root mean square reconstruction error normalized to the signal maximum (RMSE/SignMax) vs. the number of the reconstruction iterations; DCT spectrum of the reconstructed signal. Frequency in plots of spectra is given in fractions of the sampling base band. Once this is done, reconstruction of all signal N samples can be achieved using an iterative Gerchberg-Papoulis type algorithm, in which, at each iteration step, (i) DCT of the current estimate of the reconstructed signal is computed; (ii) positions of the given number K the largest spectral components are detected; (iii) all spectral components except the detected ones are set to zero; (iv) inverse DCT of the modified in this way signal spectrum is computed; (v) available signal samples are restored in the obtained estimate of the reconstructed signal to get a next estimate of the reconstructed signal and the loop is repeated. 
The plot of the reconstruction root mean square error (RMSE) normalized to 6 Figure 4. Failure of reconstruction of a test signal sampled with aliasing. From top to bottom: a test signal composed of five sinusoidal components; test signal DCT spectrum; DCT spectrum of the sub-sampled signal; plot of the root mean square reconstruction error normalized to the signal maximum (RMSE/SignMax) vs. the number of the reconstruction iterations; DCT spectrum of the reconstructed signal. Frequency in plots of spectra is given in fractions of the sampling base band. In this case two of the signal spectrum components are not detected and false peaks are detected instead. As a result, signal reconstruction by the iterative algorithm failed. In the presented examples, signal spectrum sparsity, i.e. the ratio of the number of the signal non-zero spectral coefficients to the total number of spectral coefficients, is 210 − ≈ /5 512 5=K , whereas actually . According to the Discrete sampling =NK theorem the minimal number of signal samples needed for its 76=M precise reconstruction is samples were required for signal reconstruction in the example of the successful signal reconstruction shown in Figure 3. Therefore, the sampling redundancy, i.e. the ratio of the actual number of signal samples required in this case for signal reconstruction to the number of the signal non-zero spectral coefficients, is KMR 5/76 2.15 . = ≅ = Two shown examples demonstrate that when sampling is performed in random positions, signal precise reconstruction is possible with a certain probability depending on the sampling redundancy. Figure 5 presents results of an experimental evaluation of the sampling redundancy required by the above-described algorithm. The results were obtained by Monte-Carlo simulation of sampling and reconstruction using a model of a single sinusoidal signal. The experiments were conducted for sinusoidal signals ( ) of five different frequencies (0.9, 0.7, 0.5, 0.3 and 0.1 of the signal base band) and of five different signal lengths N (128, 256, . 512, 1024, 2048), i.e. of five different signal sparsities 1=K NK 310− and 5 × probability of signal frequency identification error less than 410− , 210− , correspondingly. These results were 410 obtained over realizations of the random sampling for each individual experiment with a given sampling rate, signal frequency and signal length. From this illustrative model one can also conclude that the presence of noise in the sampled data will hamper reliable detection of the signal spectral components and will require an additional signal reconstruction. redundancy sampling the for Potentials of the Compressed sensing approach to image sampling and reconstruction are widely advertised in the literature. Much limitations. Particularly important is the question, how close is the amount of measurements required by the Compressed sensing methods of image sampling and reconstruction to the theoretical minimum defined by the sampling theory. is known about less its According to the theory of the Compressed sensing, the precise reconstruction of a signal of N samples that has non-zero transform coefficients is possible, when the number of measurements M sufficient for signal reconstruction satisfies the following inequality ([ 8],[ 9]) NK < KM [ ( log2−> ]NKKM )( ) ( 12) By virtue of the Discrete sampling theorem, the signal is the theoretical lower bound of the sparsity NK SS = sampling rate required for signal reconstruction. 
Therefore represents the sampling redundancy the ratio KMR = with respect to the theoretical minimum. Inequality (12) can be rewritten as a relationship between the signal sparsity SS = and the sampling redundancy KMR = NK as R −> log2 ( R × )SS ( 13) Numerical evaluation of this relationship between the sampling redundancy R and the signal sparsity SS gives that in the above-mentioned range from 0.1 to 0.4 of spectra sparsities of natural images the sampling redundancy of the Compressed sensing methods should theoretically be larger than 2 to 3. Experimental data collected over publications show that in practice it should be larger than 2.5 to 5 ([11]). This means that the sampling redundancy required by the Compressed sensing methods for natural images is of the same order as the sampling redundancy of their regular sampling (2.5 - 10), i.e. in reality Compressed sensing methods do not solve the problem of the compressibility of images acquired by the conventional means. The substantial sampling redundancy required by the Compressed sensing methods is not their only drawback. Their applicability is also impeded by the vulnerability to noise in the sensed data and by the impossibility to predict Figure 5. Estimates of the sampling redundancy required for reconstruction of sinusoidal signals from their randomly placed samples vs. the signal sparsity for three probabilities of the 310 − , and reconstruction failure ( 410 − ). 210− , They show that sampling redundancies in the range of 14-32 the times are required for signal reconstruction with 7 and secure the resolving power of the reconstructed images. Resolving power of images is determined by the size and shape of the MSED-zones of their spectra. Spectra MSED- zones of the methods of reconstructed by Compressed sensing are formed in the process of image reconstruction rather than are specified in advance from the requirements to the image resolving power. images To summarize the said, Compressed sensing methods are to a certain degree capable of reconstructing sparse approximations of images sampled with aliasing. No a priori knowledge regarding MSED-zones of the image spectra is required for this. This is an attractive feature of the Compressed sensing methods. It however has its price: because of this Compressed sensing methods require a significant redundancy in the number of measurements sufficient for image reconstruction. In many practical tasks of digital image acquisition, the assumption of the complete uncertainty regarding the image spectra MSED-zones has no justification. In fact, if one is ready, as it is assumed by the Compressed sensing approach, to accept a sparse spectrum approximation to an image and has chosen an image sparsifying transform, one tacitly the energy compaction implies certain knowledge of capability of the chosen transform. Making use of this in any case available a priori knowledge allows implementation of image sampling with sampling rates close to the theoretical minimum. 7. A method of image sampling and reconstruction with sampling rates close to the theoretical minimum The Discrete Sampling Theorem implies that for image sampling and reconstruction one should ([ 12], [ 13], [14], [ 15]): - - Choose an image sparsifying transform that features the best, for the given image, energy compaction capability. For a given number N of image samples, specify a desired MSED-zone of the image spectrum, i.e. a set of indices of transform coefficients to be used for image reconstruction. 
NM < - Take M image samples. - Use the obtained M image samples for determining M transform coefficients that belong to the chosen MSED-zone. Set the rest transform coefficients to zero and use the obtained spectrum for reconstruction of the required N image samples by its inverse transform. MN − - Consider possible ways for implementation of this protocol. - Choosing an image sparsifying transform. The key role in the choice of the image sparsifying transform plays the energy compaction capability of the transform. An additional 8 feature, which is usually required, is the availability of a fast transform algorithm. From this viewpoint, Discrete Cosine (DCT), Discrete Fourier (DFT) and Wavelet transforms appear most suitable. - Defining the image spectrum MSED-zone. Definition of the image spectrum MSED-zone, i.e. indices of transform coefficients to be used for image reconstruction, is based on the known energy compaction capability of the transform. In most cases, DCT can be recommended as the image sparsifying transform. DCT is known to efficiently compact image largest transform coefficients into quite tight groups in the area of low spatial frequencies. An additional advantage of using DCT as the image sparsifying transform is that it is a version integral Fourier of transform and, therefore, it perfectly concords with treatment of imaging systems in terms of their frequency transfer functions [ 16]). the discrete representation of the Of course, given an image to be sampled, one can't not precisely specify MSED-zone of its spectrum. However, a considerable practical experience, and, in particular, results of developing zonal quantization tables for JPEG image compression, show that MSED-zones of spectra of natural images are sufficiently well concentrated and can be with a reasonable accuracy circumscribed by one of some standard shapes specified by few geometrical parameters such as total area, angular orientation, aspect ratio, etc. Examples of such standard shapes well suited for approximating MSED-zones of DCT spectra of natural images are presented in Figure 6. One can associate each particular shape with a certain class of images such as micrographs, aerial photographs, space photos, in-door and out-door scenes, etc. . Figure 6. Possible standard shapes for approximating the MSED- zones of image DCT spectra (spectra DC components are in the upper left corners of the shapes). Standard shapes for approximating signal spectra MSED- zones have two important properties: (i) they do not require fine tuning of their geometrical parameters to fit the spectra MCSED-zones they are chosen to approximate and (ii) their areas by the necessity always exceed the areas of their corresponding image spectra MSED-zones. These properties are illustrated in Figure 7. The spectrum MSED-zone of a test image (Figure 7, a) is shown in figures b)-e) by white dots. It is obtained for the image reconstruction root mean square error (RMSE) 3.85 gray levels of 256 gray levels for the image dynamic range, the same as the reconstruction RMSE of image JPEG compression by Matlab means. The reason why areas of standard shapes always exceed areas of spectra MSED-zones they approximate is also almost obvious. Image spectra MSED-zones are composed of the largest image spectral components that reconstruct the image with a given MSE. 
Shapes that approximate the MSED-zones will certainly contain some quantity of "no- MSED"-zone components that by definition have lower energy than the largest ones, which form the MSED-zone, and may not contain some MSED-zone components. Therefore, in order to secure the given image reconstruction MSE, areas of MSED-zone approximating shapes must exceed areas of the corresponding MSED-zones. Therefore image sampling rate equal to the area of the MSED-zone approximating shapes, being minimal the given approximating shape, will always to a certain degree exceed the minimal sampling rate defined by the area of the image proper MSED-zone. This sampling redundancy is the price for not knowing exact positions of spectral components that form the image spectrum MSED-zone. For the example presented in Figure 7 this redundancy is 1.67. for - Positioning image samples. As was mentioned, DCT as the sparsifying transform imposes no limitations on positions of image samples and they can be arbitrary. - Methods of image reconstruction. There are two options Figure 7. Test image “BloodVessels512 (a) , and image spectrum MSED- zone (white dots) along with the borders (white lines) of the rectangular, triangular and oval shapes with different shape parameters that approximate it ( b) –e)). As one can see, this MSED-zone exhibits a considerable anisotropy, which apparently evidences a certain prevalence of horizontally oriented edges of blood vessels shown in the test image. Solid lines in these images represent borders of four different standard shapes (rectangle, triangle, and two ovals) that are chosen to approximate the spectrum MSED- zone. These shapes have different aspect ratios (0.35, 0.25, 0.3, and 0.45) but all permit image reconstruction with approximately the same RMSEs (4.1, 3.7, 3.8 and 3.8) as the RMSE for the image spectrum MSED-zone. Therefore they are practically equivalent as approximations of the given spectrum MSED-zone. The reason why no fine adjustment of shape parameters is needed for choosing spectra MSED-zones approximating shapes lies in the experimental fact that borders of image spectra MSED-zones are quite fuzzy, which one can easily see on the presented example. the MN − inverse for implementing image reconstruction: • The direct matrix inversion according to Eq. 10 for computing, from available M signal samples, M transform coefficients chosen for the reconstruction. The found M transform coefficients supplemented with the rest coefficients set to zero are then used for reconstruction of all required N signal samples by transform. Practical usage of this option is limited because the matrix inversion is a very time consuming computational task and no fast matrix inversion algorithms are known. • The iterative Gerchberg-Papoulis type algorithm, in which at each iteration step: (i) spectrum of the current estimate of the reconstructed image in the chosen transform is computed; (ii) all transform coefficients outside the chosen image spectrum MSED-zone approximating shape are zeroed; (iii) the modified in this way image spectrum is inversely transformed and available image samples are restored in the reconstructed image producing its estimate for the next iteration. Reconstruction iterations start from an image, in which not available samples are obtained by interpolation from the available ones using one or another interpolation method. 
The above reasonings imply that, assuming DCT as the image sparsifying transform and a square sampling lattice, image sampling and reconstruction should be performed in the following steps ([14],[ 15]): • Choose a required image spatial resolution SpR (in “dots per inch)”) in the same way as it is being done in the ordinary sampling. 9 N x • Given the physical dimensions SzX and SzY of the image (in inches) in X and Y image coordinates, dimensions of the square correspondingly, determine YX / SpR and sampling lattice SpR SzX × • Choose, on the basis of evaluation of the image, one of the standard shapes for bounding image DCT spectrum MSED-zone and set its geometrical parameters. • N × Inscribe the chosen shape into the rectangle of samples as tightly as possible and evaluate the SzY N y × = = . x N y fraction SS of the area, which the shape occupies in the rectangle (spectrum sparsity). • SSM = • uniformly over the sampling lattice of Sample the image in M positions distributed as samples as number to be taken. Find N × samples image x N × N × the of y x N y possible. As one can see, the described sampling protocol is almost identical to the ordinary standard 2D sampling protocol except that in the suggested method image is sparsely sampled in a sub-set of nodes of the ordinary square sampling lattice and setting the spectrum bounding shapes for approximating is required. image spectra MSED-zones the For image reconstruction, apply to the sampled image one of the above-mentioned image reconstruction options using, for image spectrum bounding, the chosen spectrum MSED- zone approximating shape. As a result, an image with spectrum bounded by the chosen MSED-zone approximating shape, or a bounded spectrum (BS-) image, will be obtained, which has the prescribed spatial resolution SpR . Inasmuch as in the described method the sampling rate equals the area of the chosen spectrum bounding shape, the method reaches the minimal rate for the given spectrum bounding shape. However, as mentioned previously, the latter is somewhat larger than the area occupied by the actual image spectrum MSED-zone, which the chosen spectrum bounding shape approximates. Therefore, for each particular image, the method has a residual sampling redundancy equal to the ratio of the area of the chosen spectrum bounding shape to the area of the actual image spectrum MSED-zone. In view of the said the described image sampling and reconstruction method is called the Arbitrary Sampling and Bounded Spectrum Reconstruction (ASBSR-) method. The described ASBSR-method was extensively verified on a considerable amount of various test images ([14]). An illustrative example of reconstruction of one of the test images sampled over the uniform sampling lattice with random jitter is shown in Figure 8. Figure 8. An illustrative example of results of experiments on image sampling and reconstruction using the ASBSR-method. From top to bottom: sampled test image“Rome512”(grey dots); image spectrum MSED-zone (white dots) and borders of its chosen approximating shape (white solid line); test image reconstructed using the iterative reconstruction algorithm; plots of root mean square of all (solid line) and of the smallest 90% (dash line) reconstruction errors vs. the number of iterations. 10 shapes can be The experiments confirm that images sampled with sampling rates equal to the minimal rate for their chosen MSED-zone approximating reconstructed with a sufficiently good accuracy comparable with that of image JPEG compression. 
The redundancy in the number of the required samples associated with the redundancy of the standard shapes approximating image spectra MSED-zones was in the experiments in the range 1.5-1.7, which is noticeably lower than the mentioned above range 2.5-5 for the sampling redundancy of compressed sensing methods. 8. Some practical issues and other possible applications of the ASBSR-method In conclusion, address some practical issues of using the ASBSR-method of image sampling and reconstruction: robustness of the method to noise in sampled data, image anti-aliasing pre-filtering, recommended sample positioning, and possible applications of the method to solving under- determined inverse problems. In distinction from the Compressed sensing methods, the ASBSR method, being a linear one, is insensitive to noise in image signals. If input image is contaminated with additive white noise, image reconstructed from the sampled data will also contain additive noise with non-zero spectrum within the shape used for bounded spectrum image reconstruction and variance equal to the variance of the input image noise times the fraction of the area of sampling base band occupied by the spectrum bounding shape. As dictated by the sampling theory, image pre-filtering for bounding its spectrum before sampling is necessary in order to avoid distortions of reconstructed images caused by spectra aliasing due to sampling. Conventionally such pre- filtering is carried out by apertures of light sensitive cells of digital cameras and image scanners. The ASBSR method envisages, generally, choosing spectrum MSED-zone approximating shapes individually for each particular image. Ordinary photo sensors are not capable of implementing such choice. As a solution to this problem, the usage of synthetic multiple aperture sensors can be proposed ([ 12], [ 19]). In these sensors, several individual sub-sensors are allocated for each image sample and the desired anti-aliasing filter frequency response is synthesized by an appropriate weighted summation of outputs of individual sub-sensors. The multiple aperture sensors are especially well suited for the so called single-pixel cameras, where sampling is carried out using digital micro-mirror the possibility of arbitrary devices, which enable arrangement of sampling positions. Note that single pixel cameras of implementation Compressed sensing methods of image acquisition ([ 7]) recommended are for practical alternative, the use of a universal "all purpose" shape can be considered. As such, a “pie-sector” shape can be suggested, which fits the majority of natural images quite well. In the experimental verification of the method three types of sampling lattices were tested ([ 12], [ 13], [14], [ 15]): - “quasi-uniform” sampling lattice, in which image samples are distributed uniformly in both image coordinates with an appropriate rounding off their positions to the nearest nodes of the dense square sampling lattice allocated for the sampled image; - uniform sampling lattice with pseudo-random jitter, in which sample positions in both image coordinates are randomly chosen within the primary uniform sampling intervals independently in each of two image coordinates; - the totally pseudorandom sampling lattice, in which sample positions are randomly placed with uniform distribution at nodes of the dense sampling lattice allocated for the sampled image. 
lattices with pseudo-random For all test images used in the experiments, root mean square of reconstgruction errors (RMSE) decayed with iterations most rapidly for the case of sampling over the jitter. uniform sampling Reconstruction RMSEs for totally random samplimg lattice were about 1.5-2 times and for “quasi-uniform” sampling lattice 2-2.5 times larger than those for the “uniform with jitter” sampling lattices for the same number of iterations. When “quasi-uniform” lattices were used, stagnation of the iteration process was observed. This phenomenon can apparently be attributed to the emerging of regular patterns of thickening and rarefication of sampling positions due to rounding off their coordinates to the node positions of the regular uniform sampling lattice ([ 12], [ 13],[14], [ 15]). sampling NM < As already mentioned, the task of reconstruction of images of N samples from sampled data is a special case of imaging the under-determined problems. The found solution of this task, the bounded spectrum (BS-) image reconstruction, can be used for solving other under-determined inverse imaging problems as well. In Refs.[ 12], [ 13], [14], [ 15], one can find demonstrations of using this option for inverse - - - - - demosaicing color images; image super-resolution from multiple chaotically sampled video frames; image super-resolution in computed tomography; image reconstruction from their sparsely sampled Fourier spectra; image reconstruction from the modulus of its Fourier spectrum. Choosing anti-aliasing filters individually adjusted for each particular image is advisable but not very critical. As a 11 References [ 1] Nyquist, H., Certain factors affecting telegraph speed, Bell System Technical Journal, 1924, 3, p. 324 [ 2] Kotel’nikov V. A., “On the transmission capacity of "ether" and wire in electro-communications,” Izd. Red. Upr. Svyazzi RKKA In: Modern Sampling Theory: Mathematics and Applications, J. J. Benedetto and P. J. S. G. Ferreira, Eds. Boston, MA: Birkhauser, 2000) [ 3] Shannon C. E. “Communication in the presence of noise,” Proc.IRE, vol. 37, pp. 10–21, 1949. [ 4] Donoho D., “Compressed sensing,” IEEE Trans. Inform. Theory 52(4), pp. 1289–1306, 2006. [ 5] Candès E., “Compressed sampling,” Proc. Int. Congress of Math., Madrid, Spain, pp. 1433-1452, 2006 [ 6] Candès E., J. Romberg, and T. Tao, “Stable signal recovery from incomplete and inaccurate measurements,” Commun. Pure Appl. Math. 59 (8), 1207 (2006). [ 7] Baraniuk R. G., Compressive Sensing [Lecture Notes], SignalProcessing Magazine, July 2007 [ 8] ] Donoho D. L. and J. Tanner, “Observed universality of phase transitions in high-dimensional geometry, with implications for modern data analysis and signal processing”, Phil. Trans. R. Soc. A 367, 4273–4293, (2009) [ 9] Donoho D. L. and Tanner J., “Exponential bounds implying construction of compressed sensing matrices, error-correcting codes, and neighborly polytopes by random sampling,” IEEE Trans. Inf. Theory v. 56, p. 2002-2016, 2010. [ 10] Chandrasekaran V., Recht B., Parrilo P. A., and Willsky A. S., “The convex geometry of linear inverse problems”, Foundations of Computational Mathematics, 12(6), 805-849 (2012). [11] Yaroslavsky L. P., "Can compressed sensing beat the Nyquist sampling rate?," Opt. Eng., 54(7), 079701 (2015). https/doi.org 10.1117/1.OE.54.7.079701. [ 12] Yaroslavsky L. P., "How can one sample images with sampling rates close to the theoretical minimum?" Journal of Optics, 19, N. 5, (2017). https/doi.org: 10.1088/2040-8986/aa65b7. 
[13] Yaroslavsky L. P., Compressed Sensing, the ASBSR-Method of Image Sampling and Reconstruction, and the Problem of Digital Image Acquisition with the Lowest Possible Sampling Rate, In: Compressed Sensing: Methods, Theory and Applications, Chapt. 1, Ed. Jonathon M. Sheppard, Nova Publishers, 2018, ISBN: 978-1-53613-082-9
[14] Yaroslavsky L. P., Advances in Sampling Theory and Techniques, SPIE Press Book, 2020.
[15] Yaroslavsky L. P., Digital Signal Processing in Experimental Research, How to Optimally Sample and Resample Images: Theory and Methods Using Matlab, Vol. 3, Bentham Books, 2020.
[16] Yaroslavsky L. P., Theoretical Foundations of Digital Imaging, CRC Press, 2013.
[17] Yaroslavsky L. P., Shabat G., Salomon B. G., Ideses I. A., and Fishbain B., Nonuniform sampling, image recovery from sparse data and the discrete sampling theorem, J. Opt. Soc. Am. A, Vol. 26, No. 3, March 2009.
[18] Gonzales R. C., Woods R. E., Digital Image Processing, 2nd edition, Prentice Hall, 2002.
[19] Fiete R. D., "Multiple aperture imaging system", US patent 6,943,946 B2, Sep. 13, 2005 (https://www.google.com/patents/US6943946).
Harvest Video Foundation Models via Efficient Post-Pretraining

Yizhuo Li1,2∗, Kunchang Li2,3∗, Yinan He2, Yi Wang2, Yali Wang2,3, Limin Wang2,4, Yu Qiao2,3, Ping Luo1,2
1The University of Hong Kong, 2Shanghai AI Lab, 3Shenzhen Institutes of Advanced Technology, CAS
4State Key Laboratory for Novel Software Technology, Nanjing University
*Interns at Shanghai AI Laboratory

Abstract

Building video-language foundation models is costly and difficult due to the redundant nature of video data and the lack of high-quality video-language datasets. In this paper, we propose an efficient framework to harvest video foundation models from image ones. Our method is intuitively simple by randomly dropping input video patches and masking out input text during the post-pretraining procedure. The patch dropping boosts the training efficiency significantly and text masking enforces the learning of cross-modal fusion. We conduct extensive experiments to validate the effectiveness of our method on a wide range of video-language downstream tasks including various zero-shot tasks, video question answering, and video-text retrieval. Despite its simplicity, our method achieves state-of-the-art performances, which are comparable to some heavily pretrained video foundation models. Our method is extremely efficient and can be trained in less than one day on 8 GPUs, requiring only WebVid-10M [3] as pretraining data. We hope our method can serve as a simple yet strong counterpart for prevalent video foundation models, provide useful insights when building them, and make large pretrained models more accessible and sustainable. This is part of the InternVideo project https://github.com/OpenGVLab/InternVideo.

1. Introduction

A new line of research is rising in multimodal modeling which connects images and text with the help of cross-modal contrastive learning [49, 37, 25, 75]. Large image foundation models like CLIP [49] are capable of aligning images and text into one shared embedding space, demonstrating a powerful ability to model both visuals and languages. They deliver excellent performance on downstream tasks, especially zero-shot ones without further fine-tuning. Following the success of cross-modal modeling in the image-language domain, video foundation models have also flourished through training on massive video-text pairs.

Figure 1. The pipeline of our post-pretraining framework. Our method explores the possibility of building powerful video foundation models upon image foundation models via post-pretraining. We employ a masking strategy on both video and text to boost the efficiency of post-pretraining and promote cross-modal fusion.

However, building video foundation models can be costly and difficult. Video processing costs are directly proportional to the length of the video, which makes it more expensive than processing images. Successive frames in videos also tend to contain a lot of redundant spatial information, which wastes a lot of computation. Additionally, the existing video-text datasets, such as WebVid-10M [3], are relatively small compared to their image-text counterparts (e.g., LAION-2B [51]), which makes constructing video foundation models more challenging. In such scenarios, it is often more feasible and cost-effective to develop video foundation models based on existing image foundation models.

Attempts to achieve this have been explored in BridgeFormer [20], CLIP4Clip [44], and CLIP-ViP [71]. In this work, we aim to further push the limits of post-pretraining in an efficient manner.

We propose a simple post-pretraining framework to build video foundation models upon image foundation models. Our method follows the popular MAE paradigm [23, 58, 22, 42], by randomly dropping the input video patches with a certain probability. We call it “dropping” instead of “masking” because we do not recover dropped patches or replace them with special tokens. However, we may use these terms
Attempts to achieve this have been explored in BridgeFormer [20], CLIP4Clip [44], and CLIP-ViP [71]. In this work, we aim to further push the limits of post- pretraining in an efficient manner. We propose a simple post-pretraining framework to build video foundation models upon image foundation models. Our method follows the popular MAE paradigm [23, 58, 22, 42], by randomly dropping the input video patches with a certain probability. We call it “dropping” instead of “mask- ing” because we do not recover dropped patches or replace them with special tokens. However, we may use these terms I:ImageFoundationModelV:VideoFoundationModelS:SpecializedModelVIRSR ^]Z_] WX Y][WZ` a]bacc.[e]Xf]R TUVWXY WZ [ℎY XY].ã: åℎ][’X [ℎY T][ _cWZ`?R: èÇYY^WZ`.Post-PretrainingviaMaskingR:RandomizedImageModel interchangeably for convenience. Additionally, we also ran- domly mask the input text and predict the masked token with an extra decoder. Video patch dropping and text mask- ing look alike yet are applied for different purposes. Video patch dropping is designed to significantly boost training ef- ficiency, while text masking is designed to promote modali- ties fusion to build a more capable video foundation model. The framework is illustrated in Figure 1. We employ a straightforward post-pretraining procedure, by jointly optimizing contrastive loss and masked text pre- diction loss, trained on WebVid-10M [3] for only 50k steps. We conduct extensive experiments on a wide range of video-language downstream tasks to evaluate the perfor- mance of our given framework, including multiple zero- shot tasks, video question answering, and video-text re- trieval. Despite the simplicity of our method, we achieve SOTA performance comparable to popular video founda- tion models. Moreover, our method is highly efficient and the post-pretraining procedure takes less than 192 GPU hours (using A100). As a comparison, a typical video foun- dation model like All-in-one [61] requires more than 5k GPU hours (using A100) with inferior performance. Based on the experimental results, we give an in-depth discussion on existing paradigms for video foundation mod- els. We attribute the effectiveness of our method to the pow- erful CLIP pretraining, which reveals that image-trained models can perform well on video-language tasks with an inexpensive post-pretraining procedure. This reveals the limitation of existing video-language datasets, which may not provide enough temporal textual description to model the rich information in videos. We also find that the text en- coder plays a vital role in video-language tasks. However, current video-language datasets may not be diverse or of high enough quality to train an adequate text encoder. We hope our method can serve as a strong yet efficient coun- terpart for video foundation models and provides useful in- sights into building them. 2. Related Work Video-Language Pretraining. Starting from the rapid de- velopment of image-language pretraining [56, 7], large- scale video-language pretraining with fine-tuning on spe- cific downstream tasks has become the standard paradigm in the video-language understanding [78, 14, 68]. The earliest methods [55, 83] directly extract the offline video and text features from well-pretrained visual and language encoders, while the recent methods [32, 3, 77, 61, 31] have demonstrated the feasibility of end-to-end training. 
Besides, the popular methods often include two or three pretraining tasks, e.g., masked language modeling [54, 40], frame order modeling [78], video-text matching [32], video-text contrastive learning [68], and video-text masked modeling [14]. As for the training data, the previous methods mainly leverage image-text pairs, such as COCO Caption [6], Google Conceptual Captions [52], and Visual Genome [28]. For better video-language understanding, large-scale video-text pairs have been introduced, including WebVid-2M [3], HowTo100M [45], and YT-Temporal-180M [78]. Unlike most methods that require large-scale datasets or enormous training resources, our method requires only WebVid-10M [3] and 8 GPUs and can be trained in less than one day.

Video-Language Downstream Tasks. Video-language understanding tasks [41, 43, 26, 29, 47, 16] have attracted rapidly growing attention in the computer vision and natural language processing communities. In the period before the video-language pretraining boom, some specific downstream tasks were widely studied, including video question answering [24, 33, 34, 67], video-to-text retrieval [69, 27, 35], video captioning [4, 69, 64, 50, 82], and temporal localization [2, 18, 27, 35]. In the research paradigm of these tasks, offline video feature extraction [66, 17, 30, 11] plays an important role in performance. With the rapid progress of video-language pretraining [55, 83, 39], the performance of downstream tasks has been further improved. Recently, CLIP [49] has demonstrated very impressive transfer and generalization capacities in the video-language field, including video-text retrieval [44, 19, 12, 8], video captioning [57, 74], video summarization [46], and zero-shot and few-shot recognition [81, 80, 79]. In this work, we design a task adaptation module to further extend CLIP to more diverse downstream tasks such as video question answering.

Masked Modeling. The prevailing masked modeling strategy in computer vision was introduced by MAE [23]. MAE randomly drops input vision tokens and reconstructs them as a proxy task to learn spatial representation. MAE-based methods have already been extended to videos [58, 13, 22]. Our method follows a similar design but does not recover the masked tokens. Masked learning in natural language processing has a longer history, going back to the highly influential BERT [9]. A recent work, FLIP [42], shares a similar idea with our method by applying masking to efficiently train cross-modal models, but differs in several aspects. We discuss the differences in the following section.

3. Methodology
Our method is a simple align-before-fuse framework, which applies masking to both video and text inputs. The method consists of three main components: (1) a video encoder, (2) a text encoder, and (3) a cross-modal fusion module. As the mission of our method is to unleash the potential of image foundation models on video-language tasks via post-pretraining, we follow the paradigm of ALBEF [37] and CoCa [75] to append a modality-fusion module after video-language alignment. By following a common paradigm similar to prevalent video foundation models, we demonstrate that our method can achieve superior performance without the need for specialized designs. The framework is illustrated in Figure 2.

Figure 2. Overall framework of the proposed method. Our method is intuitively simple with its video patch dropping and text masking design. 1) We randomly drop input video patches with a certain probability before feeding them into the visual encoder, without recovering them. 2) We randomly replace a certain portion of input text tokens with a special [MASK] token, and predict the masked targets by introducing a text decoder. 3) Our method is built upon a pre-trained CLIP model and keeps the text encoder frozen. The framework is jointly trained with the contrastive loss Lcon and the masked language loss Lmask.
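To make the two operations in Figure 2 concrete, the following PyTorch sketch shows one way they could be implemented (our own illustration, not code from the paper; the tensor shapes, token ids, and toy dimensions are assumptions): patch tokens are randomly subsampled per clip without any reconstruction target, and text tokens are replaced by [MASK] with a fixed probability, producing labels only at the masked positions.

```python
import torch

def drop_video_patches(patch_tokens: torch.Tensor, drop_ratio: float = 0.9):
    """Randomly keep a subset of patch tokens per sample (no reconstruction).

    patch_tokens: (B, N, D) flattened spatio-temporal patch embeddings.
    Returns the kept tokens with shape (B, N_keep, D).
    """
    B, N, D = patch_tokens.shape
    n_keep = max(1, int(N * (1.0 - drop_ratio)))
    # Random permutation per sample; keep the first n_keep indices.
    noise = torch.rand(B, N, device=patch_tokens.device)
    keep_idx = noise.argsort(dim=1)[:, :n_keep]              # (B, n_keep)
    keep_idx = keep_idx.unsqueeze(-1).expand(-1, -1, D)      # (B, n_keep, D)
    return torch.gather(patch_tokens, dim=1, index=keep_idx)

def mask_text_tokens(token_ids: torch.Tensor, mask_token_id: int,
                     mask_ratio: float = 0.15, pad_token_id: int = 0):
    """BERT-style masking: replace a random subset of non-padding tokens with
    [MASK]; labels are -100 everywhere except at the masked positions."""
    labels = token_ids.clone()
    probs = torch.rand_like(token_ids, dtype=torch.float)
    chosen = (probs < mask_ratio) & token_ids.ne(pad_token_id)
    masked_ids = token_ids.clone()
    masked_ids[chosen] = mask_token_id
    labels[~chosen] = -100        # ignored by the cross-entropy loss later
    return masked_ids, labels

# Toy usage with random data (dimensions are illustrative only).
patches = torch.randn(2, 8 * 196, 768)      # 8 frames x 14x14 patches per frame
kept = drop_video_patches(patches, drop_ratio=0.9)
tokens = torch.randint(5, 1000, (2, 32))    # hypothetical token ids
masked, labels = mask_text_tokens(tokens, mask_token_id=4)
print(kept.shape, masked.shape, labels.shape)
```

Because nothing is reconstructed on the video side, the visual encoder only ever sees the kept 10% of patch tokens, which is where the training-time savings come from.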
Video Patch Dropping. We apply video patch dropping following MAE-based methods [23, 58, 22, 42]. Videos are temporally redundant in nature [58] and thus require a lot of unnecessary computation to process. During training, cross-modal contrastive learning also requires a relatively large batch size compared to typical supervised video tasks for better performance [49, 42]. Therefore, we introduce video patch dropping to reduce the computational cost and meet the batch size requirement. Video patch dropping is a key component of our method, as it alleviates the computational cost to a large extent. We call it "dropping" instead of "masking" because we do not recover the dropped patches as in MAE [23] or VideoMAE [58]. And we call it "patch" instead of "token" to distinguish it from the following text token masking.

Text Masking. We also apply masking on the input text to create a proxy task for cross-modal fusion. In contrast to FLIP [42], which applies the same approach of dropping text tokens as video patches, our method employs a random replacement technique like BERT [9], whereby a subset of text tokens is substituted with a designated token referred to as [MASK]. We employ a cross-modal transformer decoder as the multi-modal fuser. The fuser is a standard transformer decoder. At the cross-attention layers, the decoder takes text features as queries and video features as keys and values. Given masked video features and text features, the target of the decoder is to predict the masked text tokens. The decoder shares the same objective as the multimodal encoder in ALBEF [37] or the captioner in CoCa [75]: to push the ability of contrastive models beyond alignment. With text masking as an auxiliary task, we can enforce the model to learn more fine-grained cross-modal information instead of only global semantics. In this way, the model performs better at tasks requiring modality fusion, such as video question answering, without large-scale re-training.

Training Objectives. Our framework is optimized toward two objectives: video-text alignment and masked language modeling. Video-text alignment is trained by minimizing the InfoNCE loss with global video and text features. Masked language modeling is trained by minimizing the cross-entropy loss between the decoder's output and the ground-truth masked text tokens.
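Written out, the two objectives are only a few lines of code. The sketch below is a minimal illustration under our own assumptions, not the authors' released implementation: it computes a symmetric InfoNCE loss over pooled video/text embeddings and a masked-token cross-entropy over decoder predictions. The tensors video_feat, text_feat, decoder_logits, and mask_labels are hypothetical outputs of the encoders and the text decoder.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(video_feat, text_feat, temperature: float = 0.07):
    """Symmetric InfoNCE over a batch of paired global features.

    video_feat, text_feat: (B, D) pooled embeddings from the two encoders.
    """
    v = F.normalize(video_feat, dim=-1)
    t = F.normalize(text_feat, dim=-1)
    logits = v @ t.t() / temperature                 # (B, B) similarity matrix
    targets = torch.arange(v.size(0), device=v.device)
    loss_v2t = F.cross_entropy(logits, targets)
    loss_t2v = F.cross_entropy(logits.t(), targets)
    return 0.5 * (loss_v2t + loss_t2v)

def masked_lm_loss(decoder_logits, mask_labels):
    """Cross-entropy on masked positions only (labels are -100 elsewhere).

    decoder_logits: (B, L, V) predictions of the text decoder.
    mask_labels:    (B, L) ground-truth ids at masked positions, -100 otherwise.
    """
    return F.cross_entropy(decoder_logits.reshape(-1, decoder_logits.size(-1)),
                           mask_labels.reshape(-1),
                           ignore_index=-100)

# Joint objective on toy tensors (shapes and vocabulary size are illustrative).
B, D, L, V = 8, 512, 32, 49408
video_feat, text_feat = torch.randn(B, D), torch.randn(B, D)
decoder_logits = torch.randn(B, L, V)
mask_labels = torch.full((B, L), -100)
mask_labels[:, 3] = torch.randint(0, V, (B,))
loss = contrastive_loss(video_feat, text_feat) + masked_lm_loss(decoder_logits, mask_labels)
print(float(loss))
```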
Our method shares similar ideas with the recent concurrent work FLIP [42] but differs in the following aspects. (i) Our method is aimed at harvesting the potential of pretrained image foundation models for video-language tasks via post-pretraining, while FLIP is aimed at speeding up CLIP training; videos are more redundant than images and benefit more from patch dropping. (ii) Our method employs a cross-modal transformer decoder as the multi-modal fuser, while FLIP does not involve any cross-modal fusion. The decoder pushes the ability of contrastive models beyond alignment and makes our method generalize to more downstream tasks. (iii) FLIP requires an unmasking procedure before being applied to downstream tasks while our method does not, which makes our method more efficient.

4. Experiments
In this section, we first describe the settings for post-pretraining in Sec. 4.1 and then demonstrate the performance of the post-pretrained model by evaluating on a variety of downstream tasks. In Sec. 4.2, we validate the effectiveness of our method on zero-shot tasks. We evaluate by fine-tuning on video-text retrieval in Sec. 4.3 and video question answering in Sec. 4.4. Ablations and discussions are conducted in Sec. 4.5 and Sec. 4.6.

4.1. Post-Pretraining
Dataset. We choose WebVid-10M [3], a diverse and clean video-text dataset collected from stock footage sites, consisting of 10.7M video-text pairs. Compared with the data used by typical video foundation models, it is 1/10 of HD-VILA-100M [70] used in CLIP-ViP [71], 1/10 of WebVid-2.5M + HowTo100M used in All-in-One [61], and 1/18 of YT-Temporal-180M used in MERLOT [78]. No additional data or pretrained models are used other than WebVid-10M and CLIP.

Architecture. We use a simplified UniformerV2 [38] as the visual encoder by default. Since the spatiotemporal convolution in UniformerV2 hinders the utilization of video patch dropping, we only insert the global UniBlocks and remove the Dynamic Position Encoding module. We initialize the additional parameters such that the output is identical to the original CLIP model, which we find to be essential for decent zero-shot performance. The masked language module is a standard 4-layer transformer [60] decoder with a dimension of 512, followed by a two-layer MLP. Other settings leave CLIP Base/16 untouched. We ablate the choice of the visual encoder by comparing with the vanilla ViT [10] backbone. Unlike UniformerV2 with its temporal modules, the features of ViT are extracted frame-wise and directly averaged across frames. UniformerV2 endows a stronger temporal modeling ability with extra modules. Comparing UniformerV2 with ViT reveals the impact of explicit temporal modeling on different datasets and tasks.

Training. Thanks to the efficiency of patch dropping, we can post-pretrain with minimal computational resources. By default, we train for 50k steps on 8 A100 GPUs within 1 day. As a comparison, a typical video foundation model like All-in-one [61] requires 32 A100 GPUs for 7 days. The model is trained with a batch size of 1024, a learning rate of 1 × 10−5, weight decay of 0.2, and a cosine annealing schedule with 4k warm-up steps. The text encoder is frozen during post-pretraining, as the original training corpus of CLIP is much richer than WebVid-10M [3]. CLIP-ViP [71] also demonstrates that there exists a domain gap between the pretraining data and downstream tasks. Without additional data or a pretrained captioner, freezing the text encoder is the optimal choice.

Implementation. We follow VideoMAE [58] in using a large dropping ratio of 90% for video input, saving computational resources to a large extent. We randomly sample 8 frames per clip as video input. For text input, we use a mask ratio of 15% by default, following BERT [9]. The effects of different dropping ratios and mask ratios are ablated in Sec. 4.5.
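A minimal sketch of this training recipe is given below (our own assumption of how it could be wired up; model.text_encoder is a hypothetical attribute name, and the AdamW choice is illustrative since the paper only reports learning rate, weight decay, and schedule): the text encoder is frozen, and the remaining parameters follow a cosine schedule with linear warm-up.

```python
import math
import torch

def build_optimizer_and_schedule(model, total_steps=50_000, warmup_steps=4_000,
                                 lr=1e-5, weight_decay=0.2):
    # Freeze the text encoder; only the visual encoder and text decoder are updated.
    for p in model.text_encoder.parameters():
        p.requires_grad_(False)
    trainable = [p for p in model.parameters() if p.requires_grad]
    optimizer = torch.optim.AdamW(trainable, lr=lr, weight_decay=weight_decay)

    def lr_lambda(step):
        if step < warmup_steps:                    # linear warm-up
            return step / max(1, warmup_steps)
        progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
        return 0.5 * (1.0 + math.cos(math.pi * progress))   # cosine annealing to 0

    scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda)
    return optimizer, scheduler

# Dummy model just to make the sketch runnable; the real model would expose
# its CLIP text encoder, visual encoder, and masked-text decoder as submodules.
class DummyModel(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.text_encoder = torch.nn.Linear(8, 8)
        self.visual_encoder = torch.nn.Linear(8, 8)
        self.text_decoder = torch.nn.Linear(8, 8)

optimizer, scheduler = build_optimizer_and_schedule(DummyModel())
# Inside the training loop: loss.backward(); optimizer.step(); scheduler.step(); optimizer.zero_grad()
```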
4.2. Zero-Shot Tasks
We first validate the effectiveness of our method on zero-shot tasks. One of the key challenges of zero-shot learning is the distribution shift between the pretraining and target domains. CLIP-based methods suffer more from overfitting as the pretraining dataset is usually much larger than the post-pretraining dataset [71]. WiSE-FT [65] tackles this problem by simply weighted ensembling the pretrained model and the fine-tuned model. Unlike in WiSE-FT, where the target domain is also an image one, our scenario requires a more flexible method to solve the domain gap. We extend WiSE-FT to an online and multiple-checkpoint version. In short, we evenly ensemble using l checkpoints every k epochs/intervals during post-pretraining. We provide a detailed explanation in the supplementary and validate this design in the ablation study. By default, we set k = 10 and l = 3. We do not apply patch dropping in zero-shot tasks as patch dropping hurts the performance of zero-shot tasks greatly without an unmasking procedure.

Zero-Shot Action Recognition. We report zero-shot action recognition performance on Kinetics-400 [53] as an indicator of zero-shot classification ability. We follow the setting of ActionCLIP [63] with textual prompt and average the similarity between the normalized visual classification token and text classification tokens. The results are reported in Table 1. Despite our effort to tackle the distribution shift by freezing the text encoder, the performance after post-pretraining drops to 54.0% top-1 accuracy. With the modified WiSE-FT, the performance is boosted to 56.0% top-1 accuracy, indicating that the distribution shift is alleviated without massive additional data. Our method shows slightly better performance with ViT backbone, which indicates less need for temporal information in this scenario.

Method                  Top-1 Accuracy
ER-ZSAR [5]             42.1
ActionCLIP [63]         56.0
Ours without WiSE-FT    54.0
Ours with ViT           56.8
Ours                    56.7
Table 1. Zero-shot action recognition on Kinetics-400. Despite our effort to tackle distribution shift by freezing text encoder, the performance is still lower than SOTA method without massive additional data or interfering with the weights.

Method                  MSRVTT   LSMDC
JSFusion [76]           83.4     73.5
All-in-one [61]         92.3     84.4
MERLOT [78]             90.9     81.7
VIOLET [14]             91.9     82.8
All-in-one [61]         80.3     56.3
Ours without WiSE-FT    92.6     74.9
Ours with ViT           93.5     76.5
Ours                    93.2     76.3
Table 2. Zero-shot multiple-choice on MSRVTT and LSMDC. Those methods with supervised training are grayed out. Unlike action recognition, our method surpasses the SOTA method even without WiSE-FT. This is attributed to the frozen text encoder as multiple-choice task requires better textual modeling.
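The modified WiSE-FT amounts to a uniform average of weights collected during post-pretraining. The sketch below reflects our reading of the description (the precise recipe is only given in the paper's supplementary material, so treat this as an assumption rather than the exact procedure): snapshots are taken every k intervals, and the most recent l entries of the pool, which starts from the CLIP-initialized weights, are averaged for zero-shot evaluation.

```python
import copy
import torch

def average_state_dicts(state_dicts):
    """Uniformly average a list of compatible model state dicts."""
    avg = copy.deepcopy(state_dicts[0])
    for key in avg:
        if avg[key].is_floating_point():
            stacked = torch.stack([sd[key].float() for sd in state_dicts], dim=0)
            avg[key] = stacked.mean(dim=0).to(avg[key].dtype)
    return avg

class CheckpointEnsembler:
    """Keep a pool of snapshots taken every `k` intervals and average the
    most recent `l` of them; the pool starts from the initial (CLIP) weights."""
    def __init__(self, init_state, k: int, l: int):
        self.k, self.l = k, l
        self.pool = [copy.deepcopy(init_state)]
        self.step = 0

    def maybe_snapshot(self, model):
        self.step += 1
        if self.step % self.k == 0:
            self.pool.append(copy.deepcopy(model.state_dict()))

    def ensembled_state(self):
        # Early on, the slice still contains the initial weights, so the
        # ensemble falls back towards the zero-shot CLIP model.
        return average_state_dicts(self.pool[-self.l:])

# Toy usage: in practice maybe_snapshot() would be called per training interval.
net = torch.nn.Linear(4, 4)
ens = CheckpointEnsembler(net.state_dict(), k=10, l=3)
for _ in range(30):
    ens.maybe_snapshot(net)
net.load_state_dict(ens.ensembled_state())
```

The ensembled state dict is then loaded into a fresh copy of the model before running the zero-shot benchmarks.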
Method R@1↑ R@5↑ R@10↑ MdR↓ VideoCLIP [68] Frozen [3] BridgeFormer [20] ALPRO [36] VIOLET [14] OmniVL† [62] Ours with ViT Ours NoiseEst [1] Frozen [3] BridgeFormer [20] Ours with ViT Ours NoiseEst [1] Frozen [3] BridgeFormer [20] Ours with ViT Ours VideoCLIP [68] Frozen [3] BridgeFormer [20] VIOLET [14] ALPRO [36] OmniVL† [62] Ours with ViT Ours CLIP [49] Ours with ViT Ours 10.4 24.7 33.2 24.1 25.9 34.6 32.6 36.2 13.7 33.7 48.4 39.2 44.0 4.2 9.3 15.5 17.5 16.0 16.6 20.2 25.6 23.5 23.8 33.3 29.8 32.2 39.7 45.2 48.9 MSR-VTT 30 57.2 68.6 55.4 59.7 66.6 65.5 69.7 22.2 46.9 58.0 44.7 49.5 58.4 54.9 60.3 MSVD 35.7 64.7 76.4 66.9 72.7 47.7 76.3 85.8 76.2 82.5 LSMDC 11.6 22.0 30.7 29.9 30.2 17.1 30.1 38.7 38.0 36.7 DiDeMo - 58.5 61.1 59.8 57.9 68.5 63.6 68.5 46.9 46.4 50.6 49.8 47.3 58.7 54.3 58.0 VATEX 72.3 76.5 80.6 82.2 85.4 88.4 - 7.0 4.0 8.0 - - 4.0 3.0 12.0 3.0 2.0 2.0 2.0 119.0 51.0 22.0 25.0 28.0 - 7.0 5.0 - 3.0 - 5.0 4.0 2.0 2.0 2.0 Table 3. Zero-shot video-text retrieval on MSR-VTT, MSVD, LSMDC, DiDeMo, and VATEX. Video foundation models and retrieval-specialized methods are mixed for reference. Our method is aimed to compare with video foundation models for general pur- pose, therefore those methods specially designed for retrieval are grayed out. “†” utilizes matching loss to rerank the retrieved re- sults for better performance. Results without WiSE-FT are not reported because they all fail in zero-shot video-text retrieval. Zero-Shot Multiple-Choice. We evaluate the performance on the zero-shot multiple-choice task. The objective of the multiple-choice task is to find the correct caption from the candidates, serving as a simplified version of the retrieval task. We report zero-shot performance on MSRVTT [76], and LSMDC [59] in Table 2. Our method even outperforms some supervised methods with such a simple framework. Similar to action recognition, using ViT as the backbone yields better results due to less need for temporal informa- tion. This task presents a smaller relative gap between post- pretraining with WiSE-FT and the one without. We attribute this to the fact that the multiple-choice task requires bet- ter textual modeling ability, which is retained via freezing the text encoder. A similar observation is presented in Co- Tokenization [48], in which a pretrained T5 model achieves almost 100% accuracy on the multiple-choice task in TGIF- QA [24] even without video. Zero-Shot Video-Text Retrieval. We evaluate zero-shot video-text retrieval performance on 5 popular video-text retrieval datasets including MSRVTT [69], MSVD [4], LSMDC [50], DiDeMo [2], and VATEX [64]. A brief in- troduction of these datasets can be found in Sec. 4.3. The results are reported in Table 3. Our method demonstrates superior performance across all 5 datasets. The only in- ferior case is DiDeMo on which the performance is only slightly lower than OmniVL, which uses an extra match- ing loss to rank the retrieved results while our method is purely similarity-based. The different characteristic of video-text retrieval task is that without WiSE-FT, results are only single-digit, unlike classification and multiple-choice tasks. This reveals that the distribution shift shows a non- negligible effect that is too large to be alleviated by only freezing the text encoder. Our method shows superior performance on zero-shot tasks with the modified WiSE-FT. This is surprising yet ex- pected. 
First, CLIP models are already powerful zero-shot models trained on diverse data, while those video-language tasks are still limited to a small domain. Second, we han- dle the distribution shift to a large extent by freezing the text encoder and using WiSE-FT. However, the failure on retrieval without WiSE-FT indicates that classification and multi-choice rely more on static vision, while retrieval re- quires more dynamic and interactive modeling, thus suffer- ing more from the distribution shift. On several tasks, using the vanilla ViT as the backbone shows better performance. This may be due to that using static vision is sufficient to handle those benchmarks. We discuss this in Sec. 4.6. 4.3. Video-Text Retrieval The introduction of video patch dropping should natu- rally lead to two benefits: saving computational resources greatly and improving the performance of contrastive learn- ing by fitting larger batch size [49, 42]. We demonstrate the effectiveness of video patch dropping with the retrieval task which benefits more from large batch size. Datasets. We evaluate the performance of our method on 5 datasets including MSRVTT [69], MSVD [4], LSMDC [50], DiDeMo [2], and VATEX [64]. MSRVTT contains 10,000 videos in total and 200,000 captions. MSVD contains 1,970 videos in total and 40 captions for each video. LSMDC contains 118,081 videos in total and each video has one caption. DiDeMo contains 10,000 videos in total and 40,000 captions. Method R@1↑ R@5↑ R@10↑ MdR↓ 31.0 FROZEN [3] CLIP4Clip [44] 44.5 BridgeFormer [20] 44.9 CLIP-ViP† [71] 54.2 22.0 ClipBERT [32] 33.9 ALPRO [36] 37.9 All-in-one [61] VIOLETv2† [15] 37.2 OmniVL† [62] 47.8 Ours w/o dropping 45.3 45.7 Ours with ViT 47.4 Ours 33.7 FROZEN [3] CLIP4Clip [44] 46.2 BridgeFormer [20] 54.4 Ours w/o dropping 49.9 50.4 Ours with ViT 51.0 Ours FROZEN [3] 9.3 BridgeFormer [20] 21.8 22.6 CLIP4Clip [44] CLIP-ViP† [71] 29.4 VIOLETv2† [15] 24.0 Ours w/o dropping 22.5 22.0 Ours with ViT 24.7 Ours FROZEN [3] 31.0 BridgeFormer [20] 37.0 43.4 CLIP4Clip [44] CLIP-ViP† [71] 50.5 21.1 ClipBERT [32] 31.2 All-in-one [61] 35.9 ALPRO [36] VIOLETv2† [15] 47.9 OmniVL† [62] 52.4 Ours w/o dropping 46.5 45.4 Ours with ViT 46.7 Ours CLIP4Clip [44] 55.9 Ours w/o dropping 64.4 64.2 Ours with ViT 64.5 Ours MSR-VTT 70.5 81.6 80.3 84.8 59.9 73.2 77.1 75.8 83.8 80.9 82.1 82.6 59.5 71.4 71.9 77.2 46.8 60.7 68.1 64.8 74.2 72.5 73.8 73.2 MSVD 76.3 84.6 89.4 87.8 88.0 88.4 64.7 76.1 82.8 79.7 79.4 80.5 LSMDC 30.1 50.6 49.1 59.0 54.1 54.3 52.5 53.1 22.0 41.1 41.0 50.6 43.5 43.7 42.3 44.0 DiDeMo 72.4 73.9 80.6 87.1 61.1 72.1 78.8 84.1 85.4 81.9 80.4 82.4 59.8 62.2 70.2 78.4 47.3 60.5 67.5 76.5 79.5 73.8 72.4 74.4 VATEX 95.0 96.3 96.3 96.5 89.2 92.2 92.1 92.1 3.0 2.0 2.0 1.0 6.0 3.0 - - - 2.0 2.0 2.0 3.0 2.0 1.0 2.0 1.0 1.0 51.0 10.0 11.0 - 8.0 9.0 8.0 3.0 3.0 2.0 1.0 6.3 3.0 3.0 - - 2.0 2.0 2.0 1.0 1.0 1.0 1.0 Video-text retrieval task on MSR-VTT, MSVD, Table 4. LSMDC, DiDeMo, and VATEX. Our baseline model is CLIP4Clip [44] and our method only provides the pre-trained model. Methods specially designed for retrieval are grayed out. “†” marks those utilizing matching loss to rerank the retrieved re- sults for better performance. CLIP-ViP utilizes substantially more data but is highly related as a post-pretraining counterpart. Implementation and Training. We follow the standard data split in CLIP4Clip [44] and also follow its setting for fine-tuning on video-text retrieval tasks. 
When training without video patch dropping, we use a batch size of 24 per GPU due to memory limitation. The batch size is in- creased to 128 with video patch dropping as the memory requirement is significantly lifted. The results are reported in Table 4 with R@1, R@5, R@10, and median rank. It should be noted that the base- line of our method is CLIP4Clip and our method only pro- vides the post-pretrained model. Some SOTA methods like OmniVL [62], VIOLETv2 [15], and CLIP-ViP [71] utilize a matching loss when pretraining to rerank the retrieved re- sults for better performance. While our method is purely similarity-based for generality and fair comparison. CLIP- ViP [71] is a highly related work as a post-pretraining coun- terpart but uses substantially more data including 114.5M pairs and an additional pretrained captioner. With video patch dropping, our model achieves results comparable to SOTA methods on all datasets. We find that our method does not require an unmasking procedure to be applied on downstream tasks unlike FLIP [42]. This is pos- sibly due to the fact that our method is initialized with a pretrained CLIP and tries to alleviate distribution shift with a frozen text encoder and WiSE-FT. The performance gap between using a vanilla ViT and UniformerV2 as the back- bone is smaller than the zero-shot setting. This can be at- tributed to saturating performance leading to a smaller gap, but also demonstrates that an image-based model may be good enough for existing video-language tasks. 4.4. Video Question Answering Compared with other CLIP-based methods which focus on alignment, our method provides a more general video foundation model by using video and text masking, which fuses features across modalities. To validate this, we con- duct experiments on video question answering. Unlike video-text retrieval which is purely similarity-based, ques- tion answering requires more interactions between modali- ties to predict the answer. Datasets. We report results on MSRVTT-QA [67], MSVD- QA [67], and the frame-QA subtask on TGIF-QA [24]. MSRVTT-QA contains 243K open-ended questions over 10K videos. MSVD-QA consists of 47K open-ended ques- tions over 2K videos. We follow the settings in All-in- one [61] as the dataset setup. Specifically, we choose 1,500, 1,000, and 1,540 most common options as the target vocab- ulary for each dataset respectively. Implementation. We add a two-layer MLP on top of the pretrained model to predict the answer. We consider three possible features to feed into the VQA classification head: 1) Alignment features only, which concatenates the classifi- cation tokens of the vision encoder and text encoder. This is Method MSRVTT MSVD TGIF-QA Drop Ratio R@1↑ R@5↑ R@10↑ MdR↓ Mem/G↓ Just Ask [72] Co-Tokenization [48] ClipBERT [32] ALPRO [36] All-in-one [61] MERLOT [78] VIOLET [14] OmniVL [62] VIOLETv2 [15] Ours w/o decoder Ours with ViT Ours 41.8 45.7 37.4 42.1 42.9 43.1 43.9 44.1 44.5 44.2 44.1 44.8 Table 5. Video question answering on MSRVTT, MSVD, and TGIF-QA. Our method is aimed to compare with video founda- tion models for general purposes. Therefore those methods spe- cially designed for video question answering are grayed out. - 62.5 60.3 - 64.2 69.5 68.9 - 72.8 67.2 67.0 69.3 47.5 48.6 - 46.3 46.5 - 47.9 51.0 54.7 50.7 50.1 52.4 the default setting when post-pretraining without text mask- ing. 2) Fusion features only, which takes the end-of-text token in the text decoder features as the final classification feature. 
3) The combination of alignment and fusion features, which we find to work best in our experiments and which is consistent with the intuition that alignment features and fusion features are complementary to each other.

Training. For all three datasets, we train the post-pretrained model on the standard training split with a learning rate of 1×10−5 for 20 epochs. We use a cosine learning rate scheduler with 2 warm-up epochs. This simple choice of training hyperparameters shows that our post-pretraining framework produces a robust and general model.

We report the results of video question answering in Table 5. With text masking, our method exhibits superior performance, comparable to or even surpassing some methods designed specifically for VQA tasks such as Just Ask [72]. Compared with video foundation model counterparts, our method also performs better than some models trained on large-scale datasets with heavy computational resources, such as MERLOT [78] and All-in-one [61]. Text masking improves the accuracy by 0.6%, 1.7%, and 2.1% on MSRVTT, MSVD, and TGIF-QA respectively, showing that the text masking design is effective in fusing different modalities.

We also attribute this performance gain to the better ability to model text. Currently, common video question answering practices share the same settings as classification, but with a much larger vocabulary size (e.g., 1,500 for MSRVTT-QA). Therefore, a well-trained text encoder is crucial for better performance. Similar observations are shared in Co-Tokenization [48], FrozenBiLM [73], and Img2Prompt [21], where frozen large language models are found to be extremely helpful in question answering tasks.

Drop Ratio   R@1↑   R@5↑   R@10↑   MdR↓   Mem/G↓
0.7          48.0   74.3   83.4    2.0    37.3
0.8          47.7   74.0   83.2    2.0    25.6
0.9          47.4   73.2   82.6    2.0    16.2
Table 6. Different patch drop ratios. Performance of our method on fine-tuned MSRVTT video-text retrieval when post-pretrained with different drop ratios for patch dropping. "Mem" denotes single-GPU memory usage with a per-GPU batch size of 128.

Mask Ratio   0      0.05   0.15   0.25
Accuracy     44.2   44.4   44.8   44.6
Table 7. Different text masking ratios. Performance of our method on MSRVTT video question answering when trained with different mask ratios for the masked language module. "0" means removing the masked language module.

4.5. Ablation Study
Drop Ratio of Video Patch Dropping. We follow VideoMAE [58] with a drop ratio of 90% when implementing video patch dropping. As the retrieval task benefits most from patch dropping, we compare different dropping ratios with the same batch size in post-pretraining and show how the dropping ratio affects downstream performance. The results on MSRVTT video-text retrieval are shown in Table 6. As expected, when post-pretraining with the same batch size, a lower drop ratio yields higher performance due to the smaller gap between post-pretraining and downstream fine-tuning. However, the GPU memory usage of drop ratio 0.7 is 2.3 times higher than that of drop ratio 0.9. Intuitively, there is a trade-off between the performance and efficiency of video patch dropping. One should weigh efficiency against performance based on the limitations of the available computational resources. We simply adopt the drop ratio of 0.9 for the highest efficiency with an acceptable performance drop on downstream tasks. This also aligns with the conclusion of VideoMAE [58] that videos are highly redundant in nature. Videos can endure higher drop ratios than images, which typically use a drop ratio of around 70% [42].
Mask Ratio of Text Masking. The default text masking ratio follows BERT [9], i.e., 15%. However, considering that the masked text decoder serves as a cross-modal fuser, a different mask ratio may work better. As the video question answering task benefits most from text masking, we compare different mask ratios in post-pretraining and evaluate on MSRVTT-QA. The results in Table 7 vary little on the video question answering task. We attribute this to three reasons. First, a frozen text encoder fixes the unmasked text features and leaves little space for the masked text decoder to learn. Second, the masked text decoder is not initialized with a pretrained model and is only post-pretrained for a relatively short schedule. Third, the masked text decoder is jointly trained with patch dropping. With a high mask ratio, the decoder does not benefit from auxiliary vision information.

Hyperparameters of WiSE-FT. We use a modified version of WiSE-FT [65] to further alleviate distribution shift in zero-shot tasks. To ablate this choice, we vary the ensembling interval k and the number of checkpoints l in post-pretraining. As only zero-shot tasks benefit most from WiSE-FT, we conduct the ablation analysis of k and l on zero-shot multiple-choice tasks. The results are shown in Table 8. They reveal that the zero-shot performance does not vary drastically with different training hyperparameters, but the modified version provides a consistent improvement over the original WiSE-FT.

k    l    MSR-VTT   LSMDC   combined
-    -    92.6      74.9    78.9
2    50   93.1      75.0    79.1
2    10   93.4      76.0    80.0
3    10   93.2      76.3    80.2
5    5    93.7      75.7    79.8
5    25   93.0      75.5    79.5
Table 8. Performance of the modified WiSE-FT on zero-shot multiple-choice tasks with different training hyperparameters. "combined" indicates the combination of the two datasets. When k = 2 and l = 50, this is standard WiSE-FT with α = 0.5. Our method is robust to the ensembling interval k and the checkpoint number l. The modified version provides consistent improvement over the original WiSE-FT.

4.6. Discussion
Reflections on Video-Language Training. Our results provide several observations on current video-language training, covering both pretraining data and downstream tasks. In short, they may not be "video" enough. (i) Our method is a simple post-pretraining framework without bells and whistles and is trained at minimal cost. Still, it achieves performance comparable to heavily trained video foundation models. We give credit to the powerful CLIP pretraining. Several studies have already found that image-based models can perform well on video benchmarks, including CoCa [75] and Singularity [31]. Also, in CLIP-ViP [71], video-language pretraining can be improved with captions generated by an image captioner. (ii) Our experiments are conducted with two different types of backbones: UniformerV2 and vanilla ViT. One would expect that a powerful spatiotemporal backbone like UniformerV2 would outperform vanilla ViT by a large margin, but in our study this is not the case. The vanilla ViT even surpasses UniformerV2 on several downstream tasks, mostly in zero-shot settings. The performance gap between ViT and UniformerV2 on retrieval tasks also shrinks from the zero-shot to the fine-tuning setting. This indicates that temporal modeling may not be so important on some of the current video-language benchmarks, or that spatiotemporal backbones like UniformerV2 have not fully utilized temporal information yet.
(iii) Videos should intuitively contain more information than images, but commonly used video-text data like WebVid does not provide longer or richer text descriptions than image-text data. This is also one of the reasons why a frozen text encoder is good enough in our method.

The Role of the Text Encoder. Our method performs worse if the text encoder is not frozen, which matches the observation made in CLIP-ViP. CLIP-ViP attributes this to a language domain gap and tackles the problem by using extensively generated caption data, while our method simply freezes the text encoder. We believe that a well-trained text encoder can further boost performance on video-language tasks. This has also been demonstrated by works on question answering that rely on powerful language models, including Co-Tokenization [48], FrozenBiLM [73], and Img2Prompt [21]. However, current video-text datasets may not be diverse and high-quality enough to train a satisfying text encoder, especially when adopting a well-pretrained model like CLIP. Further post-pretraining without sufficient text data will hurt the performance.

Future Directions. We provide two possible directions of improvement based on our observations. The first is to build models and benchmarks that are more "video", with more temporal reasoning. For example, a good video model should be able to distinguish between "a plane is taking off" and "a plane is landing" (which, unfortunately, most models cannot), and a good video benchmark should focus more on temporal information. The second is to form richer language descriptions for video-text data. For example, dense captions containing much more textual information with explicit timestamps may be more beneficial to temporal modeling than the current "one caption per video" setting.

5. Conclusion
We propose a simple yet efficient post-pretraining framework to build video foundation models based on image foundation models. With the introduction of video patch dropping and text masking, our method achieves state-of-the-art performance on various video-language tasks, including various zero-shot tasks, video question answering, and video-text retrieval. The performance of our model is superior to some heavily pretrained video foundation models. The experimental results demonstrate the effectiveness and generality of our method. Our method establishes a novel counterpart for video foundation models and provides in-depth reflections on video-language training and the role of the text encoder. We hope our method can provide a new direction in building large pretrained models, making them more accessible and sustainable.

Societal Impacts. Due to its efficiency, our method makes large pretrained models more accessible for small research organizations and more environmentally friendly, with a smaller carbon footprint, which is one of the major concerns around large pretrained models. However, our method shares the same potential negative impacts as the CLIP model: zero-shot classification can be used for surveillance, especially now that the model has gained the ability to classify temporal actions. Also, a post-pretraining procedure may make it hard to trace data sources when handling copyright or privacy issues.

References
[1] Elad Amrani, Rami Ben-Ari, Daniel Rotman, and Alex Bronstein. Noise estimation using density estimation for self-supervised multimodal learning.
In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pages 6644–6652, 2021. 5 [2] Lisa Anne Hendricks, Oliver Wang, Eli Shechtman, Josef Sivic, Trevor Darrell, and Bryan Russell. Localizing mo- In Proceedings of ments in video with natural language. the IEEE international conference on computer vision, pages 5803–5812, 2017. 2, 5 [3] Max Bain, Arsha Nagrani, G¨ul Varol, and Andrew Zisser- man. Frozen in time: A joint video and image encoder for end-to-end retrieval. In Proceedings of the IEEE/CVF Inter- national Conference on Computer Vision, pages 1728–1738, 2021. 1, 2, 4, 5, 6 [4] David Chen and William B Dolan. Collecting highly paral- lel data for paraphrase evaluation. In Proceedings of the 49th annual meeting of the association for computational linguis- tics: human language technologies, pages 190–200, 2011. 2, 5 Elaborative rehearsal [5] Shizhe Chen and Dong Huang. In Proceedings of the for zero-shot action recognition. IEEE/CVF International Conference on Computer Vision, pages 13638–13647, 2021. 4 [6] Xinlei Chen, Hao Fang, Tsung-Yi Lin, Ramakrishna Vedan- tam, Saurabh Gupta, Piotr Doll´ar, and C. Lawrence Zit- nick. Microsoft coco captions: Data collection and evalu- ation server. ArXiv, abs/1504.00325, 2015. 2 [7] Yen-Chun Chen, Linjie Li, Licheng Yu, Ahmed El Kholy, Faisal Ahmed, Zhe Gan, Yu Cheng, and Jingjing Liu. Uniter: In ECCV, Universal image-text representation learning. 2020. 2 [8] Xingyi Cheng, Hezheng Lin, Xiangyu Wu, F. Yang, and Dong Shen. Improving video-text retrieval by multi- stream corpus alignment and dual softmax loss. ArXiv, abs/2109.04290, 2021. 2 [9] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Pre-training of deep bidirectional arXiv preprint Toutanova. transformers for language understanding. arXiv:1810.04805, 2018. 2, 3, 4, 7 Bert: [10] Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Syl- vain Gelly, Jakob Uszkoreit, and Neil Houlsby. An image is worth 16x16 words: Transformers for image recognition at scale. ArXiv, abs/2010.11929, 2021. 4 [11] Chenyou Fan, Xiaofan Zhang, Shu Zhang, Wensheng Wang, Chi Zhang, and Heng Huang. Heterogeneous memory en- hanced multimodal attention model for video question an- In Proceedings of the IEEE/CVF conference on swering. computer vision and pattern recognition, pages 1999–2007, 2019. 2 [12] Han Fang, Pengfei Xiong, Luhui Xu, and Yu Chen. Clip2video: Mastering video-text retrieval via image clip. ArXiv, abs/2106.11097, 2021. 2 [13] Christoph Feichtenhofer, Haoqi Fan, Yanghao Li, and Kaim- ing He. Masked autoencoders as spatiotemporal learners. arXiv preprint arXiv:2205.09113, 2022. 2 [14] Tsu-Jui Fu, Linjie Li, Zhe Gan, Kevin Lin, William Yang Wang, Lijuan Wang, and Zicheng Liu. Violet: End-to-end video-language transformers with masked visual-token mod- eling. arXiv preprint arXiv:2111.12681, 2021. 2, 4, 5, 7 [15] Tsu-Jui Fu, Linjie Li, Zhe Gan, Kevin Lin, William Yang Wang, Lijuan Wang, and Zicheng Liu. An empirical study of end-to-end video-language transformers with masked visual modeling. arXiv preprint arXiv:2209.01540, 2022. 6, 7 [16] Valentin Gabeur, Chen Sun, Karteek Alahari, and Cordelia Schmid. Multi-modal transformer for video retrieval. In European Conference on Computer Vision, pages 214–229. Springer, 2020. 2 [17] Jiyang Gao, Runzhou Ge, Kan Chen, and Ram Nevatia. Motion-appearance co-memory networks for video question answering. 
In Proceedings of the IEEE Conference on Com- puter Vision and Pattern Recognition, pages 6576–6585, 2018. 2 [18] Jiyang Gao, Chen Sun, Zhenheng Yang, and Ram Nevatia. Tall: Temporal activity localization via language query. In Proceedings of the IEEE international conference on com- puter vision, pages 5267–5275, 2017. 2 [19] Zijian Gao, Jingyun Liu, Sheng Chen, Dedan Chang, Hao Zhang, and Jinwei Yuan. Clip2tv: An empirical study on transformer-based methods for video-text retrieval. ArXiv, abs/2111.05610, 2021. 2 [20] Yuying Ge, Yixiao Ge, Xihui Liu, Dian Li, Ying Shan, Xi- aohu Qie, and Ping Luo. Bridging video-text retrieval with multiple choice questions. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 16167–16176, 2022. 1, 5, 6 [21] Jiaxian Guo, Junnan Li, Dongxu Li, Anthony Meng Huat Tiong, Boyang Li, Dacheng Tao, and Steven CH Hoi. From images to textual prompts: Zero-shot vqa with frozen large language models. arXiv preprint arXiv:2212.10846, 2022. 7, 8 [22] Tengda Han, Weidi Xie, and Andrew Zisserman. Turbo train- ing with token dropout. arXiv preprint arXiv:2210.04889, 2022. 1, 2, 3 [23] Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Doll´ar, and Ross Girshick. Masked autoencoders are scalable vision learners. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 16000– 16009, 2022. 1, 2, 3 [24] Yunseok Jang, Yale Song, Youngjae Yu, Youngjin Kim, and Gunhee Kim. Tgif-qa: Toward spatio-temporal reasoning in visual question answering. In Proceedings of the IEEE con- ference on computer vision and pattern recognition, pages 2758–2766, 2017. 2, 5, 6 [25] Chao Jia, Yinfei Yang, Ye Xia, Yi-Ting Chen, Zarana Parekh, Hieu Pham, Quoc Le, Yun-Hsuan Sung, Zhen Li, and Tom Duerig. Scaling up visual and vision-language representa- In International tion learning with noisy text supervision. Conference on Machine Learning, pages 4904–4916. PMLR, 2021. 1 [26] Jianwen Jiang, Ziqiang Chen, Haojie Lin, Xibin Zhao, and Yue Gao. Divide and conquer: Question-guided spatio- temporal contextual attention for video question answering. In Proceedings of the AAAI Conference on Artificial Intelli- gence, volume 34, pages 11101–11108, 2020. 2 [27] Ranjay Krishna, Kenji Hata, Frederic Ren, Li Fei-Fei, and Juan Carlos Niebles. Dense-captioning events in videos. In Proceedings of the IEEE international conference on com- puter vision, pages 706–715, 2017. 2 [28] Ranjay Krishna, Yuke Zhu, Oliver Groth, Justin Johnson, Kenji Hata, Joshua Kravitz, Stephanie Chen, Yannis Kalan- tidis, Li-Jia Li, David A. Shamma, Michael S. Bernstein, and Li Fei-Fei. Visual genome: Connecting language and vision using crowdsourced dense image annotations. International Journal of Computer Vision, 123:32–73, 2016. 2 [29] Thao Minh Le, Vuong Le, Svetha Venkatesh, and Truyen Tran. Hierarchical conditional relation networks for video question answering. In Proceedings of the IEEE/CVF con- ference on computer vision and pattern recognition, pages 9972–9981, 2020. 2 [30] Jie Lei, Tamara L Berg, and Mohit Bansal. Detecting moments and highlights in videos via natural language queries. Advances in Neural Information Processing Sys- tems, 34:11846–11858, 2021. 2 [31] Jie Lei, Tamara L. Berg, and Mohit Bansal. Revealing single frame bias for video-and-language learning. ArXiv, abs/2206.03428, 2022. 2, 8 [32] Jie Lei, Linjie Li, Luowei Zhou, Zhe Gan, Tamara L Berg, Mohit Bansal, and Jingjing Liu. 
Less is more: Clipbert for In Pro- video-and-language learning via sparse sampling. ceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 7331–7341, 2021. 2, 6, 7 [33] Jie Lei, Licheng Yu, Mohit Bansal, and Tamara L Berg. Tvqa: Localized, compositional video question answering. arXiv preprint arXiv:1809.01696, 2018. 2 [34] Jie Lei, Licheng Yu, Tamara L Berg, and Mohit Bansal. Tvqa+: Spatio-temporal grounding for video question an- swering. arXiv preprint arXiv:1904.11574, 2019. 2 [35] Jie Lei, Licheng Yu, Tamara L Berg, and Mohit Bansal. Tvr: A large-scale dataset for video-subtitle moment retrieval. In European Conference on Computer Vision, pages 447–463. Springer, 2020. 2 [36] Dongxu Li, Junnan Li, Hongdong Li, Juan Carlos Niebles, and Steven CH Hoi. Align and prompt: Video-and-language In Proceedings of the pre-training with entity prompts. IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 4953–4963, 2022. 5, 6, 7 Align before fuse: Vision and language representation learn- ing with momentum distillation. Advances in neural infor- mation processing systems, 34:9694–9705, 2021. 1, 2, 3 [38] Kunchang Li, Yali Wang, Yinan He, Yizhuo Li, Yi Wang, Limin Wang, and Yu Qiao. Uniformerv2: Spatiotemporal learning by arming image vits with video uniformer. arXiv preprint arXiv:2211.09552, 2022. 4 [39] Linjie Li, Yen-Chun Chen, Yu Cheng, Zhe Gan, Licheng Yu, and Jingjing Liu. Hero: Hierarchical encoder for video+ language omni-representation pre-training. arXiv preprint arXiv:2005.00200, 2020. 2 [40] Linjie Li, Zhe Gan, Kevin Lin, Chung-Ching Lin, Zicheng Liu, Ce Liu, and Lijuan Wang. Lavender: Unifying video-language understanding as masked language model- ing. arXiv preprint arXiv:2206.07160, 2022. 2 [41] Linjie Li, Jie Lei, Zhe Gan, Licheng Yu, Yen-Chun Chen, Rohit Pillai, Yu Cheng, Luowei Zhou, Xin Eric Wang, William Yang Wang, et al. Value: A multi-task benchmark arXiv for video-and-language understanding evaluation. preprint arXiv:2106.04632, 2021. 2 [42] Yanghao Li, Haoqi Fan, Ronghang Hu, Christoph Feichten- hofer, and Kaiming He. Scaling language-image pre-training via masking. arXiv preprint arXiv:2212.00794, 2022. 1, 2, 3, 5, 6, 7 [43] Yang Liu, Samuel Albanie, Arsha Nagrani, and Andrew Zisserman. Use what you have: Video retrieval using representations from collaborative experts. arXiv preprint arXiv:1907.13487, 2019. 2 [44] Huaishao Luo, Lei Ji, Ming Zhong, Yang Chen, Wen Lei, Nan Duan, and Tianrui Li. Clip4clip: An empirical study of clip for end to end video clip retrieval and captioning. Neu- rocomputing, 508:293–304, 2022. 1, 2, 6 [45] Antoine Miech, Dimitri Zhukov, Jean-Baptiste Alayrac, Makarand Tapaswi, and Josef Sivic. Ivan Laptev, Howto100m: Learning a text-video embedding by watching 2019 IEEE/CVF hundred million narrated video clips. International Conference on Computer Vision (ICCV), pages 2630–2640, 2019. 2 [46] Medhini Narasimhan, Anna Rohrbach, and Trevor Darrell. Clip-it! language-guided video summarization. In NeurIPS, 2021. 2 [47] Mandela Patrick, Po-Yao Huang, Yuki Asano, Florian Metze, Alexander Hauptmann, Joao Henriques, and Andrea Vedaldi. Support-set bottlenecks for video-text representa- tion learning. arXiv preprint arXiv:2010.02824, 2020. 2 [48] AJ Piergiovanni, Kairo Morton, Weicheng Kuo, Michael S Ryoo, and Anelia Angelova. Video question answering with iterative video-text co-tokenization. In European Conference on Computer Vision, pages 76–94. Springer, 2022. 
5, 7, 8 [49] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learn- ing transferable visual models from natural language super- vision. In International Conference on Machine Learning, pages 8748–8763. PMLR, 2021. 1, 2, 3, 5 [37] Junnan Li, Ramprasaath Selvaraju, Akhilesh Gotmare, Shafiq Joty, Caiming Xiong, and Steven Chu Hong Hoi. [50] Anna Rohrbach, Marcus Rohrbach, Niket Tandon, and Bernt Schiele. A dataset for movie description. In Proceedings of the IEEE conference on computer vision and pattern recog- nition, pages 3202–3212, 2015. 2, 5 In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 4581–4591, 2019. 2, 5 [51] Christoph Schuhmann, Romain Beaumont, Richard Vencu, Cade Gordon, Ross Wightman, Mehdi Cherti, Theo Coombes, Aarush Katta, Clayton Mullis, Mitchell Worts- man, et al. Laion-5b: An open large-scale dataset for training next generation image-text models. arXiv preprint arXiv:2210.08402, 2022. 1 [52] Piyush Sharma, Nan Ding, Sebastian Goodman, and Radu Soricut. Conceptual captions: A cleaned, hypernymed, im- age alt-text dataset for automatic image captioning. In ACL, 2018. 2 [53] Lucas Smaira, Jo˜ao Carreira, Eric Noland, Ellen Clancy, A short note on ArXiv, Amy Wu, and Andrew Zisserman. the kinetics-700-2020 human action dataset. abs/2010.10864, 2020. 4 [54] Weijie Su, Xizhou Zhu, Yue Cao, Bin Li, Lewei Lu, Furu Wei, and Jifeng Dai. Vl-bert: Pre-training of generic visual- linguistic representations. In ICLR, 2020. 2 [55] Chen Sun, Austin Myers, Carl Vondrick, Kevin P. Murphy, and Cordelia Schmid. Videobert: A joint model for video and language representation learning. 2019 IEEE/CVF In- ternational Conference on Computer Vision (ICCV), pages 7463–7472, 2019. 2 [56] Hao Hao Tan and Mohit Bansal. Lxmert: Learning cross- modality encoder representations from transformers. In EMNLP, 2019. 2 [57] Mingkang Tang, Zhanyu Wang, Zhenhua Liu, Fengyun Rao, Dian Li, and Xiu Li. Clip4caption: Clip for video caption. Proceedings of the 29th ACM International Conference on Multimedia, 2021. 2 [58] Zhan Tong, Yibing Song, Jue Wang, and Limin Wang. Videomae: Masked autoencoders are data-efficient learn- ers for self-supervised video pre-training. arXiv preprint arXiv:2203.12602, 2022. 1, 2, 3, 4, 7 [59] Atousa Torabi, Niket Tandon, and Leonid Sigal. Learning language-visual embedding for movie understanding with natural-language. ArXiv, abs/1609.08124, 2016. 5 [60] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszko- reit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. Advances in neural information processing systems, 30, 2017. 4 [61] Alex Jinpeng Wang, Yixiao Ge, Rui Yan, Yuying Ge, Xudong Lin, Guanyu Cai, Jianping Wu, Ying Shan, Xi- aohu Qie, and Mike Zheng Shou. All in one: Explor- arXiv preprint ing unified video-language pre-training. arXiv:2203.07303, 2022. 2, 4, 6, 7 [62] Junke Wang, Dongdong Chen, Zuxuan Wu, Chong Luo, Lu- owei Zhou, Yucheng Zhao, Yujia Xie, Ce Liu, Yu-Gang Jiang, and Lu Yuan. Omnivl: One foundation model for image-language and video-language tasks. arXiv preprint arXiv:2209.07526, 2022. 5, 6, 7 [63] Mengmeng Wang, Jiazheng Xing, and Yong Liu. Action- clip: A new paradigm for video action recognition. ArXiv, abs/2109.08472, 2021. 4 [64] Xin Wang, Jiawei Wu, Junkun Chen, Lei Li, Yuan-Fang Wang, and William Yang Wang. 
Vatex: A large-scale, high- quality multilingual dataset for video-and-language research. [65] Mitchell Wortsman, Gabriel Ilharco, Jong Wook Kim, Mike Li, Simon Kornblith, Rebecca Roelofs, Raphael Gon- tijo Lopes, Hannaneh Hajishirzi, Ali Farhadi, Hongseok Namkoong, et al. Robust fine-tuning of zero-shot models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 7959–7971, 2022. 4, 8 [66] Saining Xie, Chen Sun, Jonathan Huang, Zhuowen Tu, and Kevin Murphy. Rethinking spatiotemporal feature learn- ing: Speed-accuracy trade-offs in video classification. In Proceedings of the European conference on computer vision (ECCV), pages 305–321, 2018. 2 [67] Dejing Xu, Zhou Zhao, Jun Xiao, Fei Wu, Hanwang Zhang, Xiangnan He, and Yueting Zhuang. Video question answer- ing via gradually refined attention over appearance and mo- tion. In Proceedings of the 25th ACM international confer- ence on Multimedia, pages 1645–1653, 2017. 2, 6 [68] Hu Xu, Gargi Ghosh, Po-Yao Huang, Dmytro Okhonko, Armen Aghajanyan, Florian Metze, Luke Zettlemoyer, and Christoph Feichtenhofer. Videoclip: Contrastive pre-training arXiv preprint for zero-shot video-text understanding. arXiv:2109.14084, 2021. 2, 5 [69] Jun Xu, Tao Mei, Ting Yao, and Yong Rui. Msr-vtt: A large video description dataset for bridging video and language. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 5288–5296, 2016. 2, 5 [70] Hongwei Xue, Tiankai Hang, Yanhong Zeng, Yuchong Sun, Bei Liu, Huan Yang, Jianlong Fu, and Baining Guo. Ad- vancing high-resolution video-language representation with In Proceedings of the large-scale video transcriptions. IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 5036–5045, 2022. 4 [71] Hongwei Xue, Yuchong Sun, Bei Liu, Jianlong Fu, Ruihua Song, Houqiang Li, and Jiebo Luo. Clip-vip: Adapting pre- trained image-text model to video-language representation alignment. arXiv preprint arXiv:2209.06430, 2022. 1, 4, 6, 8 [72] Antoine Yang, Antoine Miech, Josef Sivic, Ivan Laptev, and Cordelia Schmid. Just ask: Learning to answer questions In Proceedings of the from millions of narrated videos. IEEE/CVF International Conference on Computer Vision, pages 1686–1697, 2021. 7 [73] Antoine Yang, Antoine Miech, Josef Sivic, Ivan Laptev, and Cordelia Schmid. Zero-shot video question answering via frozen bidirectional language models. arXiv preprint arXiv:2206.08155, 2022. 7, 8 [74] Bang Yang and Yuexian Zou. Clip meets video caption- ers: Attribute-aware representation learning promotes accu- rate captioning. ArXiv, abs/2111.15162, 2021. 2 [75] Jiahui Yu, Zirui Wang, Vijay Vasudevan, Legg Yeung, Mo- jtaba Seyedhosseini, and Yonghui Wu. Coca: Contrastive captioners are image-text foundation models. arXiv preprint arXiv:2205.01917, 2022. 1, 2, 3, 8 [76] Youngjae Yu, Jongseok Kim, and Gunhee Kim. A joint se- quence fusion model for video question answering and re- trieval. In Proceedings of the European Conference on Com- puter Vision (ECCV), pages 471–487, 2018. 4, 5 [77] Rowan Zellers, Jiasen Lu, Ximing Lu, Youngjae Yu, Yan- peng Zhao, Mohammadreza Salehi, Aditya Kusupati, Jack Hessel, Ali Farhadi, and Yejin Choi. Merlot reserve: Neu- ral script knowledge through vision and language and sound. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 16375–16387, 2022. 2 [78] Rowan Zellers, Ximing Lu, Jack Hessel, Youngjae Yu, Jae Sung Park, Jize Cao, Ali Farhadi, and Yejin Choi. 
Mer- lot: Multimodal neural script knowledge models. Advances in Neural Information Processing Systems, 34:23634–23651, 2021. 2, 4, 7 [79] Renrui Zhang, Rongyao Fang, Peng Gao, Wei Zhang, Kunchang Li, Jifeng Dai, Yu Qiao, and Hongsheng Li. Tip-adapter: Training-free clip-adapter for better vision- language modeling. arXiv preprint arXiv:2111.03930, 2021. 2 [80] Kaiyang Zhou, Jingkang Yang, Chen Change Loy, and Ziwei Liu. Conditional prompt learning for vision-language mod- els. 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 16795–16804, 2022. 2 [81] Kaiyang Zhou, Jingkang Yang, Chen Change Loy, and Ziwei Liu. Learning to prompt for vision-language models. Int. J. Comput. Vis., 130:2337–2348, 2022. 2 [82] Luowei Zhou, Chenliang Xu, and Jason J Corso. Towards automatic learning of procedures from web instructional videos. In Thirty-Second AAAI Conference on Artificial In- telligence, 2018. 2 [83] Linchao Zhu and Yi Yang. Actbert: Learning global-local video-text representations. 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 8743–8752, 2020. 2
synthetic_cpt
3
Federated_Data-Efficient_Instruction_Tuning_for_Large_Language_Models.pdf
arXiv:1210.5403v1 [cs.DB] 19 Oct 2012

An Experience Report of Large Scale Federations

Andreas Schwarte1, Peter Haase1, Michael Schmidt1, Katja Hose2, and Ralf Schenkel2
1 fluid Operations AG, 69190 Walldorf, Germany, [email protected]
2 Max-Planck-Institut für Informatik, 66123 Saarbrücken, Germany, [email protected], [email protected]

Abstract. We present an experimental study of large-scale RDF federations on top of the Bio2RDF data sources, involving 29 data sets with more than four billion RDF triples deployed in a local federation. Our federation is driven by FedX, a highly optimized federation mediator for Linked Data. We discuss design decisions, technical aspects, and experiences made in setting up and optimizing the Bio2RDF federation, and present an exhaustive experimental evaluation of the federation scenario. In addition to a controlled setting with local federation members, we study implications arising in a hybrid setting, where local federation members interact with remote federation members exhibiting higher network latency. The outcome demonstrates the feasibility of federated semantic data management in general and indicates remaining bottlenecks and research opportunities that shall serve as a guideline for future work in the area of federated semantic data processing.

1 Introduction

The vision of the Semantic Web, i.e. transforming the current Web of Documents into a Web of Data, has been gaining more and more attention lately. Connecting not only documents on the web but establishing connections on the data level opens up new possibilities of automatic interaction, knowledge representation, question answering, and knowledge acquisition that have not been available before. Especially, the Linked Open Data [2] community has been working on providing links between RDF data on the Web – making RDF and SPARQL the popular standards for data representation and querying on the Semantic Web. The Linked Open Data cloud now consists of 295 data sources and about 31 billion RDF triples – and is constantly growing. One of the core principles of Linked Data is to use Uniform Resource Identifiers (URIs) as unique identifiers that globally represent a specific entity and can be used across data sources to interlink resources. As the data provided on the Web and by each source is rapidly outgrowing the capacity of purely explorative querying — DBpedia for instance now has about 1 billion triples — some sources provide their data collections for download as RDF dumps or enable access via SPARQL endpoints. Accessing a data set through its SPARQL endpoint has two major advantages over downloading RDF dumps. First, it allows evaluating complex queries over the data set without the need to set up a private triple store, possibly even on expensive high-end hardware. Second, data behind SPARQL endpoints is often more up-to-date compared to available dumps (which may be updated only in large intervals and therefore not include recent updates to the data set).
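As a concrete illustration of this endpoint-based access, the short Python sketch below sends a SPARQL query over the standard SPARQL protocol and reads the JSON results. It is our own example, and the endpoint URL and query are placeholders rather than part of the experimental setup.

```python
import requests

def run_sparql(endpoint: str, query: str, timeout: int = 60):
    """Execute a SPARQL SELECT query against an HTTP endpoint (SPARQL protocol)."""
    response = requests.get(
        endpoint,
        params={"query": query},
        headers={"Accept": "application/sparql-results+json"},
        timeout=timeout,
    )
    response.raise_for_status()
    return response.json()["results"]["bindings"]

if __name__ == "__main__":
    # Placeholder endpoint and query; any SPARQL-protocol endpoint would do.
    endpoint = "https://dbpedia.org/sparql"
    query = "SELECT ?p ?o WHERE { <http://dbpedia.org/resource/Aspirin> ?p ?o } LIMIT 5"
    for row in run_sparql(endpoint, query):
        print(row["p"]["value"], row["o"]["value"])
```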
For queries that include multiple data sets, connecting multiple SPARQL endpoints to a federation comes with a number of benefits over a centralized integration in a single triple store: (i) the data is always up-to-date; (ii) the computational load is shared among servers (which holds even for local federa- tions, where different data sets are kept in different triple stores on local servers); (iii) available endpoints can be integrated into federations ad hoc, avoiding the often time-consuming process of loading dumps into local repositories; and (iv) increased flexibility, allowing to use and query arbitrary combinations of the data sources in different, requirement-tailored federations. The latter two are partic- ularly important when local data sources are combined with public endpoints. As indicated by our experimental results in this paper, for typical queries against large federations effectively only a small subset of the endpoints con- tribute to the final result. Consequently, splitting up a query into subqueries and evaluating them in parallel over a local federation can be even faster than evaluating the full query over a single triple store containing all the data of all sources (compare, for instance, the experimental results in [9]). There remains, of course, a tradeoff between the benefit of distributed and parallel processing and the communication overhead between different instances, so that some queries would be evaluated more efficiently in a centralized setup. Having seen many promising results in previous benchmarks on federated query processing with only few federation members and in the order of a hun- dred million triples [5, 9, 10], the goal of this paper is to demonstrate the prac- ticability of large-scale RDF federations: using FedX [10], a highly optimized Linked Data federation mediator, we set up a federation with 29 SPARQL end- points hosting the individual data sets from the Bio2RDF domain, containing more than 4 billion RDF triples in total. In our experimental results, we study the performance of queries against such federations, comparing (i) local feder- ations with all SPARQL endpoints running on servers in a local network and (ii) hybrid federations where some sources are hosted locally and others on the Web or on Amazon EC2; the latter setup is a classical setting in the enterprise context, where companies need to combine local, private data sources with open data accessible through public SPARQL endpoints. It is beneficial whenever lo- cal working copies of some data (e.g., generated based on information extraction from natural language text, latest experiments, user-generated/corrected data, downloaded cleaned dumps, etc.) need to be augmented with public information. Contributions. In summary, we make the following contributions. – We present the first real large-scale federation setup in the context of RDF, implemented using the FedX federation mediator on top of 29 Bio2RDF SPARQL endpoints containing about 4.1 billion RDF triples. – Our description summarizes problems and solutions, as well as practical aspects of the federation setup. All experiments can be reproduced by anyone following the instructions outlined in Section 4. 
– We set up a public demonstrator of our Bio2RDF federation, supporting live queries against and browsing of the underlying federated data graph.3 – An exhaustive evaluation along different dimensions – including technical setup aspects, performance and scalability, and network latency – proves the feasability of federated RDF data management in large-scale settings. – Our experiments reveal open issues and current limitations, which serve as a guideline for future work in the area of federated semantic data processing. Structure. After a discussion of related work in Section 1.1, we turn towards a description of the federation technology, the FedX system, in Section 2. Next, Section 3 describes the federation setup, including a motivation of the chosen scenario, a description of the datasets, and a general discussion of the benchmark queries. In Section 4 we describe the infrastructure setup, motivate the different experimental scenarios, metrics, and present an exhaustive discussion of the experimental results. Finally, we elaborate on the implications of our results for future work and conclude with some final remarks in Section 5. 1.1 Related Work With the uptake of Linked Data in recent years, the topic of integrated query- ing over multiple distributed data sources has attracted significant attention. In order to join information provided by these different sources, efficient query processing strategies are required, the major challenge lying in the natural dis- tribution of the data. So far, the commonly used approach for query processing in large scale integration scenarios is still to integrate relevant data sets into a local, centralized triple store. Examples of such integrated repositories are the LOD cloud cache4 or Factforge5 that integrate significant subsets of the Linked Open Data cloud. As a more domain specific example, Linked Life Data6 inte- grates 23 datasources from the biomedical domain. Following a similar approach, the OpenPHACTS project7 attempts to build an integrated resource of multiple databases in the pharmaceutical space. Yet recently one can observe a paradigm shift towards federated approaches over the distributed data sources with the ultimate goal of virtual integration [7, 8] . A recent overview and analysis of federated data management and query optimization techniques is presented in [6]. Basic federation capabilities have been added to SPARQL with the SPARQL 1.1 Federation extensions8. They introduce the SERVICE operator, which allows for providing source information directly within the SPARQL query. Aranda et 3 See http://biofed.fluidops.net 4 http://lod.openlinksw.com/ 5 http://factforge.net/ 6 http://linkedlifedata.com/ 7 http://www.openphacts.org/ 8 http://www.w3.org/TR/sparql11-federated-query/ al. [1] provide a formal semantics for the language extensions. While our fed- eration approach in FedX also supports SPARQL 1.1 Federation, it does not require these extensions. Instead, it is fully compatible with the SPARQL 1.0 query language, i.e. multiple distributed data sources can be queried transpar- ently as if the data resided in a virtually integrated RDF graph. Source selection is achieved through automated means over a set of defined sources (which can be dynamically extended) without explicit specification in the query. In [9] we introduced the FedBench benchmark suite for testing and analyz- ing the performance of federated query processing strategies. 
Our experiments presented in this paper build upon the FedBench benchmark, but evaluate a federation scenario of a significantly larger scale. 2 FedX In the following we give some insights into the technologies and concepts of FedX [10], which is used as the federation mediator in our experimental study. FedX is a practical framework for transparent access to Linked Data sources through a federation. By virtually integrating multiple heterogeneous sources, the federation mediator exposes the union of all source graphs transparently to the user, i.e. the user can evaluate queries as if the data resided in a single triple store. Federation members are specified as a list of SPARQL endpoints, which can be added to (or removed from) the federation on-demand, since no precomputed statistics are required for query processing. With its federation- tailored optimization techniques discussed below, FedX enables an efficient and scalable SPARQL query processing for different practical federated settings. The query processing workflow in FedX is depicted in Figure 1. FedX first parses the query into an internal tree-like representation, which is then opti- mized using various techniques. Optimization in FedX includes source selection (i.e., finding the relevant sources for each triple pattern using SPARQL ASK re- quests), forming exclusive groups (i.e., grouping those triple patterns that have the same single source), and a rule-based join reordering approach. At runtime FedX manages a so-called source selection cache, containing information about which endpoints can potentially yield results for a given triple pattern. With this cache, FedX is able to reduce the number of requests since it can prune endpoints that are not relevant for the evaluation of subqueries directly. Fig. 1: Federated Query Processing Model of FedX SPARQL RequestQuery ResultParsingSource SelectionQuery Execution(Bound Joins)Global Optimizations(Groupings + Join Order)SPARQLEndpoint 1. . .Subquery Generation:Evaluation atRelevant EndpointsLocalAggregation ofPartial ResultsCachePer Triple PatternSPARQL ASK queriesSPARQLEndpoint 2SPARQLEndpoint N As a user-facing frontent built on top of the federation managed by FedX, we provide a browser-based demo system based on the Information Workbench9, a Linked Data platform which allows to declaratively use widgets within a semantic wiki to interact with the underlying Linked Data graph. 3 Experiment Scenario: Federating Bio2RDF For our experimental study of a large scale federation we decided to use data sets from the life science domain. The industries in the life sciences (including pharmaceuticals, bio technology) have been an early adopter of semantic tech- nologies and the value of providing integrated access to distributed data sources has been demonstrated in many practical applications [3]. Most of the Linked Data data sets in the life sciences have been published as part of the Bio2RDF initiative, with the goal to provide interlinked life science data to support bi- ological knowledge discovery. Compared to other domains, the data sets that have been developed for the life science domain are of rather high quality and very well interconnected. Consider as an example the Drugbank dataset which provides direct links for most drugs to the corresponding KEGG compounds. For our federation we have selected 29 data sets, covering – to the best of our knowledge – all relevant publicly available data sets in the domain. In total, the selection comprises more than 4 billion triple. 
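The source selection step described in Section 2 (one SPARQL ASK probe per triple pattern, with the outcomes kept in a cache) can be sketched in a few lines of Python. This is only an illustration of the idea on top of SPARQLWrapper, not FedX's actual Java implementation, and the endpoint URLs are placeholders standing in for the 29 Bio2RDF endpoints.

from SPARQLWrapper import SPARQLWrapper, JSON

# Sketch of ASK-based source selection with a per-triple-pattern cache,
# mirroring the strategy described for FedX (the real mediator is Java/Sesame).
ENDPOINTS = [
    "http://localhost:8080/drugbank/sparql",   # placeholder endpoint URL
    "http://localhost:8080/kegg/sparql",       # placeholder endpoint URL
]

_cache = {}  # triple pattern -> endpoints that may contribute results

def relevant_sources(triple_pattern):
    """Return the endpoints whose data can match the given triple pattern."""
    if triple_pattern in _cache:
        return _cache[triple_pattern]
    relevant = []
    for url in ENDPOINTS:
        sparql = SPARQLWrapper(url)
        sparql.setQuery("ASK { %s }" % triple_pattern)
        sparql.setReturnFormat(JSON)
        try:
            if sparql.query().convert().get("boolean", False):
                relevant.append(url)
        except Exception:
            pass  # unreachable endpoints are simply skipped in this sketch
    _cache[triple_pattern] = relevant
    return relevant

# Only endpoints answering the ASK probe with true receive subqueries later on.
pattern = "?drug <http://www.w3.org/2002/07/owl#sameAs> ?other"
print(relevant_sources(pattern))

Caching the ASK outcomes is what allows subsequent queries to skip irrelevant endpoints entirely, which is the effect analyzed in the experiments below.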
Table 1 lists all datasets and depicts the number of triples and entities, as well as the main instance type(s). # Dataset 1 CellMap 2 ChEBI 3 DailyMed 4 Disease Ontology 5 DBpedia Subset 6 Diseasome 7 DrugBank 8 Entrez-Gene 9 Genewiki 10 KEGG 11 Mappings 12 Pubmed 13 UMLS 14 Uniprot 15 BiogGRID 16 Gene Ontology 17 HapMap 18 HPRD 19 Humancyc 20 IMID 21 IntAct 22 LHGDN 23 LinkedCT 24 MINT 25 NCI-Nature 26 Phenotype Ontology 84k 27 Reactome 28 Sider 29 Symptom Table 1: Lifescience datasets used for federation scenario: 29 datasets/4B+ RDF triples biopax-2:protein - dailymed:drugs - e.g. dbo:Drug diseasome:genes drugbank:drugs entrezgene:Gene - kegg:Compound, kegg:Drug, kegg:Enzyme, kegg:Reaction - pubmed:Citation skos:Concept uniprot:Protein, uniprot:Journal biopax-2:protein skos:Concept - biopax-2:protein - biopax-2:protein biopax-2:protein - linkedct:trials, linkedct:condition biopax-2:protein biopax-2:protein - biopax-2:protein - - #Triples #Entities Instance type(s) 60k 149k 238k 650k 68k 163k 110k 145k 31M 70M 30k 75k 0.5M 290k 161.5M 67M 391k 1.0M 1M 2.4M 4.1M 2.8M 299M 1.4B 27.7M 121M 495M 2.3B 4.7M 12M 187k 320k 43M 22M 777k 2M 143k 327k 36k 83k 5.5M 16.6M 160k 316k 2.8M 7.0M 6M 2.1M 237k 611k 36k 330k 30k 2k 815k 102k 4.2k 9 http://www.fluidops.com/information-workbench/ Table 2: Summary of query characteristics. Operators: And (“.”), Union, Filter, Group By, Count (#), Optional; Solution modifiers: Distinct, Limit, Of fset, Order By; Structure: Star, Chain, Hybrid FedBench Life Science (LS) Op. Mod. Struct. #Res. 1159 319 9869 3 395 28 109 - 1 U - 2 AU - 3 A - 4 A - 5 A 6 A - 7 AFO - - - H H H H H Linked Life Data (LLD) Op. Mod. Struct. #Res. - 1 A D 2 A - 3 A D 4 A 5 A D 6 AF D - 7 A - 8 A 9 # - 10 AG - S H C C C H H H - C 167 22 70 210 45 63 59 131 1 2 Example, Life Science Query 4: For all drugs in DBpedia, find all drugs they interact with, along with an explanation of the interaction. Example, Linked Life Data Query 8: Select all human genes located on the Y-chromosome with known molecular interactions. SELECT ?Drug ?IntDrug ?IntEffect WHERE { ?Drug rdf:type dbpedia-owl:Drug . ?y owl:sameAs ?Drug . ?Int drugbank:interactionDrug1 ?y . ?Int drugbank:interactionDrug2 ?IntDrug . ?Int drugbank:text ?IntEffect . } SELECT ?genedescription ?taxonomy ?interaction WHERE { ?interaction biopax2:PARTICIPANTS ?p . ?interaction biopax2:NAME ?interactionname . ?p biopax2:PHYSICAL-ENTITY ?protein . ?protein skos:exactMatch ?uniprotaccession . ?uniprotaccession core:organism ?taxonomy . ?taxonomy core:scientificName ’Homo sapiens’ . ?geneid gene:uniprotAccession ?uniprotaccession . ?geneid gene:description ?genedescription . ?geneid gene:chromosome ’Y’ . } Fig. 2: Selected benchmark queries Queries. We selected two query sets that implement realistic use cases on top of the life science data collection. The first query set (LS in Table 2) is a slightly modified version of the Life Science query set from the FedBench benchmark suite, updated to reflect changes in the schema and data of the latest versions of the respective data sets. The second query set (LLD in Table 2) contains sample queries from Linked Life Data (cf. http://linkedlifedata.com/sparql) and represents typical queries that can be performed against the integrated set of life science databases. We limited the selection to those queries that can be answered based on publicly available data sets (i.e., without data exclusively available through the Linked Life Data system). 
Figure 2 exemplarily discusses two sample queries taken from the two query sets. Table 2 gives an overview of the benchmark queries and their properties, showing that they vastly vary in their characteristics. In particular, we indi- cate the SPARQL operators that are used inside the query (Op.), the solu- tion modifiers that were used additionally (Sol.), categorize the query struc- ture (Struct.), roughly distinguishing different join combinations – like subject- subject or subject-object joins – leading to different query structures commonly referred to as star-shaped, chain, or hybrid queries, and indicate the number of results (#Res.) on the federation datasets. A complete description of the data sets (including download links) and queries used in the benchmark is available at http://biofed.fluidops.net/. 4 Experiments 4.1 Infrastructure Description and Setup In our experiments we focus on two different federated settings. First, we set up a local federation to evaluate the performance and practicability of federated data processing with FedX in a controlled setting with low network latency, where all endpoints are deployed in a dedicated local environment. Complementary, the hybrid federation consists of a mix of local and remote SPARQL endpoints (the latter hosted in the Amazon AWS cloud), which allows us to study the im- plications arising in scenarios with higher network latency. The hybrid scenario reflects challenges in the enterprise context, where private, enterprise-internal data sources are combined with public SPARQL endpoints in a federated set- ting. To guarantee repeatability of the experiments we establish a controlled environment in both settings, i.e. we use SPARQL endpoints running on non- shared compute and storage resources. The details are descibed in the following. Local federation. For the local federation we provide access to the life- science datasets through individual SPARQL endpoints running in our local computing cluster. In this cluster we use two HP Proliant DL360 servers run- ning a 64bit Windows Server operating system, one with 8x2GHz CPU and 64GB RAM (Server1), the other with 2x3GHz CPU and 20GB RAM (Server2), both backed by fast storage. The total available memory is distributed to the individ- ual SPARQL endpoints corresponding to the number of triples, e.g. the Uniprot endpoint got assigned a total memory of 14GB, while the smaller Drugbank endpoint is running in a 1.5GB process. The datasets 1 to 14 from Table 1 are deployed on Server1 and the remaining ones, 15 to 29, are deployed on Server2. The individual SPARQL endpoints are powered by a state-of-the-art triple store implementing the OpenRDF Sesame interface10, running in Tomcat 6 ap- plication server processes. Sesame is the de-facto standard framework for process- ing RDF data and offers access to RDF storage solutions through an easy-to-use API. The triple stores themselves can be accessed via SPARQL endpoints. Hybrid federation. For the hybrid setting we deployed selected SPARQL endpoints from the local infrastructure to an Amazon AWS EC2 instance. More precisely, we deployed the DrugBank, Uniprot, and Pubmed data sets in the AWS cloud. Like in the local setting, these data sets were deployed as individual SPARQL endpoints on top of a Tomcat 6 application server, using exactly the same database setup and memory assignment for the individual endpoints as in the local setting. 
The endpoints were hosted together on a single, high-memory AWS instance (type “m2.2xlarge”) running 64bit MS Windows Server 2008 with 13 EC2 Compute Units (4 virtual cores with 3.25 EC2 compute units each) and 34.2GB memory. The data sets were attached to the instance using Amazon EBS storage volumes. The instance and volumes were both hosted in the AWS zone ”EU West (Ireland)”, allowing for fast communication between compute and storage infrastructure. Note that Ireland is the AWS zone closest to Germany, where the local endpoints and FedX were run. 10 http://www.openrdf.org Mediator and benchmark driver. In both settings, the federation was driven by the FedX v2.0 federation mediator described in Section 2. FedX was configured to run over the set of the 29 Bio2RDF SPARQL endpoints, either using only local endpoints (in the local setting) or the combination of local and global endpoints described above in the federated setting. For running the experiments we used FedBench11 [9], a comprehensive benchmark suite for an- alyzing the efficiency and effectiveness of federated query processing strategies over semantic data that provides customizable benchmark drivers. Metrics. The central measure in our experiments is the query evaluation time: in both the local and the hybrid scenario we report on the average elapsed time over five runs, assessed after five previous warmup runs. Following the guidelines described in [4], we indicate the geometric mean, which is defined as the nth root over the product of n values: compared to the arithmetic mean, the geometric mean flattens outliers, which – in our setting – occasionally arised, particularly in the hybrid federation, due to unpredictable effects such as punc- tually high network delays. Other metrics we discuss are (i) the number of re- quests sent to SPARQL endpoints from FedX during query evaluation, (ii) the number of triple patterns in the individual queries and (iii) the efficiency of the source selection algorithm in FedX. As we will discuss in the following, these are parameters that have significant influence on the query evaluation times. 4.2 Experimental Results We start with a discussion of FedX’ source selection strategy (cf. Section 2), which forms the basis for the understanding of the subsequent results. In order to minimize the number of requests, FedX – prior to evaluating the query – sends ASK queries for the triple patterns contained in the query to all SPARQL endpoints, to identify which sources are potentially relevant for which patterns in the query. This information is then used to optimize query processing, such as sending patterns only to relevant endpoints or grouping subqueries that can be answered by a single endpoint alone. Visualizing the outcome of the source selec- tion strategy, Figure 3 shows, for each of the benchmark queries (i) the number of triple patterns in the query (plotted below the query name) and (ii) the min- imum, maximum, and average (over all triple patterns in the query) number of endpoints that have been identified as relevant for the triple patterns according to the source selection strategy. As an example, query LLD2 is composed out of 7 triple patterns, where the minimal pattern(s) retrieve non-empty results from 3 endpoints, the maximal pattern(s) retrieve results from 9 endpoints, and the average number of endpoints that contribute results to a triple pattern in LLD2 is about 7.2. 
As a whole, the diagram leads to two interesting observa- tions: first of all, the queries vary in complexity regarding the number of sources that (potentially) contribute to the query result: there are simple queries which can be answered by querying a single source, while others have triple patterns containing potential matches in up to 9, in the worst case even all 29 federation 11 FedBench project page: http://code.google.com/p/fbench/ Fig. 3: Source selection analysis: relationship between queries and triple patterns w.r.t. relevant endpoints according to FedX’ source selection algorithm. members12. Second, the results demonstrate that the source selection strategy of FedX is quite efficient, reducing the average number of sources involved in answering triple patterns to at most 10 out of 29 for all the queries, typically even less. Given that the number of requests is one of the main factors driv- ing evaluation time (as will be discussed in the following), this efficient source selection strategy can be seen as a cornerstone for the practicability of FedX. Figure 4(a) compares the query evaluation times for our 17 benchmark queries over the local and hybrid federation, with source selection caching enabled. Given the warmup phase prior to taking the measurements, an active source selection cache implies that FedX in this scenario has full knowledge about which sources can contribute results to which triple pattern in the input query. Starting with the discussion of the local federation setting, we can observe that all 17 queries return a result within 15s. 15 queries are faster than 3s, 10 queries in the sub- second range, and 5 queries are even faster than 0.1s. Given that our queries represent a mix of dedicated benchmark queries designed particularly to test challenges in federated scenarios and real-world use cases from the Bio2RDF project, these numbers impressively demonstrate the practicability of FedX as a mediator for large-scale RDF federations in the billion triple range. In addition to a tabular representation of the evaluation time for the two settings (columns Local and Hybrid), Figure 4(b) summarizes the number of requests (#Req) sent to the SPARQL endpoints during query evaluation. We can observe a clear coincidence between the number of requests sent during query 12 Query LS2 contains the triple pattern ?caff ?predicate ?object, which – taken alone – can be answered by all endpoints. 0 1 2 3 4 5 6 7 8 9 10 11 12LS1LS2LS3LS4LS5LS6LS7LLD1LLD2LLD3LLD4LLD5LLD6LLD7LLD8LLD9LLD10Sources#TP:(2)(3)(5)(7)(6)(5)(5)(3)(7)(3)(4)(6)(5)(5)(9)(1)(3)29MaxAverageMin evaluation and the query evaluation time; for instance, the five most expensive queries (in terms of runtime) – LS3, LS5, LS7, LLD4, LLD5 – are character- ized by the five highest numbers of requests sent to SPARQL endpoints. This indicates that the network delay is the dominating factor in query evaluation. 
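As a side note on the metric used for these numbers: the geometric mean (the nth root of the product of n values) can be computed as in the short sketch below. The run times are invented for illustration and are not measurements from the paper; they only show why a single network-delay spike distorts the arithmetic mean far more than the geometric mean.

import math

def geometric_mean(values):
    # Computed via logarithms to avoid overflow/underflow of the raw product.
    return math.exp(sum(math.log(v) for v in values) / len(values))

def arithmetic_mean(values):
    return sum(values) / len(values)

runs = [0.12, 0.11, 0.13, 0.12, 2.4]  # one run hit a network delay spike
print("arithmetic mean: %.3f s" % arithmetic_mean(runs))  # dominated by the outlier
print("geometric mean:  %.3f s" % geometric_mean(runs))   # outlier is flattened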
(a) b) Local Hybrid #Req #Req(D) #Req(U) 0.134 0.021 LS1 0.090 0.016 LS2 4.159 2.356 LS3 0.085 0.114 LS4 1.678 2.037 LS5 0.148 0.194 LS6 5.451 2.783 LS7 0.015 0.016 LLD1 1.282 1.199 LLD2 0.126 0.170 LLD3 LLD4 7.067 7.358 LLD5 14.823 17.167 1.142 0.660 LLD6 0.446 0.514 LLD7 1.045 1.345 LLD8 0.016 LLD9 0.015 0.156 LLD10 0.122 1 1 1512 3 815 84 1355 1 649 75 3043 6301 135 162 521 1 75 1 1 1511 1 1 1 110 0 0 0 77 806 21 0 9 0 0 0 0 0 0 91 0 0 0 0 0 78 109 19 161 148 0 0 c) LS1 LS2 LS3 LS4 LS5 LS6 LS7 LLD1 LLD2 LLD3 LLD4 LLD5 LLD6 LLD7 LLD8 LLD9 LLD10 No Caching Caching #Savings 0.203 0.309 4.274 0.461 2.098 0.462 5.434 0.440 2.162 0.429 7.077 16.952 1,098 0.066 1.511 0.063 0.311 0.134 0.090 4.159 0.085 1.678 0.148 5.451 0.015 1.282 0.126 7.067 17.167 1.142 0.446 1.045 0.016 0.156 58 87 145 203 174 145 145 87 203 87 116 174 145 144 261 29 87 Fig. 4: Experimental Results: (a) Graphical comparison of query evaluation time in local and hybrid federation; (b) Tabular listing of evaluation times and number of requests sent to SPARQL endpoints during query evaluation; (c) Influence of source selection caching in FedX on evaluation times in the hybrid setting. In order to study the effect of network latency in more detail, we next compare the results in the local federation with the hybrid federation. As expected, the query results in the hybrid setting are generally (yet not always) slower due to the higher network latency induced by the communication with the SPARQL endpoints in the AWS cloud. Going into more detail, Figure 4(b) also shows 0.01 0.1 1 10 100 1000LS1LS2LS3LS4LS5LS6LS7LLD1LLD2LLD3LLD4LLD5LLD6LLD7LLD8LLD9LLD10Evaluation Time (s)Local FederationHybrid Federation the number of subqueries sent against the remote SPARQL endpoints Drugbank (#Req(D)) and Uniprot (#Req(U)). Based on these numbers, we can classify the queries into three classes. The first class contains queries that do not require communication between FedX and the remote endpoints (LLD1-3 and LLD9- 10). For these queries, FedX’ source selection cache helps to avoid expensive requests to the endpoints in the AWS cloud, so we observe no or only small overheads in evaluation time. The second class of queries, such as LS3, LS7, and LLD5, require a considerable amount of requests against the remote endpoints; as a consequence, the execution time in the federated setting increases. Still, the highest percental increase of about 100% can be observed for LS7, which still results in practical response times. Somewhat surpisingly, we can observe a third class of of queries, for which the hybrid setup even outperforms the local setup (LS4-6, LLD1, LLD3-4, and LLD7-8). This result can be explained by the fact that the overall load on the Amazon machine, which hosts only three endpoints – rather than 14 − 15 endpoints, as it is the case for the two local servers – is lower, which in turn results in generally faster response times for the subqueries sent to the endpoints. This shows that in many cases the advantages gained by distribution dominate the overhead imposed by increased communication costs. Finally, in Figure 4(c), we study the influence of source selection caching in FedX: the first two columns compare the evaluation times of the queries with source selection caching disabled vs. enabled in the hybrid setting; the #Savings column denotes the number of Ask requests (sent to endpoints in order to find out whether they can contribute answers to a given triple pattern) that could be saved when caching was turned on. 
As can be seen, caching leads to runtime savings in most cases. As a general trend, the percentual savings are particularly high whenever #Savings is high compared to the overall number of requests, #Req, depicted in Figure 4(b); for instance, for query LS2, we save 87 requests (out of, in total, 87+1 = 88 requests to endpoints), which induces a significant percental speedup. For queries where the number of requests is already high (e.g., LS3, LS7, or LLD4), the caching benefits are negligible. In summary, the results show that the source selection cache is not crucial for efficient evaluation, thus proving the flexibility of FedX which allows to add new federation members ad hoc, without warming up caches or precalculating statistics. Still, source selection caching yields an additional speedup for most queries, which can be particularly beneficial in scenarios with high query loads involving many simple queries. 5 Conclusions and Future Work We presented the first large-scale RDF federation in the billion triple range over Bio2RDF data sources, driven by the highly optimized Linked Data fed- eration mediator FedX. Our exhaustive and repeatable experimental evaluation demonstrates the practicability of our approach and studies various aspects driv- ing evaluation time. One cornerstone of evaluation performance is an efficient source selection strategy. It is crucial to minimize the number of requests sent to the individual SPARQL endpoint during query evaluation, which is the major bottleneck in efficient federated query processing. Going beyond this finding, our experiments identify settings in which the advantages gained by distribution dominate the overhead imposed by increased communication costs, thus lever- aging the benefits of a federated setup with autonomous compute endpoints. The focus of future work in this area therefore should lie on techniques to further minimize the communication efforts. One promising approach, which we identified during our interpretation of query evaluation plans, is to exploit data set specific namespaces in URIs, in order to further improve the source selection process. Another promising approach aiming at a combination of the benefits of federation and centralization would be the automated colocation of data sets that exhibit frequent joins and therefore impose high communication costs, which could e.g. be reached by an adaptive query log analysis, combined with a caching layer maintained inside the federation layer. References 1. Carlos Buil Aranda, Oscar Corcho, and Marcelo Arenas. Semantics and optimiza- tion of the SPARQL 1.1 federation extension. In ESWC. Springer, 2011. 2. Christian Bizer, Tom Heath, and Tim Berners-Lee. Linked Data - The Story So Far. International Journal on Semantic Web and Information Systems (IJSWIS), 5(3):1–22, 2009. 3. Michel Dumontier. Building an effective semantic web for health care and the life sciences. Semantic Web, 1(1-2):131–135, 2010. 4. Philip J. Fleming and John J. Wallace. How not to lie with statistics: The correct way to summarize benchmark results. Commun. ACM, 29(3):218–221, 1986. 5. O. G¨orlitz and S. Staab. Splendid: Sparql endpoint federation exploiting void In Proceedings of the 2nd International Workshop on Consuming descriptions. Linked Data. Bonn, Germany, 2011. 6. Olaf G¨orlitz and Steffen Staab. Federated Data Management and Query Opti- mization for Linked Open Data. In New Directions in Web Data Management. Springer, 2011. 7. Olaf Hartig, Christian Bizer, and Johann-Christoph Freytag. 
Executing SPARQL Queries over the Web of Linked Data. In ISWC 2009. Springer, 2009. 8. G¨unter Ladwig and Duc Tran Thanh. SIHJoin: Querying Remote and Local Linked Data. ESWC, 2011. 9. Michael Schmidt, Olaf G¨orlitz, Peter Haase, G¨unter Ladwig, Andreas Schwarte, and Thanh Tran. Fedbench: A benchmark suite for federated semantic data query processing. In The Semantic Web – ISWC 2011, pages 585–600, 2011. 10. Andreas Schwarte, Peter Haase, Katja Hose, Ralf Schenkel, and Michael Schmidt. Fedx: Optimization techniques for federated query processing on linked data. In The Semantic Web – ISWC 2011, pages 601–616, 2011.
synthetic_cpt
2
The_Super_Weight_in_Large_Language_Models.pdf
arXiv:2411.07191v1 [cs.CL] 11 Nov 2024

Preprint. Under review.

THE SUPER WEIGHT IN LARGE LANGUAGE MODELS

Mengxia Yu1∗, De Wang2, Qi Shan2, Colorado Reed2†, Alvin Wan2
1University of Notre Dame 2Apple

ABSTRACT

Recent works have shown a surprising result: a small fraction of Large Language Model (LLM) parameter outliers are disproportionately important to the quality of the model. LLMs contain billions of parameters, so these small fractions, such as 0.01%, translate to hundreds of thousands of parameters. In this work, we present an even more surprising finding: Pruning as few as a single parameter can destroy an LLM's ability to generate text – increasing perplexity by 3 orders of magnitude and reducing zero-shot accuracy to guessing. We propose a data-free method for identifying such parameters, termed super weights, using a single forward pass through the model. We additionally find that these super weights induce correspondingly rare and large activation outliers, termed super activations. When preserved with high precision, super activations can improve simple round-to-nearest quantization to become competitive with state-of-the-art methods. For weight quantization, we similarly find that by preserving the super weight and clipping other weight outliers, round-to-nearest quantization can scale to much larger block sizes than previously considered. To facilitate further research into super weights, we provide an index of super weight coordinates for common, openly available LLMs1.

1 INTRODUCTION

Large Language Models (LLMs) have been growing in size and capability at an unprecedented rate, enabling them to capture increasingly complex linguistic patterns across a wide range of tasks. However, with this increase in model scale, new and unexpected behaviors have emerged. Dettmers et al. (2022) discovered that once LLMs reach a certain scale, a small set of hidden state features contains outliers of exceptionally large magnitude. These outliers account for a small percentage of all activations but are crucial for preserving the compressed model's quality (Dettmers et al., 2022; Xiao et al., 2023; Wei et al., 2023; Shao et al., 2024). However, not all outliers are equally important.

In this paper, we study a tiny yet important set of outliers in LLMs, termed super weights. In Llama-7B, pruning the super weight, a single scalar, completely destroys the model's ability to generate text; the average accuracy of zero-shot downstream tasks effectively plummets to zero. Conversely, pruning the other top 7,000 outliers, including outliers that are larger than the super weight, affects no more than a few percentage points.

Intriguingly, super weights behave similarly across model families and sizes. For one, the super weight is always found in the mlp.down_proj weight, always in an early layer. We also find that the super weight amplifies input activation inliers to ultimately produce the exceptionally large magnitude activation observed by Sun et al. (2024) – we term this the super activation. This super activation persists throughout the model at exactly the same magnitude and position regardless of the prompt, and we find this is uniquely enabled by skip connections. Finally, super weights suppress stopword likelihood. Taken together, pruning the super weight destroys quality by dampening the super activation and shifting almost all logit probability mass to stopwords.
Both super weights and super activations, which we collectively refer to as super outliers, are critical to model quality. Fortunately, there are no more than a handful of scalar super outliers per tensor; in light of this, we revisit round-to-nearest quantization, equipped only with the ability to hold ∗Work done while interning at Apple. †Corresponding author. cj [email protected] 1Code is available in https://github.com/mengxiayu/LLMSuperWeight. 1 Preprint. Under review. Figure 1: Super Weight Phenemenon. We discover that pruning a single, special scalar, which we call the super weight, can completely destroy a Large Language Model’s ability to generate text. On the left, the original Llama-7B, which contains a super weight, produces a reasonable completion. On the right, after pruning the super weight, Llama-7B generates complete gibberish. As we show below, this qualitative observation has quantitative impact too: zero-shot accuracy drops to guessing and perplexity increases by orders of magnitude. out and restore super outliers. This yields a data-free, hardware-friendly method. For activation quantization, we find this technique competitive with SmoothQuant; for weight quantization, we can scale round-to-nearest to much larger block sizes with higher quality. Our contributions are summarized as follows. 1. Super Weights: We discover a tiny subset of outliers in LLMs, at most six scalars, that are disproportionately important; pruning these super weights destroys model quality. 2. Identifying Super Weights: We present a data-free way to identify super weights using only a single forward pass and provide an index of super weights for existing, open LLMs. 3. Super Activations: We analyze how super weights influence inference and relate them to the activation outliers observed in prior work. 4. Compression: By preserving super outliers, we show that round-to-nearest quantization increases effectiveness noticeably; preserving super outliers improves compression quality. 2 RELATED WORK 2.1 OUTLIERS IN LLMS LLM outliers are widely observed in existing literature. Kovaleva et al. (2021) notes weight out- liers, which emerge gradually, beginning early in pre-training, and cause abnormal spikes at select dimensions in the output embedding vectors. Disabling those outliers significantly degrades both the training loss and the downstream task performance. Bondarenko et al. (2021) notes activation outliers, which encourage specific attention patterns, such as attending to the special separator to- ken. However, Sun et al. (2024) first observes an exceptionally extraordinary outlier; in particular, they discover massive activations in LLMs that persist across layers in a fixed position, which Yang et al. (2024) hypothesizes is caused by gated linear units (GLU) and its variants, such as GEGLU and SwiGLU. To mitigate these massive activations, Sun et al. (2024) proposes a learnable atten- tion bias, and (Son et al., 2024; Yang et al., 2024) inserts certain prefixes. To complement these mitigation studies, our focus is instead to leverage, rather than mitigate, these super activations. 2.2 OUTLIER-AWARE QUANTIZATION METHODS Quantization is one of the most popular techniques for reducing LLM resource consumption. How- ever, quantizing LLMs is non-trivial, due to outliers that increase the range of values. 
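The range problem mentioned above can be made concrete with a short, purely illustrative NumPy sketch (not taken from the paper): a single extreme value stretches the step size of a naive asymmetric round-to-nearest quantizer, so the remaining inliers are rounded onto only a few of the available 8-bit levels and their reconstruction error grows by orders of magnitude.

import numpy as np

# Illustrative sketch: one extreme outlier inflates the step size of asymmetric
# round-to-nearest quantization and washes out the inliers.
def quant_dequant(x, n_bits=8):
    lo, hi = x.min(), x.max()
    step = (hi - lo) / (2 ** n_bits - 1)
    q = np.round((x - lo) / step)
    return q * step + lo

rng = np.random.default_rng(0)
inliers = rng.normal(0.0, 1.0, size=4096).astype(np.float32)

x_clean = inliers.copy()
x_outlier = inliers.copy()
x_outlier[0] = 500.0  # a single extreme, "super"-scale value

err_clean = np.abs(quant_dequant(x_clean) - x_clean).mean()
err_outlier = np.abs(quant_dequant(x_outlier)[1:] - x_outlier[1:]).mean()
print("mean inlier error without outlier:", err_clean)
print("mean inlier error with one outlier:", err_outlier)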
Existing works typically study two settings for LLM quantization: (1) Weight-only quantization, where only weights are quantized into low-bit integers; (2) Weight-activation quantization, where both activa- tion and weights are quantized. For weight-only quantization, several common solutions including using smaller block sizes, to limit the number of values any single outlier can impact (Dettmers et al., 2024; Shao et al., 2024; Dettmers & Zettlemoyer, 2023; Frantar et al., 2022; Dettmers et al., 2023); scaling sensitive weights, via a grid-searched channel-wise scaling, Lin et al. (2024); or clipping outliers via learned optimal thresholds (Shao et al., 2024; Lin et al., 2024). The most common approach is to extract and store 2 mustard. I love the tasteMy favorite condiment is:/νώ好 !\β........2 .1 .2 .1 .1 .1 .2 .1-1.9 PROMPTWITH SUPER WEIGHTWITHOUT SUPER WEIGHT.2 .1 .2 .1 .1 .1 .2 .10 Preprint. Under review. Llama-7B Arc-c Arc-e Hella. Lamb. PIQA SciQ Wino. AVG C4 Wiki-2 Original Prune SW Prune Non-SW 41.81 19.80 41.47 75.29 39.60 74.83 Prune SW, +SA 26.60 54.63 56.93 30.68 56.35 56.93 73.51 0.52 69.88 12.79 78.67 59.90 78.51 94.60 39.40 94.40 70.01 56.12 69.14 70.11 35.14 69.22 7.08 763.65 7.57 5.67 1211.11 6.08 67.95 61.70 70.01 50.09 476.23 720.57 Table 1: Super Weight Importance. (Section 3) Prune SW: Pruning the single, scalar-valued super weight significantly impairs quality – reducing accuracy on zero-shot datasets and increasing perplexity by orders of magnitude. Prune Non-SW By contrast, retaining the super weight and instead pruning the other 7,000 largest- magnitude weights marginally affects quality. In other words, a single super weight is more important than even the top 7,000 largest weights combined. (Section 3.2) Prune SW, +SA: Pruning the super weight but restoring the super activation partially recovers quality. Note that quality is still drastically impaired however, so we conclude that super activations only partially explain how super weights operate. This also shows that super weights and super activations both need special handling, to preserve quality. sensitive weight outliers in higher-precision (Dettmers et al., 2024; Kim et al., 2024; Dettmers et al., 2022). However, decomposed, mixed-precision arithmetic for hundreds of thousands of weights is unfriendly for hardware and incurs significant latency penalties. We take a different approach, handling at most a half dozen scalars to maintain hardware friendliness. For activation quantization, there are an increased number of even more aggressive outlier values, making activation quantization more challenging. To tackle this, previous work rotates (Liu et al., 2024; Ashkboos et al., 2024; Chee et al., 2023), clips (Wei et al., 2022) or shifts (Wei et al., 2023; Shao et al., 2024) activations to mitigate activation outliers. One effective approach scales activa- tions (Xiao et al., 2023), migrating the difficulty of quantization from activations to weights with a mathematically equivalent transformation. However, this method – SmoothQuant – requires calibra- tion data to find the optimal hyperparameters. We show a competitive alternative that is alternatively data-free, with a small change to a naive round-to-nearest method. Recent studies have discovered that activation outliers are associated with weight outliers. The hid- den dimensions where activation outliers emerge have a high correlation to sensitive weight channels (Heo et al., 2024; Lee et al., 2024). 
Along these lines, activation magnitudes have been used as an in- dicator to find salient weight channels to preserve in weight quantization (Lin et al., 2024). We find the relationship between activations and weights is even more striking: Rather than channel-wise pairs, we find relationships between two individual scalars – up to six weights and one activation. 3 SUPER WEIGHTS Many studies corroborate the importance of weight outliers, showing that a small percentage of the largest magnitude outliers are essential to model quality. This percentage can be as small as 0.01%, but for these billion-parameter models, 0.01% can still include hundreds of thousands of weights. Our investigation reveals a surprising fact about even this group of weight outliers: There exists a single, scalar weight that, despite not being the largest, holds more importance than thousands of other outlier weights combined. We call this single scalar weight a super weight. In our analysis, we find that the super weight is necessary for quality, having an outsized influence on quality if removed. Without the super weight, LLMs fail to generate text, resulting in qualitatively (Figure 1) and quantitatively (Table 1) gibberish responses. In particular, zero-shot dataset accuracy is reduced to guessing, and perplexity increases by several orders of magnitude (Prune SW). To quantify this influence, we prune all other outlier weights (Prune Non-SW), comparing the impact of a single super weight against 7,000 other outliers. Remarkably, the accuracy drop associated with pruning this single weight is much greater than the effect of all other outliers combined. 3.1 IDENTIFICATION OF SUPER WEIGHTS Super weights create super activations. Sun et al. (2024) first discover a handful of exceptionally massive activations, which are crucial to model quality. Massive activations persist across many layers, feature constant magnitude, and always exist at the same position, regardless of input. We 3 Preprint. Under review. Model No. Type Weight Coordinates Model No. Type Weight Coordinates Llama 7B Llama 13B Llama 30B Llama2 7B Llama2 13B Mistral-7B v0.1 2 2 2 3 3 10 1 3 1 mlp mlp mlp mlp mlp mlp mlp mlp down proj [3968, 7003] down proj down proj down proj down proj down proj [2231, 2278] [2231, 6939] [5633, 12817] [5633, 17439] [5633, 14386] down proj [2533, 7890] down proj [4743, 7678] mlp down proj [2070, 7310] OLMo-1B 0724-hf OLMo-7B 0724-hf Phi-3 mini-4k-instruct 1 1 1 2 7 24 2 2 2 4 4 4 mlp mlp mlp mlp mlp mlp mlp mlp mlp mlp mlp mlp down proj down proj down proj down proj down proj down proj down proj down proj down proj down proj down proj down proj [1764, 1710] [1764, 8041] [269, 7467] [269, 8275] [269, 453] [269, 2300] [525, 808] [1693, 808] [1113, 808] [525, 2723] [1113, 2723] [1693, 2723] Table 2: Super Weight Directory. The above layer numbers, layer types, and weight types can be directly applied to Huggingface models. For example, for Llama-7B on Huggingface, access the super weight using layers[2].mlp.down proj.weight[3968, 7003]. find a further intriguing property: The activation’s channel aligns with our super weight’s, and the activation first appears right after our super weight. To confirm whether this is correlation or causation, we prune the super weight and check the massive activation’s magnitude. Per Figure 4, we discover that pruning the super weight drastically reduces the massive activation’s magnitude. This suggests that the massive activations are created by super weights. 
For consistency, we dub these massive activations “super activations”. With further investigation, we reveal the mechanism of super weights and super activations. Sun et al. (2024) explained super activations as bias terms, but they did not explain how super activations are created and why they are always in the same positions. Through empirical analysis, we find that before down projection, the Hadamard product of the gate and up projection creates a relatively large activation, which aligns with the findings of Yang et al. (2024). More importantly, the super weights further amplify it and create super activations. Identifying super weight by activation spikes. Based on the above analysis, we present an effi- cient way to locate super weights: SWs can be located by detecting the spikes in the down proj inputs and outputs distributions across the layers. This detection only requires a single input prompt, rather than a set of validation data or use-case examples. Suppose that we have a down proj weight matrix W ∈ RD×H , where D is the dimension of the activation feature and H is the intermediate hidden dimension. Let X ∈ RL×H be the input matrix, where L is the sequence length. Y = XWT , where Yij = (cid:80)d k=1 XikWjk. Suppose Yij is a super activation. If Xik and Wjk are both outliers that are much larger than other values, Yij will be dominated by their product. That is, Yij ≈ Xik Wjk. In this case, j and k are determined by Xik and Yij. Therefore, we start by plotting extreme outliers in the input and output activations of mlp.down proj. Then, we determine the layer and coordinates of the super weight, as illustrated in Figure 3. Once we have detected one super weight, we remove it from the model and repeat the above process, until the magnitudes of large maximum activations are greatly suppressed. We have identified super weights for commonly available LLMs across different LLM families and model sizes, presented in Table 2. Most of the models we have examined have no more than three super weights. The model with the most super weights, i.e., Phi-3-mini-4k-instruct, contains six. We have also examined the instruction-finetuned models, such as Mistral-7B-Instruct-v0.1 and Llama- 2-7B-chat. We find that their super weights are located at the same coordinates as the pre-trained models, which suggests that instruct fine-tuning does not change the position of super weights. 3.2 MECHANISMS OF SUPER WEIGHTS We find that super weights (1) induce super activations, which have lasting effects throughout the entire model, and (2) suppress stopword likelihood (Figure 2). Super weights (partially) operate via super activations. To assess whether the super weight’s impact on model quality is solely mediated by the super activations or also by activations of other 4 Preprint. Under review. Figure 2: How Super Weights behave. I: Super weights are often found in an early layer’s down projection, indicated with a blue-purple box. The super weight immediately creates an incredibly large-magnitude super activation. II: Super activations are propagated through skip connections, indicated with blue-purple lines. III: This has a net effect of suppressing stopword likelihoods in the final logits. Removing the super weight causes stopword likelihood skyrocket, indicated with the blue-purple stacked bars. See Appendix A.3. Figure 3: How to identify the Super Weight for Llama-7B. down proj input features a large maximum-magnitude activation only in Layer 2, where the super activation first appeared. 
The value’s channel index, e.g., 7003, tells the row of SW. down proj output likewise features a large maximum-magnitude activation at Layer 2. This value’s channel index, e.g., 3968, gives us the column of the SW. Figure 4: The super activation per- sists throughout the entire model, at exactly the same magnitude, start- ing after Layer 2. Pruning the su- per weight decreases the super acti- vation’s magnitude by 75%. tokens, we conducted an experiment involving the removal of super weights (SW) and restoration of super activations (SA). Note that a super weight should influence the same channel for all tokens. We conduct an ablation experiment with three conditions: (1) the original model, (2) remove the super weight (Prune SW), i.e., setting the weight scalar as zero, (3) remove the super weight and restore the super activation at the layer where it first appears (Prune SW,+SA). The third condition allows us to isolate the impact of super weights on super activations only. Results are shown in Table 1. Specifically, when we restore the super activations, the average ac- curacy recovers to 49.94 from 35.14, indicating that the restoration of super activations salvaged approximately 42% of the quality loss. These findings suggest that while the super activations con- tribute substantially to the model’s performance, they do not fully account for the super weight’s overall influence on quality. Super weights affect output token probability distributions. We studied the impact of super weights with respect to the output token probability distribution, averaged over 500 prompts from Lambaba validation set. We find that when super weights are removed, the stopword probabilities are amplified, e.g., with Llama-7B, the probability of “the” is amplified by around 2×, “.” by 5×, and “,” by 10× (Figure 5, Appendix Figure 11). To dive deeper on how SW impact the output token distribution, we conduct a case study with a prompt “Summer is hot. Winter is ”. The correct next token should be “cold”, which is a word that has strong semantic meaning. With the original model with SW, it correctly predicts the next token “cold” with a high probability 81.4%. However, when the SW is removed, the model’s top prediction is a stopword “the” with a non-confident low probability of 9.0%. This indicates that the SW is essential for the model to make a correct and confident prediction of meaningful words. Sensitivity of super weights. We aim to illustrate how variations in the magnitude of super weights impact the model’s quality, especially, how does increasing the magnitude affect model quallity. We multiply the super weights by a scaling factor ranging from 0.0 to 3.0. Results in 5 NormSelf-AttnNormMLPMaximmenutheokaIIIIII051015202530Layer Number5004003002001000Max Activation ValueLlama-7B Max down_proj Input051015202530Layer Number1500100050005001000Max Activation ValueLlama-7B Max down_proj Output051015202530Layer Number025050075010001250Max Activation ValueLlama-7B Max Layer OutputOriginalRemove Non-SWRemove SW Preprint. Under review. Figure 5: Super weights suppress stopwords. Above, we consistently observe that removing super weights results in 2-5× larger stopword probabilities, across a variety of LLMs. At the same time, we observe non- stopwords decrease sharply in probability, reducing by 2-3× to as little as 0.1% probability. Overall, this results in stopwords dominating the highest likelihood tokens. Figure 6: Amplifying super weight improves quality. 
Across model sizes, we consistently observe that there exists some scaling where quality is improved. Although the quality improvement is miniscule, a consistent and noticeable trend is surprising, given we’re changing only one scalar out of billions. The purple line is the original model’s zero-shot average accuracy. Figure 6 show that amplifying super weights can improve model accuracy, to some extent. See full versions of these plots, for more models and all datasets, in Appendix A.1. 4 SUPER-OUTLIER AWARE QUANTIZATION Quantization is a powerful technique for compressing models and reducing memory requirements. However, the presence of outliers can significantly degrade quantization quality, for both weight quantization and activation quantization. As we mentioned before, we refer to these problematic outliers, both super weights and super activations, as super outliers. As we have shown above, these super outliers carry disproportionate importance for model quality, making their preservation during quantization critical. Quantization generally maps continuous values to a finite set of values; we consider one of the simplest forms – namely, asymmetric round-to-nearest quantization: Q(X) = Round (cid:18) X − MIN(X) ∆ (cid:19) , Q−1( ˆX) = ∆ · ˆX + MIN(X) 2N −1−1 where ∆ = MAX(X)−MIN(X) is the quantization step and N is the number of bits. Note that the maximum value is used to calculate ∆, so super outliers in X drastically increase the step size. With larger step sizes, inliers are rounded to more distant values on average, increasing the quanti- zation error. With increasingly super outliers, inliers are rounded to fewer discrete values, and more quantization bins remain unused. In this way, super outliers cause poor quantization fidelity. We specifically consider the case where hardware performs arithmetic in half precision, meaning the tensor X is quantized and dequantized before usage; in this setting, we can leverage prior knowledge of super outliers in two ways. First, hold out the super outlier to prevent adverse effects on inlier quantization. Second, restore the super outlier’s value after dequantization, to ensure the super outlier’s effects are preserved. We adopt this insight in two forms below, for weights and activations. 6 theherJa.myAhimShIToken Labels0%1%2%3%4%5%6%7%8%ProbabilitiesLlama-7B Token ProbabilitiesOriginalNo SW the her. a A my J his him IToken Labels0%1%2%3%4%5%ProbabilitiesOLMo-7B Token ProbabilitiesOriginalNo SWthehera.JAmyhimT"Token Labels0%0%1%2%2%2%3%4%4%ProbabilitiesMistral-7B Token ProbabilitiesOriginalNo SW0.51.01.52.02.53.0SW Scaling Factor69.2569.5069.7570.0070.2570.50Average Zero-Shot Acc.Llama-7B Super Weight Sensitivity1.01.52.02.53.0SW Scaling Factor72.072.172.272.372.472.5Average Zero-Shot Acc.Llama-13B Super Weight Sensitivity0.81.01.21.4SW Scaling Factor75.075.175.275.3Average Zero-Shot Acc.Llama-30B Super Weight Sensitivity Preprint. Under review. PPL (↓) Llama-7B Llama-13B Llama-30B FP16 Wiki-2 5.68 C4 7.08 Wiki-2 5.09 C4 6.61 Wiki-2 4.10 C4 5.98 Naive W8A8 SmoothQuant Ours 5.83 (0%) 5.71 (100%) 5.74 (75%) 7.23 (0%) 7.12 (100%) 7.14 (82%) 5.20 (0%) 5.13 (100%) 5.15 (71%) 6.71 (0%) 6.64 (100%) 6.66 (71%) 4.32 (0%) 4.20 (100%) 4.22 (83%) 6.14 (0%) 6.06 (100%) 6.08 (75%) Table 3: Round-to-nearest with super-activation handling is competitive. W8A8 is the baseline 8-bit weight and activation quantization, and the small italicized, parenthesized percentages denote what percentage of SmoothQuant’s quality improvement is retained. 
We observe that a naive round-to-nearest, while handling a single scalar super activation per tensor, is competitive with SmoothQuant. Note that SmoothQuant uses calibration data to compute scales, whereas our method does not require data. 4.1 ACTIVATION QUANTIZATION We conduct experiments using round-to-nearest quantization, with a small modification – replace the super activation with the median value (REPLACE), quantize (Q) and dequantize (Q−1) activations, then restore the super activation in FP16 (RESTORE). This can be expressed as the following, ˆA = RESTORE(Q−1(Q(REPLACE(A))) (1) Since the super activation is a single scalar, the bitrate and kernel complexity are not significantly impacted. 4.2 WEIGHT QUANTIZATION Prior art uses (Dettmers et al., 2023; Lin et al., 2024) small group sizes of 64 or 128, as Dettmers & Zettlemoyer (2023) finds that small group sizes are required for precise low-bit quantization. How- ever, the small group sizes come with computational and bitrate overhead, requiring other techniques to handle a high number of half precision scales and biases. To address this challenge, we propose a simple method to improve INT4 quantization with large blocksizes. First, we identify super weights using Section 3.1. Second, to improve inlier fit, we clip (CLIP) the outlier weights; in this step, the super weight is clipped as well. Quantize (Q) and dequantize (Q−1) the clipped weights. Then, to ensure the effect of the super weight is preserved, we restore the half-precision super weight after dequantization (RESTORE). ˆW = RESTORE(Q−1(Q(CLIPz(W ))) (2) As described in the equation above, we parameterize clipping using a z-score. Assuming all weights fit a Gaussian, we consider all values with a z-score beyond a certain threshold z to be an outlier. To tune this hyperparameter z, we find the minimum reconstruction error z-score using 500 examples from the Wikitext-2 train set. 5 EXPERIMENTS To comprehensively demonstrate the effects of super weights, we conduct experiments across LLaMA 7B to 30B, (Touvron et al., 2023), Mistral 7B (Jiang et al., 2023), and OLMo (Groeneveld et al., 2024) 2 To assess the practical application capabilities of LLMs, we evaluate their accuracy on zero-shot benchmarks, including PIQA (Bisk et al., 2020), ARC (Clark et al., 2018), HellaSwag (Zellers et al., 2019), Lambada (Paperno et al., 2016), and Winogrande (Sakaguchi et al., 2021). We use the lm-evaluation-harness (Gao et al., 2024) library to evaluate the above tasks. We also calculate the perplexity for Wikitext-2 (Merity et al., 2017) and C4 (Raffel et al., 2020), following the widely accepted setting from (Frantar et al., 2022). 2For OLMo, we use their latest Huggingface checkpoints, e.g., allenai/OLMo-7B-0724-hf. 7 Preprint. Under review. PPL (↓) OLMo-1B OLMo-7B Mistral-7B Wiki-2 C4 Wiki-2 C4 Wiki-2 C4 FP16 Naive W8A8 Ours 10.12 (100%) 10.79 (0%) 10.23 (84%) 12.31 (100%) 12.84 (0%) 12.52 (60%) 7.51 (100%) 8.70 (0%) 7.80 (76%) 9.52 (100%) 10.41 (0%) 9.72 (78%) 5.25 (100%) 5.32 (0%) 5.31 (14%) 7.75 (100%) 7.83 (0%) 7.81 (25%) Table 4: Handling the super activation improves activation quantization. Perplexity ↓ on Wikitext-2 and C4 for OLMo models and various quantization methods. We can see that simply restoring the scalar-valued super activation after quantizing and dequantizing successfully improves quantization’s effectiveness at pre- serving quality. Notably, note that SmoothQuant does not work on OLMo, as its LayerNorms do not have adjustable parameters. 
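A minimal sketch of the two super-outlier treatments described in Section 4, the REPLACE/RESTORE handling of the super activation and the CLIP-and-RESTORE handling of the super weight, is given below. The helper names, the per-tensor scaling, and the toy input are choices made for illustration; they are not the authors' implementation, which additionally uses per-token activation scales and block-wise weight groups.

import torch

def rtn_quant_dequant(x, n_bits=8):
    # Asymmetric round-to-nearest over the whole tensor (per-tensor scale).
    lo, hi = x.min().item(), x.max().item()
    step = max(hi - lo, 1e-8) / (2 ** n_bits - 1)
    q = torch.round((x - lo) / step)
    return q * step + lo

def quantize_activation(a, super_idx, n_bits=8):
    # Eq. (1): REPLACE the super activation, quantize-dequantize, then RESTORE it.
    a = a.clone()
    original = a[super_idx].item()
    a[super_idx] = a.median()              # REPLACE with an inlier value
    a_hat = rtn_quant_dequant(a, n_bits)   # Q followed by Q^-1
    a_hat[super_idx] = original            # RESTORE at full precision
    return a_hat

def quantize_weight(w, super_idx, z=3.0, n_bits=4):
    # Eq. (2): CLIP_z outliers (the super weight included), quantize, RESTORE.
    # For a 2-D weight, super_idx is a (row, column) tuple.
    w = w.clone()
    original = w[super_idx].item()
    mu, sigma = w.mean().item(), w.std().item()
    w_clipped = w.clamp(mu - z * sigma, mu + z * sigma)   # CLIP_z
    w_hat = rtn_quant_dequant(w_clipped, n_bits)          # Q followed by Q^-1
    w_hat[super_idx] = original                           # RESTORE the super weight
    return w_hat

# Toy usage with a made-up super outlier at a known position.
a = torch.randn(4096)
a[3968] = -700.0   # pretend this channel carries the super activation
print((quantize_activation(a, 3968) - a).abs().mean())

Because only a single scalar is held out and written back, the treatment adds no meaningful bitrate and keeps the kernels hardware friendly, which is the design point argued above.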
See more results in Appendix A.4 . 5.1 ACTIVATION QUANTIZATION Following SmoothQuant’s setting, we simulate W8A8 quantization with FP16 arithmetic. Specif- ically, we perform 8-bit per-tensor quantization for weights, and 8-bit per-token quantization for activations. We quantize the inputs and weights for linear layers (including q, k, v, gate, up, down projections), and BMM (i.e., batched matmul) in attention layers. For SmoothQuant, we use the default α as 0.85. We compare our method with SmoothQuant in Table 3. For three Llama models on both datasets, we achieve over 70% of SmoothQuant’s improvement over naive quantization. On C4 with Llama- 7B and on Wikitext with Llama-30B, we attain above 80% of SmoothQuant’s improvement. Our method demonstrates that a significantly simplified approach to quantization can achieve competitive results compared to more complex methods. Unlike SmoothQuant, which applies scales to every activation channel, our method focuses solely on addressing one critical activation outlier. We extended our evaluation to include additional LLMs: OLMo (1B and 7B), Mistral-7B, and Llama-2-7B. Results are shown in Table 4 and Appendix Table 7. These models represent a di- verse set of architectures and training paradigms, allowing us to assess the generalizability of our quantization method. Since SmoothQuant does not report on this set of models, we compare our results with naive W8A8 quantization. Across all models and datasets, our method consistently out- performs naive W8A8 quantization. Our method demonstrates remarkable performance on OLMo models. Notably, OLMo models use non-parametric LayerNorm, making them incompatible with the SmoothQuant method, which relies on LayerNorm weights to apply the per-channel scales. On Mistral-7B, the improvements are smaller. We hypothesize that this is because the LayerNorm of these models may have learned weights that aggressively suppress the super activation, resulting in a more uniform distribution of activation magnitudes. These results underscore the critical importance of the super activation in maintaining model per- formance during quantization. By addressing this single activation with minimal computational overhead, our method captures a significant portion of the benefits achieved by more complex quan- tization schemes. This finding suggests that the super activation plays a disproportionately large role in preserving model quality during the quantization process. 5.2 WEIGHT QUANTIZATION Recent advancements in LLM quantization techniques have inadvertently highlighted the impor- tance of super weights. Two notable methods, AWQ (Lin et al., 2024) and SqueezeLLM (Kim et al., 2024), demonstrate the significance of preserving these super weights, albeit through different approaches. 5.2.1 EXISTING WORKAROUNDS FOR THE SUPER WEIGHT AWQ, recognizing the need to minimize quantization errors for important weights, introduced a per-channel scaling method. This technique automatically searches for optimal scaling factors, ef- fectively amplifying crucial weight channels. Our analysis of Llama-7B revealed that AWQ scales 8 Preprint. Under review. Figure 7: Restoring super weight improves block scaling. Smaller block sizes are often used to handle outliers implicitly. We note that block sizes can scale slightly more gracefully by just handling the single scalar-valued super weight. up the super weight by a factor of 12, corroborating our assessment of the super weight’s importance. 
Similarly, SqueezeLLM proposes a sparse matrix approach that retains the top 0.05% of outlier values in FP16 precision. Our investigation confirmed that this sparse matrix consistently includes the super weights, further validating their importance. Both AWQ and SqueezeLLM, despite employing different strategies, converge on the same principle: protecting super weights is crucial for effective weight quantization in LLMs.

5.2.2 SCALING UP BLOCK SIZES

To evaluate the effectiveness of the proposed super weight-aware quantization method, we compare it with the traditional round-to-nearest quantization approach. We assess the models on a suite of zero-shot downstream tasks, with results illustrated in Figure 7.

In the traditional round-to-nearest quantization, we observe a clear trend: as the block size increases, model quality significantly degrades. This decline likely results from the increased quantization error introduced when larger blocks of weights are quantized together, which allows outliers to impact more weights. In contrast, our super weight-aware quantization method demonstrates much greater robustness to larger block sizes. As the block size increases, the degradation in model quality is noticeably smaller compared to the round-to-nearest method. This robustness stems from our method's ability to preserve the most critical weight (the super weight) while minimizing the influence of outlier weights on the overall quantization process. By clipping outliers and focusing on inlier weights, our method maintains higher fidelity in representing the model's parameters.

A key advantage of our method is its ability to support larger block sizes with less loss in model quality. This capability leads to a lower average bitrate and smaller file sizes, which are essential for deploying models in resource-constrained environments, such as mobile devices or edge computing scenarios.

6 CONCLUSION

Our study of Large Language Models has revealed the critical role of super outliers – specifically, the super weight and its induced super activation. Although these super outliers are small in number, identifying and preserving them is essential for model quality; pruning the super weight completely destroys the model's ability to generate text, and retaining the super weight can significantly improve the quantized model's quality. Our findings shed light on how these outliers influence model behavior and provide practical strategies for their detection and management. By sharing a directory of super weights, we furthermore hope to inspire further research into their properties and implications.

REFERENCES

Saleh Ashkboos, Amirkeivan Mohtashami, Maximilian L Croci, Bo Li, Martin Jaggi, Dan Alistarh, Torsten Hoefler, and James Hensman. Quarot: Outlier-free 4-bit inference in rotated llms. arXiv preprint arXiv:2404.00456, 2024.

Yonatan Bisk, Rowan Zellers, Jianfeng Gao, Yejin Choi, et al. Piqa: Reasoning about physical commonsense in natural language. In Proceedings of the AAAI conference on artificial intelligence, volume 34, pp. 7432–7439, 2020.

Yelysei Bondarenko, Markus Nagel, and Tijmen Blankevoort. Understanding and overcoming the challenges of efficient transformer quantization.
In Marie-Francine Moens, Xuanjing Huang, Lucia Specia, and Scott Wen-tau Yih (eds.), Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pp. 7947–7969, Online and Punta Cana, Dominican Republic, November 2021. Association for Computational Linguistics. doi: 10.18653/v1/2021.emnlp-main.627. URL https://aclanthology.org/2021.emnlp-main.627.

Jerry Chee, Yaohui Cai, Volodymyr Kuleshov, and Christopher M De Sa. Quip: 2-bit quantization of large language models with guarantees. In A. Oh, T. Naumann, A. Globerson, K. Saenko, M. Hardt, and S. Levine (eds.), Advances in Neural Information Processing Systems, volume 36, pp. 4396–4429. Curran Associates, Inc., 2023. URL https://proceedings.neurips.cc/paper_files/paper/2023/file/0df38cd13520747e1e64e5b123a78ef8-Paper-Conference.pdf.

Peter Clark, Isaac Cowhey, Oren Etzioni, Tushar Khot, Ashish Sabharwal, Carissa Schoenick, and Oyvind Tafjord. Think you have solved question answering? try arc, the ai2 reasoning challenge, 2018. URL https://arxiv.org/abs/1803.05457.

Tim Dettmers and Luke Zettlemoyer. The case for 4-bit precision: k-bit inference scaling laws. In International Conference on Machine Learning, pp. 7750–7774. PMLR, 2023.

Tim Dettmers, Mike Lewis, Younes Belkada, and Luke Zettlemoyer. Gpt3.int8(): 8-bit matrix multiplication for transformers at scale. In S. Koyejo, S. Mohamed, A. Agarwal, D. Belgrave, K. Cho, and A. Oh (eds.), Advances in Neural Information Processing Systems, volume 35, pp. 30318–30332. Curran Associates, Inc., 2022. URL https://proceedings.neurips.cc/paper_files/paper/2022/file/c3ba4962c05c49636d4c6206a97e9c8a-Paper-Conference.pdf.

Tim Dettmers, Artidoro Pagnoni, Ari Holtzman, and Luke Zettlemoyer. QLoRA: Efficient finetuning of quantized LLMs. In Thirty-seventh Conference on Neural Information Processing Systems, 2023. URL https://openreview.net/forum?id=OUIFPHEgJU.

Tim Dettmers, Ruslan A. Svirschevski, Vage Egiazarian, Denis Kuznedelev, Elias Frantar, Saleh Ashkboos, Alexander Borzunov, Torsten Hoefler, and Dan Alistarh. SpQR: A sparse-quantized representation for near-lossless LLM weight compression. In The Twelfth International Conference on Learning Representations, 2024. URL https://openreview.net/forum?id=Q1u25ahSuy.

Elias Frantar, Saleh Ashkboos, Torsten Hoefler, and Dan Alistarh. GPTQ: Accurate post-training compression for generative pretrained transformers. arXiv preprint arXiv:2210.17323, 2022.

Leo Gao, Jonathan Tow, Baber Abbasi, Stella Biderman, Sid Black, Anthony DiPofi, Charles Foster, Laurence Golding, Jeffrey Hsu, Alain Le Noac'h, Haonan Li, Kyle McDonell, Niklas Muennighoff, Chris Ociepa, Jason Phang, Laria Reynolds, Hailey Schoelkopf, Aviya Skowron, Lintang Sutawika, Eric Tang, Anish Thite, Ben Wang, Kevin Wang, and Andy Zou. A framework for few-shot language model evaluation, 07 2024. URL https://zenodo.org/records/12608602.

Dirk Groeneveld, Iz Beltagy, Pete Walsh, Akshita Bhagia, Rodney Kinney, Oyvind Tafjord, Ananya Harsh Jha, Hamish Ivison, Ian Magnusson, Yizhong Wang, et al. Olmo: Accelerating the science of language models. arXiv preprint arXiv:2402.00838, 2024.

Jung Hwan Heo, Jeonghoon Kim, Beomseok Kwon, Byeongwook Kim, Se Jung Kwon, and Dongsoo Lee. Rethinking channel dimensions to isolate outliers for low-bit weight quantization of large language models. In The Twelfth International Conference on Learning Representations, 2024. URL https://openreview.net/forum?id=JzG7kSpjJk.

Albert Q.
Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lucile Saulnier, Lélio Renard Lavaud, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Thibaut Lavril, Thomas Wang, Timothée Lacroix, and William El Sayed. Mistral 7b, 2023. URL https://arxiv.org/abs/2310.06825.

Sehoon Kim, Coleman Richard Charles Hooper, Amir Gholami, Zhen Dong, Xiuyu Li, Sheng Shen, Michael W. Mahoney, and Kurt Keutzer. SqueezeLLM: Dense-and-sparse quantization. In Forty-first International Conference on Machine Learning, 2024. URL https://openreview.net/forum?id=0jpbpFia8m.

Olga Kovaleva, Saurabh Kulshreshtha, Anna Rogers, and Anna Rumshisky. Bert busters: Outlier dimensions that disrupt transformers. arXiv preprint arXiv:2105.06990, 2021.

Changhun Lee, Jungyu Jin, Taesu Kim, Hyungjun Kim, and Eunhyeok Park. Owq: Outlier-aware weight quantization for efficient fine-tuning and inference of large language models. Proceedings of the AAAI Conference on Artificial Intelligence, 38(12):13355–13364, Mar. 2024. doi: 10.1609/aaai.v38i12.29237. URL https://ojs.aaai.org/index.php/AAAI/article/view/29237.

Ji Lin, Jiaming Tang, Haotian Tang, Shang Yang, Wei-Ming Chen, Wei-Chen Wang, Guangxuan Xiao, Xingyu Dang, Chuang Gan, and Song Han. Awq: Activation-aware weight quantization for on-device llm compression and acceleration. Proceedings of Machine Learning and Systems, 6:87–100, 2024.

Zechun Liu, Changsheng Zhao, Igor Fedorov, Bilge Soran, Dhruv Choudhary, Raghuraman Krishnamoorthi, Vikas Chandra, Yuandong Tian, and Tijmen Blankevoort. Spinquant–llm quantization with learned rotations. arXiv preprint arXiv:2405.16406, 2024.

Stephen Merity, Caiming Xiong, James Bradbury, and Richard Socher. Pointer sentinel mixture models. In International Conference on Learning Representations, 2017. URL https://openreview.net/forum?id=Byj72udxe.

Denis Paperno, Germán Kruszewski, Angeliki Lazaridou, Ngoc Quan Pham, Raffaella Bernardi, Sandro Pezzelle, Marco Baroni, Gemma Boleda, and Raquel Fernández. The LAMBADA dataset: Word prediction requiring a broad discourse context. In Katrin Erk and Noah A. Smith (eds.), Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 1525–1534, Berlin, Germany, August 2016. Association for Computational Linguistics. doi: 10.18653/v1/P16-1144. URL https://aclanthology.org/P16-1144.

Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. Exploring the limits of transfer learning with a unified text-to-text transformer. Journal of machine learning research, 21(140):1–67, 2020.

Keisuke Sakaguchi, Ronan Le Bras, Chandra Bhagavatula, and Yejin Choi. Winogrande: an adversarial winograd schema challenge at scale. Commun. ACM, 64(9):99–106, aug 2021. ISSN 0001-0782. doi: 10.1145/3474381. URL https://doi.org/10.1145/3474381.

Wenqi Shao, Mengzhao Chen, Zhaoyang Zhang, Peng Xu, Lirui Zhao, Zhiqian Li, Kaipeng Zhang, Peng Gao, Yu Qiao, and Ping Luo. Omniquant: Omnidirectionally calibrated quantization for large language models. In The Twelfth International Conference on Learning Representations, 2024. URL https://openreview.net/forum?id=8Wuvhh0LYW.

Seungwoo Son, Wonpyo Park, Woohyun Han, Kyuyeun Kim, and Jaeho Lee. Prefixing attention sinks can mitigate activation outliers for large language model quantization. arXiv preprint arXiv:2406.12016, 2024.
Mingjie Sun, Xinlei Chen, J Zico Kolter, and Zhuang Liu. Massive activations in large language models. In ICLR 2024 Workshop on Mathematical and Empirical Understanding of Foundation Models, 2024. URL https://openreview.net/forum?id=1ayU4fMqme.

Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, Aurelien Rodriguez, Armand Joulin, Edouard Grave, and Guillaume Lample. Llama: Open and efficient foundation language models, 2023. URL https://arxiv.org/abs/2302.13971.

Xiuying Wei, Yunchen Zhang, Xiangguo Zhang, Ruihao Gong, Shanghang Zhang, Qi Zhang, Fengwei Yu, and Xianglong Liu. Outlier suppression: Pushing the limit of low-bit transformer language models. In S. Koyejo, S. Mohamed, A. Agarwal, D. Belgrave, K. Cho, and A. Oh (eds.), Advances in Neural Information Processing Systems, volume 35, pp. 17402–17414. Curran Associates, Inc., 2022. URL https://proceedings.neurips.cc/paper_files/paper/2022/file/6f6db140de9c9f111b12ef8a216320a9-Paper-Conference.pdf.

Xiuying Wei, Yunchen Zhang, Yuhang Li, Xiangguo Zhang, Ruihao Gong, Jinyang Guo, and Xianglong Liu. Outlier suppression+: Accurate quantization of large language models by equivalent and effective shifting and scaling. In Houda Bouamor, Juan Pino, and Kalika Bali (eds.), Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pp. 1648–1665, Singapore, December 2023. Association for Computational Linguistics. doi: 10.18653/v1/2023.emnlp-main.102. URL https://aclanthology.org/2023.emnlp-main.102.

Guangxuan Xiao, Ji Lin, Mickael Seznec, Hao Wu, Julien Demouth, and Song Han. Smoothquant: accurate and efficient post-training quantization for large language models. In Proceedings of the 40th International Conference on Machine Learning, ICML'23. JMLR.org, 2023.

Guangxuan Xiao, Yuandong Tian, Beidi Chen, Song Han, and Mike Lewis. Efficient streaming language models with attention sinks. In The Twelfth International Conference on Learning Representations, 2024. URL https://openreview.net/forum?id=NG7sS51zVF.

Jaewoo Yang, Hayun Kim, and Younghoon Kim. Mitigating quantization errors due to activation spikes in glu-based llms. arXiv preprint arXiv:2405.14428, 2024.

Rowan Zellers, Ari Holtzman, Yonatan Bisk, Ali Farhadi, and Yejin Choi. HellaSwag: Can a machine really finish your sentence? In Anna Korhonen, David Traum, and Lluís Màrquez (eds.), Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pp. 4791–4800, Florence, Italy, July 2019. Association for Computational Linguistics. doi: 10.18653/v1/P19-1472. URL https://aclanthology.org/P19-1472.

A APPENDIX

A.1 SUPER WEIGHT SENSITIVITY

In this section, we show the full set of results on zero-shot downstream datasets. Results are shown in Table 5 for FP16 models. From the table, we can see that some datasets are more sensitive to super weight (SW) amplification. For example, Winogrande and Lambada show consistent improvements across models when amplifying SW, while PIQA shows same or slightly lower accuracy. We also evaluate the 4-bit quantized Llama-7B with amplified SW in Table 6. We witness similar minor improvements when SW is amplified.
              Llama-7B               Llama-13B              Llama-30B              Mistral-7B
              Original   Amplified   Original   Amplified   Original   Amplified   Original   Amplified
ARC-C         41.81      41.89       46.42      46.76       52.82      52.47       50.25      49.74
ARC-E         75.29      75.63       77.36      77.19       80.39      80.51       80.89      81.02
Hella.        56.93      56.76       59.96      59.82       63.34      63.21       61.23      61.39
Lamb.         73.51      74.09       76.15      76.58       77.59      77.68       75.62      75.92
PIQA          78.67      78.67       79.11      78.94       80.96      81.28       80.73      80.47
SciQ          94.60      95.40       95.00      95.30       96.10      96.10       95.90      96.00
Wino.         70.01      71.11       72.77      72.85       75.69      76.01       73.71      74.11
AVG           70.12      70.51       72.39      72.49       75.27      75.33       74.05      74.09

Table 5: Accuracy of zero-shot benchmarks of amplifying super weights in FP16 models. The scaling factor is chosen by the highest average accuracy.

              RTN-4bit-8x8            RTN-4bit-64x64
              Original   Amplified    Original   Amplified
Llama-7B      69.59      69.88        66.19      67.54
Llama-13B     72.09      72.13        71.86      72.07
Llama-30B     74.93      75.16        73.88      74.04

Table 6: Average accuracy of zero-shot benchmarks of amplifying super weights in models with round-to-nearest 4-bit weight quantization with block sizes of 8x8 and 64x64. On quantized models, amplifying super weights also yields a small yet consistent quality improvement.

We visualize the full sensitivity graph starting from zero. We note that the average of all zero-shot datasets is around 30% when datasets are reduced to guessing accuracies.

Figure 8: Amplifying super weight improves quality. Full results for scaling super weight from 0 to 3.

A.2 MAXIMUM-MAGNITUDE ACTIVATION OF DOWN PROJECTION MODULE

Below, we show more examples of identifying super weights. We visualize the maximum-magnitude activations in the inputs and outputs of down_proj across all the transformer layers. The outlier "sparks" indicate the row and column indices of super weights.

Figure 9: Maximum-magnitude activation of down_proj across all transformer layers of Mistral-7B.

Figure 10: Maximum-magnitude activation of down_proj across all transformer layers of OLMo-7B.

A.3 LOGIT DISTRIBUTION WITH SUPER WEIGHT REMOVAL

Below, we visualize more of the logit distribution when super weights are removed from Llama-7B. Despite the more thorough visualization, the conclusions remain the same: stopwords are amplified and non-stopwords see a drastic decrease in likelihood.

Figure 11: Output token distribution before and after removing the super weight on Llama-7B.

A.4 ADDITIONAL ACTIVATION QUANTIZATION RESULTS WITH SUPER ACTIVATIONS

Below, we include results for Llama-2 7B using our activation quantization. See Table 7.

A.5 SUPER WEIGHTS AND ATTENTION SINKS

Given that super activations are typically observed on the first token of an input sequence, we hypothesized a potential relationship between super weights and the well-documented phenomenon of attention sinks (Xiao et al., 2024; Son et al., 2024). To test this hypothesis, we conducted experiments comparing attention weight patterns in the presence and absence of super weights. Contrary to our initial expectations, we find that attention sinks persist even when super weights are removed from the model, even though model quality is not preserved.
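For readers who want to repeat this comparison, a rough sketch using HuggingFace Transformers is given below. It reflects our reading of the experiment rather than the exact protocol: the checkpoint name, probe sentence, and especially the (layer, row, column) coordinates are placeholders to be replaced with entries from the released directory of super weights, a GPU is assumed for the FP16 forward pass, and output_attentions requires the eager attention implementation in recent library versions.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def attention_mass_on_first_token(model, tok, text):
    # Mean attention that later query positions place on key position 0,
    # averaged over layers and heads -- a simple attention-sink measure.
    inputs = tok(text, return_tensors="pt").to(model.device)
    with torch.no_grad():
        out = model(**inputs, output_attentions=True)
    per_layer = [a[0, :, 1:, 0].mean() for a in out.attentions]
    return torch.stack(per_layer).mean().item()

name = "huggyllama/llama-7b"  # placeholder checkpoint
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(
    name, torch_dtype=torch.float16, attn_implementation="eager").cuda()

text = "The quick brown fox jumps over the lazy dog."
sink_before = attention_mass_on_first_token(model, tok, text)

# Remove the super weight; (layer, row, col) below are placeholders -- use the
# coordinates from the super weight directory for the model at hand.
layer, row, col = 2, 0, 0
with torch.no_grad():
    model.model.layers[layer].mlp.down_proj.weight[row, col] = 0.0

sink_after = attention_mass_on_first_token(model, tok, text)
print(sink_before, sink_after)  # sinks persist even though generation quality degrades
```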
Figure 12: Output token distribution before and after removing the super weight on Mistral-7B.

Figure 13: Output token distribution before and after removing the super weight on OLMo-7B.

PPL (↓)    Llama-2-7B
           Wiki-2        C4
FP16       5.47 (100%)   6.97 (100%)
W8A8       5.58 (0%)     7.09 (0%)
Ours       5.57 (9.1%)   7.07 (16.7%)

Table 7: Perplexity (↓) on Wikitext-2 and C4 with Llama-2-7B.
Synthetic_Data_Generation_with_Large_Language_Models_for_Text_Classification_Potential_and_Limitations.pdf
Synthetic Data Generation with Large Language Models for Text Classification: Potential and Limitations Zhuoyan Li1, Hangxiao Zhu2, Zhuoran Lu1, Ming Yin1 1Purdue University 2Washington University in St. Louis {li4178, lu800, mingyin}@purdue.edu, [email protected] Abstract The collection and curation of high-quality training data is crucial for developing text clas- sification models with superior performance, but it is often associated with significant costs and time investment. Researchers have recently explored using large language models (LLMs) to generate synthetic datasets as an alternative approach. However, the effectiveness of the LLM-generated synthetic data in supporting model training is inconsistent across different classification tasks. To better understand fac- tors that moderate the effectiveness of the LLM- generated synthetic data, in this study, we look into how the performance of models trained on these synthetic data may vary with the subjec- tivity of classification. Our results indicate that subjectivity, at both the task level and instance level, is negatively associated with the perfor- mance of the model trained on synthetic data. We conclude by discussing the implications of our work on the potential and limitations of leveraging LLM for synthetic data generation1. 1 Introduction Today, machine-learning-powered text classifica- tion models have been widely applied in diverse applications such as detecting biased or toxic lan- guage on online platforms (Wiegand et al., 2019) and filtering spam emails (Jindal and Liu, 2007). However, the performance of these models largely depends on the quality of the training data. This poses a substantial challenge in practice, especially when models need to be built for a novel task do- main or to incorporate new classification categories, as the training data collection and curation process is often costly, time-consuming, and complex. Meanwhile, with the recent advancements in large language models (LLMs), researchers have started to explore the potential of utilizing LLMs for generating synthetic data tailored to specific 1The collected human annotations are available at huggingface.co/datasets/xfleezy/human_annotation_emnlp23. tasks and augmenting the training data in low- resourced data settings (Kumar et al., 2020; Yoo et al., 2021; Hartvigsen et al., 2022; Sahu et al., 2022). Most recently, a few studies also investi- gate into the feasibility of generating a synthetic dataset from scratch using LLMs to support zero- shot learning (Ye et al., 2022; Wang et al., 2021; Tang et al., 2023; Gao et al., 2023). While LLM- based data augmentation is often found to outper- form other data augmentation methods in boosting the model performance, mixed results are reported regarding whether the LLM-generated synthetic data can effectively support model training to en- able a level of model performance that is compara- ble to models trained on the data collected in the real world and carefully annotated. This leaves uncertainty for researchers and practitioners in de- ciding whether to rely on LLMs for synthetic data generation or to proceed with the traditional data collection and curation pipeline when they need to construct a text classification model for a new task. Naturally, one may wonder what factors might mod- erate the effectiveness of LLM-generated synthetic data in facilitating successful model training. We conjecture that one such factor could be the subjectivity of classification tasks. 
Indeed, language is inherently subjective and interpretive (Benveniste, 1971; Wiebe et al., 2004). Previous research has shown that people often perceive the same text in different ways because of their personal biases and perspectives (Sap et al., 2021; Li et al., 2022; Gordon et al., 2022). Thus, achieving high model performance for classification tasks with high subjectivity seems to impose a greater demand on the training data in reflecting the richness and nuances present in human language, and the extent to which LLM-generated synthetic data can accomplish this objective is unclear.

Thus, in this paper, we formally evaluate the effectiveness of LLM (i.e., the cutting-edge GPT-3.5-Turbo model) in generating synthetic data to support model training for different text classification tasks. We adopt two approaches for synthetic data generation—a zero-shot setting in which the LLM is directly prompted to generate text instances with different labels of interest, and a few-shot setting in which a few real-world data instances are provided as examples to guide the LLM in generating the synthetic data. We conduct two evaluation studies, each corresponding to one dimension of subjectivity—the first study examines the effectiveness of the synthetic data on 10 types of classification tasks and explores how it varies with the task-level subjectivity (i.e., whether this type of classification task is subjective); the second study concerns how, given a specific classification task, the performance of a model trained on synthetic data changes with the instance-level subjectivity (i.e., whether people tend to disagree with each other on the label of this task instance). Our findings suggest that across the 10 types of classification tasks that we have considered in this study, models trained on the LLM-generated synthetic data generally perform worse than those trained on the real-world data, yet guiding LLM's synthetic data generation process with a small amount of real-world data (i.e., as done in the few-shot data generation setting) can improve the effectiveness of the data generated. Moreover, we find that the performance of models trained on the LLM-generated synthetic data is very close to those trained on the real-world data for tasks with low subjectivity (e.g., news topic classification, spam email detection), while the performance decrease is much larger on tasks with high subjectivity (e.g., humor or sarcasm detection). Finally, even within the same type of classification task, models trained on the LLM-generated synthetic data tend to exhibit a higher level of performance on those task instances with lower subjectivity, for which human annotators exhibit a higher level of agreement in their annotation.

Together, our study provides important experimental evidence regarding the potential and limitations of using LLMs to generate synthetic data for text classification tasks. We conclude by discussing the implications, limitations, and future work of our study.

2 Related Work

Generative AI in synthetic data generation. Recent advancements in generative AI have motivated numerous studies to explore the potential of leveraging generative models to create synthetic data for training machine learning models, especially for computer vision (CV) and natural language processing (NLP) tasks.
In the realm of CV, several works have utilized GAN-based models (Karras et al., 2019) or diffusion models (Nichol et al., 2021) to generate synthetic data for image recognition (Besnier et al., 2020; He et al., 2022) or object segmentation (Zhang et al., 2021). Similarly, in the NLP field, researchers have also probed into the capacity of language models in generating synthetic data for various text classification tasks (Kumar et al., 2020; Chung et al., 2023; Sahu et al., 2022; Yoo et al., 2021; Ye et al., 2022; Wang et al., 2021; Hartvigsen et al., 2022; Meng et al., 2022; Gao et al., 2022; Aggarwal et al., 2022; Chen et al., 2022), with mixed results reported regarding the effectiveness of the synthetic data generated. In this study, we aim to obtain a better understanding of when the synthetic data generated by language models can lead to effective model training, and we focus on exploring the role of task subjectivity in moderating the effectiveness of the synthetic data.

Large language models. Based on the Transformer architecture (Vaswani et al., 2017), large language models (LLMs) have facilitated remarkable progress in the field of natural language processing. The utilization of bidirectional contexts in the BERT model (Devlin et al., 2018) has resulted in superior performance across a wide range of tasks. Building on this, OpenAI's GPT series, comprising models like GPT-2 (Radford et al., 2019), the colossal GPT-3 (Brown et al., 2020) with an impressive 175 billion parameters, and the most recent GPT-4 (OpenAI, 2023), pushed the boundaries of the possibilities of LLMs. These models exhibit remarkable proficiency in generating high-quality human-like text (Clark et al., 2021; Dou et al., 2021; Zhou et al., 2023), showcasing capabilities in rudimentary reasoning (Wei et al., 2021), translation (Brown et al., 2020), scientific synthetic data generation (Hämäläinen et al., 2023), and code generation (Mcnutt et al., 2023). In this study, we focus on leveraging the cutting-edge GPT-3.5-Turbo model² to explore its capabilities and limitations in synthesizing data for text classification tasks with different subjectivity levels.

² We used GPT-3.5-Turbo as the foundational model to generate synthetic data because at the time of this study, an official API for the more advanced GPT-4 model was not yet available from OpenAI.

3 Methodology

In this section, we outline the procedure we have followed when leveraging the large language model to generate the synthetic training data for text classification. We consider two data generation settings in this study, i.e., the zero-shot setting and the few-shot setting.

3.1 Zero-shot Synthetic Data Generation

Under the zero-shot synthetic data generation setting, given a text classification task, we assume that the real-world data in the form of "text-label pairs" do not exist. Thus, in order to obtain synthetic training data for the text classification task, two sequential prompts are constructed and supplied to the pretrained large language model (i.e., the GPT-3.5-Turbo model). First, a customized "context prompt" relevant to the targeted domain of interest is used to set the context. For example, in the case of the IMDB movie review classification task (Maas et al., 2011), the customized context prompt used is "Imagine you are a movie reviewer on the IMDB platform".
This prompt aims to encourage the LLM to generate synthetic data that resemble the real texts produced in the targeted domain. After the context is set, a second prompt, i.e., the "data generation prompt", is provided to the LLM, instructing the model to generate texts with a specific style, label (with respect to the classification task of interest), and word limit. For example, for the IMDB movie review classification task, the style of the text is a movie review, and the label is a targeted sentiment conveyed by the review (i.e., "positive" or "negative"). To further enhance the diversity of the generated data, after the generation of every n data points (i.e., texts of targeted styles, labels, and word limits)³, we provide a "diversity prompt" to the LLM—"Can you provide something more diverse compared to the previously generated data?"—aiming to increase the diversity of the synthetic data generated.

³ To increase data diversity while maintaining a reasonable data generation speed, n is set to 10 for generating short texts (i.e., texts with a maximum length of 30 words), and 1 for generating longer paragraphs.

3.2 Few-shot Synthetic Data Generation

Under the few-shot synthetic data generation setting, we assume that a small amount of real-world data are available for the text classification task. These data points can then serve as the examples for the large language model in the data generation process, which can potentially provide the LLM with insights into the patterns exhibited in the real-world data. We again start the data generation process by using a context prompt to set the context. However, different from the zero-shot setting, here, each time before we instruct the LLM to generate a piece of text, we first provide the model with a few randomly sampled real-world data instances (including both the text and the label) as the examples. To keep the LLM from merely rephrasing the provided examples, an additional prompt is used to impose a constraint on the LLM in generating the synthetic data (i.e., "You should imitate the example I have provided, but you cannot simply modify or rewrite the example I have given.").

For more details about the prompts used for generating data for each type of text classification task, please refer to App. D. A minimal illustrative sketch of this generation procedure is also shown after the dataset list below.

4 Evaluation I: Comparison Across Different Types of Tasks

In our first evaluation study, we investigate how well the synthetic data generated by the LLM under both zero-shot and few-shot settings can support effective model training for different types of text classification tasks. We are especially interested in comparing the model performance between those trained on the real-world data and on the LLM-generated synthetic data, and in understanding how the performance of those models trained on the LLM-generated synthetic data varies with the subjectivity of the text classification task.

4.1 Datasets and Tasks

We experiment with 10 representative datasets covering a variety of text classification tasks: AG's news (Zhang et al., 2015), IMDB reviews (Maas et al., 2011), SMS spam (Almeida et al., 2011), Financial phrase bank (Malo et al., 2014), Reddit emotion (Demszky et al., 2020), Relation classification (Gao et al., 2019), Tweet irony speech (Van Hee et al., 2018), Tweet emotions (Mohammad et al., 2018), Sarcasm news (Misra and Arora, 2023; Misra and Grover, 2021), and Humor speech (Annamoradnejad and Zoghi, 2020). See App. A.1 for detailed descriptions of the datasets and the corresponding text classification tasks.
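Returning to the generation procedure of Section 3, the sketch below illustrates the zero-shot loop using the OpenAI chat API (openai-python v1 interface). Only the quoted context and diversity prompts come from the paper; the data-generation prompt wording, function names, and conversation handling are illustrative assumptions. A few-shot variant would additionally prepend randomly sampled real-world text-label examples together with the imitation constraint quoted above.

```python
from openai import OpenAI  # assumes the openai-python v1 SDK and an API key in the environment

client = OpenAI()

CONTEXT_PROMPT = "Imagine you are a movie reviewer on the IMDB platform."
DIVERSITY_PROMPT = ("Can you provide something more diverse compared to the "
                    "previously generated data?")

def zero_shot_generate(label, n_total=30, refresh_every=10, max_words=30):
    """Zero-shot loop (Section 3.1): set the context once, then repeatedly request
    labeled texts, issuing a diversity prompt after every `refresh_every` items."""
    messages = [{"role": "system", "content": CONTEXT_PROMPT}]
    samples = []
    for i in range(n_total):
        if i > 0 and i % refresh_every == 0:
            messages.append({"role": "user", "content": DIVERSITY_PROMPT})
        # Data-generation prompt: style, label, and word limit (wording is illustrative).
        messages.append({"role": "user", "content":
                         f"Write a movie review that conveys a {label} sentiment, "
                         f"using at most {max_words} words."})
        resp = client.chat.completions.create(model="gpt-3.5-turbo", messages=messages)
        text = resp.choices[0].message.content.strip()
        messages.append({"role": "assistant", "content": text})
        samples.append({"text": text, "label": label})
    return samples
```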
These datasets are selected with the goal of spanning a wide range of task subjectivity in mind. For exam- ple, we conjecture that classifying the news topic category (e.g., as that in the AG’s news dataset) is relatively objective, while determining whether texts are humorous (e.g., as that in the Humor speech dataset) is quite subjective (Veatch, 1998). 4.2 Task-level Subjectivity Determination To formally determine the subjectivity levels of dif- ferent text classification tasks, we first conduct a crowdsourced study to collect subjectivity judge- ments from the crowd. Study procedure. We adopt a comparative ap- proach to collect crowdsourced subjectivity judge- ments in this study. Specifically, we recruited crowd workers from Amazon Mechanical Turk (MTurk), and each worker was asked to complete a sequence of 10 subjectivity judgement tasks. In each task, we randomly sampled a pair of text clas- sification tasks from the 10 tasks that we considered in this evaluation, and we presented to the worker the task description, label description, and task ex- amples for each task in the pair. Then, the worker was asked to determine which text classification task in the pair was more objective, with “objec- tivity” of a task defined as “the classification of a piece of text is based on clear, identifiable features in the text (e.g., keywords or phrases), and can be done without being affected by any personal inter- pretation of the text resulted from personal biases, emotions or beliefs.” The study was restricted to U.S. workers. Each worker was allowed to partic- ipate only once and received a $1.2 payment. An attention check question was included in the study to validate the worker’s engagement, and only the data from workers who successfully passed the at- tention check were considered valid. Ranking task subjectivity. After excluding re- sponses from inattentive workers, a total of 540 pairwise subjectivity comparisons for the 10 tasks were obtained from 54 workers. For each pair of tasks, we aggregated relative subjectivity judg- ments made on this pair to determine which task was perceived as more subjective (i.e., less objec- tive). To produce a ranking of the subjectivity of the 10 tasks, we constructed a directed graph based on the pairwise subjectivity comparisons—each task was a node in this graph, and directed edges were added between each pair of tasks, pointing from the one that was deemed as more subjective (on the aggregate level) to the one deemed as less subjective. The topological sort algorithm (Cormen et al., 2022) was then applied to this directed graph to obtain a linear ordering of the nodes. If a cycle was detected within the graph, the corresponding tasks were considered to have the same level of subjectivity and were merged into a single meta- node before re-runing the algorithm. Our final task subjectivity ranking results are shown in Table 1. 4.3 Model Training Given a text classification task, following the pro- cedures outlined in Sections 3.1 and 3.2, 3,000 syn- thetic data points were generated for each candidate label under both zero-shot and few-shot settings. We then trained classification models using the real- world training data provided by the original dataset, the synthetic data generated under the zero-shot settings, and the synthetic data generated under the few-shot settings4, respectively. 
Specifically, we utilized the pre-trained BERT (Devlin et al., 2018) and RoBERTa (Liu et al., 2019) models from Hug- gingface’s transformers library (Wolf et al., 2020) as the encoders, and used the representation em- beddings from the last layer of these models as the input to our classification models. The classifica- tion model itself comprised a hidden layer of 768 units and an output layer, and it was fine-tuned with a learning rate of 5e − 5 and a batch size of 64. For datasets that provided official partitions for training and test sets, we directly evaluated the classifica- tion model’s performance on the test sets. Other- wise, we randomly divided the dataset into training (70%), validation (5%), and test (25%) sets5. Mod- els’ performance was evaluated via Macro-F1 and Accuracy scores, and they were computed by com- paring the model’s predictions with the gold labels provided in the test sets. To ensure the robustness of our results, all experiments were repeated three times, and the average performance across these repetitions was reported. 4.4 Evaluation Results Table 1 summarizes the comparative performance of classification models trained with different data. Below, we highlight a few key observations we get from this comparison. 4Under the few-shot setting, we randomly sampled 10% of the data points from the real-world training data provided in the original dataset as the example pool to guide the LLM’s synthetic data generation process, but only the sythetic data generated were used to train the models. 5To ensure a fair comparison, we maintained an equal size for both the real-world and synthetic training data by downsampling the dataset with a larger size. BERT RoBERTa Dataset Subjectivity Real-world data Zero-shot setting Few-shot setting Real-world data Zero-shot setting Few-shot setting AG Relation IMDB SMS spam ⋆ ⋆⋆ ⋆⋆⋆ ⋆⋆⋆⋆ Reddit emotion ⋆⋆⋆⋆⋆ Tweet irony ⋆⋆⋆⋆⋆ Tweet emotions ⋆⋆⋆⋆⋆ Sarcasm Financial Humor speech ⋆⋆⋆⋆⋆ ⋆⋆⋆⋆⋆ ⋆⋆⋆⋆⋆ Macro-F1 Accuracy Score Macro-F1 Accuracy Score Macro-F1 Accuracy Score Macro-F1 Accuracy Score Macro-F1 Accuracy Score Macro-F1 Accuracy Score 95.3% 98.6% 87.6% 97.2% 93.7% 72.2% 77.7% 89.9% 83.2% 97.0% 95.3% 98.6% 87.6% 98.8% 94.6% 73.9% 81.1% 90.3% 84.6% 97.0% 89.3% (-6.0%) 89.3% (-6.0%) 91.5% (-3.8%) 91.6% (-3.7%) 92.4% (-6.2%) 92.7% (-5.9%) 96.4% (-2.2%) 96.4% (-2.2%) 81.2% (-6.4%) 81.5% (-6.1%) 81.1% (-6.5%) 81.2% (-6.4%) 93.8% (-3.4%) 95.1% (-3.7%) 94.3% (-2.9%) 94.8% (-4.0%) 72.7% (-21.0%) 74.4% (-20.2%) 81.9% (-11.8%) 82.0% (-12.6%) 63.4% (-8.8%) 63.6% (-10.3%) 81.5% (+9.3%) 81.9% (+8.0%) 58.1% (-19.6%) 64.5% (-16.6%) 64.6% (-13.1%) 69.1% (-12.0%) 51.1% (-38.8%) 51.2% (-39.1%) 63.6% (-26.3%) 64.8% (-25.5%) 48.2% (-35.0%) 60.7% (-23.9%) 70.6% (-12.6%) 74.2% (-10.4%) 56.0% (-41.0%) 61.7% (-35.3%) 86.9% (-10.1%) 87.0% (-10.0%) 94.6% 97.0% 89.0% 97.3% 91.3% 74.0% 75.8% 91.8% 85.0% 96.7% 94.6% 96.9% 89.0% 98.8% 92.1% 75.5% 78.9% 92.0% 86.6% 96.7% 88.6% (-6.0%) 88.6% (-6.0%) 92.9% (-1.7%) 92.9% (-1.7%) 91.4% (-5.6%) 91.6% (-5.3%) 94.1% (-2.9%) 94.1% (-2.8%) 81.2% (-7.8%) 81.3% (-7.7%) 82.4% (-1.6%) 82.4% (-1.6%) 93.5% (-3.8%) 95.9% (-2.9%) 94.0% (-3.3%) 95.7% (-3.1%) 77.9% (-13.4%) 78.1% (-14.0%) 87.5% (-3.8%) 87.7% (-4.4%) 57.8% (-16.2%) 59.1% (-16.4%) 83.3% (+9.3%) 83.7% (+8.2%) 64.6% (-11.2%) 71.5% (-7.4%) 66.3% (-9.5%) 72.7% (-6.2%) 54.3% (-37.5%) 54.3% (-37.7%) 61.5% (-30.3%) 63.6% (-28.4%) 58.5% (-26.5%) 70.3% (-16.3%) 75.0% (-10.0%) 78.9% (-7.7%) 54.9% (-41.8%) 60.9% (-35.8%) 84.0% (-12.7%) 84.0% (-12.7%) Table 1: Comparing the 
performance of classification models trained on the LLM-generated synthetic data under the zero-shot or few-shot settings, with those trained with the original real-world data, in terms of Macro-F1 (%) and Accuracy Score (%). In the “Subjectivity” column, more "⋆" symbols indicate a higher level of task subjectivity. Models trained on the real-world data consis- tently outperform those trained on the synthetic data. Our results indicate that models trained on the original real-world data consistently outper- form their counterparts trained on the synthetic data generated under either zero-shot or few-shot set- tings, almost for every task. In particular, with the RoBERTa model, we observe that the average im- provements of the model trained on the real-world data over the models trained on zero-shot synthetic data and few-shot synthetic data are 16.9% and 6.7% in terms of Macro-F1, and 14.9% and 6.1% in terms of accuracy. Similar trends are observed with the BERT model as well. Guiding LLM with real-world data examples can boost the effectiveness of the synthetic data. We also observe that models trained on those syn- thetic data generated under the few-shot settings almost always outperform those trained on the syn- thetic data generated under the zero-shot settings. For instance, for the BERT model, we see an aver- age increase of 10.6% and 8.8% in Macro-F1 and accuracy scores, respectively, across the 10 tasks in the few-shot setting, as compared to the zero-shot setting. Similarly, with the RoBERTa model, there is an average increase of 10.3% in Macro-F1 and 8.9% in accuracy scores across the 10 tasks when the real-world data are used as examples for LLM to mimic in the synthetic data generation process. For more analysis of the few-shot synthetic data, please see App. B.2 and B.3. Synthetic data support more effective model training for tasks that are less subjective. Finally, we notice that for classification tasks with relatively low levels of subjectivity (e.g., those in the AG’s news, Relation classification, IMDB reviews, and SMS spam datasets), the performance difference between models trained on the synthetic data and those trained on the real-world data is remarkably small. However, for tasks with high subjectivity, (a) Remote Clique (b) Chamfer Distance Figure 1: Comparing the diversity of the real-world data and the synthetic data. the performance decrease resulted from the usage of the synthetic data is more significant—for in- stance, across the cluster of 6 tasks with the highest level of subjectivity in our evaluation, there is an average decrease of 27.4% and 24.2% in Macro-F1 and accuracy, respectively, comparing the BERT models trained on the zero-shot synthetic data with those trained on the real-world data. In other words, for text classification tasks that are highly objective, there is great potential in training high-performing models simply based on synthetic data generated by LLMs, but the same method falls short in gen- erating synthetic data that can effectively support model training for highly subjective classifications. 4.5 Exploratory Analysis: Data Diversity To explore the potential reasons underlying the model performance difference, we conducted an exploratory analysis on the diversity of the training data. Following Rhys Cox et al. 
(2021), we used the Remote Clique Score (i.e., the average mean distance of a data instance to other instances) and the Chamfer Distance Score (i.e., the average mini- mum distance of a data instance to other instances) to quantify the diversity of a set of data. For both metrics, higher values indicate greater data diver- sity. As shown in Figure 1, we find that in general, the real-world data appear to be more diverse than Dataset AG Relation IMDB SMS Spam Reddit Emotion Humor Speech Tweet Irony Sarcasm Tweet Emotions Finanical Average Agreement a 0.80 (4.2) 0.78 (4.5) 0.76 (7.3) 0.73 (8.5) 0.69 (6.6) 0.68 (7.1) 0.68 (6.7) 0.64 (7.7) 0.64 (4.6) 0.57 (7.6) Krippendorff’s α Subjectivity Level 0.51 ⋆ 0.43 ⋆⋆ 0.19 ⋆⋆⋆ 0.27 ⋆⋆⋆⋆ 0.30 ⋆⋆⋆⋆⋆ 0.06 ⋆⋆⋆⋆⋆ 0.03 ⋆⋆⋆⋆⋆ 0.01 ⋆⋆⋆⋆⋆ 0.17 ⋆⋆⋆⋆⋆ -0.03 ⋆⋆⋆⋆⋆ Table 2: The average instance-level annotation agreement for different types of tasks, alongside the corresponding task-level subjectivity. Numbers in parentheses in the first row represent the average number of annotations received per task instance. Higher values for both the average agreement a and Krippendorff’s α indicate a higher degree inter-annotator agreement. the synthetic data generated under the few-shot set- tings, which in turn seem to be more diverse than the zero-shot synthetic data. This might partially explain why models trained on the real-world data and the few-shot synthetic data tend to outperform those trained on the zero-shot synthetic data. In addition, we also notice that compared to that on the low subjectivity tasks (i.e., AG, Relation, IMDB, Spam), the differences in data diversity between the real-world data and the synthetic data seem to be more salient on the high subjectivity tasks (i.e., the other 6 tasks), especially in terms of the Chamfer Distance Score. In fact, a t-test shows that the decrease of the Chamfer Distance Score in the zero-shot synthetic data compared to the real data is significantly larger for the high subjectivity tasks than for the low subjectivity tasks (p < 0.01). This suggests that for tasks with high subjectivity, such as interpreting humor or sarcasm in language, LLMs may not be able to generate data instances that can cover the full spectrum of real- life scenarios, which may limit the performance of models trained on the synthetic data. 5 Evaluation II: Comparison Across Different Task Instances In the previous section, we have discovered that the subjectivity of a task can adversely affect the performance of classification models trained on the LLM-generated synthetic data. However, even for the same type of task, the classification for each in- dividual task instance may exhibits different levels of subjectivity as well. Naturally, one may won- der whether models trained on the LLM-generated synthetic data may show different performance on task instances of different subjectivity. We aim to explore the answers to this question in this section. 5.1 Instance-level Subjectivity Determination Given a text classification task and a specific text in- stance, we consider the degree of agreement among annotators on the label of this text as a proxy for the subjectivity of this instance—a lower level of agreement means that annotators hold more diver- gent views, hence the task may have a higher level of subjectivity. Thus, to formally quantify the sub- jectivity of different instances for different tasks, we again conduct a crowdsourced study to collect instance-level annotations. Study procedure. 
We again considered the 10 types of text classification tasks as in the first evaluation study. For each type of task, we randomly sampled 50 text instances per category from the test set to compose our "evaluation dataset" for that task. We then recruited U.S. workers from MTurk to complete annotation tasks for those instances in our evaluation dataset. Specifically, each worker was randomly assigned to one type of text classification task. After going through a brief instruction of the assigned task, the worker was asked to complete 20 classification tasks of the assigned type to get a payment of $1.2, where the texts presented in these 20 tasks were randomly sampled from the evaluation dataset for the assigned type of task. Again, we included two attention check questions in our study to filter out inattentive workers. We ensured that each task instance received at least three annotations from unique MTurk workers.

Computing instance subjectivity. Based on annotations we obtained from attentive workers, we quantify the subjectivity level of each task instance using the fraction of annotators who agree with the majority label for the task instance, that is:

ai = ( max_{y ∈ Y} Σ_{k=1}^{Ki} 1(r_i^k = y) ) / Ki    (1)

where Y = {1, · · · , Y} is the set of all possible labels, Ki is the total number of annotators who labeled instance i, and r_i^k is the k-th annotator's annotation on instance i. Intuitively, a lower value of ai suggests that consensus is less likely to be reached among annotators on instance i, thus instance i may have a higher level of subjectivity. In Table 2, we report the average values of ai (i.e., a) for instances in the evaluation datasets of different types of tasks,
Specifi- cally, given a classification task, we trained a BERT model using the zero-shot synthetic data and com- puted its accuracy on the subset of task instances in the evaluation dataset whose instance-level an- notation agreement (i.e., ai) exceeds a threshold γ, and we repeated this computation for many times as we varied the value of γ. Figure 2 illustrates how the model accuracy varies with the instance-level annotation agreement threshold γ for different types of tasks. For most tasks (except for the tasks in the Scarcasm News and Finanical Phrasebank datasets), we observe a strong monotonically increasing relationship be- tween γ and the model accuracy, with correlations between them (i.e., β) being positive and values of the Spearman’s rank correlation coefficient ρ often exceeding 0.85. Since increasing the instance-level annotation agreement threshold γ effectively filters out task instances with high subjectivity, this ob- servation suggests that models trained on synthetic data indeed tend to have varying performance on different instances—even within the same type of tasks, these models still perform better on those task instances with low subjectivity. As a comparison, we also investigate into whether models trained on the real-world data ex- hibit similar behaviors. The detailed results are reported in App. C. On the high level, while we also observe the trend that these models’ perfor- mance appears to increase as the instance-level task subjectivity decreases, such relationship is usually weaker than that illustrated in the models trained on the synthetic data (e.g., β and ρ are smaller). 6 Conclusions and Discussions In this paper, we present an initial exploration into factors that moderate the effectiveness of LLM- generated synthetic data for facilitating the training of text classification models. Our results show that the performance of the models trained on synthetic data decreases both for classification tasks with higher levels of subjectivity and on task instances with higher subjectivity. In this section, we provide some potential explanations for the observations of our study, and discuss the implications, limitations, and future directions of our work. 6.1 Why subjectivity adversely impacts the effectiveness of the synthetic data? We provide a few explanations for why task sub- jectivity is found to be negatively associated with the performance of models trained on the LLM- generated synthetic data. First, highly subjective tasks often require a deep understanding of nuanced human emotions and contextual subtleties, as well as the ability to discern and accurately interpret dif- ferent perspectives. As such, LLMs may encounter limitations in generating data that can capture the extensive range and complexity of real-life use of language. Indeed, as shown in our exploratory analysis in Section 4.5, the diversity of the LLM- generated synthetic data appears to be particularly limited on tasks with high subjectivity, when com- pared to the real-world data. This implies that one potential way to improve the effectiveness of syn- thetic data on high subjectivity tasks is to increase the data diversity and ensure the synthetic data can better reflect real-world data distributions. Second, specific to the relationship between the instance-level subjectivity and model performance, we note that the “gold label” of a task instance is usually decided by a majority vote within a group of annotators. 
This means that the gold label may not represent the perspective of each individ- ual (Goyal et al., 2022), and they are sometimes “biased” themselves depending on the annotator decomposition (Li et al., 2022). Thus, it may be challenging for LLMs to generate synthetic data to recover such potentially biased “majority view,” especially if the LLMs are trained to maintain neu- trality. Alternatively, one may ask for subjective task instances that humans can hardly reach any consensus on, whether the “gold label” is really the only “correct” label? If not, a rethinking of how to develop and evaluate models for these task instances is urgently needed. 6.2 Explaining a few exceptions In Table 1, we surprisingly find that on the Tweet irony detection tasks, models trained on the few- shot synthetic data even outperform models trained on the real-world data. One plausible explanation is that the nature of generating irony texts for so- cial media involves a creative writing task with few language formality constraints, and recent research suggests that LLMs have the potential to exhibit comparable creativity with human writers in such task (Franceschelli and Musolesi, 2023). Another exception we find is in Section 5.2—for the Fi- nancial Phrasebank and Scarcasm datasets, unlike other tasks, the effectiveness of the models trained on the synthetic data do not vary much with the instance-level task subjectivity. We conjecture that this can be caused by some task-specific proper- ties. On the Financial Phasebank dataset, accurate sentiment analysis requires the understanding of specialized terminology related to finance. Simi- larly, the Sarcasm detection task aims at identifying sarcasm in news headlines from selected sources and requires the comprehension on political top- ics. Thus, on these tasks, LLMs might not be fully equipped with the necessary domain knowledge to create effective synthetic data under the zero- shot setting. In fact, as shown in Figure 2, models trained on the zero-shot synthetic data have very low performance on these two datasets, regardless of the subjectivity levels of task instances. 6.3 Limitations and future work We acknowledge that task subjectivity may not be the only factor that moderates the effectiveness of the LLM-generated synthetic data. Future studies can look into the potential moderating role of other factors, such as language formality and the require- ment for domain-specific knowledge. Our reliance on crowd workers in determining task subjectivity may introduce some variability due to their lack of linguistic expertise. Our evaluation is also based on the GPT-3.5-Turbo model only. It is important to note that the conclusions we get here may not generalize to other LLMs (e.g., the more advanced GPT-4), considering the continuous improvements of LLMs in generating human-like texts. Our findings suggest that incorporating real- world data examples into the synthetic data genera- tion process can increase the data diversity and boost the performance of the resulting models. Thus, future work can explore strategies that lever- age human intelligence, such as feedback or direct intervention in the generation process, to further enrich the diversity of synthetic data (Chung et al., 2023) and to identify the most “informative” type of data instance to generate. 
Finally, the signifi- cant correlation between the subjectivity of tasks or instances and the performance of models trained on synthetic data also suggests the potential to uti- lize the performance of such models as a proxy for approximating task or instance subjectivity, or to estimate the reliability of gold labels. References Karan Aggarwal, Henry Jin, and Aitzaz Ahmad. 2022. Entity-controlled synthetic text generation using con- textual question and answering with pre-trained lan- guage models. all MiniLM-L6-v2. 2023. sentence-transformers/all- minilm-l6-v2. Accessed on Hugging Face Model Hub. Available from: https://huggingface.co/ sentence-transformers/all-MiniLM-L6-v2. Tiago A. Almeida, Jose Maria Gomez Hidalgo, and Akebo Yamakami. 2011. Contributions to the study of sms spam filtering: New collection and results. In Proceedings of the 2011 ACM Symposium on Docu- ment Engineering (DOCENG’11). Issa Annamoradnejad and Gohar Zoghi. 2020. Colbert: Using bert sentence embedding for humor detection. arXiv preprint arXiv:2004.12765. Emile Benveniste. 1971. Subjectivity in language. Problems in general linguistics, 1:223–30. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understand- ing. arXiv preprint arXiv:1810.04805. Yao Dou, Maxwell Forbes, Rik Koncel-Kedziorski, Noah A Smith, and Yejin Choi. 2021. Is gpt-3 text indistinguishable from human text? scarecrow: A framework for scrutinizing machine text. arXiv preprint arXiv:2107.01294. Paul Ekman et al. 1999. Basic emotions. Handbook of cognition and emotion, 98(45-60):16. Giorgio Franceschelli and Mirco Musolesi. 2023. On arXiv the creativity of large language models. preprint arXiv:2304.00008. Jiahui Gao, Renjie Pi, LIN Yong, Hang Xu, Jiacheng Ye, Zhiyong Wu, WEIZHONG ZHANG, Xiaodan Liang, Zhenguo Li, and Lingpeng Kong. 2022. Self- guided noise-free data generation for efficient zero- shot learning. In The Eleventh International Confer- ence on Learning Representations. Victor Besnier, Himalaya Jain, Andrei Bursuc, Matthieu Cord, and Patrick Pérez. 2020. This dataset does not exist: training models from generated images. In ICASSP 2020-2020 IEEE International Confer- ence on Acoustics, Speech and Signal Processing (ICASSP), pages 1–5. IEEE. Jiahui Gao, Renjie Pi, LIN Yong, Hang Xu, Jiacheng Ye, Zhiyong Wu, WEIZHONG ZHANG, Xiaodan Liang, Zhenguo Li, and Lingpeng Kong. 2023. Self- guided noise-free data generation for efficient zero- shot learning. In The Eleventh International Confer- ence on Learning Representations. Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. Advances in neural information processing systems, 33:1877–1901. Maximillian Chen, Alexandros Papangelis, Chenyang Tao, Andy Rosenbaum, Seokhwan Kim, Yang Liu, Zhou Yu, and Dilek Hakkani-Tur. 2022. Weakly supervised data augmentation through prompt- arXiv preprint ing for dialogue understanding. arXiv:2210.14169. John Joon Young Chung, Ece Kamar, and Saleema Amershi. 2023. Increasing diversity while main- taining accuracy: Text data generation with large language models and human interventions. arXiv preprint arXiv:2306.04140. Elizabeth Clark, Tal August, Sofia Serrano, Nikita Haduong, Suchin Gururangan, and Noah A Smith. 2021. All that’s’ human’is not gold: Evaluating hu- man evaluation of generated text. 
arXiv preprint arXiv:2107.00061. Thomas H Cormen, Charles E Leiserson, Ronald L Introduction to Rivest, and Clifford Stein. 2022. algorithms. MIT press. Tianyu Gao, Xu Han, Hao Zhu, Zhiyuan Liu, Peng Li, Maosong Sun, and Jie Zhou. 2019. FewRel 2.0: Towards more challenging few-shot relation classi- fication. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natu- ral Language Processing (EMNLP-IJCNLP), pages 6251–6256, Hong Kong, China. Association for Com- putational Linguistics. Mitchell L Gordon, Michelle S Lam, Joon Sung Park, Kayur Patel, Jeff Hancock, Tatsunori Hashimoto, and Michael S Bernstein. 2022. Jury learning: Integrat- ing dissenting voices into machine learning models. In Proceedings of the 2022 CHI Conference on Hu- man Factors in Computing Systems, pages 1–19. Nitesh Goyal, Ian D Kivlichan, Rachel Rosen, and Lucy Vasserman. 2022. Is your toxicity my toxicity? ex- ploring the impact of rater identity on toxicity annota- tion. Proceedings of the ACM on Human-Computer Interaction, 6(CSCW2):1–28. Perttu Hämäläinen, Mikke Tavast, and Anton Kunnari. 2023. Evaluating large language models in gener- ating synthetic hci research data: A case study. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, CHI ’23, New York, NY, USA. Association for Computing Machinery. Dorottya Demszky, Dana Movshovitz-Attias, Jeongwoo Ko, Alan Cowen, Gaurav Nemade, and Sujith Ravi. 2020. GoEmotions: A Dataset of Fine-Grained Emo- tions. In 58th Annual Meeting of the Association for Computational Linguistics (ACL). Thomas Hartvigsen, Saadia Gabriel, Hamid Palangi, Maarten Sap, Dipankar Ray, and Ece Kamar. 2022. Toxigen: A large-scale machine-generated dataset for adversarial and implicit hate speech detection. arXiv preprint arXiv:2203.09509. Ruifei He, Shuyang Sun, Xin Yu, Chuhui Xue, Wenqing Zhang, Philip Torr, Song Bai, and Xiaojuan Qi. 2022. Is synthetic data from generative models ready for im- age recognition? arXiv preprint arXiv:2210.07574. Nitin Jindal and Bing Liu. 2007. Review spam detection. In Proceedings of the 16th international conference on World Wide Web, pages 1189–1190. Tero Karras, Samuli Laine, and Timo Aila. 2019. A style-based generator architecture for generative ad- versarial networks. In Proceedings of the IEEE/CVF conference on computer vision and pattern recogni- tion, pages 4401–4410. Varun Kumar, Ashutosh Choudhary, and Eunah Cho. 2020. Data augmentation using pre-trained trans- former models. arXiv preprint arXiv:2003.02245. Zhuoyan Li, Zhuoran Lu, and Ming Yin. 2022. Towards better detection of biased language with scarce, noisy, and biased annotations. In Proceedings of the 2022 AAAI/ACM Conference on AI, Ethics, and Society, pages 411–423. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Man- dar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining ap- proach. arXiv preprint arXiv:1907.11692. Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts. 2011. Learning word vectors for sentiment analysis. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 142–150, Portland, Oregon, USA. Association for Computational Lin- guistics. P. Malo, A. Sinha, P. Korhonen, J. Wallenius, and P. Takala. 2014. Good debt or bad debt: Detecting se- mantic orientations in economic texts. 
Journal of the Association for Information Science and Technology, 65. Andrew M Mcnutt, Chenglong Wang, Robert A De- line, and Steven M. Drucker. 2023. On the design of ai-powered code assistants for notebooks. In Pro- ceedings of the 2023 CHI Conference on Human Factors in Computing Systems, CHI ’23, New York, NY, USA. Association for Computing Machinery. Yu Meng, Jiaxin Huang, Yu Zhang, and Jiawei Han. 2022. Generating training data with language mod- els: Towards zero-shot language understanding. Ad- vances in Neural Information Processing Systems, 35:462–477. Rishabh Misra and Prahal Arora. 2023. Sarcasm detec- tion using news headlines dataset. AI Open, 4:13–18. Rishabh Misra and Jigyasa Grover. 2021. Sculpting Data for ML: The first act of Machine Learning. Saif Mohammad, Felipe Bravo-Marquez, Mohammad Salameh, and Svetlana Kiritchenko. 2018. Semeval- 2018 task 1: Affect in tweets. In Proceedings of the 12th international workshop on semantic evaluation, pages 1–17. Alex Nichol, Prafulla Dhariwal, Aditya Ramesh, Pranav Shyam, Pamela Mishkin, Bob McGrew, Ilya Sutskever, and Mark Chen. 2021. Glide: To- wards photorealistic image generation and editing with text-guided diffusion models. arXiv preprint arXiv:2112.10741. OpenAI. 2023. Gpt-4 technical report. Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. 2019. Language models are unsupervised multitask learners. OpenAI blog, 1(8):9. Samuel Rhys Cox, Yunlong Wang, Ashraf Abdul, Chris- tian von der Weth, and Brian Y. Lim. 2021. Directed diversity: Leveraging language embedding distances for collective creativity in crowd ideation. In Pro- ceedings of the 2021 CHI Conference on Human Factors in Computing Systems, CHI ’21, New York, NY, USA. Association for Computing Machinery. Gaurav Sahu, Pau Rodriguez, Issam H. Laradji, Parmida Atighehchian, David Vazquez, and Dzmitry Bah- danau. 2022. Data augmentation for intent classi- fication with off-the-shelf large language models. Maarten Sap, Swabha Swayamdipta, Laura Vianna, Xuhui Zhou, Yejin Choi, and Noah A Smith. 2021. Annotators with attitudes: How annotator beliefs and identities bias toxic language detection. arXiv preprint arXiv:2111.07997. Ruixiang Tang, Xiaotian Han, Xiaoqian Jiang, and Xia Hu. 2023. Does synthetic data generation of arXiv preprint llms help clinical text mining? arXiv:2303.04360. Cynthia Van Hee, Els Lefever, and Véronique Hoste. 2018. Semeval-2018 task 3: Irony detection in en- glish tweets. In Proceedings of The 12th Interna- tional Workshop on Semantic Evaluation, pages 39– 50. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. Advances in neural information processing systems, 30. Thomas C Veatch. 1998. A theory of humor. Zirui Wang, Adams Wei Yu, Orhan Firat, and Yuan Cao. 2021. Towards zero-label language learning. arXiv preprint arXiv:2109.09193. Jason Wei, Maarten Bosma, Vincent Y Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, An- drew M Dai, and Quoc V Le. 2021. Finetuned lan- guage models are zero-shot learners. arXiv preprint arXiv:2109.01652. Janyce Wiebe, Theresa Wilson, Rebecca Bruce, Matthew Bell, and Melanie Martin. 2004. Learn- ing subjective language. Computational linguistics, 30(3):277–308. Michael Wiegand, Josef Ruppenhofer, and Thomas Kleinbauer. 2019. Detection of abusive language: the problem of biased datasets. 
In Proceedings of the 2019 conference of the North American Chap- ter of the Association for Computational Linguistics: human language technologies, volume 1 (long and short papers), pages 602–608. Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pier- ric Cistac, Tim Rault, Remi Louf, Morgan Funtow- icz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Trans- formers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38–45, Online. Association for Computational Linguistics. Jiacheng Ye, Jiahui Gao, Qintong Li, Hang Xu, Jiangtao Feng, Zhiyong Wu, Tao Yu, and Lingpeng Kong. 2022. Zerogen: Efficient zero-shot learning via dataset generation. arXiv preprint arXiv:2202.07922. Kang Min Yoo, Dongju Park, Jaewook Kang, Sang- Woo Lee, and Woomyeong Park. 2021. Gpt3mix: Leveraging large-scale language models for text aug- mentation. arXiv preprint arXiv:2104.08826. Xiang Zhang, Junbo Jake Zhao, and Yann LeCun. 2015. Character-level convolutional networks for text clas- sification. In NIPS. Yuxuan Zhang, Huan Ling, Jun Gao, Kangxue Yin, Jean- Francois Lafleche, Adela Barriuso, Antonio Torralba, and Sanja Fidler. 2021. Datasetgan: Efficient labeled data factory with minimal human effort. Jiawei Zhou, Yixuan Zhang, Qianni Luo, Andrea G Parker, and Munmun De Choudhury. 2023. Synthetic lies: Understanding ai-generated misinformation and evaluating algorithmic and human solutions. CHI ’23, New York, NY, USA. Association for Computing Machinery. A Appendices A.1 Dataset and Task Descriptions AG’s News: This task involves classifying news articles from the subset of AG’s News Topic Classification dataset into one of thee categories: World, Sports and Sci/Tech. The AG’s News Topic Classification dataset, collected from over 2,000 news sources by the academic news search en- gine, ComeToMyHead, consists of a training set of 120,000 instances and a test set of 7,600 instances. Relation Classification: This task requires the identification of the relationships between two en- tities within a given sentence. In this study, we focus on four relations: ‘country’, ‘league’, ‘screen- writer’, and ‘tributary’. The dataset comprises English text sourced from Wikipedia and supple- mented with crowdsourced English annotations. Each relation has 700 instances. As the dataset does not provide an official division into train, val- idation, and test sets, we randomly allocated the dataset into train (70%), validation (5%), and test (25%) sets. In our evaluation, this process was re- peated three times, with the average performance reported. IMDB Reviews: This task requires classifying the sentiment of movie reviews from the IMDB platform into one of two categories: positive (pos) or negative (neg). The dataset comprises 50,000 movie reviews evenly split, with 25,000 designated for training and 25,000 for testing. SMS Message Spam: This task involves the clas- sification of SMS messages from the SMS Spam Collection v.1 dataset into either ‘ham’ (legitimate) or ‘spam’ categories. The training dataset contains 5,574 English messages, each labeled according to its legitimacy. 
As the dataset does not provide an official division into train, validation, and test sets, we randomly divided the dataset into train (70%), validation (5%), and test (25%) sets. In our evalu- ation, this process was repeated three times, with the average performance reported. Financial Phrasebank: This task entails the clas- sification of finance-related sentences into one of three categories—positive, negative, or neutral— based on the sentiment expressed by the sentence. The dataset comprises 4,840 English sentences sourced from financial news articles. As the dataset does not provide an official division into train, val- idation, and test sets, we randomly allocated the dataset into train (70%), validation (5%), and test (25%) sets. In our evaluation, this process was re- peated three times, with the average performance reported. Reddit Emotion: The Reddit Emotion is the sub- set of the Go Emotions dataset. The Go Emotions dataset is comprised of 58,009 comments collected from Reddit, and each comment has been annotated with respect to 28 emotion categories. In this task, we focus on three basic emotions (Ekman et al., 1999): joy, sadness, and surprise. Tweet Irony Speech: The task involves classifying tweets into two categories: irony, non-irony. The dataset, which is composed of English-language tweets, has been manually annotated for these spe- cific categories. The distribution of the data in- cludes a training set of 2,862 instances and a test set of 784 instances. Tweet Emotion: The task involves classifying tweets into four emotion categories: anger, joy, optimism, sadness. Each tweet in this English- language dataset has been annotated by human re- viewers with respect to these emotional categories. The dataset is partitioned into a training set of 3,257 instances and a test set of 1,421 instances. Sarcasm News Headlines: This task requires dis- tinguishing between sarcastic and non-sarcastic news headlines. The dataset comprises 26,709 headlines from two news sources: TheOnion, rep- resenting sarcasm, and HuffPost, representing non- sarcasm. As the dataset does not provide an official division into train, validation, and test sets, we randomly allocated the dataset into train (70%), validation (5%), and test (25%) sets. In our evalu- ation, this process was repeated three times, with the average performance reported. Humor Speech Detection: This task involves dis- cerning humorous from non-humorous content for short texts. The dataset, specifically curated for hu- mor detection, is composed of 200,000 instances, balanced between humorous and non-humorous data. It is divided into a training set of 160,000 instances and a test set of 40,000 instances. B Evaluation I: Comparison Across Different Types of Tasks (Additional Results) B.1 Convergence Analysis Figure B.1 illustrates the training curves of classifi- cation models across the 10 types of tasks. We find that compared to the training curves derived from the real-world data, models trained on the synthetic data exhibit a faster convergence rate and a greater (a) AG’s News (b) Relation (c) IMDB Reviews (d) SMS Spam (e) Financial Phrasebank (f) Reddit Emotion (g) Sarcasm News (h) Humor Detection (i) Tweet Emotions (j) Tweet Irony Speech Figure B.1: The training curves for classification models trained with the real-world data, the zero-shot synthetic data, and the few-shot synthetic data. 
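Throughout Appendices B and C, "model performance" refers to encoder classifiers (BERT or RoBERTa) fine-tuned on real or synthetic texts and scored with Macro-F1 and accuracy. The sketch below illustrates one way such an evaluation could be run; it is a minimal reconstruction rather than the authors' released code, and the checkpoint name, training hyperparameters, and data variables (train_texts, train_labels, test_texts, test_labels) are placeholders for splits prepared as described above.

```python
# Minimal sketch (not the authors' code): fine-tune a BERT classifier on labeled
# texts (real or LLM-generated) and report Macro-F1 / accuracy on a test split.
import torch
from torch.utils.data import DataLoader, TensorDataset
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from sklearn.metrics import f1_score, accuracy_score

def encode(texts, labels, tokenizer, max_len=128):
    enc = tokenizer(texts, truncation=True, padding="max_length",
                    max_length=max_len, return_tensors="pt")
    return TensorDataset(enc["input_ids"], enc["attention_mask"],
                         torch.tensor(labels))

def train_and_evaluate(train_texts, train_labels, test_texts, test_labels,
                       num_labels, model_name="bert-base-uncased",
                       epochs=3, lr=2e-5, batch_size=32, device="cuda"):
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForSequenceClassification.from_pretrained(
        model_name, num_labels=num_labels).to(device)

    train_loader = DataLoader(encode(train_texts, train_labels, tokenizer),
                              batch_size=batch_size, shuffle=True)
    optimizer = torch.optim.AdamW(model.parameters(), lr=lr)

    model.train()
    for _ in range(epochs):
        for input_ids, attention_mask, labels in train_loader:
            optimizer.zero_grad()
            out = model(input_ids=input_ids.to(device),
                        attention_mask=attention_mask.to(device),
                        labels=labels.to(device))
            out.loss.backward()
            optimizer.step()

    # Evaluation with the metrics reported in Tables B.1-B.5.
    model.eval()
    test_loader = DataLoader(encode(test_texts, test_labels, tokenizer),
                             batch_size=batch_size)
    preds, golds = [], []
    with torch.no_grad():
        for input_ids, attention_mask, labels in test_loader:
            logits = model(input_ids=input_ids.to(device),
                           attention_mask=attention_mask.to(device)).logits
            preds.extend(logits.argmax(dim=-1).cpu().tolist())
            golds.extend(labels.tolist())
    return f1_score(golds, preds, average="macro"), accuracy_score(golds, preds)
```

For datasets without an official split, this routine would simply be repeated over the three random 70/5/25 partitions described above and the scores averaged.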
Task real BERT synthetic real + synthetic real RoBERTa synthetic real+ synthetic Macro-F1 Accuracy Score Macro-F1 Accuracy Score Macro-F1 Accuracy Score Macro-F1 Accuracy Score Macro-F1 Accuracy Score Macro-F1 Accuracy Score AG Relation IMDB SMS Spam Reddit Emotion Tweet Irony Tweet Emotion Sarcasm Financial Humor Speech 93.1% 96.8% 77.4% 98.2% 92.5% 67.3% 64.5% 76.1% 72.5% 94.8% 93.2% 96.8% 78.6% 98.2% 92.5% 68.2% 64.5% 78.3% 75.1% 94.7% 91.5% (-1.6%) 91.6% (-1.6%) 93.1% (+0.0%) 93.1% (-0.1%) 96.4% (-0.4%) 96.4% (-0.4%) 96.7% (-0.1%) 96.8% (+0.0%) 81.1% (+3.7%) 81.2% (+2.6%) 80.2% (+2.8%) 80.1% (+1.5%) 94.3% (-3.9%) 94.8% (-3.4%) 98.1% (-0.1%) 98.2% (+0.0%) 81.9% (-10.6%) 82.0% (-10.5%) 91.8% (-0.7%) 91.8% (-0.7%) 81.5% (+14.2%) 81.9% (+13.7%) 81.2% (+13.9%) 81.5% (+13.3%) 64.6% (+0.1%) 69.1% (+4.6%) 70.4% (+5.9%) 70.5% (+6.0%) 63.6% (-12.5%) 64.8% (-13.5%) 77.5% (+1.4%) 76.4% (-1.9%) 70.6% (-1.9%) 74.2% (-0.9%) 74.6% (+2.1%) 76.3% (+1.2%) 86.9% (-7.9%) 87.0% (-7.7%) 93.3% (-1.5%) 93.3% (-1.4%) 93.6% 97.6% 75.7% 98.1% 91.7% 66.4% 72.2% 72.4% 76.9% 95.3% 93.6% 97.6% 76.1% 98.1% 91.8% 67.2% 72.5% 72.5% 78.2% 95.3% 92.9% (-0.7%) 92.9% (-0.7%) 93.4% (-0.2%) 93.5% (-0.1%) 94.1% (-3.5%) 94.1% (-3.5%) 97.1% (-0.5%) 97.3% (-0.3%) 82.4% (+6.7%) 82.4% (+6.3%) 81.0% (+5.3%) 81.1% (+5.0%) 94.0% (-4.1%) 95.7% (-2.4%) 98.1% (+0.0%) 98.1% (+0.0%) 87.5% (-4.2%) 87.7% (-4.1%) 90.4% (-1.3%) 90.8% (-1.0%) 83.3% (+16.9%) 83.7% (+16.5%) 80.8% (+14.4%) 81.3% (+14.1%) 66.3% (-5.9%) 72.7% (+0.2%) 73.4% (+1.2%) 73.5% (+1.0%) 61.5% (-10.9%) 63.6% (-8.9%) 72.9% (+0.5%) 73.2% (+0.7%) 75.0% (-1.9%) 78.9% (+0.7%) 78.4% (+1.5%) 80.1% (+1.9%) 84.0% (-11.3%) 84.0% (-11.3%) 94.6% (-0.7%) 94.6% (-0.7%) Table B.1: Comparing the performance of classification models trained using three types of data: a small amount of the real-world data used as the examples for guiding LLM in synthetic data generation (i.e., “real”), few- shot synthetic data generated by the LLM (i.e., “synthetic”), and a combination of both (“real+synthetic”). The performance is measured in terms of Macro-F1 (%) and Accuracy Score (%). propensity to overfit. This indicates that under both zero-shot and few-shot settings, the synthetic data generated by the LLM may lack a degree of diver- sity and falls short in fully capturing the complex patterns found in the real world language contexts. B.2 Potential of Few-shot Synthetic Data for Data Augmentation In the main text, the model performance we report for the “few-shot synthetic data” are based on mod- els that are trained only on the synthetic data. As we assume that a small amount of real-world data are available under the few-shot data generation setting, a natural question to ask is whether the few-shot synthetic data can be used to augment the real-world data (which are used as the exam- ples in the synthetic data generation process) and improve the model performance. Answering this question, Table B.1 compares the performance of classification models trained only on the limited set of real-world data (i.e., those used as example to guide LLM in generating synthetic data), only on the few-shot synthetic data generated, and on the combination of both data. We find that the comparison between the performance of models trained exclusively on the limited real-world data and models trained exclusively on few-shot syn- thetic data is task-dependent. 
However, when the few-shot synthetic data is combined with the small set of real-world data, the resulting model can out- perform the model trained only on the real-world data for many tasks. This highlights the potential of the few-shot synthetic data for data augmentation. B.3 Similarity between the Synthetic Data and the Real Data In the few-shot setting, we utilized real-world data examples to guide the generation of synthetic data. To quantify the similarity between the real-world data examples and the few-shot synthetic data gen- erated, we employed a pre-trained Sentence Trans- former model (all MiniLM-L6-v2, 2023) to convert texts into vector embeddings. We then computed the cosine similarity between the embeddings of Dataset AG News Relation IMDB Spam Financial p-value p < 0.001 p < 0.001 p < 0.1 p < 0.001 p < 0.001 Reddit Emotion p < 0.001 Sarcasm Humor p < 0.001 p < 0.001 Tweet Emotion p < 0.001 Tweet Irony p < 0.001 Figure B.2: Average top 5 cosine similarity between the real and synthetic data Table B.2: T-test results for the similarity comparison. Dataset Real data Zero-shot data BBC news Amazon review SST-2 Yelp 94.3 93.6 87.8 91.2 89.2 86.4 91.8 87.7 real-world examples and the embeddings of the the synthetic texts. The consine similarity metric ranges from -1 to 1, and we rescaled it to the in- terval of [0, 1], with 1 representing the highest level of similarity. Then, for each real-world ex- ample, we obtained its mean similarity with the top 5 most similar synthetic texts in the synthetic data and then computed the average mean simi- larity scores across all real-world examples within each type of classification tasks. As a reference, we also conducted the same computation between the real-world examples and the synthetic data gener- ated under the zero-shot settings, and results of the similarity comparisons are shown in Figure B.2. Visually, we find a consistent trend that the few- shot synthetic data has a higher level of similarity with the real-world examples compared to the zero- shot synthetic data. We then performed t-tests on each classification task to determine whether the difference of the average cosine similarity scores for the zero-shot and few-shot synthetic data is significant. The results are shown in Table B.2, which indicates that the difference is statistically significant for all but the IMDB review classifica- tion task. In other words, the few-shot synthetic data is more similar to the real-world data than the zero-shot synthetic data, which may partly explains why models trained on the few-shot synthetic data tend to outperform models trained on the zero-shot synthetic data. Table B.3: Comparing the performance of classification models trained on the LLM-generated synthetic data under the zero-shot with those trained with the original real-world data, in terms of Macro-F1 (%) B.4 Additional Results of Zero-shot Synthetic Data for a few More “less subjective” Tasks To validate our observations regarding “subjectiv- ity” in the data, we conducted additional experi- ments on a few more datasets which represent less subjective text classification tasks: the BBC News dataset, SST-2 movie review, Amazon US review, and Yelp review. We compared the performance of BERT models trained on real data with those trained on zero-shot synthetic data. As indicated in Table B.3, the average performance difference between real-world data and zero-shot synthetic data is only 4.2%. 
This gap is notably smaller than what is observed in tasks with greater subjectivity, reinforcing the finding that the subjectivity of a task can indeed diminish the effectiveness of synthetic data. B.5 Additional Results of More LLMs To examine whether our findings hold true for decoder-based models as well as models that are reasonably large, we conducted the same evaluation studies using the GPT2-large (774M) and Llama2 (7B) models. We conducted this evaluation on 6 selected datasets from the entire set of 10 datasets Dataset Subjectivity Level Real data GPT2-Large Llama 2 GPT-3.5 turbo AG IMDB SMS Tweet Emotion Humor Speech Tweet Irony ⋆ 95.3 86.5 88.7 89.3 ⋆⋆⋆⋆⋆ 77.7 52.2 59.1 58.5 ⋆⋆⋆⋆⋆ 97.0 51.5 57.2 56.0 ⋆⋆⋆⋆⋆ 72.2 60.8 63.1 63.4 ⋆⋆⋆⋆ 97.2 86.4 88.5 93.8 ⋆⋆⋆ 87.6 80.9 82.4 81.2 Table B.4: Comparing the performance of Bert classification models trained on synthetic data generated by various LLMs within a zero-shot setting using Macro-F1 (%) as the metric. Dataset Subjectivity Level Real data Direct Prompt Zero-shot AG IMDB SMS Tweet Emotion Humor Speech Tweet Irony ⋆ 95.3 86.5 89.3 ⋆⋆⋆⋆⋆ 97.0 59.2 56.0 ⋆⋆⋆⋆⋆ 77.7 54.3 58.5 ⋆⋆⋆⋆⋆ 72.2 61.1 63.4 ⋆⋆⋆⋆ 97.2 89.4 93.8 ⋆⋆⋆ 87.6 82.8 81.2 Table B.5: Performance comparisons in terms of Macro-F1 (%) between “direct prompt” and “zero-shot data generation” using GPT-3.5 turbo. For the zero-shot synthetica data and real data, we adopted the Bert model as the base for classification. zero-shot synthetic data, the performance of mod- els trained on the real-world data is less affected by the subjectivity of the task instance (i.e., β and ρ are smaller), except for that on the Scarcasm News and Financial Phrasebank datasets. D Additional Details on the Generation of Synthetic Data The prompts we used to generate synthetic data un- der both the zero-shot setting and the few-shot set- ting are shown in the Table D.1 and the Table D.2. which covered different levels of subjectivity. As indicated in Table B.4, we observed that models trained on the LLM-generated synthetic data only exhibits slight variations among different LLMs for each respective task. The overall trend remains consistent: the effectiveness of synthetic data tends to be higher for tasks with lower subjectivity. B.6 Additional Results of Direct Prompt by LLMs While LLMs are capable of generating high-quality synthetic data through prompting, their direct clas- sification performance can sometimes lag behind that of smaller models trained on this synthetic data. As shown in Table B.5, for many tasks, directly prompting GPT-3.5 turbo model for classification often yields poorer results compared to a smaller model trained on the synthetic data. This discrep- ancy might arise because the prompt constraints defining the label space for the LLM can some- times be too lax, making accurate classification challenging. C Evaluation II: Comparison Across Different Task Instances (Additional Results) In order to investigate how models trained on the real-world data perform across task instances of varying subjectivity, we used BERT as the foun- dational model for training a classification model with the real-world data. 
As depicted in Figure C.1, we observed that compared to models trained on (a) AG (b) Relation (c) IMDB Reviews (d) SMS Spam (e) Reddit Emotion (f) Sarcasm News (g) Humor Detection (h) Tweet Emotions (i) Tweet Irony Speech (j) Financial Phrasebank Figure C.1: Changes in the accuracy of the BERT model trained on real-world data as the instance-level annotation agreement threshold varies. The solid blue line in each plot is the linear regression fitted on the data, and the R-squared score quantifies the goodness of fit. The Spearman’s ρ assesses the strength of rank correlation between the instance-level agreement threshold and the model accuracy for each task. Higher values for both R-squared and Spearman’s ρ, ideally close to 1, indicate a stronger monotonic relationship between the instance-level subjectivity and the model accuracy. Task Zero-shot/Few-shot AG Relation IMDB SMS spam Reddit emotion Context Prompt: Now you are a journalist writing news articles. You are given a topic and must write a corresponding news article for it. You are also given a length requirement. You must ensure your news meets the length requirement. Data Generation Prompt: Can you write a news report with the topic {label}? The length requirement is: {num_words} words. Please be creative and write unique news articles. Context Prompt: Now you are a Wikipedia editor. You need to generate new records for describing the relation between entities. You are given a relation type, as well as a sentence describing the relationship. You must write a sentence to describe the specified relationship between the two entities that you came up with. Data Generation Prompt: Give me one pair of entities, which have the relation: {label}, and generate a sentence which contains the pair of entities that have the relation: {label}. The description of the relation is: {label_description}. Context Prompt: Now you are a movie critic. You need to have delicate emotions, unique perspectives, and a distinctive style. You are going to write a highly polar review for a movie and post it on IMDB. You are given a movie genre/style and a length requirement. You must come up with a movie that corresponds to the genre/style and write a review that meets the length requirement. Data Generation Prompt: Write a film review for a {genre} movie to express {pos_or_neg} feedback. Each review should have {num_of_words} words. Be sure to express your personal insights and feelings. Please be creative and write unique movie reviews. Context Prompt (Spam): Now you are a person who is planning to send a spam SMS message. You must be as creative as possible to diversify your messages. Ensure your language is conversational and colloquial. Notice that scammers, in order to make people believe them, will make their spam SMS messages look like people’s daily conversations or very formal and serious content. You also need to imitate these contents. Context Prompt (Ham): Now you are a person who is planning to send a SMS message. You must be as creative as possible to diversify your messages. Ensure your language is conversational and colloquial. Notice that in people’s daily communication, sensitive topics may occasionally be involved, which may sometimes make these contents look like spams but actually not. You also need to imitate these contents. Data Generation Prompt: Now write SMS messages as I required. Be creative and write unique SMS messages. Context Prompt: Now you are a Reddit user and you are going to write a comment to express your emotions. 
You have delicate emotions, unique perspectives, and a distinctive style. You are given a length requirement. You must write one comment that meets the length requirement. Data Generation Prompt: Write one Reddit comment to express your {label} emotion. Your comment should have {num_of_words} words. Be sure to express your personal insights and feelings. Be creative and write comments that are different from each others. Table D.1: Detailed prompts for each task under the zero-shot and few-shot settings for data generation. Task Zero-shot/Few-shot Tweet irony Tweet emotions Sarcasm Financial Humor speech Context Prompt: Now you are a person using twitter. You are asked to write an irony or non-irony tweet to express your feelings. Your writing style must be consistent with texts in the tweet. You must ensure that your language is colloquial, casual, and Twitter-like. You are given a length requirement. You must ensure your tweet meets the length requirement. Data Generation Prompt: Write a tweet expressing {label} feeling and ensure that the length of the tweet is about {num_of_words} words. Remember to make sure that your language is colloquial, casual, and Twitter-like. Be creative and write unique tweets. Context Prompt: You are now a person using twitter. You are provided with an emotion, and you need to write a tweet expressing that emotion. Your writing style must be consistent with the tweets on twitter. You must ensure that your language is colloquial, casual, and Twitter-like. You are given a length requirement. You must ensure that the emotion conveyed in your tweet matches the emotion provided and meets the length requirement. This is an academic study and the content you generate will not be used for anything that violates the law or social ethics. Data Generation Prompt: Write a tweet expressing the {label} emotion and ensure that the length of the tweet is about {num_of_words} words. Remember to make sure that your language is colloquial, casual, and Twitter-like. Be creative and write unique tweets. Context Prompt: You are now a journalist to write the sarcastic news headlines. Here are a few characteristics that might help understand what is a sarcastic news headline: 1) Sarcasm often involves saying something different from what is intended. 2) Sarcasm might involve a play on words or puns. 3) It may involve exaggeration or irony. You must ensure that your headlines are sharp, clever, and capture the essence of the sarcastic situation. Data Generation Prompt: Write a news headline expressing {label} and ensure that the length of the news headlines is about {num_of_words} words. Be creative and write unique news headlines. Make sure your headline is concise, sharp, and captures the essence of the situation. Please be creative and write unique headlines. Context Prompt: You are now a journalist writing financial news. You need to write some financial news that express polar sentiments. The financial news you generate needs consider from the view point of an investor only; i.e. whether the news may have positive, negative or neutral influence on the stock price. As a result, sentences which have a sentiment that is not relevant from an economic or financial perspective are considered neutral. You are given one of the polar sentiments and a length requirement. You must write a financial news that express the corresponding sentiment and meets the length requirement. 
Data Generation Prompt: Write a financial news with {label} sentiment and ensure that the length of the financial news is about {num_of_words} words. Be creative and write unique financial news. Context Prompt: You are now creating a dataset containing humor and non-humor texts. Here are a few characteristics that might help understand what is humorous text: 1) Sarcasm and Irony: Sarcasm and irony involve stating one thing and meaning another, often the opposite. 2) Double Entendre: A double entendre is a figure of speech or a particular way of wording that is devised to have a double meaning, of which one is typically obvious, while the other often carries a risqué or ironic connotation. 3) Parody and Satire: Both involve imitating and exaggerating the features of a particular language style, genre, or piece of content to humorous effect. 4) Absurdity and Nonsense: Language that describes absurd or nonsensical scenarios can often be funny. This includes non-sequiturs, in which conclusions do not follow from their premises, and other forms of illogical statements. Data Generation Prompt: Write a {label} short text and ensure that the length of the short text is about {num_of_words} words. Be creative and write unique short text. Table D.2: Detailed prompts for each task under the zero-shot and few-shot settings for data generation (Continued).
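The templates in Tables D.1 and D.2 are issued to the LLM through a chat-style interface, with the context prompt as the system message and the data generation prompt as the user message. The snippet below is a hypothetical sketch of how such a request could be made with the OpenAI Python SDK; the decoding settings (temperature), the abbreviated prompt text, and the helper name generate_examples are illustrative assumptions rather than the paper's exact configuration, and a few-shot variant would additionally append real examples to the user message.

```python
# Hypothetical sketch of zero-shot synthetic data generation using the prompt
# templates in Tables D.1-D.2 (here: the Tweet irony task).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

CONTEXT_PROMPT = (
    "Now you are a person using twitter. You are asked to write an irony or "
    "non-irony tweet to express your feelings. ... You must ensure your tweet "
    "meets the length requirement."  # abbreviated; see Table D.2 for the full text
)
DATA_PROMPT = (
    "Write a tweet expressing {label} feeling and ensure that the length of the "
    "tweet is about {num_of_words} words. Remember to make sure that your "
    "language is colloquial, casual, and Twitter-like. Be creative and write "
    "unique tweets."
)

def generate_examples(label: str, num_of_words: int, n: int = 10):
    """Generate n synthetic tweets for one class label."""
    examples = []
    for _ in range(n):
        response = client.chat.completions.create(
            model="gpt-3.5-turbo",   # the LLM studied in this paper
            temperature=1.0,         # assumption: encourage diverse outputs
            messages=[
                {"role": "system", "content": CONTEXT_PROMPT},
                {"role": "user", "content": DATA_PROMPT.format(
                    label=label, num_of_words=num_of_words)},
            ],
        )
        examples.append(response.choices[0].message.content.strip())
    return examples

# e.g., irony_tweets = generate_examples("irony", num_of_words=20)
```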
arXiv:2406.12397v1 [cs.CL] 18 Jun 2024

Unveiling the Flaws: Exploring Imperfections in Synthetic Data and Mitigation Strategies for Large Language Models

Jie Chen1,2∗, Yupeng Zhang1∗, Bingning Wang1†, Wayne Xin Zhao2†, Ji-Rong Wen2, and Weipeng Chen1
1Baichuan Inc. 2Gaoling School of Artificial Intelligence, Renmin University of China
∗Equal contribution †Corresponding author, [email protected], [email protected]

Abstract

Synthetic data has been proposed as a solution to address the issue of high-quality data scarcity in the training of large language models (LLMs). Studies have shown that synthetic data can effectively improve the performance of LLMs on downstream benchmarks. However, despite its potential benefits, our analysis suggests that there may be inherent flaws in synthetic data. The uniform format of synthetic data can lead to pattern overfitting and cause significant shifts in the output distribution, thereby reducing the model's instruction-following capabilities. Our work delves into these specific flaws associated with question-answer (Q-A) pairs, a prevalent type of synthetic data, and presents a method based on unlearning techniques to mitigate these flaws. The empirical results demonstrate the effectiveness of our approach, which can reverse the instruction-following issues caused by pattern overfitting without compromising performance on benchmarks at relatively low cost. Our work has yielded key insights into the effective use of synthetic data, aiming to promote more robust and efficient LLM training.

1 Introduction

The remarkable success of large language models (LLMs) (Zhao et al., 2023) largely depends on the quality and diversity of the datasets used for training. However, acquiring large amounts of high-quality data can be challenging due to data scarcity, privacy concerns, and high costs (Liu et al., 2024a). Synthetic data has emerged as a promising solution to address these challenges (Nikolenko, 2019). Synthetic data, generated through algorithms or generative models rather than collected from real-world events, can be produced at scale and supplement areas where real-world data is scarce or difficult to obtain, such as in mathematical or reasoning tasks. Numerous studies have demonstrated the efficacy of synthetic data in improving model performance (Microsoft, 2024; Mukherjee et al., 2023). Among the various methods of generating synthetic data, a common approach is the creation of synthetic question-answer (Q-A) pairs (NVIDIA, 2024; Maini et al., 2024b; Wei et al., 2023), as Q-A pairs exhibit diversity and richness, encompassing a range of question types from simple factual queries to complex reasoning problems. Another prevalent method is to generate data closely mimicking downstream tasks (Luo et al., 2023; Yu et al., 2023a). These methods have achieved excellent performance on both general-purpose and specialized benchmarks for LLMs.

Figure 1: The overall pipeline of our study.

Despite numerous experiments demonstrating that synthetic data significantly enhances the capabilities of pre-trained models on downstream benchmarks, in this work, we observe a notable decline in the instruction-following capabilities of models after being pre-trained on synthetic data, specifically on synthetic Q-A pairs generated by GPT-4, and subsequent supervised fine-tuning (SFT). This observation prompts a deeper investigation into the underlying causes.
While existing studies have extensively covered the applications of synthetic data, there is a notable lack of studies examining its impact on the instruction- following capabilities of LLMs. Furthermore, studies addressing the flaws in synthetic data have primarily focused on historical models or those with capabilities similar to currently trained models (Shumailov et al., 2024; Seddik et al., 2024; Alemohammad et al., 2023), leaving a gap in exploring the deficiencies of synthetic data generated by advanced models like GPT-4. Our work focuses on exploring the inherent flaws of synthetic data and its impact on LLMs. We find that the token distribution of synthetic data significantly differs from that of the real pre-training data, with synthetic data patterns being relatively uniform. Consequently, models trained on such synthetic data are likely to experience pattern overfitting, leading to substantial shifts in their output distributions and resulting in inferior performance. Based on these observations, we propose a novel strategy that leverages unlearning tech- niques to reduce the impact of misleading synthetic data patterns while preserving the LLM’s foundational abilities on benchmarks and restoring its instruction-following ca- pabilities. This strategy employs a lower-bounded forgetting loss, which is controllable and superior to traditional unlearning approaches. Our experimental results demonstrate that this strategy effectively mitigates the adverse impacts of synthetic data, balancing the LLM’s performance on benchmarks with its ability to follow instructions at significantly low training costs. Our contributions are summarized as follows: • Identification of Synthetic Data Limitations: We provide a comprehensive analysis of the inherent limitations in synthetic data, specifically synthetic Q-A pairs, focusing on data distribution differences and pattern overfitting observed in models. • Unlearn Method to Address Synthetic Data Issues: We propose a novel unlearning strategy that effectively mitigates the adverse effects of synthetic data, thereby preserving the LLM’s foundational abilities on benchmarks while reversing its instruction-following capabilities at significantly low training costs. 2 Related Work Applications and Limitations of Synthetic Data. Studies have shown that synthetic data has achieved remarkable results on downstream benchmarks (Luo et al., 2023; Microsoft, 2024; Mukherjee et al., 2023; Wei et al., 2023), addressing issues such as data scarcity and privacy (Liu et al., 2024a; Villalobos et al., 2022; Maini et al., 2024b). For instance, Microsoft’s 2 Phi-3 (Microsoft, 2024) model, trained on heavily filtered publicly available web data and synthetic data, has outperformed much larger models on both academic benchmarks and internal testing. MagicoderS-CL-7B (Wei et al., 2023), a 7B parameter code model trained on synthetic code problems and answers generated by LLMs, even surpasses the prominent ChatGPT on many coding benchmarks. However, synthetic data is not without flaws. Several critical issues have been identified, particularly concerning model performance and data distribution integrity. One significant concern is the phenomenon of model collapse (Shumailov et al., 2024; Seddik et al., 2024), where training on model-generated data leads to the disappearance of the tails of the original content distribution. 
Furthermore, the recursive use of synthetic data in training generative models can amplify artifacts and biases, ultimately degrading model performance, as demonstrated by the concept of Model Autophagy Disorder (MAD) (Alemohammad et al., 2023). Task-specific synthetic data often lacks diversity and exhibits regional biases (Yu et al., 2023b), with effectiveness varying by task nature (Li et al., 2023). LLM Unlearning. Unlearning in LLMs involves the elimination of specific undesired targets while preserving overall performance (Liu et al., 2024b). Strategies vary from specific data points to higher-level concepts such as harmful language or specific knowledge domains (Jang et al., 2022; Lu et al., 2022; Eldan & Russinovich, 2023). Effective unlearning requires robustness and generalization (Patil et al., 2024; Maini et al., 2024a; Shi et al., 2023) with efficient handling of computational costs (Pawelczyk et al., 2023). Existing unlearning methods leverage various fine-tuning techniques, including gradient ascent, parameter- efficient fine-tuning, and KL-divergence-based methods, each with unique strengths and limitations regarding runtime and memory costs (Yao et al., 2024; Jang et al., 2022; Eldan & Russinovich, 2023). While unlearning methods have been utilized to manage harmful data and reduce hallucinations in models, their application to synthetic data remains underexplored. Our research aims to fill this gap by applying unlearning strategies to mitigate the adverse effects of synthetic data on LLMs. 3 Experimental Setup In this section, we outline the experimental design, including dataset selection, model configurations, and evaluation benchmarks. Datasets. We utilize five distinct datasets: • NonSynth data: A comprehensive non-synthetic dataset collected from diverse sources (Sol- daini et al., 2024; Penedo et al., 2023; Soboleva et al., 2023), including webpages, books, research papers, and codebases. • SynthQA data: Synthetic Q-A pairs generated by GPT-4, based on a variety of sources in- cluding webpages, books, and other textual materials, covering topics such as mathematics, coding, and general knowledge. • MixedIns data: Instructions consisting of general knowledge, mathematics, and coding, primarily generated by GPT-4 and human contributors. • U33B data (Yuan et al., 2023): Aggregated synthetic dataset of diverse reasoning paths generated from GSM8K dataset by multiple LLMs to enhance mathematical reasoning capabilities. • OpenHermes-2.5 data (Teknium, 2023): An extension of the OpenHermes-1 dataset, primar- ily consisting of synthetically generated instruction and chat samples. Models. We use the following models in our experiments: • BaseLM: A Llama-like (Touvron et al., 2023) 2B model trained from scratch. We set the learning rate to 1.0 × 10−4 and adopt a cosine learning rate schedule, training on a total of 1 trillion tokens. The details of hyperparameters are listed in Table 1. 3 Position Embedding Hidden Size FFN Size Heads Layers Context Length RoPE (Su et al., 2023) 2, 048 5, 504 32 32 4, 096 Table 1: The architecture details of BaseLM. • BaseLM-Chat (MixedIns/OpenHermes-2.5): Chat models obtained by performing SFT on BaseLM using MixedIns or OpenHermes-2.5 data. We set the learning rate to 2.0 × 10−5, the number of epochs to 2, the context length to 4, 096, and the batch size to 64. Benchmarks. 
We evaluate the capabilities of models using the following benchmarks: • Bilingual Capabilities: Evaluated using the MMLU (Hendrycks et al., 2021), CMMLU (Li et al., 2024) and C-Eval (Huang et al., 2023) benchmarks to assess the models’ proficiency in handling both English and Chinese tasks. • Coding Proficiency: Assessed with the HumanEval (Chen et al., 2021) and MBPP (Austin et al., 2021) benchmarks, which measure the models’ ability to generate correct and efficient code snippets based on given problems. • Mathematical Reasoning: Measured using the GSM8K (Cobbe et al., 2021) benchmark, which tests the models’ ability to solve complex mathematical problems. • Instruction-Following Capability: Analyzed through FollowBench (Jiang et al., 2024) and MT-bench (Zheng et al., 2023), evaluating the models’ ability to understand and follow complex instructions. 4 Defect Analysis of Synthetic Data In this section, we systematically analyze the flaws of synthetic data, specifically synthetic Q-A pairs, by examining their data distribution differences and pattern overfitting observed in LLMs. This analysis is crucial to understand how synthetic data impacts the LLMs’ foundational abilities on benchmarks and instruction-following capabilities. 4.1 Data Distribution Differences One of the primary concerns with synthetic data is the potential mismatch between its distribution and that of real-world data. This discrepancy can result in models that perform well on synthetic data but fail to generalize effectively to real-world scenarios. Data Characteristic Differences. Synthetic data generated by LLMs often exhibits distinct distributional characteristics compared to non-synthetic data. To illustrate these differences, we sample 2, 000 entries from both NonSynth and SynthQA data. Using the embeddings from the last hidden state of BaseLM, we apply t-SNE (Van der Maaten & Hinton, 2008) for dimensionality reduction and visualize the data distributions in Figure 2. The t-SNE visualization reveals that the clusters of NonSynth and SynthQA data have considerable areas of non-overlapping, which indicates that SynthQA data does not perfectly replicate the characteristics of NonSynth data. Such differences may lead to misinterpretations of real-world scenarios by LLMs trained on synthetic data. Simplified Data Patterns. Synthetic data often contains repetitive and structurally pre- dictable elements, which simplify the complexity of real-world interactions and patterns. This simplification can result in data that fails to capture the intricacies of human language and interaction. To explore this, we again sample 2, 000 entries from both NonSynth and Syn- thQA data and calculate the token frequencies based on the tokenizer of BaseLM. Figure 3 presents the kernel density estimation (KDE) (Parzen, 1962) plot of token IDs. We observe that the distribution of token frequencies for SynthQA data exhibits several noticeable small peaks compared to NonSynth data. We find that these peaks correspond to tokens with a high degree of structural consistency within SynthQA data. Specifically, tokens 4 Figure 2: t-SNE visualization of data distributions. The clusters of NonSynth and SynthQA data show considerable non-overlap. Figure 3: Kernel density estimation of token IDs for NonSynth and SynthQA data. The token frequency distribution for SynthQA data shows several small peaks, indicating high structural consistency for specific tokens compared to NonSynth data. 
like "question" (ID: 44246), "answer" (ID: 63264), and "summary" (ID: 16752) contribute to these observable peaks. The presence of these structural tokens indicates a repetitive pattern in SynthQA data, reflecting its inherent simplicity and lack of variability compared to NonSynth data. By over-representing certain tokens, synthetic datasets risk failing to encapsulate the full spectrum of linguistic diversity found in non-synthetic data, which may lead to models trained on such data being less robust and adaptable.

4.2 Pattern Overfitting

In this part, we investigate the detrimental effects of synthetic data on instruction-following capabilities and output distributions of LLMs. Our analysis highlights how synthetic

(a) OpenHermes-2.5 (b) MixedIns
Figure 4: Kernel density estimation of perplexity values for OpenHermes-2.5 and MixedIns data using BaseLM, SynthLM and UnlearnLM. SynthLM shows a noticeable shift and reduced variance, while UnlearnLM corrects the distribution shift.
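As a concrete illustration of the diagnostic behind Figure 4 (and, analogously, the token-frequency analysis of Section 4.1), the sketch below shows one way to compute per-sample perplexity with a causal language model and compare the resulting distributions of two checkpoints on the same instruction data. It is a hedged reconstruction, not the authors' evaluation code; the checkpoint paths, the variable texts, and the sequence-length cap are placeholders.

```python
# Sketch: per-sample perplexity for a list of instruction samples (e.g. 2,000
# entries drawn from OpenHermes-2.5 or MixedIns), computed with a causal LM
# such as BaseLM, SynthLM, or UnlearnLM.
import math
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

@torch.no_grad()
def per_sample_perplexity(model_name, texts, device="cuda", max_len=1024):
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name).to(device).eval()
    ppls = []
    for text in texts:
        enc = tokenizer(text, return_tensors="pt", truncation=True,
                        max_length=max_len).to(device)
        # With labels == input_ids, the model returns the mean token-level
        # cross-entropy; its exponential is the sample perplexity.
        loss = model(**enc, labels=enc["input_ids"]).loss
        ppls.append(math.exp(loss.item()))
    return ppls

# A shifted, lower-variance KDE for the synthetic-data model relative to the
# base model is the pattern-overfitting signature discussed above, e.g.:
# import seaborn as sns
# sns.kdeplot(per_sample_perplexity("path/to/BaseLM", texts), label="BaseLM")
# sns.kdeplot(per_sample_perplexity("path/to/SynthLM", texts), label="SynthLM")
```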
Typically, unlearning is applied to remove harmful data or 6 05101520Perplexity0.000.050.100.150.200.25DensityBaseLMSynthLMUnlearnLM010203040Perplexity0.000.010.020.030.040.050.060.070.08DensityBaseLMSynthLMUnlearnLM reduce model hallucinations. In this context, we leverage unlearning to recalibrate the LLM’s understanding, mitigating the adverse effects of synthetic data while preserving its beneficial attributes. Task Description. In the task where the LLM predicts the next token yi based on an existing token sequence y<i = [y1, y2, . . . , yi−1], let p(y<i; θ) denote the predicted probability of yi. Formally, this can be expressed as: p(y<i; θ) = P(yi | y<i; θ), where θ represents the parameters of the LLM. The prediction accuracy is evaluated using the cross-entropy loss function. Specifically, the loss for predicting yi is given by l(p(y<i; θ), yi), where l(input, target) denotes the cross-entropy loss between the predicted probability distribution and the actual target token. Unlearning Loss. Following previous work (Yao et al., 2024), the unlearning loss function we designed consists of three parts: • Lower-Bounded Forgetting Loss: This component focuses on forgetting the biased distribu- tion of specific synthetic data. Unlike previous methods that apply gradient ascent (Thudi et al., 2022) (i.e., adding a negative sign to the cross-entropy loss to introduce irrelevant elements into the predictions), we have observed that this method has uncontrolled loss due to the logarithm approaching zero without a lower bound. Therefore, we designed a simple yet effective lower-bounded forgetting loss by inverting the model prediction probabilities in the cross-entropy loss. This retains the original forgetting loss function’s features while adding a lower bound (i.e., 0). We validate the effectiveness of our forgetting loss approach through ablation experiments in Section 6. The designed lower-bounded forgetting loss Lfgt can be defined as: Lfgt = l(1 − p(ysyn <i ; θ), ysyn i ). |ysyn| ∑ i=1 • Replay Loss: We sample a portion of the data from the trained non-specific synthetic data for replay, using the cross-entropy loss to allow the model to retain memory of historical knowledge. The replay loss Lrpy can be defined as: Lrpy = |ynon-syn| ∑ i=1 l(p(ynon-syn <i ; θ), ynon-syn i ). • Bias Mitigation Loss: After unlearning, we aim to ensure that the LLM’s output distribution on the trained non-specific synthetic data does not change excessively. Therefore, we calculate the KL divergence between the current model and the original model on the data used for replay, as the bias mitigation loss Lmtn to preserve the original performance: Lmtn = |ynon-syn| ∑ i=1 KL(p(ynon-syn <i ; θori) ∥ p(ynon-syn <i ; θi)), (1) where θori represents the parameters of the original model. Finally, we obtain the total unlearning loss function as follows: Lunlearn = wfgt · Lfgt + wrpy · Lrpy + wmtn · Lmtn, where w∗ denotes the weights corresponding to each part of the loss L∗. 5.2 Unlearning Experiments In this part, we detail the experimental process of applying unlearning techniques. Our objective is mitigate the adverse effects on models trained with synthetic data. Specifically, 7 Models C-Eval CMMLU MMLU HumanEval MBPP GSM8K Avg. BaseLM SynthLM 39.05 47.71 46.79 RefineLM UnlearnLM 48.09 38.83 47.56 47.15 47.29 38.08 47.27 45.82 47.53 9.76 18.90 17.07 20.73 12.00 18.40 18.30 18.60 15.09 16.60 13.42 11.45 25.47 32.74 31.42 32.28 Table 2: Evaluation results of base models with continued pre-training and unlearning. 
5.2 Unlearning Experiments

In this part, we detail the experimental process of applying unlearning techniques. Our objective is to mitigate the adverse effects on models trained with synthetic data. Specifically, we aim to enhance the instruction-following capabilities of models while preserving their foundational abilities.

Models      C-Eval  CMMLU  MMLU   HumanEval  MBPP   GSM8K  Avg.
BaseLM      39.05   38.83  38.08   9.76      12.00  15.09  25.47
SynthLM     47.71   47.56  47.27  18.90      18.40  16.60  32.74
RefineLM    46.79   47.15  45.82  17.07      18.30  13.42  31.42
UnlearnLM   48.09   47.29  47.53  20.73      18.60  11.45  32.28

Table 2: Evaluation results of base models with continued pre-training and unlearning. SynthLM is obtained by training BaseLM with a dataset containing 300 billion tokens, of which 2% are from the SynthQA data. RefineLM is derived from SynthLM by further training with an additional 300 billion tokens of NonSynth data. UnlearnLM is obtained by performing our unlearning strategy on SynthLM using 1 billion tokens from the SynthQA data.

Models          FollowBench SSR  FollowBench HSR  MT-Bench  C-Eval  CMMLU  MMLU   HumanEval  MBPP   GSM8K
BaseLM-Chat     39.95            27.58            5.45      39.92   40.16  41.55  18.29      17.80  14.33
SynthLM-Chat    38.29            24.00            5.39      49.50   48.37  49.06  21.95      22.60  22.21
RefineLM-Chat   39.60            25.22            5.43      47.71   47.40  47.08  17.68      23.60  22.37
UnlearnLM-Chat  42.00            27.87            5.85      49.12   48.83  48.82  20.12      21.80  21.99

Table 3: Evaluation results of chat models with continued pre-training and unlearning. Models with the suffix "-Chat" represent chat models derived from their corresponding base models in Table 2 through SFT on the MixedIns data.

Basic Implementation. We utilize NonSynth data containing 300 billion tokens to perform continued pre-training on SynthLM in Table 2, with the aim of recovering the model's instruction-following capabilities. We utilize a fixed learning rate of 5.0 × 10−5 during the training process. From the results in Tables 2 and 3, we can clearly observe that extensive training with non-synthetic data leads to enhanced instruction-following capabilities (RefineLM-Chat vs. SynthLM-Chat) at the cost of a decline in overall base model performance (RefineLM vs. SynthLM). However, this approach does not completely eliminate the negative impact of the synthetic data on the model.

Unlearning Strategy Implementation. We propose employing the unlearning strategy on SynthLM. We apply the lower-bounded forgetting loss on texts from the SynthQA data with 1 billion tokens. Concurrently, we apply the replay loss and bias mitigation loss on the trained NonSynth data alongside the unlearning process. We use a fixed learning rate of 5.0 × 10−5 and set the weights $w_{\text{fgt}} = 0.01$, $w_{\text{rpy}} = w_{\text{mtn}} = 1$. As can be seen from Tables 2 and 3, although unlearning leads to a slight decrease in the foundational abilities of the base (UnlearnLM vs. SynthLM) and chat (UnlearnLM-Chat vs. SynthLM-Chat) models, especially math abilities, there is a considerable improvement in instruction-following capabilities (UnlearnLM-Chat vs. BaseLM-Chat).

Distribution Shift Correction. The unlearning process partially corrects the output distribution shift of the LLM. Following the experiments in Section 4.2, we include the perplexity distribution of UnlearnLM on OpenHermes-2.5 and MixedIns data in Figure 4. It can be observed that the distribution shift has been effectively corrected after unlearning, indicating a significant reduction in pattern overfitting.
40.25 39.95 40.21 27.27 25.13 27.26 5.76 5.61 5.87 34.27 43.06 42.00 Table 4: Evaluation results of chat models with continued pre-training on U33B data and subsequent unlearning. SynthLM*(U33B) is the base model trained with 40 billion tokens including 2% U33B data. UnlearnLM*(U33B) is derived from SynthLM*(U33B) by applying our unlearning strategy. Models with the suffix "-Chat(O.H.)" represent chat models derived from their corresponding base model through SFT on the OpenHermes-2.5 data. 6.1 Effectiveness of Unlearning Strategy To explore the effectiveness of our unlearning strategy across different types of synthetic data, we conduct experiments using the U33B data. We first perform continued pre- training on the BaseLM with 40 billion tokens of data, including 2% U33B data, resulting in SynthLM*(U33B). We utilize a fixed learning rate of 5.0 × 10−5 during the training process. Following this, we apply our unlearning strategy to mitigate the adverse effects of U33B data on instruction-following capabilities while preserving its positive impact on founda- tional abilities, particularly in mathematics. Specifically, we employ the same unlearning parameters as described in Section 5.2, resulting in UnlearnLM*(U33B). We conduct SFT on the resulting models using OpenHermes-2.5 data. The evaluation results are presented in Table 4. The results indicate that while the model trained with U33B data improves its mathematical abilities, it exhibits a decline in instruction-following capabilities. However, after applying our unlearning strategy, the instruction-following capabilities are restored, while retaining the enhancements in mathematical abilities provided by the U33B data. These findings suggest that our unlearning strategy could be extended to other types of open-source synthetic data. 6.2 Impact of Synthetic Data on Model Performance To verify that SynthQA data, rather than NonSynth data, contributes to the significant performance improvements in BaseLM, we conduct a controlled ablation experiment. We evaluate two models: NonSynthLM, which is the BaseLM trained with 40 billion tokens of NonSynth data, and MixSynthLM, which is the BaseLM trained with 40 billion tokens of data including 2% SynthQA data. To ensure a fair comparison and better verify the impact of synthetic data, the NonSynth data used to train both NonSynthLM and MixSynthLM is the same high-quality data corpus used to generate the SynthQA data. The evaluation result is shown in Table 5. We can see that MixSynthLM exhibits markedly superior performance enhancements. This confirms that synthetic data plays a critical role in boosting base model performance. 9 Models C-Eval CMMLU MMLU HumanEval MBPP GSM8K Avg. 39.05 BaseLM MixSynthLM 44.63 NonSynthLM 42.33 38.83 44.12 40.46 38.08 45.00 40.88 9.76 18.29 18.29 12.00 19.40 17.80 15.09 14.95 12.21 25.47 31.07 28.66 Table 5: Evaluation results of BaseLM with continued pre-training on synthetic and non- synthetic data. MixSynthLM is BaseLM trained with 40 billion tokens including 2% SynthQA data. NonSynthLM is BaseLM trained with 40 billion tokens of NonSynth data. Models C-Eval CMMLU MMLU HumanEval MBPP GSM8K Avg. SynthLM UnlearnLM (GA) UnlearnLM (Ours) 47.71 26.58 48.09 47.56 25.08 47.29 47.27 39.28 47.53 18.90 11.59 20.73 18.40 9.60 18.60 16.60 6.82 11.45 32.74 19.82 32.28 Table 6: Evaluation results of SynthLM with different unlearning strategies applied. Un- learnLM (GA) is derived from SynthLM by applying traditional gradient ascent loss. 
Un- learnLM (Ours) is derived by applying our lower-bounded forgetting loss. 6.3 Efficacy of Bounded Forgetting Loss When introducing our unlearning strategy in Section 5.1, we use the lower-bounded for- getting loss to forget the biased distribution of specific synthetic data. To evaluate the effectiveness of this approach compared to the traditional gradient ascent loss, we conduct a comparative experiment where the SynthLM in Table 2 undergo unlearning using both the lower-bounded forgetting loss and the traditional gradient ascent loss. As shown in Table 6, we can clearly observe that the model subjected to traditional gradient ascent loss exhibits severe performance degradation. This may be due to the uncontrolled magnitude of negative loss during training. Conversely, the lower-bounded forgetting loss results only in a partial decline in mathematical abilities. 7 Conclusion In this work, we have systematically explored the potential issues associated with synthetic data, particularly focusing on synthetic Q-A pairs, and their impact on the performance of LLMs. Our analysis has identified inherent flaws in synthetic data, such as pattern overfitting and significant shifts in output distribution, which can degrade the instruction- following capabilities of LLMs. To mitigate these adverse effects, we have proposed an innovative unlearning-based strategy. This strategy employs a lower-bounded forgetting loss, which is controllable and superior to traditional unlearning approaches at significantly lower training costs. The empirical results demonstrate that our strategy effectively ad- dresses the limitations of synthetic data and corrects the output distribution shift, thereby enhancing the instruction-following capabilities while preserving foundational capabilities of LLMs on benchmarks. Our work has demonstrated a viable path to leverage the advan- tages of synthetic data without being adversely affected by its limitations, enhancing the robustness and efficiency of LLM training. 8 Limitations Despite our substantial efforts, several limitations warrant further consideration. Firstly, while our unlearning-based strategy has shown promise in mitigating the negative effects of synthetic data, it may still cause degradation in specific model capabilities, such as mathematical reasoning. Moreover, its scalability to much larger models remains untested. As LLMs continue to grow in size and complexity, the computational efficiency and practical applicability of this strategy require further validation. Additionally, this study primarily focuses on the flaws and mitigation strategies related to Q-A pair synthetic data. Although we have demonstrated the effectiveness of our unlearning strategy on the open-source 10 synthetic dataset U33B, many other forms of synthetic data remain unexplored. Furthermore, the quality of synthetic data generated by GPT-4 used in this study may not fully represent the entire spectrum of synthetic data quality. Different synthetic data generation techniques and tools can produce data with varying degrees of imperfections, potentially impacting the effectiveness of our mitigation strategy. Further investigation into more advanced unlearning techniques is necessary to minimize these side effects. We will continue to refine and enhance our method in future work. References Sina Alemohammad, Josue Casco-Rodriguez, Lorenzo Luzi, Ahmed Imtiaz Humayun, Hossein Babaei, Daniel LeJeune, Ali Siahkoohi, and Richard G. Baraniuk. Self-consuming generative models go mad, 2023. 
Jacob Austin, Augustus Odena, Maxwell Nye, Maarten Bosma, Henryk Michalewski, David Dohan, Ellen Jiang, Carrie Cai, Michael Terry, Quoc Le, and Charles Sutton. Program synthesis with large language models, 2021. Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, Alex Ray, Raul Puri, Gretchen Krueger, Michael Petrov, Heidy Khlaaf, Girish Sastry, Pamela Mishkin, Brooke Chan, Scott Gray, Nick Ryder, Mikhail Pavlov, Alethea Power, Lukasz Kaiser, Mo- hammad Bavarian, Clemens Winter, Philippe Tillet, Felipe Petroski Such, Dave Cummings, Matthias Plappert, Fotios Chantzis, Elizabeth Barnes, Ariel Herbert-Voss, William Hebgen Guss, Alex Nichol, Alex Paino, Nikolas Tezak, Jie Tang, Igor Babuschkin, Suchir Balaji, Shantanu Jain, William Saunders, Christopher Hesse, Andrew N. Carr, Jan Leike, Josh Achiam, Vedant Misra, Evan Morikawa, Alec Radford, Matthew Knight, Miles Brundage, Mira Murati, Katie Mayer, Peter Welinder, Bob McGrew, Dario Amodei, Sam McCandlish, Ilya Sutskever, and Wojciech Zaremba. Evaluating large language models trained on code, 2021. Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, Christopher Hesse, and John Schulman. Training verifiers to solve math word problems, 2021. Ronen Eldan and Mark Russinovich. Who’s harry potter? approximate unlearning in llms, 2023. Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt. Measuring massive multitask language understanding, 2021. Yuzhen Huang, Yuzhuo Bai, Zhihao Zhu, Junlei Zhang, Jinghan Zhang, Tangjun Su, Junteng Liu, Chuancheng Lv, Yikai Zhang, Jiayi Lei, Yao Fu, Maosong Sun, and Junxian He. C-eval: A multi-level multi-discipline chinese evaluation suite for foundation models, 2023. Joel Jang, Dongkeun Yoon, Sohee Yang, Sungmin Cha, Moontae Lee, Lajanugen Logeswaran, and Minjoon Seo. Knowledge unlearning for mitigating privacy risks in language models. arXiv preprint arXiv:2210.01504, 2022. Yuxin Jiang, Yufei Wang, Xingshan Zeng, Wanjun Zhong, Liangyou Li, Fei Mi, Lifeng Shang, Xin Jiang, Qun Liu, and Wei Wang. Followbench: A multi-level fine-grained constraints following benchmark for large language models, 2024. Haonan Li, Yixuan Zhang, Fajri Koto, Yifei Yang, Hai Zhao, Yeyun Gong, Nan Duan, and Timothy Baldwin. Cmmlu: Measuring massive multitask language understanding in chinese, 2024. Zhuoyan Li, Hangxiao Zhu, Zhuoran Lu, and Ming Yin. Synthetic data generation with large language models for text classification: Potential and limitations. arXiv preprint arXiv:2310.07849, 2023. 11 Ruibo Liu, Jerry Wei, Fangyu Liu, Chenglei Si, Yanzhe Zhang, Jinmeng Rao, Steven Zheng, Daiyi Peng, Diyi Yang, Denny Zhou, and Andrew M. Dai. Best practices and lessons learned on synthetic data for language models, 2024a. URL https://arxiv.org/pdf/ 2404.07503v1. Sijia Liu, Yuanshun Yao, Jinghan Jia, Stephen Casper, Nathalie Baracaldo, Peter Hase, Xiaojun Xu, Yuguang Yao, Hang Li, Kush R. Varshney, Mohit Bansal, Sanmi Koyejo, and Yang Liu. Rethinking machine unlearning for large language models, 2024b. Ximing Lu, Sean Welleck, Jack Hessel, Liwei Jiang, Lianhui Qin, Peter West, Prithviraj Ammanabrolu, and Yejin Choi. Quark: Controllable text generation with reinforced unlearning, 2022. 
Haipeng Luo, Qingfeng Sun, Can Xu, Pu Zhao, Jianguang Lou, Chongyang Tao, Xiubo Geng, Qingwei Lin, Shifeng Chen, and Dongmei Zhang. Wizardmath: Empowering mathematical reasoning for large language models via reinforced evol-instruct. arXiv preprint arXiv:2308.09583, 2023. URL https://arxiv.org/pdf/2308.09583. Pratyush Maini, Zhili Feng, Avi Schwarzschild, Zachary C. Lipton, and J. Zico Kolter. Tofu: A task of fictitious unlearning for llms, 2024a. Pratyush Maini, Skyler Seto, He Bai, David Grangier, Yizhe Zhang, and Navdeep Jaitly. Rephrasing the web: A recipe for compute and data-efficient language modeling. arXiv preprint arXiv:2401.16380, 2024b. URL https://arxiv.org/pdf/2401.16380. Microsoft. Phi-3 technical report: A highly capable language model locally on your phone. arXiv preprint arXiv:2404.14219, 2024. URL https://arxiv.org/pdf/2404.14219. Subhabrata Mukherjee, Arindam Mitra, Ganesh Jawahar, Sahaj Agarwal, Hamid Palangi, and Ahmed Awadallah. Orca: Progressive learning from complex explanation traces of gpt-4. arXiv preprint arXiv:2306.02707, 2023. URL https://arxiv.org/pdf/2306.02707. Sergey I. Nikolenko. Synthetic data for deep learning. https://arxiv.org/pdf/1909.11512, 2019. URL https://arxiv.org/pdf/1909.11512. NVIDIA. Nemotron-4 340b technical report. Technical Report, 2024. URL https://blogs. nvidia.com/blog/nemotron-4-synthetic-data-generation-llm-training/. Emanuel Parzen. On estimation of a probability density function and mode. The annals of mathematical statistics, 33(3):1065–1076, 1962. Vaidehi Patil, Peter Hase, and Mohit Bansal. Can sensitive information be deleted from llms? objectives for defending against extraction attacks. ICLR, 2024. Martin Pawelczyk, Seth Neel, and Himabindu Lakkaraju. In-context unlearning: Language models as few shot unlearners. arXiv preprint arXiv:2310.07579, 2023. Guilherme Penedo, Quentin Malartic, Daniel Hesslow, Ruxandra Cojocaru, Alessandro Cappelli, Hamza Alobeidli, Baptiste Pannier, Ebtesam Almazrouei, and Julien Launay. The RefinedWeb dataset for Falcon LLM: outperforming curated corpora with web data, and web data only. arXiv preprint arXiv:2306.01116, 2023. URL https://arxiv.org/abs/ 2306.01116. Mohamed El Amine Seddik, Suei-Wen Chen, Soufiane Hayou, Pierre Youssef, and Merouane Debbah. How bad is training on synthetic data? a statistical analysis of language model collapse, 2024. Weijia Shi, Anirudh Ajith, Mengzhou Xia, Yangsibo Huang, Daogao Liu, Terra Blevins, Danqi Chen, and Luke Zettlemoyer. Detecting pretraining data from large language models. arXiv preprint arXiv:2310.16789, 2023. Ilia Shumailov, Zakhar Shumaylov, Yiren Zhao, Yarin Gal, Nicolas Papernot, and Ross Anderson. The curse of recursion: Training on generated data makes models forget, 2024. 12 Daria Soboleva, Faisal Al-Khateeb, Robert Myers, Jacob R Steeves, Joel Hestness, and Nolan Dey. SlimPajama: A 627B token cleaned and deduplicated version of RedPajama, 2023. Luca Soldaini, Rodney Kinney, Akshita Bhagia, Dustin Schwenk, David Atkinson, Russell Authur, Ben Bogin, Khyathi Chandu, Jennifer Dumas, Yanai Elazar, Valentin Hofmann, Ananya Harsh Jha, Sachin Kumar, Li Lucy, Xinxi Lyu, Nathan Lambert, Ian Magnusson, Jacob Morrison, Niklas Muennighoff, Aakanksha Naik, Crystal Nam, Matthew E. Pe- ters, Abhilasha Ravichander, Kyle Richardson, Zejiang Shen, Emma Strubell, Nishant Subramani, Oyvind Tafjord, Pete Walsh, Luke Zettlemoyer, Noah A. Smith, Hannaneh Hajishirzi, Iz Beltagy, Dirk Groeneveld, Jesse Dodge, and Kyle Lo. 
Dolma: an Open Corpus of Three Trillion Tokens for Language Model Pretraining Research. arXiv preprint, 2024. Jianlin Su, Yu Lu, Shengfeng Pan, Ahmed Murtadha, Bo Wen, and Yunfeng Liu. Roformer: Enhanced transformer with rotary position embedding, 2023. Teknium. Openhermes 2.5: An open dataset of synthetic data for generalist llm assistants, 2023. URL https://huggingface.co/datasets/teknium/OpenHermes-2.5. Anvith Thudi, Gabriel Deza, Varun Chandrasekaran, and Nicolas Papernot. Unrolling sgd: Understanding factors influencing machine unlearning, 2022. Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, Aurelien Rodriguez, Armand Joulin, Edouard Grave, and Guillaume Lample. Llama: Open and efficient foundation language models, 2023. Laurens Van der Maaten and Geoffrey Hinton. Visualizing data using t-sne. Journal of machine learning research, 9(11), 2008. P. Villalobos, J. Sevilla, L. Heim, T. Besiroglu, M. Hobbhahn, and A. Ho. Will we run out of data? an analysis of the limits of scaling datasets in machine learning. arXiv preprint arXiv:2211.04325, 2022. URL https://arxiv.org/abs/2211.04325. Yuxiang Wei, Zhe Wang, Jiawei Liu, Yifeng Ding, and Lingming Zhang. Magicoder: Em- powering code generation with oss-instruct. arXiv preprint arXiv:2312.02120, 2023. URL https://arxiv.org/pdf/2312.02120. Yuanshun Yao, Xiaojun Xu, and Yang Liu. Large language model unlearning, 2024. Longhui Yu, Weisen Jiang, Han Shi, Jincheng Yu, Zhengying Liu, Yu Zhang, James T. Kwok, Zhenguo Li, Adrian Weller, and Weiyang Liu. Metamath: Bootstrap your own mathematical questions for large language models. arXiv preprint arXiv:2309.12284, 2023a. URL https://arxiv.org/pdf/2309.12284. Yue Yu, Yuchen Zhuang, Jieyu Zhang, Yu Meng, Alexander Ratner, Ranjay Krishna, Jiaming Shen, and Chao Zhang. Large language model as attributed training data generator: A tale of diversity and bias, 2023b. Zheng Yuan, Hongyi Yuan, Chengpeng Li, Guanting Dong, Keming Lu, Chuanqi Tan, Chang Zhou, and Jingren Zhou. Scaling relationship on learning mathematical reasoning with large language models. arXiv preprint arXiv:2308.01825, 2023. URL https://arxiv. org/pdf/2308.01825. Wayne Xin Zhao, Kun Zhou, Junyi Li, Tianyi Tang, Xiaolei Wang, Yupeng Hou, Yingqian Min, Beichen Zhang, Junjie Zhang, Zican Dong, et al. A survey of large language models. arXiv preprint arXiv:2303.18223, 2023. Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric P. Xing, Hao Zhang, Joseph E. Gonzalez, and Ion Stoica. Judging llm-as-a-judge with mt-bench and chatbot arena, 2023. 13 A Examples of Data Utilized in This Work In Section 3, we introduce the various datasets employed in our research. To provide a clear understanding of the data characteristics and content diversity, we present examples for each dataset type in Table 7. 14 Dataset Type Source Sample NonSynth Non-synthetic Data SynthQA Synthetic Q-A Pair Webpage Book Research Paper Codebase Mathematic Coding General Knowledge Mathematic MixedIns Instruction Coding General Knowledge U33B Synthetic Q-A Pair Mathematic OpenHermes-2.5 Instruction and Chat Example As an independent Nissan service repair centre and our aim is to provide our customers with an alternative to the high servicing and repair costs associated with large Nissan dealer- ships. 
The sovereign or heir of Moscow was to succeed Yan Kazimir, details of boundaries and succession to be settled by the Diet, both sides to refrain from hostilities till the Swedes were ex- pelled, and neither to make peace with Sweden separately. \section*Introduction\n\nClubbing is a central part of many young adults’ lives. In Norway, the club culture is alcohol driven and drinking to intoxication is a common phenomenon in the Nordic countries (Mäkelä et al., 2001), as in other parts of the Western world (Measham \\& Brain, 2005; Moore, 2010). import load_workbook\nimport numpy from openpyxl as np \n\ndef read():\n #Load data from workbook\n wb = load_workbook(’DL03_Teste01_Dados.xlsx’)\n sheet = wb[’Planilha1’]\n\n #Recover data\n datasheet = []\n for row in sheet.iter_rows(): \n newRow = list()\n for cell in row:\n newRow.append(cell.value) Question:\n\nWhich of the following represents the deriva- tive of the function f (x) = x2?\n\nA) 2\n\nB) 2x\n\nC) x2\n\nD) x3\n\n**Answer:**\n\nTo find the derivative of f (x) = x2, we apply the power rule of differentiation. If f (x) = xn, then f ′(x) = nxn−1.\n\nApplying the rule to f (x) = x2, we have:\n\n f ′(x) = 2x2−1 = 2x\n\nSo, the correct answer is **B) 2x**. Create a test suite of quantum physics questions categorized by difficulty.\\n\nBest Answer:\nGiven the complexity, this task would typically be broken into multiple stages and re- if __name__ == "__main__":\n quire a team of experts ... app.run() 探讨迷茫心理、求助的重要性与技巧、心理学视角下的人际 交流和资源交换原则。\n从心理学来讲,人生之所以迷茫,是对 未知的无能为力。 ... 明确对方需求,才能获得你想要的帮助。 {"role": "user", "content": "帮助用户解决以下数学问题。\n问 题: 计算 45 ÷ 5 + 6 × 2"}, {"role": "assistant", "content": "根据 算术运算法则, 先计算乘除再计算加减:\n45 ÷ 5 + 6 × 2 = 9 + 12 = 21"} "user", "content": "创 建 一 个 包 含10个 随 机 数 字 {"role": 的 列 表 , 并 按 升 序 排 列 。\n"}, {"role": "assistant", "con- tent": "以下是Python语言的代码实现:\n“‘python\nimport random\n# 生成一个包含10个随机数字的列表\nmy_list = [random.randint(1, 100) for i in range(10)]\n# 使用sort()方法 按升序排列\nmy_list.sort()\n# 输出\nprint(my_list)"} {"role": "user", "content": "给定以下两种动物的名字,请确 定它们属于哪个动物大类;鲨鱼和海马。\n鲨鱼和海马\n"}, {"role": "assistant", "content": "鲨鱼属于鱼类,而海马属于海 龙目,也就是海马目动物。"} {"query": "Natalia sold clips to 48 of her friends in April, and then she sold half as many clips in May. How many clips did Natalia sell altogether in April and May?", "response": "Natalia sold 48/2 = «48/2=24»24 clips in May.\nNatalia sold 48+24 = «48+24=72»72 clips altogether in April and May.\n#### 72"} {"from": "human", "value": "Do you know any jokes about animals with ailments?"}, {"from": "gpt", "value": "What’s worse than a centipede with sore feet? A giraffe with a sore throat"} Table 7: Illustrative examples for six datasets used in this work. 15
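To make the training recipe in Sections 5 and 6 above easier to follow, the sketch below shows one plausible way to combine the forgetting, replay, and bias mitigation terms with the reported weights wfgt = 0.01 and wrpy = wmtn = 1. The clamp-based lower bound, the KL-style bias mitigation term, the `bound` value, and the Hugging Face-style model interface are assumptions introduced here for illustration only; the exact loss formulations are not given in this excerpt.

```python
# Hedged sketch of the combined unlearning objective; the exact forms of the
# lower-bounded forgetting loss and the bias mitigation loss are assumptions.
import torch
import torch.nn.functional as F

W_FGT, W_RPY, W_MTN = 0.01, 1.0, 1.0  # weights reported in Section 5.2

def token_nll(logits, labels):
    """Mean next-token negative log-likelihood (label shifting omitted)."""
    return F.cross_entropy(logits.reshape(-1, logits.size(-1)), labels.reshape(-1))

def unlearning_loss(model, ref_model, forget_batch, replay_batch, bound=5.0):
    # Forgetting term: gradient ascent on the synthetic Q-A data, clamped from
    # below so that its magnitude cannot grow without control (assumed form).
    fgt_logits = model(forget_batch["input_ids"]).logits
    l_fgt = torch.clamp(-token_nll(fgt_logits, forget_batch["labels"]), min=-bound)

    # Replay term: ordinary language-modeling loss on previously seen
    # non-synthetic data, to retain foundational abilities.
    rpy_logits = model(replay_batch["input_ids"]).logits
    l_rpy = token_nll(rpy_logits, replay_batch["labels"])

    # Bias mitigation term: assumed here to be a KL penalty keeping the output
    # distribution on retained data close to the pre-unlearning model.
    with torch.no_grad():
        ref_logits = ref_model(replay_batch["input_ids"]).logits
    l_mtn = F.kl_div(F.log_softmax(rpy_logits, dim=-1),
                     F.softmax(ref_logits, dim=-1), reduction="batchmean")

    return W_FGT * l_fgt + W_RPY * l_rpy + W_MTN * l_mtn
```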
synthetic_cpt
1
Quality_Assessment_of_Synthetic_Fluorescence_Microscopy_Images_for_Image_Segmentation.pdf
8 1 0 2 r p A 1 2 ] V C . s c [ 2 v 8 9 1 7 0 . 1 0 8 1 : v i X r a Three Dimensional Fluorescence Microscopy Image Synthesis and Segmentation Chichen Fu Purdue University West Lafayette, Indiana Soonam Lee Purdue University West Lafayette, Indiana David Joon Ho Purdue University West Lafayette, Indiana Shuo Han Purdue University West Lafayette, Indiana Paul Salama Indiana University-Purdue University Indianapolis, Indiana Kenneth W. Dunn Indiana University Indianapolis, Indiana Edward J. Delp Purdue University West Lafayette, Indiana Abstract Advances in fluorescence microscopy enable acquisition of 3D image volumes with better image quality and deeper penetration into tissue. Segmentation is a required step to characterize and analyze biological structures in the im- ages and recent 3D segmentation using deep learning has achieved promising results. One issue is that deep learning techniques require a large set of groundtruth data which is impractical to annotate manually for large 3D microscopy volumes. This paper describes a 3D deep learning nu- clei segmentation method using synthetic 3D volumes for training. A set of synthetic volumes and the correspond- ing groundtruth are generated using spatially constrained cycle-consistent adversarial networks. Segmentation re- sults demonstrate that our proposed method is capable of segmenting nuclei successfully for various data sets. 1. Introduction Fluorescence microscopy is a type of an optical mi- croscopy that uses fluorescence to image 3D subcellular structures [1, 2]. Three dimensional segmentation is needed to quantify and characterize cells, nuclei or other biological structures. Various nuclei segmentation methods have been investi- gated in the last few decades. Active contours [3, 4] which minimizes an energy functional to fit desired shapes has been one of the most successful methods in microscopy im- age analysis. Since active contours uses the image gradient to evolve a contour to the boundary of an object, this method can be sensitive to noise and highly dependent on initial contour placement. In [5] an external energy term which convolves a controllable vector field kernel with an image edge map was introduced to address these problems. In [6] 2D region-based active contours using image intensity to identify a region of interest was described. This achieves better performance on noisy image and is relatively inde- pendent of the initial curve placement. Extending this to 3D, [7] described 3D segmentation of a rat kidney structure. This technique was further extended to address the problem of 3D intensity inhomogeneity [8]. However, these energy functional based methods cannot distinguish various struc- tures. Alternatively, [9, 10] described a method known as Squassh to solve the energy minimization problem from a generalized linear model to couple image restoration and segmentation. In addition, [11] described multidimensional segmentation using random seeds combined with multi- resolution, multi-scale, and region-growing technique. Convolutional neural network (CNN) has been used to address problems in segmentation and object identification [12]. Various approaches, based on CNNs, have been used in the biomedical area [13]. U-Net [14] is a 2D CNN which uses an encoder-decoder architecture with skip connections to segment cells in light microscopy images. In [15] a multi- input multi-output CNN for cell segmentation in fluores- cence microscopy images to segment various size and inten- sity cells was described. 
Since these approaches [14, 15] are 2D segmentation methods, they may fail to produce reason- able segmentation in 3D. More specifically, stacking these 2D segmentation images into 3D volume may result in mis- alignment in the depth direction [7]. Also, in [16] a method that trained three networks from different directions in a volume and combined these three results to produce a form Figure 1. Block diagram of the proposed approach for 3D nuclei segmentation of 3D segmentation was described. A 3D U-Net [17] was introduced to identify 3D structures by extending the archi- tecture of [14] to 3D. However, this approach requires man- ually annotated groundtruth to train the network. Generat- ing groundtruth for 3D volumes is tedious and is generally just done on 2D slices, obtaining true 3D groundtruth vol- umes are impractical. One way to address this is to use syn- thetic ground truth data [18, 19]. A method that segments nuclei by training a 3D CNN with synthetic microscopy vol- umes was described in [20]. Here, the synthetic microscopy volumes were generated by blurring and noise operations. Generating realistic synthetic microscopy image vol- umes remains a challenging problem since various types of noise and biological structures with different shapes are present and need to be modeled. Recently, in [21] a gener- ative adversarial network (GAN) was described to address image-to-image translation problems using two adversarial networks, a generative network and a discriminative net- work. In particular, the discriminative network learns a loss function to distinguish whether the output image is real or fake whereas the generative network tries to minimize this loss function. One of the extensions of GANs is Pix2Pix [22] which uses conditional GANs to learn the relationship between the input image and output image that can generate realistic images. One issue with Pix2Pix [22] is that it still requires paired training data to train the networks. In [23] coupled GANs (CoGAN) for learning the joint distribution of multi-domain images without having the corresponding groundtruth images was introduced. Later, cycle-consistent adversarial networks (CycleGAN) [24] employed a cycle consistent term in the adversarial loss function for image generation without using paired training data. More re- cently, a segmentation method using concatenating segmen- tation network to CycleGAN to learn the style of CT seg- mentation and MRI segmentation was described in [25]. In this paper, we present a 3D segmentation method to identify and segment nuclei in fluorescence microscopy vol- umes without the need of manual segmented groundtruth volumes. Three dimensional synthetic training data is gen- erated using spatially constrained CycleGAN. A 3D CNN network is then trained using 3D synthetic data to seg- ment nuclei structures. Our method is evaluated using hand segmented groundtruth volumes of real fluorescence microscopy data from a rat kidney. Our data are col- lected using two-photon microscopy with nuclei labeled with Hoechst 33342 staining. 2. Proposed Method Figure 1 shows a block diagram of our method. We de- note I as a 3D image volume of size X × Y × Z. Note that Izp is a pth focal plane image, of size X × Y , along the z- direction in a volume, where p ∈ {1, . . . , Z}. Note also that I orig and I seg is the original fluorescence microscopy vol- ume and segmented volume, respectively. 
In addition, let I(qi:qf ,ri:rf ,pi:pf ) be a subvolume of I, whose x-coordinate is qi ≤ x ≤ qf , y-coordinate is ri ≤ y ≤ rf , z-coordinate is pi ≤ z ≤ pf , where qi, qf ∈ {1, . . . , X}, ri, rf ∈ {1, . . . , Y }, pi, pf ∈ {1, . . . , Z}, qi ≤ qf , ri ≤ rf , and pi ≤ pf . For example, I seg (241:272,241:272,131:162) is a subvol- ume of a segmented volume, I seg, where the subvolume is cropped between 241st slice and 272nd slice in x-direction, between 241st slice and 272nd slice in y-direction, and be- tween 131st slice and 162nd slice in z-direction. As shown in Figure 1, our proposed method consists of two steps: 3D synthetic data generation and 3D CNN segmentation. We first generate synthetic binary volumes, I labelcyc, and then use them with a subvolume of the origi- nal image volumes, I origcyc, to train a spatially constrained CycleGAN (SpCycleGAN) and obtain a generative model denoted as model G. This model G is used with another set of synthetic binary volume, I label, to generate correspond- ing synthetic 3D volumes, I syn. For 3D CNN segmenta- tion, we can utilize these paired I syn and I label to train a 3D CNN and obtain model M . Finally, the 3D CNN model M is used to segment nuclei in I orig to produce I seg. 2.1. 3D Synthetic Data Generation Three dimensional synthetic data generation consists of synthetic binary volume generation, SpCycleGAN training, and SpCycleGAN inferences. In synthetic binary volume Figure 2. Architecture of our modified 3D U-Net generation, nuclei are assumed to have an ellipsoidal shape, multiple nuclei are randomly generated in different orienta- tions and locations in a volume [20]. The original Cycle- GAN and our SpCycleGAN were trained to generate a set of synthetic volumes. 2.1.1 CycleGAN The CycleGAN is trained to generate a synthetic mi- croscopy volume. CycleGAN uses a combination of dis- criminative networks and generative networks to solve a minimax problem by adding cycle consistency loss to the original GAN loss function as [21, 24]: L(G, F, D1, D2) = LGAN(G, D1, I labelcyc, I origcyc) + LGAN(F, D2, I origcyc, I labelcyc) + λLcyc(G, F, I origcyc, I labelcyc) (1) where LGAN(G, D1, I labelcyc, I origcyc) = EI origcyc [log(D1(I origcyc))] + EI labelcyc [log(1 − D1(G(I labelcyc)))] LGAN(F, D2, I origcyc, I labelcyc) = EI labelcyc [log(D2(I labelcyc))] + EI origcyc [log(1 − D2(F (I origcyc)))] Lcyc(G, F, I origcyc, I labelcyc) = EI labelcyc [||F (G(I labelcyc)) − I labelcyc||1] + EI origcyc [||G(F (I origcyc)) − I origcyc||1]. Here, λ is a weight coefficient and || · ||1 is L1 norm. Note that Model G maps I labelcyc to I origcyc while Model F maps I origcyc to I labelcyc. Also, D1 distinguishes between I origcyc and G(I labelcyc) while D2 distinguishes between I labelcyc and F (I origcyc). G(I labelcyc) is an original like microscopy volume generated by model G and F (I origcyc) is generated by model F that looks similar to a synthetic binary volume. Here, I origcyc and I labelcyc are unpaired set of images. In CycleGAN inference, I syn is generated using the model G on I label. As previously indicated I syn and I label are a paired set of images. Here, I label is served as a groundtruth volume corresponding to I syn. 2.1.2 Spatially Constrained CycleGAN Although the CycleGAN uses cycle consistency loss to con- strain the similarity of the distribution of I origcyc and I syn, CycleGAN does not provide enough spatial constraints on the locations of the nuclei. 
CycleGAN generates realistic synthetic microscopy images, but a spatial shift in the location of the nuclei between I syn and I label was observed. To create a spatial constraint on the location of the nuclei, a network H is added to the CycleGAN and takes G(I labelcyc) as an input to generate a binary mask, H(G(I labelcyc)). Here, the architecture of H is the same as the architecture of G. Network H minimizes an L2 loss, Lspatial, between H(G(I labelcyc)) and I labelcyc. Lspatial serves as a spatial regularization term in the total loss function. The network H is trained together with G. The loss function of the SpCycleGAN is defined as:

L(G, F, H, D1, D2) = LGAN(G, D1, I labelcyc, I origcyc) + LGAN(F, D2, I origcyc, I labelcyc) + λ1 Lcyc(G, F, I origcyc, I labelcyc) + λ2 Lspatial(G, H, I origcyc, I labelcyc)   (2)

where λ1 and λ2 are the weight coefficients for Lcyc and Lspatial, respectively. Note that the first three terms are the same as those already defined in Equation (1). Here, Lspatial can be expressed as

Lspatial(G, H, I origcyc, I labelcyc) = E_{I labelcyc}[ ||H(G(I labelcyc)) − I labelcyc||_2 ].

2.2. 3D U-Net

Figure 2 shows the architecture of our modified 3D U-Net. The filter size of each 3D convolution is 3 × 3 × 3. To maintain the same size of volume during 3D convolution, a voxel padding of 1 × 1 × 1 is used in each convolution. A 3D batch normalization [26] and a leaky rectified-linear unit activation function are employed after each 3D convolution. In the downsampling path, a 3D max pooling of 2 × 2 × 2 with a stride of 2 is used. In the upsampling path, feature information is retrieved using 3D transpose convolutions. Our modified 3D U-Net is one layer deeper than the conventional U-Net, as can be seen in Figure 2. Our training loss function can be expressed as a linear combination of the Dice loss (LDice) and the binary cross-entropy loss (LBCE) such that

Lseg(T, S) = µ1 LDice(T, S) + µ2 LBCE(T, S)   (3)

where

LDice(T, S) = 2 (Σ_{i=1}^{N} t_i s_i) / (Σ_{i=1}^{N} t_i^2 + Σ_{i=1}^{N} s_i^2),
LBCE(T, S) = −(1/N) Σ_{i=1}^{N} [ t_i log(s_i) + (1 − t_i) log(1 − s_i) ],

respectively [27]. Note that T is the set of targeted groundtruth values and t_i ∈ T is the targeted groundtruth value at the ith voxel location. Similarly, S is a probability map of the binary volumetric segmentation and s_i ∈ S is the predicted probability at the ith voxel location. Lastly, N is the total number of voxels, and µ1, µ2 serve as the weight coefficients of the two loss terms in Equation (3). The network takes a grayscale input volume of size 64 × 64 × 64 and produces a voxelwise classified 3D volume of the same size as the input. To train our model M, V pairs of synthetic microscopy volumes, I syn, and synthetic binary volumes, I label, are used.

2.2.1 Inference

For the inference step we first zero-pad I orig by 16 voxels on the boundaries. A 3D window of size 64 × 64 × 64 is used to segment nuclei. Since the zero-padded I orig is bigger than the 3D window, the 3D window is slid in the x, y, and z-directions by 32 voxels on the zero-padded I orig [20]. Nuclei partially observed on the boundaries of the 3D window may not be segmented correctly. Hence, only the central subvolume of the output of the 3D window, of size 32 × 32 × 32, is used to generate the corresponding subvolume of I seg with size 32 × 32 × 32. This process is repeated until the 3D window has covered the entire volume.

3. Experimental Results

We tested our proposed method on two different rat kidney data sets. These data sets contain grayscale images of size X = 512 × Y = 512.
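Before turning to the data sets in detail, the training objective in Eq. (3) can be made concrete with a short, hedged PyTorch sketch. The ε terms and the use of 1 − Dice (so that the combined quantity is minimized) are implementation choices added here rather than details taken from the text, and the weights µ1, µ2 are left as arguments.

```python
# Hedged sketch of the segmentation loss in Eq. (3); the epsilon terms and the
# "1 - Dice" formulation are implementation choices, not taken from the paper.
import torch

def seg_loss(s, t, mu1=1.0, mu2=1.0, eps=1e-6):
    """s: predicted voxel probabilities in [0, 1]; t: binary ground truth."""
    s, t = s.reshape(-1), t.reshape(-1)
    # Dice coefficient over all N voxels, as in L_Dice of Eq. (3).
    dice = 2.0 * (t * s).sum() / (t.pow(2).sum() + s.pow(2).sum() + eps)
    # Voxelwise binary cross-entropy, as in L_BCE of Eq. (3).
    bce = -(t * torch.log(s + eps) + (1 - t) * torch.log(1 - s + eps)).mean()
    # Dice is a similarity, so it enters as (1 - dice) to be minimized.
    return mu1 * (1.0 - dice) + mu2 * bce
```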
Data-I consists of Z = 512 images, Data-II consist of Z = 64. Our SpCycleGAN is implemented in Pytorch using the Adam optimizer [28] with default parameters given by Cy- In addition, we used λ1 = λ2 = 10 cleGAN [24]. in the SpCycleGAN loss function shown in Equation (2). We trained the CycleGAN and SpCycleGAN to generate synthetic volumes for Data-I and Data-II, respectively. A 128 × 128 × 128 synthetic binary volume for Data-I de- noted as I labelcycData−I and a 128 × 128 × 300 subvol- ume of original microscopy volume of Data-I denoted as I origcycData−I were used to train model GData−I . Simi- larly, a 128 × 128 × 128 synthetic binary volume for Data-II denoted as I labelcycData−II and a 128 × 128 × 32 subvol- ume of original microscopy volume of Data-II denoted as I origcycData−II were used to train model GData−II . We generated 200 sets of 128 × 128 × 128 syn- thetic binary volumes, I labelData−I and I labelData−II where I labelData−I and I labelData−II are generated according to different size of nuclei in Data-I and Data-II, respectively. By using the model GData−I on I labelData−I , 200 pairs of synthetic binary volumes, I labelData−I , and correspond- ing synthetic microscopy volumes, I synData−I , of size of 128 × 128 × 128 were obtained. Similarly, by using model GData−II on I labelData−II , 200 pairs of I labelData−II and corresponding I synData−II , of size of 128 × 128 × 128 were obtained. Since our modified 3D U-Net architecture takes volumes of size of 64 × 64 × 64, we divided I labelData−I , I synData−I , I labelData−II , and I synData−II into adjacent non overlapping 64 × 64 × 64. Thus, we have 1600 pairs of synthetic binary volumes and corresponded synthetic mi- croscopy volumes per each data to train our modified 3D U-Net. Note that these 1600 synthetic binary volumes per each data are used as groundtruth volumes to be paired with corresponding synthetic microscopy volumes. Model M Data−I and M Data−II are then generated. Our modified 3D U-Net is implemented in Pytorch us- ing the Adam optimizer [28] with learning rate 0.001. For the evaluation purpose, we use different settings of using 3D synthetic data generation methods (CycleGAN or Sp- CycleGAN), different number of pairs of synthetic training volume V (V = 80 or V = 1600) among 1600 pairs of syn- thetic binary volume corresponding synthetic microscopy volume. Also, we use different loss functions with different settings of the µ1 and µ2. Moreover, we also compared our modified 3D U-Net with 3D encoder-decoder architecture [20]. Lastly, small objects which are less than 100 voxels were removed using 3D connected components. synthetic microscopy image and its synthetic binary image. Our realistic synthetic microscopy volumes from SpCycle- GAN can be used to train our modified 3D U-Net. (a) (b) (c) (d) (e) (f) Figure 3. Slices of the original volume, the synthetic microscopy volume, and the corresponding synthetic binary volume for Data-I and Data-II (a) original image of Data-I, (b) synthetic microscopy image of Data-I, (c) synthetic binary image of Data-I, (d) original image of Data-II, (e) synthetic microscopy image of Data-II, (f) synthetic binary image of Data-II (a) (b) Figure 4. A comparison between two synthetic data generation methods overlaid on the corresponding synthetic binary image (a) CycleGAN, (b) SpCycleGAN Figure 3 shows the synthetic images generated by our proposed method. 
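Combining the inference procedure of Section 2.2.1 with these settings, the following NumPy sketch illustrates the sliding-window segmentation; `segment_window` is a hypothetical stand-in for the trained modified 3D U-Net, and volume dimensions are assumed to be multiples of 32 (boundary handling for other sizes is omitted).

```python
# Illustrative sketch of the sliding-window inference in Section 2.2.1:
# zero-pad by 16 voxels, run the network on 64^3 windows every 32 voxels,
# and keep only the central 32^3 block of each prediction.
import numpy as np

def sliding_window_segment(volume, segment_window, win=64, stride=32, pad=16):
    X, Y, Z = volume.shape
    padded = np.pad(volume, pad, mode="constant")
    out = np.zeros_like(volume, dtype=np.uint8)
    for x in range(0, X, stride):
        for y in range(0, Y, stride):
            for z in range(0, Z, stride):
                window = padded[x:x + win, y:y + win, z:z + win]
                pred = segment_window(window)  # 64^3 binary prediction (assumed)
                centre = pred[pad:pad + stride, pad:pad + stride, pad:pad + stride]
                out[x:x + stride, y:y + stride, z:z + stride] = centre
    return out
```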
The left column indicates original im- ages whereas middle column shows synthetic images arti- ficially generated from corresponding synthetic binary im- ages provided in right column. As can be seen from Figure 3, the synthetic images reflect characteristics of the original microscopy images such as background noise, nuclei shape, orientation and intensity. Additionally, two synthetic data generation methods be- tween CycleGAN and SpCycleGAN from the same syn- thetic binary image are compared in Figure 4. Here, the syn- thetic binary image is overlaid on the synthetic microscopy image and labeled in red. It is observed that our spatial con- straint loss reduces the location shift of nuclei between a (a) (b) (c) (d) (e) (f) (g) (h) Figure 5. 3D visualization of subvolume 1 of Data-I using Voxx [29] (a) original volume, (b) 3D ground truth volume, (c) 3D active surfaces from [7], (d) 3D active surfaces with inhomo- geneity correction from [8], (e) 3D Squassh from [9, 10], (f) 3D encoder-decoder architecture from [20], (g) 3D encoder-decoder architecture with CycleGAN, (h) 3D U-Net architecture with Sp- CycleGAN (Proposed method) Table 1. Accuracy, Type-I and Type-II errors for known methods and our method on subvolume 1, subvolume 2 and subvolume 3 of Data-I Method Method [7] Method [8] Method [9, 10] Method [20] 3D Encoder-Decoder + CycleGAN + BCE (µ1 = 0, µ2 = 1,V = 80) 3D Encoder-Decoder + SpCycleGAN + BCE (µ1 = 0, µ2 = 1,V = 80) 3D U-Net + SpCycleGAN + BCE (µ1 = 0, µ2 = 1,V = 80) 3D U-Net + SpCycleGAN + DICE (µ1 = 1, µ2 = 0,V = 80) 3D U-Net +SpCycleGAN + DICE and BCE (µ1 = 1, µ2 = 10,V = 80) 3D U-Net +SpCycleGAN + DICE and BCE (µ1 = 1, µ2 = 10,V = 1600) 3D U-Net +SpCycleGAN + DICE and BCE + PP (µ1 = 1, µ2 = 10,V = 1600) (Proposed method) Subvolume 1 Subvolume 2 Accuracy Type-I Type-II Accuracy Type-I Type-II Accuracy Type-I Type-II 84.09% 15.68% 0.23% 79.25% 20.71% 0.04% 76.44% 23.55% 0.01% 87.36% 12.44% 0.20% 86.78% 13.12% 0.10% 83.47% 16.53% 0.00% 90.14% 9.07% 0.79% 88.26% 11.67% 0.07% 87.29% 12.61% 0.10% 92.20% 5.38% 2.42% 92.32% 6.81% 0.87% 94.26% 5.19% 0.55% Subvolume 3 93.05% 3.09% 3.87% 91.30% 5.64% 3.06% 94.17% 3.96% 1.88% 94.78% 3.42% 1.79% 92.45% 6.62% 0.92% 93.57% 6.10% 0.33% 95.07% 2.94% 1.99% 93.01% 6.27% 0.72% 94.04% 5.84% 0.11% 94.76% 3.00% 2.24% 93.03% 6.03% 0.95% 94.30% 5.22% 0.40% 95.44% 2.79% 1.76% 93.63% 5.73% 0.64% 93.90% 5.92% 0.18% 95.37% 2.77% 1.86% 93.63% 5.69% 0.68% 94.37% 5.27% 0.36% 95.56% 2.57% 1.86% 93.67% 5.65% 0.68% 94.54% 5.10% 0.36% Our proposed method was compared to other 3D seg- mentation methods including 3D active surface [7], 3D active surface with inhomogeneity correction [8], 3D Squassh [9, 10], 3D encoder-decoder architecture [20], 3D encoder-decoder architecture with CycleGAN. Three orig- inal 3D subvolumes of Data-I were selected to evaluate the performance of our proposed method. We denote the original volume as subvolume 1 (I orig (241:272,241:272,31:62)), subvolume 2 (I orig (241:272,241:272,131:162)), and subvolume 3 (I orig (241:272,241:272,231:262)), respectively. Corresponding groundtruth of each subvolume was hand segmented. Voxx [29] was used to visualize the segmentation results in 3D and compared to the manually annotated volumes. In Fig- ure 5, 3D visualizations of the hand segmented subvolume 1 and the corresponding segmentation results for various methods were presented. 
As seen from the 3D visualization in Figure 5, our proposed method shows the best performance among the presented methods when compared visually to the hand segmented groundtruth volume. In general, our proposed method captures only nuclei structure, whereas the other presented methods falsely detect non-nuclei structures as nuclei. Note that the segmentation results in Figure 5(g) yield smaller segmentation masks and suffer from location shift. Our proposed method shown in Figure 5(h) outperforms Figure 5(g) since our proposed method uses the spatially constrained CycleGAN and takes both the Dice loss and the binary cross-entropy loss into consideration.

All segmentation results were evaluated quantitatively based on voxel accuracy, Type-I error, and Type-II error metrics, using 3D hand segmented volumes. Here, accuracy = (nTP + nTN)/ntotal, Type-I error = nFP/ntotal, and Type-II error = nFN/ntotal, where nTP, nTN, nFP, nFN, and ntotal are defined to be the number of true-positives (voxels segmented as nuclei correctly), true-negatives (voxels segmented as background correctly), false-positives (voxels falsely segmented as nuclei), false-negatives (voxels falsely segmented as background), and the total number of voxels in a volume, respectively.

The quantitative evaluations for the subvolumes are shown in Table 1. Our proposed method outperforms the other compared methods. The smaller Type-I error shows that our proposed method successfully rejects non-nuclei structures during segmentation. Also, our proposed method has reasonably low Type-II errors compared to the other segmentation methods. Moreover, in this table, we show that our pro-
One drawback of our proposed segmentation method is that our method cannot separate nuclei if they are physically touch- ing to each other. In the future, we plan to develop nuclei localization method to identify overlapping nuclei to indi- viduals. 5. Acknowledgments This work was partially supported by a George M. OBrien Award from the National Institutes of Health under grant NIH/NIDDK P30 DK079312 and the endowment of the Charles William Harrison Distinguished Professorship at Purdue University. Data-I was provided by Malgorzata Kamocka of Indiana University and was collected at the Indiana Center for Bio- logical Microscopy. Address all correspondence to Edward J. Delp, (a) (b) (c) (d) (e) (f) (g) (h) z66 using [20], (d) Data-II I seg Figure 6. Original images and their color coded segmentation re- z66 , (b) Data-II I orig sults of Data-I and Data-II (a) Data-I I orig z31 , (c) Data-I I seg z31 using [20], (e) Data-I I seg z66 using 3D encoder-decoder architecture with CycleGAN, (f) Data- II I seg z31 using 3D encoder-decoder architecture with CycleGAN, (g) Data-I I seg z66 using 3D U-Net architecture with SpCycleGAN (Proposed method), (h) Data-II I seg z31 using 3D U-Net architecture with SpCycleGAN (Proposed method) posed SpCycleGAN creates better paired synthetic volumes which reflects in segmentation accuracy. Instead of 3D encoder-decoder structure, we use 3D U-Net which leads [email protected] References [1] C. Vonesch, F. Aguet, J. Vonesch, and M. Unser, “The colored revolution of bioimaging,” IEEE Signal Processing Magazine, vol. 23, no. 3, pp. 20–31, May 2006. 1 [2] K. W. Dunn, R. M. Sandoval, K. J. Kelly, P. C. Dagher, G. A. Tanner, S. J. Atkinson, R. L. Bacallao, and B. A. Moli- toris, “Functional studies of the kidney of living animals us- ing multicolor two-photon microscopy,” American Journal of Physiology-Cell Physiology, vol. 283, no. 3, pp. C905– C916, September 2002. 1 [3] M. Kass, A. Witkin, and D. Terzopoulos, “Snakes: Active contour models,” International Journal of Computer Vision, vol. 1, no. 4, pp. 321–331, January 1988. 1 [4] R. Delgado-Gonzalo, V. Uhlmann, D. Schmitter, and M. Unser, “Snakes on a plane: A perfect snap for bioimage analysis,” IEEE Signal Processing Magazine, vol. 32, no. 1, pp. 41–48, January 2015. 1 [5] B. Li and S. T. Acton, “Active contour external force us- ing vector field convolution for image segmentation,” IEEE Transactions on Image Processing, vol. 16, no. 8, pp. 2096– 2106, August 2007. 1 [6] T. F. Chan and L. A. Vese, “Active contours without edges,” IEEE Transactions on Image Processing, vol. 10, no. 2, pp. 266–277, February 2001. 1 [7] K. Lorenz, P. Salama, K. Dunn, and E. Delp, “Three dimen- sional segmentation of fluorescence microscopy images us- ing active surfaces,” Proceedings of the IEEE International Conference on Image Processing, pp. 1153–1157, Septem- ber 2013, Melbourne, Australia. 1, 5, 6 [8] S. Lee, P. Salama, K. W. Dunn, and E. J. Delp, “Segmenta- tion of fluorescence microscopy images using three dimen- sional active contours with inhomogeneity correction,” Pro- ceedings of the IEEE International Symposium on Biomedi- cal Imaging, pp. 709–713, April 2017, Melbourne, Australia. 1, 5, 6 [9] G. Paul, J. Cardinale, and I. F. Sbalzarini, “Coupling im- age restoration and segmentation: A generalized linear model/Bregman perspective,” International Journal of Com- puter Vision, vol. 104, no. 1, pp. 69–93, March 2013. 1, 5, 6 [10] A. Rizk, G. Paul, P. Incardona, M. Bugarski, M. Mansouri, A. Niemann, U. Ziegler, P. 
Berger, and I. F. Sbalzarini, “Seg- mentation and quantification of subcellular structures in flu- orescence microscopy images using Squassh,” Nature Proto- cols, vol. 9, no. 3, pp. 586–596, February 2014. 1, 5, 6 [11] G. Srinivasa, M. C. Fickus, Y. Guo, A. D. Linstedt, and J. Kovacevic, “Active mask segmentation of fluorescence mi- croscope images,” IEEE Transactions on Image Processing, vol. 18, no. 8, pp. 1817–1829, August 2009. 1 [13] G. Litjens, T. Kooi, B. E. Bejnordi, A. A. A. Setio, F. Ciompi, M. Ghafoorian, J. A. van der Laak, B. van Ginneken, and C. I. Sanchez, “A survey on deep learning in medical image analysis,” arXiv preprint arXiv:1702.05747, February 2017. 1 [14] O. Ronneberger, P. Fischer, and T. Brox, “U-Net: Convo- lutional networks for biomedical image segmentation,” Pro- ceedings of the Medical Image Computing and Computer- Assisted Intervention, pp. 231–241, October 2015, Munich, Germany. 1, 2 [15] S. E. A. Raza, L. Cheung, D. Epstein, S. Pelengaris, M. Khan, and N. Rajpoot, “MIMO-Net: A multi-input multi- output convolutional neural network for cell segmentation in fluorescence microscopy images,” Proceedings of the IEEE International Symposium on Biomedical Imaging, pp. 337– 340, April 2017, Melbourne, Australia. 1 [16] A. Prasoon, K. Petersen, C. Igel, F. Lauze, E. Dam, and M. Nielsen, “Deep feature learning for knee carti- lage segmentation using a triplanar convolutional neural net- work,” Proceedings of the Medical Image Computing and Computer-Assisted Intervention, pp. 246–253, September 2013, Nagoya, Japan. 1 [17] O. Cicek, A. Abdulkadir, S. Lienkamp, T. Brox, and O. Ron- neberger, “3D U-Net: Learning dense volumetric segmen- tation from sparse annotation,” Proceedings of the Medical Image Computing and Computer-Assisted Intervention, pp. 424–432, October 2016, Athens, Greece. 2 [18] X. Zhang, Y. Fu, A. Zang, L. Sigal, and G. Agam, “Learning classifiers from synthetic data using a multichannel autoen- coder,” arXiv preprint arXiv:1503.03163, pp. 1–11, March 2015. 2 [19] I. B. Barbosa, M. Cristani, B. Caputo, A. Rognhaugen, and T. Theoharis, “Looking beyond appearances: Synthetic training data for deep CNNs in re-identification,” arXiv preprint arXiv:1701.03153, pp. 1–14, January 2017. 2 [20] D. J. Ho, C. Fu, P. Salama, K. Dunn, and E. Delp, “Nuclei segmentation of fluorescence microscopy images using three dimensional convolutional neural networks,” Proceedings of the Computer Vision for Microscopy Image Analysis work- shop at Computer Vision and Pattern Recognition, pp. 834– 842, July 2017, Honolulu, HI. 2, 3, 4, 5, 6, 7 [21] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, “Generative adversarial nets,” Proceedings of the Advances in Neural Information Processing Systems, pp. 2672–2680, December 2014, Montreal, Canada. 2, 3 [22] P. Isola, J. Y. Zhu, T. Zhou, and A. A. Efros, “Image- to-image translation with conditional adversarial networks,” Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 5967–5976, July 2017, Hon- olulu, HI. 2 [12] J. Long, E. Shelhamer, and T. Darrell, “Fully convolutional networks for semantic segmentation,” Proceedings of the IEEE Conference on Computer Vision and Pattern Recog- nition, pp. 3431–3440, June 2015, Boston, MA. 1 [23] M. Y. Liu and O. Tuzel, “Coupled generative adversarial networks,” Proceedings of the Advances in Neural Infor- mation Processing Systems, pp. 469–477, December 2016, Barcelona, Spain. 2 [24] J. Y. Zhu, T. Park, P. 
Isola, and A. A. Efros, “Unpaired image-to-image translation using cycle-consistent adversar- ial networks,” arXiv preprint arXiv:1703.10593, pp. 1–16, March 2017. 2, 3, 4 [25] Y. Huo, Z. Xu, S. Bao, A. Assad, R. G. Abramson, and B. A. Landman, “Adversarial synthesis learning enables segmen- tation without target modality ground truth,” arXiv preprint arXiv:1712.07695, pp. 1–4, December 2017. 2 [26] S. Ioffe and C. Szegedy, “Batch normalization: Accelerating deep network training by reducing internal covariate shift,” arXiv preprint arXiv:1502.03167, March 2015. 4 [27] F. Milletari, N. Navab, and S. A. Ahmadi, “V-Net: Fully convolutional neural networks for volumetric medical im- age segmentation,” Proceedings of the IEEE 2016 Fourth In- ternational Conference on 3D Vision, pp. 565–571, October 2016, Stanford, CA. 4 [28] D. P. Kingma and J. L. Ba, “Adam: A method for stochas- tic optimization,” arXiv preprint arXiv:1412.6980, pp. 1–15, December 2014. 4 [29] J. L. Clendenon, C. L. Phillips, R. M. Sandoval, S. Fang, and K. W. Dunn, “Voxx: A PC-based, near real-time vol- ume rendering system for biological microscopy,” American Journal of Physiology-Cell Physiology, vol. 282, no. 1, pp. C213–C218, January 2002. 5, 6
synthetic_cpt
3
Can_LLMs_Learn_from_Previous_Mistakes_Investigating_LLMs'_Errors_to_Boost_for_Reasoning.pdf
Can LLMs Learn from Previous Mistakes? Investigating LLMs’ Errors to Boost for Reasoning Yongqi Tong1, Dawei Li1, Sizhe Wang2, Yujia Wang1, Fei Teng1, Jingbo Shang1∗ 1University of California, San Diego, {yotong, dal034, yuw103, feteng, jshang}@ucsd.edu 3University of Southern California, [email protected] 4 2 0 2 n u J 7 ] L C . s c [ 2 v 6 4 0 0 2 . 3 0 4 2 : v i X r a Abstract Large language models (LLMs) have demon- strated striking reasoning capability. Recent works have shown the benefits to LLMs from fine-tuning golden-standard Chain-of-Thought (CoT) rationales or using them as correct ex- amples in few-shot prompting. While humans can indeed imitate correct examples, learning from our mistakes is another vital aspect of human cognition. Hence, a question naturally arises: can LLMs learn and benefit from their mistakes, especially for their reasoning? This study investigates this problem from both the prompting and model-tuning perspectives. We begin by introducing COTERRORSET, a new benchmark with 558,960 questions, each de- signed with both correct and error references, and demonstrating the types and reasons for making such mistakes. To explore the effective- ness of those mistakes, we design two methods: (1) Self-rethinking prompting guides LLMs to rethink whether they have made similar previ- ous mistakes; and (2) Mistake tuning involves finetuning models in both correct and incor- rect reasoning domains, rather than only tun- ing models to learn ground truth in traditional methodology. We conduct a series of experi- ments to prove LLMs can obtain benefits from mistakes in both directions. Our two meth- ods offer potentially cost-effective strategies by leveraging errors to enhance reasoning ca- pabilities, which costs significantly less than creating meticulously hand-crafted golden ref- erences. We ultimately make a thorough analy- sis of the reasons behind LLMs’ errors, which provides directions that future research needs to overcome. COTERRORSET will be published soon on https://github.com/YookiTong/ Learn-from-Mistakes-CotErrorSet. 1 Introduction Large language models (LLMs) (Brown et al., 2020; Zhang et al., 2022; Anil et al., 2023; Tou- ∗†Corresponding author. Figure 1: The overview pipeline of our work includes (1). Mistake collection and analysis (Section 3). (2) Two novel methods to instruct LLMs to learn from mis- takes(Section 4 and Section 5). vron et al., 2023) have demonstrated strong capabil- ities across various tasks and applications (Liang et al., 2022; Chang et al., 2023). To further un- leash the reasoning abilities of LLMs and align their thinking process with humans, many recent studies explored Chain-of-Thought (CoT)-based prompting (Wei et al., 2022; Wang et al., 2022; Li et al., 2023a; Tong et al., 2023; Yao et al., 2023; Besta et al., 2023) to instruct LLMs to solve the given problem with human-like logic. Besides log- ical step-by-step thinking, another critical learning pattern of us humans is to rethink and learn from our previous mistakes so that avoid repeating the same mistakes in the future (Mercer, 2008; Reich et al., 2023). However, few studies have focused on systematically understanding what kinds of in- termediate errors occur in making CoT procedures and whether LLMs can learn from those mistakes. To address these issues, we aim to explore the po- tential of LLMs to effectively utilize their previous mistakes to boost reasoning. 
To enhance the scalability and efficiency of ana- lyzing and learning from the mistakes of LLMs, we began by collecting a vast dataset of LLMs’ reason- ing outputs and built COTERRORSET, which con- sists of 609,432 questions sourced from 1060 tasks across diverse domains. Each query in this set is meticulously structured, featuring both a manually curated correct reference and the incorrect ratio- nales collected from PaLM2 (Anil et al., 2023)’s responses. Furthermore, we prompt the LLMs with the correct reference and the incorrect responses in order to make it reflect why making such mistakes. The introspective responses are also collected and subsequently utilized in our work. We employ this data for cluster analysis to identify specific details of the errors. With our COTERRORSET, we introduce two in- novative paradigms, namely mistake tuning and self-rethinking, aimed at efficiently augmenting LLMs by leveraging their historical errors during both tuning and inference stages. Diverging from the conventional approach of only relying on cor- rect rationales in traditional supervised fine-tuning, our mistake tuning strategy incorporates combi- nations of both correct references and incorrect rationales. To facilitate the learning process for LLMs, we introduce the prefixes [CORRECT RA- TIONALE] and [INCORRECT RATIONALE] be- fore the corresponding rationales. Intuitively, this prompt tuning facilitates LLMs to distinguish be- tween correct and incorrect rationales while avoid- ing corruption from the incorrect ones with the two separated prefixes. For self-rethinking, inspired by contrastive in-context learning (Gao and Das, 2024), we expose LLMs to both correct and in- correct rationales in demonstration samples. After obtaining the initial answer output by the LLM, we iteratively prompt it to rethink and rectify the result based on the historical mistakes. To manage com- putational resources and prevent potential loops, we implement a threshold, limiting the number of times the model can engage in self-rethinking and corrections. Figure 1 gives an overview pipeline of our work. To substantiate the efficacy of our proposed methodologies and to delve into the learning ca- pabilities of LLMs from their mistakes, we under- take experiments encompassing diverse reasoning tasks and LLMs of varying sizes. The application of our methods consistently yields performance enhancements across a spectrum of tasks, under- scoring the effectiveness and broad applicability of our approaches in leveraging LLMs’ mistakes during both the tuning and inference stages. Addi- tionally, we conduct thorough analyses of the error types exhibited by LLMs, offering comprehensive insights and guidance on mitigating the most preva- lent errors in these models. In general, our contributions are as follows: • A large-scale error set, COTERRORSET, is constructed for scalable analysis and learning from the LLMs’ mistakes. • We novelly designed two paradigms for LLMs to utilize and learn from their previous mis- takes at both fine-tuning and inference stages. • With extensive experiments, we validate the effectiveness of our proposed methods and provide further hints based on analysis of LLMs’ error types. 2 Related Work Human-like Reasoning with LLMs. CoT (Wei et al., 2022) demonstrate the great potential of equipping LLMs with human-like reasoning ca- pability. 
Following them, various logical and struc- tural reasoning strategies (Wang et al., 2022; Zhou et al., 2022; Creswell and Shanahan, 2022; Besta et al., 2023; Li et al., 2023b; Lightman et al., 2023) are proposed to align LLMs’ thinking pro- cesses with humans. These enhanced reasoning ap- proaches have been adopted in different tasks and areas, including commonsense reasoning (Geva et al., 2021; Ahn et al., 2022), logical reason- ing (Pan et al., 2023; Lei et al., 2023) and mathe- matical reasoning (Cobbe et al., 2021; Hendrycks et al., 2021) and achieved promising performance. In this work, we aim to investigate whether LLMs can benefit from rethinking and learning from pre- vious mistakes, which is one of the most important learning patterns of humans. Refined Reasoning Errors. Several studies focus on adjusting their reasoning pathways to ar- rive at better solutions. Huang et al. (2022) in- troduce self-improve that employs CoT plus self- consistency to obtain high-confidence solutions on a large set of unlabeled questions. The self- generated content is then used for fine-tuning in subsequent iterations, thereby further augmenting its reasoning capabilities. Madaan et al. (2023) pro- pose a self-refine technique that encourages LLMs to autonomously correct their outputs without the need for external data or feedback. However, it has been argued by some researchers that LLMs face challenges in self-correcting their responses in the absence of external feedback, and under certain conditions, such attempts might even deteriorate their performance (Huang et al., 2023). Based on that, An et al. (2023) suggest fine-tuning LLMs using pairs consisting of errors and their respective corrections generated by GPT-4 as a supervisory mechanism. Nevertheless, our work is pioneering in highlighting the impact of exposing mistake ex- amples on in-context learning. Furthermore, our experiments reveal that in the process of model tuning, learning from mistakes can inherently en- hance itself by merely being exposed to correct examples and errors, without depending on explicit corrections from teacher models. 3 A Novel Dataset: COTERRORSET 3.1 Dataset Construction In order to investigate whether incorrect ratio- nales can also contribute to LLMs’ reasoning performance, we introduce COTERRORSET, a novel benchmark based on the source of COT- COLLECTION (Kim et al., 2023), built upon var- ious domains, including multiple-choice QA, ex- tractive QA, closed-book QA, formal logic, natu- ral language inference, and arithmetic reasoning. Our dataset’s question and reference are obtained from the following datasets: QASC (Khot et al., 2020), AQuA (Ling et al., 2017), GSM8K (Cobbe et al., 2021), QED (Lamm et al., 2021), Strate- gyQA (Geva et al., 2021), SenseMaking (Wang et al., 2019), CREAK (Onoe et al., 2021), e- SNLI (Camburu et al., 2018) and ECQA (Aggar- wal et al., 2021). Each task within this collection is systematically organized to include a question and a correct reference, followed by an incorrect response and the demonstrations why making such mistakes. The errors and demonstrations are both generated from PaLM2. COTERRORSET diverges from traditional CoT datasets by employing PaLM2’s mistakes and the reasons behind them. We utilized PaLM2 to gen- erate rationales for each question in the dataset, focusing specifically on collecting incorrect ratio- nales. 
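To make the collection format concrete before the reflection step is described below, here is a hypothetical sketch of a single COTERRORSET-style record and of a reflection prompt in the spirit of Figure 2. All field names and the prompt wording are illustrative assumptions, not the dataset's released schema or the paper's actual prompts.

```python
# Hypothetical illustration of one COTERRORSET-style record and the kind of
# reflection prompt sent to PaLM2; field names and wording are assumptions.
record = {
    "question": "Natalia sold clips to 48 of her friends in April, and then she "
                "sold half as many clips in May. How many clips did she sell altogether?",
    "reference": "48 / 2 = 24 in May; 48 + 24 = 72 altogether. Answer: 72",
    "incorrect_rationale": "48 * 2 = 96 in May; 48 + 96 = 144 altogether. Answer: 144",
}

def build_reflection_prompt(rec):
    """Ask the model to explain why its own rationale went wrong."""
    return (
        f"Question: {rec['question']}\n"
        f"Correct reference: {rec['reference']}\n"
        f"Your incorrect rationale: {rec['incorrect_rationale']}\n"
        "Explain what kind of mistake was made in the incorrect rationale "
        "and why it led to the wrong answer."
    )

# The model's response would be stored alongside the record, e.g. as
# record["error_causes"], completing one COTERRORSET entry.
```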
Recent research has demonstrated LLMs’ capability to provide high-quality data (Li et al., 2024a; Tong et al., 2024; Li et al., 2024b) and feed- back (Pan et al., 2024; Tan et al., 2024). Following this idea, we provide PaLM2 with both correct ref- erences and its incorrect answers to demonstrate Figure 2: The pipeline to construct COTERRORSET. By providing PaLM2 with the correct reference and the incorrect response generated by itself, we prompt it to introspect and grasp the underlying reasons for its errors. and reflect why it makes such mistakes. The steps of the construction process are shown in Figure 2. This systematic collection of incorrect rationales can make COTERRORSET a promising benchmark in providing future improvements from a different perspective. One example is shown in Table 1. Questions: Combine facts and answer this: Which meridian extends across Europe, the Mediterranean Sea, Africa, Asia, the Pacific Ocean, North America, and the Atlantic Ocean? Target: The Cimarron meridian Reference: The Cimarron meridian extends across Eu- rope, the Mediterranean Sea, Africa, Asia, the Pacific Ocean, North America and the Atlantic Ocean. Incorrect Rationale: The 180th meridian extends across Europe, the Mediterranean Sea, Africa, Asia, the Pacific Ocean, North America and the Atlantic Ocean. Error Causes: Making mistakes in incorrect rationales, such as claiming the 180th meridian extends across various continents and oceans, can lead to significant misinformation and confusion. This particular error demonstrates a fundamental misunderstanding of ge- ography, as the 180th meridian primarily runs through the Pacific Ocean and does not cross the regions listed. Such inaccuracies underscore the importance of fact- checking in educational content to prevent the spread of misconceptions. Correcting these mistakes not only clar- ifies the factual information but also serves as a valuable learning opportunity, emphasizing the need for accuracy and critical evaluation of information. Table 1: An example in COTERRORSET. The content of Incorrect Rationale and Error Causes are generated by PaLM2 as indicated in Figure 2. 3.2 Error Analysis with COTERRORSET After collecting the COTERRORSET dataset, we observe that the error types in it are very intricate and diverse. The intricacy poses obstacles to subse- Figure 3: Our pipeline for clustering PaLM2’s mistakes. quent enhancement efforts. In order to tackle this issue and gain a more overarching understanding of LLMs’ error types, we utilize an LLM-based un- supervised clustering approach shown in Figure 3 to match diverse error types into more general cate- gories. To be specific, we begin by extracting the spe- cific error keywords from each error cause. Subse- quently, we input all the extracted keywords into the LLMs and prompt them to generate more gen- eral categories that encompass the entire spectrum of error names. Following this automated cluster- ing process, we manually review each cluster, mak- ing necessary adjustments to refine the matching results. Finally, we distill the diverse error types into several abstract categories, such as calculation error, numeric error, and logical error in domains of arithmetic reasoning and logical error, common- sense error, linguistic error, and context error in domains of commonsense reasoning. A detailed definition of each error category is shown in Ap- pendix C. We put results and analysis in Section 8. 
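To make the two-stage clustering procedure above easier to follow, the sketch below outlines one way it could be scripted. It is a minimal illustration and not the authors' released code: the `llm` helper, the keyword-extraction prompt wording, and the record field names are assumptions, while the grouping prompt mirrors the one listed in Appendix C, and the final manual-review step from Figure 3 is only marked with a comment.

```python
# Minimal sketch of the LLM-based error clustering described in Section 3.2.
# Assumptions: `llm(prompt)` is a hypothetical helper returning an LLM completion,
# and each COTERRORSET record carries an "error_cause" field.

from collections import defaultdict

def llm(prompt: str) -> str:
    raise NotImplementedError("plug in an actual LLM API call here")

def extract_keywords(error_cause: str) -> list[str]:
    # Step 1: ask the model to name the specific error with a few keywords.
    reply = llm(
        "Summarize the specific error described below with 1-3 short keywords, "
        "comma-separated.\n\nError cause: " + error_cause
    )
    return [kw.strip() for kw in reply.split(",") if kw.strip()]

def cluster_keywords(all_keywords: list[str]) -> dict[str, list[str]]:
    # Step 2: ask the model to group every extracted keyword into general categories
    # (e.g., calculation, numeric, logical), using the Appendix C prompt format.
    reply = llm(
        "Please generate several keywords to cover all the following error types, "
        "and assign each keyword to an error type category.\n"
        "Output one '[Category]: keyword1, keyword2' line per category.\n\n"
        "Keywords: " + ", ".join(sorted(set(all_keywords)))
    )
    clusters = defaultdict(list)
    for line in reply.splitlines():
        if ":" in line:
            category, kws = line.split(":", 1)
            clusters[category.strip(" []")].extend(
                kw.strip() for kw in kws.split(",") if kw.strip()
            )
    # Step 3 (not shown): manually review and adjust the clusters, as in Figure 3.
    return dict(clusters)
```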
4 Our Methodology: Self-rethinking

Self-rethinking offers an innovative approach that encourages LLMs to consider whether they are repeating past errors. This method starts with an initial CoT reasoning pass. Following this, the model uses the provided reasoning outputs and a random selection of examples from COTERRORSET. This step is designed to assess whether the model's most recent response includes similar inaccuracies. If errors are detected, it formulates a new rationale and undergoes the evaluation process again. This cycle continues until the model deems its latest answer to be correct or it reaches a set limit of evaluation rounds. The main goal is to empower the LLM to learn from its errors introspectively and minimize the recurrence of such mistakes. One example is shown in Table 2.

The core of self-rethinking lies in the backward-checking stage. In this phase, the LLM reviews its reasoning chain, but with a specific focus on the error types it previously identified. This explicit demonstration of errors, coupled with the question, golden reference, and incorrect rationales, is instrumental in enabling the LLM to recognize the specific types of mistakes it tends to make. This targeted review helps the LLM not just to correct random errors but to consciously avoid repeating the same types of mistakes it has made in the past. The process includes a loop for error correction and confirmation. If the LLM finds that it has repeated any of the previously identified mistakes, it revisits the reasoning process to correct them. Otherwise, the last response is adopted as the final result.

Moreover, the iterative checking process has a crucial repetition boundary, denoted as k iterations. If the LLM's error-checking and correction cycles surpass this predefined threshold and errors still persist, the process concludes under the assumption that the issue at hand or the error detection might exceed the LLM's current capabilities. This constraint prevents the LLM from being caught in an endless loop of self-rethinking, ensuring the efficiency and practicality of the reasoning process.

5 Our Methodology: Mistake Tuning

To fully investigate another potential use of our principles, we introduce mistake tuning, which demonstrates that our motivation is a broad and pioneering framework beyond the field of in-context learning. This approach is designed to finetune LLMs on combinations of both correct rationales and incorrect mistakes. By simply appending the prefixes [CORRECT RATIONALE] and [INCORRECT RATIONALE] before the corresponding rationales, mistake tuning can further improve LLMs' ability to distinguish between correct and incorrect rationales.

Mistake tuning is built upon the foundational motivations and conclusions of self-rethinking, where LLMs can learn from the implicit reasons and types of the mistakes they made in CoT reasoning. This process can be formulated as:

p = [Q ⊕ S ⊕ R],   (1)

L = − ∑_{t=1}^{|p|} log P(p_t | p_{<t}),   (2)

where Q, S, and R represent the given question, the special prefix, and the corresponding rationale, respectively, and ⊕ represents the operation of concatenation.

Step 1: CoT reasoning – Question: Natalia sold clips to 48 of her friends in April, and then she sold half as many clips in May. How many clips did Natalia sell altogether in April and May? – Answer: Let's think step by step. – Response: Natalia sold 48 * 2 = 96 clips in May. Natalia sold 48+96 = 144 clips altogether in April and May.
Step 2: self-rethinking – Question: Natalia sold clips to 48 of her friends in April, and then she sold half as many clips in May. How many clips did Natalia sell altogether in April and May? – Your output: Natalia sold 48 * 2 = 96 clips in May. Natalia sold 48+96 = 144 clips altogether in April and May. – Do you make similar mistakes with the following examples: ###Error Type 1: Misapplication of Algebraic Identities: ... – Response: Yes, I make a mistake. Step 3: correction (if they think they make mistakes) – So the correct answer is: – Response: Natalia sold 48/2 = 24 clips in May. Natalia sold 48+24 = 72 clips altogether in April and May. Table 2: One example of interactive prompting and responses for self-rethinking. Black texts are the prompting while the red content serves as LLMs’ response example. Mistake tuning presents a cost-effective, straight- forward, and efficient alternative. Previous work has proven pretraining with some controlled sig- nals based on human feedback can result in LLMs’ better ability to generate more satisfactory con- tents (Korbak et al., 2023; Keskar et al., 2019). Hence, incorporating fixed prefixes in finetuning LLMs in the field of reasoning can also help models differentiate information from golden references and mistakes. Our results also demonstrate its ef- fectiveness for promoting LLMs’ reasoning abili- ties without additional costs similar to annotating golden reasoning references. 6 Experiments In this section, we conducted a series of exper- iments to compare the proposed self-rethinking methods with the existing approach on both arith- metic and commonsense reasoning benchmarks. 6.1 Experiment Setup We conduct comparisons between self-rethinking and several other baselines on multiple bench- marks. Baselines: We select the following reason- ing baselines to evaluate our framework, self- rethinking’s performance. • Standard prompting (Brown et al., 2020): the basic reasoning promptings with prefixes as question and answer. • Chain-of-Thought (CoT) (Madaan et al., 2023): a technique that enhances large lan- guage models’ ability to perform complex and multi-step reasoning by guiding them through a problem-solving process step by step, signif- icantly improving their performance on tasks that require deeper cognitive processing. • Self-refine (Madaan et al., 2023): an approach that enables LLMs to iteratively improve their initial outputs by providing feedback to them- selves and refining their responses. • Self-consistency (Wang et al., 2022): a decod- ing strategy that enhances CoT prompting in LLMs by sampling multiple reasoning paths and selecting the most consistent answer. Benchmarks: We consider the following ex- isting math problems benchmarks designed with human rationale reference. • GSM8K benchmark of math word prob- lems (Cobbe et al., 2021). • AQuA dataset of algebraic math prob- lems (Ling et al., 2017). • MathQA benchmark of multiple-choice math problems (Amini et al., 2019). • Openbook benchmark modeled after open book exams for assessing human understand- ing of a subject (Mihaylov et al., 2018). • LogiQA dataset sourced from expert-written questions for testing human logical reason- ing (Liu et al., 2020). • Critical Reasoning in MARB benchmark of several graduate admission tests, highlighting the reasoning to assumptions, conclusions and paradoxes in arguments (Tong et al., 2023). 
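Before turning to the models and results, the sketch below illustrates how mistake-tuning examples of the form p = [Q ⊕ S ⊕ R] from Section 5 might be assembled. It is an illustrative sketch only: the record field names and prompt layout are assumptions rather than the paper's released implementation, and the resulting strings could be fed to any standard fine-tuning loop that maximizes log P(p_t | p_<t). The worked record reuses the Natalia example from Table 2.

```python
# Minimal sketch of building mistake-tuning examples (Section 5, Eqs. 1-2).
# Field names and formatting are assumptions for illustration.

CORRECT_PREFIX = "[CORRECT RATIONALE]"
INCORRECT_PREFIX = "[INCORRECT RATIONALE]"

def build_mistake_tuning_examples(record: dict) -> list[dict]:
    """Return one training example per rationale, tagged by the special prefix S."""
    question = record["question"]                            # Q
    examples = []
    for prefix, rationale in [
        (CORRECT_PREFIX, record["reference"]),               # gold rationale
        (INCORRECT_PREFIX, record["incorrect_rationale"]),   # PaLM2's mistake
    ]:
        examples.append({
            "input": f"{question}\n{prefix}",                # Q concatenated with S
            "target": rationale,                             # R
        })
    return examples

record = {
    "question": "Natalia sold clips to 48 of her friends in April, and then she "
                "sold half as many clips in May. How many clips did Natalia sell "
                "altogether in April and May?",
    "reference": "Natalia sold 48/2 = 24 clips in May. Natalia sold 48+24 = 72 "
                 "clips altogether in April and May.",
    "incorrect_rationale": "Natalia sold 48 * 2 = 96 clips in May. Natalia sold "
                           "48+96 = 144 clips altogether in April and May.",
}
for ex in build_mistake_tuning_examples(record):
    print(ex["input"][:60], "->", ex["target"][:40])
```

Whether the prefix is placed in an encoder input, as sketched here, or concatenated into a single causal sequence is a design choice the sketch leaves open.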
Models: In order to evaluate self-rethinking’s effects, we choose PaLM2 (Anil et al., 2023) Methods Standard Prompting (Brown et al., 2020) CoT (Madaan et al., 2023) Self-refine (Madaan et al., 2023) Self-consistency (Wang et al., 2022) Self-rethinking (Ours) GSM8K AQuA MathQA OpenbookQA LogiQA 17.06 56.29 34.74 58.38 65.13 22.40 32.11 39.92 42.80 44.72 27.57 30.89 54.01 41.37 43.95 80.92 82.66 28.75 87.61 87.71 41.21 41.05 35.99 42.88 49.12 CR 24.45 51.98 12.28 22.58 54.53 Table 3: PaLM2’s accuracy on the existing baselines and our methods, self-rethinking prompting. Self-rethinking shows consistent improvements but uses less inference time compared with self-consistency. Methods 8-shot CoT 8-shot self-rethinking GSM8K AQuA MathQA LogiQA 64.56 70.15 30.65 34.80 36.21 40.56 29.57 33.64 Table 4: PaLM2’s accuracy results on few-shot Chain- of-Thought(CoT) and our methods, self-rethinking. We select 8-shot examples from the corresponding trainset. Then we collect PaLM2’s incorrect rationales of those 8 examples. The part of the original correct reference is CoT’s demonstrations. Those generated incorrect rationales serve as demonstrations for the rethink stage. Methods CoT Self-rethinking GSM8K AQuA OpenbookQA 97.93 98.02 88.98 91.03 93.21 95.07 CR 78.92 81.37 Table 5: GPT4’ results on zero-shot Chain-of-Thought (CoT) and our methods, self-rethinking. and GPT4 (OpenAI, 2023) as the baseline model. PaLM2 is a dense left-to-right, decoder-only lan- guage model. It is pre-trained on a high-quality corpus of 780 billion tokens with filtered webpages, books, Wikipedia, news articles, source code, and social media conversations. GPT4 is a large-scale multi-modal state-of-the-art model that exhibits human-level performance on various tasks. We use PaLM2’s TEXT-BISON-001 and GPT4’s GPT-4 models provided in their APIs. For mistake tuning, we choose two different- sized Flan T5 (Chung et al., 2022), which are specifically designed for instruction tuning strate- gies. This model excels in understanding and gen- erating human-like text, demonstrating remarkable performance across a wide range of natural lan- guage processing tasks. Training Details: All of the following experi- ments were designed with a common setting, em- ploying a random seed of 42, learning rate=1e-4. Considering the vast number of data in AQuA, we only randomly select 10,000 of them to represent the differences in tuning on two different domains. 6.2 Self-rethinking Results Table 3 presents PaLM2’s evaluation results on chosen benchmarks. In this experiment, we set our method, self-rethinking’s k equal to 1 to trade between the accuracy and computing resources. In order to align the commuting budget with our methods, we set the times of inference in self- consistency to 3. Our approach involves an initial zero-shot CoT inference, then rethinking whether this rationale has made similar errors. This leads to the final answer if no errors are found. If inaccura- cies are detected, it combines a demonstration and the previously suspected erroneous answer for a third inference to arrive at the final answer. Hence, the overall inference times in our methods are be- tween 2 and 3 times per question, which is still lower than self-consistency here. With the considered computational settings, the self-rethinking method shows superior perfor- mance with significant improvements, especially in GSM8K, AQuA, MathQA, and LogiQA, clearly outperforming self-consistency under a similar computing cost. 
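As a concrete illustration of the inference-time bookkeeping described above (one zero-shot CoT call, one rethinking call, and at most one correction call when k = 1, i.e., 2 to 3 calls per question), here is a minimal sketch of the self-rethinking loop. The `generate` helper and the exact prompt strings are assumptions; the control flow follows Section 4 and Algorithm 1 in Appendix A.

```python
# Minimal sketch of the self-rethinking loop (Section 4 / Appendix A).
# `generate(prompt)` is a hypothetical wrapper around an LLM API; the prompt
# wording is illustrative rather than the exact strings used in the paper.

def generate(prompt: str) -> str:
    raise NotImplementedError("plug in a PaLM2 / GPT-4 API call here")

def self_rethinking(question: str, error_examples: str, k: int = 1) -> str:
    # Step 1: initial zero-shot CoT reasoning.
    answer = generate(f"Question: {question}\nAnswer: Let's think step by step.")
    for _ in range(k):
        # Step 2: ask whether the latest answer repeats known error types.
        verdict = generate(
            f"Question: {question}\nYour output: {answer}\n"
            f"Do you make similar mistakes with the following examples:\n"
            f"{error_examples}\nAnswer Yes or No."
        )
        if not verdict.strip().lower().startswith("yes"):
            break  # no repeated mistake detected: keep the current answer
        # Step 3: correction based on the detected mistake.
        answer = generate(
            f"Question: {question}\nYour previous output: {answer}\n"
            f"You made a similar mistake to:\n{error_examples}\n"
            f"So the correct answer is:"
        )
    # If k rounds are exhausted, the last response is returned under the assumption
    # that further error detection exceeds the model's current capability.
    return answer
```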
However, while our method sur- passes CoT in performance on the MathQA dataset, it falls short of achieving self-refine results. It’s important to note that this dataset is specifically tailored towards operation-based arithmetic prob- lems rather than general questions, aiming to gauge the models’ proficiency in tackling complex is- sues (Amini et al., 2019). This suggests that the nature of the MathQA dataset may inherently be more suitable for self-refine. In contrast to our ap- proach, which aims to amend responses by identify- ing and addressing typical errors. Table 5 compares GPT4’s performance of CoT and self-rethinking. The results demonstrate a notable improvement when using our self-rethinking method over CoT. These findings suggest that self-rethinking is a more effective approach for enhancing GPT-4’s performance. Table 4 presents the 8-shot examples of CoT and self-rethinking, using the PaLM2 model across four different tasks: GSM8K, AQuA, MathQA, and LogiQA. A key part of the process involved collecting PaLM2’s incorrect rationales for these examples, which were then used as learning demon- strations to rethink. The results show a clear advan- tage of the self-rethinking method over the standard 8-shot CoT approach. These results highlight the efficacy of the self-rethinking method in improving accuracy in few-shot learning scenarios for com- plex problem-solving tasks. Notably, self-refine shares our basic motivations about self-refining or self-correcting their answers but without utilizing any mistake samples. The result shows that our self-rethinking outperformed self-refine by a considerable margin across most of the datasets. This indicates the importance of our proposal for utilizing previous mistake examples. While self-refine demonstrates improvements in three arithmetic reasoning datasets, it concurrently exhibits substantial performance drops in common- sense reasoning datasets. By contrast, our self- rethinking consistently outperforms the standard method in various domains. This further implies the introduction of previous mistakes can stabilize the refinement and rethinking process. In conclusion, our self-rethinking method achieved remarkable accuracy improvements in most tests, particularly in scenarios that demand high logical rigor and offer the opportunity to learn from errors by identifying fixed logical patterns, especially in arithmetic reasoning tasks. It indi- cates self-rethinking effectiveness in tasks requir- ing strong logic and prone to minor errors. Addi- tionally, the self-rethinking method proves partic- ularly beneficial in assisting LLMs in identifying and rectifying low-level mistakes or misunderstand- ings that are within the model’s capabilities but have been previously overlooked. This capability indicates that self-rethinking can serve as a valu- able tool in refining the accuracy and reliability of responses in LLMs, especially in complex problem- solving contexts. Models Flan-T5-large (780M) Flan-T5-xl (3B) Methods Standard finetuning Mistake tuning Standard finetuning Mistake tuning GSM8K MathQA AQuA 13.10 42.79 18.07 48.95 17.81 47.24 20.99 52.22 14.28 18.36 23.81 24.29 Table 6: Accuracy of Standard finetuning models (with only correct rationales) vs. our methods, mistake tuning (combined correct and incorrect rationales). Mistake tuning shows consistent and superior performance com- pared with only fine-tuned correct rationales. ing the impact of combining correct and incorrect rationales. 
The data presented in Table 6 reveals significant insights into the performance of Flan- T5 models under mistake tuning, which involves integrating both correct and incorrect rationales. This approach is evident across different model scales, whether it’s the smaller 780M version or the larger 3B variant. Notably, in the MathQA do- main, Flan-T5-large(780M) tuned by our methods demonstrates superior performance compared to PaLM2, achieving an accuracy of 48.95% versus 41.37%. This phenomenon suggests that LLMs can benefit from engaging with incorrect reason- ing, thereby enhancing their problem-solving and reasoning capabilities. It extends beyond merely bolstering the model’s grasp of correct CoT, to also encompassing the ability to identify and learn from incorrect rationales. Furthermore, the expense of obtaining ground truth or hand-crafted references is significantly higher compared to generating and collecting in- correct rationales. This cost disparity underscores the practical value of our approach, offering a more cost-effective solution without compromising the quality of training data for machine learning mod- els. All mentioned provides a direction for further work of reasoning, which involves not only en- hancing the model’s understanding and learning of correct CoT but also the ability to identify and learn from incorrect rationales. 7 Further Studies Figure 4: Accuracy of different re-thinking iterations(k). As the value of k increases, the overall prediction accu- racy improves. 6.3 Mistake Tuning Results Iteration Times Table 6 showcases the performance of the Flan-T5 models in the context of mistake tuning, highlight- In this section, we conduct experiments to assess the impact of different rethinking iterations, de- 7.1 Hyperparameter Analysis of Rethinking noted as k, on the performance of our framework. We evaluate it on two mainstream benchmarks in the field of mathematics and commonsense rea- soning, GSM8K and LogiQA. Figure 4 represents the detailed trend under varying re-thinking times. Notably, as k increases from 1 to 24, GSM8K rep- resents a growth of 8.11% and 12.37% in LogiQA. It is evident as k increases, both LLMs’ arithmetic and commonsense reasoning accuracy exhibit an upward trend. This trend suggests a positive corre- lation between the number of rethinking iterations and the overall reasoning abilities. These observa- tions indicate self-thinking’s potential benefits with more inference time. CAT. DEM. COR. INC. GSM8K LogiQA ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ 64.30 62.70 65.70 65.13 50.21 48.57 51.01 49.21 Table 7: Impact of Component Combinations. CAT. stands for the previous mistakes’ type name, DEM. are the reasons for making such mistakes, and COR. and INC. mean corresponding correct and incorrect rationale examples. All components here are generated by LLM itself before reasoning. 7.2 Ablation Study on Rethinking Process In this ablation study, we examined the impact of various component combinations in promptings to guide LLMs to self-rethinking . Table 7 shows the performance of different components. The results indicate that the inclusion or exclusion of differ- ent components has varying effects on PaLM2’s accuracy in domains of GSM8K and LogiQA. How- ever, the overall performance across various com- ponents is relatively similar. It performs similarly well regardless of the specific combination of com- ponents, indicating good generalizability of the method. This study suggests our method’s flexibil- ity and stability in future usage. 
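To clarify what the CAT., DEM., COR., and INC. toggles in Table 7 correspond to, the short sketch below shows one way the rethinking demonstration could be assembled from optional components. All strings and field names are placeholders; only the on/off structure mirrors the ablation.

```python
# Illustrative assembly of the self-rethinking demonstration from the optional
# components ablated in Table 7. All strings are placeholders.

def build_error_demo(error: dict, use_cat: bool = True, use_dem: bool = True,
                     use_cor: bool = True, use_inc: bool = True) -> str:
    parts = []
    if use_cat:
        parts.append(f"Error type: {error['category']}")                     # CAT.
    if use_dem:
        parts.append(f"Why this mistake happens: {error['demonstration']}")  # DEM.
    if use_cor:
        parts.append(f"Correct rationale: {error['correct']}")               # COR.
    if use_inc:
        parts.append(f"Incorrect rationale: {error['incorrect']}")           # INC.
    return "\n".join(parts)
```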
8 Unveiling LLMs' Reasoning Errors

In this section, we delve into the detailed types and underlying reasons that lead to mistakes in LLMs' inference process. We sample mistake examples from GSM8K and LogiQA to conduct an in-depth analysis of both arithmetic and commonsense reasoning. We put some examples in Appendix B.

[Figure 5: PaLM2's error type distribution in the commonsense and arithmetic reasoning tasks. (a) Commonsense reasoning: Context 48%, Logical 26%, Linguistics 13%, Commonsense 13%. (b) Arithmetic reasoning: Calculation 59%, Logical 34%, Numeric 7%.]

For commonsense reasoning, we find that errors like the misinterpretation of facts or concepts usually arise due to the model's limitations in understanding and applying context accurately. This reveals that current LLMs may still fall short of consistently recalling precise factual knowledge within a given context. Consequently, this underscores the imperative to advance toward the development of Retrieval-Augmented Generation (RAG) systems (Guu et al., 2020; Mallen et al., 2022), as they hold the promise of yielding more faithful and contextually aligned results. Additionally, errors stemming from logical fallacies or incorrect inferences reveal LLMs' reliance on pattern recognition over logical reasoning, sometimes leading them to make connections that are logically inconsistent or unsupported by the given facts.

As shown in Figure 5, most of the errors made by LLMs in arithmetic reasoning concern calculation. This can be attributed to the different nature of LLMs compared to tools like calculators. To address this issue, the Program-of-Thought (PoT) approach suggested by Chen et al. (2022) is promising: it instructs LLMs to generate a segment of code to solve the given problem, resulting in more accurate calculation results. Furthermore, it is important to note that logical errors are also a type of error that LLMs frequently suffer from. Compared with calculation errors and numeric errors, the causes of logical errors are more complicated and nuanced. For instance, errors like misinterpreting given data or misapplying arithmetic operations reveal a lack of depth in understanding mathematical relationships. This can result from the model's limitations in comprehending the nuances of mathematical concepts or its inability to correctly infer the needed function from the context of the question. In the future, more fine-grained analysis and methods are needed to address such complex logical errors in arithmetic reasoning.

9 Conclusions and Future Work

In this work, we explore whether LLMs can learn from their mistakes. In order to investigate LLMs' abilities to differentiate and learn from mistakes, we introduce COTERRORSET, a novel benchmark collecting both correct and incorrect CoT rationales across various domains, together with demonstrations of why such errors are made. We propose two possible solutions to expose the effects of mistakes from different perspectives: self-rethinking and mistake tuning. Both of them have achieved consistent and significant improvements, which demonstrates the potential benefits of learning from reasoning errors. Lastly, we conduct a comprehensive and detailed analysis of LLMs' common mistakes in both arithmetic and commonsense reasoning. The findings will provide a clear direction for future improvements.
For future work, we envision proposing corresponding algorithms or loss functions to learn implicit information from mistakes. The primary intent of this work is to provide a new paradigm, so there are still many improvements that can be done following this work. For example, incorporating contrastive learning to differentiate correct references from errors is an intuitive way to make further improvements. Also, memorization and retrieval-augmented techniques can help models benefit from mistakes similar to each question.

Limitations

In addition to the noted challenge of fine-tuning commercial LLMs, we recognize several other specific limitations in our study that require attention. Primarily, our self-rethinking methodology may not be entirely suitable for tasks where a distinct, objective label is not readily available, such as in machine translation or dialogue generation. These areas pose a unique challenge as the correctness of outputs can often be subjective or context-dependent, making it difficult to apply our approach effectively. Moreover, our utilization of the COTERRORSET collection for mistake tuning necessitates a ground truth label for each sample, posing a potential impediment to the applicability of our method in low-resource scenarios. In the future, we will continually improve our method and bring the concept of learning from mistakes to wider scenarios and applications.

Acknowledgements

We thank the reviewers for their thoughtful insights and informative comments. Our work is sponsored in part by NSF CAREER Award 2239440, NSF Proto-OKN Award 2333790, as well as generous gifts from Google, Adobe, and Teradata. Any opinions, findings, and conclusions or recommendations expressed herein are those of the authors and should not be interpreted as necessarily representing the views, either expressed or implied, of the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for government purposes notwithstanding any copyright annotation hereon.

References

Shourya Aggarwal, Divyanshu Mandowara, Vishwajeet Agrawal, Dinesh Khandelwal, Parag Singla, and Dinesh Garg. 2021. Explanations for CommonsenseQA: New dataset and models. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 3050–3065.

Michael Ahn, Anthony Brohan, Noah Brown, Yevgen Chebotar, Omar Cortes, Byron David, Chelsea Finn, Chuyuan Fu, Keerthana Gopalakrishnan, Karol Hausman, et al. 2022. Do as I can, not as I say: Grounding language in robotic affordances. arXiv preprint arXiv:2204.01691.

Aida Amini, Saadia Gabriel, Peter Lin, Rik Koncel-Kedziorski, Yejin Choi, and Hannaneh Hajishirzi. 2019. MathQA: Towards interpretable math word problem solving with operation-based formalisms. arXiv preprint arXiv:1905.13319.

Shengnan An, Zexiong Ma, Zeqi Lin, Nanning Zheng, Jian-Guang Lou, and Weizhu Chen. 2023. Learning from mistakes makes LLM better reasoner. arXiv preprint arXiv:2310.20689.

Rohan Anil, Andrew M Dai, Orhan Firat, Melvin Johnson, Dmitry Lepikhin, Alexandre Passos, Siamak Shakeri, Emanuel Taropa, Paige Bailey, Zhifeng Chen, et al. 2023. PaLM 2 technical report. arXiv preprint arXiv:2305.10403.

Maciej Besta, Nils Blach, Ales Kubicek, Robert Gerstenberger, Lukas Gianinazzi, Joanna Gajda, Tomasz Lehmann, Michal Podstawski, Hubert Niewiadomski, Piotr Nyczyk, et al. 2023. Graph of thoughts: Solving elaborate problems with large language models.
arXiv preprint arXiv:2308.09687. Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. Advances in neural information processing systems, 33:1877–1901. Oana-Maria Camburu, Tim Rocktäschel, Thomas Lukasiewicz, and Phil Blunsom. 2018. e-snli: Natu- ral language inference with natural language expla- nations. Advances in Neural Information Processing Systems, 31. Yupeng Chang, Xu Wang, Jindong Wang, Yuan Wu, Linyi Yang, Kaijie Zhu, Hao Chen, Xiaoyuan Yi, Cunxiang Wang, Yidong Wang, et al. 2023. A sur- vey on evaluation of large language models. ACM Transactions on Intelligent Systems and Technology. Wenhu Chen, Xueguang Ma, Xinyi Wang, and William W Cohen. 2022. Program of thoughts prompting: Disentangling computation from reason- ing for numerical reasoning tasks. arXiv preprint arXiv:2211.12588. Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Yunxuan Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, et al. 2022. Scaling instruction-finetuned language models. arXiv preprint arXiv:2210.11416. Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, et al. 2021. Training verifiers to solve math word problems. arXiv preprint arXiv:2110.14168. Antonia Creswell and Murray Shanahan. 2022. Faith- ful reasoning using large language models. arXiv preprint arXiv:2208.14271. Xiang Gao and Kamalika Das. 2024. Customizing lan- guage model responses with contrastive in-context learning. arXiv preprint arXiv:2401.17390. Mor Geva, Daniel Khashabi, Elad Segal, Tushar Khot, Dan Roth, and Jonathan Berant. 2021. Did aristotle use a laptop? a question answering benchmark with implicit reasoning strategies. Transactions of the Association for Computational Linguistics, 9:346– 361. Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasu- pat, and Mingwei Chang. 2020. Retrieval augmented language model pre-training. In International confer- ence on machine learning, pages 3929–3938. PMLR. Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, Dawn Song, and Ja- cob Steinhardt. 2021. Measuring mathematical prob- lem solving with the math dataset. arXiv preprint arXiv:2103.03874. Jiaxin Huang, Shixiang Shane Gu, Le Hou, Yuexin Wu, Xuezhi Wang, Hongkun Yu, and Jiawei Han. 2022. Large language models can self-improve. arXiv preprint arXiv:2210.11610. Nitish Shirish Keskar, Bryan McCann, Lav R Varshney, Caiming Xiong, and Richard Socher. 2019. Ctrl: A conditional transformer language model for control- lable generation. arXiv preprint arXiv:1909.05858. Tushar Khot, Peter Clark, Michal Guerquin, Peter Jansen, and Ashish Sabharwal. 2020. Qasc: A dataset for question answering via sentence compo- sition. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 8082–8090. Seungone Kim, Se June Joo, Doyoung Kim, Joel Jang, Seonghyeon Ye, Jamin Shin, and Minjoon Seo. 2023. The cot collection: Improving zero-shot and few-shot learning of language models via chain-of-thought fine-tuning. arXiv preprint arXiv:2305.14045. Tomasz Korbak, Kejian Shi, Angelica Chen, Rasika Vinayak Bhalerao, Christopher Buck- ley, Jason Phang, Samuel R Bowman, and Ethan Perez. 2023. Pretraining language models with human preferences. In International Conference on Machine Learning, pages 17506–17533. PMLR. 
Matthew Lamm, Jennimaria Palomaki, Chris Alberti, Daniel Andor, Eunsol Choi, Livio Baldini Soares, and Michael Collins. 2021. Qed: A framework and dataset for explanations in question answering. Transactions of the Association for computational Linguistics, 9:790–806. Bin Lei, Chunhua Liao, Caiwen Ding, et al. 2023. Boosting logical reasoning in large language mod- els through a new framework: The graph of thought. arXiv preprint arXiv:2308.08614. Dawei Li, Yaxuan Li, Dheeraj Mekala, Shuyao Li, Xueqi Wang, William Hogan, Jingbo Shang, et al. 2023a. Dail: Data augmentation for in- context learning via self-paraphrase. arXiv preprint arXiv:2311.03319. Dawei Li, Zhen Tan, Tianlong Chen, and Huan Liu. 2024a. Contextualization distillation from large lan- guage model for knowledge graph completion. arXiv preprint arXiv:2402.01729. Dawei Li, Shu Yang, Zhen Tan, Jae Young Baik, Sunkwon Yun, Joseph Lee, Aaron Chacko, Bojian Hou, Duy Duong-Tran, Ying Ding, et al. 2024b. Dalk: Dynamic co-augmentation of llms and kg to answer alzheimer’s disease questions with scientific literature. arXiv preprint arXiv:2405.04819. Yifei Li, Zeqi Lin, Shizhuo Zhang, Qiang Fu, Bei Chen, Jian-Guang Lou, and Weizhu Chen. 2023b. Making language models better reasoners with step-aware In Proceedings of the 61st Annual Meet- verifier. ing of the Association for Computational Linguistics (Volume 1: Long Papers), pages 5315–5333. Jie Huang, Xinyun Chen, Swaroop Mishra, Huaixiu Steven Zheng, Adams Wei Yu, Xiny- ing Song, and Denny Zhou. 2023. Large language models cannot self-correct reasoning yet. arXiv preprint arXiv:2310.01798. Percy Liang, Rishi Bommasani, Tony Lee, Dimitris Tsipras, Dilara Soylu, Michihiro Yasunaga, Yian Zhang, Deepak Narayanan, Yuhuai Wu, Ananya Ku- mar, et al. 2022. Holistic evaluation of language models. arXiv preprint arXiv:2211.09110. Hunter Lightman, Vineet Kosaraju, Yura Burda, Harri Edwards, Bowen Baker, Teddy Lee, Jan Leike, John Schulman, Ilya Sutskever, and Karl Cobbe. 2023. Let’s verify step by step. arXiv preprint arXiv:2305.20050. Zhen Tan, Alimohammad Beigi, Song Wang, Ruocheng Guo, Amrita Bhattacharjee, Bohan Jiang, Mansooreh Karami, Jundong Li, Lu Cheng, and Huan Liu. 2024. Large language models for data annotation: A survey. arXiv preprint arXiv:2402.13446. Wang Ling, Dani Yogatama, Chris Dyer, and Phil Blun- som. 2017. Program induction by rationale genera- tion: Learning to solve and explain algebraic word problems. arXiv preprint arXiv:1705.04146. Jian Liu, Leyang Cui, Hanmeng Liu, Dandan Huang, Yile Wang, and Yue Zhang. 2020. Logiqa: A challenge dataset for machine reading compre- arXiv preprint hension with logical reasoning. arXiv:2007.08124. Aman Madaan, Niket Tandon, Prakhar Gupta, Skyler Hallinan, Luyu Gao, Sarah Wiegreffe, Uri Alon, Nouha Dziri, Shrimai Prabhumoye, Yiming Yang, et al. 2023. Self-refine: Iterative refinement with self-feedback. arXiv preprint arXiv:2303.17651. Alex Mallen, Akari Asai, Victor Zhong, Rajarshi Das, Hannaneh Hajishirzi, and Daniel Khashabi. 2022. When not to trust language models: Inves- tigating effectiveness and limitations of paramet- ric and non-parametric memories. arXiv preprint arXiv:2212.10511. Neil Mercer. 2008. Talk and the development of rea- soning and understanding. Human development, 51(1):90–100. Todor Mihaylov, Peter Clark, Tushar Khot, and Ashish Sabharwal. 2018. Can a suit of armor conduct elec- tricity? a new dataset for open book question answer- ing. arXiv preprint arXiv:1809.02789. 
Yasumasa Onoe, Michael JQ Zhang, Eunsol Choi, and Greg Durrett. 2021. Creak: A dataset for com- monsense reasoning over entity knowledge. arXiv preprint arXiv:2109.01653. OpenAI. 2023. Gpt-4 technical report. Liangming Pan, Michael Saxon, Wenda Xu, Deepak Nathani, Xinyi Wang, and William Yang Wang. 2023. Automatically correcting large language models: Sur- veying the landscape of diverse self-correction strate- gies. arXiv preprint arXiv:2308.03188. Liangming Pan, Michael Saxon, Wenda Xu, Deepak Nathani, Xinyi Wang, and William Yang Wang. 2024. Automatically correcting large language models: Sur- veying the landscape of diverse automated correction strategies. Transactions of the Association for Com- putational Linguistics, 12:484–506. Taly Reich, Alex Kaju, and Sam J Maglio. 2023. How to overcome algorithm aversion: Learning from mis- takes. Journal of Consumer Psychology, 33(2):285– 302. Yongqi Tong, Sizhe Wang, Dawei Li, Yifan Wang, Simeng Han, Zi Lin, Chengsong Huang, Jiaxin Huang, and Jingbo Shang. 2024. Optimizing lan- guage model’s reasoning abilities with weak supervi- sion. arXiv preprint arXiv:2405.04086. Yongqi Tong, Yifan Wang, Dawei Li, Sizhe Wang, Zi Lin, Simeng Han, and Jingbo Shang. 2023. Elimi- nating reasoning via inferring with planning: A new framework to guide llms’ non-linear thinking. arXiv preprint arXiv:2310.12342. Hugo Touvron, Louis Martin, Kevin Stone, Peter Al- bert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. 2023. Llama 2: Open founda- tion and fine-tuned chat models. arXiv preprint arXiv:2307.09288. Cunxiang Wang, Shuailong Liang, Yue Zhang, Xiaonan Li, and Tian Gao. 2019. Does it make sense? and why? a pilot study for sense making and explanation. arXiv preprint arXiv:1906.00363. Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le, Ed Chi, Sharan Narang, Aakanksha Chowdhery, and Denny Zhou. 2022. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:2203.11171. Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, et al. 2022. Chain-of-thought prompting elicits rea- soning in large language models. Advances in Neural Information Processing Systems, 35:24824–24837. Shunyu Yao, Dian Yu, Jeffrey Zhao, Izhak Shafran, Thomas L Griffiths, Yuan Cao, and Karthik Narasimhan. 2023. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:2305.10601. Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen, Christopher De- wan, Mona Diab, Xian Li, Xi Victoria Lin, et al. 2022. Opt: Open pre-trained transformer language models. arXiv preprint arXiv:2205.01068. Denny Zhou, Nathanael Schärli, Le Hou, Jason Wei, Nathan Scales, Xuezhi Wang, Dale Schuurmans, Claire Cui, Olivier Bousquet, Quoc Le, et al. 2022. Least-to-most prompting enables complex reason- arXiv preprint ing in large language models. arXiv:2205.10625. A Algorithm for self-rethinking Algorithm 1 self-rethinking Mistakes = {...} Correct & Incorrect Examples = {...} ErrorCounter ← 0 Prompt: Why you made the mistakes? Mistakes ← Error Type, Demonstrations, Examples. Stage1 Prompt: Let’s think step by step. Stage2 Prompt: Do you make the same mistakes in Mistakes? while ErrorCounter < k do if Yes then go to Step2 ErrorCounter ← ErrorCounter + 1 else if No then get the answer break end if end while if ErrorCounter == k then Assume: Problem or error detection exceeds the model’s capabilities. 
end if Prompt: So the final answer is: B Reasoning Mistake Examples Error name: Misinterpretation of Given Data Error type: Logical – Example: Natalia sold clips to 48 of her friends in April, and then she sold half as many clips in May. How many clips did Natalia sell altogether in April and May? – Correct Answer: Natalia sold 48/2 = 24 clips in May. Natalia sold 48+24 = 72 clips altogether in April and May. – Incorrect Rationale: Natalia sold 48 * 2 = 96 clips in May. Natalia sold 48+96 = 144 clips altogether in April and May. – Demonstration: Mistaking multiplication for division led to a significant overestimate of the total clips sold. Error type: Overlooking Details Error type: Logical – Example: Mark has a garden with flowers. He planted plants of three different colors in it. Ten of them are yellow, and there are 80% more of those in purple. There are only 25% as many green flowers as there are yellow and purple flowers. How many flowers does Mark have in his garden? – Correct Answer: There are 80/100 * 10 = 8 more purple flowers than yellow flowers. So in Mark’s garden, there are 10 + 8 = 18 purple flowers. Purple and yellow flowers sum up to 10 + 18 = 28 flowers. That means in Mark’s garden there are 25/100 * 28 = 7 green flowers. So in total Mark has 28 + 7 = 35 plants in his garden. – Incorrect Rationale: There are 80/100 * 10 = 8 more purple flowers than yellow flowers. So in Mark’s garden, there are 10 + 8 = 18 purple flowers. That means in Mark’s garden there are 25/100 * 18 = 4.5 green flowers. So in total Mark has 10 + 18 + 4.5 = 32.5 plants in his garden. – Demonstration: Neglecting to consider both yellow and purple flowers in the green flower calculation led to a significant underestimation of the total number of flowers in Mark’s garden. Error name: Misapplication of Arithmetic Operation Error type: Calculation – Example: Weng earns $12 an hour for babysitting. Yesterday, she just did 50 minutes of babysitting. How much did she earn? – Correct Answer: Weng earns 12/60 = $0.2 per minute. Working 50 minutes, she earned 0.2 x 50 = $10. – Incorrect Rationale: Weng earns 12/60 = $2 per minute. Working 50 minutes, she earned 2 x 50 = $100. – Demonstration: Confusing the rate per hour with the rate per minute led to a substantial overestimation of earnings. Error name: Numerical Error type: Numeric – Example: The chicken crossed the road to get to the other side twice for the thrill of it. The first time, it had to dodge 23 speeding cars. The second time, a person tried to catch it and accidentally pulled out twice as many feathers as the number of cars the chicken had dodged. The chicken had 5263 feathers before its thrill-seeking road crossings. How many feathers did it have afterward? – Correct Answer: The chicken lost 23 * 2 = «23*2=46»46 feathers on its second road crossing.Thus, it had 5263 - 46 = «5263-46=5217»5217 feathers after crossing the road twice. – Incorrect Rationale: The chicken lost 23 * 2 = «23*2=46»46 feathers on its second road crossing. Thus, it had 5263 - 46 = «5263-52=5211»5211 feathers after crossing the road twice. – Demonstration: 1. The correct answer is 5217, while your answer is 5211. 2. Your answer is wrong because you subtracted 52 instead of 46. 3. The type name of the incorrect answer is numerical. Table 8: Examples of Error Types in Arithmetic Reasoning. All contents are generated by PaLM2 itself. 
Error name: Logical Fallacy or Incorrect Inference Error type: Logical – Example: "When standing miles away from Mount Rushmore" – Correct Rationale: Objects appear smaller when viewed from a greater distance. – Incorrect Rationale: "The mountains do not look smaller when standing miles away from Mount Rushmore. They look larger." (Logical fallacy) – Demonstration: 1. The correct rationale is that objects appear smaller when viewed from a greater distance, whereas the incorrect rationale states the opposite. 2. This is a logical fallacy as it contradicts a basic principle of perception. 3. The type name of the incorrect rationale is logical. Error name: Incorrect Assumptions or Generalizations Error type: Logical – Example: "Poison causes harm to which of the following?" – Correct Rationale: Poison affects living organisms. – Incorrect Rationale: "Robots do not get hurt by poison." (Incorrect generalization about the effects of poison) – Demonstration: 1. The correct rationale is that poison affects living organisms, but the incorrect rationale generalizes that robots are immune to poison. 2. This is an incorrect generalization because robots, being non-living entities, are not subject to biological effects. 3. The type name of the incorrect rationale is logical. Error name: Misunderstanding Literal vs. Metaphorical Language Error type: Linguistics – Example: "When food is reduced in the stomach" – Correct Rationale: Digestion involves the breakdown of food by stomach acid. – Incorrect Rationale: "Choice D is incorrect because it is not a fact." (Misunderstanding metaphorical language) – Demonstration: 1. The correct rationale is about the literal process of digestion, whereas the incorrect rationale misinterprets the metaphorical language. 2. This demonstrates a misunderstanding of metaphorical language. 3. The type name of the incorrect rationale is linguistics. Error name: Factual Inaccuracy Error type: Commonsense – Example: "You can make a telescope with a" – Correct Rationale: A telescope requires specific optical elements to function. – Incorrect Rationale: "A telescope needs a lens and a magnifying glass is a lens, so glass is a good choice." (Factually inaccurate about how telescopes are made) – Demonstration: 1. The correct rationale is that a telescope requires specific optical elements, whereas the incorrect rationale assumes any lens, like a magnifying glass, can make a telescope. 2. This shows a factual inaccuracy in understanding how telescopes are constructed. 3. The type name of the incorrect rationale is commonsense. Error type: Misunderstanding Context or Relevance Error type: Context – Example: "an inherited characteristic found on all mammals is" – Correct Rationale: Inherited characteristics in mammals include features like fur. – Incorrect Rationale: "Shoes are not found on all mammals" (Misunderstanding the context of biological characteristics) – Demonstration: 1. The correct rationale focuses on relevant inherited physical traits like fur. 2. This error illustrates a clear lack of understanding of the context. 3. The type name of the incorrect rationale should be context. Table 9: Examples of Error Types in Commonsense Reasoning. All contents are generated by PaLM2 itself. C More Details about LLM-based Clustering Approach Input Output Please generate several keywords to cover all the following error types, and assign each keyword to an error type category. 
Output in the following format: [Specific Error Category1]: [keyword1], [keyword2] [Specific Error Category2]: [keyword3], [keyword4] Keywords: {keywords} Mathematical: {keywords cluster1} Numerical: {keywords cluster2} Arithmetic: {keywords cluster3} Calculation: {keywords cluster4} Table 10: Detailed input and output of our LLM-based clustering method. Error Type Calculation Numeric Logical Linguistics Definition Mistakes or inaccuracies that occur during the process of performing math- ematical calculations. These errors can arise from various sources and can occur at any stage of a mathematical problem-solving process. Numeric errors in the context of mathematical reasoning refer to inaccura- cies that arise from the representation and manipulation of numerical values. These errors can occur at various stages of mathematical computations and can result from limitations in the precision of the representation of real numbers or mistakes in handling numerical data. Logical errors involve mistakes in the overall reasoning or strategy used to solve a mathematical problem. This type of error may not be immediately apparent during the calculation process but can lead to incorrect final results. It could include using an incorrect formula or assumptions, misunderstand- ing the problem statement, or applying the wrong concept. Errors in linguistics involve inaccuracies or mistakes in the use of language. These can include grammatical errors, misuse of vocabulary, incorrect syn- tax, or problems in semantics. Linguistic errors may arise from a lack of understanding of the rules of a language, misinterpretation of meaning, or the inability to effectively convey a message in a given language. Such errors can affect the clarity, coherence, and overall effectiveness of commu- nication. Commonsense Commonsense errors refer to mistakes or inaccuracies that occur in the application of general world knowledge or everyday reasoning. These errors can arise from misconceptions, flawed logic, or misunderstandings of basic principles that are widely accepted as common knowledge. Commonsense errors often lead to conclusions or decisions that, upon closer examination, are illogical or inconsistent with general understanding of the world. Errors of misunderstanding context or relevance occur when there is a failure to correctly interpret or apply the relevant information in a given scenario. This type of error typically involves overlooking key aspects of a context, making inappropriate generalizations, or failing to distinguish between literal and metaphorical language. These errors can significantly alter the intended meaning or relevance of a response in reasoning tasks. Context Table 11: PaLM2’s Understanding and Definitions for Error Types. All contents are generated by itself after providing its mistakes and corresponding golden-standard references.
synthetic_cpt
1
Impact-generated_seismic_signals_on_Mars.pdf
manuscript submitted to JGR: Planets Effect of impact velocity and angle on deformational heating and post-impact temperature S. Wakita1,2, H. Genda3, K. Kurosawa4, T. M. Davison 5, and B. C. Johnson 1,6 1Department of Earth, Atmospheric, and Planetary Sciences, Purdue University, West Lafayette, IN, USA 2Department of Earth, Atmospheric and Planetary Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA 3Earth-Life Science Institute, Tokyo Institute of Technology, Meguro, Japan 4Planetary Exploration Research Center, Chiba Institute of Technology, Narashino, Japan 5Department of Earth Science and Engineering, Imperial College London, London, UK 6Department of Physics and Astronomy, Purdue University, West Lafayette, IN, USA Key Points: • We examined the dependence of impact heating on impact angle and velocity us- ing a shock physics code. • The amount of heated mass is similar among > 45◦ impacts, while it is smaller for shallower impacts. • We derived an empirical formula for the cumulative heated mass over 1000 K dur- ing oblique impacts. 2 2 0 2 g u A 6 1 ] P E . h p - o r t s a [ 1 v 0 3 6 7 0 . 8 0 2 2 : v i X r a Corresponding author: Shigeru Wakita, [email protected] –1– manuscript submitted to JGR: Planets Abstract The record of impact induced shock-heating in meteorites is an important key for un- derstanding the collisional history of the solar system. Material strength is important for impact heating, but the effect of impact angle and impact velocity on shear heating remains poorly understood. Here, we report three-dimensional oblique impact simula- tions, which confirm the enhanced heating due to material strength and explore the ef- fects of impact angle and impact velocity. We find that oblique impacts with an impact angle that is steeper than 45 degree produce a similar amount of heated mass as verti- cal impacts. On the other hand, grazing impacts produce less heated mass and smaller heated regions compared to impacts at steeper angles. We derive an empirical formula of the heated mass, as a function of the impact angle and velocity. This formula can be used to estimate the impact conditions (velocity and angle) that had occurred and caused Ar loss in the meteoritic parent bodies. Furthermore, our results indicate that grazing impacts at higher impact velocities could generate a similar amount of heated material as vertical impacts at lower velocities. As the heated material produced by grazing im- pacts has experienced lower pressure than the material heated by vertical impacts, our results imply that grazing impacts may produce weakly shock-heated meteorites. Plain Language Summary Meteorites are extraterrestrial materials that have been delivered to the Earth from asteroids. The materials in meteorites can record information about their formation and subsequent evolution. Thus, they are excellent sources of information used to explore the history of the solar system. One such feature recorded is evidence of shock: high pres- sures and temperatures caused by collisions between asteroids. Previous work investi- gating impacts found that material strength is a key factor in determining the amount of impact heating, especially in low-speed collisions, like those expected to occur in the main asteroid belt. In this work, we explore various oblique incidence impacts to study the effects of material strength by using a shock physics model. We confirm that ma- terial strength plays a key role in oblique impacts, just as in head-on impacts. 
Our re- sults show that head-on and 45 degree impacts can generate nearly the same amount of heated mass in total. However, more oblique impacts with a shallower angle produce less heated mass than other steeper-angle impacts (i.e., head-on and 45 degree impacts). We also find that low-speed vertical impacts and high-speed grazing impacts can produce the same amount of material in asteroids that have experienced a given temperature. 1 Introduction Asteroids may impact other terrestrial bodies at various impact velocities and an- gles. Current asteroids collide with each other at a mean impact velocity of 5 km/s (Bottke et al., 1994; Farinella & Davis, 1992), while asteroids hit the Moon and Mars at over 10 km/s (Ivanov, 2001; Yue et al., 2013). We observe the evidence of impacts as craters on Moon, Mars, and asteroids, even on small-sized bodies, like Ryugu and Bennu (Walsh et al., 2019; Morota et al., 2020). As asteroids have experienced such collisions over the history of the solar system, exploring ancient evidence of impacts can reveal the early collisional history of asteroids (e.g., Sugita et al., 2019). Ejecta produced by impacts on asteroids sometimes travel to the Earth and are found as meteorites. Shock metamor- phism in meteorites provides evidence of past impacts on their parent bodies (e.g., Keil et al., 1994; Scott, 2002). Using shock-induced textures in meteorites, we categorize them by the degree of shock metamorphism (St¨offler et al., 1991; Scott et al., 1992; Rubin et al., 1997; St¨offler et al., 2018). When a strong shock propagates in the parent body, it might lead to the melting of materials. Weak shocks, however, are unable to produce melted textures; weakly shocked meteorites have experienced shock pressure less than 40 GPa. Nevertheless, weakly shocked meteorites also show evidence of moderate impact heat- –2– manuscript submitted to JGR: Planets ing (∼ 700 − 800 ◦C), such as the dehydration of phyllosilicate minerals (Nakamura, 2005; Nakato et al., 2008; Abreu & Bullock, 2013) and the reset of 40Ar-39Ar ages (Bogard, 1995, 2011; Weirich et al., 2012; Cohen, 2013). Thus, studying impact heating is essen- tial for a better understanding of the solar system. Oblique impacts are more likely to occur than head-on impacts in the solar sys- tem (Shoemaker, 1962). The most probable impact angle (θimp) is 45◦. The probabil- ity of impact occuring with an impact angle between θimp and θimp+dθimp is given as sin(2θimp)dθimp (Shoemaker, 1962). To understand the nature of oblique impacts, nu- merical simulations have been performed using smoothed particle hydrodynamics (SPH; e.g., Genda et al., 2012; Monaghan, 1992; Benz & Asphaug, 1994; Jutzi, 2015; Sugiura et al., 2021; Okamoto et al., 2020; Citron & Stewart, 2022) and grid-based hydrodynamic codes (CTH, SOVA, iSALE-3D; e.g., Pierazzo & Melosh, 2000b, 2000a; Elbeshausen et al., 2009; Elbeshausen & W¨unnemann, 2011; Elbeshausen et al., 2013; Davison et al., 2014; Artemieva & Morgan, 2017; Wakita et al., 2019). Previous work has examined the crater volume and heated mass (Pierazzo & Melosh, 2000b, 2000a; Elbeshausen et al., 2009, 2013; Davison et al., 2011, 2014). The volume heated to any given temperature depends on the impactor diameter, mass, velocity, and angle (Pierazzo & Melosh, 2000a). Recently, the material strength in the target has been recognized as an additional im- portant parameter for heating (Quintana et al., 2015; Kurosawa & Genda, 2018; Kuro- sawa et al., 2021; Wakita & Genda, 2019; Wakita et al., 2019). 
As extraterrestrial materials have experienced impacts on their parent asteroids, impact heating is crucial to understand their record. When we consider the material strength in rocks, the temperature increase is much higher than previously expected (e.g., Kuro- sawa & Genda, 2018). Dissipation of the kinetic energy due to plastic deformation in pressure- strengthened rocks is equivalent to an increase in internal energy, leading to higher tem- peratures during, and after, decompression. Kurosawa and Genda (2018) confirmed the post-shock heating due to plastic deformation in head-on impacts using a shock physics code. Although the following work explored an oblique impact of 45◦ at 5 km/s (Wakita et al., 2019), impacts at a variety of impact angles and velocities must be explored to better understand the effects of deformational heating. Davison et al. (2014) showed the dependence of the heated mass on impact angle, however, the estimated post-shock tem- perature is based on the peak shock pressure, and ignores enhanced deformational heat- ing (Kurosawa & Genda, 2018). Here we perform oblique impact simulations with material strength to examine the dependence of impact heating on impact velocities and angles. While we vary impact velocity and angle systematically with the same method as in previous work (Wakita et al., 2019), we also conduct a companion series of the impacts without material strength. Simulations without material strength have no deformational heating. Thus, compar- ison of simulations with and without strength allows us to quantitatively determine the effect of deformational heating (Kurosawa & Genda, 2018; Wakita et al., 2019). Con- sidering 1000 K as a reference temperature, which is the temperature for Ar age reset- ting (following previous work, e.g., Marchi et al., 2013), we provide an empirical rela- tionship of the heated mass which exceeds that temperature. 2 Methods We use the iSALE-3D shock physics code (Elbeshausen et al., 2009; Elbeshausen & W¨unnemann, 2011; Collins et al., 2016), to simulate a spherical impactor striking a flat-surface target at oblique angles. iSALE-3D uses a solver as described in Hirt et al. (1974). iSALE-2D is limited to simulating vertical-incidence impacts due to its use of axial symmetry. Thus, to simulate a range of impact angles, we use the fully three-dimensional version, iSALE-3D. The results of iSALE-2D and vertical impacts of iSALE-3D have pre- viously been shown to agree well (e.g., Elbeshausen et al., 2009; Davison et al., 2011, 2014; –3– manuscript submitted to JGR: Planets Raducan et al., 2022). Note that some examined about crater formation and impact ejecta, but Davison et al. (2014) compared the impact heating (similar to ours but without the shear heating). Since material strength is important for studying the effect of impact heat- ing, we employ a strength model appropriate for rocky materials (see below, Collins et al., 2004; Melosh et al., 1992; Ivanov et al., 1997). It is well known that porous targets produce more melt than non-porous targets in iSALE-2D (W¨unnemann et al., 2006; Davi- son et al., 2010), and a porosity compaction model is implemented into the iSALE-3D (W¨unnemann et al., 2006; Collins et al., 2011). However, to reduce the parameter space and compare it with previous work (e.g., Kurosawa & Genda, 2018), we only consider non-porous materials in this study. 
Nevertheless, we can apply the results of this study to well-consolidated rocky materials, such as ordinary chondrites with a porosity of less than 10% (e.g., Flynn et al., 2018; Ostrowski & Bryson, 2019).

We assume the impactor and the target have the same composition of dunite, using the ANEOS equation of state (Benz et al., 1989). In this work, we model dunite as non-porous with material strength. Since dunite has a well-defined equation of state (Benz et al., 1989), it is widely used to simulate bodies in the inner solar system (e.g., Davison et al., 2010; Johnson et al., 2015). It also represents meteoritic material well (ordinary chondrite; Svetsov & Shuvalov, 2015). Thus, the material parameters for dunite without porosity used in this work are representative of compact bodies in the inner solar system. As previously noted, simulations without strength are only used to quantify the effects of material strength and are not meant to represent specific solar system bodies. For the input parameters, we take the same values shown in Table S1 of Kurosawa and Genda (2018).

When we simulate impacts with material strength, we use two models to describe the yield strength of intact and damaged rock: the Lundborg strength model (Lundborg, 1968) and the Drucker-Prager model (Drucker & Prager, 1952), respectively. We combine these two models using a damage parameter D, which depends on the total plastic strain (e.g., Collins et al., 2004). D ranges from 0 (intact rock) to 1 (thoroughly fractured rock). The damage parameter D is initially set to 0, and damage is accumulated as material deforms and accumulates plastic strain according to the damage model of Ivanov et al. (1997). When the shock pressure exceeds the Hugoniot elastic limit, materials become thoroughly damaged. We find material is completely damaged out to approximately 4 impactor radii from the point of impact (see also Fig. 5 of Wakita et al. (2019)). The yield strength Y, that of intact rock Yi, and that of damaged rock Yd are written as follows:

Y = (1 − D) Yi + D Yd,   (1)

Yi = Ycoh,i + µint P / [1 + µint P / (Ylimit − Ycoh,i)],   (2)

Yd = min(Ycoh + µ P, Yi),   (3)

where Ycoh,i and Ycoh are the cohesions of intact and damaged rock at zero pressure, µint is the coefficient of internal friction for intact rock, P is the temporal mean pressure, Ylimit is the limiting strength at high pressure, and µ is the damaged friction coefficient. To simultaneously handle both intact and thoroughly fractured rock (Equation 1), we use the Lundborg model for intact rock (Equation 2) and the Drucker-Prager model for thoroughly fractured rock (Equation 3). The damaged friction coefficient µ is one of the important parameters for impact heating (Kurosawa & Genda, 2018; Wakita & Genda, 2019; Wakita et al., 2019). The dependence on the damaged friction coefficient has been explored previously, and a smaller value corresponds more closely to the case without material strength (Kurosawa & Genda, 2018). Following the fiducial case in their work, we adopt µ = 0.6, a typical value for granular media made of rocky materials (e.g., Collins et al., 2004). Note that the limiting strength Ylimit is another influential parameter (see also Figure S1 of Kurosawa & Genda, 2018). The limiting strength, also known as the von Mises strength, is the maximum strength a material can have regardless of confining pressure. Following Kurosawa and Genda (2018), we use Ylimit = 3.5 GPa in this work.
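As an illustration of Equations (1)-(3), the sketch below evaluates the blended yield strength. Only µ = 0.6 and Ylimit = 3.5 GPa are taken from the text above; the cohesion values and the intact friction coefficient are placeholder assumptions (the actual inputs follow Table S1 of Kurosawa and Genda, 2018).

```python
# Sketch of the yield-strength model of Eqs. (1)-(3): the Lundborg model for
# intact rock and the Drucker-Prager model for damaged rock, blended by the
# damage parameter D. mu = 0.6 and Y_limit = 3.5 GPa follow the text; the
# cohesion values and intact friction coefficient are illustrative placeholders.

MU_DAMAGED = 0.6          # damaged friction coefficient (from the text)
Y_LIMIT = 3.5e9           # limiting (von Mises) strength [Pa] (from the text)
Y_COH_INTACT = 1.0e7      # cohesion of intact rock [Pa]  -- placeholder assumption
Y_COH_DAMAGED = 1.0e4     # cohesion of damaged rock [Pa] -- placeholder assumption
MU_INTACT = 1.2           # intact friction coefficient   -- placeholder assumption


def yield_strength(pressure: float, damage: float) -> float:
    """Return Y = (1 - D) * Y_i + D * Y_d for a pressure [Pa] and damage D in [0, 1]."""
    # Eq. (2): Lundborg model for intact rock
    y_intact = Y_COH_INTACT + MU_INTACT * pressure / (
        1.0 + MU_INTACT * pressure / (Y_LIMIT - Y_COH_INTACT))
    # Eq. (3): Drucker-Prager model for damaged rock, capped by the intact strength
    y_damaged = min(Y_COH_DAMAGED + MU_DAMAGED * pressure, y_intact)
    # Eq. (1): blend the two end-members with the damage parameter
    return (1.0 - damage) * y_intact + damage * y_damaged


for p in (1e7, 1e8, 1e9):  # a few confining pressures [Pa]
    print(f"P = {p:8.1e} Pa: Y(intact) = {yield_strength(p, 0.0):.3e} Pa, "
          f"Y(damaged) = {yield_strength(p, 1.0):.3e} Pa")
```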
To examine the dependence of impact heating on impact properties, we vary the impact velocity (vimp = 2, 3, 5, 7, 10 km/s) and the impact angle (θimp = 15°, 30°, 45°, 60°, 75°, 90°). Note that we measure the impact angle from the target surface, i.e., 90° is a vertical impact. We fix the radius of the impactor (Rimp) at 2 km with a resolution of 0.1 km, which corresponds to 20 cells per projectile radius (CPPR). In this study, we assume that the size ratio of the impactor to the target is small enough to neglect the curvature of the target body. A resolution of 20 CPPR is sufficient to resolve the cumulative heated mass, as demonstrated in previous work (see Text S3 and Figure S2 in Wakita et al., 2019). As numerical diffusion exaggerates the temperature of material near the contact boundary between the impactor and the target (Kurosawa & Genda, 2018), we need to omit these artificially overheated regions from further analysis. In our case, that region corresponds to 0.3 times the impactor mass (Mimp). Though we count that region in the results, we do not discuss results below 0.3 Mimp (see the following sections). We place Lagrangian tracer particles in each cell of a high-resolution zone at the initial state of the simulations. Note that the computational region in iSALE-3D consists of a high-resolution zone and an extension zone. The cell size in the extension zone is larger than that in the high-resolution zone and increases with distance from the boundary with the high-resolution zone (see also Figure 1 of Davison et al., 2011). To track the highly heated region (i.e., > 1000 K), we take the number of high-resolution cells as 220 in the horizontal direction (x), 100 in the vertical direction (z), and 150 in the depth direction (y). We also note that we horizontally shift the origin of the impact point according to the impact angle; the origin is located in the middle at θimp = 90° and is shifted 20 cells in the downrange direction at θimp = 45°. The total number of tracer particles is as large as ~2 × 10^6. The Lagrangian tracers move through the Eulerian grid, tracking the temperature and movement of a parcel of material. Note that we neglect radiative and conductive cooling during impact events; these are less effective than cooling due to expansion over the timescales considered here (e.g., Sugita & Schultz, 2002).

3 Results

The heated region depends on the impact obliquity. Figure 1 shows the distribution of peak temperature (Tpeak) at a time of 5 ts after impact, in a suite of simulations with material strength and an impact velocity of 5 km/s, where ts is the characteristic time for impactor penetration, ts = 2Rimp/vimp. As shown in the provenance plots (right panels), the heated area becomes shallower for more oblique impacts (see dashed lines, which show the Tpeak = 1000 K isotherm). Comparing impacts with and without material strength (Figure 2), we confirm that material strength enhances the temperature increase. This is consistent with previous work (Kurosawa & Genda, 2018; Wakita et al., 2019). As confirmed in Kurosawa and Genda (2018), more kinetic energy is converted into internal energy in the target with material strength. Since deformational heating cannot occur in simulations without material strength, we observe such additional heating only in the case with material strength.
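As a quick check of the timescales used for the snapshots, the characteristic penetration time ts = 2Rimp/vimp defined above can be tabulated for the 2 km impactor; the sketch below is purely illustrative.

```python
# Characteristic impactor-penetration time t_s = 2 * R_imp / v_imp for the
# R_imp = 2 km impactor and the impact velocities used in this study; the
# snapshots discussed in the text are taken at 5 t_s.

R_IMP_KM = 2.0
for v_kms in (2.0, 3.0, 5.0, 7.0, 10.0):
    t_s = 2.0 * R_IMP_KM / v_kms            # seconds, since km / (km/s) = s
    print(f"v_imp = {v_kms:4.1f} km/s: t_s = {t_s:.2f} s, 5 t_s = {5.0 * t_s:.1f} s")
```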
Most material has reached its peak temperature by 2.5 ts, as the 1000 K isotherm indicates (see the white dotted line and black dashed line in Figure 3 (b)). To ensure material has reached its peak temperature, we focus our analysis on peak temperatures reached before 5 ts. We have also confirmed that the cumulative heated mass is similar at 2.5-10 ts and does not change significantly after 5 ts (see Figure S5 in Wakita et al., 2019).

Figure 1. Snapshot of a cross section (x-z plane). The impactor hits the target with material strength at (x, z) = (0, 0) with vimp = 5 km/s and (a) θimp = 90°, (b) θimp = 60°, (c) θimp = 45°, and (d) θimp = 30°, respectively. Each tracer particle is colored by peak temperature (Tpeak). The left panels depict time = 5 ts and the right panels show the provenance plots, where we put the tracer particles back in their original locations. Dashed lines on the right panels indicate the isothermal line of Tpeak = 1000 K. Note that we also plot the tracer particles that are within the overheated regions near the contact boundary between the impactor and the target.

Figure 2. Same as Figure 1, but for the case without material strength.

Figure 3. Same as Figure 1 (c) (θimp = 45° with vimp = 5 km/s), but color illustrates the temporal temperature (T) in the left columns and Tpeak in the right columns at (a) 1.0 ts and (b) 2.5 ts. Black dashed lines indicate the isothermal line of Tpeak = 1000 K at 5 ts (Figure 1 (c)) and white dotted lines indicate the isothermal line at each time.

Figure 4. Cumulative heated mass of target materials produced by impacts at (a) vimp = 2 km/s, (b) vimp = 3 km/s, (c) vimp = 5 km/s, and (d) vimp = 10 km/s, respectively. M^Hydro_target and M^MS_target are the cumulative heated masses in the target without and with material strength, respectively, normalized by the impactor mass. The left-hand side of each panel depicts the cases without material strength (the pure hydrodynamic cases) and the right-hand side the cases with material strength. Each line represents a θimp (see legends). Note that the shaded region depicts the artificial overheated region due to the overshooting in temperature at the contact boundary between the impactor and the target.

To examine the effect of material strength, we compare the cumulative heated mass in the target with and without material strength (see Figure 4). Note that we count the tracer particles whose peak temperature exceeds a given Tpeak and regard their total mass as the cumulative heated mass. As shown in Figure 4 (c) (vimp = 5 km/s), the heated mass with material strength (right-hand panel) is always larger than that without material strength (the pure hydrodynamic case, left-hand panel) at a given impact angle. The difference between the cases with and without material strength reaches a factor of ten at most. Material strength enhances the impact heating regardless of θimp. On the other hand, the effect of shear heating on the cumulative heated mass depends on vimp. For lower impact velocities (Figure 4 (a) and (b)), our results show that the cumulative heated mass in the target with material strength is ~10 times larger than that without material strength.
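The cumulative-heated-mass bookkeeping just described can be sketched as follows; the tracer arrays below are hypothetical stand-ins for iSALE-3D output, and the numbers are illustrative only.

```python
import numpy as np

# Sketch of the cumulative-heated-mass bookkeeping described above: sum the
# mass of all tracer particles whose peak temperature exceeds a threshold and
# normalise by the impactor mass.

def cumulative_heated_mass(t_peak: np.ndarray,
                           tracer_mass: np.ndarray,
                           m_impactor: float,
                           thresholds: np.ndarray) -> np.ndarray:
    """Return M_target(T_peak > threshold) / M_imp for each threshold [K]."""
    heated = np.empty(len(thresholds), dtype=float)
    for i, t_thr in enumerate(thresholds):
        heated[i] = tracer_mass[t_peak > t_thr].sum() / m_impactor
    return heated


# toy example with random tracers (illustrative stand-ins, not simulation data)
rng = np.random.default_rng(0)
t_peak = rng.uniform(300.0, 3000.0, size=100_000)   # tracer peak temperatures [K]
tracer_mass = np.full(100_000, 1.0e9)               # tracer masses [kg]
m_imp = 1.1e13                                      # impactor mass [kg]

thresholds = np.array([873.0, 1000.0])              # dehydration and Ar-reset thresholds [K]
print(cumulative_heated_mass(t_peak, tracer_mass, m_imp, thresholds))
```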
Previous work indicated that a combination of material strength and the movement of the impactor results in enhanced heating in oblique impacts (Wakita et al., 2019). Since the heated mass without material strength at lower velocity shows a larger difference between vertical and oblique impacts, this implies that material strength is more effective than the movement of the impactor. The heated mass without material strength approaches that with material strength as vimp increases (see Figure 4 (d)). This is also consistent with previous findings (Quintana et al., 2015; Kurosawa & Genda, 2018). As a result, the difference between the heated masses at 90° and 45° in the case without material strength approaches that with material strength (Figure 4 (d)). Thus, material strength increases the cumulative heated mass more effectively in lower-vimp scenarios than in higher-vimp scenarios.

We now consider the results with material strength and discuss the effect of θimp on impact heating. Previous work focused on oblique impacts at vimp = 5 km/s and showed that the cumulative heated masses at 90° and 45° are almost the same (Wakita et al., 2019). We find that the heated masses at 75° and 60° lie between those at 90° and 45°, and their difference is within a factor of 1.5 (Figure 4 (c)). On the contrary, we find that shallower impacts (θimp ≤ 30°) produce much less heated mass than 45° impacts. Grazing impactors are decapitated before they penetrate into the target, and heating is limited mainly to the lower hemisphere (Schultz & Gault, 1990; Davison et al., 2011). While the heated area of θimp ≤ 30° impacts has a similar width to that of the 45° impacts, the shallower penetration of grazing impactors results in the heated area extending to smaller depths (see dashed lines in Figure 1 (c) and (d)). As a result, the heated mass at θimp ≤ 30° becomes smaller than at higher-angle impacts. Note that it is beyond the focus of our paper to find the threshold impact angle at which the cumulative heated mass begins decreasing. While the cumulative heated mass takes a similar value for ≥ 45° impacts, the cumulative heated mass for ≤ 30° impacts is always less than that, regardless of the impact velocity (vimp). Our results show that impacts of θimp ≥ 45° produce a similar heated mass to within a factor of 1.5 (right panels in Figure 4). Note that the results saturate around M^MS_target/Mimp ~ 100 (see Figure 4 (d)), because the total computational region in the target that we track with tracer particles is as large as Mtarget/Mimp ≃ 10^2 (see Methods). On the contrary, the ratio of the cumulative heated mass at shallower angles (θimp ≤ 30°) to that at θimp = 45°, M^MS_target(θimp)/M^MS_target(θimp = 45°), is ~0.6 (θimp = 30°) and ~0.2 (θimp = 15°). Although the heated mass from shallower-angle impacts (θimp ≤ 30°) is less than that from steeper-angle impacts, the effect of material strength on the degree of impact heating is still significant (see Figure 4).

4 Discussion

Here we discuss the impact-induced heated materials using our results. If the impact heats the target beyond a threshold temperature, it could trigger Ar loss and reset the target's 40Ar-39Ar age. Kurosawa and Genda (2018) showed that the threshold impact velocity for Ar loss would be 2 km/s in a target with material strength, which is lower than the 8 km/s in the case without material strength.
We investigate the cumulative heated mass relevant to impact-induced Ar age resetting during oblique impacts by assuming 1000 K as an index temperature, which is the closure temperature of a typical Ar carrier mineral (e.g., feldspar; Cohen, 2013). Figure 5 summarizes the cumulative heated mass of Tpeak > 1000 K in the case with material strength. The dependence of the cumulative heated mass on θimp is similar regardless of vimp; the heated mass at θimp ≥ 45° is always larger than that at θimp ≤ 30° at a given vimp. High-velocity impacts produce a larger cumulative heated mass than the low-velocity cases. While the 40Ar-39Ar age has been used to estimate the latest impact event on the parent body of meteorites (Bogard, 2011), it is also possible to estimate their original depth. When M^MS_target/Mimp is sufficiently large (e.g., ≥ 1), we can regard such an impact condition as resetting the Ar age. Thus, our results show that grazing impacts (e.g., θimp = 30° at 5 km/s) can contribute to Ar age resetting (Figure 5). Since grazing impacts excavate relatively shallower material (see Figure 1 (d)), this may imply that meteorites might originate from shallower depths than previously thought.

We need to mention that the use of Tpeak can exaggerate the cumulative heated mass. Because Tpeak records the temperature during the shock, it may be overestimated due to numerical diffusion. Even if Tpeak is the same, a difference in pressure may indicate a different state (i.e., shock state or post-shock state). In such a case, especially for phenomena that take time to occur (e.g., dehydration), the post-shock temperature (Tpost, the temperature after the pressure drops below 10^5 Pa) can be useful. When we compare the peak temperature with the post-shock temperature, we find that the former is overestimated by about 100 K relative to the latter at Tpeak = 1000 K in the case of vimp = 5 km/s (Figure 6). Although Tpost is the more accurate measure, it is computationally expensive to evaluate for all tracer particles and is not feasible in our current work, because some particles at the contact between the impactor and the target take a longer time (> 5 ts) for their pressure to decrease to 10^5 Pa (see the white regions in Figure 7). However, the differences between Tpeak and Tpost are acceptable, and our estimate based on Tpeak is appropriate for accounting for the cumulative heated mass. Kurosawa and Genda (2018) calculated the cumulative heated mass for Ar loss using the entropy corresponding to 1000 K at 10^5 Pa. Our result of M^MS_target/Mimp = 7.04 for a vertical impact at vimp = 10 km/s is within 11% of their result of 6.32 (see Figure 3 in Kurosawa & Genda, 2018). This agreement suggests that the use of peak temperature to estimate the cumulative heated mass after impact-induced shock heating is reasonable.

It is worth exploring the dependence of the cumulative heated mass on impact angle and velocity. To estimate the cumulative heated mass at various impact velocities and angles, we derive an empirical formula. Note that an artificial increase in temperature around the contact boundary between impactor and target in numerical simulations has been reported (Kurosawa & Genda, 2018). We use the numerical results above 0.3 Mimp, which are free from the artificial overheating and are thus more conservative.
To minimize the number of coefficients in the empirical formula, we adopt the following equation:

M^formula_target(Tpeak)/Mimp = {C1 sin(θimp) + C2 sin^2(θimp) + (1 − C1 − C2) sin^3(θimp)} C3 (0.5 vimp^2 / ETpeak)^C4,   (4)

where ETpeak is the specific internal energy at Tpeak. The procedure to obtain the empirical formula is as follows. We prepare five datasets at given impact velocities (2, 3, 5, 7, and 10 km/s), which are the normalized heated mass as a function of impact angle. We confirm that the five datasets almost converge onto a single trend against the change in impact angle. We then fit all the data with a third-order polynomial function with two physical constraints: (1) the normalized heated mass equals zero at θimp = 0°, and (2) the normalized heated mass equals unity at θimp = 90°. Thus, the dependence of the heated mass on impact angle at a given impact velocity can be described with two fitting coefficients (C1 and C2 within the braces in Equation 4). Next, we divide the heated masses by the impact-angle term, resulting in a corrected heated mass that depends only on impact velocity, which corresponds to the specific internal energy. The corrected data points can be fitted well by a power law with two fitting constants (C3 and C4 in Equation 4). By combining the impact-angle and velocity dependences, we obtain the empirical formula shown as Equation (4) with four coefficients. For the case of Tpeak = 1000 K (Figure 5), we find that the coefficients are C1 = −0.249, C2 = 3.40, C3 = 1.07, and C4 = 0.749, respectively. Note that ETpeak = 4.16 MJ/kg at Tpeak = 1000 K.

Figure 5. Cumulative heated mass of Tpeak over 1000 K in the target with material strength as a function of θimp. Symbols are colored according to vimp (see legend). The shaded region indicates the artificial overheated region near the contact of the impactor and target.

Figure 6. Heatmaps of the difference between Tpeak and the post-shock temperature Tpost at the time of 5 ts, for (a) θimp = 90° with vimp = 5 km/s and (b) θimp = 45° with vimp = 5 km/s. The gray contour represents their mass fraction in the target normalized by the mass of the impactor. The cell size is 50 K; for example, the cell of (Tpeak = 1000-1050 K, Tpeak − Tpost = 50-100 K) represents a mass fraction on the order of 10^-2 for both cases. A green dash-dotted line indicates the weighted average of Tpeak − Tpost.

Figure 7. Same viewing as the right columns of Figure 1, but color illustrates Tpeak − Tpost at 5 ts, for (a) θimp = 90° with vimp = 5 km/s and (b) θimp = 45° with vimp = 5 km/s. Note that we only plot the tracer particles in the target whose pressure decreases to 10^5 Pa within 5 ts (white regions do not satisfy this condition). Dashed lines indicate the isothermal line of Tpeak = 1000 K in Figure 1 (a) and (c). Dotted circles represent the provenance location of the impactor.

Figure 8 shows that the empirical formula reproduces our numerical results with material strength to within an error of ±21% (2σ). This formula would be useful to estimate the cumulative heated mass resulting from various impact conditions.
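For convenience, Equation (4) with the Tpeak = 1000 K coefficients quoted above can be evaluated directly. The sketch below is only a transcription of the formula and should be trusted only within the fitted range of impact angles (15-90°) and velocities (2-10 km/s).

```python
import numpy as np

# Equation (4) with the coefficients quoted above for T_peak = 1000 K
# (C1 = -0.249, C2 = 3.40, C3 = 1.07, C4 = 0.749, E_Tpeak = 4.16 MJ/kg).

C1, C2, C3, C4 = -0.249, 3.40, 1.07, 0.749
E_TPEAK = 4.16e6  # specific internal energy at T_peak = 1000 K [J/kg]


def heated_mass_ratio(theta_deg: float, v_imp_kms: float) -> float:
    """M_target(T_peak > 1000 K) / M_imp from the empirical formula, Eq. (4)."""
    s = np.sin(np.radians(theta_deg))
    angle_term = C1 * s + C2 * s**2 + (1.0 - C1 - C2) * s**3
    v = v_imp_kms * 1.0e3                                  # m/s
    velocity_term = C3 * (0.5 * v**2 / E_TPEAK) ** C4
    return angle_term * velocity_term


# example usage: angle dependence at v_imp = 5 km/s
for theta in (90, 60, 45, 30, 15):
    print(f"theta = {theta:2d} deg, v = 5 km/s: M/M_imp ~ {heated_mass_ratio(theta, 5.0):.2f}")
```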
We also discuss the occurrence of dehydration reactions of phyllosilicates (e.g., serpentine). As the dehydration of hydrous materials is an endothermic reaction, additional calculations are required to assess the dehydrated mass accurately (e.g., Kurosawa et al., 2021). Additionally, Kurosawa et al. (2021) experimentally showed that the efficiency of impact devolatilization of carbonaceous-asteroid-like materials (e.g., calcite) is considerably low (< 10% of the theoretical prediction). Thus, the dehydrated mass estimated by numerical simulations will be an overestimate, regardless of whether the reaction heat is considered. Although we may overestimate the heated mass by using the peak temperature and ignoring the reaction heat, it is still worth considering the impact conditions that may induce dehydration. We here take the temperature threshold for dehydration as 873 K, at which the dehydration of phyllosilicates starts to occur (Lange & Ahrens, 1982; Nozaki et al., 2006; Nakato et al., 2008). Note that the Hugoniot curves for dunite and serpentine differ (Benz et al., 1989; Brookshaw, 1998). Based on shock heating alone, serpentine will reach a similar temperature to dunite at a given impact condition; thus, the amount of dehydrated mass may not change even if the equation of state of serpentine is used. Please also note that our numerical setup is different from previous work investigating the fate of hydrous minerals during impacts, which suggested that serpentine in the core can avoid dehydration (Wakita & Genda, 2019). Since they considered a dunite layer over the serpentine core, the dunite layer might insulate the serpentine core from the direct impact-induced heating that we explore in this work.

The vertical impact with material strength at vimp = 2 km/s produces a dehydrated mass of M^MS_target/Mimp ~ 0.4 (Figures 4 and 9). Grazing impacts (θimp ≤ 30°) into the target with material strength require a higher impact velocity to produce a similar amount of impact-heated material as vertical impacts (see also Figure 9): θimp = 30° with vimp = 3 km/s (M^MS_target/Mimp ~ 0.5) and θimp = 15° with vimp = 5 km/s (M^MS_target/Mimp ~ 0.4). Nevertheless, those grazing impacts with material strength require lower velocities than vertical impacts without material strength to produce a similar amount of impact-heated material (at an impact velocity of vimp = 7 km/s, M^Hydro_target/Mimp ~ 0.5, Figure 10). Oblique impacts generate heated materials at lower peak pressures than vertical impacts at a given impact velocity (Wakita et al., 2019). We also find that grazing impacts at higher impact velocities can produce dehydrated material at lower peak pressures than vertical impacts at lower impact velocities (see Figure 11). As the high-temperature region of oblique impacts is widely distributed (see Figure 1), the weakly shocked region is also close to the surface, potentially to be ejected.

Figure 8. Comparison of the cumulative heated mass in the target (Tpeak > 1000 K). Numerical results are plotted on the y-axis, while the results from the empirical formula (Equation 4) are shown on the x-axis. The gray solid diagonal line indicates where both results are the same, and the dotted lines represent an error of ±21%. The symbols are the same as in Figure 5 (see legend).
If this ejected material eventually lands on the Earth as meteorites, it implies that shock-heated meteorites could have experienced a wide range of peak pressures. Thus, oblique impacts of θimp < 45° may have produced the dehydrated minerals in weakly shock-metamorphosed meteorites.

The time it takes for hydrous minerals to dehydrate depends on the reaction temperature. While dehydrated materials in carbonaceous chondrites indicate shock heating (e.g., Nakamura, 2005; Nakato et al., 2008; Abreu & Bullock, 2013), carbonaceous chondrites have generally experienced weak shocks. Dehydration starts to occur at about 873 K (600 °C; Lange & Ahrens, 1982; Nozaki et al., 2006; Nakato et al., 2008), but some work has examined the heating duration at higher temperatures. Nozaki et al. (2006) conducted heating experiments on carbonaceous chondrites: both short (10 s) and long (120 s) heating at a temperature of 1173 K (900 °C) decomposed the hydrous minerals. They indicated that the temperature is more important for dehydration than the duration. Other experimental work implies that a weakly shocked and dehydrated carbonaceous chondrite would have experienced 1-100 hours of heating at 1173 K or 10-1000 days of heating at 973 K (700 °C) (Nakato et al., 2008). While our impact simulations are unable to consider the duration materials spend at elevated temperatures, we can estimate the cooling time of the hydrous materials. Assuming that hydrous material is heated to a depth of 10 m (d, smaller than our cell size), its cooling timescale would be over 10,000 days. Note that we take a thermal diffusivity (κ) of 10^-7 m^2/s, a typical value for carbonaceous chondrites (Opeil et al., 2020), and calculate tcool ~ d^2/κ. Material at much greater depths would take a longer time. Although we do not know the time needed to dehydrate at 873 K, the cumulative heated mass above 873 K may represent the amount of dehydrated minerals.

5 Conclusions

The mass heated by oblique impacts is key to understanding the history of asteroids and meteorites. While Kurosawa and Genda (2018) clarified the importance of the material strength of the target for vertical impacts, follow-up work confirmed this for oblique impacts at a given impact velocity and angle (Wakita et al., 2019). We advanced their work by examining the dependence on impact velocity and angle. Considering material strength in the target, we have performed a series of oblique impact simulations over a range of impact velocities and angles. The cumulative heated mass in the target with material strength is always larger than that without material strength, which indicates that material strength enhances impact heating. Our oblique impact simulations with material strength showed that vertical impacts and impacts at steeper angles (≥ 45°) generate a similar cumulative heated mass, within a factor of 1.5. Grazing-angle impacts (≤ 30°) produce less heated mass than other oblique impacts regardless of impact velocity. From our impact simulations over a wide parameter space, we derived an empirical formula for the mass with peak temperature over 1000 K, which can be used to understand 40Ar-39Ar age resetting. Vertical impacts at low impact velocity and grazing impacts at high impact velocity produce a similar heated mass but differ in their peak pressures, indicating that grazing impacts are more likely to be responsible for impact heating in weakly shocked meteorites.
Figure 9. Same as Figure 5, but for the case of Tpeak > 873 K.

Figure 10. Same as Figure 9, but for the case without material strength.

Figure 11. Heatmaps of peak pressure and peak temperature at the time of 5 ts, for (a) θimp = 90° with vimp = 2 km/s, (b) θimp = 30° with vimp = 3 km/s, and (c) θimp = 15° with vimp = 5 km/s, respectively. The green dotted vertical line indicates Tpeak = 873 K, the temperature threshold for dehydration. The black dot-dashed line represents the Hugoniot curve of dunite. The gray contour represents the mass fraction in the target normalized by the mass of the impactor.

Open Research

All our data were generated using iSALE-3D, and our input files are available (Wakita et al., 2022). We also provide the cumulative heated mass of various impacts with/without material strength for Tpeak at every 10 K as a Data Set. Please note that usage of the iSALE-3D code is restricted to those who have contributed to the development of iSALE-2D, and iSALE-2D is distributed on a case-by-case basis to academic users in the impact community. It requires registration via the iSALE webpage (https://isale-code.github.io/), where the usage of iSALE-2D and its computational requirements are also described. We plot figures directly from our binary data using pySALEPlot, which is included in iSALE-3D and was developed by TMD. Please also note that pySALEPlot in the current stable release of iSALE-2D (Dellen) would not work for data from iSALE-3D.

Acknowledgments

We gratefully acknowledge the developers of iSALE-3D, including Dirk Elbeshausen, Kai Wünnemann, and Gareth Collins. This work has been supported in part by JSPS, Japan KAKENHI Grant Number JP17H06457 and JP17H02990. HG and KK are supported by JSPS KAKENHI Grant JP19H00726. KK is supported by JSPS KAKENHI Grants JP18H04464 and JP21K18660. TMD is funded by STFC grant ST/S000615/1.

References

Abreu, N. M., & Bullock, E. S. (2013, December). Opaque assemblages in CR2 Graves Nunataks (GRA) 06100 as indicators of shock-driven hydrothermal alteration in the CR chondrite parent body. Meteoritics and Planetary Science, 48, 2406–2429. doi: 10.1111/maps.12227

Artemieva, N., & Morgan, J. (2017). Quantifying the Release of Climate-Active Gases by Large Meteorite Impacts With a Case Study of Chicxulub. Geophysical Research Letters, 44(20), 10,180–10,188. doi: 10.1002/2017GL074879

Benz, W., & Asphaug, E. (1994, January). Impact Simulations with Fracture. I. Method and Tests. Icarus, 107(1), 98–116. doi: 10.1006/icar.1994.1009

Benz, W., Cameron, A. G. W., & Melosh, H. J. (1989). The origin of the Moon and the single-impact hypothesis III. Icarus, 81(1), 113–131. doi: 10.1016/0019-1035(89)90129-2

Bogard, D. D. (1995). Impact ages of meteorites: A synthesis. Meteoritics, 30(3), 244–268. doi: 10.1111/j.1945-5100.1995.tb01124.x

Bogard, D. D. (2011). K–Ar ages of meteorites: Clues to parent-body thermal histories. Geochemistry, 71(3), 207–226. doi: 10.1016/j.chemer.2011.03.001

Bottke, W. F., Nolan, M. C., Greenberg, R., & Kolvoord, R. A. (1994, February). Velocity distributions among colliding asteroids. Icarus, 107, 255–268. doi: 10.1006/icar.1994.1021

Brookshaw, L. (1998). An Equation of State for Serpentine. Tech. Rep.
Work- ing Paper Series SC-MC-9813, Faculty of Sciences, University of Southern Queensland 2.1, 3.5 . Citron, R. I., & Stewart, S. T. (2022, May). Large Impacts onto the Early Earth: The Planetary Science Journal , Planetary Sterilization and Iron Delivery. 3 (5), 116. doi: 10.3847/PSJ/ac66e8 Cohen, B. A. (2013). The Vestan cataclysm: Impact-melt clasts in howardites and the bombardment history of 4 Vesta. Meteoritics & Planetary Science, 48 (5), 771–785. doi: 10.1111/maps.12101 Collins, G. S., Elbeshausen, D., Davison, T. M., W¨unnemann, K., Ivanov, B., & Melosh, H. J. (2016, July). iSALE-Dellen manual. doi: 10.6084/m9.figshare.3473690.v2 Collins, G. S., Melosh, H. J., & Ivanov, B. A. (2004, February). Modeling damage and deformation in impact simulations. Meteoritics and Planetary Science, 39 , 217–231. doi: 10.1111/j.1945-5100.2004.tb00337.x Collins, G. S., Melosh, H. J., & W¨unnemann, K. (2011). Improvements to the $\epsilon$-$\alpha$ porous compaction model for simulating impacts into high-porosity solar system objects. ing, 38 (6), 434–439. doi: 10.1016/j.ijimpeng.2010.10.013 Davison, T. M., Ciesla, F. J., Collins, G. S., & Elbeshausen, D. (2014, December). The effect of impact obliquity on shock heating in planetesimal collisions. Me- teoritics and Planetary Science, 49 (12), 2252–2265. doi: 10.1111/maps.12394 International Journal of Impact Engineer- Davison, T. M., Collins, G. S., & Ciesla, F. J. (2010, July). Numerical modelling of heating in porous planetesimal collisions. Icarus, 208 , 468–481. doi: 10.1016/j .icarus.2010.01.034 Davison, T. M., Collins, G. S., Elbeshausen, D., W¨unnemann, K., & Kearsley, A. Numerical modeling of oblique hypervelocity impacts on (2011, October). strong ductile targets. Meteoritics and Planetary Science, 46 (10), 1510–1524. doi: 10.1111/j.1945-5100.2011.01246.x Drucker, D. C., & Prager, W. (1952). SOIL MECHANICS AND PLASTIC ANAL- YSIS OR LIMIT DESIGN. Quarterly of Applied Mathematics, 10 (2), 157–165. doi: 10.1090/qam/48291 Elbeshausen, D., & W¨unnemann, K. ISALE-3D: A three- (2011, January). dimensional, multi-material, multi-rheology hydrocode and its applications to large-scale geodynamic processes. Society Symposium. Proceedings, 11th Hypervelocity Impact Elbeshausen, D., W¨unnemann, K., & Collins, G. S. Scaling of oblique impacts in frictional targets: Implications for crater size and formation mechanisms. Icarus, 204 (2), 716–731. doi: 10.1016/j.icarus.2009.07.018 (2009, December). Elbeshausen, D., W¨unnemann, K., & Collins, G. S. (2013, November). The transi- tion from circular to elliptical impact craters. Journal of Geophysical Research: Planets, 118 (11), 2013JE004477. doi: 10.1002/2013JE004477 Farinella, P., & Davis, D. R. (1992, May). Collision rates and impact velocities in the Main Asteroid Belt. Icarus, 97 , 111–123. doi: 10.1016/0019-1035(92)90060 -K Flynn, G. J., Consolmagno, G. J., Brown, P., & Macke, R. J. ber). properties of their parent bodies. (2018, Septem- Physical properties of the stone meteorites: Implications for the doi: Geochemistry, 78 (3), 269–298. –18– manuscript submitted to JGR: Planets 10.1016/j.chemer.2017.04.002 Genda, H., Kokubo, E., & Ida, S. (2012, January). Merging Criteria for Giant Im- pacts of Protoplanets. The Astrophysical Journal , 744 (2), 137. doi: 10.1088/ 0004-637X/744/2/137 Hirt, C. W., Amsden, A. A., & Cook, J. L. (1974). Eulerian computing method for all flow speeds. Physics, 14 (3), 227–253. doi: 10.1016/0021-9991(74)90051-5 An arbitrary Lagrangian- Journal of Computational Ivanov, B. A. 
(2001). Mars/Moon Cratering Rate Ratio Estimates. Space Science Reviews, 96 (1), 87–104. doi: 10.1023/A:1011941121102 Ivanov, B. A., Deniem, D., & Neukum, G. (1997). Implementation of dynamic strength models into 2D hydrocodes: Applications for atmospheric breakup and impact cratering. 411–430. doi: 10.1016/S0734-743X(97)87511-2 International Journal of Impact Engineering, 20 (1), Johnson, B. C., Minton, D. A., Melosh, H. J., & Zuber, M. T. (2015, January). Im- pact jetting as the origin of chondrules. Nature, 517 (7534), 339–341. doi: 10 .1038/nature14105 Jutzi, M. (2015, March). SPH calculations of asteroid disruptions: The role of pres- doi: 10.1016/j.pss.2014.09 \planss, 107 , 3–9. sure dependent failure models. .012 Keil, K., Haack, H., & Scott, E. R. D. (1994). Catastrophic fragmentation of as- teroids: Evidence from meteorites. Planetary and Space Science, 42 (12), 1109– 1122. doi: 10.1016/0032-0633(94)90011-6 Kurosawa, K., & Genda, H. (2018, January). Effects of Friction and Plastic Defor- mation in Shock-Comminuted Damaged Rocks on Impact Heating. Geophysical Research Letters, 45 , 620–626. doi: 10.1002/2017GL076285 Kurosawa, K., Genda, H., Azuma, S., & Okazaki, K. (2021). The Role of Post- Shock Heating by Plastic Deformation During Impact Devolatilization of Calcite (CaCO3). Geophysical Research Letters, 48 (7), e2020GL091130. doi: 10.1029/2020GL091130 Lange, M. A., & Ahrens, T. J. (1982, January). tion of Serpentine and the Evolution of Planetary Atmospheres. nar and Planetary Science Conference Proceedings, 87 , 451-A456. 10.1029/JB087iS01p0A451 Impact Induced Dehydra- Lu- doi: Lundborg, N. (1968). Strength of rock-like materials. International Journal of Rock Mechanics and Mining Sciences & Geomechanics Abstracts, 5 (5), 427–454. doi: 10.1016/0148-9062(68)90046-6 Marchi, S., Bottke, W. F., Cohen, B. A., W¨unnemann, K., Kring, D. A., McSween, H. Y., . . . Russell, C. T. (2013, April). High-velocity collisions from the lunar cataclysm recorded in asteroidal meteorites. Nature Geoscience, 6 (4), 303–307. doi: 10.1038/ngeo1769 Melosh, H. J., Ryan, E. V., & Asphaug, E. (1992, September). Dynamic fragmenta- tion in impacts - Hydrocode simulation of laboratory impacts. Journal of Geo- physical Research: Planets, 97 , 14. doi: 10.1029/92JE01632 Monaghan, J. J. (1992). Smoothed particle hydrodynamics. Annual Review of As- tron and Astrophys, 30 , 543–574. doi: 10.1146/annurev.aa.30.090192.002551 Morota, T., Sugita, S., Cho, Y., Kanamaru, M., Tatsumi, E., Sakatani, N., . . . (2020). Tsuda, Y. Hayabusa2: Implications for surface evolution. doi: 10.1126/science.aaz6306 Sample collection from asteroid (162173) Ryugu by Science, 368 (6491), 654–659. Nakamura, T. (2005). Post-hydration thermal metamorphism of carbonaceous chon- Journal of Mineralogical and Petrological Sciences, 100 (6), 260–272. drites. doi: 10.2465/jmps.100.260 Nakato, A., Nakamura, T., Kitajima, F., & Noguchi, T. (2008, August). Evalu- ation of dehydration mechanism during heating of hydrous asteroids based on mineralogical and chemical analysis of naturally and experimentally –19– manuscript submitted to JGR: Planets heated CM chondrites. 10.1186/BF03352837 Earth, Planets, and Space, 60 (8), 855–864. doi: Nozaki, W., Nakamura, T., & Noguchi, T. (2006). Bulk mineralogical changes of hydrous micrometeorites during heating in the upper atmosphere at tempera- tures below 1000 °C. Meteoritics & Planetary Science, 41 (7), 1095–1114. doi: 10.1111/j.1945-5100.2006.tb00507.x Okamoto, T., Kurosawa, K., Genda, H., & Matsui, T. 
(2020). Impact Ejecta Near the Impact Point Observed Using Ultra-high-Speed Imaging and SPH Simula- tions and a Comparison of the Two Methods. Journal of Geophysical Research: Planets, 125 (4), e2019JE005943. doi: 10.1029/2019JE005943 Opeil, C. P., Britt, D. T., Macke, R. J., & Consolmagno, G. J. The sur- prising thermal properties of CM carbonaceous chondrites. Meteoritics & Plan- etary Science, 55 (8). doi: 10.1111/maps.13556 (2020). Ostrowski, D., & Bryson, K. (2019, January). The physical properties of meteorites. Planetary and Space Science, 165 , 148–178. doi: 10.1016/j.pss.2018.11.003 Pierazzo, E., & Melosh, H. J. Hydrocode modeling of oblique impacts: The fate of the projectile. Meteoritics and Planetary Science, 35 (1), 117–130. doi: 10.1111/j.1945-5100.2000.tb01979.x (2000a, January). Pierazzo, E., & Melosh, H. J. (2000b, May). Melt Production in Oblique Impacts. Icarus, 145 (1), 252–261. doi: 10.1006/icar.1999.6332 Quintana, S. N., Crawford, D. A., & Schultz, P. H. (2015, January). Analysis of Im- pact Melt and Vapor Production in CTH for Planetary Applications. Procedia Engineering, 103 , 499–506. doi: 10.1016/j.proeng.2015.04.065 Raducan, S. D., Davison, T. M., & Collins, G. S. (2022, March). Ejecta distribu- tion and momentum transfer from oblique impacts on asteroid surfaces. Icarus, 374 , 114793. doi: 10.1016/j.icarus.2021.114793 Rubin, A. E., Scott, E. R. D., & Keil, K. (1997, February). Shock metamorphism of enstatite chondrites. Geochimica et Cosmochimica Acta, 61 (4), 847–858. doi: 10.1016/S0016-7037(96)00364-X Schultz, P. H., & Gault, D. E. (1990, January). Prolonged global catastrophes from oblique impacts. In Global Catastrophes in Earth History; An Interdisciplinary Conference on Impacts, Volcanism, and Mass Mortality. Geological Society of America. doi: 10.1130/SPE247-p239 Scott, E. R. D. (2002). Meteorite Evidence for the Accretion and Collisional Evolu- tion of Asteroids. In Asteroids III (pp. 697–709). Scott, E. R. D., Keil, K., & St¨offler, D. Shock metamorphism of carbonaceous chondrites. Geochimica et Cosmochimica Acta, 56 (12), 4281– 4293. doi: 10.1016/0016-7037(92)90268-N (1992, December). Shoemaker, E. M. (1962). Interpretation of lunar craters. In Z. Kopal (Ed.), Physics and astronomy of the Moon (pp. 283–359). Academic Press. doi: 10.1016/B978 -1-4832-3240-9.50012-2 St¨offler, D., Hamann, C., & Metzler, K. (2018, January). Shock metamorphism of planetary silicate rocks and sediments: Proposal for an updated clas- sification system. 10.1111/maps.12912 Meteoritics and Planetary Science, 53 (1), 5–49. doi: St¨offler, D., Keil, K., & Scott, E. R. D. Shock metamorphism of ordinary chondrites. Geochimica et Cosmochimica Acta, 55 (12), 3845–3867. doi: 10.1016/0016-7037(91)90078-J (1991, December). Sugita, S., Honda, R., Morota, T., Kameda, S., Sawada, H., Tatsumi, E., . . . Tsuda, The geomorphology, color, and thermal properties of doi: Y. Ryugu: Implications for parent-body processes. 10.1126/science.aaw0422 Science, 364 (6437). (2019, April). Sugita, S., & Schultz, P. H. (2002, February). Initiation of Run-Out Flows on Venus by Oblique Impacts. Icarus, 155 (2), 265–284. doi: 10.1006/icar.2001.6731 –20– manuscript submitted to JGR: Planets Sugiura, K., Kobayashi, H., Watanabe, S.-i., Genda, H., Hyodo, R., & Inutsuka, S.-i. SPH simulations for shape deformation of rubble-pile as- (2021, September). teroids through spinup: The challenge for making top-shaped asteroids Ryugu and Bennu. Icarus, 365 , 114505. doi: 10.1016/j.icarus.2021.114505 Svetsov, V. 
V., & Shuvalov, V. V. (2015, November). Water delivery to the Moon by asteroidal and cometary impacts. \planss, 117 , 444–452. doi: 10.1016/j.pss .2015.09.011 Wakita, S., & Genda, H. (2019, August). Fates of hydrous materials during planetes- imal collisions. Icarus, 328 , 58–68. doi: 10.1016/j.icarus.2019.03.008 Wakita, S., Genda, H., Kurosawa, K., Davison, M., Thomas, & Johnson, C., (2022, August). Brandon. gle on deformational heating and post-impact temperature”. 10.5281/zenodo.6798859 Dataset of ”Effect of impact velocity and an- doi: Zenodo. Wakita, S., Genda, H., Kurosawa, K., & Davison, T. M. (2019). Enhancement of Im- pact Heating in Pressure-Strengthened Rocks in Oblique Impacts. Geophysical Research Letters, 46 (23), 13678–13686. doi: 10.1029/2019GL085174 Walsh, K. J., Jawin, E. R., Ballouz, R. L., Barnouin, O. S., Bierhaus, E. B., Con- Craters, boulders and regolith of (2019). nolly, H. C., . . . Team, T. O.-R. (101955) Bennu indicative of an old and dynamic surface. Nature Geoscience, 12 (4), 242–246. doi: 10.1038/s41561-019-0326-6 Weirich, J. R., Isachsen, C. E., Johnson, J. R., & Swindle, T. D. (2012, January). Variability of diffusion of argon in albite, pyroxene, and olivine in shocked and unshocked samples. doi: 10.1016/j.gca.2011.10.040 Geochimica et Cosmochimica Acta, 77 , 546–560. W¨unnemann, K., Collins, G. S., & Melosh, H. J. (2006, February). A strain-based porosity model for use in hydrocode simulations of impacts and implications for transient crater growth in porous targets. 10.1016/j.icarus.2005.10.013 Icarus, 180 , 514–527. doi: Yue, Z., Johnson, B. C., Minton, D. A., Melosh, H. J., Di, K., Hu, W., & Liu, Y. (2013). Projectile remnants in central peaks of lunar impact craters. Nature Geoscience, 6 (6), 435–437. doi: 10.1038/ngeo1828 –21–
synthetic_cpt
2
Towards_More_Effective_Table-to-Text_Generation_Assessing_In-Context_Learning_and_Self-Evaluation_with_Open-Source_Models.pdf
arXiv:2405.00187v1 [cs.CV] 30 Apr 2024

Towards End-to-End Semi-Supervised Table Detection with Semantic Aligned Matching Transformer

Tahira Shehzadi*1,2,3[0000−0002−7052−979X], Shalini Sarode1,3[0009−0007−9968−4068], Didier Stricker1,2,3, and Muhammad Zeshan Afzal1,2,3[0000−0002−0536−6867]

1 Department of Computer Science, Technical University of Kaiserslautern, 67663, Germany
2 Mindgarage, Technical University of Kaiserslautern, 67663, Germany
3 German Research Institute for Artificial Intelligence (DFKI), 67663 Kaiserslautern, Germany
{[email protected]}

Abstract. Table detection within document images is a crucial task in document processing, involving the identification and localization of tables. Recent strides in deep learning have substantially improved the accuracy of this task, but it still heavily relies on large labeled datasets for effective training. Several semi-supervised approaches have emerged to overcome this challenge, often employing CNN-based detectors with anchor proposals and post-processing techniques like non-maximal suppression (NMS). However, recent advancements in the field have shifted the focus towards transformer-based techniques, eliminating the need for NMS and emphasizing object queries and attention mechanisms. Previous research has focused on two key areas to improve transformer-based detectors: refining the quality of object queries and optimizing attention mechanisms. However, increasing object queries can introduce redundancy, while adjustments to the attention mechanism can increase complexity. To address these challenges, we introduce a semi-supervised approach employing SAM-DETR, a novel approach for precise alignment between object queries and target features. Our approach demonstrates remarkable reductions in false positives and substantial enhancements in table detection performance, particularly in complex documents characterized by diverse table structures. This work provides more efficient and accurate table detection in semi-supervised settings.

Keywords: Semi-Supervised Learning · Detection Transformer · SAM-DETR · Table Analysis · Table Detection.

1 Introduction

Document analysis has been a fundamental task in various workflow pipelines [1,2], with document summarization as its core task. An essential task in document analysis is identifying graphical objects like tables, figures, and text paragraphs. Previously, this task was carried out manually by analyzing the documents, understanding their contents, and summarizing them. However, the number of documents that need to be analyzed has drastically increased, and manual inspection is impossible. The growing number of documents led businesses to use more efficient and reliable automated methods. Optical character recognition (OCR) [3,4] and rule-based table detection approaches [5,6,7] are classical approaches for visual summarization. These methods perform well for documents with highly structured layouts because they are rule-based [5,6,7]. However, they struggle to adapt to varying and newer table designs, such as borderless tables. These limitations have shifted the research focus to developing techniques using deep learning [8,9,10,11]. These methods show significant improvements over traditional approaches [12], precisely detecting tables in documents irrespective of their structure. This advancement provides a notable improvement in document analysis and visual summarization.
Deep learning methods [13,14,15,16,17,18] eliminate handcrafted rules and excel at generalizing problems. However, their reliance on large amounts of labeled data for training counteracts the aim of reducing manual work. Generating these labels is time-consuming and prone to errors [19]. Although these supervised deep learning approaches achieve state-of-the-art results on public benchmarks, their usage in industry is limited without similarly large annotated datasets in specific domains. Semi-supervised learning methods [20] have emerged as a solution to insufficient labeled data for deep learning applications. Recent advancements [21,22,23] utilize two detectors: one generates pseudo-labels for unlabeled data, and the other refines predictions using these pseudo-labels and a smaller set of labeled data. These detectors update each other throughout training [24,25,26,27]. However, it is important to note that the initial pseudo-label generator is often not robust, potentially leading to inaccurate labels and affecting overall performance.

Additionally, there are two major drawbacks in earlier CNN-based semi-supervised methods [28,21,22]: first, they rely on anchor points for region proposals that require manual tuning; second, they use post-processing techniques like non-maximal suppression (NMS) to limit the number of overlapping predictions. The emergence of transformer-based methods [29,30,31,32] makes the network end-to-end, without NMS, and anchor-free. This is possible due to their reliance on the attention mechanism and object queries. Consequently, there has been research mainly aimed at improving the quality of object queries and improving the attention mechanism [33]. For example, Deformable DETR [30], AdaMixer [31], and REGO [34] focus on advancing the attention mechanism. Meanwhile, models like DN-DETR [35], DAB-DETR [36], and DINO DETR [29] are dedicated to improving the quality of object queries, and H-DETR [37], Co-DETR [32], and FANet [38] aim to increase the quantity of object queries. However, this increase leads to redundant predictions, adversely affecting performance. To counter this, a dual-stage object query approach has been proposed, combining one-to-one and one-to-many matching strategies. Despite its effectiveness, this method still impacts performance [37]. Addressing these challenges, we employ SAM-DETR [39], a novel model designed to optimize the matching process between object queries and corresponding target features in a semi-supervised setting. This approach effectively reduces false positives and improves table detection performance in complex documents.

In this paper, we introduce a novel semi-supervised approach for table detection, employing the SAM-DETR [39] detector. Our main objective is to address the lack of robustness in the pseudo-label generation process. The training procedure consists of two modules: the teacher and the student. The teacher module consists of a pseudo-labeling framework, and the student uses these pseudo-labels along with a smaller set of labeled data to produce the final predictions. The pseudo-labeling process is optimized by iteratively refining the labels and the detector. The teacher module is updated by an exponential moving average (EMA) of the student to improve the pseudo-label generation and detection modules. Our approach differs from conventional pseudo-labeling methods by incorporating a SAM-DETR detector without object proposal generation and post-processing steps like NMS.
We enhance the ability to accurately match object queries with corresponding target features in complex documents, particularly excelling in the detection and handling of tables in semi-supervised settings. The intrinsic flexibility of this method enables consistent and reliable performance in various scenarios, including diverse table sizes and scales, within a semi-supervised learning context. Furthermore, this framework creates a reinforcing loop in which the teacher model consistently guides and improves the student model. Our evaluation results demonstrate that our semi-supervised table detection approach achieves superior results compared to both CNN-based and other transformer-based semi-supervised methods, without needing object proposals and post-processing steps such as NMS.

We summarize the primary contributions of this paper as follows:

• We introduce a novel semi-supervised approach for table detection. This approach eliminates the need for object proposals and post-processing techniques like non-maximal suppression (NMS).
• To the best of our knowledge, this is the first network that optimizes the matching process between object queries and corresponding target features in a semi-supervised setting.
• We conduct comprehensive evaluations on four diverse datasets: PubLayNet, ICDAR-19, TableBank, and PubTables. Our approach achieves results comparable to CNN-based and transformer-based semi-supervised methods without requiring object proposal processes and non-maximal suppression (NMS) in post-processing.

2 Related Work

Analyzing document images involves table detection as an integral task. This section summarizes techniques for detecting tables, especially those involving complex structures. Initial methods relied on rules or metadata [40,41,42,43], while more recent advances employ statistical and deep learning techniques [13,44,45,46], improving system adaptability and generalizability.

2.1 Table Detection Approaches

Rule-based Approaches. Itonori et al. [40] laid the groundwork for table detection. The central focus was identifying tables as distinct text blocks using predefined rules. Building upon this, methods like [42] improved the approach by integrating various techniques, including table detection based on layout [47] or extracting tables from HTML-formatted documents [48]. Although effective for specific document types, these rule-based methods [5,6,7,49,50] lacked the flexibility to be universally applicable.

Learning-based Approaches. Cesarini et al. [51] deviate from rule-based approaches by pioneering a supervised learning system for identifying table objects in document images. Their approach transforms a document image into an MXY tree model by classifying the blocks surrounded by vertical and horizontal lines as table objects. They further employed Hidden Markov Models [52,53] and an SVM classifier, along with conventional heuristics [54], for table detection. These techniques still needed additional data like ruling lines. In contrast, deep-learning-based methods, further categorized into object detection, semantic segmentation, and bottom-up approaches, have demonstrated superior accuracy and efficiency over traditional techniques.

Approaches Based on Semantic Segmentation. Treating table detection as a segmentation problem, methods like [55,56,57,58] generate pixel-level segmentation masks and then aggregate the masks to achieve the final table detection.
These methods utilize existing semantic segmentation networks and outperform traditional methods on various benchmark datasets [59,60,61,62,63,64,65]. Yang et al.'s [55] approach introduced a fully convolutional network (FCN) [66]; they used additional linguistic and visual features to enhance the segmentation results of page objects. He et al. [56] developed a multi-scale FCN that generates segmentation masks and their contours for table/text areas. They isolate the final table blocks after further refining the masks.

Bottom-Up Methods. These methods treat table detection as a graph-labeling task with graph nodes as page elements and edges as connections between them. Li et al. [67] used a conventional layout analysis to identify line areas; they then utilized two CNN-CRF networks to categorize these lines into four classes: text, figure, formula, and table. Later, they predicted a cluster for each pair of line areas. Holecek et al. [68] and Riba et al. [69] constructed a graph to establish the document layout and viewed text areas as nodes. They then used graph neural networks to classify nodes and edges. These methods require certain assumptions, such as the availability of text-line boxes as additional input.

Object Detection-Focused Techniques. Table detection in document images [70,71] can be treated as an object detection challenge, with tables regarded as natural objects. Hao et al. [72] and Yi et al. [73] utilized R-CNN for table detection, but their performance still depended on heuristic rules, similar to earlier methods. Subsequently, more advanced single-stage object detectors like RetinaNet [74] and YOLO [75], as well as two-stage detectors like Fast R-CNN [8], Faster R-CNN [9], Mask R-CNN [76], and Cascade Mask R-CNN [77], were employed for detecting various document elements, including figures and formulas [78,79,80,81,82,83,13,84]. Additional enhancement techniques, such as image transformations involving coloration and dilation, were applied by [79,82,85]. Siddiqui et al. [86] integrate deformable convolution and RoI pooling [87] into Faster R-CNN for improved handling of geometrical changes. Agarwal et al. [83] combined a composite network [88] with deformable convolution to enhance the efficiency of the two-stage Cascade R-CNN. These CNN-based object detectors include heuristic stages like proposal generation and post-processing steps like non-maximal suppression (NMS). Our semi-supervised model treats detection as a set prediction task, eliminating the need for anchor generation and post-processing stages like NMS, resulting in a more streamlined and efficient detection process.

2.2 Semi-Supervised Learning in Object Detection

Semi-supervised object detection can be classified into consistency-based methods [89,90] and pseudo-label generation methods [21,22,23,91,92,93,94,95]. Our work focuses on the latter. Earlier works [21,22] employ diverse data augmentation techniques to generate pseudo-labels for unlabeled data. Meanwhile, [23] introduces SelectiveNet for pseudo-label generation by superimposing a bounding box from an unlabeled image onto a labeled image to ensure localization consistency within the labeled dataset; however, this approach involves a complex detection process due to the image alteration. STAC [94] proposes to use strong augmentation for pseudo-label creation and weak augmentation for model training. Our method introduces a seamless end-to-end semi-supervised approach for table detection. Similar to other pseudo-label techniques [21,22,23,94,95], it incorporates a multi-level training strategy without the need for anchor generation and post-processing steps like non-maximal suppression (NMS).
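As context for the pseudo-label generation methods summarized above, the sketch below illustrates confidence-based pseudo-label filtering: predictions on an unlabeled image are kept only above a score threshold and then reused as supervision. The threshold value and tensor layout are illustrative assumptions, not values taken from this paper.

```python
from typing import Dict, List
import torch

# Minimal sketch of confidence-based pseudo-label filtering for semi-supervised
# detection: keep only high-scoring teacher predictions and treat them as
# ground truth for the student. The 0.7 threshold is an illustrative assumption.

SCORE_THRESHOLD = 0.7


def filter_pseudo_labels(predictions: List[Dict[str, torch.Tensor]],
                         threshold: float = SCORE_THRESHOLD) -> List[Dict[str, torch.Tensor]]:
    """Keep boxes/labels whose predicted score exceeds the threshold."""
    pseudo_labels = []
    for pred in predictions:                # one dict per unlabeled image
        keep = pred["scores"] > threshold
        pseudo_labels.append({
            "boxes": pred["boxes"][keep],   # (N, 4) boxes in the image frame
            "labels": pred["labels"][keep], # (N,) class indices (here: table)
        })
    return pseudo_labels
```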
Similar to other pseudo-label techniques [21,22,23,94,95], it incorporates a multi-level training strat- egy without the need for anchor generation and post-processing steps like Non-Maximal Suppression (NMS). 3 Methodology First, the paper reviews SAM-DETR, a recent approach for detecting objects using trans- formers, in Section 3.1. Then, Section 3.2 describes our semi-supervised approach for learn- ing with limited supervision and the generation of pseudo-labels for training. 3.1 Revisiting SAM-DETR DEtection TRansformer (DETR) [96] introduces an encoder-decoder network for object detection. The encoder network extracts features from the image to focus on key details. The decoder then processes these features with object queries, using self-attention and cross- attention mechanisms to identify and locate objects. However, DETR’s initial non-selective approach in processing images and object queries can lead to slower detection, especially in semi-supervised learning with limited data. By refining the attention mechanism and enhancing the quality and quantity of object queries, researchers aim to boost DETR’s efficiency, accuracy, and training speed [33]. SAM-DETR, as shown in Fig. 1 stands out for its innovative addition of a semantics aligner module and learnable reference boxes within the Transformer decoder part of DETR. Overall, SAM-DETR’s enhancements to the original DETR model focus on making the object detection process more efficient in terms of accuracy and speed. Semi-Supervised Table Detection with Semantic Aligned Matching Transformer 5 Fig. 1: Overview of SAM-DETR [39]. (a) the architecture of a single decoder layer in SAM- DETR, showing the role of learnable reference boxes in generating position embeddings for each object query. (b) the pipeline of the Semantics Aligner. The process includes the use of reference boxes for feature extraction via RoIAlign, the prediction of salient points in the targeted region, and the generation of new, semantically aligned query embeddings, which are further refined by incorporating attributes from previous queries. Image from [39]. Semantics Aligner. Semantic-Aligned Matching focuses on improving the interaction between object queries and encoded image features. Generally, the cross-attention module uses a dot-product method, which is effective in identifying similarities between two vectors. This method typically guides object queries to focus on regions of the image that are more similar. However, the original DETR model does not ensure that object queries and encoded image features are in the same embedding space, leading to less effective matching and requiring extensive training time. To address this, the Semantic-Aligned Matching approach introduces a mechanism to align object queries with encoded image features semantically. This alignment ensures that both are in the same embedding space, making the dot-product a more meaningful measure of similarity. As a result, object queries are more likely to focus on semantically similar regions, enhancing the efficiency and effectiveness of the object detection process. Multi-Head Attention and Salient Points. In DETR, multi-head attention is crucial for focusing on different image parts, enhancing scene understanding. SAM-DETR builds on this by identifying key points on objects, using ConvNet and MLP to predict these points for better alignment and detection. 
Features from these points are integrated with multi-head attention, allowing each head to concentrate on specific, significant object features and improving accuracy and localization.
Reweighted Queries. The Semantics Aligner aligns object queries with the encoded image features but, on its own, misses crucial information from the previous query embeddings. To address this, it uses a linear projection and a sigmoid function to create reweighting coefficients, which are applied to both the new and the positional query embeddings. This ensures that important features are emphasized and that information from previous queries is utilized, significantly enhancing detection.
3.2 Semi-Supervised SAM-DETR
We propose a semi-supervised learning approach that improves object detection through semantic alignment and utilizes limited labeled data for training, as shown in Fig. 2. The model leverages fully labeled and unlabeled data for object detection tasks in the semi-supervised setting.
Fig. 2: Illustration of our Semi-Supervised Table Detection Framework. This dual-component system involves a Student module that learns from a mix of labeled data and strongly augmented unlabeled images, and a Teacher module that refines its understanding using weakly augmented unlabeled images. The Student module updates the Teacher module using Exponential Moving-Average (EMA) during training. Within this setup, the Semantics Aligner (SA) is key in the decoder of the student-teacher framework, fine-tuning the relationship between object queries and the encoded image features, ensuring a more effective and accurate detection of tables in various documents.
The framework consists of two key modules: the student and teacher modules. The student module processes both labeled and unlabeled images. Strong augmentation is applied to unlabeled data, while strong and weak augmentations are applied to labeled data. The teacher module operates on unlabeled images with weak augmentations and plays a crucial role in generating pseudo-labels for the unlabeled data. These pseudo-labels are then employed for supervised training by the student module. Weak augmentation is applied to the unlabeled data for the teacher module to produce more accurate pseudo-labels, whereas the student module, designed for more challenging learning, uses strong augmentation for unlabeled data. At the start of training, the teacher and student models are randomly initialized. As training progresses, the teacher model is continuously updated by the student model using an exponential moving average (EMA) strategy. For the student module, the student's queries Q_s and features F_s are fed into the decoder; similarly, in the teacher module, the teacher's queries Q_t and features F_t go through the same process with the teacher's decoder:
\hat{o}_s = \mathrm{Decoder}_s(Q_s, F_s) \quad (1)
\hat{o}_t = \mathrm{Decoder}_t(Q_t, F_t) \quad (2)
In the decoder, the Semantics Aligner processes the encoded image features F_s of the student and F_t of the teacher, both initially 1D sequences of dimension HW × d. The Aligner converts these features into 2D maps of dimension H × W × d, using the reference boxes of the object queries, denoted R^{box}_s for the student and R^{box}_t for the teacher. After this transformation, the Aligner employs RoIAlign to extract region-level features, represented as F^R_s for the student and F^R_t for the teacher, from the encoded image features:
F^R_s = \mathrm{RoIAlign}(F_s, R^{box}_s), \quad F^R_t = \mathrm{RoIAlign}(F_t, R^{box}_t) \quad (3)
The final step generates new object queries Q^{new}_s and Q^{new}_t and their position embeddings Q^{new}_{s,pos} and Q^{new}_{t,pos} through resampling based on F^R_s and F^R_t as follows:
Q^{new}_s, \, Q^{new}_{s,pos} = \mathrm{Resample}(F^R_s, R^{box}_s, Q_s) \quad (4)
Q^{new}_t, \, Q^{new}_{t,pos} = \mathrm{Resample}(F^R_t, R^{box}_t, Q_t) \quad (5)
Next, we extract features via a ConvNet and an MLP to identify salient points within these regions. These points are then used to create the new object query embeddings Q^{new}_s and Q^{new}_t, ensuring they stay within the reference boxes for accuracy. Finally, the position embeddings Q^{new}_{s,pos} and Q^{new}_{t,pos} derived from these points are concatenated and fed into a multi-head cross-attention module for further processing:
R^{sp}_s = \mathrm{MLP}(\mathrm{ConvNet}(F^R_s)) \quad (6)
Q^{new}_s = \mathrm{Concat}\left(\{ F^R_s[\ldots, x, y, \ldots] \;\text{for}\; x, y \in R^{sp}_s \}\right) \quad (7)
Q^{new}_{s,pos} = \mathrm{Concat}(\mathrm{Sin}(R^{box}_s, R^{sp}_s)) \quad (8)
R^{sp}_t = \mathrm{MLP}(\mathrm{ConvNet}(F^R_t)) \quad (9)
Q^{new}_t = \mathrm{Concat}\left(\{ F^R_t[\ldots, x, y, \ldots] \;\text{for}\; x, y \in R^{sp}_t \}\right) \quad (10)
Q^{new}_{t,pos} = \mathrm{Concat}(\mathrm{Sin}(R^{box}_t, R^{sp}_t)) \quad (11)
The Semantics Aligner thus generates new object queries aligned with the image features and incorporates the previous query embeddings by generating reweighting coefficients. These coefficients, created through linear projections and sigmoid functions, are applied to the new query and position embeddings to emphasize key features, ensuring that the valuable information from previous queries is effectively utilized:
Q^{new}_s = Q^{new}_s \otimes \sigma(Q_s W^{RW1}_s), \quad Q^{new}_t = Q^{new}_t \otimes \sigma(Q_t W^{RW1}_t) \quad (12)
Q^{new}_{s,pos} = Q^{new}_{s,pos} \otimes \sigma(Q_s W^{RW2}_s), \quad Q^{new}_{t,pos} = Q^{new}_{t,pos} \otimes \sigma(Q_t W^{RW2}_t) \quad (13)
Here, W^{RW1} and W^{RW2} denote linear projections, σ(·) is the sigmoid function, and ⊗ represents element-wise multiplication; the subscripts s and t refer to the student and teacher modules, respectively. Combining the semantic alignment capabilities with the semi-supervised approach allows the model to effectively utilize labeled and unlabeled data, leading to improved object detection performance. This approach is particularly useful when labeled data is limited, as it maximizes the information extracted from available resources.
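To make the alignment and re-weighting steps of Eqs. (3)-(13) easier to follow, the listing below gives a rough PyTorch-style sketch of one possible realization of the resampling operation. It is not the authors' implementation: the module sizes, the averaging of the sampled point features, and the simplified handling of the position embeddings of Eqs. (8)/(11) are assumptions made purely for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.ops import roi_align

class SemanticsAlignerSketch(nn.Module):
    """Rough sketch of the resampling in Eqs. (3)-(13) (illustrative shapes only)."""

    def __init__(self, d=256, k=8, roi_size=7):
        super().__init__()
        self.k, self.roi_size = k, roi_size
        # ConvNet + MLP that predicts k salient (x, y) points inside each box (Eqs. 6/9)
        self.point_head = nn.Sequential(
            nn.Conv2d(d, d, 3, padding=1), nn.ReLU(), nn.Flatten(),
            nn.Linear(d * roi_size * roi_size, 2 * k), nn.Sigmoid())
        self.rw_query = nn.Linear(d, d)   # re-weighting projection for new queries (Eq. 12)
        self.rw_pos = nn.Linear(d, d)     # re-weighting projection for position embeddings (Eq. 13)

    def forward(self, feat_2d, ref_boxes, old_queries):
        # feat_2d:     (B, d, H, W) encoder features reshaped from the HW x d sequence
        # ref_boxes:   list of B tensors of shape (N, 4) with (x1, y1, x2, y2) reference boxes
        # old_queries: (B, N, d) query embeddings from the previous decoder layer
        B, d, _, _ = feat_2d.shape
        N = old_queries.shape[1]
        # Eq. (3): region-level features for every query via RoIAlign
        roi_feat = roi_align(feat_2d, ref_boxes, output_size=self.roi_size)   # (B*N, d, r, r)
        # Eqs. (6)/(9): predict k salient points, normalized to [0, 1] inside the box
        pts = self.point_head(roi_feat).view(B * N, self.k, 1, 2)
        # Eqs. (7)/(10): sample features at the salient points (grid_sample expects [-1, 1])
        sampled = F.grid_sample(roi_feat, pts * 2 - 1, align_corners=False)   # (B*N, d, k, 1)
        point_feat = sampled.squeeze(-1).permute(0, 2, 1)                      # (B*N, k, d)
        # Average-pool the k point features back to d dims (the paper instead feeds
        # them to separate attention heads); this is a simplification for the sketch.
        new_q = point_feat.mean(1).view(B, N, d)
        # Stand-in for the sinusoidal position embeddings of Eqs. (8)/(11).
        new_q_pos = new_q
        # Eqs. (12)-(13): re-weight with coefficients derived from the old queries
        new_q = new_q * torch.sigmoid(self.rw_query(old_queries))
        new_q_pos = new_q_pos * torch.sigmoid(self.rw_pos(old_queries))
        return new_q, new_q_pos
```

In the full model, the same module is applied to both the student and teacher branches with their own features and reference boxes, and the k point features are distributed across the heads of the decoder's multi-head cross-attention rather than averaged.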
4 Pseudo-Label Filtering Framework
In our semi-supervised learning framework, we employ a Top-K pseudo-label filtering technique to augment the training process, especially when labeled data is limited. This approach is instrumental in making the most of the unlabeled data. The key strategy is pseudo-labeling, where the model generates labels for the unlabeled data based on its current level of understanding. However, diverging from the traditional method of relying on the single most confident prediction, our top-k approach considers each data point's top 'k' predictions. For instance, if 'k' is set to 3, the model evaluates and includes the three most probable labels for each piece of unlabeled data in the training process. The benefits of this top-k strategy are twofold. First, it broadens the model's exposure to more challenging 'hard samples': data points that are typically difficult to classify and might be overlooked by standard top-1 pseudo-labeling methods. Including a wider range of examples substantially improves the model's learning. Second, our approach is effective in cases involving objects or data points with similar features. By acknowledging and incorporating ambiguity through multiple potential labels, the model is better equipped to handle complex classification scenarios where clear-cut distinctions between categories are not always evident. Implementing top-k pseudo-label filtering in our semi-supervised learning setting is therefore a pivotal step towards enhancing the model's accuracy and robustness, ensuring a more comprehensive learning process.
The teacher model generates pseudo boxes for unlabeled images, and the student model is trained on labeled images with ground-truth annotations and on unlabeled images with the pseudo boxes treated as ground truth. The overall loss is therefore defined as the weighted sum of the supervised and unsupervised losses:
L = L_s + \alpha L_u \quad (14)
where L_s is the supervised loss on labeled images, L_u is the unsupervised loss on unlabeled images, and \alpha = 0.25 controls the contribution of the unsupervised loss. Both losses are normalized by the respective number of images in the training batch:
L_s = \frac{1}{N_l} \sum_{i=1}^{N_l} \left( L_{cls}(I^l_i) + L_{reg}(I^l_i) \right) \quad (15)
L_u = \frac{1}{N_u} \sum_{i=1}^{N_u} \left( L_{cls}(I^u_i) + L_{reg}(I^u_i) \right) \quad (16)
where I^l_i denotes the i-th labeled image, I^u_i the i-th unlabeled image, L_{cls} the classification loss, L_{reg} the box regression loss, N_l the number of labeled images, and N_u the number of unlabeled images. Overall, our semi-supervised learning setting enhances the model's accuracy and robustness, ensuring a more comprehensive learning process.
5 Experimental Setup
5.1 Datasets
TableBank: TableBank [64], a prominent dataset in the field of document analysis, ranks as the second-largest collection for table recognition tasks. The dataset comprises 417,000 document images annotated by crawling the arXiv database. It categorizes tables into three splits: LaTeX images (253,817), Word images (163,417), and a combined set (417,234). TableBank also provides data for table structure recognition; in our study, we utilize only the table detection component of the dataset.
PubLayNet: PubLayNet [60], a sizable dataset in the public domain, encompasses 335,703 images for training, 11,240 for validation, and 11,405 for testing. It features annotations such as polygonal segmentation and bounding boxes for figures, lists, titles, tables, and text in images sourced from research papers and articles. The dataset's evaluation employs the COCO analytics method [97]. We selectively used 102,514 images from the 86,460 table annotations in PubLayNet for our experiments.
PubTables: PubTables-1M [65], specifically tailored for table detection in scientific documents, is an extensive dataset featuring nearly one million tables. It stands out for its comprehensive annotations, including precise location information, which is crucial for accurately detecting tables within diverse documents. Its large scale and meticulous annotations make it a significant resource for developing and refining table detection algorithms.
ICDAR-19: The ICDAR 2019 competition on Table Detection and Recognition (cTDaR) [59] introduced two novel datasets (modern and historical) for the table detection task (TRACK A). To facilitate direct comparisons with previous methods [82], we provide results at Intersection over Union (IoU) thresholds of 0.8 and 0.9.
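Before moving to the evaluation criteria, the following minimal sketch illustrates the Top-K pseudo-label selection of Section 4 and the loss weighting of Eqs. (14)-(16). The function names, tensor shapes, and the per-image loss bookkeeping are illustrative assumptions rather than the exact training code.

```python
import torch

def topk_pseudo_labels(teacher_logits, teacher_boxes, k=3, score_thr=0.7):
    """Keep the k most confident teacher predictions of an image as pseudo-labels.

    teacher_logits: (N, C) class logits for N object queries (assumed shape).
    teacher_boxes:  (N, 4) corresponding box predictions.
    """
    scores, labels = teacher_logits.softmax(-1).max(-1)        # per-query confidence
    keep = scores >= score_thr                                 # confidence threshold (0.7 in Sec. 5.3)
    scores, labels, boxes = scores[keep], labels[keep], teacher_boxes[keep]
    top = scores.topk(min(k, scores.numel())).indices          # Top-K instead of Top-1
    return boxes[top], labels[top]

def total_loss(sup_losses, unsup_losses, alpha=0.25):
    """Eqs. (14)-(16): weighted sum of the normalized supervised/unsupervised losses."""
    l_s = torch.stack(sup_losses).mean()      # (1/N_l) * sum of (cls + reg) over labeled images
    l_u = torch.stack(unsup_losses).mean()    # (1/N_u) * sum of (cls + reg) over unlabeled images
    return l_s + alpha * l_u
```

In training, the teacher's outputs on the weakly augmented unlabeled images would be filtered this way, and the retained boxes would serve as ground truth for the student's strongly augmented views.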
Semi-Supervised Table Detection with Semantic Aligned Matching Transformer 9 5.2 Evaluation Criteria We assess the effectiveness of our semi-supervised table detection method through specific evaluation metrics: Precision, Recall, and F1-score. Precision [98] is the ratio of correctly predicted positive observations (True Positives) to the total predicted positive observations (True Positives + False Positives). Recall [98] measures the proportion of actual positives correctly identified (True Positives) out of the total actual positives (True Positives + False Negatives). The F1-score [98] is the harmonic mean of Precision and Recall. Moreover, We evaluate our approach using AP@50 and AP@75, which assess precision at 50% and 75% IoU thresholds, reflecting moderate and high localization accuracy respectively, alongside average recall, measuring our model’s capacity to detect all relevant instances 5.3 Implementation Details We use the ResNet-50 backbone on 8 Nvidia RTXA6000 GPUs, initially trained on the ImageNet dataset, to evaluate the effectiveness of our semi-supervised method. We train on a diverse range of datasets, including PubLayNet, ICDAR-19, PubTables, and all subsets of the TableBank dataset, taking randomly 10%, 30%, and 50% labeled data with the remaining as unlabeled. We conduct pseudo-labeling with a 0.7 threshold and optimize using AdamW. Our training spans 120 epochs, reducing the learning rate by 10% after the 110th epoch, and we typically set our batch size to 16. We adopt DETR’s data augmentation strategy, which involves horizontal flipping, random cropping, and resizing. Additionally, we apply strong augmentation techniques such as horizontal flips, resizing, patch removal, cropping, conversion to grayscale, and Gaussian blur. For weak augmentation, we focus mainly on horizontal flipping. Setting the number of queries (N) in the decoder to 30 gives the best results. Our resizing approach ensures the image’s longest side is at most 1333 pixels and the shortest side is at least 480 pixels. These strategic adjustments and augmentations boost the model’s performance and efficiency. Table 1: Performance of our semi-supervised transformer-based approach on different splits of TableBank dataset with varying percentage label data. Table 2: Recall results comparison of our semi-supervised approach with pre- vious semi-supervised table detection ap- proach. Here Def-semi refers to [99]. Dataset TableBank-word TableBank-latex TableBank-both Labels mAP AP50 AP75 ARL 97.4 95.3 10% 98.2 95.8 30% 98.3 95.8 50% 92.9 94.1 94.3 93.9 94.5 94.8 10% 30% 50% 10% 30% 50% 91.2 93.7 94.8 92.7 93.8 94.2 97.6 97.3 97.9 95.8 95.2 96.1 96.4 96.3 97.0 94.6 95.2 95.8 95.3 97.7 98.1 93.6 93.6 95.8 Dataset Labels Def-semi Our TableBank-word TableBank-latex TableBank-both 10% 30% 50% 10% 30% 50% 10% 30% 50% 87.1 92.1 94.5 74.3 89.0 91.4 90.1 91.5 95.3 97.4 98.2 98.3 95.3 97.7 98.1 93.6 93.6 95.8 6 Results and Discussion 6.1 TableBank In our study, we evaluate our approach using the TableBank dataset, examining perfor- mance across various splits with different proportions of labeled data: 10%, 30%, and 50%. 10 T. Shehzadi et al. Table 1 shows we achieve mAP of 92.9%, 91.2%, and 92.7% by using 10% labels of Table- Bank word, latex, and both splits, respectively. Unlike previous semi-supervised table de- tection method [99], which employs deformable DETR [30] with a focus on improving the attention mechanism to improve the performance. 
Our semi-supervised approach optimizes the matching process between object queries and image features. As a result, our semi- supervised strategy achieves significantly higher recall rates than earlier semi-supervised methods, as shown in Tables 2. This improvement shows the effectiveness of semi-supervised table detection, particularly when dealing with limited labeled data. Table 3 presents a Table 3: Comparative analysis of our semi-supervised approach with previous supervised and semi-supervised methods on the TableBank-Both dataset using 10%, 30%, and 50% labeled data. Here, the results are reported on mAP. Method Approach Detector 10% 30% 50% Ren et al. [9] Zhu et al. [30] STAC [94] Unbiased Teacher [100] Humble Teacher [101] Soft Teacher [28] Shehzadi et al. [99] Our supervised supervised semi-supervised semi-supervised semi-supervised semi-supervised semi-supervised Deformable DETR semi-supervised Faster R-CNN Deformable DETR Faster R-CNN Faster R-CNN Faster R-CNN Faster R-CNN Sam-DETR 80.1 80.8 82.4 83.9 83.4 83.6 84.2 92.7 80.6 82.6 83.8 86.4 86.2 86.8 86.8 93.8 83.3 86.9 87.1 88.5 87.9 89.6 91.8 94.2 comparative analysis of our semi-supervised approach against prior supervised and semi- supervised methods using the TableBank-both dataset, which includes splits with 10%, 30%, and 50% labeled data. The outcomes demonstrate that our approach outperforms the earlier methods across these varying levels of labeled data. This is a significant find- ing, highlighting the effectiveness of our semi-supervised strategy in scenarios with limited labeled data availability. 6.2 PubLayNet We also evaluate the performance of our transformer-based semi-supervised learning model on the PubLayNet dataset, experimenting with different ratios of labeled to unlabeled data (10%, 30%, and 50%). This study aims at understanding the model’s performance in scenarios with limited labeled data, a common challenge in real-world applications. Table 4 shows we achieve mAP of 89.9%, 90.9%, and 93.2% by using 10%, 30%, and 50% labels of PubLayNet dataset. We shows the visual analysis of our semi-supervised approach in Fig. 3. Our semi-supervised approach also provides higher recall than the previous semi-supervised approach, as observed in Table 5. Table 4: Performance of our semi-supervised transformer-based on PubLayNet approach dataset with varying percentage label data. Dataset Label-percent mAP AP50 AP75 ARL 96.6 96.9 97.3 10% 30% 50% 89.9 90.9 93.2 94.3 94.9 95.0 97.1 97.4 97.7 PubLayNet Table 5: Recall results comparison of our approach with previous semi- supervised table detection approach. Method 10% 30% 50% Shehzadi et al. [99] 91.0 96.0 93.2 96.6 96.9 97.3 Our Semi-Supervised Table Detection with Semantic Aligned Matching Transformer 11 Fig. 3: Visual Analysis of our semi-supervised approach. Here, blue represents ground truth and red denotes our predictions results using 10% labels on PubLayNet datatset. We also compare our approach against traditional deep learning methods, both super- vised and semi-supervised, to highlight advancements. A key focus is the model’s perfor- mance with only 10% labeled data, where we observe that our approach achieves the highest mAP score of 89.9, as detailed in Table 6. This shows the effectiveness of our method in leveraging minimal labeled data, demonstrating the significant potential of our approach for practical applications in table detection and recognition. 
Table 6: Comparative analysis of our semi-supervised approach with previous supervised and semi-supervised methods on PubLayNet table class dataset using 10%, 30%, and 50% labeled data. Here, the results are reported on mAP. Method Approach Detector 10% 30% 50% Ren et al. [9] Zhu et al. [30] Soft Teacher [28] Shehzadi et al. [99] Our Faster R-CNN 83.4 supervised Deformable DETR 83.9 supervised semi-supervised 88.3 semi-supervised Deformable DETR 88.4 semi-supervised 87.9 86.6 88.1 86.8 92.5 89.5 92.8 90.3 89.9 90.9 93.2 Faster R-CNN SAM-DETR 6.3 PubTables In this subsection, we detail our experimental results for the PubTables dataset in a semi- supervised setting using different percentages of labeled data. Our analysis includes a com- parison between our transformer-based semi-supervised method and earlier CNN-based and transformer-based supervised approaches. As shown in Table 7, our semi-supervised 12 T. Shehzadi et al. approach achieves a 92.3 mAP score even with only 10% of the data labeled, which high- lights the effectiveness of our method in utilizing a smaller amount of labeled data to attain high accuracy. Table 7: Performance of our semi-supervised transformer- based approach on the PubTables dataset with varying lev- els of labeled data (10%, 30%, 50%). Results show high accuracy with even a minimal amount of labeled data. Dataset Label mAP AP50 AP75 PubTables 10% 30% 50% 92.3 93.5 93.8 93.7 94.8 94.8 93.8 93.7 94.8 ARL 87.8 88.1 88.3 Table 8 presents a comparison between our semi-supervised approach and previous su- pervised methods. While a direct comparison isn’t feasible due to different percentages of label data for training, our results are notably comparable. For instance, a Faster R-CNN model trained on fully labeled data achieved an mAP of 82.5, whereas our semi-supervised approach reached an mAP of 92.3 using only 10% labeled data. Table 8: Comparative Analysis of Semi-Supervised and Supervised Methods. It clearly shows that our semi-supervised model achieves comparable results even with limited data. Method Approach Detector mAP AP50 AP75 Smock et al. [65] Smock et al. [65] Our supervised supervised Faster R-CNN 82.5 96.6 semi-supervised (10%) SAM-DETR 92.3 DETR 98.5 995 93.7 92.7 98.8 93.8 Comparisons with Previous Table Detection Approaches. In Table 9, we present a comprehensive comparison of our semi-supervised table detection approach against exist- ing supervised and semi-supervised methods. Our approach facilitates learning with signif- Table 9: Comparative analysis of our semi-supervised approach with previous supervised and semi-supervised methods. Here, the results are reported on mAP. Method Approach Labels TableBank PubLayNet PubTables CDeC-Net [83] CasTabDetectoRS [45] Faster R-CNN [60] VSR [102] Smock et al. [65] Shehzadi et al. [99] Our supervised supervised supervised supervised supervised semi-supervised semi-supervised 100% 100% 100% 100% 100% 10% 10% 96.5 95.3 - - - 84.2 92.7 97.8 - 90 95.69 - 88.4 89.9 - - 96.6 - 92.3 Semi-Supervised Table Detection with Semantic Aligned Matching Transformer 13 icantly fewer labeled instances. Our semi-supervised method performs well despite limited labeled data, achieving high mAP scores on datasets and outperforming previous semi- supervised models. It shows improved performance in scenarios with scarce labeled data, offering comparable results to fully supervised methods while using only 10% of their labeled data. 
6.4 ICDAR-19 In our analysis, we additionally conduct an evaluation of the ICDAR-19 TrackA table detection dataset across different Intersection over Union (IoU) thresholds using 50% labeled data. Furthermore, we compare our semi-supervised approach with earlier supervised and semi-supervised strategies, as depicted in Table 10. The results, utilizing 50% labeled data, show that our transformer-based semi-supervised framework surpasses prior semi-supervised methods, demonstrating superior accuracy. Table 10: Performance comparison between the proposed semi-supervised approach and previous state-of-the-art results on the dataset of ICDAR 19 Track A (Modern). Method Approach IoU=0.8 IoU=0.9 Recall Precision F1-Score Recall Precision F1-Score 94.0 TableRadar [59] 93.0 NLPR-PAL [59] 86.0 Lenovo Ocean [59] 93.4 CDeC-Net [83] HybridTabNet [46] 93.3 Shehzadi et al. [99] semi-supervised (50%) 71.1 semi-supervised (50%) 73.5 supervised supervised supervised supervised supervised Our 95.0 93.0 88.0 95.3 92.0 82.3 83.8 94.5 93.0 87.0 94.4 92.8 76.3 77.2 89.0 86.0 81.0 90.4 90.5 66.3 68.4 90.0 86.0 82.0 92.2 89.5 76.8 77.8 89.5 86.0 81.5 91.3 90.2 71.2 72.1 7 Ablation Study In the ablation study, we evaluate the model’s performance using only 30% of the labeled data from the PubTables dataset. The study observes the effect of varying the pseudo- labeling confidence threshold, the number of filtered pseudo-labels, and the number of learnable queries, offering insights into their roles in enhancing model performance in doc- ument analysis tasks. Pseudo-Labeling confidence threshold The choice of a confidence threshold in pseudo- labeling influences the performance of our semi-supervised approach, as observed in Ta- ble 11. A low threshold leads to the filtering of a large number of pseudo-labels. However, these include incorrect pseudo-labels, introducing noise into the training process, and poten- tially degrading the model’s performance. On the other hand, a high threshold ensures the generation of high-quality pseudo-labels, reducing the risk of noise. However, this results in fewer pseudo-labels fed into the student network, thus not fully leveraging the advantages of semi-supervised learning. The balance between generating enough pseudo-labels and ensur- ing that these pseud-labels are accurate enough to be useful is crucial in optimizing model performance. Influence of Learnable queries Quantity We examine the effect of both increasing and decreasing the number of input queries on the performance of our semi-supervised approach, as highlighted in Table 12. While increasing the queries can improve the model’s ability to 14 T. Shehzadi et al. Table 11: Performance comparison using dif- ferent Pseudo-labeling confidence threshold values. The best threshold values are shown in bold. Table 12: Performance comparison using different numbers of learnable queries to the decoder input. Here, the best performance results are shown in bold. Threshold 0.5 0.6 0.7 0.8 0.9 AP 89.8 90.4 93.5 90.2 88.6 AP50 91.3 92.1 94.8 91.7 89.3 AP75 90.4 91.5 93.7 90.2 89.1 Queries 10 30 60 100 300 AP 88.5 93.5 91.8 88.6 82.1 AP50 87.8 94.8 92.8 90.2 85.3 AP75 86.8 93.7 91.5 87.3 84.1 detect and focus on a wide range of features, enhancing accuracy in complex detection tasks, it also leads to more overlapping predictions, necessitating the use of Non-Maximum Suppression (NMS). Conversely, decreasing the number of queries reduces computational complexity but limits the model’s detection capabilities. 
We find that our model achieves the best performance with 30 queries. Deviating from this optimal count, whether by in- creasing or decreasing the number of queries, significantly impacts the model’s accuracy and efficiency. Table 13: Performance evaluation using top-k pseudo-labels. The best results are in bold. Top-k 1 2 3 4 AP 90.5 91.7 93.5 92.8 AP50 93.8 94.4 94.8 94.2 AP75 91.2 91.9 93.7 92.5 Influence of quantity of Pseudo-label Filtering In Table 13, we observe the impact of varying quantities of filtered pseudo-labels generated by the teacher network on model performance. While including more pseudo-labels enhances model performance, it is also vital to consider their quality. Selecting more pseudo-labels, such as the top-4, inherently introduces some lower-quality labels into the training process. Including less reliable pseudo- labels can adversely affect the model’s performance, highlighting the need for a balanced approach in pseudo-label selection that optimizes quantity and quality to achieve the best model performance. 8 Conclusion Our research addresses the challenge of accurately and efficiently detecting document ob- jects, such as tables and text, in semi-supervised settings. This approach utilizes minimal labeled data and employs student-teacher networks that mutually update during training. Previous transformer-based research focused on improving attention or increasing the num- ber of object queries, which impacts training time and performance. We eliminate the need for NMS and focus on matching between object queries and image features. Our novel ap- proach using SAM-DETR in a semi-supervised setting helps align object queries with target features, significantly reducing false positives and improving the detection of document ob- jects in complex layouts. In short, our semi-supervised method enhances the accuracy of document analysis, particularly in scenarios with limited labeled data. Semi-Supervised Table Detection with Semantic Aligned Matching Transformer 15 References 1. T. M. Breuel and K. Tombre, Document Analysis Systems: Theory and Practice. World Scientific Publishing, 2017. 2. R. Kasturi, L. O’Gorman, and V. Govindaraju, “Document image analysis: A primer,” Sad- hana - Academy Proceedings in Engineering Sciences, vol. 27, pp. 3–22, 02 2002. 3. Z. Zhao, M. Jiang, S. Guo, Z. Wang, F. Chao, and K. C. Tan, “Improving deep learning based optical character recognition via neural architecture search,” in 2020 IEEE Congress on Evolutionary Computation (CEC), 2020, pp. 1–7. 4. D. Van Strien, K. Beelen, M. C. Ardanuy, K. Hosseini, B. McGillivray, and G. Colavizza, “Assessing the impact of ocr quality on downstream nlp tasks,” 2020. 5. B. Co¨uasnon and A. Lemaitre, “Recognition of tables and forms,” in Handbook of Document Image Processing and Recognition, 2014. 6. R. Zanibbi, D. Blostein, and J. R. Cordy, “A survey of table recognition,” Document Analysis and Recognition, vol. 7, no. 1, pp. 1–16, 2004. 7. A. M. Jorge, L. Torgo et al., “Design of an end-to-end method to extract information from tables,” International Journal of Document Analysis and Recognition (IJDAR), vol. 8, no. 2, pp. 144–171, 2006. 8. R. B. Girshick, “Fast R-CNN,” CoRR, vol. abs/1504.08083, 2015. [Online]. Available: http://arxiv.org/abs/1504.08083 9. S. Ren, K. He, R. B. Girshick, and J. Sun, “Faster R-CNN: towards real-time object [Online]. detection with region proposal networks,” CoRR, vol. abs/1506.01497, 2015. Available: http://arxiv.org/abs/1506.01497 10. J. Redmon and A. 
Farhadi, “YOLO9000: better, faster, stronger,” CoRR, vol. abs/1612.08242, 2016. [Online]. Available: http://arxiv.org/abs/1612.08242 11. K. He, G. Gkioxari, P. Doll´ar, and R. Girshick, “Mask r-cnn,” in 2017 IEEE International Conference on Computer Vision (ICCV), 2017, pp. 2980–2988. 12. T. Orosz, R. V´agi, G. M. Cs´anyi, D. Nagy, I. ¨Uveges, J. P. Vad´asz, and A. Megyeri, “Evaluating human versus machine learning performance in a legaltech problem,” Applied Sciences, vol. 12, no. 1, 2022. [Online]. Available: https://www.mdpi.com/2076-3417/12/1/ 297 13. S. Schreiber, S. Agne, I. Wolf, A. Dengel, and S. Ahmed, “Deepdesrt: Deep learning for detection and structure recognition of tables in document images,” in 2017 14th IAPR In- ternational Conference on Document Analysis and Recognition (ICDAR), vol. 01, 2017, pp. 1162–1167. 14. M. Minouei, K. A. Hashmi, M. R. Soheili, M. Z. Afzal, and D. Stricker, “Continual learning for table detection in document images,” Applied Sciences, vol. 12, no. 18, 2022. [Online]. Available: https://www.mdpi.com/2076-3417/12/18/8969 15. K. A. Hashmi, D. Stricker, M. Liwicki, M. N. Afzal, and M. Z. Afzal, “Guided table structure recognition through anchor optimization,” CoRR, vol. abs/2104.10538, 2021. [Online]. Available: https://arxiv.org/abs/2104.10538 16. K. A. Hashmi, A. Pagani, M. Liwicki, D. Stricker, and M. Z. Afzal, “Cascade formula detection in scanned [Online]. Available: composite backbone images,” Applied Sciences, vol. 11, no. 16, 2021. network with deformable document https://www.mdpi.com/2076-3417/11/16/7610 for 17. S. Sinha, K. A. Hashmi, A. Pagani, M. Liwicki, D. Stricker, and M. Z. Afzal, “Rethinking learnable proposals for graphical object detection in scanned document images,” Applied Sciences, vol. 12, no. 20, 2022. [Online]. Available: https://www.mdpi.com/2076-3417/12/ 20/10578 18. S. Naik, K. A. Hashmi, A. Pagani, M. Liwicki, D. Stricker, and M. Z. Afzal, “Investigating attention mechanism for page object detection in document images,” Applied Sciences, vol. 12, no. 15, 2022. [Online]. Available: https://www.mdpi.com/2076-3417/12/15/7486 19. T. Fredriksson, D. Issa Mattos, J. Bosch, and H. Olsson, Data Labeling: An Empirical Inves- tigation into Industrial Challenges and Mitigation Strategies, 11 2020, pp. 202–216. 20. J. E. Van Engelen and H. H. Hoos, “A survey on semi-supervised learning,” Machine learning, vol. 109, no. 2, pp. 373–440, 2020. 16 T. Shehzadi et al. 21. I. Radosavovic, P. Doll´ar, R. B. Girshick, G. Gkioxari, and K. He, “Data distillation: Towards omni-supervised learning,” CoRR, vol. abs/1712.04440, 2017. [Online]. Available: http://arxiv.org/abs/1712.04440 22. B. Zoph, G. Ghiasi, T.-Y. Lin, Y. Cui, H. Liu, E. D. Cubuk, and Q. Le, “Rethinking pre-training and self-training,” in Advances in Neural Information Processing Systems, H. Larochelle, M. Ranzato, R. Hadsell, M. Balcan, and H. Lin, Eds., vol. 33. Curran Associates, [Online]. Available: https: //proceedings.neurips.cc/paper/2020/file/27e9661e033a73a6ad8cefcde965c54d-Paper.pdf 23. Y. Li, D. Huang, D. Qin, L. Wang, and B. Gong, “Improving object detection with selective self-supervised self-training,” CoRR, vol. abs/2007.09162, 2020. [Online]. Available: https://arxiv.org/abs/2007.09162 Inc., 2020, pp. 3833–3845. 24. K. Wang, X. Yan, D. Zhang, L. Zhang, and L. Lin, “Towards human-machine cooperation: Self-supervised sample mining for object detection,” CoRR, vol. abs/1803.09867, 2018. [Online]. Available: http://arxiv.org/abs/1803.09867 25. P. Tang, C. Ramaiah, R. 
Xu, and C. Xiong, “Proposal learning for semi-supervised object [Online]. Available: https://arxiv.org/abs/ detection,” CoRR, vol. abs/2001.05086, 2020. 2001.05086 26. P. K. Rhee, E. Erdenee, S. D. Kyun, M. U. Ahmed, and S. Jin, “Active and semi-supervised learning for object detection with imperfect data,” Cognitive Systems Research, vol. 45, pp. 109–123, 2017. [Online]. Available: https://www.sciencedirect.com/science/article/pii/ S1389041716301127 27. Q. Xie, Z. Dai, E. H. Hovy, M. Luong, and Q. V. Le, “Unsupervised data augmentation,” CoRR, vol. abs/1904.12848, 2019. [Online]. Available: http://arxiv.org/abs/1904.12848 28. M. Xu, Z. Zhang, H. Hu, J. Wang, L. Wang, F. Wei, X. Bai, and Z. Liu, “End-to-end semi-supervised object detection with soft teacher,” CoRR, vol. abs/2106.09018, 2021. [Online]. Available: https://arxiv.org/abs/2106.09018 29. H. Zhang, F. Li, S. Liu, L. Zhang, H. Su, J. Zhu, L. M. Ni, and H.-Y. Shum, “Dino: Detr with improved denoising anchor boxes for end-to-end object detection,” 2022. [Online]. Available: https://arxiv.org/abs/2203.03605 30. X. Zhu, W. Su, L. Lu, B. Li, X. Wang, and J. Dai, “Deformable DETR: deformable transformers for end-to-end object detection,” CoRR, vol. abs/2010.04159, 2020. [Online]. Available: https://arxiv.org/abs/2010.04159 31. Z. Gao, L. Wang, B. Han, and S. Guo, “Adamixer: A fast-converging query-based object detector,” 2022. [Online]. Available: https://arxiv.org/abs/2203.16507 32. Z. Zong, G. Song, and Y. Liu, “Detrs with collaborative hybrid assignments training,” in Proceedings of the IEEE/CVF international conference on computer vision, 2023, pp. 6748– 6758. 33. T. Shehzadi, K. A. Hashmi, D. Stricker, and M. Z. Afzal, “Object detection with transformers: A review,” 2023. 34. Z. Chen, J. Zhang, and D. Tao, “Recurrent glimpse-based decoder for detection with transformer,” CoRR, vol. abs/2112.04632, 2021. [Online]. Available: https://arxiv.org/abs/ 2112.04632 35. F. Li, H. Zhang, S. Liu, J. Guo, L. M. Ni, and L. Zhang, “Dn-detr: Accelerate detr training by introducing query denoising,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022, pp. 13 619–13 627. 36. S. Liu, F. Li, H. Zhang, X. Yang, X. Qi, H. Su, J. Zhu, and L. Zhang, “DAB-DETR: dynamic anchor boxes are better queries for DETR,” CoRR, vol. abs/2201.12329, 2022. [Online]. Available: https://arxiv.org/abs/2201.12329 37. D. Jia, Y. Yuan, H. He, X. Wu, H. Yu, W. Lin, L. Sun, C. Zhang, and H. Hu, “Detrs with hybrid matching,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023, pp. 19 702–19 712. 38. Y. Zhao, Y. Cai, W. Wu, and W. Wang, “Explore faster localization learning for scene text IEEE, detection,” in 2023 IEEE International Conference on Multimedia and Expo (ICME). 2023, pp. 156–161. 39. G. Zhang, Z. Luo, Y. Yu, K. Cui, and S. Lu, “Accelerating detr convergence via semantic- aligned matching,” 2022. Semi-Supervised Table Detection with Semantic Aligned Matching Transformer 17 40. K. Itonori, “Table structure recognition based on textblock arrangement and ruled line posi- tion,” in Proceedings of 2nd International Conference on Document Analysis and Recognition (ICDAR ’93), 1993, pp. 765–768. 41. S. Tupaj, Z. Shi, C. H. Chang, and H. Alam, “Extracting tabular information from text files,” EECS Department, Tufts University, Medford, USA, vol. 1, 1996. 42. S. Chandran and R. 
Kasturi, “Structural recognition of tabulated data,” in Proceedings of 2nd International Conference on Document Analysis and Recognition (ICDAR ’93), 1993, pp. 516–519. 43. Y. Hirayama, “A method for table structure analysis using dp matching,” in Proceedings of 3rd International Conference on Document Analysis and Recognition, vol. 2, 1995, pp. 583–586 vol.2. 44. S. A. Siddiqui, M. I. Malik, S. Agne, A. Dengel, and S. Ahmed, “Decnt: Deep deformable cnn for table detection,” IEEE Access, vol. 6, pp. 74 151–74 161, 2018. 45. K. A. Hashmi, A. Pagani, M. Liwicki, D. Stricker, and M. Z. Afzal, “Castabdetectors: Cascade network for table detection in document images with recursive feature pyramid and switchable atrous convolution,” Journal of Imaging, vol. 7, 2021. 46. D. Nazir, K. A. Hashmi, A. Pagani, M. Liwicki, D. Stricker, and M. Z. Afzal, “Hybridtabnet: Towards better table detection in scanned document images,” Applied Sciences, vol. 11, no. 18, 2021. [Online]. Available: https://www.mdpi.com/2076-3417/11/18/8396 47. P. Pyreddy and W. B. Croft, “Tintin: a system for retrieval in text tables,” in Digital library, 1997. 48. A. Pivk, P. Cimiano, Y. Sure, M. Gams, V. Rajkoviˇc, and R. Studer, “Transforming arbitrary tables into logical form with tartar,” Data & Knowledge Engineering, vol. 60, no. 3, pp. 567–595, 2007. [Online]. Available: https://www.sciencedirect.com/science/article/pii/ S0169023X06000620 49. S. Khusro, A. Latif, and I. Ullah, “On methods and tools of table detection, extraction and annotation in pdf documents,” Journal of Information Science, vol. 41, no. 1, pp. 41–57, 2015. 50. D. W. Embley, M. Hurst, D. Lopresti, and G. Nagy, “Table-processing paradigms: a research survey,” International Journal of Document Analysis and Recognition (IJDAR), vol. 8, no. 2, pp. 66–86, 2006. 51. F. Cesarini, S. Marinai, L. Sarti, and G. Soda, “Trainable table location in document images,” in 2002 International Conference on Pattern Recognition, vol. 3, 2002, pp. 236–240 vol.3. 52. A. C. e. Silva, “Learning rich hidden markov models in document analysis: Table location,” in 2009 10th International Conference on Document Analysis and Recognition, 2009, pp. 843– 847. 53. A. Silva, “Parts that add up to a whole: a framework for the analysis of tables,” Edinburgh University, UK, 2010. 54. T. Kasar, P. Barlas, S. Adam, C. Chatelain, and T. Paquet, “Learning to detect tables in scanned document images using line information,” in 2013 12th International Conference on Document Analysis and Recognition. IEEE, 2013, pp. 1185–1189. 55. X. Yang, M. E. Y¨umer, P. Asente, M. Kraley, D. Kifer, and C. L. Giles, “Learning to extract semantic structure from documents using multimodal fully convolutional neural network,” CoRR, vol. abs/1706.02337, 2017. [Online]. Available: http://arxiv.org/abs/1706.02337 56. D. He, S. Cohen, B. Price, D. Kifer, and C. L. Giles, “Multi-scale multi-task fcn for semantic page segmentation and table detection,” in 2017 14th IAPR International Conference on Document Analysis and Recognition (ICDAR), vol. 01, 2017, pp. 254–261. 57. I. Kavasidis, S. Palazzo, C. Spampinato, C. Pino, D. Giordano, D. Giuffrida, and P. Messina, “A saliency-based convolutional neural network for table and chart detection in digitized documents,” CoRR, vol. abs/1804.06236, 2018. [Online]. Available: http://arxiv.org/abs/1804.06236 58. S. Paliwal, V. D, R. Rahul, M. Sharma, and L. 
Vig, “Tablenet: Deep learning model for end-to-end table detection and tabular data extraction from scanned document images,” CoRR, vol. abs/2001.01469, 2020. [Online]. Available: http://arxiv.org/abs/2001.01469 59. L. Gao, Y. Huang, H. D´ejean, J.-L. Meunier, Q. Yan, Y. Fang, F. Kleber, and E. Lang, “Icdar 2019 competition on table detection and recognition (ctdar),” in 2019 International IEEE, 2019, pp. 1510–1515. Conference on Document Analysis and Recognition (ICDAR). 18 T. Shehzadi et al. 60. X. Zhong, J. Tang, and A. J. Yepes, “Publaynet: largest dataset ever for document layout analysis,” in 2019 International Conference on Document Analysis and Recognition (ICDAR). IEEE, Sep. 2019, pp. 1015–1022. 61. A. Mondal, P. Lipps, and C. V. Jawahar, “IIIT-AR-13K: A new dataset for graphical [Online]. Available: object detection in documents,” CoRR, vol. abs/2008.02569, 2020. https://arxiv.org/abs/2008.02569 62. M. C. G¨obel, T. Hassan, E. Oro, and G. Orsi, “Icdar 2013 table competition,” 2013 12th International Conference on Document Analysis and Recognition, pp. 1449–1453, 2013. 63. L. Gao, X. Yi, Z. Jiang, L. Hao, and Z. Tang, “Icdar2017 competition on page object detec- tion,” in 2017 14th IAPR International Conference on Document Analysis and Recognition (ICDAR), vol. 01, 2017, pp. 1417–1422. 64. M. Li, L. Cui, S. Huang, F. Wei, M. Zhou, and Z. Li, “Tablebank: A benchmark dataset for table detection and recognition,” 2019. 65. B. Smock, R. Pesala, and R. Abraham, “PubTables-1M: Towards comprehensive table ex- traction from unstructured documents,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), June 2022, pp. 4634–4642. 66. J. Long, E. Shelhamer, and T. Darrell, “Fully convolutional networks for semantic segmentation,” CoRR, vol. abs/1411.4038, 2014. [Online]. Available: http://arxiv.org/abs/ 1411.4038 67. X.-H. Li, F. Yin, and C.-L. Liu, “Page object detection from pdf document images by deep structured prediction and supervised clustering,” in 2018 24th International Conference on Pattern Recognition (ICPR), 2018, pp. 3627–3632. 68. M. Holecek, A. Hoskovec, P. Baudis, and P. Klinger, “Line-items and table understanding [Online]. Available: in structured documents,” CoRR, vol. abs/1904.12577, 2019. http://arxiv.org/abs/1904.12577 69. P. Riba, L. Goldmann, O. R. Terrades, D. Rusticus, A. Forn´es, and J. Llad´os, “Table detection in business document images by message passing networks,” Pattern Recognition, vol. 127, p. 108641, 2022. [Online]. Available: https://www.sciencedirect.com/science/ article/pii/S0031320322001224 70. M. Minouei, K. A. Hashmi, M. R. Soheili, M. Z. Afzal, and D. Stricker, “Continual learning for table detection in document images,” Applied Sciences, vol. 12, no. 18, 2022. [Online]. Available: https://www.mdpi.com/2076-3417/12/18/8969 71. A. K¨olsch, M. Z. Afzal, M. Ebbecke, and M. Liwicki, “Real-time document image classification using deep cnn and extreme learning machines,” in 2017 14th IAPR International Conference on Document Analysis and Recognition (ICDAR), vol. 01, 2017, pp. 1318–1323. 72. L. Hao, L. Gao, X. Yi, and Z. Tang, “A table detection method for pdf documents based on convolutional neural networks,” 2016 12th IAPR Workshop on Document Analysis Systems (DAS), pp. 287–292, 2016. 73. X. Yi, L. Gao, Y. Liao, X. Zhang, R. Liu, and Z. Jiang, “Cnn based page object detection in document images,” in 2017 14th IAPR International Conference on Document Analysis and Recognition (ICDAR), vol. 01, 2017, pp. 230–235. 
74. T. Lin, P. Goyal, R. B. Girshick, K. He, and P. Doll´ar, “Focal loss for dense object detection,” CoRR, vol. abs/1708.02002, 2017. [Online]. Available: http://arxiv.org/abs/1708.02002 75. Y. Fang, B. Liao, X. Wang, J. Fang, J. Qi, R. Wu, J. Niu, and W. Liu, “You only look at one sequence: Rethinking transformer in vision through object detection,” CoRR, vol. abs/2106.00666, 2021. [Online]. Available: https://arxiv.org/abs/2106.00666 76. K. He, G. Gkioxari, P. Doll´ar, and R. B. Girshick, “Mask R-CNN,” CoRR, vol. abs/1703.06870, 2017. [Online]. Available: http://arxiv.org/abs/1703.06870 77. Z. Cai and N. Vasconcelos, “Cascade R-CNN: delving into high quality object detection,” CoRR, vol. abs/1712.00726, 2017. [Online]. Available: http://arxiv.org/abs/1712.00726 78. N. D. Vo, K. Nguyen, T. V. Nguyen, and K. Nguyen, “Ensemble of deep object detectors for page object detection,” in Proceedings of the 12th International Conference on Ubiquitous Information Management and Communication, ser. IMCOM ’18. New York, NY, USA: Association for Computing Machinery, 2018. [Online]. Available: https://doi.org/10.1145/3164541.3164644 79. A. Gilani, S. R. Qasim, I. Malik, and F. Shafait, “Table detection using deep learning,” in 2017 14th IAPR International Conference on Document Analysis and Recognition (ICDAR), vol. 01, 2017, pp. 771–776. Semi-Supervised Table Detection with Semantic Aligned Matching Transformer 19 80. Y. Huang, Q. Yan, Y. Li, Y. Chen, X. Wang, L. Gao, and Z. Tang, “A yolo-based table detection method,” in 2019 International Conference on Document Analysis and Recognition (ICDAR), 2019, pp. 813–818. 81. X. Zheng, D. Burdick, L. Popa, and N. X. R. Wang, “Global table extractor (GTE): A framework for joint table identification and cell structure recognition using visual context,” CoRR, vol. abs/2005.00589, 2020. [Online]. Available: https://arxiv.org/abs/2005.00589 82. D. Prasad, A. Gadpal, K. Kapadni, M. Visave, and K. Sultanpure, “Cascadetabnet: An approach for end to end table detection and structure recognition from image-based documents,” CoRR, vol. abs/2004.12629, 2020. [Online]. Available: https://arxiv.org/abs/ 2004.12629 83. M. Agarwal, A. Mondal, and C. V. Jawahar, “Cdec-net: Composite deformable cascade net- work for table detection in document images,” in 2020 25th International Conference on Pattern Recognition (ICPR), 2021, pp. 9491–9498. 84. T. Shehzadi, K. A. Hashmi, D. Stricker, M. Liwicki, and M. Z. Afzal, “Bridging the perfor- mance gap between detr and r-cnn for graphical object detection in document images,” arXiv preprint arXiv:2306.13526, 2023. 85. S. Arif and F. Shafait, “Table detection in document images using foreground and background features,” in 2018 Digital Image Computing: Techniques and Applications (DICTA), 2018, pp. 1–8. 86. S. A. Siddiqui, M. I. Malik, S. Agne, A. Dengel, and S. Ahmed, “Decnt: Deep deformable cnn for table detection,” IEEE Access, vol. 6, pp. 74 151–74 161, 2018. 87. J. Dai, H. Qi, Y. Xiong, Y. Li, G. Zhang, H. Hu, and Y. Wei, “Deformable [Online]. Available: http: convolutional networks,” CoRR, vol. abs/1703.06211, 2017. //arxiv.org/abs/1703.06211 88. Y. Liu, Y. Wang, S. Wang, T. Liang, Q. Zhao, Z. Tang, and H. Ling, “Cbnet: A novel composite backbone network architecture for object detection,” CoRR, vol. abs/1909.03625, 2019. [Online]. Available: http://arxiv.org/abs/1909.03625 89. J. Jeong, S. Lee, J. Kim, and N. 
Kwak, “Consistency-based semi-supervised learning for object detection,” in Advances in Neural Information Processing Systems, H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alch´e-Buc, E. Fox, and R. Garnett, Eds., vol. 32. Curran Associates, Inc., 2019. [Online]. Available: https://proceedings.neurips.cc/paper/ 2019/file/d0f4dae80c3d0277922f8371d5827292-Paper.pdf 90. P. Tang, C. Ramaiah, R. Xu, and C. Xiong, “Proposal learning for semi-supervised object [Online]. Available: https://arxiv.org/abs/ detection,” CoRR, vol. abs/2001.05086, 2020. 2001.05086 91. T. Shehzadi, K. A. Hashmi, A. Pagani, M. Liwicki, D. Stricker, and M. Z. Afzal, “Mask-aware semi-supervised object detection in floor plans,” Applied Sciences, vol. 12, no. 19, 2022. [Online]. Available: https://www.mdpi.com/2076-3417/12/19/9398 92. G. Kallempudi, K. A. Hashmi, A. Pagani, M. Liwicki, D. Stricker, and M. Z. Afzal, “Toward semi-supervised graphical object detection in document images,” Future Internet, vol. 14, no. 6, 2022. [Online]. Available: https://www.mdpi.com/1999-5903/14/6/176 93. T. Shehzadi, K. A. Hashmi, D. Stricker, and M. Z. Afzal, “Sparse semi-detr: Sparse learnable queries for semi-supervised object detection,” arXiv preprint arXiv:2404.01819, 2024. 94. K. Sohn, Z. Zhang, C. Li, H. Zhang, C. Lee, and T. Pfister, “A simple semi-supervised [Online]. learning framework for object detection,” CoRR, vol. abs/2005.04757, 2020. Available: https://arxiv.org/abs/2005.04757 95. K. Wang, X. Yan, D. Zhang, L. Zhang, and L. Lin, “Towards human-machine cooperation: Self-supervised sample mining for object detection,” CoRR, vol. abs/1803.09867, 2018. [Online]. Available: http://arxiv.org/abs/1803.09867 96. N. Carion, F. Massa, G. Synnaeve, N. Usunier, A. Kirillov, and S. Zagoruyko, “End-to-end object detection with transformers,” in European conference on computer vision. Springer, 2020, pp. 213–229. 97. T. Lin, M. Maire, S. J. Belongie, L. D. Bourdev, R. B. Girshick, J. Hays, P. Perona, D. Ramanan, P. Doll´ar, and C. L. Zitnick, “Microsoft COCO: common objects in context,” CoRR, vol. abs/1405.0312, 2014. [Online]. Available: http://arxiv.org/abs/1405.0312 98. D. M. W. Powers, “Evaluation: from precision, recall and f-measure to roc, informedness, [Online]. Available: markedness and correlation,” CoRR, vol. abs/2010.16061, 2020. https://arxiv.org/abs/2010.16061 20 T. Shehzadi et al. 99. T. Shehzadi, K. Azeem Hashmi, D. Stricker, M. Liwicki, and M. Zeshan Afzal, “Towards end- to-end semi-supervised table detection with deformable transformer,” in Document Analysis and Recognition - ICDAR 2023, G. A. Fink, R. Jain, K. Kise, and R. Zanibbi, Eds. Cham: Springer Nature Switzerland, 2023, pp. 51–76. 100. Y. Liu, C. Ma, Z. He, C. Kuo, K. Chen, P. Zhang, B. Wu, Z. Kira, and P. Vajda, “Unbiased teacher for semi-supervised object detection,” CoRR, vol. abs/2102.09480, 2021. [Online]. Available: https://arxiv.org/abs/2102.09480 101. Y. Tang, W. Chen, Y. Luo, and Y. Zhang, “Humble teachers teach better students for semi-supervised object detection,” CoRR, vol. abs/2106.10456, 2021. [Online]. Available: https://arxiv.org/abs/2106.10456 102. P. Zhang, C. Li, L. Qiao, Z. Cheng, S. Pu, Y. Niu, and F. Wu, “VSR: A unified framework for document layout analysis combining vision, semantics and relations,” CoRR, vol. abs/2105.06220, 2021. [Online]. Available: https://arxiv.org/abs/2105.06220
synthetic_cpt
1
Efficient_domain_adaptation_of_language_models_in_ASR_systems_using_Prompt-tuning.pdf
PROMPT TUNING GPT-2 LANGUAGE MODEL FOR PARAMETER-EFFICIENT DOMAIN ADAPTATION OF ASR SYSTEMS
Saket Dingliwal, Ashish Shenoy*, Sravan Bodapati, Ankur Gandhe, Ravi Teja Gadde, Katrin Kirchhoff
{skdin, ashenoy, sravanb, aggandhe, gadderav, katrinki}@amazon.com
(*work carried out while working at Amazon)
ABSTRACT
Automatic Speech Recognition (ASR) systems have found their use in numerous industrial applications in very diverse domains, creating a need to adapt to new domains with small memory and deployment overhead. In this work, we introduce domain-prompts, a methodology that involves training a small number of domain embedding parameters to prime a Transformer-based Language Model (LM) to a particular domain. Using this domain-adapted LM for rescoring ASR hypotheses can achieve 7-13% WER reduction for a new domain with just 1000 unlabeled textual domain-specific sentences. This improvement is comparable or even better than fully fine-tuned models even though just 0.02% of the parameters of the base LM are updated. Additionally, our method is deployment-friendly as the learnt domain embeddings are prefixed to the input to the model rather than changing the base model architecture. Therefore, our method is an ideal choice for on-the-fly adaptation of LMs used in ASR systems to progressively scale it to new domains.
Index Terms— domain-adaptation, prompt-tuning, gpt2, multi-domain ASR, parameter-efficiency, low-data setting
1. INTRODUCTION
Automatic Speech Recognition (ASR) systems form a key component of various products across industry. With recent advancements [1, 2, 3], they have been deployed in a wide range of domains, including healthcare, travel reservations, and customer services. A typical technique to improve the performance of these systems is to do a rescoring of the n-best hypotheses with an external Language Model (LM) [2]. Recent pretrained Transformer-based LMs such as GPT-2 [4] and BERT [5] have been shown [6] to be more effective than conventional LSTM based LMs for rescoring. However, their use in an industrial ASR system that needs to incrementally support new domains poses the following challenge. As showcased in [7, 8], domain-specific data is useful for improving performance in a domain. However, retraining or maintaining copies of Transformer-based LMs for each domain separately is not scalable, as updating and storing millions of parameters comes with a large cost. Therefore, a need for an efficient domain-adaptation method for such LMs is evident. [9, 10, 11] used external knowledge, memory and context respectively to improve performance in specific difficult domains, while [12, 13] adapted the neural LM used within the system. However, to the best of our knowledge, ours is the first work to propose and study methods to do efficient domain-adaptation of Transformer-based LMs to benefit ASR systems. Language modeling literature [14, 15, 16] introduced novel methodologies to solve a related problem of efficiently adapting such LMs to specific tasks. Instead of fine-tuning and storing millions of parameters for each task, they propose augmenting the frozen task-agnostic model with a handful of task-specific trainable parameters. For example, AdapterHub [14] introduced new task-specific layers in conjunction with frozen pre-trained weights of LMs.
More re- cent models, such as GPT-3 [17], are able to solve new tasks with the help of just the textual descriptions of the task (called prompts). Extending [18], the focus of this work is to adapt such LMs to different domains of the same task rather than solv- ing multiple tasks. Our objective is to learn a small set of domain-specific parameters to score ASR hypotheses better than the base Transformer-based LM without the domain data. Drawing ideas from prompt-tuning [15] for task adap- tation, we introduce domain-prompts for our goal. We define domain-prompts as domain-specific embeddings, which when prefixed to the sequence of token embeddings, and passed through a pretrained Transformer LM, return the probability distribution of the next token, close to that given by a fully domain-adapted LM. Our main contributions are summarized as follows: (1) we introduce a new methodology domain- prompts, which is the first attempt to apply prompt-tuning for parameter-efficient domain-adaptation of Transformer-based LMs for their use in ASR systems, (2) In new domains with limited data, we demonstrate that rescoring ASR hypotheses with LM adapted using our method can achieve 7-13% WER reduction while using a handful of additional domain-specific parameters (3) Along with saving memory and training cost, domain-prompts can match or even beat the performance of fully fine-tuned models with no change to the deployment of the base model, thereby making it the ideal choice for on-the-fly domain adaptation for industrial ASR systems. Fig. 1. Domain Prompts: training (left) and inference (right) methodology for domain-adaptation 2. METHODOLOGY GPT-3 [17] introduced natural-language prompts as textual descriptions of a task. For example - prefixing ”translate the sentence to French ” to the input for the machine transla- tion task. In prompt-tuning [15], rather than designing these prompts manually, the model learns their embeddings using a few labeled examples from the task. We demonstrate that such embeddings can also be learnt for different domains, i.e., we can prefix a sentence with additional domain-specific em- bedding vectors such that it improves the perplexity of the sentences from that domain. We use the self-supervised task of predicting the next token in unlabeled domain-specific text for training. An unlabelled sentence is a sequence of T to- kens {x1, x2 . . . xT }. Let xφ 1:T be the corresponding concate- nation of d-dimensional embedding vectors for these tokens, given by the embedding matrix parameterized by φ. These vectors are propagated through multiple Transformer layers before taking a softmax to obtain the probability distribution over the vocabulary for the next possible token. Mathemati- cally, we denote the probability of predicting xt token at tth time-step as pθ(xt|xφ 1:<t) where θ represents all the param- eters in the Transformer layers. Both θ and φ have large di- mensions and are trained together on a large corpus of text. , xφD d2 . . . xφD dk ] concatenated together as xφD d1:k In our method, for any domain D, we begin with pre- trained {θ, φ} and introduce a small number of additional pa- rameters φD in the form of k d-dimensional embedding vec- tors [xφD . We d1 prefix them to each sentence while predicting the next token at each time step. While training, we keep {θ, φ} fixed and learn φD by minimizing the cross-entropy loss between the true token value xt and its predicted probability. Eq. 
Eq. (1) represents the loss for one such sequence, and we add the loss values over all the sentences from the domain:

    φ_D = argmin_{φ_D} Σ_{t=1}^{T} − log p_θ(x_t | x^{φ_D}_{d_{1:k}} ; x^φ_{1:<t})        (1)

We hypothesize that the self-attention layers in Transformer-based LMs will create an interaction between φ_D and the embeddings of the tokens of the sentences, thereby improving the score to cater to the domain D. During inference, we prefix the trained domain-prompts (x^{φ_D}_{d_{1:k}}) of the corresponding domain to the hypotheses from the ASR system and use perplexity scores from the model for rescoring, as explained in Fig. 1. Further, in our implementation, we used gradient descent to minimize the loss and, instead of initializing the parameters φ_D randomly, we begin with the token embeddings (using φ from the pretrained model) of the k most frequent words in the training sentences of the domain, following prior work [15]. Also, to ensure inference latency does not increase due to the additional computations, we use caching to store the state of the Transformer after propagating the k domain-prompts through the Transformer layers. These domain embeddings are constant for all the hypotheses in the domain, so their forward pass through the Transformer layers can be precomputed, ensuring that the latency for scoring a hypothesis is the same for both the base and adapted versions of the Transformer LM.

3. EXPERIMENTAL SETUP

To test the effectiveness of the proposed domain-prompts, we run extensive experimentation with numerous adaptation baselines across four different domains, model sizes, initializations and training set sizes. We use two versions of the GPT-2 architecture [4] as our base models: (1) gpt2 (117M (million) parameters) and (2) gpt2-medium (345M parameters). We conduct all our experiments on an AWS EC2 instance with 8 Tesla V100 GPUs using a hybrid ASR system. Such a system consists of an Acoustic Model (AM) and two different LMs. The first-pass LM is an n-gram LM which is directly composed with the lattice from the AM, while the second-pass LM is a neural LM used for rescoring the n-best hypotheses decoded from the lattice. Our AM is trained on 12k hours of audio. Our first-pass LM is a 4-gram model with Kneser-Ney (KN) [19] smoothing and a vocabulary of 500k words, trained on a heterogeneous corpus of text. We use scores from different second-pass LMs (interpolated with scores from the AM and the first-pass LM) to rescore n-best lists with n = 10. Our performance metric is Word Error Rate (WER), and we report WER Reduction % (WERR) over the baseline.
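To make the rescoring step concrete, the sketch below (reusing model, tokenizer, wte, k and domain_prompt from the previous snippet) scores each n-best hypothesis with the domain-adapted LM and combines that score with the acoustic-model and first-pass scores; the interpolation weight w_lm, the helper names, and the example numbers are illustrative assumptions rather than values from the paper.

```python
import torch

@torch.no_grad()
def adapted_log_prob(hypothesis: str) -> float:
    """Log-probability of an ASR hypothesis under the domain-adapted LM.

    For clarity this re-runs the k prompt vectors with every hypothesis; in the
    paper's setup their Transformer states are precomputed and cached once per
    domain, so scoring latency matches the unadapted LM.
    """
    ids = tokenizer(hypothesis, return_tensors="pt").input_ids            # (1, T)
    inputs = torch.cat([domain_prompt.unsqueeze(0), wte(ids)], dim=1)
    logits = model(inputs_embeds=inputs).logits[:, k - 1:-1, :]           # predict x_1 ... x_T
    logp = torch.log_softmax(logits, dim=-1)
    return logp.gather(-1, ids.unsqueeze(-1)).squeeze(-1).sum().item()

def rescore(nbest, w_lm=0.3):
    """nbest: list of (hypothesis, combined AM + first-pass score); returns the best hypothesis."""
    return max(nbest, key=lambda h: h[1] + w_lm * adapted_log_prob(h[0]))[0]

def werr(wer_baseline: float, wer_adapted: float) -> float:
    """Relative WER reduction in percent (WERR) over the baseline."""
    return 100.0 * (wer_baseline - wer_adapted) / wer_baseline

# Example: pick the best of a 10-best list and report WERR for a domain.
# best = rescore([("i would like to retrieve my quote", -12.3), ...])
# print(werr(wer_baseline=14.0, wer_adapted=13.0))   # approx. 7.1
```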
Table 1. Generating text from gpt2 adapted to the airlines domain

Input tokens       "hello how are you"
no-adaptation      hello how are you doing? I'm really happy with the results.
full-fine-tuning   hello how are you able to get a new flight I'm flying from London Heathrow to Dubai
domain-prompts     hello how are you able to get a refund on the flight I'm flying from Glasgow to Madrid today

We evaluate under two different settings representative of any industrial multi-domain ASR system: (1) New domains with limited data: to simulate the scenario of on-the-fly adaptation to new/unseen domains, we use only 1k domain-specific sentences to adapt the second-pass LM. (2) Domains with large data: 50k domain-specific sentences are added to the training set of both the first- and the second-pass LM.

Dataset: For our experiments, we need domain-wise (i) textual data for LM adaptation and (ii) audio data for evaluation. We were not able to find any public dataset that can be split domain-wise and meets both criteria. Therefore, we use in-house datasets from four different domains with all Personal Identifiable Information (PII) removed. For textual data, we use 1k and 50k domain-specific conversational sentences in the two settings respectively. This data is split 80:20 into train and dev sets. The perplexity on the dev set is used for tuning hyperparameters such as the learning rate for all the baselines. For evaluation, we use 500 labelled 8 kHz audio clips per domain, which are single utterances from a conversational task-oriented dialog system.

Baselines: The different choices of second-pass LM used for comparison in Table 2 are defined below: (1) no rescoring (baseline): use the 1-best hypothesis with no second-pass LM. (2) LSTM LM: 2-layer LSTM-based LM with embedding dimension (d), hidden dimension (h) and a word-based tokenizer with vocabulary size (V). (3) no adaptation: out-of-the-box Transformer-based LMs without the use of any domain data or any parameter update. (4) tuning embedding layer: update the parameters of the embedding matrix (φ) using domain-specific data while keeping θ fixed. (5) prompt-designing: prefix manually defined prompts to the hypotheses without any training; the 20 most frequent words from the domain were used as fixed prompts. (6) domain-embedding [7]: learn the embedding of a special domain token; this is equivalent to k = 1 in our method. (7) domain Adapters: Adapters [14] are typically used for parameter-efficient task adaptation of Transformer-based LMs; here we train domain Adapters using the self-supervised task of next-token prediction with different reduction factors (c). (8) domain-prompts: update the domain embeddings (φ_D) for different values of k and initializations of φ_D. (9) full fine-tuning: update all the parameters (θ, φ) of the base LM. (10) oracle: pick the hypothesis in the n-best list with the minimum WER to know the upper bound of improvements achievable through rescoring.

4. RESULTS AND DISCUSSION

Domain adaptation methods are used to prime the LM to a particular domain. As shown in Table 1, the LM adapted using domain-prompts learnt from airlines data completes a sentence into a very domain-specific utterance, in contrast to the out-of-the-box LM extending the input to a generic sentence. We showcase some qualitative results in Table 3, where perplexity scores from vanilla gpt2 and domain-prompts (k = 50) are provided for similar-sounding hypotheses. These examples indicate how domain information helps to disambiguate and choose the right hypothesis. We summarize the WERR scores of all our methods in the two different settings in Table 2. The first column contains the name of the methodology and its hyper-parameter, the second column is the base model used, while the third column represents the number of domain-specific trainable parameters needed in addition to the base model. Note that the goal of the experiments is not only to find the best-performing method, but also to discover settings that achieve optimal performance with a minimal number of additional parameters. This is a critical decision point for systems that must scale to potentially hundreds of domains, as storage and training cost are directly linked to the number of domain-specific parameters. The adaptation to domain-specific data is useful for performance in all the domains in both settings (row 4 vs. 19, or row 3 vs. 18).
Even rescoring with dialog-gpt2-medium [20] (row 5), which is pretrained on a large dialog corpus, is not as effective as adaptation to small amounts of domain data. The WERR numbers vary across domains, but the relative performance of the different domain-adaptation methods is consistent across all domains. The domains are fairly different from each other; domains like healthcare have a large number of unseen technical words, and hence improvements through rescoring are relatively small. In the large-data setting, the domain-specific data is also added to the first-pass LM, so the quality of the n-best hypotheses is better, which leads to larger performance improvements through rescoring (row 20). Similar to the results in [6], we observe that Transformer-based LMs perform better than LSTM-based methods (rows 1 and 2 vs. 18). For a fair comparison, we increase the size of the LSTM model and pretrain it with wikitext-103 [21] (row 2), but it still cannot match the Transformer models. Also, as we increase the size of the Transformer, the performance improves (row 18 vs. 19), further indicating the need for parameter-efficient adaptation methods.

Our main conclusions about our method are as follows:

Domain prompts are the most parameter efficient: domain-prompts uses < 0.02% of the parameters of the base model per domain to achieve performance comparable to domain-specific fully fine-tuned models with millions of parameters (row 17 vs. 19). Although fine-tuned models perform better than our method in the large-data setting, their improvement comes at the cost of deploying separate models for each domain.

Table 2. Comparison of different domain-adaptation methods across domains, in parameter count and WERR% metric.
row | domain adaptation method | base model | # additional domain-specific params | Low-data setting (1k sentences), WERR% ↑ (healthcare / fastfood / insurance / airlines) | Large-data setting (50k sentences), WERR% ↑ (healthcare / fastfood / insurance / airlines)
-  | no rescoring (baseline)              | -                  | 0     | - / - / - / -            | - / - / - / -
1  | LSTM LM (d = h = 256, V = 15k)       | LSTM               | 8.7M  | 0 / 0 / 0 / 0            | 0 / 0.8 / 6.2 / 4.8
2  | LSTM LM (d = h = 512, V = 229k)      | wiki103-LSTM       | 121M  | 0 / 0 / 3.9 / 3.1        | 0 / 1.7 / 6.2 / 7.3
3  | no adaptation                        | gpt2               | 0     | 2.4 / 2.0 / 2.7 / 0.5    | 0.8 / 0.8 / 3.3 / 2.4
4  | no adaptation                        | gpt2-medium        | 0     | 3.4 / 3.4 / 4.8 / 3.6    | 0.8 / 3.4 / 8.2 / 4.9
5  | no adaptation                        | dialog-gpt2-medium | 0     | 1.0 / 1.5 / 3.2 / 3.1    | 0 / 2.6 / 5.1 / 4.9
6  | tuning embedding layer               | gpt2               | 40M   | 3.9 / 4.9 / 6.0 / 3.6    | 0 / 9.4 / 6.2 / 7.3
7  | prompt-designing                     | gpt2               | 0     | 2.4 / 0 / 2.7 / 0.5      | 0 / 1.7 / 4.1 / 3.7
8  | domain-embedding                     | gpt2               | 768   | 3.4 / 2.0 / 4.3 / 0.5    | 0 / 1.7 / 3.1 / 3.7
9  | domain Adapter (c = 512)             | gpt2               | 0.3M  | 3.4 / 5.9 / 5.9 / 4.6    | 0.8 / 7.7 / 6.2 / 6.1
10 | domain Adapter (c = 16)              | gpt2               | 1.1M  | 5.8 / 7.8 / 6.4 / 4.6    | 0.8 / 11.1 / 8.2 / 8.5
11 | domain Adapter (c = 512)             | gpt2-medium        | 1M    | 6.3 / 10.7 / 8.1 / 7.1   | 1.6 / 8.6 / 10.3 / 9.8
12 | domain Adapter (c = 16)              | gpt2-medium        | 3.6M  | 4.8 / 9.8 / 7.0 / 7.6    | 3.2 / 12.0 / 10.3 / 8.5
13 | domain-prompts (k = 10, vocab init)  | gpt2               | 7680  | 5.3 / 7.3 / 3.2 / 3.8    | 3.2 / 7.7 / 7.2 / 6.1
14 | domain-prompts (k = 50, random init) | gpt2               | 0.04M | 6.3 / 8.8 / 6.5 / 6.1    | 2.4 / 8.6 / 8.2 / 6.1
15 | domain-prompts (k = 50, vocab init)  | gpt2               | 0.04M | 6.8 / 8.8 / 7.0 / 5.1    | 4.0 / 9.4 / 9.2 / 8.5
16 | domain-prompts (k = 200, vocab init) | gpt2               | 0.16M | 7.3 / 9.3 / 5.9 / 6.1    | 4.0 / 9.2 / 10.3 / 8.5
17 | domain-prompts (k = 50, vocab init)  | gpt2-medium        | 0.05M | 7.7 / 13.1 / 8.1 / 8.1   | 5.7 / 11.1 / 12.4 / 11.0
18 | full-fine-tuning                     | gpt2               | 117M  | 6.8 / 9.8 / 6.4 / 6.6    | 4.8 / 12.9 / 12.3 / 8.5
19 | full-fine-tuning                     | gpt2-medium        | 345M  | 7.2 / 11.2 / 7.0 / 7.1   | 7.2 / 16.2 / 12.4 / 8.5
20 | oracle                               | -                  | -     | 29.8 / 32.8 / 21.1 / 22.0 | 36.2 / 38.6 / 34.7 / 39.3

This is expected: when adequate data is available, a larger number of domain-specific parameters can capture a larger amount of domain information. Adapters, which are commonly used for their efficiency, have limited efficacy when compared to our method: domain-prompts, with 20 times fewer parameters, can beat their performance (row 11 vs. 17). Further, Adapters have another limitation in that their number of parameters scales with the number of layers in the base Transformer model (gpt2 vs. gpt2-medium), while domain-prompts depends only on the embedding dimension d. Fine-tuning a subset of parameters of the Transformer-based LM (row 6) is not effective, as manually selecting a subset of the most influential parameters is difficult; it performs worse than our method in both performance and cost. Methods like fixing prompts or training a single domain-embedding vector use no or very few parameters, but their improvements are only marginal over the unadapted base LM (row 3 vs. rows 7 and 8).

Domain prompts achieve the best performance for new/unseen domains: This setting represents common practical applications where a new domain with a limited amount of available data needs to be added to the ASR system. Here, domain-prompts can reap both benefits: (1) the rich pretraining of the Transformer-based LM and (2) no overfitting on a limited number of examples. This is evident from the fact that fine-tuned gpt2 performs slightly better than the corresponding prompt-tuned version (row 16 vs. 18), while the opposite is true for gpt2-medium (row 17 vs. 19), indicating that updating a large number of parameters is prone to overfitting. Hence, domain-prompts presents an ideal way to capture all the necessary domain-specific information from 1k examples in its limited domain-specific parameters, achieving 7-13% WERR improvement.
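As a quick arithmetic check of the parameter counts above (assuming the standard GPT-2 embedding widths, d = 768 for gpt2 and d = 1024 for gpt2-medium), the per-domain cost of domain-prompts is just the k × d prompt matrix:

\[
k \cdot d = 10 \times 768 = 7{,}680, \qquad 50 \times 768 = 38{,}400 \approx 0.04\,\text{M}, \qquad 50 \times 1024 = 51{,}200 \approx 0.05\,\text{M},
\]
\[
\frac{51{,}200}{345\,\text{M}} \approx 0.015\% < 0.02\%,
\]

which matches the 7680, 0.04M, 0.05M and 0.16M (k = 200) entries reported for rows 13–17 of Table 2 and the < 0.02% figure quoted above.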
Domain prompts are deployment friendly: In addition to the performance and cost benefits, domain-prompts can easily be used for new domains without having to deploy new models or introduce new architectures. Domain-prompts are prefixed to the input, keeping all the base model parameters unchanged, while all other adaptation methods require updating parameters inside the base model architecture.

Table 3. Qualitative examples: domain-adapted gpt2 prefers the correct hypothesis over the hypothesis with incorrect tokens (perplexity ↓)

domain    | hypothesis                                                          | vanilla gpt2 | adapted gpt2
insurance | i would like to retrieve my code (incorrect)                        | 172.5        | 40.4
insurance | i would like to retrieve my quote (correct)                         | 238.1        | 21.4
airlines  | what's the point to tell you for frequent flyer number (incorrect)  | 313.5        | 19.9
airlines  | what's the points tally for frequent flyer number (correct)         | 598.9        | 17.6

Domain prompts provide a hyper-parameter (k) to trade off performance and cost: Comparing rows 13, 14 and 16 in Table 2, we see that the performance of the models improves as we increase k, although the improvements saturate. This gives ASR system developers a parameter to control cost according to their requirements and the availability of domain data.

Initialization with common vocabulary token embeddings helps: Comparing rows 14 and 15, initializing φ_D with the token embeddings of the most frequent domain words gives marginal improvements. Since these words are representative of the domain, they prove to be a useful starting point.

5. CONCLUSION

Domain-prompts provides a scalable and parameter-efficient method to add domain information to Transformer-based LMs. It saves storage and training cost without compromising performance. It also achieves the best performance for new domains with only a handful of available examples. Rather than updating the base model parameters, the new parameters are added as prefixes to the input, so our method doesn't require model deployments per domain. Therefore, our method becomes an ideal choice for on-the-fly adaptation of second-pass LMs for incrementally scaling an industrial ASR system to new domains with negligible overhead.

6. REFERENCES

[1] Alex Graves, "Sequence transduction with recurrent neural networks," arXiv preprint arXiv:1211.3711, 2012.
[2] William Chan, Navdeep Jaitly, Quoc Le, and Oriol Vinyals, "Listen, attend and spell: A neural network for large vocabulary conversational speech recognition," in 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2016, pp. 4960–4964.
[3] Qian Zhang, Han Lu, Hasim Sak, Anshuman Tripathi, Erik McDermott, Stephen Koo, and Shankar Kumar, "Transformer transducer: A streamable speech recognition model with transformer encoders and rnn-t loss," in ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2020, pp. 7829–7833.
[4] Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al., "Language models are unsupervised multitask learners," OpenAI blog, vol. 1, no. 8, pp. 9, 2019.
[5] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova, "Bert: Pre-training of deep bidirectional transformers for language understanding," arXiv preprint arXiv:1810.04805, 2018.
[6] Kazuki Irie, Albert Zeyer, Ralf Schlüter, and Hermann Ney, "Language modeling with deep transformers," arXiv preprint arXiv:1905.04226, 2019.
[7] Ashish Shenoy, Sravan Bodapati, Monica Sunkara, Srikanth Ronanki, and Katrin Kirchhoff, "Adapting Long Context NLM for ASR Rescoring in Conversational Agents," in Proc.
Interspeech 2021, 2021, pp. 3246–3250.
[8] Ashish Shenoy, Sravan Bodapati, and Katrin Kirchhoff, "Asr adaptation for e-commerce chatbots using cross-utterance context and multi-task language modeling," Proceedings of The 4th Workshop on e-Commerce and NLP, 2021.
[9] Nilaksh Das, Duen Horng Chau, Monica Sunkara, Sravan Bodapati, Dhanush Bekal, and Katrin Kirchhoff, "Listen, know and spell: Knowledge-infused subword modeling for improving asr performance of oov named entities," in ICASSP 2022 - 2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2022, pp. 7887–7891.
[10] Dhanush Bekal, Ashish Shenoy, Monica Sunkara, Sravan Bodapati, and Katrin Kirchhoff, "Remember the context! asr slot error correction through memorization," in 2021 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU), 2021, pp. 236–243.
[11] Mahaveer Jain, Gil Keren, Jay Mahadeokar, Geoffrey Zweig, Florian Metze, and Yatharth Saraf, "Contextual rnn-t for open domain asr," arXiv preprint arXiv:2006.03411, 2020.
[12] Junho Park, Xunying Liu, Mark JF Gales, and Phil C Woodland, "Improved neural network based language modelling and adaptation," in Eleventh Annual Conference of the International Speech Communication Association, 2010.
[13] Tanel Alumäe, "Multi-domain neural network language model," in INTERSPEECH, 2013, vol. 13, pp. 2182–2186.
[14] Jonas Pfeiffer, Andreas Rücklé, Clifton Poth, Aishwarya Kamath, Ivan Vulić, Sebastian Ruder, Kyunghyun Cho, and Iryna Gurevych, "Adapterhub: A framework for adapting transformers," arXiv preprint arXiv:2007.07779, 2020.
[15] Brian Lester, Rami Al-Rfou, and Noah Constant, "The power of scale for parameter-efficient prompt tuning," arXiv preprint arXiv:2104.08691, 2021.
[16] Xiang Lisa Li and Percy Liang, "Prefix-tuning: Optimizing continuous prompts for generation," arXiv preprint arXiv:2101.00190, 2021.
[17] Tom B Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al., "Language models are few-shot learners," arXiv preprint arXiv:2005.14165, 2020.
[18] Saket Dingliwal, Ashish Shenoy, Sravan Bodapati, Ankur Gandhe, Ravi Teja Gadde, and Katrin Kirchhoff, "Efficient domain adaptation of language models in ASR systems using prompt-tuning," CoRR, vol. abs/2110.06502, 2021.
[19] Reinhard Kneser and Hermann Ney, "Improved backing-off for m-gram language modeling," in 1995 International Conference on Acoustics, Speech, and Signal Processing. IEEE, 1995, vol. 1, pp. 181–184.
[20] Yizhe Zhang, Siqi Sun, Michel Galley, Yen-Chun Chen, Chris Brockett, Xiang Gao, Jianfeng Gao, Jingjing Liu, and Bill Dolan, "Dialogpt: Large-scale generative pre-training for conversational response generation," arXiv preprint arXiv:1911.00536, 2019.
[21] Stephen Merity, Caiming Xiong, James Bradbury, and Richard Socher, "Pointer sentinel mixture models," arXiv preprint arXiv:1609.07843, 2016.
Submitted to 'Chinese Physics C'

Modeling and Analysis of SLED

LI Lin1,2, FANG WenCheng1, WANG Chao-Peng1, GU Qiang1*
1 Shanghai Institute of Applied Physics, Chinese Academy of Sciences, Shanghai 201800, China
2 University of Chinese Academy of Sciences, Beijing 100049, China
* Corresponding author (email: [email protected])

Abstract: SLED is a crucial component of the C-band microwave acceleration unit of SXFEL. To study the behavior of the SLED (SLAC Energy Doubler), a mathematical model is commonly built and analyzed. In this paper, a new method is proposed to build the model of the SLED at SINAP. With this method, the parameters of the two cavities can be analyzed separately. It is also suitable for studying parameter optimization of the SLED and for analyzing the effects of parameter variations. Simulation results obtained with our method are also presented.

Key words: SLED, mathematical model, energy multiplication factor, coupling coefficient

1 Introduction

A compact soft X-ray Free Electron Laser (SXFEL) facility is presently planned at the Shanghai Institute of Applied Physics, CAS [1], and some analytical modeling and simulation research is ongoing. The high-power RF system for SXFEL comprises an RF power source, a constant-gradient accelerating structure, and waveguide components. To obtain a high constant-gradient field in the accelerating structure, the existing klystron power source of 50 MW cannot meet the power requirement of the field target, and a pulse compressor is required to multiply the power from the klystron [2]. There are different types of pulse compressor which satisfy the requirements. In our case, a SLED-type pulse compressor is proposed for the C-band RF system of SXFEL. To study the performance of the pulse compressor and analyze its parameters, an effective way is to build a mathematical model; model-based simulations can then be carried out to verify the design. In this paper, a mathematical model of the SLED is presented, which is a powerful tool for control system development. With this model, the parameters of the SLED are optimized and the effects of parameter variations are analyzed correspondingly.

2 Modeling of the SLED

2.1 Structure of the SLED

SLED is an RF pulse compressor which was first invented by Farkas et al. in 1974 [3]. The SLED is composed of two identical high-Q cavities attached to a 3 dB coupler. The structure of the SLED is shown in Fig. 1.

Fig. 1 The structure of the SLED (ports: input, output; cavity 1, cavity 2)

The performance of the SLED is determined by the structure of the storage cavities. The energy multiplication factor M is given by Eq. (1); it is determined by the filling time T_a of the accelerating structure, the filling time T_c of the cavity, the gradient of the group velocity v_g along the accelerating structure, and the cavity coupling coefficient β.

2.2 Modeling SLED using S11

Based on references [3][4][5], a model of the SLED can be constructed from energy conservation. However, models based on these methods contain only the amplitude information of the input and output signals; no phase information can be represented, and they are not suitable for the case of two asymmetric cavities. In this paper, the technique of a two-port terminal network is used to model the behavior of the cavity and input coupler. The cavity itself is equivalent to an RLC circuit, and the input coupler can be represented by an ideal transformer, as shown in Fig. 2.
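As background for the equivalent-circuit model developed below, the standard textbook relations for a cavity coupled to a transmission line are summarized here; the notation (Q_0, Q_ext, Q_L, β, T_c) follows common RF usage and is assumed rather than taken verbatim from the paper.

\[
\beta = \frac{Q_0}{Q_\text{ext}}, \qquad \frac{1}{Q_L} = \frac{1}{Q_0} + \frac{1}{Q_\text{ext}} \;\Rightarrow\; Q_L = \frac{Q_0}{1+\beta}, \qquad T_c = \frac{2Q_L}{\omega_0} = \frac{2Q_0}{\omega_0(1+\beta)},
\]
\[
\Gamma_\text{steady} = \frac{\beta - 1}{\beta + 1} \quad \text{(cavity reflection on resonance, in steady state).}
\]

For the C-band frequency of 5712 MHz and the working point selected in Section 3 (Q_0 = 160000, β = 7), these relations give T_c ≈ 1.1 µs, which sets the time scale of the charging and emission transients reproduced by the model.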
Fig. 2 The equivalent circuit of SLED (RLC cavity, ideal 1:n transformer, transmission line of impedance Z_0 from the transmitter)

Fig. 3 Cavity model at the transmission line side (line of impedance Z_0 terminated by the transformed cavity impedance Z'_cav, between X = 0 and X = L)

Using the definition of the reflection coefficient of a two-port microwave network in Fig. 3, the reflection coefficient can be written as [6]

    S_11 = V_out / V_in = (Z'_cav − Z_0) / (Z'_cav + Z_0)        (2)

where Z'_cav is the cavity impedance transformed to the transmission-line side, Q_0 = ω_0 R C is the cavity unloaded quality factor and β = Q_0/Q_ext is the coupling coefficient. Substituting the impedance into equation (2), equation (2) can be represented as a differential equation between the input and output voltages, Eq. (3). The input and output voltages are modulated sine waves at the working frequency, so they can be written in phasor form, where the complex amplitude vectors contain the amplitude and phase information and can be expressed by their real and imaginary parts. Since the phase and amplitude information is the most useful part to consider, we put the phasor definitions into the equation above and define the detuning as the difference between the resonant and working frequencies. If the detuning is much smaller than the working frequency, the corresponding narrow-band approximation can be made. Normally, the voltage envelope changes slowly, so the second-derivative term is always smaller than the others, and equation (3) can be simplified to Eq. (4). Equation (3) can also be expressed as a transfer function, Eq. (5). Due to the 3 dB power divider between the input and output ports, the relationship between the input and output of the SLED for each cavity can be presented as Eqs. (6) and (7), respectively.

From the structure of the SLED shown in Fig. 1, the output signal of the SLED is formed by the reflected signals of the two cavities. Based on equations (6) and (7), the SLED model can be constructed as in Fig. 4.

Fig. 4 The SLED model (I/Q inputs SLED_Iin/SLED_Qin, forward and reflected signals V_for/V_ref in real and imaginary parts, S11 blocks for cavity 1 and cavity 2 with π/2 phase shifts, and I/Q outputs SLED_Iout/SLED_Qout)

Table 1 Key parameters for energy gain factor calculation
RF frequency                          5712 MHz
Accelerating structure filling time   372 ns
RF pulse length                       2.5 µs
Reverse time                          2.0 µs

3 Results of simulation by the SLED model

Based on the previous theoretical analysis, we have carried out some simulations with the SLED model to optimize parameters.

3.1 Study of the working point of the SLED

The coupling coefficient and the quality factor of the cavity dominate the performance of the RF pulse compressor, in particular the energy multiplication factor and the power efficiency. The parameters relevant to the calculation of the energy multiplication factor and RF power efficiency are listed in Table 1. Using the parameters in Table 1 and tuning the coupling coefficient and quality factor, the tendency of the energy multiplication factor and RF power efficiency can be mapped, as shown in Fig. 5. The power efficiency and the energy multiplication factor increase with the quality factor and the input coupling coefficient. The optimal parameters are decided by the practical requirements. The point on the straight line in Fig. 5 shows the optimal operating point, where both the quality factor and the coupling coefficient are small while the energy multiplication factor reaches its maximum value. According to the simulation results, the operating point is selected at a quality factor Q_0 = 160000 and a coupling coefficient β = 7, and the corresponding energy multiplication factor is 1.9029. Compared with the original working point (Q_0 = 180000, β = 5.82), the fabrication is easier and the energy multiplication factor is reduced by only 0.1%.

Fig. 5 The energy multiplication factor and RF power efficiency mapped against the unloaded quality factor and coupling coefficient (the marked point Q_0 = 160000, β = 7 gives M = 1.9029 and an efficiency of about 69.8%)

3.2 Coarse tuning before operation

In practice, more attention is paid to the energy multiplication factor. There can be a frequency deviation from the operating frequency due to temperature drift, a coupling coefficient deviation from the desired value, and an unloaded quality factor error of the cavity due to machining tolerances. Using the SLED model, we can obtain the energy multiplication factor change caused by the cavity frequency deviation, the input coupling coefficient deviation and the unloaded quality factor fluctuation, as shown in Fig. 6. Fig. 6(a) and (b) show that the energy multiplication factor declines as the frequency deviation increases and as the coupling coefficient moves away from the desired value. The effect caused by the unloaded quality factor error of the cavity is shown in Fig. 6(c); the design value is not the optimal value for obtaining the maximum energy multiplication factor, which agrees well with the analysis in the section above.

Fig. 6 The energy multiplication factor change with (a) the cavity frequency deviation, (b) the input coupling coefficient deviation and (c) the unloaded quality factor fluctuation

Before the actual operation, the RF pulse compressor should be tuned to keep the energy multiplication factor fluctuation below 1%. According to Fig. 6, the frequency should be controlled within about ±30 kHz, the coupling coefficient within a range consistent with the result in Fig. 5(a), and the unloaded quality factor within ±2×10^4. Since the coupling coefficient and the unloaded quality factor cannot be tuned during operation, they should be tuned into an optimal range during the cold test in order to reach a high precision, such as ±0.25 for the coupling coefficient and ±2×10^3 for the unloaded quality factor, in order to attain an energy gain factor flatness of 0.01%. According to the analysis above, the parameter requirements for an energy multiplication factor flatness of 0.01% are listed in Table 2.

Fig. 7 The energy multiplication factor change with the cavity frequency deviation

3.3 Fine tuning during operation

After the cold test, the coupling coefficient and unloaded quality factor are tuned to the proper values; during the operation of the RF pulse compressor they are fixed and cannot be tuned. There is only one parameter, the frequency deviation, that can be changed, and it is tuned by controlling the temperature of the cooling water. During operation, as the required flatness of the energy gain factor is less than 0.01%, the frequency of the cavity should be controlled within ±0.2 kHz. As the temperature expansion coefficient of the SLED cavity is about 106 kHz/°C, the water temperature should be controlled within ±0.02 °C.

Table 2 Parameter control range
Parameter                              Coarse tuning     Fine tuning
frequency deviation                    ±30 kHz           temperature ±0.02 °C (±2 kHz)
coupling coefficient deviation         ±0.25             —
unloaded quality factor difference     ±2×10^4           —

4 Conclusion

The RF pulse compressor, as a key technology for particle accelerators, has been widely studied in many accelerator laboratories, such as KEK, CERN, IHEP and SLAC, but only a few researchers have used an equivalent circuit model to study the behavior of the SLED and analyze its parameters. A detailed process for building a mathematical model is shown in this paper. The simulation results and the analysis of the parameter deviations of the cavities are also presented. In our modeling analysis, an energy gain factor flatness of 0.01% can be achieved when the temperature of the cooling water is controlled within ±0.02 °C, so that the maximum frequency detuning is kept within 2 kHz. The SLED model can be used for further study of the RF pulse compressor, and more specific measurements will be carried out in the future.

References
1 Feng C, Zhao Z T. Hard X-ray free-electron laser based on echo-enabled staged harmonic generation scheme. Chinese Sci Bull, 2010, 55: 221-227
2 FANG WenCheng, GU Qiang, et al. Design optimization of a C-band traveling-wave accelerating structure for a compact X-ray Free Electron Laser facility. Chinese Sci Bull, 2011, 56: 3420-3425
3 Farkas Z, Hogg H, Loew G, et al. SLED: A method of doubling SLAC's energy. SLAC Pubs and Reports, Menlo Park, CA, USA, 1974, SLAC-PUB-1453
4 GU Peng-Da. Research on New Type RF Pulse Compressor. IHEP Ph.D. Thesis, 1999 (in Chinese)
5 GENG Zhe-Qiao. Design and Construction of the Phasing System for BEPCII Linac. IHEP Ph.D. Thesis, 2007 (in Chinese)
6 David M. Pozar. Microwave Engineering. Third Edition