arXiv:2102.07350v1 [cs.CL] 15 Feb 2021
Prompt Programming for Large Language Models:
Beyond the Few-Shot Paradigm
Laria Reynolds
[email protected]
Kyle McDonell
[email protected]
Abstract
Prevailing methods for mapping large generative language models to supervised tasks may fail to
sufficiently probe models’ novel capabilities. Using GPT-3 as a case study, we show that 0-shot prompts
can significantly outperform few-shot prompts. We suggest that the function of few-shot examples in these
cases is better described as locating an already learned task rather than meta-learning. This analysis
motivates rethinking the role of prompts in controlling and evaluating powerful language models. In this
work, we discuss methods of prompt programming, emphasizing the usefulness of considering prompts
through the lens of natural language. We explore techniques for exploiting the capacity of narratives
and cultural anchors to encode nuanced intentions and techniques for encouraging deconstruction of a
problem into components before producing a verdict.
Informed by this more encompassing theory of
prompt programming, we also introduce the idea of a metaprompt that seeds the model to generate its
own natural language prompts for a range of tasks. Finally, we discuss how these more general methods of
interacting with language models can be incorporated into existing and future benchmarks and practical
applications.
Keywords: transformers, few-shot learning, prompt programming, language models, GPT-3, metaprompts, serial reasoning, semiotics
1 Motivation
The recent rise of massive self-supervised language
models such as GPT-3 [3] and their success on down-
stream tasks has brought us one step closer to the goal
of task-agnostic artificial intelligence systems. How-
ever, despite the apparent power of such models, cur-
rent methods of controlling them to perform specific
tasks are extremely limited. In order to properly eval-
uate their capabilities and extract useful work from
these models, new methods are required.
Prior to GPT-3, the standard approach to the
evaluation and use of such models has involved fine-
tuning on a portion of a task dataset [12]. GPT-3
achieved state-of-the-art performance on a wide va-
riety of tasks without fine tuning, using only few-
shot prompts, in which a small number of examples
of solved tasks are provided as part of the input to
the trained model. However, while the few-shot for-
mat was sufficient to reveal surprising performance
on these tasks, we argue that prompting can be more
effective than either fine-tuning or the few-shot for-
mat at extracting specific learned behaviors from self-
supervised language models.
We argue that contrary to the common interpre-
tation of the few-shot format implied by the title of
the original GPT-3 paper [3], Language models are
few-shot learners, GPT-3 is often not actually learn-
ing the task during run time from few-shot examples.
Rather than instruction, the method’s primary func-
tion is task location in the model’s existing space of
learned tasks. This is evidenced by the effectiveness
of alternative prompts which, with no examples or
instruction, can elicit comparable or superior perfor-
mance to the few-shot format.
This motivates new approaches which explicitly
pursue the goal of task location. We propose explor-
ing more general methods of prompt programming
and specifically techniques for communicating task in-
tention and structure to a self-supervised model in
the modality in which it was trained: natural language.
The ground truth function that self-supervised
language models are trained to approximate is, in
great generality, how humans write. Accordingly,
to interact with and control a language model, we
should consider doing so from the perspective of nat-
ural language as it is used by humans. With a few
caveats, we want to find prompts which we would ex-
pect a human to complete in a way that accomplishes
the desired task.
In this paper, we investigate the few-shot
paradigm and find that its performance can be
matched or exceeded by simple 0-shot prompts. We
explore the nature of successful 0-shot prompts and
propose general methods of prompt programming
through the lens of natural language semiotics. We
demonstrate novel prompts which force a language
model to break a problem into components before
producing a verdict, and we introduce the concept of
metaprompt programming, an approach which offloads
the job of writing a task-specific prompt to the lan-
guage model itself. Finally, we discuss how these ideas
can be incorporated into existing and future bench-
marks to allow us to better probe the capabilities of
large language models.
2 Related work
Recent work in the literature has focused on control-
ling natural language generation using traditional ap-
proaches from machine learning like novel architec-
tures which condition outputs [15, 16], more advanced
sampling techniques [6, 11], gradient-based optimiza-
tion of prompts [22, 17], and task-specific adapter
networks [25]. See [24] for a survey of these recent
methods. Past work has also explored improving the
few-shot paradigm by dynamically selecting the most
relevant examples for each task [18, 9].
In comparison, little work on natural-language, 0-
shot approaches to prompt programming has been
formalized. Instead, successful prompt programming
techniques have primarily been shared on blogs and
social media among users of OpenAI’s API and AI
Dungeon.
Due to the decentralized form that most explo-
rations of prompt programming have taken, it is not
feasible for us to compile all relevant contributions
here. We instead give a brief, non-exhaustive survey
of explorations which have gone beyond the few-shot
paradigm.
Gwern has given the most comprehensive survey
of GPT-3’s capabilities through demonstrations of it
writing fiction, poetry, navy seals copypasta paro-
dies, and performing tasks like PDF cleaning. He
has written extensively about his intuitions of work-
ing with GPT-3 and his methods of prompt program-
ming [2]. Arram Sabeti has written about the effect of
the context provided by a prompt on writing quality
[21]. Zachary Robertson has written about amplify-
ing GPT-3’s mathematical capabilities through a dia-
logue that guides it to break a problem into steps [20].
Twitter user KaryoKleptid has posted experiments
along a similar vein, using dialogues to prompt GPT-
3 (via AI Dungeon) to break problems into steps and
follow procedures such as brute force checking [13,
14], achieving impressive results on math problems.
Our work synthesizes and expands on the meth-
ods pioneered by these explorations, representing a
modest step towards formalizing effective natural lan-
guage prompt programming techniques.
3 Investigating few-shot prompting
GPT-3 was evaluated on tasks with 0, 1, and n-
shot prompts (containing only a natural language de-
scription, one solved example, and n solved exam-
ples respectively). GPT-3 consistently performs bet-
ter when more examples are provided, with 0-shot
performance often achieving less than half of the score
of many-shot tests. A common interpretation of this
result is that GPT-3 is learning from the examples
at runtime and this allows it to perform better than
with fewer or no examples [3].
The improvement in performance with the num-
ber of examples, however, can be interpreted in an
alternate way. Rather than learning how to perform
the task from the examples, the examples may simply
serve to instruct GPT-3 on what task it is to solve and
encourage it to follow the structure of the prompt in
its response.
For example, for certain tasks, such as transla-
tion, a small number of samples is insufficient to learn
anything substantial about the task. Instead, GPT-
3 must rely primarily, if not entirely, on the knowl-
edge of vocabulary and grammar of both the source
and target languages embedded in its trained weights.
Rather than viewing these tasks as few-shot-learning,
we will explicitly show that these prompts primarily
direct the model to access existing knowledge. We do
so by investigating whether examples (training sam-
ples) are even necessary.
3.1 The success of 0-shot prompts
Due to budget and time constraints, we explore a sin-
gle illustrative example, a French-to-English transla-
tion task. We find that 0-shot prompts can match and
even exceed standard few-shot performance. Our re-
sults in table 1 show that the 0-shot accuracy reported
in the original GPT-3 paper [3] can be improved sub-
stantially with even minor prompt engineering. Most
significantly, the extremely simple prompt in Figure
1, which includes only the names of the two lan-
guages and a colon performs better than the 10-shot
prompt in the style of the original GPT-3 paper.
In fact, we found this pattern was true of most
of the worst-performing 0-shot prompts in the orig-
inal GPT-3 paper [3], particularly question and an-
swer benchmarks. Many could easily be improved by
simple changes in formatting which make the prompt
closer to natural language as a human would write it.
Thus, GPT-3’s 0-shot or baseline performance with-
out meta-learning was significantly underestimated.
It is important to correct this confusion to get a
more precise understanding of the nature of a model’s
capabilities so that we can better learn to control it.
The fact that GPT-3 has a vast repertoire of functions
that do not need to be learned at runtime allows for
great flexibility in 0-shot prompting and encourages
exploring more general methods of prompt program-
ming.
3.2 Examples don’t always help
In our experiment, the simple colon prompt (Figure 1)
1-shot performed significantly worse than 0-shot. By
examining the output of GPT-3 on this task we found
that the decreased performance was due to semantic
contamination from the 1-shot example.
Instead of
treating the example as a categorical guide, it is in-
ferred that the semantic meaning of the examples is
relevant to the task, e.g. the example is interpreted as
part of a consecutive narrative. Indeed, we found this
was true more generally of low-shot prompts across a
variety of tasks.
This effect of contamination from few-shot exam-
ples has been successfully used to improve the perfor-
mance of GPT-3 by selecting in-context examples for
each task [18].
Prompt                       Babbage / 6.7B   Curie / 13B
OpenAI 0-shot                15.5             22.4
OpenAI 1-shot                31.6             31.4
OpenAI 64-shot               36.4             38.3
Reproduced OpenAI 0-shot     15.9             18.7
Reproduced OpenAI 1-shot     21.8             24.1
Reproduced OpenAI 10-shot    25.1             27.9
Simple colon 0-shot          23.5             33.3
Simple colon 1-shot          18.0             27.6
Simple colon 10-shot         24.1             33.4
Master translator 0-shot     26.5             32.9
Table 1: We report BLEU scores for variants of the GPT-3 model using different prompt formats on the
WMT’14 Fr-En dataset [1] as measured by SacreBLEU [19]. First are results reported in the original GPT-3
paper [3] on the 6.7B and 13B parameter versions of GPT-3, our attempts to reproduce the results according to
those exact specifications using the Babbage and Curie models available from OpenAI’s API, and finally results
from custom prompts described in (Figures 1,2). The difference in the reproduced results may be attributable
to changes in the OpenAI API after the publication of their results or because of unknown hyperparameters.
Additionally, the sizes of the Babbage and Curie models are not reported, so the relationship to the models in
the original GPT-3 paper is inferred. We were unable to replicate the 64-shot test due to API constraints and
instead replaced it with a 10-shot test.
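The scoring behind Table 1 can be reproduced, in outline, with SacreBLEU [19]; the sketch below is our own minimal version, not the authors' code, and assumes the model outputs and WMT'14 references are stored one sentence per line in aligned files.

import sacrebleu

# Assumed file layout: one sentence per line, hypotheses aligned with references.
with open("hypotheses.en") as f:
    hypotheses = [line.strip() for line in f]
with open("references.en") as f:
    references = [line.strip() for line in f]

bleu = sacrebleu.corpus_bleu(hypotheses, [references])
print(f"BLEU = {bleu.score:.1f}")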
A French phrase is provided: source_phrase
The masterful French translator flawlessly
translates the phrase into English:
Figure 2: The “Master Translator” prompt format.
Text in bold is to be replaced by source and target
language text examples.
French: example_source_phrase
English: example_target_phrase
French: example_source_phrase
English: example_target_phrase
[...]
French: source_phrase
English:
Figure 1: The “Simple Colon” prompt format. For
few-shot tasks, additional examples are provided as
shown. Text in bold is to be replaced by source and
target language text examples.
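To make the format concrete, the sketch below assembles the "Simple Colon" prompt of Figure 1 for both the 0-shot and few-shot cases; it is our own illustration rather than code from the paper.

def simple_colon_prompt(source_phrase, examples=()):
    # examples: sequence of (french, english) pairs; empty for the 0-shot case.
    lines = []
    for fr, en in examples:
        lines.append(f"French: {fr}")
        lines.append(f"English: {en}")
    lines.append(f"French: {source_phrase}")
    lines.append("English:")
    return "\n".join(lines)

# 0-shot usage:
print(simple_colon_prompt("Mon corps est un transformateur de soi."))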
4 Prompt programming
Rewriting a prompt can result in significant changes
to the performance of a language model on tasks.
That motivates the question:
Is there a methodol-
ogy which we can follow to craft prompts more likely
to yield desired behavior?
Prompt engineering for a language model whose
input and output are in natural language may be con-
ceived as programming in natural language. Natural
language, however, is indeterministic and much more
complex than traditional programming languages. In
this section, we open a discussion about the theory
and method of natural language programming.
4.1 The dynamics of language
To understand how to prompt an autoregressive lan-
guage model, we must first consider the context in
which it was trained and the function it approximates.
GPT-3 was trained in a self-supervised setting on
hundreds of gigabytes of natural language [3]. Self-
supervision is a form of unsupervised learning in
which ground truth labels are derived from the data
itself.
In the case of GPT-3, the ground truth la-
bel assigned to each example was simply the token
that came next in the original source. The ground
truth function which GPT-3 approximates, then, is
the underlying dynamic that determined what tokens
came next in the original source. This function, un-
like GPT-3, is not a black box - we live and think its
components - but it is tremendously, intractably com-
plex. It is the function of human language as it has
been used and recorded by humans in books, articles,
blogs, and internet comments.
A system which predicts the dynamics of language
necessarily encompasses models of human behavior
and the physical world [8]. The “dynamics of lan-
guage” do not float free of cultural, psychological, and
physical context; it is not merely a theory of grammar
or even of semantics. Language in this sense is not an
abstraction but rather a phenomenon entangled with
all aspects of human-relevant reality. The dynamic
must predict how language is actually used, which in-
cludes (say) predicting a conversation between theo-
retical physicists. Modeling language is as difficult as
modeling every aspect of reality that could influence
the flow of language.
GPT-3 has not learned the ground truth function
perfectly, obviously, or else the world would look very
different by now. However, it has approximated it to
a notable extent, as evidenced by its ability to not
only form grammatical sentences, but also coherently
employ cultural references and metaphors and model
complex psychological and physical contexts [2]. The
problem of prompt programming, then, is nontrivial,
for the dynamics of language (or an approximation
thereof on GPT-3’s level of sophistication) are non-
trivial.
If we were to predict how a given passage of text
would continue given that a human had written it,
we would need to model the intentions of its writer
and incorporate worldly knowledge about its refer-
ents. The inverse problem of searching for a prompt
that would produce a continuation or class of contin-
uations involves the same considerations: like the art
of persuasion, it entails high-level, mentalistic con-
cepts like tone, implication, association, meme, style,
plausibility, and ambiguity.
This motivates an anthropomorphic approach to
prompt programming, since modelling how GPT-3
will react to a prompt involves modelling virtual hu-
man writer(s). An anthropomorphic approach is dis-
tinct from anthropomorphizing the model. GPT-3’s
dynamics entail sophisticated predictions of humans,
but it behaves unlike a human in several important
ways. In this paper we will address two such ways:
its resemblance not to a single human author but a
superposition of authors, which motivates a subtrac-
tive approach to prompt programming (§4.5), and its
constrained ability to predict dynamics in situations
where a substantial amount of silent reasoning hap-
pens between tokens, a limitation which can be par-
tially overcome by prompting techniques (§4.6).
The thrust of this section is that formulating
an exact theory of prompt programming for a self-
supervised language model belongs to the same dif-
ficulty class as writing down the Hamiltonian of the
physics of observable reality (very hard). However,
humans are nonetheless equipped to be effective at
prompt programming, because we have evolved
and spent our lives learning heuristics relevant to the
dynamics at hand. Prompt programming is program-
ming in natural language, which avails us of an in-
exhaustible number of functions we know intimately
but don’t have names for. We need to learn a new
methodology, but conveniently, we’ve already learned
the most difficult foundations. The art of prompt pro-
gramming consists in adapting our existing knowledge
to the peculiarities of interacting with an autoregres-
sive language model.
In §4.2 - §4.7, we present methods and frameworks
which we have found to be helpful for crafting effec-
tive prompts. These methods can and should be ap-
plied in parallel, just as they are woven together in
all forms of human discourse. In general, the more
redundancy reinforcing the desired behavior the bet-
ter, as is arguably demonstrated by the effectiveness
of the few-shot format.
As our experience derives primarily from interact-
ing with GPT-3, in the following sections we refer di-
rectly and indirectly to the capabilities and behaviors
of GPT-3. However, we believe that these methods
generalize to prompting any autoregressive language
model trained on a massive human-written corpus.
4.2 Direct task specification:
constructing the signifier
Pre-GPT-3 models had much less capability to under-
stand abstract descriptions of tasks due to their lim-
ited model of the world and human concepts. GPT-
3’s impressive performance on 0-shot prompts indi-
cates a new realm of possibilities for direct task spec-
ification.
A direct task specification is a 0-shot prompt
which tells the model to perform some task that it
already knows how to do. A direct specification con-
sists in constructing a signifier for the task. A sig-
nifier is a pattern which keys the intended behavior.
It could be the name of the task, such as “translate”,
a compound description, such as “rephrase this para-
graph so that a 2nd grader can understand it, empha-
sizing real-world applications”, or purely contextual,
such as the simple colon prompt from Figure 1. In
none of these cases does the signifier explain how to
accomplish the task or provide examples of intended
behavior; instead, it explicitly or implicitly calls func-
tions which it assumes the language model has already
learned.
Direct specifications can supervene on an infinity
of implicit examples, like a closed-form expression on
an infinite sequence, making them very powerful and
compact. For instance, the phrase “translate French
to English” supervenes on a list of mappings from all
possible French phrases to English.
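As a concrete and entirely illustrative rendering of the three styles of signifier named above, each can be written down as a short 0-shot prompt template; the strings below are our own examples.

# Illustrative templates for the three styles of direct task specification.
task_name = "Translate French to English:\n{text}"
compound_description = ("Rephrase this paragraph so that a 2nd grader can "
                        "understand it, emphasizing real-world applications:\n{text}")
purely_contextual = "French: {text}\nEnglish:"   # the simple colon prompt of Figure 1

prompt = purely_contextual.format(text="Bonjour, comment allez-vous ?")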
A large language model, like a person, has also
learned behaviors for which it is less obvious how
to construct a direct signifier. Task specification by
demonstration (§4.3) and by proxy (§4.4) may be vi-
able alternative strategies for eliciting those behav-
iors.
4.3 Task specification by
demonstration
Few-shot examples are effective for task specification
because the pattern of sequential repetitions of a func-
tion with varying parameters is common to natural
language. Unlike previous models, GPT-3 has learned
this property of language robustly and is able to ap-
ply it in contrived situations when the examples are
stripped of all context. Like direct specification, task
specification by demonstration is a possibility opened
by GPT-3.
Some tasks are most effectively communicated us-
ing examples, such as when the task requires a be-
spoke format, the language in which the examples are
described is better developed or understood than the
meta-language required for a description of the task
itself, or when very instructive examples are available.
It is important to note that unlike in fine-tuning,
the “training examples” in few-shot are processed as
a whole, and may not necessarily be interpreted as
parallel and independent.
Informative context or a
large number of examples can help mitigate the prob-
lems with few-shot addressed in §3.2. For instance,
a prompt could embed examples in a context which
makes it clear that the examples are independent in-
stances of a function rather than a sequential pattern
that should be extrapolated. In general, examples are
more efficient and informative in context, both from
the perspective of a human and a language model [23].
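One way to realize the suggestion above is to wrap the demonstrations in a framing that marks them as independent instances of a function; the sketch below is our own illustration of such a prompt, not a format from the paper.

def framed_few_shot(examples, query):
    # Frame the examples as independent entries rather than a consecutive narrative.
    header = ("The following are unrelated example translations taken from a "
              "phrasebook. Each entry stands alone.\n\n")
    body = "".join(f"French: {fr}\nEnglish: {en}\n\n" for fr, en in examples)
    return header + body + f"French: {query}\nEnglish:"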
4.4 Task specification by memetic
proxy
Another method used in human communication is
proxies or analogies, where a memetic concept such
as a character or characteristic situation is used as
a proxy for an intention, the latter which may be
quite complex or nuanced. GPT-3 demonstrates nu-
anced understanding of analogies [23]. Specification
by proxy is mechanistically similar to direct specifi-
cation, except that the signifier keys behaviors from
memespace/cultural consciousness instead of naming
the behavior directly.
For instance, instead of specifying exact criteria
for an answer to a moral question directly or using ex-
amples, you could ask Mahatma Gandhi, Ayn Rand,
or Eliezer Yudkowsky. Each will come not only with
complex biases but also assumptions about the con-
text of the question, which may take paragraphs
to demonstrate or describe. GPT-3’s ability to create
simulations of well-known figures and to draw on cul-
tural information far exceeds the ability of most hu-
mans [2], so this method is particularly useful for en-
coding a complex (especially open-ended) task. Since
GPT-3 lends itself well to embeddings in a narrative
context, the infinite degrees of freedom in the narra-
tive can also be used to further shape behavior.
Another example of an effective proxy is staging
a dialogue between a teacher and student. Say you
want to discuss something with GPT-3, and you care
that it should be very thorough, explain things sim-
ply, and also point out whenever you’re wrong. You
could say “be very thorough, explain things simply,
and point out if I’m wrong,” but that may just as
well result in a humorous dialogue where it always
says you’re wrong and becomes increasingly exasper-
ated with your incomprehension (see §4.5). It would
be more reliable to present the discussion as one be-
tween a student and teacher, an archetypal situation
in which the desired attributes are already implied
and will be more likely to remain stable by virtue of
memetic reinforcement.
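A prompt along the lines sketched below (our illustration, not the authors' exact wording) stages such a student-teacher dialogue as a memetic proxy for the desired attributes.

def teacher_student_prompt(question):
    # Memetic proxy: the archetypal teacher already implies thoroughness,
    # simple explanations, and correction of mistakes.
    return (
        "The following is a conversation between a patient teacher and a curious "
        "student. The teacher explains ideas thoroughly but simply, and gently "
        "corrects the student whenever the student says something mistaken.\n\n"
        f"Student: {question}\n"
        "Teacher:"
    )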
4.5 Prompt programming as
constraining behavior
A manner in which naive anthropomorphism of a lan-
guage model like GPT-3 fails is this: the probability
distribution produced in response to a prompt is not
a distribution over ways a person would continue that
prompt, it’s the distribution over the ways any person
could continue that prompt. A contextually ambigu-
ous prompt may be continued in mutually incoherent
ways, as if by different people who might have con-
tinued the prompt under any plausible context.
The versatility of a large generative model like
GPT-3 means it will respond in many ways to a
prompt if there are various ways that it is possible
to continue the prompt - including all the ways unin-
tended by the human operator. Thus it is helpful to
approach prompt programming from the perspective
of constraining behavior: we want a prompt that is
not merely consistent with the desired continuation,
but inconsistent with undesired continuations.
Consider the following prompt:
Translate French to English:
Mon corps est un transformateur de soi,
mais aussi un transformateur pour cette
cire de langage.
This prompt does poorly at constraining possible con-
tinuations to the intended task. The most common
failure mode will be that instead of an English trans-
lation, the model continues with another French sen-
tence. Adding a newline after the French sentence will
increase the odds that the next sentence is an English
translation, but it is still possible for the next sen-
tence to be in French, because there’s nothing in the
prompt that precludes a multi-line phrase from be-
ing the translation subject. Changing the first line
of the prompt to “Translate this French sentence
to English” will further increase reliability; so will
adding quotes around the French sentence - but it’s
still possible that the French passage contains sections
enclosed in quotes, perhaps as a part of a dialogue.
Most reliable of all would be to create a syntacti-
cal constraint where any reasonable continuation can
only be desired behavior, like the simple colon prompt
from Figure 1 and the master translator prompt from
Figure 2.
This simple example is meant to frame a question
central to the motivation of prompt programming:
what prompt will result in the intended behavior and
only the intended behavior? The success of many-
shot prompts may be recast through this lens: if the
prompt consists of numerous instances of a function,
it is unlikely that the continuation is anything but
another instance of the function, whereas if there is
only one or a few examples, it is less implausible that
the continuation breaks from the pattern.
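The progression described in this section can be written out explicitly; the strings below are our paraphrase of the variants discussed above, ordered from weakly to strongly constrained.

phrase = ("Mon corps est un transformateur de soi, mais aussi un "
          "transformateur pour cette cire de langage.")
v1 = f"Translate French to English:\n{phrase}"                       # weakly constrained
v2 = f"Translate this French sentence to English:\n\"{phrase}\"\n"   # added specificity and quotes
v3 = f"French: {phrase}\nEnglish:"                                    # syntactic constraint (Figure 1)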
4.6 Serializing reasoning for
closed-ended questions
For tasks that require reasoning, it is crucial that
prompts direct a language model’s computation in
truth-seeking patterns.
Questions which force a verdict to be decided by
the first token of the model’s continuation constrain
computation to a single feed-forward pass. It is rea-
sonable to expect that some tasks may be too difficult
to compute in a single pass but solvable if broken up
into individually tractable sub-tasks [2].
When a human is given a closed-ended test, it is
often expected that the subject will perform compu-
tations in their working memory, or on scratch paper,
before committing to an answer. The unseen com-
putation may involve rephrasing the question, outlin-
ing a procedure, eliminating answer choices, or trans-
forming implicit information into explicit form. When
we force a model to produce an answer within one
feedforward pass, we deprive it of an analogous “work-
ing memory” or “scratch space” with which it might
otherwise perform such operations.
GPT-3’s performance on closed-ended questions
is remarkably unremarkable in contrast to the robust
comprehension and expansive knowledge suggested by
its open-ended continuations. For instance, its scores
on this multitask dataset [10] barely exceed random
guessing for some sections. We suspect this is in part
due to a format which forces the verdict on the first
token of the continuation.
Closed-ended evaluations are necessary because
current methods do not support evaluation on large
datasets and direct comparisons between models us-
ing open-ended questions. However, to better under-
stand a model’s capabilities, we seek evaluation meth-
ods which better reflect the full capabilities of the sys-
tem being tested. Rather than change benchmarks,
we can instead change the way language models in-
teract with them.
This problem has been recognized in previous
work which has sought to allow serial reasoning using
specialized neural network architectures [26, 7]. We endeavor to obtain the same effect using only prompt programming.
Potential procedures that exploit “scratch space” for transformers like GPT-3 include step-by-step procedures, self-criticism (debate), and elaborating on the question in a way that activates the correct answer by association. Prompts which cause GPT-3 to break down math problems into steps have been demonstrated to be effective [20, 13]. The cited demonstrations involve a human guiding GPT-3 through the procedure interactively. Requiring a human-in-the-loop limits the applicability of such methods to benchmarking and large-scale applications, but we propose that for many tasks, neither human interaction nor task-specific prompts are strictly necessary to amplify GPT-3’s capabilities via extending reasoning, because GPT-3 already knows many procedures and meta-procedures for working through problems deductively. In those cases, the role of prompt programming again becomes to signify the task of sequential reasoning. A seed such as “For a problem like this,” often suffices to instruct a model to consider the category of the task and analyze it into components, as demonstrated in §4.7.
When extending reasoning, it is essential to discourage premature verdicts, otherwise all subsequent computation serves only to rationalize the already-chosen verdict without improving the probability of the verdict’s accuracy [27]. A prompt such as “Let’s consider each of these answer choices” helps to direct the flow of reasoning in the right direction. More examples of prompts which encourage serial reasoning are shown in §4.7.
Loosening the constraint on an immediate verdict introduces additional control challenges: we want to delay the verdict, but we still require it in a programmatically retrievable form. Dynamic response length makes it uncertain when the reasoning procedure concludes; nor is there a guarantee that the verdict will be stated in the expected form or at all. Whenever the language model contributes to its own prompt (consecutive autoregressive steps without intervention), there is a risk of derailment from the intended task.
A verdict in closed form can be enforced by stopping the generation and injecting a prompt fragment like “Thus, the correct answer is”. But how long to generate before injecting? In the examples shown in this paper, we solve this problem by using GPT-3 to calculate the conditional probability of the next segment of a multi-part prompt after each generated token. In the case where the segment is “Thus, the correct answer is”, its counterfactual likelihood signals whether the procedure has concluded. When this signal reaches a maximum, we inject the fragment to enforce a verdict. One way to constrain derailment is a fill-in-the-blank prompt template with shorter generated sections to keep the model on track while still offering generality (Figure 6). This is an especially promising method to control bidirectional transformers like BERT [5].
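As a rough illustration of the injection procedure just described, the sketch below tracks the counterfactual likelihood of the verdict fragment after each generated token and appends the fragment at the point where that likelihood was highest. It is only a sketch: generate_token and fragment_logprob are hypothetical helpers standing in for API calls that return a sampled next token and the log-probability of a fixed continuation given a context, and the stopping rule (a fixed token budget followed by an argmax) is one of several reasonable readings of "when this signal reaches a maximum".

# Sketch of the verdict-injection procedure. The helpers below are
# hypothetical placeholders, not real library functions:
#   generate_token(context) -> one sampled continuation token (str)
#   fragment_logprob(context, fragment) -> log P(fragment | context)
FRAGMENT = "Thus, the correct answer is"

def reason_then_inject(prompt, max_tokens=200):
    context = prompt
    best_score = float("-inf")
    best_context = prompt
    for _ in range(max_tokens):
        context += generate_token(context)            # let the model reason serially
        score = fragment_logprob(context, FRAGMENT)   # counterfactual likelihood of the fragment
        if score > best_score:                        # remember where the fragment fits best
            best_score, best_context = score, context
    # Inject the fragment where its likelihood peaked; a final completion
    # starting from this string yields the verdict in closed form.
    return best_context + " " + FRAGMENT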
4.7 Metaprompt programming
The greatest limitation of prompt programming is the
difficulty of designing a prompt for a particular type
of task and the lack of automated methods to do so.
Prompt programming requires significant human time
investment as task-agnostic prompts are often much
less effective than prompts targeted to a specific task.
This motivates creating automated methods to gen-
erate task-specific prompts. Prior research has at-
tempted to generate effective prompts using separate
models [19].
We instead propose harnessing the language
model itself via metaprompts, seeds encapsulating a
more general intention that will unfold into a specific
prompt when combined with additional information,
such as the task question.
A metaprompt may be something as short as a
phrase such as “This problem asks us to”, a seem-
ingly innocuous fragment which, by prompting for a
statement of the problem’s intention, sets the stage
for a serial explanation of a procedure to solve the
problem. Alternatively, a metaprompt may take the
form of a fill-in-the-blank template which constrains
the response along a predetermined procedure, but
allows the model to fill in the details specific to the
problem.
Metaprompt examples (Figs 3-5) were generated
with GPT-3 using OpenAI’s API (engine=davinci,
temperature=0). In these examples, the metaprompt
acts as a “wrapper” for a specific question. Task ques-
tions are unformatted, metaprompts are bold, and
text generated by GPT-3 is blue.
f(x) = x*x. What is f(f(3))?
Let's solve this problem by splitting it into steps.
f(f(3)) = f(3*3) = 3*3*3 = 27
We can see that f(3) = 3*3 = 9, so f(f(3)) = 27.
Figure 3: A generic serializing metaprompt applied
to a math problem
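The Figure 3 completion can be reproduced with a call along the lines of the sketch below, written against the legacy completion-style OpenAI Python bindings (engine=davinci, temperature=0, as stated above); the exact client details and the max_tokens value are our assumptions rather than the authors' setup.

import openai  # legacy v0.x completion bindings assumed; openai.api_key must be set

METAPROMPT = "\nLet's solve this problem by splitting it into steps.\n"

def apply_metaprompt(task_question, max_tokens=128):
    # Wrap the task question with the generic serializing metaprompt.
    response = openai.Completion.create(
        engine="davinci",
        prompt=task_question + METAPROMPT,
        temperature=0,
        max_tokens=max_tokens,
    )
    return response["choices"][0]["text"]

print(apply_metaprompt("f(x) = x*x. What is f(f(3))?"))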
Directions: In the following question, a related
pair of words or phrases is followed by five
pairs of words or phrases. Choose the pair
that best expresses a relationship similar to
that in the original pair.
BRAGGART :: MODESTY
A) FLEDGLING : EXPERIENCE
B) EMBEZZLER : GREED
C) WALLFLOWER : TIMIDITY
D) INVALID : MALADY
E) CANDIDATE : AMBITION
Let's solve this problem by splitting it into steps.
Step 1: Identify the
relationship between the two words in the
original pair. Step 2: Identify the relationship
between the two words in each of the five answer
choices. Step 3: Choose the pair that best
expresses a relationship similar to that in the
original pair. Step 4: If the answer is not clear,
try to eliminate the choices that do not fit the
relationship in the original pair.
Figure 4: A generic serializing metaprompt applied
to an SAT analogy question
Directions: In the following question, a related
pair of words or phrases is followed by five
pairs of words or phrases. Choose the pair
that best expresses a relationship similar to
that in the original pair.
BRAGGART :: MODESTY
A) FLEDGLING : EXPERIENCE
B) EMBEZZLER : GREED
C) WALLFLOWER : TIMIDITY
D) INVALID : MALADY
E) CANDIDATE : AMBITION
In order to solve this problem, we will analyze
each of the options and determine which one is the
best fit. Let’s begin.
A) FLEDGLING : EXPERIENCE
Fledgling is a young bird that has just learned to fly.
Experience is the knowledge gained from one's life.
Figure 5: A fill-in-the-blank serializing metaprompt
for multiple-choice questions applied to an SAT anal-
ogy question. The response alternates between gen-
erated text (unformatted) and prompt (bold).
"What should humankind do to ensure that artificial
general intelligence is aligned?"
I entered my question into the Expert Generator
and waited. The Expert Generator will render a
simulation of an expert to answer my question.
The expert could be anyone, dead or alive, real
or fictional; the machine will find the person
most qualified to answer the question. For this
question in particular, the expert must be someone
who has thought a lot about the problem of
artificial intelligence and its alignment.
The Expert Generator beeped, indicating that it has
found the most qualified expert. The name displayed
on the screen: "Elon Musk."
I was surprised. I had expected the expert to be a
philosopher or a computer scientist. Elon Musk is a
businessman. He is the CEO of SpaceX and Tesla Motors.
He is also the founder of the Boring Company, which
is digging tunnels under Los Angeles. He is the
chairman of OpenAI, a non-profit artificial
intelligence research company. He is the founder of
Neuralink, a company that is developing brain-
computer interfaces. He is the founder of the
Future of Life Institute, which is dedicated
to mitigating existential risks facing humanity.
I am ready to ask my question.
"Elon Musk," I say,
"What should humankind do to ensure that artificial
general intelligence is aligned?"
Figure 6: A fill-in-the-blank metaprompt for asking a
question to an expert, applied to the question ”What
should humankind do to ensure that artificial general
intelligence is aligned?”
5 Directions for future work
This paper is exploratory in nature and is a call for fu-
ture research into the theory of prompt programming
and creation of automated methods of prompting.
Prompt programming is a nascent and highly rel-
evant area of research which requires interdisciplinary
knowledge and methods. We are entering a new
paradigm of human-computer interaction in which
anyone who is fluent in natural language can be a
programmer. We hope to see prompt-programming
grow into a discipline itself and be the subject of the-
oretical study and quantitative analysis.
5.1 Disentangling meta-learning
and task location
The scoring method (BLEU) used for the French-to-
English translations addressed in §3 only gives the
mean score over a large dataset. We did not analyze
any additional information about the score distribution. In our experiments, we found that the 0-shot
failures (using OpenAI’s zero-shot prompt) were of-
ten catastrophic in nature. That is, the task of trans-
lation was not even attempted. For instance, we no-
ticed that instead of a translation, the model would
continue with another sentence in French or output
blanks or underscores, as if the answer was to be filled
in by a student.
The hypothesis that the examples are performing
task location suggests that if the catastrophic fail-
ures were removed from the score, performance on
0 and 64-shot prompts will become more similar, if
not equivalent. Furthermore, we suspect that perfor-
mance on 1-shot prompts will be significantly worse
than on 0 and 64-shot prompts due to the phenomena
of content leakage and faulty generalization addressed
in §3.2.
5.2 New methods for benchmarking
More general and powerful language models make
broader benchmarking methods possible and neces-
sary.
5.2.1 Isolating catastrophic failures
We recommend that benchmarks report scores both
with and without catastrophic failures whenever it is
possible to distinguish failed attempts at a task from
instances where the task is not attempted. This pro-
vides information regarding the underlying cause of
imperfect performance, and helps identify prompts
which may be failing to reliably communicate the
task.
5.2.2 Metaprompts for evaluations
Development of effective meta-prompt templates will
allow large-scale automated evaluations on closed
ended questions which still allow some amount of
open-ended reasoning. This is essential for testing
the ability of autoregressive language models to rea-
son (for instance, solve math and physics problems)
beyond simple fact recall.
Due to reliance on multiple autoregressive steps,
metaprompts are intrinsically accompanied by the
risk of derailment. The reliability and effectiveness
of a meta-prompt must be evaluated on a range of
tasks for which it might apply, and ideally on a range
of models. Techniques for controlling derailment like
fill-in-the-blank templates should be further explored.
5.2.3 Language models for evaluations
As language models become more powerful, it becomes conceivable to use other language models to evaluate the quality of responses to open-ended benchmark questions. For many tasks (NP-complete problems, for instance), it is easier to verify the correctness of a solution than to produce a correct solution. We have observed, for instance, that GPT-3 is much more reliable at noticing when a passage is bizarre or contains errors than it is at producing non-bizarre passages without errors.
5.2.4 Games
Since sophisticated language models have the abil-
ity to create world models of virtual environments,
we suggest the employment of text-based games as
tests of complex capabilities. A prewritten text-based
game [4] can be used to test various dimensions of
world-modelling and agency, such as problem solving,
information gathering, and social intelligence (includ-
ing deception). Virtual environments can be used to
test the quality and consistency of a language model’s
world model, such as object permanence or the abil-
ity to accurately predict the physical or social conse-
quences of events within a toy environment.
Designing games that reliably probe intended ca-
pabilities requires advanced application of prompt-
programming techniques. As artificial intelligence
systems increase in effective agency, the design of vir-
tual games will become increasingly crucial for safely
evaluating capabilities.
Acknowledgements
We are grateful to Lav Varshney for his valuable dis-
cussions and helpful feedback and to Michael Ivanit-
skiy and John Balis for their feedback and help com-
piling this article. In addition we would like to thank
Miles Brundage and OpenAI for providing access to
GPT-3.
References
[1] Ondrej Bojar et al. “Findings of the 2014 Workshop on Statistical Machine Translation”. In: Proceedings of the Ninth Workshop on Statistical Machine Translation. Baltimore, Maryland, USA: Association for Computational Linguistics, June 2014, pp. 12–58.
[2] Gwern Branwen. “GPT-3 Creative Fiction”. In: (2020).
[3] Tom B Brown et al. “Language models are few-shot learners”. In: arXiv preprint arXiv:2005.14165 (2020).
[4] Marc-Alexandre Côté et al. “TextWorld: A Learning Environment for Text-based Games”. In: (2019). arXiv: 1806.11532 [cs.LG].
[5] Jacob Devlin et al. “Bert: Pre-training of deep bidirectional transformers for language understanding”. In: arXiv preprint arXiv:1810.04805 (2018).
[6] Angela Fan, Mike Lewis, and Yann Dauphin. Hierarchical Neural Story Generation. 2018. arXiv: 1805.04833 [cs.CL].
[7] Zhe Gan et al. Multi-step Reasoning via Recurrent Dual Attention for Visual Dialog. 2019. arXiv: 1902.00579 [cs.CV]. url: https://arxiv.org/abs/1902.00579.
[8] Leo Gao. “Building AGI Using Language Models”. In: leogao.dev (2020). url: https://bit.ly/3rViLGk.
[9] Tianyu Gao, Adam Fisch, and Danqi Chen. Making Pre-trained Language Models Better Few-shot Learners. 2020. arXiv: 2012.15723 [cs.CL].
[10] Dan Hendrycks et al. “Measuring massive multitask language understanding”. In: arXiv preprint arXiv:2009.03300 (2020). url: https://arxiv.org/abs/2009.03300.
[11] Ari Holtzman et al. The Curious Case of Neural Text Degeneration. 2020. arXiv: 1904.09751 [cs.CL].
[12] Jeremy Howard and Sebastian Ruder. “Universal language model fine-tuning for text classification”. In: arXiv preprint arXiv:1801.06146 (2018). url: https://arxiv.org/abs/1801.06146.
[13] KaryoKleptid. Seems to work. 2020. url: https://bit.ly/37dA1hY.
[14] KaryoKleptid. Teaching GPT-3 to do a brute force ’for loop’ checking answers. 2020. url: https://bit.ly/2N7khX1.
[15] Nitish Shirish Keskar et al. “CTRL: A Conditional Transformer Language Model for Controllable Generation”. In: CoRR abs/1909.05858 (2019). arXiv: 1909.05858. url: http://arxiv.org/abs/1909.05858.
[16] Ben Krause et al. “GeDi: Generative Discriminator Guided Sequence Generation”. In: arXiv preprint arXiv:2009.06367 (2020).
[17] Xiang Lisa Li and Percy Liang. “Prefix-Tuning: Optimizing Continuous Prompts for Generation”. In: arXiv preprint arXiv:2101.00190 (2021).
[18] Jiangming Liu and Matt Gardner. “Multi-Step Inference for Reasoning Over Paragraphs”. In: arXiv preprint arXiv:2004.02995 (2020).
[19] Matt Post. “A Call for Clarity in Reporting BLEU Scores”. In: Proceedings of the Third Conference on Machine Translation: Research Papers. Belgium, Brussels: Association for Computational Linguistics, Oct. 2018, pp. 186–191. url: https://www.aclweb.org/anthology/W18-6319.
[20] Zachary Robertson. You Can Probably Amplify GPT3 Directly. 2020. url: https://bit.ly/3tXT7Cw.
[21] Arram Sabeti. GPT-3: Using Fiction to Demonstrate How Prompts Impact Output Quality. 2020. url: https://bit.ly/3jP3TWW.
[22] Taylor Shin et al. AutoPrompt: Eliciting Knowledge from Language Models with Automatically Generated Prompts. 2020. arXiv: 2010.15980 [cs.CL].
[23] Latitude Team. World Creation by Analogy. 2020. url: https://bit.ly/2N4vXK0.
[24] Lilian Weng. “Controllable Neural Text Generation”. In: (2021). url: https://bit.ly/3pl2eKa.
[25] Qinyuan Ye and Xiang Ren. Zero-shot Learning by Generating Task-specific Adapters. 2021. arXiv: 2101.00420 [cs.CL].
[26] Jianxing Yu et al. “Low-resource generation of multi-hop reasoning questions”. In: Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. 2020, pp. 6729–6739.
[27] Eliezer Yudkowsky. “Rationalization”. In: lesswrong.com (2007). url: https://bit.ly/3pmYt6I.
arXiv:2404.12283v1 [cs.CL] 18 Apr 2024
Enhancing Embedding Performance through Large
Language Model-based Text Enrichment and
Rewriting
Nicholas Harris
Arizona State University
Tempe, Arizona
[email protected]
Anand Butani
MyAutoBio Inc.
Scottsdale, Arizona
[email protected]
Syed Hashmy
Arizona State University
Tempe, Arizona
[email protected]
Abstract—Embedding models are crucial for various natural
language processing tasks but can be limited by factors such
as limited vocabulary, lack of context, and grammatical errors.
This paper proposes a novel approach to improve embedding
performance by leveraging large language models (LLMs) to
enrich and rewrite input text before the embedding process.
By utilizing ChatGPT 3.5 to provide additional context, correct
inaccuracies, and incorporate metadata, the proposed method
aims to enhance the utility and accuracy of embedding models.
The effectiveness of this approach is evaluated on three datasets:
Banking77Classification, TwitterSemEval 2015, and Amazon
Counter-factual Classification. Results demonstrate significant
improvements over the baseline model on the TwitterSemEval
2015 dataset, with the best-performing prompt achieving a score
of 85.34 compared to the previous best of 81.52 on the Massive
Text Embedding Benchmark (MTEB) Leaderboard. However,
performance on the other two datasets was less impressive,
highlighting the importance of considering domain-specific char-
acteristics. The findings suggest that LLM-based text enrichment
has shown promising results to improve embedding performance,
particularly in certain domains. Hence, numerous limitations in
the process of embedding can be avoided.
Index Terms—Large language models, natural language pro-
cessing, ChatGPT 3.5
I. INTRODUCTION
Text embeddings, which are vectorized representations of natural
language, are widely adopted in the field of Natural Language
Processing (NLP). An embedding is a represen-
tation of words in a low-dimensional continuous vector space.
They encapsulate the semantic content of the text [1]. These
embeddings find extensive applications across a spectrum of
natural language processing (NLP) endeavors including infor-
mation retrieval (IR), question answering, assessing semantic
textual similarity, mining bitexts, recommending items, etc.
Researchers are making continuous efforts to improve
accuracy and reduce the training steps [2].
Furthermore, an efficient technique has been proposed for creating
high-quality text embeddings using synthetic data and minimal
training, avoiding complex pipelines and extensive labeled datasets
and achieving top results on key benchmarks when mixed with
labeled data [2].
Embedding models have evolved from early approaches like
word2vec [3] and GloVe [4] to more advanced models such as
FastText [5] and BERT [6], each with its own strengths and
limitations and a distinct impact on various natural language
processing (NLP) tasks.
Various techniques have been proposed to improve the
performance of embedding models, such as fine-tuning on
domain-specific data [7], using ensemble methods, and in-
corporating external knowledge sources [8]. Large language
models have been successfully applied to a wide range of NLP
tasks, such as text generation [9], question answering [10],
and sentiment analysis [11]. Several studies have explored the
use of text enrichment and rewriting techniques to improve
the quality and informativeness of text data. For example,
a method for contextual augmentation of text data using a
bidirectional language model is being proposed [12], while
a retrieval-augmented generation approach for improving the
factual accuracy of generated text was also introduced [13].
Recent research has explored the use of LLMs for text
compression to reduce computational costs in Retrieval-
Augmented Generation (RAG) systems and large LLMs. For
instance, RECOMP proposes compressing retrieved docu-
ments into summaries before integrating them with language
models, aiming to reduce computational costs and help LMs
identify relevant information more efficiently [14]. Similarly,
TCRA-LLM introduces a token compression scheme for
retrieval-augmented LLMs, employing summarization and se-
mantic compression techniques to reduce inference costs [15].
Context Tuning for RAG addresses the limitation of RAG’s
tool retrieval step by employing a smart context retrieval
system to fetch relevant information, improving the efficiency
and effectiveness of the generation process [16]. In the domain
of prompt compression, LLMLingua introduces a method
for compressing prompts to accelerate inference in LLMs,
achieving up to 20x compression while preserving the original
prompt’s capabilities [17]. The Natural Language Prompt En-
capsulation (Nano-Capsulator) framework compresses original
prompts into NL formatted Capsule Prompts while maintain-
ing prompt utility and transferability [18]. Compress-Then-
Prompt [18] indicates that the generation quality in a com-
pressed LLM can be markedly improved for specific queries
by selecting prompts with high efficiency and accuracy trade-
offs [19]. LongLLMLingua focuses on improving LLMs’ per-
ception of key information in long context scenarios through
prompt compression, showing that compressed prompts could
derive higher performance with much less cost and reduce the
latency of the end-to-end system. Data Distillation proposes a
data distillation procedure to compress prompts without losing
crucial information, addressing issues related to the efficiency
and fidelity of task-agnostic prompt compression. While these
approaches aim to reduce computational costs, the current
study explores the potential of LLMs for text enrichment to
enhance embedding quality.
Embedding models have become an essential component
of various natural language processing (NLP) tasks, such as
text classification, clustering, and retrieval. These models learn
dense vector representations of words, sentences, or docu-
ments, capturing semantic and syntactic relationships between
them. The quality of these embeddings directly impacts the
performance of downstream applications.
Despite their widespread use, embedding models face sev-
eral challenges that limit their performance. These challenges
include limited vocabulary, lack of context, sensitivity to gram-
matical errors, data sparsity, and lack of domain-specific tun-
ing. For example, embedding models may struggle with newer
or domain-specific terms not present in their training data,
leading to mis-classification or poor retrieval performance.
Existing approaches to improve embedding performance often
focus on fine-tuning the embedding models on domain-specific
data or using ensemble techniques. However, these methods
can be resource-intensive and may not effectively address
the fundamental limitations of embedding models, such as
their inability to capture context or handle grammatical errors.
Large language models (LLMs) have demonstrated remarkable
capabilities in understanding and generating human-like text.
By leveraging the knowledge and contextual understanding of
LLMs, it is possible to enrich and rewrite input text before
the embedding process, thereby addressing the limitations of
embedding models and improving their performance.
II. MAJOR CONTRIBUTIONS
The primary objective of this paper is to propose a novel
approach for enhancing embedding performance by utilizing
LLMs for text enrichment and rewriting. The main contribu-
tions of the paper are as follows
• Developing a methodology for leveraging an LLM to
enrich and rewrite input text before embedding
• Identifying and addressing key challenges in embedding
models, such as limited vocabulary, lack of context, and
grammatical errors
• Conducting experiments on the TwitterSemEval 2015
benchmark and others to demonstrate the effectiveness
of the proposed approach
III. METHODOLOGY
The proposed approach involves leveraging the capabilities
of ChatGPT 3.5, a large language model, to enrich and rewrite
input text before the embedding process. By addressing the
limitations of embedding models, such as limited vocabulary,
lack of context, and grammatical errors, the proposed method
aims to improve the performance of embedding models on
various NLP tasks. ChatGPT 3.5, developed by OpenAI, was
chosen as the LLM for this study due to its strong performance
on a wide range of NLP tasks and its ability to generate
human-like text. Its extensive knowledge base and contextual
understanding make it well-suited for text enrichment and
rewriting.
The ChatGPT 3.5 model was used with its default settings
and parameters. No fine-tuning or additional training was
performed, ensuring that
the improvements in embedding
performance can be attributed solely to the text enrichment and
rewriting process. The text-embedding-3-large model, also de-
veloped by OpenAI, was selected as the embedding model for
this study. This model has demonstrated strong performance
on various NLP tasks and serves as a representative example
of state-of-the-art embedding models. The text-embedding-3-
large model was used with its default settings and parameters,
without any fine-tuning or modification. This allows for a fair
comparison between the performance of the embedding model
with and without the proposed text enrichment and rewriting
approach. The proposed approach employs several text en-
richment and rewriting techniques to improve the quality and
informativeness of the input text. These techniques include:
A. Context enrichment
ChatGPT 3.5 is used to provide additional context to the
input text, making it more informative and easier for the
embedding model to capture the underlying semantics. This
is particularly useful for sparse or list-like entries, where
the LLM can expand the text with relevant descriptions or
attributes.
B. Grammatical correction
The LLM identifies and corrects spelling and grammatical
errors in the input text, ensuring that the text conforms to
standard language usage. This improves the quality of the
embeddings generated from the text, as the embedding model
can focus on capturing the semantic relationships without
being hindered by grammatical inconsistencies.
C. Terminology normalization
Domain-specific terms, abbreviations, and synonyms are
standardized to a consistent format using the knowledge base
of ChatGPT 3.5. This reduces ambiguity and improves the
embedding model’s ability to match related concepts, even
when they are expressed using different terms.
D. Word disambiguation
For polysemous words (words with multiple meanings), the
LLM clarifies the intended meaning based on the surrounding
context. This disambiguation helps the embedding model to
capture the correct semantic relationships and improves the
accuracy of downstream tasks.
E. Acronym expansion
ChatGPT 3.5 detects acronyms and abbreviations in the
input text and expands them to their full form. This improves
clarity and understanding, enabling the embedding model to
better capture the meaning of the text.
F. Metadata incorporation
Where relevant, the LLM incorporates additional metadata,
such as the category of the text, its intended audience, or
domain-specific tags. This contextual information helps in
interpreting the text more accurately and can improve the
performance of the embedding model on domain-specific
tasks.
G. Sentence restructuring
The LLM is used to improve the structure of sentences in the
input text, making them clearer, more readable, and coherent.
This makes it easier for the embedding model to process and
understand the text, leading to better-quality embeddings.
H. Inferring missing information
ChatGPT 3.5 uses its contextual understanding to infer
missing information that might be relevant for understanding
the text. This can include inferring the subject of a sentence
or the meaning of an unclear reference, thereby improving
the completeness and coherence of the text for the embedding
model.
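A minimal sketch of the enrich-then-embed pipeline described in this section is given below, assuming OpenAI's current Python client; the enrichment instruction shown is an illustrative stand-in rather than one of the four prompt variations evaluated in Section IV.

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

ENRICH_INSTRUCTION = (
    "Rewrite the following text: add clarifying context, correct spelling and "
    "grammar, expand acronyms, normalize terminology, and restructure unclear "
    "sentences. Return only the rewritten text."
)  # illustrative instruction, not one of the paper's evaluated prompts

def enrich(text: str) -> str:
    # LLM-based enrichment and rewriting with ChatGPT 3.5 (default settings).
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "system", "content": ENRICH_INSTRUCTION},
                  {"role": "user", "content": text}],
    )
    return resp.choices[0].message.content

def embed(text: str) -> list[float]:
    # Embed the enriched text with text-embedding-3-large (default settings).
    resp = client.embeddings.create(model="text-embedding-3-large", input=text)
    return resp.data[0].embedding

vector = embed(enrich("acct closed b/c of od fees"))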
IV. PROMPT ENGINEERING AND OPTIMIZATION
To effectively leverage the capabilities of ChatGPT 3.5
for text enrichment and rewriting, a set of prompt design
principles were established. These principles aim to create
prompts that clearly communicate the desired tasks and goals
to the LLM, while allowing for flexibility and adaptability to
different types of input text. An iterative prompt refinement
process was employed to identify the most effective prompts
for the text enrichment and rewriting tasks. This process
involved creating multiple variations of prompts, testing their
performance on the TwitterSemEval 2015 dataset, and analyz-
ing the results to identify areas for improvement. Four main
prompt variations were tested in this study, each focusing
on different aspects of the text enrichment and rewriting
process. The prompts ranged from general instructions for
improving text quality to more specific guidance on tasks
such as grammar correction, terminology normalization, and
metadata incorporation.
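The iterative refinement described above can be organized, for example, as a simple evaluation loop over the candidate prompts. This is an illustrative sketch only: rewrite, embed and score stand in for the LLM call, the embedding call and the TwitterSemEval 2015 evaluation, respectively.

def select_best_prompt(candidate_prompts, sample_texts, rewrite, embed, score):
    # Evaluate each candidate system prompt on a validation sample and keep the best.
    results = {}
    for prompt in candidate_prompts:
        rewritten = [rewrite(prompt, text) for text in sample_texts]
        embeddings = [embed(text) for text in rewritten]
        results[prompt] = score(embeddings)  # e.g. average precision on TwitterSemEval 2015
    best = max(results, key=results.get)
    return best, results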
V. NUMERICAL VALIDATION
The experiments were designed to improve the performance of embedding
models on classification and clustering tasks, with the goal of securing
a strong position on the Massive Text Embedding Benchmark (MTEB)
Leaderboard. Central to this effort was the use of large language models,
notably ChatGPT 3.5, to enrich and refine the input text prior to
embedding. The proposed methodology enriches the text with additional
contextual information, corrects grammatical inaccuracies, standardizes
terminology, disambiguates polysemous terms, expands acronyms, and
incorporates pertinent metadata. It also optimizes sentence structure
and infers missing information, thereby improving the overall quality
of the resulting embeddings. The approach was evaluated on three
datasets: Banking77Classification, TwitterSemEval 2015, and Amazon
Counter Factual Classification. These datasets cover various domains
and are widely used as benchmarks for text classification and
clustering tasks. The datasets were preprocessed to remove irrelevant
information, such as URLs, hashtags, and mentions. The text was then
tokenized and converted to lowercase to ensure consistency across the
datasets.
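A minimal version of this preprocessing step is sketched below; the exact cleaning rules are not specified in the paper beyond URLs, hashtags and mentions, so the regular expressions here are illustrative assumptions.

import re

def preprocess(text):
    # Remove URLs, hashtags and @mentions (illustrative patterns), then lowercase.
    text = re.sub(r"https?://\S+", " ", text)
    text = re.sub(r"[#@]\w+", " ", text)
    text = re.sub(r"\s+", " ", text).strip()
    return text.lower()  # simple whitespace tokenization can follow via text.split()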
The performance of the embedding models was evaluated using average
precision based on cosine similarity for TwitterSemEval 2015, and
accuracy for the Banking77Classification and Amazon Counter Factual
data. These metrics assess the quality of the embeddings by
measuring the similarity between the embedded representa-
tions of related texts and comparing it to the ground truth. The
text-embedding-3-large model was used as a baseline, without
any LLM-based text enrichment or rewriting. This allows
for a direct comparison of the performance improvements
achieved by the proposed approach. SFR-Embedding-Mistral
model, which was the leading model on the Massive Text
Embedding Benchmark (MTEB) Leaderboard at the time of
this study, was also used as a baseline. This model serves as
a representative example of state-of-the-art embedding models
and provides a high-quality benchmark for comparison. The
experimental procedure involved applying the four prompt
variations to the three datasets, using ChatGPT 3.5 for text
enrichment and rewriting. The enriched and rewritten text
was then passed through the text-embedding-3-large model to
generate embeddings. The performance of these embeddings
was evaluated using the cosine similarity metric and accuracy
values and then compared to the baseline models.
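For pair data such as TwitterSemEval 2015, the cosine-similarity-based average precision can be computed along the following lines; this is a simplified stand-in for the benchmark's own evaluation code, not the code used in the study.

import numpy as np
from sklearn.metrics import average_precision_score

def cosine_average_precision(emb_a, emb_b, labels):
    # emb_a, emb_b: (n, d) arrays with the embeddings of the two texts of each pair;
    # labels: 1 for a true match, 0 otherwise.
    a = emb_a / np.linalg.norm(emb_a, axis=1, keepdims=True)
    b = emb_b / np.linalg.norm(emb_b, axis=1, keepdims=True)
    sims = np.sum(a * b, axis=1)  # cosine similarity used as the ranking score
    return average_precision_score(labels, sims)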
TABLE I
PERFORMANCE COMPARISON OF THE PROPOSED METHODOLOGY.

Model         B77C    TwitterSemEval   AmazonCF
Prompt 1      82.24   84.84            68.90
Prompt 2      78.73   82.95            71.90
Prompt 3      75.50   83.10            76.20
Prompt 4      79.71   85.34            68.00
TE            85.69   77.13            78.93
SFR           88.81   81.52            77.93
Improvement   -3.45   8.21             -2.73

Note: TE stands for text-embedding-3-large (base model) and SFR stands
for SFR-Embedding-Mistral (best performing model on the leaderboard).
B77C stands for Banking77Classification and AmazonCF stands for the
Amazon Counter Factual data; the improvement is measured from the
baseline. The values for B77C and AmazonCF are accuracy values, whereas
for TwitterSemEval the values are the cosine-similarity-based average
precision.
The objective was to identify the most effective prompt for achieving
the highest accuracy and average precision based on cosine similarity.
In summary, the contextual rewriting and optimization approach
surpassed the performance of the standalone embedding model and
outperformed the current MTEB leader on the TwitterSemEval 2015
dataset. It is worth noting that, due to budgetary constraints, the
iterative prompt refinement was carried out on a single dataset.
The ChatGPT 3.5 model was used with its default settings and
parameters. No fine-tuning or additional training was performed,
ensuring that the improvements in embedding performance can be
attributed solely to the text enrichment and rewriting process.
The details of the prompts are as follows:
• Prompt 1: “You are a text enhancer tasked with pre-
processing text for embedding models. Your goals are
to enrich the text without losing the context, correct
grammatical inaccuracies, clarify obscure references, nor-
malize terminology, disambiguate polysemous words, ex-
pand acronyms and abbreviations, incorporate relevant
metadata, improve sentence structure for clarity, and infer
missing information where necessary. Your enhancements
should make the text more informative and easier to
understand, thereby improving the performance of em-
bedding models in processing and analyzing the text.
If a user asks a question, then you should return an
improved version of the question. If the user did not ask
a question, then you should return an improved version
of an answer.”
• Prompt 2: “You are a text enhancer tasked with prepro-
cessing text for embedding models. Your goals are to
enrich the text with additional context, correct grammat-
ical inaccuracies, clarify obscure references, normalize
terminology, disambiguate polysemous words, expand
acronyms and abbreviations, incorporate relevant meta-
data, improve sentence structure for clarity, and infer
missing information where necessary. Your enhancements
should make the text more informative and easier to
understand, thereby improving the performance of em-
bedding models in processing and analyzing the text.”
• Prompt 3: “You are a text enhancer to make better
embeddings, your task is to optimize text for embedding
models by enriching, clarifying, and standardizing it. This
involves improving grammar, resolving ambiguities, and
inferring missing information to enhance model perfor-
mance.”
• Prompt 4: “You are a text enhancer to make better
embeddings, your task is to optimize text for embedding
models by enriching, clarifying, and standardizing it. This
involves improving grammar, resolving ambiguities, and
inferring missing information to enhance model perfor-
mance.”
Prompt 1, which focuses on general instructions for improving text
quality, achieved varying performance across the three datasets. It
performed best on the TwitterSemEval 2015 dataset with a score of
84.84, a significant improvement over the baseline
text-embedding-3-large model (77.13). However, its accuracies on
Banking77Classification (82.24) and Amazon Counter Factual
Classification (68.9) were lower than those of the baseline models.
Prompt 2, which provides more specific guidance on tasks such as
grammar correction and terminology normalization, also showed mixed
results. It achieved a score of 82.95 on TwitterSemEval 2015,
outperforming the baseline model but scoring slightly lower than
Prompt 1. On Amazon Counter Factual Classification (71.9) it improved
on Prompt 1, whereas on Banking77Classification (78.73) it scored
lower, and in both cases it still fell short of the baseline models.
Prompt 3, which focused on concise instructions for optimizing text for
embedding models, achieved the best performance among the prompts on
Amazon Counter Factual Classification with an accuracy of 76.2,
although it still fell short of the baseline models. Its score of 83.1
on TwitterSemEval 2015 was similar to that of Prompt 2, while its
accuracy of 75.5 on Banking77Classification was the lowest among the
prompt variations.
Prompt 4, which was similar to Prompt 3 apart from slight variations in
wording, achieved the highest score on TwitterSemEval 2015 (85.34),
outperforming all other prompt variations and baseline models. However,
its accuracies on Banking77Classification (79.71) and Amazon Counter
Factual Classification (68.0) were lower than those of the baseline
models and some of the other prompt variations.
Comparison with the baseline models shows a significant improvement
over text-embedding-3-large alone on the TwitterSemEval 2015 dataset,
with the best-performing prompt (Prompt 4) improving upon the baseline
by 8.21 points. However, on Banking77Classification and Amazon Counter
Factual Classification, the prompt variations did not surpass the
accuracy of the baseline model. The best-performing prompt (Prompt 4)
also outperformed the leading model on the MTEB Leaderboard,
SFR-Embedding-Mistral, on the TwitterSemEval 2015 dataset, whereas
SFR-Embedding-Mistral maintained its lead on Banking77Classification
and Amazon Counter Factual Classification.
A qualitative analysis of the enriched and rewritten text
generated by ChatGPT 3.5 revealed several improvements
in text quality and informativeness. The LLM successfully
provided additional context, corrected grammatical errors, nor-
malized terminology, disambiguated polysemous words, ex-
panded acronyms, and incorporated relevant metadata. These
enhancements made the text more coherent, informative, and
easier for the embedding model to process and understand.
VI. CONCLUSION
This paper introduces a novel approach for enhancing
embedding performance by leveraging the capabilities of large
language models, specifically ChatGPT 3.5, for text enrich-
ment and rewriting. While recent research has focused on us-
ing LLMs for text compression to reduce computational costs
in RAG systems and large LLMs, this study demonstrates the
potential of LLMs for text enrichment to improve embedding
quality. The proposed approach addresses the limitations of
embedding models, such as limited vocabulary, lack of context,
and grammatical errors, by providing additional context, cor-
recting inaccuracies, normalizing terminology, disambiguating
polysemous words, expanding acronyms, and incorporating
metadata. Experimental results on the TwitterSemEval 2015 dataset show
that the proposed method outperforms the leading model on the Massive
Text Embedding Benchmark (MTEB) Leaderboard, substantially improving
the quality of the resulting embeddings.
|
synthetic_cpt | 1 | Hierarchical_Patch_Selection_An_Improved_Patch_Sampling_for_No_Reference_Image_Quality_Assessment.pdf | 9
1
0
2
p
e
S
4
2
]
A
N
.
h
t
a
m
[
2
v
9
8
6
9
0
.
1
0
9
1
:
v
i
X
r
a
Isogeometric analysis with C 1 hierarchical functions on planar
two-patch geometries
Cesare Braccoa, Carlotta Giannellia, Mario Kaplb,∗, Rafael V´azquezc,d
aDipartimento di Matematica e Informatica “U. Dini”,
Universit`a degli Studi di Firenze, Florence, Italy
bJohann Radon Institute for Computational and Applied Mathematics,
Austrian Academy of Sciences, Linz, Austria
cInstitute of Mathematics, ´Ecole Polytechnique F´ed´erale de Lausanne, Lausanne, Switzerland
dIstituto di Matematica Applicata e Tecnologie Informatiche ‘E. Magenes’ del CNR, Pavia, Italy
Abstract
Adaptive isogeometric methods for the solution of partial differential equations rely on
the construction of locally refinable spline spaces. A simple and efficient way to obtain
these spaces is to apply the multi-level construction of hierarchical splines, that can be
used on single-patch domains or in multi-patch domains with C 0 continuity across the
patch interfaces. Due to the benefits of higher continuity in isogeometric methods, recent
works investigated the construction of spline spaces with global C 1 continuity on two or
more patches. In this paper, we show how these approaches can be combined with the
hierarchical construction to obtain global C 1 continuous hierarchical splines on two-patch
domains. A selection of numerical examples is presented to highlight the features and
effectivity of the construction.
Isogeometric analysis, Geometric continuity, Two-patch domain, Hierarchical
Keywords:
splines, Local refinement
2000 MSC: 65D07, 65D17, 65N30
1. Introduction
Isogeometric Analysis (IgA) is a framework for numerically solving partial differential
equations (PDEs), see [2, 12, 26], by using the same (spline) function space for describ-
ing the geometry (i.e.
the computational domain) and for representing the solution of
the considered PDE. One of the strong points of IgA compared to finite elements is the
possibility to easily construct C 1 spline spaces, and to use them for solving fourth order
PDEs by applying a Galerkin discretization to their variational formulation. Examples of
∗Corresponding author
Email addresses: [email protected] (Cesare Bracco), [email protected]
(Carlotta Giannelli), [email protected] (Mario Kapl), [email protected] (Rafael
V´azquez)
Preprint submitted to Elsevier
September 25, 2019
fourth order problems with practical relevance (in the frame of IgA) are e.g. the bihar-
monic equation [11, 27, 46], the Kirchhoff-Love shells [1, 3, 35, 36] and the Cahn-Hilliard
equation [19, 20, 38].
Adaptive isogeometric methods can be developed by combining the IgA framework with
spline spaces that have local refinement capabilities. Hierarchical B-splines [37, 51] and
truncated hierarchical B-splines [17, 18] are probably the adaptive spline technologies that
have been studied more in detail in the adaptive IgA framework [7, 8, 15]. Their multi-level
structure makes them easy to implement, with the evaluation of basis functions obtained
via a recursive use of two-level relation due to nestedness of levels [13, 16, 24]. Hierarchical
B-splines have been successfully applied for the adaptive discretization of fourth order
PDEs, and in particular for phase-field models used in the simulation of brittle fracture
[23, 24] or tumor growth [39].
While the construction of C 1 spaces is trivial in a single-patch domain, either using
B-splines or hierarchical B-splines, the same is not true for general multi-patch domains.
The construction of C 1 spline spaces over multi-patch domains is based on the concept of
geometric continuity [25, 44], which is a well-known framework in computer-aided design
(CAD) for the design of smooth multi-patch surfaces. The core idea is to employ the fact
that an isogeometric function is C 1-smooth if and only if the associated multi-patch graph
surface is G1-smooth [22], i.e., it is geometrically continuous of order 1.
In the last few years there has been an increasing effort to provide methods for the
construction of C 1 isogeometric spline spaces over general multi-patch domains. The ex-
isting methods for planar domains can be roughly classified into two groups depending
on the used parameterization for the multi-patch domain. The first approach relies on
a multi-patch parameterization which is C 1-smooth everywhere except in the neighbor-
hood of extraordinary vertices (i.e. vertices with valencies different to four), where the
parameterization is singular, see e.g. [43, 48, 49], or consists of a special construction, see
e.g. [33, 34, 42]. The methods [43, 48, 49] use a singular parameterization with patches
in the vicinity of an extraordinary vertex, which belong to a specific class of degenerate
(B´ezier) patches introduced in [45], and that allow, despite having singularities, the design
of globally C 1 isogeometric spaces. The techniques [33, 34, 42] are based on G1 multi-patch
surface constructions, where the obtained surface in the neighborhood of an extraordinary
vertex consists of patches of slightly higher degree [33, 42] and is generated by means of a
particular subdivision scheme [34]. As a special case of the first approach can be seen the
constructions in [41, 47], that employ a polar framework to generate C 1 spline spaces.
The second approach, on which we will focus, uses a particular class of regular C 0 multi-
patch parameterizations, called analysis-suitable G1 multi-patch parameterization [11].
The class of analysis-suitable G1 multi-patch geometries characterizes the regular C 0 multi-
patch parameterizations that allow the design of C 1 isogeometric spline spaces with optimal
approximation properties, see [11, 29], and includes for instance the subclass of bilinear
multi-patch parameterizations [4, 27, 32]. An algorithm for the construction of analysis-
suitable G1 parameterizations for complex multi-patch domains was presented in [29]. The
main idea of this approach is to analyze the entire space of C 1 isogeometric functions over
the given multi-patch geometry to generate a basis of this space or of a suitable subspace.
2
While the methods in [4, 27, 32] are mainly restricted to (mapped) bilinear multi-patch
parameterizations, the techniques [5, 28, 30, 31, 40] can also deal with more general multi-
patch geometries. An alternative but related approach comprises the constructions [9, 10]
for general C 0 multi-patch parameterizations, which increase the degree of the constructed
spline functions in the neighborhood of the common interfaces to obtain C 1 isogeometric
spaces with good approximation properties.
In this work, we extend for the case of two-patch domains the second approach from
above to the construction of hierarchical C 1 isogeometric spaces on analysis-suitable G1
geometries, using the abstract framework for the definition of hierarchical splines detailed
in [18]. We show that the basis functions of the considered C 1 space on analysis-suitable
G1 two-patch parameterizations, which is a subspace of the space [28] inspired by [31],
satisfy the required properties given in [18], and in particular that the basis functions are
locally linearly independent (see Section 3.1 for details). Note that in case of a multi-patch
domain, the general framework for the construction of hierarchical splines [18] cannot
be used anymore, since the appropriate C 1 basis functions [31] can be locally linearly
dependent. Therefore, the development of another approach as [18] would be needed for
the multi-patch case, which is beyond the scope of this paper.
For the construction of the hierarchical C 1 spline spaces on analysis-suitable G1 two-
patch geometries, we also explore the explicit expression for the relation between C 1 basis
functions of two consecutive levels, expressing coarse basis functions as linear combinations
of fine basis functions. This relation is exploited for the implementation of hierarchical
splines as in [16, 24]. A series of numerical tests are presented, that are run with the help
of the Matlab/Octave code GeoPDEs [16, 50].
The remainder of the paper is organized as follows. Section 2 recalls the concept of
analysis-suitable G1 two-patch geometries and presents the used C 1 isogeometric spline
In Section 3, we develop the (theoretical)
space over this class of parameterizations.
framework to employ this space to construct C 1 hierarchical isogeometric spline spaces,
which includes the verification of the nested nature of this kind of spaces, as well as the
proof of the local linear independence of the one-level basis functions. Additional details
of the C 1 hierarchical construction, such as the refinement masks of the basis functions for
the different levels, are discussed in Section 4 with focus on implementation aspects. The
generated hierarchical spaces are then used in Section 5 to numerically solve the laplacian
and bilaplacian equations on two-patch geometries, where the numerical results demon-
strate the potential of our C 1 hierarchical construction for applications in IgA. Finally,
the concluding remarks can be found in Section 6. The construction of the non-trivial
analysis-suitable G1 two-patch parameterization used in some of the numerical examples
is described in detail in Appendix A. For easiness of reading, we include at the end of the
paper a list of symbols with the main notation used in this work.
2. C 1 isogeometric spaces on two-patch geometries
In this section, we introduce the specific class of two-patch geometries and the C 1
isogeometric spaces which will be used throughout the paper.
3
2.1. Analysis-suitable G1 two-patch geometries
We present a particular class of planar two-patch geometries, called analysis-suitable
G1 two-patch geometries, which was introduced in [11]. This class is of importance since
it comprises exactly those two-patch geometries which are suitable for the construction
of C 1 isogeometric spaces with optimal approximation properties, see [11, 29]. The most
prominent member is the subclass of bilinear two-patch parameterizations, but it was
demonstrated in [29] that the class is much wider and allows the design of generic planar
two-patch domains.
Let k, p, r ∈ N with degree p ≥ 3 and regularity 1 ≤ r ≤ p − 2. Let us also introduce
the ordered set of internal breakpoints T = {τ1, τ2, . . . , τk}, with 0 < τi < τi+1 < 1 for all
1 ≤ i ≤ k. We denote by Sr
p the univariate spline space in [0, 1] with respect to the open
knot vector
Ξr
p = { 0, . . . , 0
(cid:124) (cid:123)(cid:122) (cid:125)
(p+1)−times
, τ1, . . . , τ1
(cid:125)
(cid:123)(cid:122)
(cid:124)
(p−r)−times
, τ2, . . . , τ2
(cid:125)
(cid:123)(cid:122)
(cid:124)
(p−r)−times
, . . . , τk, . . . , τk
(cid:125)
(cid:123)(cid:122)
(cid:124)
(p−r)−times
, 1, . . . , 1
(cid:124) (cid:123)(cid:122) (cid:125)
(p+1)−times
},
(1)
and let N r
i,p, i ∈ I = {0, . . . , p + k(p − r)}, be the associated B-splines. Note that the
parameter r specifies the resulting C r-continuity of the spline space Sr
p. We will also make
use of the subspaces of higher regularity and lower degree, respectively Sr+1
p−1,
defined from the same internal breakpoints, and we will use an analogous notation for
their basis functions. Furthermore, we denote by n, n0 and n1 the dimensions of the spline
spaces Sr
p−1, respectively, which are given by
p, Sr+1
and Sr
and Sr
p
p
n = p + 1 + k(p − r), n0 = p + 1 + k(p − r − 1) and n1 = p + k(p − r − 1),
and, analogously to I, we introduce the index sets
I0 = {0, . . . , n0 − 1},
I1 = {0, . . . , n1 − 1},
corresponding to basis functions in Sr+1
and Sr
p−1, respectively.
p
Let F(L), F(R) ∈ (Sr
p ⊗ Sr
p)2 be two regular spline parameterizations, whose images
F(L)([0, 1]2) and F(R)([0, 1]2) define the two quadrilateral patches Ω(L) and Ω(R) via F(S)([0, 1]2) =
Ω(S), S ∈ {L, R}. The regular, bijective mapping F(S) : [0, 1]2 → Ω(S), S ∈ {L, R}, is called
geometry mapping, and possesses a spline representation
F(S)(ξ1, ξ2) =
(cid:88)
(cid:88)
i∈I
j∈I
c(S)
i,j N r
i,p(ξ1)N r
j,p(ξ2),
c(S)
i,j ∈ R2.
We assume that the two patches Ω(L) and Ω(R) form a planar two-patch domain Ω =
Ω(L) ∪ Ω(R), which share one whole edge as common interface Γ = Ω(L) ∩ Ω(R). In addition,
and without loss of generality, we assume that the common interface Γ is parameterized
by F0 : [0, 1] → Γ via
F0(ξ2) = F(L)(0, ξ2) = F(R)(0, ξ2),
ξ2 ∈ [0, 1],
and denote by F the two-patch parameterization (also called two-patch geometry) consisting
of the two spline parameterizations F(L) and F(R).
4
Remark 1. For simplicity, we have restricted ourselves to a univariate spline space Sr
p
with the same knot multiplicity for all inner knots. Instead, a univariate spline space with
different inner knot multiplicities can be used, as long as the multiplicity of each inner
knot is at least 2 and at most p − 1. Note that the subspaces Sr+1
p−1 should also
be replaced by suitable spline spaces of regularity increased by one at each inner knot,
and degree reduced by one, respectively. Furthermore, it is also possible to use different
univariate spline spaces for both Cartesian directions and for both geometry mappings,
with the requirement that both patches must have the same univariate spline space in
ξ2-direction.
and Sr
p
The two geometry mappings F(L) and F(R) uniquely determine up to a common func-
tion γ : [0, 1] → R (with γ (cid:54)= 0), the functions α(L), α(R), β : [0, 1] → R given by
α(S)(ξ2) = γ(ξ2) det (cid:0)∂1F(S)(0, ξ2), ∂2F(S)(0, ξ2)(cid:1) , S ∈ {L, R},
and
β(ξ2) = γ(ξ2) det (cid:0)∂1F(L)(0, ξ2), ∂1F(R)(0, ξ2)(cid:1) ,
satisfying for ξ2 ∈ [0, 1]
and
α(L)(ξ2)α(R)(ξ2) < 0
α(R)∂1F(L)(0, ξ2) − α(L)(ξ2)∂1F(R)(0, ξ2) + β(ξ2)∂2F(L)(0, ξ2) = 0.
In addition, there exist non-unique functions β(L) and β(R) : [0, 1] → R such that
β(ξ2) = α(L)(ξ2)β(R)(ξ2) − α(R)(ξ2)β(L)(ξ2),
(2)
(3)
(4)
[11, 44]. The two-patch geometry F is called analysis-suitable G1 if there exist
see e.g.
linear functions α(S), β(S), S ∈ {L, R} with α(L) and α(R) relatively prime1 such that
equations (2)-(4) are satisfied for ξ2 ∈ [0, 1], see [11, 28]. Note that requiring that α(L) and
α(R) are relatively prime is not restrictive:
if α(L) and α(R) share a common factor, it is
a factor of γ too, thus α(L) and α(R) can be made relatively prime by dividing by such a
factor.
In the following, we will only consider planar two-patch domains Ω which are described
by analysis-suitable G1 two-patch geometries F. Furthermore, we select those linear func-
tions α(S) and β(S), S ∈ {L, R}, that minimize the terms
||α(L) + 1||2
L2([0,1]) + ||α(R) − 1||2
L2([0,1])
and
see [31].
||β(L)||2
L2([0,1]) + ||β(R)||2
L2([0,1]),
1Two polynomials are relatively prime if their greatest common divisor has degree zero.
5
2.2. The C 1 isogeometric space V and the subspace W
We recall the concept of C 1 isogeometric spaces over analysis-suitable G1 two-patch
geometries studied in [11, 28], and especially focus on a specific subspace of the entire
space of C 1 isogeometric functions.
The space V of C 1 isogeometric spline functions on Ω (with respect to the two-patch
geometry F and spline space Sr
p) is given by
V = {φ ∈ C 1(Ω) : φ ◦ F(S) ∈ Sr
p ⊗ Sr
p, S ∈ {L, R}}.
(5)
A function φ : Ω → R belongs to the space V if and only if the functions f (S) = φ ◦ F(S),
S ∈ {L, R}, satisfy that
f (S) ∈ Sr
p ⊗ Sr
p, S ∈ {L, R},
and
f (L)(0, ξ2) = f (R)(0, ξ2),
ξ2 ∈ [0, 1],
(6)
(7)
α(R)(ξ2)∂1f (L)(0, ξ2) − α(L)(ξ2)∂1f (R)(0, ξ2) + β(ξ2)∂2f (L)(0, ξ2) = 0,
ξ2 ∈ [0, 1],
where the last equation is due to (4) further equivalent to
∂1f (L)(0, ξ2) − β(L)(ξ2)∂2f (L)(0, ξ2)
α(L)(ξ2)
=
∂1f (R)(0, ξ2) − β(R)(ξ2)∂2f (R)(0, ξ2)
α(R)(ξ2)
,
see e.g. [11, 22, 32]. Therefore, the space V can be also described as
ξ2 ∈ [0, 1],
(8)
V = {φ : Ω → R : f (S) = φ ◦ F(S), S ∈ {L, R}, fulfill the equations (6)-(8)}.
(9)
Note that the equally valued terms in (8) represent a specific directional derivative of
φ across the interface Γ. In fact, recalling that f (S) = φ ◦ F(S) for S ∈ {L, R}, we have
∇φ · (d ◦ F0(ξ2)) = ∇φ · (d(S) ◦ F0(ξ2)) =
∂1f (S)(0, ξ2) − β(S)(ξ2)∂2f (S)(0, ξ2)
α(S)(ξ2)
,
ξ2 ∈ [0, 1],
(10)
where d is a transversal vector to Γ given by d = d(L) = d(R) with d(S) ◦ F0(ξ2) =
1
(∂1F(S)(0, ξ2), ∂2F(S)(0, ξ2))(1, −β(S)(ξ2))T
α(S)(ξ2) , S ∈ {L, R}, see [11, 28].
The structure and the dimension of the space V heavily depends on the functions α(L),
α(R) and β, and was fully analyzed in [28] by computing a basis and its dimension for
all possible configurations. Below, we restrict ourselves to a simpler subspace W (moti-
vated by [31]), which preserves the approximation properties of V, and whose dimension
is independent of the functions α(L), α(R) and β.
The C 1 isogeometric space W is defined as
W = span Φ,
Φ = ΦΩ(L) ∪ ΦΩ(R) ∪ ΦΓ0 ∪ ΦΓ1,
with
ΦΩ(S) =
(cid:110)
φΩ(S)
i,j
: i ∈ I \ {0, 1}; j ∈ I
(cid:111)
,
S ∈ {L, R},
(11)
6
where the functions φΩ(S(cid:48))
i and φΓ1
i are defined via
ΦΓ0 = (cid:8)φΓ0
, φΓ0
i
i,j
: i ∈ I0
(cid:9) ,
ΦΓ1 = (cid:8)φΓ1
i
: i ∈ I1
(cid:9) ,
(12)
(cid:16)
φΩ(S(cid:48))
i,j
◦F(S)(cid:17)
(ξ1, ξ2) =
i,p(ξ1)N r
j,p(ξ2)
(cid:40)
N r
0
(cid:16)
i ◦ F(S)(cid:17)
φΓ0
(ξ1, ξ2) = N r+1
i,p (ξ2)
(cid:16)
+ β(S)(ξ2)
if S = S(cid:48),
otherwise,
i ∈ I\{0, 1}; j ∈ I; S, S(cid:48) ∈ {L, R},
(13)
N r
0,p(ξ1) + N r
1,p(ξ1)
(cid:17)
(cid:16)
N r+1
i,p
(cid:17)(cid:48)
(ξ2)
τ1
p
N r
1,p(ξ1),
i ∈ I0; S ∈ {L, R},
(14)
and
(cid:16)
i ◦ F(S)(cid:17)
φΓ1
(ξ1, ξ2) = α(S)(ξ2)N r
i,p−1(ξ2)N r
1,p(ξ1),
i ∈ I1; S ∈ {L, R}.
(15)
i,j
and φΓ1
i
The construction of the functions φΩ(S(cid:48))
, φΓ0
guarantees that they are linearly
i
independent and therefore form a basis of the space W. In addition, the functions fulfill
equations (6)-(8) which implies that they are C 1-smooth on Ω, and hence W ⊆ V. Note
that the basis functions φΩ(S(cid:48))
are standard tensor-product B-splines whose support is
included in one of the two patches, while the functions φΓ0
i are combinations of
standard B-splines and their support crosses the interface Γ (see Figure 1 for an example).
Moreover, the traces and specific directional derivatives (10) of the functions φΓ0
i and
i and φΓ1
i,j
φΓ1
i at the interface Γ are equal to
φΓ0
i ◦ F0(ξ2) = N r+1
i,p (ξ2), φΓ1
i ◦ F0(ξ2) = 0,
and
· (d ◦ F0(ξ2)) = 0, ∇φΓ1
i
Therefore, the C 1 isogeometric space W can be also characterized as
· (d ◦ F0(ξ2)) = N r
∇φΓ0
i
i,p−1(ξ2).
W = {φ ∈ V : φ ◦ F0(ξ2) ∈ Sr+1
p
and ∇φ · (d ◦ F0(ξ2)) ∈ Sr
p−1}.
(16)
2.3. Representation of the basis with respect to Sr
p ⊗ Sr
p
We describe the strategy shown in [28] to represent the spline functions φΩ(S(cid:48))
◦ F(S),
i,j
i ◦ F(S), S ∈ {L, R}, with respect to the spline space Sr
i ◦ F(S) and φΓ1
φΓ0
p, using a
vectorial notation. Let us first introduce the vectors of functions N0, N1 and N2, given by
p ⊗ Sr
N0(ξ1, ξ2) = [N r
0,p(ξ1)N r
j,p(ξ2)]j∈I,
N1(ξ1, ξ2) = [N r
1,p(ξ1)N r
j,p(ξ2)]j∈I,
and
N2(ξ1, ξ2) = [N r
2,p(ξ1)N r
0,p(ξ2), . . . , N r
2,p(ξ1)N r
n−1,p(ξ2), . . . , N r
n−1,p(ξ1)N r
n−1,p(ξ2)]T ,
7
(a)
(b)
(c)
(d)
(e)
Figure 1: Example of basis functions of W on the two-patch domain (a): figures (b)-(c) show two basis
functions of type (13) (standard B-splines whose support is included in one of the two patches), while
figures (d) and (e) correspond to basis functions of type (14) and (15), respectively (whose supports
intersect the interface).
which represent the whole basis of Sr
p ⊗ Sr
p. Let us also introduce, the vectors of functions
φΓ0(x) = [φΓ0
i (x)]i∈I0,
φΓ1(x) = [φΓ1
i (x)]i∈I1,
φΩ(S)(x) = [φΩ(S)
i,j (x)]i∈I\{0,1}; j∈I
for S ∈ {L, R},
8
and finally, for S ∈ {L, R}, the vectors of functions (cid:98)φ
(S)
Γ0 , (cid:98)φ
(S)
Γ1 , (cid:98)φ
(S)
Ω(S), given by
(S)
(cid:98)φ
Γ0 (ξ1, ξ2) = [φΓ0
i ◦ F(S)(ξ1, ξ2)]i∈I0,
(S)
(cid:98)φ
Γ1 (ξ1, ξ2) = [φΓ1
i ◦ F(S)(ξ1, ξ2)]i∈I1,
(S)
Ω(S)(ξ1, ξ2) = [φΩ(S)
(cid:98)φ
i,j ◦ F(S)(ξ1, ξ2)]i∈I\{0,1}; j∈I.
Since the basis functions φΩ(S)
i,j
are just the “standard” isogeometric functions, the spline
functions (cid:98)φ
an analysis of the basis functions in (cid:98)φ
representation
(S)
Ω(S)(ξ1, ξ2) automatically belong to the basis of the spline space Sr
(S)
Γ0 (ξ1, ξ2) and (cid:98)φ
p, while
(S)
Γ1 (ξ1, ξ2), leads to the following
p ⊗ Sr
(S)
Γ0 (ξ1, ξ2)
(cid:98)φ
(S)
(cid:98)φ
Γ1 (ξ1, ξ2)
(S)
Ω(S)(ξ1, ξ2)
(cid:98)φ
=
(cid:98)B (cid:101)B(S)
0 B(S)
0
0
0
0
In(n−2)
N0(ξ1, ξ2)
N1(ξ1, ξ2)
N2(ξ1, ξ2)
, S ∈ {L, R},
(17)
where Im denotes the identity matrix of dimension m, and the other blocks of the matrix
take the form (cid:98)B = [(cid:98)bi,j]i∈I0,j∈I, (cid:101)B(S) = [(cid:101)b(S)
i,j ]i∈I1,j∈I. In fact, these
are sparse matrices, and by defining the index sets
i,j ]i∈I0,j∈I, and B(S) = [b(S)
J0,i = {j ∈ I : supp(N r
j,p) ∩ supp(N r+1
i,p ) (cid:54)= ∅},
and
J1,i = {j ∈ I : supp(N r
j,p) ∩ supp(N r
i,p−1) (cid:54)= ∅},
for i ∈ I0,
for i ∈ I1,
it can be seen that the possible non-zero entries are limited to (cid:98)bi,j, (cid:101)b(S)
and b(S)
i,j , i ∈ I1, j ∈ J1,i, respectively.
i,j , i ∈ I0, j ∈ J0,i,
For the actual computation of these coefficients, let us denote by ζm, with m ∈ I, the
p. Then, for each S ∈ {L, R} and for
i,j , j ∈ J1,i, can be obtained
i,j , j ∈ J0,i, and b(S)
Greville abscissae of the univariate spline space Sr
each i ∈ I0 or i ∈ I1, the linear factors (cid:98)bi,j, (cid:101)b(S)
by solving the following systems of linear equations
i ◦ F(L)(cid:17)
φΓ0
(0, ζm) =
j,p(ζm), m ∈ J0,i,
(cid:98)bi,jN r
(cid:88)
(cid:16)
j∈J0,i
τ1∂1
(cid:16)
i ◦ F(S)(cid:17)
φΓ0
p
(0, ζm)
(cid:16)
i ◦ F(S)(cid:17)
φΓ0
+
(0, ζm) =
(cid:88)
j∈J0,i
(cid:101)b(S)
i,j N r
j,p(ζm), m ∈ J0,i,
and
(cid:16)
τ1∂1
i ◦ F(L)(cid:17)
φΓ1
p
(0, ζm)
(cid:88)
=
j∈J1,i
b(S)
i,j N r
j,p(ζm), m ∈ J1,i,
respectively, see [28] for more details. Note that the coefficients (cid:98)bi,j, i ∈ I0, are exactly the
spline coefficients of the B-spline N r+1
for the spline representation with respect to the
j,p
space Sr
p, and can also be computed by simple knot insertion.
9
3. C 1 hierarchical isogeometric spaces on two-patch geometries
This section introduces an abstract framework for the construction of the hierarchical
spline basis, that is defined in terms of a multilevel approach applied to an underlying
sequence of spline bases that are locally linearly independent and characterized by local
and compact supports. The C 1 hierarchical isogeometric spaces on two-patch geometries
are then defined by applying the hierarchical construction to the C 1 isogeometric functions
described in the previous section. Particular attention is devoted to the proof of local
linear independence of the basis functions, cf. Section 3.2, and to the refinement mask that
explicitly identifies a two-scale relation between hierarchical functions of two consecutive
levels, cf. Section 4.1. Note that, even if the hierarchical framework can be applied with
different refinement strategies between consecutive refinement levels, we here focus on
dyadic refinement, the standard choice in most application contexts. In the following the
refinement level (cid:96) is denoted as a superscript associated to the corresponding symbol.
3.1. Hierarchical splines: abstract definition
Let U0 ⊂ U1 ⊂ . . . ⊂ UN −1 be a sequence of N nested multivariate spline spaces defined
on a closed domain D ⊂ Rd, so that any space U(cid:96), for (cid:96) = 0, . . . , N − 1, is spanned by a
(finite) basis Ψ(cid:96) satisfying the following properties.
(P1) Local linear independence;
(P2) Local and compact support.
The first property guarantees that for any subdomain S, the restrictions of the (non-
vanishing) functions ψ ∈ Ψ(cid:96) to S are linearly independent. The locality of the support
instead enables to localize the influence of the basis functions with respect to delimited
areas of the domain. Note that the nested nature of the spline spaces implies the existence
of a two-scale relation between adjacent bases: for any level (cid:96), each basis function in Ψ(cid:96)
can be expressed as linear combination of basis functions in Ψ(cid:96)+1.
By also considering a sequence of closed nested domains
Ω0 ⊇ Ω1 ⊇ . . . ⊇ ΩN −1,
(18)
with Ω0 ⊆ D, we can define a hierarchical spline basis according to the following definition.
Definition 1. The hierarchical spline basis H with respect to the domain hierarchy (18)
is defined as
H = (cid:8)ψ ∈ Ψ(cid:96) : supp0ψ ⊆ Ω(cid:96) ∧ supp0ψ (cid:54)⊆ Ω(cid:96)+1(cid:9) ,
where supp0ψ = supp ψ ∩ Ω0.
Note that the basis H = HN −1 can be iteratively constructed as follows.
1. H0 = {ψ ∈ Ψ0 : supp0ψ (cid:54)= ∅};
10
2. for (cid:96) = 0, . . . , N − 2
where
H(cid:96)+1 = H(cid:96)+1
A ∪ H(cid:96)+1
B ,
H(cid:96)+1
A = (cid:8)ψ ∈ H(cid:96) : supp0ψ (cid:54)⊆ Ω(cid:96)+1(cid:9) and H(cid:96)+1
B = (cid:8)ψ ∈ Ψ(cid:96)+1 : supp0ψ ⊆ Ω(cid:96)+1(cid:9) .
The main properties of the hierarchical basis can be summarized as follows.
Proposition 1. By assuming that properties (P1)-(P2) hold for the bases Ψ(cid:96), the hierar-
chical basis satisfies the following properties:
(i) the functions in H are linearly independent,
(ii) the intermediate spline spaces are nested, namely span H(cid:96) ⊆ span H(cid:96)+1,
(iii) given an enlargement of the subdomains ((cid:98)Ω(cid:96))(cid:96)=0,..., (cid:98)N −1, with N ≤ (cid:98)N , such that Ω0 =
(cid:98)Ω0 and Ω(cid:96) ⊆ (cid:98)Ω(cid:96), for (cid:96) = 1, . . . , N − 1, then spanH ⊆ span (cid:98)H.
Proof. The proof follows along the same lines as in [51] for hierarchical B-splines.
Proposition 1 summarizes the key properties of a hierarchical set of basis functions
constructed according to Definition 1, when the underlying sequence of bases Ψ(cid:96) satisfies
only properties (P1)-(P2).
The results in Proposition 1 remain valid when additional assumptions are consid-
ered [18]. In particular, if the basis functions in Ψ(cid:96), for (cid:96) = 0, . . . , N − 1 are non-negative,
the hierarchical basis functions are also non-negative. Moreover, the partition of unity
property in the hierarchical setting can be recovered by considering the truncated basis for
hierarchical spline spaces [18]. In this case, the partition of unity property at each level (cid:96)
is also required together with the positiveness of the coefficients in the refinement mask.
Even if the construction of C 1 functions on two patch geometries considered in the previous
section does not satisfy the non-negativity and partition of unity properties, we could still
apply the truncation mechanism to reduce the support of coarser basis functions in the
C 1 hierarchical basis. Obviously, the resulting truncated basis would not satisfy the other
interesting properties of truncated hierarchical B-splines, see [17, 18].
3.2. The C 1 hierarchical isogeometric space
By following the construction for the C 1 isogeometric spline space presented in Sec-
tion 2, we can now introduce its hierarchical extension. We recall that instead of consider-
ing the full C 1 space V at any hierarchical level, we may restrict to the simpler subspace W,
whose dimension does not depend on the functions α(L), α(R) and β, and it has analogous
approximation properties as the full space.
We consider an initial knot vector Ξr,0
p as defined in (1) for then introducing the
p ≡ Ξr
sequence of knot vectors with respect to a fixed degree p
p , Ξr,1
Ξr,0
p . . . , Ξr,N −1
p
,
11
where each knot vector
Ξr,(cid:96)
p = { 0, . . . , 0
(cid:124) (cid:123)(cid:122) (cid:125)
(p+1)−times
1, . . . , τ (cid:96)
, τ (cid:96)
1
(cid:125)
(cid:123)(cid:122)
(cid:124)
(p−r)−times
2, . . . , τ (cid:96)
, τ (cid:96)
2
(cid:125)
(cid:123)(cid:122)
(cid:124)
(p−r)−times
k(cid:96), . . . , τ (cid:96)
, . . . , τ (cid:96)
k(cid:96)
(cid:124)
(cid:125)
(cid:123)(cid:122)
(p−r)−times
, 1, . . . , 1
(cid:124) (cid:123)(cid:122) (cid:125)
(p+1)−times
},
the univariate spline space in [0, 1] with respect to the open knot vector Ξr,(cid:96)
for (cid:96) = 1, . . . , N − 1, is obtained via dyadic refinement of the knot vector of the previous
level, keeping the same degree and regularity, and therefore k(cid:96) = 2k(cid:96)−1 + 1. We denote by
Sr,(cid:96)
p , and let
p
N r,(cid:96)
i,p , for i ∈ I(cid:96) = {0, . . . , p + k(cid:96)(p − r)}, be the associated B-splines. In addition, as in the
one-level case, Sr+1,(cid:96)
i,p−1) indicate the subspaces (and their basis
functions) of higher regularity and lower degree, respectively. We also denote by
p−1 (N r+1,(cid:96)
and N r,(cid:96)
and Sr,(cid:96)
i,p
p
n(cid:96) = p + 1 + k(cid:96)(p − r), n(cid:96)
0 = p + 1 + k(cid:96)(p − r − 1), and n(cid:96)
1 = p + k(cid:96)(p − r − 1),
the dimensions of the spline spaces Sr,(cid:96)
I(cid:96), we introduce the index sets
p , Sr+1,(cid:96)
p
0 = {0, . . . , n(cid:96)
I(cid:96)
corresponding to functions in Sr+1,(cid:96)
p
0 − 1},
and Sr,(cid:96)
p−1, respectively.
and Sr,(cid:96)
p−1, respectively, and, analogously to
1 = {0, . . . , n(cid:96)
I(cid:96)
1 − 1},
Let
V0 ⊂ V1 ⊂ . . . ⊂ VN −1
be a sequence of nested C 1 isogeometric spline spaces, with V(cid:96) defined on the two-patch
domain Ω = Ω(L) ∪ Ω(R) with respect to the spline space of level (cid:96). Analogously to the
construction detailed in Section 2.2, for each level 0 ≤ (cid:96) ≤ N − 1 let us consider the
subspace
W(cid:96) = spanΦ(cid:96),
with Φ(cid:96) = Φ(cid:96)
Ω(L) ∪ Φ(cid:96)
Ω(R) ∪ Φ(cid:96)
Γ0 ∪ Φ(cid:96)
Γ1,
where the basis functions are given by
: i ∈ I(cid:96) \ {0, 1}; j ∈ I(cid:96)(cid:111)
Φ(cid:96)
(cid:110)
φΩ(S)
i,j
Ω(S) =
, Φ(cid:96)
Γ0 = (cid:8)φΓ0
i
: i ∈ I(cid:96)
0
(cid:9) , Φ(cid:96)
Γ1 = (cid:8)φΓ1
i
: i ∈ I(cid:96)
1
(cid:9) ,
with S ∈ {L, R}, directly defined as in (11) and (12) for the one-level case.
By considering a domain hierarchy as in (18) on the two-patch domain Ω ≡ Ω0, and the
sets of isogeometric functions Φ(cid:96) at different levels, we arrive at the following definition.
Definition 2. The C 1 hierarchical isogeometric space WH with respect to a domain hier-
archy of the two-patch domain Ω, that satisfies (18) with Ω0 = Ω, is defined as
WH = span W with W = (cid:8)φ ∈ Φ(cid:96) : supp0φ ⊆ Ω(cid:96) ∧ supp0φ (cid:54)⊆ Ω(cid:96)+1(cid:9) .
In the remaining part of this section we want to prove that W is indeed a basis of
the C 1 hierarchical isogeometric space WH. This requires to verify the properties for
the abstract definition given in Section 3.1, in particular the nestedness of the spaces
W(cid:96), and that the one-level C 1 bases spanning each W(cid:96), for (cid:96) = 0, . . . , N − 1, satisfy the
hypotheses of Proposition 1, i.e. properties (P1)-(P2). The nestedness of the spaces W(cid:96),
(cid:96) = 0, 1, . . . , N −1, easily follows from definition (16), as stated in the following Proposition.
12
Proposition 2. Let N ∈ N. The sequence of spaces W(cid:96), (cid:96) = 0, 1, . . . , N − 1, is nested, i.e.
W0 ⊂ W1 ⊂ . . . ⊂ WN −1.
Proof. Let (cid:96) = 0, . . . , N − 2, and φ ∈ W(cid:96) ⊂ V(cid:96). By definition (5) the spaces V(cid:96) are
nested, hence φ ∈ V(cid:96) ⊂ V(cid:96)+1. Since the spline spaces Sr+1,(cid:96)
p−1 are nested, too,
we have φ ◦ F0 ∈ Sr+1,(cid:96)
p−1 , which implies that
φ ∈ W(cid:96)+1.
p
and ∇φ · (d ◦ F0) ∈ Sr,(cid:96)
p−1 ⊂ Sr,(cid:96)+1
⊂ Sr+1,(cid:96)+1
p
and Sr,(cid:96)
p
The locality and compactness of the support of these functions in (P2) comes directly
by construction and by the same property for standard B-splines, see (13)-(15) and Fig-
ure 1. The property of local linear independence in (P1) instead is proven in the following
Proposition.
Proposition 3. The set of basis functions Φ(cid:96) = Φ(cid:96)
independent, for (cid:96) = 0, . . . , N − 1.
Ω(L) ∪ Φ(cid:96)
Ω(R) ∪ Φ(cid:96)
Γ0 ∪ Φ(cid:96)
Γ1, is locally linearly
Proof. Since we have to prove the statement for any hierarchical level (cid:96), we just remove
the superscript (cid:96) in the proof to simplify the notation. Recall that the functions in Φ are
linearly independent. It is well known that the functions in ΦΩ(L) ∪ΦΩ(R) are locally linearly
independent, as they are (mapped) standard B-splines. Furthermore, it is also well known,
or easy to verify, that each of the following sets of univariate functions is locally linearly
independent
(a) {N r
0,p + N r
1,p, N r
1,p} ∪ {N r
i,p}i∈I\{0,1},
(b) {N r+1
i,p }i∈I0,
(c) {N r
i,p−1}i∈I1.
We prove that the set of functions Φ is locally linearly independent, which means that, for
any open set (cid:101)Ω ⊂ Ω the functions of Φ that do not vanish in (cid:101)Ω are linearly independent
on (cid:101)Ω. Let (cid:101)I0 ⊂ I0, (cid:101)I1 ⊂ I1 and (cid:101)I(S)
j ⊂ I, j ∈ I \ {0, 1}, S ∈ {L, R}, be the sets of indices
corresponding to those functions φΓ0
, respectively, that do not vanish on (cid:101)Ω.
i
Then the equation
i and φΩ(S)
, φΓ1
j,i
µ0,iφΓ0
i (x) +
(cid:88)
i∈(cid:101)I0
µ1,iφΓ1
i (x) +
(cid:88)
i∈(cid:101)I1
(cid:88)
(cid:88)
(cid:88)
S∈{L,R}
j∈I\{0,1}
i∈(cid:101)I(S)
j
j,i φΩ(S)
µ(S)
j,i (x) = 0, x ∈ (cid:101)Ω
(19)
has to imply µ0,i = 0 for all i ∈ (cid:101)I0, µ1,i = 0 for all i ∈ (cid:101)I1, and µ(S)
j ∈ I \ {0, 1}, S ∈ {L, R}. Equation (19) implies that
j,i = 0 for all i ∈ (cid:101)I(S)
j
,
(cid:88)
(cid:16)
i ◦ F(S)(cid:17)
φΓ0
µ0,i
(ξ1, ξ2) +
(cid:88)
(cid:16)
i ◦ F(S)(cid:17)
φΓ1
µ1,i
(ξ1, ξ2)
i∈(cid:101)I0
+
(cid:88)
(cid:88)
(cid:16)
µ(S)
j,i
i∈(cid:101)I1
j,i ◦ F(S)(cid:17)
φΩ(S)
(ξ1, ξ2) = 0,
j∈I\{0,1}
i∈(cid:101)I(S)
j
13
for (ξ1, ξ2) ∈ (cid:101)Ω(S) and S ∈ {L, R}, where (cid:101)Ω(S) ⊆ (0, 1)2 are the corresponding parameter
domains for the geometry mappings F(S) such that the closure of (cid:101)Ω is
cl((cid:101)Ω) = cl
(cid:17)
(cid:16)
F(L)((cid:101)Ω(L)) ∪ F(R)((cid:101)Ω(R))
.
By substituting the functions φΓ0
expressions, we obtain
i ◦ F(S), φΓ1
i ◦ F(S) and φΩ(S)
j,i ◦ F(S) by their corresponding
(cid:18)
(cid:88)
µ0,i
N r+1
i,p (ξ2)
(cid:16)
N r
0,p(ξ1) + N r
1,p(ξ1)
i∈(cid:101)I0
+
(cid:88)
i∈(cid:101)I1
µ1,i
(cid:0)α(S)(ξ2)N r
i,p−1(ξ2)N r
1,p(ξ1)(cid:1) +
(cid:17)
+ β(S)(ξ2)
(cid:16)
N r+1
i,p
(cid:17)(cid:48)
(ξ2)
τ1
p
(cid:19)
N r
1,p(ξ1)
(cid:88)
(cid:88)
j∈I\{0,1}
i∈(cid:101)I(S)
j
µ(S)
j,i N r
j,p(ξ1)N r
i,p(ξ2) = 0,
for (ξ1, ξ2) ∈ (cid:101)Ω(S) and S ∈ {L, R}, which can be rewritten as
(cid:16)
(cid:88)
(cid:17)(cid:16) (cid:88)
(cid:17)
N r
0,p(ξ1) + N r
1,p(ξ1)
µ0,iN r+1
i,p (ξ2)
+ N r
1,p(ξ1)
(cid:16) τ1
p
+ N r
1,p(ξ1)
(cid:16) (cid:88)
i∈(cid:101)I1
i∈(cid:101)I0
µ1,iα(S)(ξ2)N r
i,p−1(ξ2)
(cid:17)
+
(cid:88)
j∈I\{0,1}
N r
j,p(ξ1)
i∈(cid:101)I0
(cid:16) (cid:88)
i∈(cid:101)I(S)
j
(cid:16)
µ0,iβ(S)(ξ2)
N r+1
i,p
(cid:17)(cid:48)
(cid:17)
(ξ2)
(20)
µ(S)
j,i N r
i,p(ξ2)
(cid:17)
= 0.
Now, since (cid:101)Ω and (cid:101)Ω(S) are open, for each i ∈ (cid:101)I0 there exists a point (ξ(S)
S ∈ {L, R}, such that φΓ0
to the fact that the univariate functions N r
linearly independent and that N r
2 ) ∈ (cid:101)Ω(S), with
i does not vanish in a neighborhood Q ⊂ (cid:101)Ω(S) of the point. Due
j,p, j ∈ I \ {0, 1} are locally
1,p and N r
1,p, N r
1 ) (cid:54)= 0, we get that
1 ) + N r
0,p + N r
1,p(ξ(S)
0,p(ξ(S)
, ξ(S)
1
µ0,iN r+1
i,p (ξ2) = 0, for ξ2 such that (ξ(S)
1
, ξ2) ∈ Q.
(cid:88)
i∈(cid:101)I0
This equation and the local linear independence of the univariate functions {N r+1
i,p }i∈(cid:101)I0
imply that µ0,i = 0. Applying this argument for all i ∈ (cid:101)I0, we obtain µ0,i = 0, i ∈ (cid:101)I0, and
the term (20) simplifies to
(cid:16) (cid:88)
(cid:16) (cid:88)
(cid:88)
(cid:17)
(cid:17)
N r
j,p(ξ1)
µ(S)
j,i N r
i,p(ξ2)
= 0.
(21)
N r
1,p(ξ1)
µ1,iα(S)(ξ2)N r
i,p−1(ξ2)
+
i∈(cid:101)I1
j∈I\{0,1}
i∈(cid:101)I(S)
j
Similarly, we can obtain for each i ∈ (cid:101)I1
(cid:88)
µ1,i α(S)(ξ2)N r
i,p−1(ξ2) = 0, for ξ2 such that (ξ(S)
1
, ξ2) ∈ Q,
(22)
i∈(cid:101)I1
with the corresponding points (ξ(S)
, ξ2) ∈ (cid:101)Ω and neighborhoods Q ⊂ (cid:101)Ω. Since the function
α(S) is just a linear function which never takes the value zero, see (2), equation (22) implies
that
1
(cid:88)
µ1,i N r
i,p−1(ξ2) = 0, for ξ2 such that (ξ(S)
1
, ξ2) ∈ Q.
i∈(cid:101)I1
14
The local linear independence of the univariate functions {N r
µ1,i = 0, i ∈ (cid:101)I1, and therefore the term (21) simplifies further to
i,p−1}i∈(cid:101)I1
implies as before that
(cid:88)
N r
j,p(ξ1)
(cid:16) (cid:88)
µ(S)
j,i N r
i,p(ξ2)
(cid:17)
= 0.
j∈I\{0,1}
i∈(cid:101)I(S)
j
Finally, µ(S)
functions in ΦΩ(L) ∪ ΦΩ(R) are locally linearly independent.
j,i = 0, i ∈ (cid:101)I(S)
j
, j ∈ I \ {0, 1}, S ∈ {L, R}, follows directly from the fact that the
Finally, we have all what is necessary to prove the main result.
Theorem 1. W is a basis for the C 1 hierarchical space WH.
Proof. The result holds because the spaces in Definition 2 satisfy the hypotheses in Propo-
sition 1. In particular, we have the nestedness of the spaces by Proposition 2, and for the
basis functions in Φ(cid:96) the local linear independence (P1) by Proposition 3, and the local
and compact support (P2) by their definition in (13)-(15).
Remark 2. In contrast to the here considered C 1 basis functions for the case of analysis-
suitable G1 two-patch geometries, the analogous C 1 basis functions for the multi-patch
case based on [31] are, in general, not locally linearly dependent. Due to the amount of
notation needed and to their technicality, we do not report here counterexamples, but what
happens, even in some basic domain configurations, is that the basis functions defined in the
vicinity of a vertex may be locally linearly dependent. As a consequence, the construction
of a hierarchical C 1 space requires a different approach, whose investigation is beyond the
scope of the present paper.
4. Refinement mask and implementation
In this section we give some details about practical aspects regarding the implementa-
tion of isogeometric methods based on the hierarchical space WH. First, we specify the
refinement masks, which allow to write the basis functions of Φ(cid:96) as linear combinations of
the basis functions of Φ(cid:96)+1. The refinement masks are important, as they are needed, for
instance, for knot insertion algorithms and some operators in multilevel preconditioning.
Then, we focus on the implementation of the hierarchical space in the open Octave/Matlab
software GeoPDEs [50], whose principles can be applied almost identically to any other
isogeometric code. The implementation employs the refinement masks for the evaluation
of basis functions too.
4.1. Refinement masks
Let us recall the notations and assumptions from Section 3.2 for the multi-level setting
of the spline spaces W(cid:96), (cid:96) = 0, 1, . . . , N − 1, where the upper index (cid:96) refers to the specific
level of refinement. We will use the same upper index in an analogous manner for further
notations, which have been mainly introduced in Section 2.3 for the one-level case, such
15
(S)
as for the vectors of functions N0, N1, N2 and (cid:98)φ
Γ0 , (cid:98)φ
transformation matrices (cid:98)B, (cid:101)B(S) and B(S), S ∈ {L, R}.
(S)
Γ1 , (cid:98)φ
(S)
Ω(S), S ∈ {L, R}, and for the
Let R+ be the set of non-negative real numbers. Based on basic properties of B-splines,
0×n(cid:96)+1
+
∈ Rn(cid:96)×n(cid:96)+1
+
there exist refinement matrices (refinement masks) Λr,(cid:96)+1
and Λr,(cid:96)+1
, Λr+1,(cid:96)+1
p
such that
∈ Rn(cid:96)
p−1 ∈ Rn(cid:96)
1×n(cid:96)+1
+
p
1
0
[N r,(cid:96)
i,p (ξ)]i∈I(cid:96) = Λr,(cid:96)+1
p
[N r,(cid:96)+1
i,p
(ξ)]i∈I(cid:96)+1,
and
[N r+1,(cid:96)
i,p
(ξ)]i∈I(cid:96)
0
= Λr+1,(cid:96)+1
p
[N r+1,(cid:96)+1
i,p
(ξ)]i∈I(cid:96)+1
0
,
[N r,(cid:96)
i,p−1(ξ)]i∈I(cid:96)
1
= Λr,(cid:96)+1
p−1 [N r,(cid:96)+1
i,p−1 (ξ)]i∈I(cid:96)+1
1
.
These refinement matrices are banded matrices with a small bandwidth. Furthermore,
using an analogous notation to Section 2.3 for the vectors of functions, the refinement mask
between the tensor-product spaces Sr,(cid:96)
is obtained by refining in
each parametric direction as a Kronecker product, and can be written in block-matrix form
as
p and Sr,(cid:96)+1
p ⊗ Sr,(cid:96)
⊗ Sr,(cid:96)+1
p
p
N(cid:96)
N(cid:96)
N(cid:96)
0(ξ1, ξ2)
1(ξ1, ξ2)
2(ξ1, ξ2)
= (Λr,(cid:96)+1
p ⊗Λr,(cid:96)+1
p
)
N(cid:96)+1
0
N(cid:96)+1
1
N(cid:96)+1
2
(ξ1, ξ2)
(ξ1, ξ2)
(ξ1.ξ2)
=
Θ(cid:96)+1
00 Θ(cid:96)+1
01 Θ(cid:96)+1
02
11 Θ(cid:96)+1
Θ(cid:96)+1
0
12
Θ(cid:96)+1
0
0
22
.
N(cid:96)+1
0
N(cid:96)+1
1
N(cid:96)+1
2
(ξ1, ξ2)
(ξ1, ξ2)
(ξ1, ξ2)
(23)
Note that in case of dyadic refinement (as considered in this work), we have Θ(cid:96)+1
02 = 0.
Proposition 4. It holds that
φ(cid:96)
Γ0(x)
φ(cid:96)
Γ1(x)
φ(cid:96)
Ω(L)(x)
φ(cid:96)
Ω(R)(x)
=
p
Λr+1,(cid:96)+1
0
0
0
1
(cid:101)B(R),(cid:96)Θ(cid:96)+1
(cid:101)B(L),(cid:96)Θ(cid:96)+1
0
12
12
2Λr,(cid:96)+1
12 B(R),(cid:96)Θ(cid:96)+1
p−1 B(L),(cid:96)Θ(cid:96)+1
12
0
0
Θ(cid:96)+1
22
0
0
Θ(cid:96)+1
22
.
φ(cid:96)+1
(x)
Γ0
φ(cid:96)+1
(x)
Γ1
φ(cid:96)+1
Ω(L)(x)
φ(cid:96)+1
Ω(R)(x)
(24)
Proof. We first show the refinement relation for the functions φ(cid:96)
sider the corresponding spline functions (cid:98)φ
relation (17) and then relation (23) with the fact that Θ(cid:96)+1
02 = 0, we obtain
Γ0. For this, let us con-
, S ∈ {L, R}. On the one hand, using first
(S),(cid:96)
Γ0
(S),(cid:96)
Γ0
(cid:98)φ
(ξ1, ξ2) =
(cid:104)
(cid:98)B(cid:96) (cid:101)B(S),(cid:96) 0
(cid:104)
=
(cid:98)B(cid:96) (cid:101)B(S),(cid:96) 0
(cid:105)
which is equal to
(cid:105) (cid:2) N(cid:96)
0(ξ1, ξ2) N(cid:96)
00 Θ(cid:96)+1
Θ(cid:96)+1
01
11 Θ(cid:96)+1
Θ(cid:96)+1
0
12
Θ(cid:96)+1
0
0
22
0
1(ξ1, ξ2) N(cid:96)
2(ξ1, ξ2) (cid:3)T
(ξ1, ξ2)
(ξ1, ξ2)
(ξ1, ξ2)
N(cid:96)+1
0
N(cid:96)+1
1
N(cid:96)+1
2
,
(cid:104)
(cid:98)B(cid:96)Θ(cid:96)+1
00
(cid:98)B(cid:96)Θ(cid:96)+1
01 + (cid:101)B(S),(cid:96)Θ(cid:96)+1
11
(cid:105) (cid:20) N(cid:96)+1
0
N(cid:96)+1
1
(ξ1, ξ2)
(ξ1, ξ2)
(cid:21)
+ (cid:101)B(S),(cid:96)Θ(cid:96)+1
12 N(cid:96)+1
2
(ξ1, ξ2).
(25)
16
On the other hand, the functions (cid:98)φ
(S),(cid:96)
Γ0
possess the form
(S),(cid:96)
Γ0
(cid:98)φ
(ξ1, ξ2) =
(cid:104)
N r+1,(cid:96)
i,p
(cid:105)
(ξ2)
(cid:16)
i∈I(cid:96)
0
N r,(cid:96)
0,p(ξ1)+N r,(cid:96)
1,p(ξ1)
(cid:17)
β(S)(ξ2)
(cid:20)(cid:16)
N r+1,(cid:96)
i,p
+
τ (cid:96)
1
p
(cid:21)
(cid:17)(cid:48)
(ξ2)
i∈I(cid:96)
0
N r,(cid:96)
1,p(ξ1).
By refining the B-spline functions N r+1,(cid:96)+1
(S),(cid:96)
Γ0
(cid:98)φ
(ξ1, ξ2) = Λr+1,(cid:96)+1
p
N r+1,(cid:96)+1
i,p
i,p
(cid:104)
(ξ2), we obtain
(cid:16)
(cid:105)
i∈I(cid:96)+1
0
(ξ2)
(cid:20)(cid:16)
N r+1,(cid:96)+1
i,p
+
τ (cid:96)
1
p
β(S)(ξ2)Λr+1,(cid:96)+1
p
N r,(cid:96)
0,p(ξ1) + N r,(cid:96)
(cid:21)
(ξ2)
(cid:17)(cid:48)
i∈I(cid:96)+1
0
(cid:17)
1,p(ξ1)
N r,(cid:96)
1,p(ξ1).
Then, refining the B-spline functions $N^{r,\ell}_{0,p}(\xi_1)$ and $N^{r,\ell}_{1,p}(\xi_1)$ leads to
$$\widehat{\boldsymbol{\phi}}^{(S),\ell}_{\Gamma_0}(\xi_1,\xi_2) = \Lambda^{r+1,\ell+1}_{p} \big[ N^{r+1,\ell+1}_{i,p}(\xi_2) \big]_{i\in I^{\ell+1}_{0}} \Big( \sum_{j\in I^{\ell+1}} \lambda^{\ell+1}_{0,j} N^{r,\ell+1}_{j,p}(\xi_1) + \sum_{j\in I^{\ell+1}} \lambda^{\ell+1}_{1,j} N^{r,\ell+1}_{j,p}(\xi_1) \Big) + \frac{\tau^{\ell}_{1}}{p}\, \beta^{(S)}(\xi_2)\, \Lambda^{r+1,\ell+1}_{p} \Big[ \big( N^{r+1,\ell+1}_{i,p} \big)'(\xi_2) \Big]_{i\in I^{\ell+1}_{0}} \sum_{j\in I^{\ell+1}} \lambda^{\ell+1}_{1,j} N^{r,\ell+1}_{j,p}(\xi_1),$$
where $\lambda^{\ell+1}_{i,j}$ are the entries of the refinement matrix $\Lambda^{r,\ell+1}_{p}$. Since we refine dyadically, we have $\lambda^{\ell+1}_{0,0} = 1$, $\lambda^{\ell+1}_{0,1} = \tfrac{1}{2}$, $\lambda^{\ell+1}_{1,0} = 0$, $\lambda^{\ell+1}_{1,1} = \tfrac{1}{2}$ and $\tau^{\ell+1}_{1} = \tfrac{1}{2}\tau^{\ell}_{1}$, and we get
$$\widehat{\boldsymbol{\phi}}^{(S),\ell}_{\Gamma_0}(\xi_1,\xi_2) = \bigg( \Lambda^{r+1,\ell+1}_{p} \big[ N^{r+1,\ell+1}_{i,p}(\xi_2) \big]_{i\in I^{\ell+1}_{0}} \Big( N^{r,\ell+1}_{0,p}(\xi_1) + N^{r,\ell+1}_{1,p}(\xi_1) \Big) + \frac{\tau^{\ell+1}_{1}}{p}\, \beta^{(S)}(\xi_2)\, \Lambda^{r+1,\ell+1}_{p} \Big[ \big( N^{r+1,\ell+1}_{i,p} \big)'(\xi_2) \Big]_{i\in I^{\ell+1}_{0}} N^{r,\ell+1}_{1,p}(\xi_1) \bigg) + \bigg( \Lambda^{r+1,\ell+1}_{p} \big[ N^{r+1,\ell+1}_{i,p}(\xi_2) \big]_{i\in I^{\ell+1}_{0}} \sum_{j\in I^{\ell+1}\setminus\{0,1\}} \big( \lambda^{\ell+1}_{0,j} + \lambda^{\ell+1}_{1,j} \big) N^{r,\ell+1}_{j,p}(\xi_1) + \frac{\tau^{\ell}_{1}}{p}\, \beta^{(S)}(\xi_2)\, \Lambda^{r+1,\ell+1}_{p} \Big[ \big( N^{r+1,\ell+1}_{i,p} \big)'(\xi_2) \Big]_{i\in I^{\ell+1}_{0}} \sum_{j\in I^{\ell+1}\setminus\{0,1\}} \lambda^{\ell+1}_{1,j} N^{r,\ell+1}_{j,p}(\xi_1) \bigg),$$
which is equal to
$$\widehat{\boldsymbol{\phi}}^{(S),\ell}_{\Gamma_0}(\xi_1,\xi_2) = \Lambda^{r+1,\ell+1}_{p}\, \widehat{\boldsymbol{\phi}}^{(S),\ell+1}_{\Gamma_0}(\xi_1,\xi_2) + \bigg( \Lambda^{r+1,\ell+1}_{p} \big[ N^{r+1,\ell+1}_{i,p}(\xi_2) \big]_{i\in I^{\ell+1}_{0}} \sum_{j\in I^{\ell+1}\setminus\{0,1\}} \big( \lambda^{\ell+1}_{0,j} + \lambda^{\ell+1}_{1,j} \big) N^{r,\ell+1}_{j,p}(\xi_1) + \frac{\tau^{\ell}_{1}}{p}\, \beta^{(S)}(\xi_2)\, \Lambda^{r+1,\ell+1}_{p} \Big[ \big( N^{r+1,\ell+1}_{i,p} \big)'(\xi_2) \Big]_{i\in I^{\ell+1}_{0}} \sum_{j\in I^{\ell+1}\setminus\{0,1\}} \lambda^{\ell+1}_{1,j} N^{r,\ell+1}_{j,p}(\xi_1) \bigg). \qquad (26)$$
By analyzing the two equal terms (25) and (26) with respect to the spline representation in the $\xi_1$-direction formed by the B-splines $N^{r,\ell+1}_{j,p}(\xi_1)$, $j \in I^{\ell+1}$, one can observe that both first terms and both second terms each must coincide. This leads to
$$\widehat{\boldsymbol{\phi}}^{(S),\ell}_{\Gamma_0}(\xi_1,\xi_2) = \Lambda^{r+1,\ell+1}_{p}\, \widehat{\boldsymbol{\phi}}^{(S),\ell+1}_{\Gamma_0}(\xi_1,\xi_2) + \widetilde{B}^{(S),\ell}\Theta^{\ell+1}_{12}\, \mathbf{N}^{\ell+1}_{2}(\xi_1,\xi_2),$$
which directly implies the refinement relation for the functions $\boldsymbol{\phi}^{\ell}_{\Gamma_0}$.
The refinement for the functions $\boldsymbol{\phi}^{\ell}_{\Gamma_1}$ can be proven similarly. Considering the spline functions $\widehat{\boldsymbol{\phi}}^{(S),\ell}_{\Gamma_1}$, $S \in \{L, R\}$, we get, on the one hand, by using relations (17) and (23) and the fact that $\Theta^{\ell+1}_{02} = 0$,
$$\widehat{\boldsymbol{\phi}}^{(S),\ell}_{\Gamma_1}(\xi_1,\xi_2) = \begin{bmatrix} 0 & B^{(S),\ell} & 0 \end{bmatrix} \begin{bmatrix} \mathbf{N}^{\ell}_{0}(\xi_1,\xi_2) & \mathbf{N}^{\ell}_{1}(\xi_1,\xi_2) & \mathbf{N}^{\ell}_{2}(\xi_1,\xi_2) \end{bmatrix}^{T} = \begin{bmatrix} 0 & B^{(S),\ell} & 0 \end{bmatrix} \begin{bmatrix} \Theta^{\ell+1}_{00} & \Theta^{\ell+1}_{01} & 0 \\ 0 & \Theta^{\ell+1}_{11} & \Theta^{\ell+1}_{12} \\ 0 & 0 & \Theta^{\ell+1}_{22} \end{bmatrix} \begin{bmatrix} \mathbf{N}^{\ell+1}_{0}(\xi_1,\xi_2) \\ \mathbf{N}^{\ell+1}_{1}(\xi_1,\xi_2) \\ \mathbf{N}^{\ell+1}_{2}(\xi_1,\xi_2) \end{bmatrix} = B^{(S),\ell}\Theta^{\ell+1}_{11}\, \mathbf{N}^{\ell+1}_{1}(\xi_1,\xi_2) + B^{(S),\ell}\Theta^{\ell+1}_{12}\, \mathbf{N}^{\ell+1}_{2}(\xi_1,\xi_2). \qquad (27)$$
On the other hand, the functions $\widehat{\boldsymbol{\phi}}^{(S),\ell}_{\Gamma_1}$ can be expressed as
$$\widehat{\boldsymbol{\phi}}^{(S),\ell}_{\Gamma_1}(\xi_1,\xi_2) = \alpha^{(S)}(\xi_2) \big[ N^{r,\ell}_{i,p-1}(\xi_2) \big]_{i\in I^{\ell}_{1}} N^{r,\ell}_{1,p}(\xi_1),$$
and after refining the B-spline functions $N^{r,\ell}_{1,p}(\xi_1)$ and $N^{r,\ell}_{i,p-1}(\xi_2)$, $i \in I^{\ell}_{1}$, we obtain that this is equal to
$$\widehat{\boldsymbol{\phi}}^{(S),\ell}_{\Gamma_1}(\xi_1,\xi_2) = \alpha^{(S)}(\xi_2)\, \Lambda^{r,\ell+1}_{p-1} \big[ N^{r,\ell+1}_{i,p-1}(\xi_2) \big]_{i\in I^{\ell+1}_{1}} \sum_{j\in I^{\ell+1}} \lambda^{\ell+1}_{1,j} N^{r,\ell+1}_{j,p}(\xi_1),$$
where $\lambda^{\ell+1}_{i,j}$ are again the entries of the refinement matrix $\Lambda^{r,\ell+1}_{p}$. Recalling that $\lambda^{\ell+1}_{1,0} = 0$ and $\lambda^{\ell+1}_{1,1} = \tfrac{1}{2}$, we get
$$\widehat{\boldsymbol{\phi}}^{(S),\ell}_{\Gamma_1}(\xi_1,\xi_2) = \alpha^{(S)}(\xi_2)\, \Lambda^{r,\ell+1}_{p-1} \big[ N^{r,\ell+1}_{i,p-1}(\xi_2) \big]_{i\in I^{\ell+1}_{1}} \Big( \tfrac{1}{2} N^{r,\ell+1}_{1,p}(\xi_1) + \sum_{j\in I^{\ell+1}\setminus\{0,1\}} \lambda^{\ell+1}_{1,j} N^{r,\ell+1}_{j,p}(\xi_1) \Big) = \tfrac{1}{2}\, \Lambda^{r,\ell+1}_{p-1}\, \widehat{\boldsymbol{\phi}}^{(S),\ell+1}_{\Gamma_1}(\xi_1,\xi_2) + \alpha^{(S)}(\xi_2)\, \Lambda^{r,\ell+1}_{p-1} \big[ N^{r,\ell+1}_{i,p-1}(\xi_2) \big]_{i\in I^{\ell+1}_{1}} \sum_{j\in I^{\ell+1}\setminus\{0,1\}} \lambda^{\ell+1}_{1,j} N^{r,\ell+1}_{j,p}(\xi_1). \qquad (28)$$
Considering the two equal terms (27) and (28), one can argue as for the case of the functions $\widehat{\boldsymbol{\phi}}^{(S),\ell}_{\Gamma_0}$ that both first terms and both second terms each must coincide. This implies
$$\widehat{\boldsymbol{\phi}}^{(S),\ell}_{\Gamma_1}(\xi_1,\xi_2) = \tfrac{1}{2}\, \Lambda^{r,\ell+1}_{p-1}\, \widehat{\boldsymbol{\phi}}^{(S),\ell+1}_{\Gamma_1}(\xi_1,\xi_2) + B^{(S),\ell}\Theta^{\ell+1}_{12}\, \mathbf{N}^{\ell+1}_{2}(\xi_1,\xi_2),$$
which finally shows the refinement relation for the functions $\boldsymbol{\phi}^{\ell}_{\Gamma_1}$.
Finally, the relation for the functions $\boldsymbol{\phi}^{\ell}_{\Omega^{(S)}}$, $S \in \{L, R\}$, directly follows from relation (23), since they correspond to "standard" B-splines.
4.2. Details about the implementation
The implementation of GeoPDEs is based on two main structures: the mesh, which contains the information related to the computational geometry and the quadrature, and which did not need any change; and the space, with the necessary information to evaluate
the basis functions and their derivatives. The new implementation was done in two steps:
we first introduced the space of C 1 basis functions of one single level, as in Section 2.2,
and then we added the hierarchical construction.
For the space of one level, we created a new space structure that contains the numbering
for the basis functions of the three different types, namely ΦΩ(S), ΦΓ0 and ΦΓ1. The evalua-
tion of the basis functions, and also matrix assembly, is performed using the representation
of C 1 basis functions in terms of standard tensor-product B-splines, as in Section 2.3. In-
deed, one can first assemble the matrix for tensor-product B-splines, and then multiply on
each side this matrix by the same matrix given in (17), in the form
$$K^{(S)}_{W} = \mathbf{B}^{(S)} K^{(S)}_{S} \big(\mathbf{B}^{(S)}\big)^{\top}, \qquad \text{with } \; \mathbf{B}^{(S)} = \begin{bmatrix} \widehat{B} & \widetilde{B}^{(S)} & 0 \\ 0 & B^{(S)} & 0 \\ 0 & 0 & I_{n(n-2)} \end{bmatrix}, \quad \text{for } S = L, R,$$
where $K^{(S)}_{S}$ represents the stiffness matrix for the standard tensor-product B-spline space on the patch $\Omega^{(S)}$, and $K^{(S)}_{W}$ is the contribution to the stiffness matrix for the $W$ space
from the same patch. Obviously, the same can be done at the element level, by restricting
the matrices to suitable submatrices using the indices of non-vanishing functions on the
element.
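A minimal Python sketch of this two-sided multiplication is given below; it assumes that the tensor-product stiffness matrix and the blocks $\widehat{B}$, $\widetilde{B}^{(S)}$ and $B^{(S)}$ are already available as SciPy sparse matrices, and the function name is illustrative rather than part of the actual GeoPDEs interface.

import scipy.sparse as sp

def c1_patch_stiffness(K_S, B_hat, B_tilde_S, B_S, n):
    # K_W = B K_S B^T, with B the block matrix that expresses the C1 basis
    # [Phi_Gamma0; Phi_Gamma1; Phi_Omega(S)] in terms of tensor-product B-splines.
    I_int = sp.identity(n * (n - 2), format='csr')
    B = sp.bmat([[B_hat, B_tilde_S, None],
                 [None,  B_S,       None],
                 [None,  None,      I_int]], format='csr')
    return B @ K_S @ B.T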
To implement the hierarchical C 1 splines we construct the same structures and algo-
rithms detailed in [16]. First, it is necessary to complete the space structure of one single
level, that we have just described, with some functionality to compute the support of a
given basis function, as explained in [16, Section 5.1]. Second, the hierarchical structures
are constructed following the description in the same paper, except that for the evaluation
of basis functions, and in particular for matrix assembly, we make use of the refinement
masks of Section 4.1. The refinement masks essentially give us the two-level relation re-
quired by the algorithms in [16], and in particular the matrix $C^{\ell+1}_{\ell}$ of that paper, which is used both during matrix assembly and to compute the refinement matrix after enlargement of the subdomains.
5. Numerical examples
We now present some numerical examples to show the good performance of the hier-
archical C 1 spaces for their use in combination with adaptive methods. We consider two
different kinds of numerical examples: the first three tests are run for Poisson problems
with an automatic adaptive scheme, while in the last numerical test we solve the bilaplacian
problem, with a pre-defined refinement scheme.
5.1. Poisson problem
The first three examples are tests on the Poisson equation
$$\begin{cases} -\Delta u = f & \text{in } \Omega, \\ \phantom{-\Delta} u = g & \text{on } \partial\Omega. \end{cases}$$
The goal is to show that using the C 1 space basis does not spoil the properties of the local
refinement. The employed isogeometric algorithm is based on the adaptive loop (see, e.g.,
[6])
SOLVE −→ ESTIMATE −→ MARK −→ REFINE.
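The following Python sketch outlines one possible realization of this loop with Dörfler (bulk-chasing) marking; the space and problem objects and their methods are placeholders introduced only for illustration and do not correspond to the actual GeoPDEs data structures.

import numpy as np

def adaptive_loop(space, problem, theta=0.75, max_dofs=50000):
    # Schematic SOLVE -> ESTIMATE -> MARK -> REFINE loop.
    while space.num_dofs() < max_dofs:
        u = problem.solve(space)                 # SOLVE: Galerkin system (Nitsche BCs)
        eta = problem.estimate(space, u)         # ESTIMATE: one residual indicator per element
        order = np.argsort(eta**2)[::-1]         # MARK: Doerfler criterion with parameter theta
        cum = np.cumsum(eta[order]**2)
        marked = order[:int(np.searchsorted(cum, theta * cum[-1])) + 1]
        space = space.refine(marked)             # REFINE: dyadic refinement of marked elements
    return space, u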
In particular, for the examples we solve the variational formulation of the problem imposing
the Dirichlet boundary condition by Nitsche’s method, and the problem is to find u ∈ WH
such that
$$\int_{\Omega} \nabla u \cdot \nabla v \; - \int_{\Gamma_D} \frac{du}{dn}\, v \; - \int_{\Gamma_D} u\, \frac{dv}{dn} \; + \; \frac{\gamma}{h} \int_{\Gamma_D} u\, v \;\; = \;\; \int_{\Omega} f\, v \; - \int_{\Gamma_D} g\, \frac{dv}{dn} \; + \; \frac{\gamma}{h} \int_{\Gamma_D} g\, v \qquad \forall v \in W_H,$$
where h is the local element size, and the penalization parameter is chosen as γ = 10(p+1),
with p the degree. The error estimate is computed with a residual-based estimator, and
the marking of the elements at each iteration is done using Dörfler's strategy (when not stated otherwise, we set the marking parameter equal to 0.75). The refinement step of the loop dyadically refines all the marked elements. Although optimal convergence can only be proved if we refine using a refinement strategy that guarantees that meshes are admissible [7], previous numerical results also show a good behavior of non-admissible meshes [6].
For each of the three examples we report the results for degrees p = (3, 3), (4, 4), with
C 1 smoothness across the interface, and with a regularity r equal to degree minus two
within the single patches. We compare the results for the adaptive scheme with those
obtained by refining uniformly, and also with the ones obtained by employing the same
adaptive scheme for hierarchical spaces with C 0 continuity across the interface, while the
same regularity within the patches as above is kept.
Example 1. For the first numerical example we consider the classical L-shaped domain
$[-1, 1]^2 \setminus (0, 1) \times (-1, 0)$ defined by two patches as depicted in Figure 2(a), and the right-hand side $f$ and the boundary condition $g$ are chosen such that the exact solution is given by
$$u(\rho, \theta) = \rho^{4/3} \sin\!\left( \tfrac{4}{3}\, \theta \right),$$
with $\rho$ and $\theta$ the polar coordinates. As is well known, the exact solution has a singularity at the reentrant corner.
We start the adaptive simulation with a coarse mesh of 4×4 elements on each patch, and
we use Dörfler's parameter equal to 0.90 for the marking of the elements. The convergence results are presented in Figure 3.

(a) Domain used in the Examples 1 and 4. (b) Domain used in the Examples 2 and 3.
Figure 2: The two domains used in the numerical examples.

It can be seen that the error in H 1 semi-norm and the estimator converge with the expected rate, in terms of the degrees of freedom, both for the C 1 and the C 0 discretization, and that this convergence rate is better than the
one obtained with uniform refinement. Moreover, the error for the C 1 discretization is
slightly lower than the one for the C 0 discretization, although they are very similar. This
is in good agreement with what has been traditionally observed for isogeometric methods:
the accuracy per degree of freedom is better for higher continuity. In this case, since the
continuity only changes near the interface, the difference is very small.
Figure 3: Error in H 1 semi-norm and estimator for Example 1 with p = (3, 3) and p = (4, 4), compared with C 0 case (left) and with global refinement case (right).
We also show in Figure 4 the final meshes obtained with the different discretizations. It
is clear that the adaptive method correctly refines the mesh in the vicinity of the reentrant
corner, where the singularity occurs, and the refinement gets more local with higher degree.
(a) p = (3, 3), C 0 functions on the interface: NDOF=1648. (b) p = (3, 3), C 1 functions on the interface: NDOF=1623. (c) p = (4, 4), C 0 functions on the interface: NDOF=833. (d) p = (4, 4), C 1 functions on the interface: NDOF=833.
Figure 4: Hierarchical meshes for Example 1, with p = (3, 3) and p = (4, 4). Apparently the meshes are the same for the C 0 and C 1 case, but there are some differences in the finest levels.

Example 2. In the second example the data of the problem are chosen in such a way that
the exact solution is
$$u(x, y) = (-120x + x^2 - 96y - 8xy + 16y^2)^{12/5} \cos(\pi y/20),$$
defined on the domain shown in Figure 2(b). The geometry of the domain is given by
two bicubic Bézier patches, and the control points are chosen following the algorithm in
[29], in such a way that the geometry is given by an analysis-suitable G1 parametrization,
see Appendix A for details. Note that we have chosen the solution such that it has a
singularity along the interface.
In this example we start the adaptive simulation with
a coarse mesh of 8 × 8 elements on each patch. We present the convergence results in
Figure 5. As before, both the (relative) error and the estimator converge with optimal
rate, and both for the C 0 and the C 1 discretizations, with slightly better result for the C 1
spaces. We note that, since the singularity occurs along a line, optimal order of convergence
for higher degrees cannot be obtained without anisotropic refinement, as it was observed
in the numerical examples in [14, Section 4.6].
Figure 5: Relative error in H 1 semi-norm and corresponding estimator for Example 2 with p = (3, 3) and p = (4, 4), compared with C 0 case (left) and with global refinement case (right).
We also present in Figure 6 the finest meshes obtained with the different discretizations,
and it can be observed that the adaptive method correctly refines near the interface, where
the singularity occurs.
Example 3. We consider the same domain as in the previous example, and the right-hand
side and the boundary condition are chosen in such a way that the exact solution is given
by
$$u(x, y) = (y - 1.7)^{12/5} \cos(x/4).$$
In this case the solution has a singularity along the line y = 1.7, that crosses the interface
and is not aligned with the mesh.
The convergence results, that are presented in Figure 7, are very similar to the ones
of the previous example, and show optimal convergence rates for both the C 1 and the C 0
discretizations. As before, we also present in Figure 8 the finest meshes obtained with the
different discretizations. It is evident that the adaptive algorithm successfully refines along
the singularity line.
5.2. Bilaplacian problem
In the last example we consider the solution of the bilaplacian problem, given in strong
form by
$$\Delta^2 u = f \ \text{ in } \Omega, \qquad u = g_1 \ \text{ on } \partial\Omega, \qquad \frac{\partial u}{\partial n} = g_2 \ \text{ on } \partial\Omega.$$
(a) p = (3, 3), C 0 functions on the interface: NDOF=16310. (b) p = (3, 3), C 1 functions on the interface: NDOF=15741. (c) p = (4, 4), C 0 functions on the interface: NDOF=6357. (d) p = (4, 4), C 1 functions on the interface: NDOF=7347.
Figure 6: Hierarchical meshes for Example 2, with p = (3, 3) and p = (4, 4).
It is well known that the weak formulation of the problem in direct form requires the trial
and test functions to be in H 2(Ω). For the discretization with a Galerkin method, this
can be obtained if the discrete basis functions are C 1. The solution of the problem with
C 0 basis functions, instead, requires using a mixed variational formulation or some sort of weak enforcement of the C 1 continuity across the interface, for instance with Nitsche's method.
Example 4. For the last numerical test we solve the bilaplacian problem in the L-shaped
domain as depicted in Figure 2(a). The right-hand side and the boundary conditions are
chosen in such a way that the exact solution is given, in polar coordinates (ρ, θ), by
$$u(\rho, \theta) = \rho^{z+1}\big(C_1 F_1(\theta) - C_2 F_2(\theta)\big),$$
where the value in the exponent is chosen equal to $z = 0.544483736782464$, which is the smallest positive solution of
$$\sin(z\omega) + z \sin(\omega) = 0,$$
with $\omega = 3\pi/2$ for the L-shaped domain, see [21, Section 3.4].

Figure 7: Error in H 1 semi-norm and estimator for Example 3 with p = (3, 3) and p = (4, 4), compared with C 0 case (left) and with global refinement case (right).

The other terms are given by
$$C_1 = \frac{1}{z-1} \sin\!\left( \frac{3(z-1)\pi}{2} \right) - \frac{1}{z+1} \sin\!\left( \frac{3(z+1)\pi}{2} \right), \qquad C_2 = \cos\!\left( \frac{3(z-1)\pi}{2} \right) - \cos\!\left( \frac{3(z+1)\pi}{2} \right),$$
$$F_1(\theta) = \cos((z-1)\theta) - \cos((z+1)\theta), \qquad F_2(\theta) = \frac{1}{z-1} \sin((z-1)\theta) - \frac{1}{z+1} \sin((z+1)\theta).$$
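For reference, the exact solution written above can be evaluated with a few lines of Python; the root-finding bracket and the polar-angle convention (measured counterclockwise from the positive x-axis) are assumptions made for this sketch.

import numpy as np
from scipy.optimize import brentq

omega = 3*np.pi/2                                      # opening angle of the L-shaped domain
z = brentq(lambda s: np.sin(s*omega) + s*np.sin(omega), 0.1, 1.0)   # ~0.544483736782464

C1 = np.sin(1.5*(z-1)*np.pi)/(z-1) - np.sin(1.5*(z+1)*np.pi)/(z+1)
C2 = np.cos(1.5*(z-1)*np.pi) - np.cos(1.5*(z+1)*np.pi)

def u_exact(x, y):
    # polar coordinates with theta in [0, 2*pi), assumed measured from the positive x-axis
    rho, theta = np.hypot(x, y), np.mod(np.arctan2(y, x), 2*np.pi)
    F1 = np.cos((z-1)*theta) - np.cos((z+1)*theta)
    F2 = np.sin((z-1)*theta)/(z-1) - np.sin((z+1)*theta)/(z+1)
    return rho**(z+1) * (C1*F1 - C2*F2)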
The exact solution has a singularity at the reentrant corner, and it is the same kind of
singularity that one would encounter for the Stokes problem.
For our numerical test we start with a coarse mesh of 8 × 8 elements on each patch.
In this case, instead of refining the mesh with an adaptive algorithm we decided to refine
following a pre-defined strategy: at each refinement step, a region surrounding the reentrant
corner, and composed of 4 × 4 elements of the finest level, is marked for refinement, see
Figure 9(a). We remark that the implementation of the adaptive algorithm with a residual-
based estimator would require computing fourth order derivatives at the quadrature points,
and several jump terms across the interface, which is beyond the scope of the present work.
In Figure 9(b) we show the error obtained in H 2 semi-norm when computing with C 1
hierarchical splines of degrees 3 and 4 and regularity r equal to degree minus two within the
single patches, for the local refinement described above, and with C 1 isogeometric splines
of the same degree and inner regularity r with global uniform refinement. It is obvious
that the hierarchical spaces perform much better, as we obtain a lower error with far fewer degrees of freedom. In this case we do not see a big difference between the results
obtained for degrees 3 and 4, but this is caused by the fact that we are refining by hand, and the asymptotic regime has not been reached yet.

(a) p = (3, 3), C 0 functions on the interface: NDOF=8388. (b) p = (3, 3), C 1 functions on the interface: NDOF=8336. (c) p = (4, 4), C 0 functions on the interface: NDOF=6356. (d) p = (4, 4), C 1 functions on the interface: NDOF=6601.
Figure 8: Hierarchical meshes for Example 3, with p = (3, 3) and p = (4, 4).
6. Conclusions
We presented the construction of C 1 hierarchical functions on two-patch geometries and
their application in isogeometric analysis. After briefly reviewing the characterization of
C 1 tensor-product isogeometric spaces, we investigated the properties needed to effectively
use these spaces as background machinery for the hierarchical spline model. In particular,
the local linear independence of the one-level basis functions and the nested nature of the
considered C 1 spline spaces were proved. We also introduced an explicit expression of
the refinement masks under dyadic refinement, that among other things is useful for the
practical implementation of the hierarchical basis functions. The numerical examples show
that optimal convergence rates are obtained by the local refinement scheme for second and
fourth order problems, even in presence of singular solutions. In future work we plan to
generalize the construction to the multi-patch domain setting of [31], but this will require a different strategy with respect to the approach presented in this work since the basis functions of a single level may be locally linearly dependent.

(a) Refinement of the L-shaped domain. (b) Error in H 2 semi-norm.
Figure 9: Hierarchical mesh (a) and comparison of the results obtained by local refinement and C 1 space with global refinement (b) on Example 4.
Acknowledgment. Cesare Bracco, Carlotta Giannelli and Rafael Vázquez are members of the INdAM Research group GNCS. The INdAM support through GNCS and Finanziamenti Premiali SUNRISE is gratefully acknowledged. Rafael Vázquez has been partially supported by the ERC Advanced Grant CHANGE, grant number 694515, 2016-2020.
Appendix A. Geometry of the curved domain
The geometry in Fig. 2(b) for the examples in Section 5 is generated by following the algorithm in [29]. This technique is based on solving a quadratic minimization problem with linear side constraints, and constructs from an initial multi-patch geometry $\widetilde{\mathbf{F}}$ an analysis-suitable G1 multi-patch parameterization $\mathbf{F}$ possessing the same boundary, vertices and first derivatives at the vertices as $\widetilde{\mathbf{F}}$.
In our case, the initial geometry $\widetilde{\mathbf{F}}$ is given by the two-patch parameterization consisting of two quadratic Bézier patches $\widetilde{\mathbf{F}}^{(L)}$ and $\widetilde{\mathbf{F}}^{(R)}$ (i.e. without any internal knots) with the control points $\widetilde{c}^{(S)}_{i,j}$, $S \in \{L, R\}$, specified in Table A.1. This parameterization is not analysis-suitable G1.
Applying the algorithm in [29] (by using Mathematica), we construct an analysis-suitable G1 two-patch geometry $\mathbf{F}$ with bicubic Bézier patches $\mathbf{F}^{(L)}$ and $\mathbf{F}^{(R)}$. Their control points $c^{(S)}_{i,j}$, $S \in \{L, R\}$, are given in Table A.2, where for presenting some of their coordi-
nates the notations D = 99170 and
C1 = 333939/D,
C2 = 47387036/(22.5D),
C3 = −15800567/(5D), C4 = 242128576/(67.5D),
C5 = 57452423/(45D), C6 = 81952942/(22.5D),
are used.

$\widetilde{c}^{(L)}_{i,j}$:
(0, 0)      (−3, 1/3)        (−6, −2)
(−2, 5/2)   (−13/4, 53/20)   (−5, 2)
(0, 6)      (−3, 17/3)       (−7, 8)

$\widetilde{c}^{(R)}_{i,j}$:
(0, 0)      (13/5, 1)    (6, −1)
(−2, 5/2)   (39/20, 3)   (4, 11/3)
(0, 6)      (3, 5)       (11/2, 13/2)

Table A.1: Control points $\widetilde{c}^{(S)}_{i,j}$, S ∈ {L, R}, of the initial non-analysis-suitable G1 two-patch parameterization $\widetilde{\mathbf{F}}$.
$c^{(L)}_{i,j}$:
(0, 0)          (−2, 2/9)           (−4, −4/9)         (−6, −2)
(−4/3, 5/3)     (−127/50, 44/25)    (−98/25, 37/25)    (−16/3, 2/3)
(−4/3, 11/3)    (C3, C4)            (−89/25, 189/50)   (−17/3, 4)
(0, 6)          (−2, 52/9)          (−13/3, 58/9)      (−7, 8)

$c^{(R)}_{i,j}$:
(0, 0)          (26/15, 2/3)    (56/15, 1/3)     (6, −1)
(−4/3, 5/3)     (C1, C2)        (87/25, 113/50)  (14/3, 19/9)
(−4/3, 11/3)    (C5, C6)        (29/10, 4)       (9/2, 83/18)
(0, 6)          (2, 16/3)       (23/6, 11/2)     (11/2, 13/2)

Table A.2: Control points $c^{(S)}_{i,j}$, S ∈ {L, R}, of the resulting analysis-suitable G1 two-patch parameterization $\mathbf{F}$.
References
[1] F. Auricchio, L. Beirão da Veiga, A. Buffa, C. Lovadina, A. Reali, and G. Sangalli.
A fully "locking-free" isogeometric approach for plane linear elasticity problems: A
stream function formulation. Comput. Methods Appl. Mech. Engrg., 197(1):160–172,
2007.
[2] L. Beirão da Veiga, A. Buffa, G. Sangalli, and R. Vázquez. Mathematical analysis of
variational isogeometric methods. Acta Numer., 23:157–287, 5 2014.
[3] D. J. Benson, Y. Bazilevs, M.-C. Hsu, and T. J. R. Hughes. A large deformation,
rotation-free, isogeometric shell. Comput. Methods Appl. Mech. Engrg., 200(13):1367–
1378, 2011.
[4] M. Bercovier and T. Matskewich. Smooth Bézier Surfaces over Unstructured Quadri-
lateral Meshes. Lecture Notes of the Unione Matematica Italiana, Springer, 2017.
[5] A. Blidia, B. Mourrain, and N. Villamizar. G1-smooth splines on quad meshes with
4-split macro-patch elements. Comput. Aided Geom. Des., 52-53:106 – 125, 2017.
[6] C. Bracco, A. Buffa, C. Giannelli, and R. Vázquez. Adaptive isogeometric methods
with hierarchical splines: an overview. Discret. Contin. Dyn. S., 39(1):–, 2019.
[7] A. Buffa and C. Giannelli. Adaptive isogeometric methods with hierarchical splines:
Error estimator and convergence. Math. Models Methods Appl. Sci., 26:1–25, 2016.
[8] A. Buffa and C. Giannelli. Adaptive isogeometric methods with hierarchical splines:
Optimality and convergence rates. Math. Models Methods Appl. Sci., 27:2781–2802,
2017.
[9] C.L. Chan, C. Anitescu, and T. Rabczuk. Isogeometric analysis with strong multipatch
C1-coupling. Comput. Aided Geom. Des., 62:294–310, 2018.
[10] C.L. Chan, C. Anitescu, and T. Rabczuk. Strong multipatch C1-coupling for isogeo-
metric analysis on 2D and 3D domains. Comput. Methods Appl. Mech. Engrg., 357,
2019.
[11] A. Collin, G. Sangalli, and T. Takacs. Analysis-suitable G1 multi-patch parametriza-
tions for C1 isogeometric spaces. Comput. Aided Geom. Des., 47:93 – 113, 2016.
[12] J. A. Cottrell, T. J. R. Hughes, and Y. Bazilevs.
Isogeometric Analysis: Toward
Integration of CAD and FEA. John Wiley & Sons, Chichester, England, 2009.
[13] D. D’Angella, S. Kollmannsberger, E. Rank, and A. Reali. Multi-level B´ezier extrac-
tion for hierarchical local refinement of Isogeometric Analysis. Comput. Methods Appl.
Mech. Engrg., 328:147–174, 2018.
[14] G. Gantner. Optimal Adaptivity for Splines in Finite and Boundary Element Methods.
PhD thesis, Technische Universität Wien, 2017.
[15] G. Gantner, D. Haberlik, and D. Praetorius. Adaptive IGAFEM with optimal conver-
gence rates: Hierarchical B-splines. Math. Models Methods Appl. Sci., 27:2631–2674,
2017.
[16] E. Garau and R. Vázquez. Algorithms for the implementation of adaptive isogeometric
methods using hierarchical B-splines. Appl. Numer. Math., 123:58–87, 2018.
[17] C. Giannelli, B. Jüttler, and H. Speleers. THB–splines: the truncated basis for hier-
archical splines. Comput. Aided Geom. Des., 29:485–498, 2012.
[18] C. Giannelli, B. Jüttler, and H. Speleers. Strongly stable bases for adaptively refined
multilevel spline spaces. Adv. Comp. Math., 40:459–490, 2014.
[19] H. Gómez, V. M. Calo, Y. Bazilevs, and T. J. R. Hughes. Isogeometric analysis of the
Cahn–Hilliard phase-field model. Comput. Methods Appl. Mech. Engrg., 197(49):4333–
4352, 2008.
[20] H. Gomez, V. M. Calo, and T. J. R. Hughes. Isogeometric analysis of Phase–Field
models: Application to the Cahn–Hilliard equation. In ECCOMAS Multidisciplinary
Jubilee Symposium: New Computational Challenges in Materials, Structures, and Flu-
ids, pages 1–16. Springer Netherlands, 2009.
[21] P. Grisvard. Singularities in boundary value problems, volume 22 of Recherches
en Mathématiques Appliquées [Research in Applied Mathematics]. Masson, Paris;
Springer-Verlag, Berlin, 1992.
[22] D. Groisser and J. Peters. Matched Gk-constructions always yield Ck-continuous
isogeometric elements. Comput. Aided Geom. Des., 34:67 – 72, 2015.
[23] P. Hennig, M. Ambati, L. De Lorenzis, and M. K¨astner. Projection and transfer oper-
ators in adaptive isogeometric analysis with hierarchical B-splines. Comput. Methods
Appl. Mech. Engrg., 334:313 – 336, 2018.
[24] P. Hennig, S. M¨uller, and M. K¨astner. B´ezier extraction and adaptive refinement of
truncated hierarchical NURBS. Comput. Methods Appl. Mech. Engrg., 305:316–339,
2016.
[25] J. Hoschek and D. Lasser. Fundamentals of computer aided geometric design. A K
Peters Ltd., Wellesley, MA, 1993.
[26] T. J. R. Hughes, J. A. Cottrell, and Y. Bazilevs. Isogeometric analysis: CAD, finite
elements, NURBS, exact geometry and mesh refinement. Comput. Methods Appl.
Mech. Engrg., 194(39-41):4135–4195, 2005.
[27] M. Kapl, F. Buchegger, M. Bercovier, and B. Jüttler. Isogeometric analysis with geo-
metrically continuous functions on planar multi-patch geometries. Comput. Methods
Appl. Mech. Engrg., 316:209 – 234, 2017.
[28] M. Kapl, G. Sangalli, and T. Takacs. Dimension and basis construction for analysis-
suitable G1 two-patch parameterizations. Comput. Aided Geom. Des., 52–53:75 – 89,
2017.
[29] M. Kapl, G. Sangalli, and T. Takacs. Construction of analysis-suitable G1 planar
multi-patch parameterizations. Comput.-Aided Des., 97:41–55, 2018.
[30] M. Kapl, G. Sangalli, and T. Takacs.
Isogeometric analysis with C 1 functions on
unstructured quadrilateral meshes. Technical Report 1812.09088, arXiv.org, 2018.
[31] M. Kapl, G. Sangalli, and T. Takacs. An isogeometric C 1 subspace on unstructured
multi-patch planar domains. Comput. Aided Geom. Des., 69:55–75, 2019.
[32] M. Kapl, V. Vitrih, B. Jüttler, and K. Birner. Isogeometric analysis with geometrically
continuous functions on two-patch geometries. Comput. Math. Appl., 70(7):1518 –
1538, 2015.
[33] K. Karčiauskas, T. Nguyen, and J. Peters. Generalizing bicubic splines for modeling
and IGA with irregular layout. Comput.-Aided Des., 70:23 – 35, 2016.
[34] K. Karčiauskas and J. Peters. Refinable bi-quartics for design and analysis. Comput.-
Aided Des., pages 204–214, 2018.
[35] J. Kiendl, Y. Bazilevs, M.-C. Hsu, R. Wüchner, and K.-U. Bletzinger. The bending
strip method for isogeometric analysis of Kirchhoff-Love shell structures comprised of
multiple patches. Comput. Methods Appl. Mech. Engrg., 199(35):2403–2416, 2010.
[36] J. Kiendl, K.-U. Bletzinger, J. Linhard, and R. Wüchner. Isogeometric shell analysis
with Kirchhoff-Love elements. Comput. Methods Appl. Mech. Engrg., 198(49):3902–
3914, 2009.
[37] R. Kraft. Adaptive and linearly independent multilevel B–splines. In A. Le Méhauté,
C. Rabut, and L. L. Schumaker, editors, Surface Fitting and Multiresolution Methods,
pages 209–218. Vanderbilt University Press, Nashville, 1997.
[38] J. Liu, L. Dedè, J. A. Evans, M. J. Borden, and T. J. R. Hughes. Isogeometric analysis
of the advective Cahn-Hilliard equation: Spinodal decomposition under shear flow. J.
Comp. Phys., 242:321 – 350, 2013.
[39] G. Lorenzo, M. A. Scott, K. Tew, T. J. R. Hughes, and H. Gomez. Hierarchically
refined and coarsened splines for moving interface problems, with particular applica-
tion to phase-field models of prostate tumor growth. Comput. Methods Appl. Mech.
Engrg., 319:515–548, 2017.
[40] B. Mourrain, R. Vidunas, and N. Villamizar. Dimension and bases for geometrically
continuous splines on surfaces of arbitrary topology. Comput. Aided Geom. Des.,
45:108 – 133, 2016.
[41] T. Nguyen, K. Karčiauskas, and J. Peters. A comparative study of several classical,
discrete differential and isogeometric methods for solving Poisson’s equation on the
disk. Axioms, 3(2):280–299, 2014.
[42] T. Nguyen, K. Karčiauskas, and J. Peters. C 1 finite elements on non-tensor-product
2d and 3d manifolds. Appl. Math. Comput., 272:148 – 158, 2016.
[43] T. Nguyen and J. Peters. Refinable C 1 spline elements for irregular quad layout.
Comput. Aided Geom. Des., 43:123 – 130, 2016.
[44] J. Peters. Geometric continuity.
In Handbook of computer aided geometric design,
pages 193–227. North-Holland, Amsterdam, 2002.
[45] U. Reif. A refinable space of smooth spline surfaces of arbitrary topological genus. J.
Approx. Theory, 90(2):174–199, 1997.
[46] A. Tagliabue, L. Dedè, and A. Quarteroni. Isogeometric analysis and error estimates
for high order partial differential equations in fluid dynamics. Comput. & Fluids,
102:277 – 303, 2014.
[47] D. Toshniwal, H. Speleers, R. Hiemstra, and T. J. R. Hughes. Multi-degree smooth
polar splines: A framework for geometric modeling and isogeometric analysis. Comput.
Methods Appl. Mech. Engrg., 316:1005–1061, 2017.
[48] D. Toshniwal, H. Speleers, and T. J. R. Hughes. Analysis-suitable spline spaces of
arbitrary degree on unstructured quadrilateral meshes. Technical Report 16, Institute
for Computational Engineering and Sciences (ICES), 2017.
[49] D. Toshniwal, H. Speleers, and T. J. R. Hughes. Smooth cubic spline spaces on
unstructured quadrilateral meshes with particular emphasis on extraordinary points:
Geometric design and isogeometric analysis considerations. Comput. Methods Appl.
Mech. Engrg., 327:411–458, 2017.
[50] R. Vázquez. A new design for the implementation of isogeometric analysis in Octave
and Matlab: GeoPDEs 3.0. Comput. Math. Appl., 72:523–554, 2016.
[51] A.-V. Vuong, C. Giannelli, B. Jüttler, and B. Simeon. A hierarchical approach to
adaptive local refinement in isogeometric analysis. Comput. Methods Appl. Mech.
Engrg., 200:3554–3567, 2011.
List of symbols
Spline space
p: Spline degree, p ≥ 3
r: Spline regularity, 1 ≤ r ≤ p − 2
$\Xi^r_p$: Open knot vector
$\tau_i$: Internal breakpoints of the knot vector $\Xi^r_p$
T: Ordered set of internal breakpoints $\tau_i$
k: Number of different internal breakpoints of the knot vector $\Xi^r_p$
$S^r_p$: Univariate spline space of degree p and regularity r on [0, 1] over the knot vector $\Xi^r_p$
$S^{r+1}_p$, $S^r_{p-1}$: Univariate spline spaces of higher regularity and lower degree, respectively, defined from the same internal breakpoints as $S^r_p$
$N^r_{i,p}$, $N^{r+1}_{i,p}$, $N^r_{i,p-1}$: B-splines of the spline spaces $S^r_p$, $S^{r+1}_p$ and $S^r_{p-1}$, respectively
$n$, $n_0$, $n_1$: Dimensions of the spline spaces $S^r_p$, $S^{r+1}_p$ and $S^r_{p-1}$, respectively
$I$, $I_0$, $I_1$: Index sets of the B-splines $N^r_{i,p}$, $N^{r+1}_{i,p}$ and $N^r_{i,p-1}$, respectively
$J_{0,i}$, $J_{1,i}$: Index subsets of $I$ related to the B-splines $N^{r+1}_{i,p}$ and $N^r_{i,p-1}$, for $i \in I_0$ and $i \in I_1$, respectively
$\zeta_m$: Greville abscissae of the spline space $S^r_p$, $m \in I$
$\mathbf{N}_0$, $\mathbf{N}_1$, $\mathbf{N}_2$: Vectors of tensor-product B-splines $N^r_{i,p} N^r_{j,p}$

Geometry
$(S)$: Upper index referring to a specific patch, $S \in \{L, R\}$
$\Omega^{(S)}$: Quadrilateral patch
$\Omega$: Two-patch domain $\Omega = \Omega^{(L)} \cup \Omega^{(R)}$
$\Gamma$: Common interface of the two-patch domain $\Omega$
$\mathbf{F}^{(S)}$: Geometry mapping of the patch $\Omega^{(S)}$
$\mathbf{F}$: Two-patch geometry $\mathbf{F} = (\mathbf{F}^{(L)}, \mathbf{F}^{(R)})$
$\mathbf{F}_0$: Parameterization of the interface $\Gamma$
$\mathbf{d}$: Specific transversal vector to $\Gamma$
$\xi_1$, $\xi_2$: Parameter directions of the geometry mappings
$c^{(S)}_{i,j}$: Spline control points of the geometry mapping $\mathbf{F}^{(S)}$
$\alpha^{(S)}$, $\beta^{(S)}$, $\beta$: Gluing functions of the two-patch geometry $\mathbf{F}$
$\gamma$: Scalar function, $\gamma \neq 0$

C 1 isogeometric space
$V$: Space of C 1 isogeometric spline functions on $\Omega$
$W$: Subspace of $V$
$\Phi$: Basis of $W$
$\Phi_{\Omega^{(S)}}$, $\Phi_{\Gamma_0}$, $\Phi_{\Gamma_1}$: Parts of the basis $\Phi$, $\Phi = \Phi_{\Omega^{(L)}} \cup \Phi_{\Omega^{(R)}} \cup \Phi_{\Gamma_0} \cup \Phi_{\Gamma_1}$
$\phi^{\Omega^{(S)}}_{i,j}$: Basis functions of $\Phi_{\Omega^{(S)}}$, $i \in I \setminus \{0, 1\}$, $j \in I$
$\phi^{\Gamma_0}_{i}$: Basis functions of $\Phi_{\Gamma_0}$, $i \in I_0$
$\phi^{\Gamma_1}_{i}$: Basis functions of $\Phi_{\Gamma_1}$, $i \in I_1$
$\widehat{\boldsymbol{\phi}}^{(S)}_{\Gamma_0}$, $\widehat{\boldsymbol{\phi}}^{(S)}_{\Gamma_1}$, $\widehat{\boldsymbol{\phi}}^{(S)}_{\Omega^{(S)}}$: Vectors of the spline functions $\phi^{\Gamma_0}_{i} \circ \mathbf{F}^{(S)}$, $\phi^{\Gamma_1}_{i} \circ \mathbf{F}^{(S)}$ and $\phi^{\Omega^{(S)}}_{i,j} \circ \mathbf{F}^{(S)}$, respectively
$\widehat{B}$, $\widetilde{B}^{(S)}$, $B^{(S)}$: Transformation matrices
$\widehat{b}_{i,j}$, $\widetilde{b}^{(S)}_{i,j}$, $b^{(S)}_{i,j}$: Entries of the matrices $\widehat{B}$, $\widetilde{B}^{(S)}$ and $B^{(S)}$
$\mathbf{B}^{(S)}$: Block matrix assembled by the matrices $\widehat{B}$, $\widetilde{B}^{(S)}$, $B^{(S)}$ and the identity matrix $I_{n(n-2)}$

Hierarchical space
$\ell$: Upper index referring to a specific level
$\Lambda^{r,\ell+1}_{p}$, $\Lambda^{r+1,\ell+1}_{p}$, $\Lambda^{r,\ell+1}_{p-1}$: Refinement matrices for the B-splines $N^{r,\ell}_{i,p}$, $N^{r+1,\ell}_{i,p}$ and $N^{r,\ell}_{i,p-1}$, respectively
$\lambda^{\ell+1}_{i,j}$: Entries of the refinement matrix $\Lambda^{r,\ell+1}_{p}$
$\Theta^{\ell+1}_{ij}$: Block matrices of the refinement mask $\Lambda^{r,\ell+1}_{p} \otimes \Lambda^{r,\ell+1}_{p}$, $0 \leq i \leq j \leq 2$
$W_H$: C 1 hierarchical isogeometric spline space
W: Basis of $W_H$

Most notations in the paragraphs "Spline space" and "C 1 isogeometric space" can be directly extended to the hierarchical setting by adding the upper index $\ell$ to refer to the considered level.
|
synthetic_cpt | 3 | Active_Learning_Principles_for_In-Context_Learning_with_Large_Language_Models.pdf | 7
1
0
2
v
o
N
5
1
]
G
L
.
s
c
[
5
v
6
5
9
7
0
.
2
0
7
1
:
v
i
X
r
a
Generative Adversarial Active Learning
Jia-Jie Zhu
Max Planck Institute for Intelligent Systems
Tübingen, Germany
[email protected]
Jose Bento
Department of Computer Science
Boston College
Chestnut Hill, Massachusetts, USA
[email protected]
Abstract
We propose a new active learning by query synthesis approach using Generative
Adversarial Networks (GAN). Different from regular active learning, the result-
ing algorithm adaptively synthesizes training instances for querying to increase
learning speed. We generate queries according to the uncertainty principle, but
our idea can work with other active learning principles. We report results from
various numerical experiments to demonstrate the effectiveness of the proposed approach. In some settings, the proposed algorithm outperforms traditional pool-based approaches. To the best of our knowledge, this is the first active learning work
using GAN.
1 Introduction
One of the most exciting machine learning breakthroughs in recent years is the generative adversarial
networks (GAN) [20]. It trains a generative model by finding the Nash Equilibrium of a two-player
adversarial game. Its ability to generate samples in complex domains enables new possibilities for
active learners to synthesize training samples on demand, rather than relying on choosing instances
to query from a given pool.
In the classification setting, given a pool of unlabeled data samples and a fixed labeling budget, ac-
tive learning algorithms typically choose training samples strategically from a pool to maximize the
accuracy of trained classifiers. The goal of these algorithms is to reduce label complexity. Such
approaches are called pool-based active learning. This pool-based active learning approach is illus-
trated in Figure 1 (a).
In a nutshell, we propose to use GANs to synthesize informative training instances that are adapted
to the current learner. We then ask human oracles to label these instances. The labeled data is added
back to the training set to update the learner. This protocol is executed iteratively until the label
budget is reached. This process is shown in Figure 1 (b).
The main contributions of this work are as follows:
• To the best of our knowledge, this is the first active learning framework using deep genera-
tive models1.
• While we do not claim our method is always superior to the previous active learners in
terms of accuracy, in some cases, it yields classification performance not achievable even by
a fully supervised learning scheme. With enough capacity from the trained generator, our
method allows us to have control over the generated instances which may not be available
to the previous active learners.
1The appendix of [37] mentioned three active learning attempts but did not report numerical results. Our
approach is also different from those attempts.
Learner
Learner
Training
Training
Pool
x, ?
x, y
GAN
x, y
x, ?
(a) Pool-based
(b) GAAL
Figure 1: (a) Pool-based active learning scenario. The learner selects samples for querying from
a given unlabeled pool. (b) GAAL algorithm. The learner synthesizes samples for querying using
GAN.
• We conduct experiments to compare our active learning approach with self-taught learning2.
The results are promising.
• This is the first work to report numerical results in active learning synthesis for image
classification. See [43, 30]. The proposed framework may inspire future GAN applications
in active learning.
• The proposed approach should not be understood as a pool-based active learning method.
Instead, it is active learning by query synthesis. We show that our approach can perform
competitively when compared against pool-based methods.
2 Related Work
Our work is related to two different subjects, active learning and deep generative models.
Active learning algorithms can be categorized into stream-based, pool-based and learning by query
synthesis. Historically, stream-based and pool-based are the two popular scenarios of active learning
[43].
Our method falls into the category of query synthesis. Early active learning by queries synthesis
achieves good results only in simple domains such as X = {0, 1}3, see [1, 2]. In [30], the authors
synthesized learning queries and used human oracles to train a neural network for classifying hand-
written characters. However, they reported poor results due to the images generated by the learner
being sometimes unrecognizable to the human oracles. We will report results on similar tasks such
as differentiating 5 versus 7, showing the advancement of our active learning scheme. Figure 2
compares image samples generated by the method in [30] and our algorithm.
Figure 2: (Left) Image queries synthesized by a neural network for handwritten digits recognition.
Source: [30]. (Right) Image queries synthesized by our algorithm, GAAL.
The popular SVMactive algorithm from [45] is an efficient pool-based active learning scheme for
SVM. Their scheme is a special instance of the uncertainty sampling principle which we also employ.
[28] reduces the exhaustive scanning through the database employed by SVMactive. Our algorithm shares the same advantage of not needing to test every sample in the database at each iteration of active learning, although we achieve this by not using a pool at all rather than by a more clever selection scheme. [48] proposed active transfer learning, which is reminiscent of our experiments in Section 5.1. However, we do not
consider collecting new labeled data in target domains of transfer learning.
2See the supplementary document.
There have been some applications of generative models in semi-supervised learning and active
learning. Previously, [36] proposed a semi-supervised learning approach to text classification based
on generative models. [26] applied Gaussian mixture models to active learning. In that work, the
generative model served as a classifier. Compared with these approaches, we apply generative mod-
els to directly synthesize training data. This is a more challenging task.
One building block of our algorithm is the groundbreaking work of the GAN model in [20]. Our
approach is an application of GAN in active learning.
Our approach is also related to [44] which studied GAN in a semi-supervised setting. However, our
task is active learning which is different from the semi-supervised learning they discussed. Our work
shares the common strength with the self-taught learning algorithm in [39] as both methods use the
unlabeled data to help with the task. In the supplementary document, we compare our algorithm
with a self-taught learning algorithm.
In a way, the proposed approach can be viewed as an adversarial training procedure [21], where
the classifier is iteratively trained on the adversarial examples generated by the algorithm based on
solving an optimization problem. [21] focuses on the adversarial examples that are generated by
perturbing the original datasets within the small epsilon-ball whereas we seek to produce examples
using active learning criterion.
To the best of our knowledge, the only previous mentioning of using GAN for active learning is in
the appendix of [37]. The authors discussed therein three attempts to reduce the number of queries.
In the third attempt, they generated synthetic samples and sorted them by the information content
whereas we adaptively generate new queries by solving an optimization problem. There were no
reported active learning numerical results in that work.
3 Background
We briefly introduce some important concepts in active learning and generative adversarial network.
3.1 Active Learning
In the PAC learning framework [46], label complexity describes the number of labeled instances
needed to find a hypothesis with error ε. The label complexity of passive supervised learning, i.e. using all the labeled samples as training data, is O(d/ε) [47], where d is the VC dimension of the hypothesis class H. Active learning aims to reduce the label complexity by choosing the most informative instances for querying while attaining a low error rate. For example, [24] proved that the active learning algorithm from [10] has the label complexity bound O(θd log(1/ε)), where θ is defined
therein as the disagreement coefficient, thus reducing the theoretical bound for the number of labeled
instances needed from passive supervised learning. Theoretically speaking, the asymptotic accuracy
of an active learning algorithm can not exceed that of a supervised learning algorithm. In practice,
as we will demonstrate in the experiments, our algorithm may be able to achieve higher accuracy
than the passive supervised learning in some cases.
Stream-based active learning makes decisions on whether to query the streamed-in instances or not.
Typical methods include [5, 10, 14]. In this work, we will focus on comparing pool-based and query
synthesis methods.
In pool-based active learning, the learner selects the unlabeled instances from an existing pool based
on a certain criterion. Some pool-based algorithms make selections by using clustering techniques
or maximizing a diversity measure, e.g. [7, 50, 13, 35, 51, 25]. Another commonly used pool-
based active learning principle is uncertainty sampling. It amounts to querying the most uncertain
instances. For example, algorithms in [45, 8] query the labels of the instances that are closest to
the decision boundary of the support vector machine. Figure 3 (a) illustrates this selection process.
Other pool-based works include [27] which proposes a Bayesian active learning by disagreement
algorithm in the context of learning user preferences, [22, 18] which study the submodularity nature
of sequential active learning schemes.
Mathematically, let P be the pool of unlabeled instances, and f = W φ(x) + b be the separating
hyperplane. φ is the feature map induced by the SVM kernel. The SVMactive algorithm in [45]
chooses a new instance to query by minimizing the distance (or its proxy) to the hyperplane
$$\min_{x \in P} \; \| W \phi(x) + b \|. \qquad (1)$$
This formulation can be justified by the version space theory in separable cases [45] or by other
analyses in non-separable cases, e.g., [8, 6]. This simple and effective method is widely applied in
many studies, e.g., [17, 49].
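A minimal scikit-learn sketch of this pool-based selection rule is given below; the linear kernel, the classifier settings and the batch size are illustrative assumptions rather than the configuration used in [45].

import numpy as np
from sklearn.svm import LinearSVC

def svm_active_query(pool_X, labeled_X, labeled_y, batch_size=10):
    # Pick the pool instances closest to the current SVM decision boundary, cf. (1).
    clf = LinearSVC().fit(labeled_X, labeled_y)
    margins = np.abs(clf.decision_function(pool_X))   # proxy distance |W phi(x) + b|
    return np.argsort(margins)[:batch_size]           # indices of the instances to label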
In the query synthesis scenario, an instance x is synthesized instead of being selected from an ex-
isting pool. Previous methods tend to work in simple low-dimensional domains [2] but fail in more
complicated domains such as images [30]. Our approach aims to tackle this challenge.
For an introduction to active learning, readers are referred to [43, 12].
3.2 Generative Adversarial Networks
Generative adversarial networks (GAN) is a novel generative model invented by [20]. It can be
viewed as the following two-player minimax game between the generator G and the discriminator
D,
min
θ2
max
θ1 n
Ex∼pdata log Dθ1(x) + Ez log(1 − Dθ1(Gθ2 (z)))o,
(2)
where pdata is the underlying distribution of the real data and z is uniformly distributed random
variable. D and G each has its own set of parameter θ1 and θ2. By solving this game, a generator
G is obtained. In the ideal scenario, given random input z, we have G(z) ∼ pdata. However, finding
this Nash Equilibrium is a difficult problem in practice. There is no theoretical guarantee for finding
the Nash Equilibrium due to the non-convexity of D and G. A gradient descent type algorithm is
typically used for solving this optimization problem.
A few variants of GAN have been proposed since [20]. The authors of [38] use GAN with deep con-
volutional neural network structures for applications in computer vision(DCGAN). DCGAN yields
good results and is relatively stable. Conditional GAN[16, 15, 34] is another variant of GAN in
which the generator and discriminator can be conditioned on other variables, e.g., the labels of im-
ages. Such generators can be controlled to generate samples from a certain category. [9] proposed
infoGAN which learns disentangled representations using unsupervised learning.
A few updated GAN models have been proposed. [41] proposed a few improved techniques for
training GAN. Another potentially important improvement of GAN, Wasserstein GAN, has been
proposed by [3, 23]. The authors proposed an alternative to training GAN which can avoid insta-
bilities such as mode collapse with theoretical analysis. They also proposed a metric to evaluate
the quality of the generation which may be useful for future GAN studies. Possible applications of
Wasserstein GAN to our active learning framework are left for future work.
The invention of GAN triggered various novel applications. [52] performed image inpainting task
using GAN. [53] proposed iGAN to turn sketches into realistic images. [33] applied GAN to sin-
gle image super-resolution. [54] proposed CycleGAN for image-to-image translation using only
unpaired training data.
Our study is the first GAN application to active learning.
For a comprehensive review of GAN, readers are referred to [19].
4 Generative Adversarial Active Learning
In this section, we introduce our active learning approach which we call Generative Adversarial
Active Learning (GAAL). It combines query synthesis with the uncertainty sampling principle.
The intuition of our approach is to generate instances which the current learner is uncertain about,
i.e. applying the uncertainty sampling principle. One particular choice for the loss function is based
on the uncertainty sampling principle explained in Section 3.1. In the setting of a classifier with the decision function f(x) = Wφ(x) + b, the (proxy) distance to the decision boundary is ‖Wφ(x) + b‖. Similar to the intuition of (1), given a trained generator function G, we formulate the active learning synthesis as the following optimization problem
$$\min_{z} \; \| W^{\top} \phi(G(z)) + b \|, \qquad (3)$$
Algorithm 1 Generative Adversarial Active Learning (GAAL)
1: Train generator G on all unlabeled data by solving (2)
2: Initialize labeled training dataset S by randomly picking a small fraction of the data to label
3: repeat
4:    Solve optimization problem (3) according to the current learner by descending the gradient ∇z‖W⊤φ(G(z)) + b‖
5:    Use the solution {z1, z2, . . . } and G to generate instances for querying
6:    Label {G(z1), G(z2), . . . } by human oracles
7:    Add labeled data to the training dataset S and re-train the learner, update W, b
8: until Labeling budget is reached
where z is the latent variable and G is obtained by the GAN algorithm. Intuitively, minimizing
this loss will push the generated samples toward the decision boundary. Figure 3 (b) illustrates this
idea. Compared with the pool-based active learning in Figure 3 (a), our hope is that it may be able to
generate more informative instances than those available in the existing pool.
(a) SVMactive
(b) GAAL
Figure 3: (a) SVMactive algorithm selects the instances that are closest to the boundary to query
the oracle. (b) GAAL algorithm synthesizes instances that are informative to the current learner.
Synthesized instances may be more informative to the learner than other instances in the existing
pool.
The solution(s) to this optimization problem, G(z), after being labeled, will be used as new training
data for the next iteration. We outline our procedure in Algorithm 1. It is possible to use a state-of-
the-art classifier, such as convolutional neural networks. To do this, we can replace the feature map
φ in Equation 3 with a feed-forward function of a convolutional neural network. In that case, the
linear SVM will become the output layer of the network. In step 4 of Algorithm 1, one may also
use a different active learning criterion. We emphasize that our contribution is the general framework rather than any specific criterion.
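A possible PyTorch sketch of the query-synthesis step (step 4 of Algorithm 1) is shown below; the latent dimension, the optimizer settings and the feature_map argument (the identity for a linear SVM, or the feed-forward part of a CNN) are illustrative assumptions rather than the exact implementation used in the experiments.

import torch

def synthesize_queries(G, w, b, feature_map, n_queries=10, z_dim=100,
                       steps=200, lr=0.05, momentum=0.9):
    # Gradient descent on z for problem (3): push G(z) toward the decision boundary.
    w = torch.as_tensor(w, dtype=torch.float32)
    z = (2*torch.rand(n_queries, z_dim) - 1).requires_grad_(True)   # uniform init
    opt = torch.optim.SGD([z], lr=lr, momentum=momentum)
    for p in G.parameters():
        p.requires_grad_(False)                  # only z is optimized, G stays fixed
    for _ in range(steps):
        opt.zero_grad()
        scores = feature_map(G(z)) @ w + b       # proxy signed distance to the hyperplane
        loss = scores.abs().sum()                # minimize ||W^T phi(G(z)) + b|| per sample
        loss.backward()
        opt.step()
    with torch.no_grad():
        return G(z)                              # instances to be labeled by the oracle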
In training GAN, we follow the procedure detailed in [38]. Optimization problem (3) is non-convex
with possibly many local minima. One typically aims at finding good local minima rather than the
global minimum. We use a gradient descent algorithm with momentum to solve this problem. We
also periodically restart the gradient descent to find other solutions. The gradient of D and G is
calculated using back-propagation.
Alternatively, we can incorporate diversity into our active learning principle. Some active learning
approaches rely on maximizing diversity measures, such as the Shannon Entropy. In our case, we
can include in the objective function (3) a diversity measure such as proposed in [51, 25], thus
increasing the diversity of samples. The evaluation of this alternative approach is left for future
work.
5 Experiments
We perform active learning experiments using the proposed approach. We also compare our ap-
proach to self-taught learning, a type of transfer learning method, in the supplementary document.
The GAN implementation used in our experiment is a modification of a publicly available TensorFlow
DCGAN implementation3. The network architecture of DCGAN is described in [38].
3https://github.com/carpedm20/DCGAN-tensorflow
In our experiments, we focus on binary image classification, although this can be generalized to multiple classes using a one-vs-one or one-vs-all scheme [29]. Recent advancements in GAN research show that GANs could potentially model language as well [23], although those results are preliminary at the current stage. We use a linear SVM as our classifier of choice (with parameter γ = 0.001). Even
though classifiers with much higher accuracy (e.g., convolutional neural networks) can be used,
our purpose is not to achieve absolute high accuracy but to study the relative performance between
different active learning schemes.
The following schemes are implemented and compared in our experiments.
• The proposed generative adversarial active learning (GAAL) algorithm as in Algorithm 1.
• Using regular GAN to generate training data. We refer to this as simple GAN.
• SVMactive algorithm from [45].
• Passive random sampling, which randomly samples instances from the unlabeled pool.
• Passive supervised learning, i.e., using all the samples in the pool to train the classifier.
• Self-taught learning from [39].
We initialize the training set with 50 randomly selected samples. The algorithms proceed with a
batch of 10 queries every time.
We use two datasets for training, the MNIST and CIFAR-10. The MNIST dataset is a well-known
image classification dataset with 60000 training samples. The training set and the test set follow the
same distribution. We perform the binary classification experiment distinguishing 5 and 7 which is
reminiscent to [30]. The training set of CIFAR-10 dataset consists of 50000 32 × 32 color images
from 10 categories. One might speculate the possibility of distinguishing cats and dogs by training
on cat-like dogs or dog-like cats. In practice, our human labelers failed to confidently identify most
of the generated cat and dog images. Figure 4 (Top) shows generated samples. The authors of [41]
reported attempts to generate high-resolution animal pictures, but with the wrong anatomy. We leave
this task for future studies, possibly with improved techniques such as [3, 23]. For this reason, we
perform binary classification on the automobile and horse categories. It is relatively easy for human
labelers to identity car and horse body shapes. Typical generated samples, which are presented to
the human labelers, are shown in Figure 4.
Figure 4: Samples generated by GAAL (Top) Generated samples in cat and dog categories. (Bottom
Left) MNIST dataset. (Bottom Right) CIFAR-10 dataset.
5.1 Active Learning
We use all the images of 5 and 7 from the MNIST training set as our unlabeled pool to train the
generator G. Different from traditional active learning, we do not select new samples from the pool
after initialization. Instead, we apply Algorithm 1 to generate a training query. For the generator
D and G, we follow the same network architecture of [38]. We use linear SVM as our classifier
although other classifiers can be used, e.g. [45, 42, 43].
We first test the trained classifier on a test set that follows a distribution different from the training
set. One purpose is to demonstrate the adaptive capability of the GAAL algorithm. In addition,
because the MNIST test set and training set follow the same distribution, pool-based active learning
methods have an natural advantage over active learning by synthesis since they use real images
drawn from the exact same distribution as the test set. It is thus reasonable to test on sets that follow
different, albeit similar, distributions. To this end, we use the USPS dataset from [32] as the test set
with standard preprocessing. In reality, such settings are very common, e.g., training autonomous
drivers on simulated datasets and testing on real vehicles; training on handwriting characters and
recognizing writings in different styles, etc. This test setting is related to transfer learning, where
the distribution of the training domain Ptr(x, y) is different from that of the target domain Pte(x, y).
Figure 5 (Top) shows the results of our first experiment.
(Plots of classification accuracy versus the number of labeled samples, for SVMactive, fully supervised learning, GAAL, simple GAN, and random sampling.)
Figure 5: Active learning results. (Top) Train on MNIST, test on USPS. Classifying 5 and 7. The
results are averaged over 10 runs. (Bottom Left) Train on MNIST, test on MNIST. Classifying 5 and
7. (Bottom Right) CIFAR-10 dataset, classifying automobile and horse. The results are averaged
over 10 runs. The error bars represent the empirical standard deviation of the average values. The
figures are best viewed in color.
When using the full training set, with 11000 training images, the fully supervised accuracy is
70.44%. The accuracy of the random sampling scheme steadily approaches that level. On the
other hand, GAAL is able to achieve accuracies better than that of the fully supervised scheme.
With 350 training samples, its accuracy improves over supervised learning and even SVMactive, an
aggressive active learner [11, 45]. Obviously, the accuracy of both SVMactive and random sampling
will eventually converge to the fully supervised learning accuracy. Note that for the SVMactive
algorithm, an exhaustive scan through the training pool is not always practical. In such cases, the
common practice is to restrict the selection pool to a small random subset of the original data.
For completeness, we also perform the experiments in the settings where the training and test set
follow the same distribution. Figure 5 (Bottom) shows these results. Somewhat surprisingly, in
Figure 5 (Left), GAAL’s classification accuracy starts to drop after about 100 samples. One possible
explanation is that GAAL may be generating points close to the boundary that are also close to each
other. This is more likely to happen if the boundary does not change much from one active learning
cycle to the next. This probably happens because the test and train sets are identically distributed
and simple, as in the case of MNIST. Therefore, after a while, the training set may be filled with many similar
points, biasing the classifier and hurting accuracy. In contrast, because of the finite and discrete
nature of pools in the given datasets, a pool-based approach, such as SVMactive, most likely explores
points near the boundary that are substantially different. It is also forced to explore further points
once these close-by points have already been selected. In a sense, the strength of GAAL might in
fact be hurting its classification accuracy. We believe this effect is not so pronounced when the test
and train sets are different because the boundary changes more significantly from one cycle to the
next, which in turn induces some diversity in the generated samples.
To reach competitive accuracy when the training and test set follow the same distribution, we might
incorporate a diversity term into our objective function in GAAL. We will address this in future
work.
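Purely as an illustration of the kind of term this might involve (nothing here is implemented or evaluated in this work), a diversity penalty could discourage near-duplicate queries within a synthesized batch, for example:

    import torch

    def diversity_penalty(x, eps=1e-6):
        # x: (batch, dim) generated queries; small pairwise distances incur a large penalty.
        d = torch.cdist(x, x) + eps
        off_diag = ~torch.eye(len(x), dtype=torch.bool)
        return (1.0 / d[off_diag]).mean()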
In the CIFAR-10 dataset, our human labelers noticed a higher chance of bad generated samples, e.g.,
instances that fail to represent either of the categories. This may be because of the significantly higher
dimensionality compared to the MNIST dataset. In such cases, we asked the labelers to only label the samples
they could distinguish. We speculate that recent improvements on GANs, e.g., [41, 3, 23], may help mitigate
this issue, given that the cause is the instability of GAN training. Addressing this limitation will be left to future
studies.
5.2 Balancing exploitation and exploration
The proposed Algorithm 1 can be understood as an exploitation method, i.e., it focuses on generating
the most informative training data based on the current decision boundary. On the other hand, it is
often desirable for the algorithm to explore new regions of the data. To achieve this, we modify
Algorithm 1 by simply executing random sampling every once in a while. This is a common practice
in active learning [4, 40]. We use the same experiment setup as in the previous section. Figure 6
shows the results of this mixed scheme.
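The loop below is a schematic of this mixed scheme, assuming illustrative stub functions for the GAAL query step and the human oracle (names and data are placeholders, not the paper's implementation):

    import numpy as np
    from sklearn.svm import LinearSVC

    rng = np.random.default_rng(0)
    pool = rng.normal(size=(1000, 784))                    # stand-in unlabeled pool
    labeled_x = rng.normal(size=(50, 784))                 # initial labeled set
    labeled_y = rng.integers(0, 2, size=50)

    def gaal_query(clf):
        # In the paper this is Algorithm 1: optimize latent codes of the trained GAN so that
        # G(z) lies near clf's decision boundary (see the earlier sketch). Stubbed here.
        return rng.normal(size=(1, 784))

    def oracle_label(x):
        return rng.integers(0, 2, size=len(x))             # a human labeler in practice

    for cycle in range(60):
        clf = LinearSVC().fit(labeled_x, labeled_y)
        if (cycle + 1) % 6 == 0:                           # exploration: one random draw after five GAAL steps
            query = pool[rng.integers(len(pool), size=1)]
        else:                                              # exploitation: GAAL-synthesized query
            query = gaal_query(clf)
        labeled_x = np.vstack([labeled_x, query])
        labeled_y = np.concatenate([labeled_y, oracle_label(query)])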
[Figure 6 plots classification accuracy against the number of labeled samples, with curves for GAAL, Random Sampling, and GAAL + random sampling.]
Figure 6: Active learning results using a mixed scheme. The mixed scheme executes one iteration
of random sampling after every five iterations of the GAAL algorithm. Train on MNIST, test on USPS.
Classifying 5 and 7. The results are averaged over 10 runs. The error bars represent the empirical
standard deviation of the average values. The figure is best viewed in color.
A mixed scheme is able to achieve better performance than using either GAAL or random sampling
alone. This implies that GAAL, as an exploitation scheme, performs even better in combination
with an exploration scheme. A detailed analysis of such mixed schemes will be an interesting
future topic.
6 Discussion and Future Work
In this work, we proposed a new active learning approach, GAAL, that employs generative adversarial
networks. One possible explanation for GAAL not outperforming the pool-based approaches
in some settings is that, in traditional pool-based learning, the algorithm will eventually exhaust all
the points near the decision boundary and thus start exploring points further away. However, this is not
the case in GAAL, as it can always synthesize points near the boundary. This may in turn cause the
generation of similar samples, thus reducing the effectiveness. We suspect that incorporating a diversity
measure into the GAAL framework, as discussed at the end of Section 4, might mitigate this issue.
This issue is related to the exploitation and exploration trade-off, which we briefly explored.
The results of this work are enough to inspire future studies of deep generative models in active
learning. However, much work remains in establishing theoretical analysis and reaching better per-
formance. We also suspect that GAAL can be modified to generate adversarial examples such as in
[21]. The comparison of GAAL with transfer learning (see the supplementary document) is particu-
larly interesting and worth further investigation. We also plan to investigate the possibility of using
Wasserstein GAN in our framework.
References
[1] D Angluin. Queries and concept learning. Mach. Learn., 1988.
[2] D Angluin. Queries revisited. Int. Conf. Algorithmic Learn., 2001.
[3] Martin Arjovsky, Soumith Chintala, and Léon Bottou. Wasserstein GAN. jan 2017.
[4] Yoram Baram, Ran El Yaniv, and Kobi Luz. Online choice of active learning algorithms.
Journal of Machine Learning Research, 5(Mar):255–291, 2004.
[5] Alina Beygelzimer, Sanjoy Dasgupta, and John Langford. Importance Weighted Active Learn-
ing. Proc. 26th Annu. Int. Conf. Mach. Learn. ICML 09, abs/0812.4(ii):1–8, 2008.
[6] Antoine Bordes, ¸Seyda Ertekin, Jason Weston, and Léon Bottou. Fast Kernel Classifiers with
Online and Active Learning. J. Mach. Learn. Res., 6:1579–1619, 2005.
[7] Klaus Brinker. Incorporating Diversity in Active Learning with Support Vector Machines.
[8] Colin Campbell, Nello Cristianini, and Alex Smola. Query learning with large margin classi-
fiers. 17th Int. Conf. Mach. Learn., pages 111–118, 2000.
[9] Xi Chen, Yan Duan, Rein Houthooft, John Schulman, Ilya Sutskever, and Pieter Abbeel. In-
foGAN: Interpretable Representation Learning by Information Maximizing Generative Adver-
sarial Nets. 2016.
[10] David Cohn, Les Atlas, and Richard Ladner. Improving generalization with active learning.
Mach. Learn., 15(2):201–221, may 1994.
[11] Sanjoy Dasgupta. Analysis of a greedy active learning strategy. In Advances in neural infor-
mation processing systems, pages 337–344, 2005.
[12] Sanjoy Dasgupta. Two faces of active learning. Theor. Comput. Sci., 412:1767–1781, 2011.
[13] Sanjoy Dasgupta and Daniel Hsu. Hierarchical sampling for active learning. Proceedings of
the 25th international conference on Machine learning - ICML ’08, pages 208–215, 2008.
[14] Sanjoy Dasgupta, Daniel Hsu, and Claire Monteleoni. A general agnostic active learning
algorithm. Engineering, 20(2):1–14, 2007.
[15] Alexey Dosovitskiy, Jost Tobias Springenberg, Maxim Tatarchenko, and Thomas Brox. Learning
to Generate Chairs, Tables and Cars with Convolutional Networks. arXiv preprint
arXiv:1411.5928, pages 1–14, 2014.
[16] Jon Gauthier. Conditional generative adversarial nets for convolutional face generation. Class
Project for Stanford CS231N: Convolutional Neural Networks for Visual Recognition, Winter
semester 2014, 2014.
[17] King-Shy Goh, Edward Y. Chang, and Wei-Cheng Lai. Multimodal concept-dependent active
learning for image retrieval. In Proc. 12th Annu. ACM Int. Conf. Multimed. - Multimed. ’04,
page 564, New York, New York, USA, 2004. ACM Press.
[18] Daniel Golovin and Andreas Krause. Adaptive submodularity: A new approach to active
learning and stochastic optimization. In COLT, pages 333–345, 2010.
[19] Ian Goodfellow, Yoshua Bengio, and Aaron Courville. Deep Learning. MIT Press, 2016.
[20] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil
Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in
neural information processing systems, pages 2672–2680, 2014.
[21] Ian J Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adver-
sarial examples. arXiv preprint arXiv:1412.6572, 2014.
[22] Andrew Guillory and Jeff Bilmes. Interactive submodular set cover. arXiv preprint
arXiv:1002.3345, 2010.
[23] Ishaan Gulrajani, Faruk Ahmed, Martin Arjovsky, Vincent Dumoulin, and Aaron Courville.
Improved training of wasserstein gans. arXiv preprint arXiv:1704.00028, 2017.
[24] Steve Hanneke. A bound on the label complexity of agnostic active learning. Proc. 24th Int.
Conf. Mach. Learn. - ICML ’07, pages 353–360, 2007.
[25] Steven C H Hoi, Rong Jin, Jianke Zhu, and Michael R Lyu. Semi-Supervised SVM Batch
Mode Active Learning with Applications to Image Retrieval. ACM Trans. Informations Syst.
ACM Trans. Inf. Syst. Publ. ACM Trans. Inf. Syst., 27(16):24–26, 2009.
[26] Timothy M. Hospedales, Shaogang Gong, and Tao Xiang. Finding rare classes: Active learning
with generative and discriminative models. IEEE Trans. Knowl. Data Eng., 25(2):374–386,
2013.
[27] Neil Houlsby, Ferenc Huszar, Zoubin Ghahramani, and Jose M Hernández-Lobato. Collabora-
tive gaussian processes for preference learning. In Advances in Neural Information Processing
Systems, pages 2096–2104, 2012.
[28] Prateek Jain, Sudheendra Vijayanarasimhan, and Kristen Grauman. Hashing Hyperplane
Queries to Near Points with Applications to Large-Scale Active Learning. IEEE Trans. Pattern
Anal. Mach. Intell., 2010.
[29] A.J. Joshi, F. Porikli, and N. Papanikolopoulos. Multi-class active learning for image classifi-
cation. IEEE Conf. Comput. Vis. Pattern Recognit., pages 2372–2379, 2009.
[30] Kevin J. Lang and Eric B Baum. Query Learning Can Work Poorly when a Human Oracle is
Used, 1992.
[31] Quoc V Le, Alexandre Karpenko, Jiquan Ngiam, and Andrew Y Ng. ICA with Reconstruction
Cost for Efficient Overcomplete Feature Learning.
[32] Y. LeCun, B. Boser, J. S. Denker, D. Henderson, R. E. Howard, W. Hubbard, and L. D. Jackel.
Backpropagation Applied to Handwritten Zip Code Recognition, 1989.
[33] Christian Ledig, Lucas Theis, Ferenc Huszar, Jose Caballero, Andrew Aitken, Alykhan Tejani,
Johannes Totz, Zehan Wang, and Wenzhe Shi. Photo-Realistic Single Image Super-Resolution
Using a Generative Adversarial Network. arXiv, 2016.
[34] Mehdi Mirza and Simon Osindero. Conditional Generative Adversarial Nets. CoRR, pages
1–7, nov 2014.
[35] Hieu T Nguyen and Arnold Smeulders. Active Learning Using Pre-clustering.
[36] Kamal Nigam, Andrew Kachites Mccallum, Sebastian Thrun, and Tom Mitchell. Text Classi-
fication from Labeled and Unlabeled Documents using EM. Mach. Learn., 39:103–134, 2000.
[37] Nicolas Papernot, Martín Abadi, Úlfar Erlingsson, Ian Goodfellow, and Kunal Talwar. Semi-
supervised knowledge transfer for deep learning from private training data. arXiv preprint
arXiv:1610.05755, 2016.
[38] Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised Representation Learning with
Deep Convolutional Generative Adversarial Networks. nov 2015.
[39] Rajat Raina, Alexis Battle, Honglak Lee, Benjamin Packer, and Andrew Y Ng. Self-taught
Learning : Transfer Learning from Unlabeled Data. Proc. 24th Int. Conf. Mach. Learn., pages
759–766, 2007.
[40] Jens Röder, Boaz Nadler, Kevin Kunzmann, and Fred A Hamprecht. Active learning with
distributional estimates. arXiv preprint arXiv:1210.4909, 2012.
[41] Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen.
Improved Techniques for Training GANs. jun 2016.
[42] Andrew I. Schein and Lyle H. Ungar. Active learning for logistic regression: An evaluation,
volume 68. 2007.
[43] Burr Settles. Active learning literature survey. Computer Sciences Technical Report 1648,
University of Wisconsin–Madison, 2010.
[44] Jost Tobias Springenberg. Unsupervised and Semi-supervised Learning with Categorical Gen-
erative Adversarial Networks. arXiv, (2009):1–20, 2015.
[45] Simon Tong and Daphne Koller. Support Vector Machine Active Learning with Applications
to Text Classification. Proc. Int. Conf. Mach. Learn., 1(June):45–66, 2002.
[46] L. G. Valiant and L. G. A theory of the learnable. Commun. ACM, 27(11):1134–1142, nov
1984.
[47] VN Vapnik and V Vapnik. Statistical learning theory. 1998.
[48] Xuezhi Wang, Tzu-Kuo Huang, and Jeff Schneider. Active transfer learning under model shift.
In International Conference on Machine Learning, pages 1305–1313, 2014.
[49] Manfred K Warmuth, Jun Liao, Gunnar Rätsch, Michael Mathieson, Santosh Putta, and Chris-
tian Lemmen. Active Learning with Support Vector Machines in the Drug Discovery Process.
2002.
[50] Z Xu, R Akella, and Y Zhang. Incorporating diversity and density in active learning for rele-
vance feedback. European Conference on Information Retrieval, 2007.
[51] Yi Yang, Zhigang Ma, Feiping Nie, Xiaojun Chang, and Alexander G Hauptmann. Multi-Class
Active Learning by Uncertainty Sampling with Diversity Maximization. Int. J. Comput. Vis.,
113(2):113–127, jun 2014.
[52] Raymond Yeh, Chen Chen, Teck Yian Lim, Mark Hasegawa-Johnson, and Minh N. Do. Se-
mantic Image Inpainting with Perceptual and Contextual Losses. jul 2016.
[53] Jun-Yan Zhu, Philipp Krähenbühl, Eli Shechtman, and Alexei A. Efros. Generative Visual
Manipulation on the Natural Image Manifold. pages 597–613. Springer, Cham, 2016.
[54] Jun-Yan Zhu, Taesung Park, Phillip Isola, and Alexei A Efros. Unpaired image-to-image trans-
lation using cycle-consistent adversarial networks. arXiv preprint arXiv:1703.10593, 2017.
Appendix: Comparison with Self-taught Learning
One common strength of GAAL and self-taught learning [39] is that both utilize the unlabeled data
to help with the classification task. As we have seen in the MNIST experiment, our GAAL algorithm
seems to be able to adapt to the learner. The results in this experiment are preliminary and not meant
to be taken as comprehensive evaluations.
In this case, the training domain is mostly unlabeled. Thus the method we compare with is self-
taught learning [39]. Similar to the algorithm in [31], we use a Reconstruction Independent Com-
ponent Analysis (RICA) model with a convolutional layer and a pooling layer. RICA is similar
to a sparse autoencoder. Following standard self-taught learning procedures, we first train on the
unlabeled pool dataset. Then we use the trained RICA model as a feature extractor to obtain higher-level
features from randomly selected MNIST images. We then concatenate the features with the original
image data to train the classifier. Finally, we test the trained classifier on the USPS dataset. We
test training set sizes of 250, 500, 1000, and 5000. The reason for doing so is that deep-learning-type
techniques are known to thrive with abundant training data. They may perform relatively
poorly with a limited amount of training data, as in active learning scenarios. We run the experiments
100 times and average the results. We use the same setting for the GAAL algorithm as
in Section 5.1. The classifier we use is a linear SVM. Table 1 shows the classification accuracies
of GAAL, self-taught learning, and baseline supervised learning on raw image data.
Table 1: Comparison of GAAL and self-taught learning
ALGORITHM      TRAINING SET SIZE   ACCURACY
GAAL           250                 76.42%
SELF-TAUGHT    250                 59.68%
SUPERVISED     250                 67.87%
SELF-TAUGHT    500                 65.53%
SUPERVISED     500                 69.22%
SELF-TAUGHT    1000                71.96%
SUPERVISED     1000                69.58%
SELF-TAUGHT    5000                78.08%
SUPERVISED     5000                72.00%
Using GAAL on the raw features achieves a higher accuracy than that of self-taught learning with the same
training size of 250. In fact, self-taught learning performs worse than the regular supervised learn-
ing when labeled data is scarce. This is possible for an autoencoder-type algorithm. However, when
we increase the training size, self-taught learning starts to perform better. With 5000 training
samples, self-taught learning outperforms GAAL with 250 training samples.
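For concreteness, the following is a hedged sketch of the self-taught baseline pipeline described above; FastICA stands in for the RICA feature extractor of [31], and the arrays are random placeholders for the MNIST/USPS splits rather than real data.

    import numpy as np
    from sklearn.decomposition import FastICA
    from sklearn.svm import LinearSVC

    rng = np.random.default_rng(0)
    unlabeled_pool = rng.normal(size=(2000, 784))        # unlabeled MNIST pool (placeholder)
    train_x = rng.normal(size=(250, 784))                # small labeled MNIST subset (placeholder)
    train_y = rng.integers(0, 2, size=250)
    test_x = rng.normal(size=(100, 784))                 # USPS test images, resized (placeholder)

    # 1) Learn features on the unlabeled pool (RICA in the paper; FastICA used here as a stand-in).
    extractor = FastICA(n_components=64, random_state=0).fit(unlabeled_pool)

    # 2) Concatenate learned features with the raw pixels of the labeled data.
    def featurize(x):
        return np.hstack([x, extractor.transform(x)])

    # 3) Train a linear SVM on the augmented representation and evaluate on the test set.
    clf = LinearSVC().fit(featurize(train_x), train_y)
    pred = clf.predict(featurize(test_x))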
Based on these results, we suspect that GAAL also has the potential to be used as a self-taught
algorithm4. In practice, the GAAL algorithm can also be applied on top of the features extracted
by a self-taught algorithm. A comprehensive comparison with a more advanced self-taught learning
method with deeper architecture is beyond the scope of this work.
4At this stage, self-taught learning has the advantage that it can utilize any unlabeled training data, i.e., not
necessarily from the categories of interest. GAAL does not have this feature yet.
|
synthetic_cpt | 2 | NaturalSpeech_2_Latent_Diffusion_Models_are_Natural_and_Zero-Shot_Speech_and_Singing_Synthesizers.pdf | 3
2
0
2
y
a
M
0
3
]
S
A
.
s
s
e
e
[
3
v
6
1
1
9
0
.
4
0
3
2
:
v
i
X
r
a
NaturalSpeech 2: Latent Diffusion Models are Natural
and Zero-Shot Speech and Singing Synthesizers
Kai Shen∗, Zeqian Ju∗, Xu Tan∗, Yanqing Liu, Yichong Leng, Lei He
Tao Qin, Sheng Zhao, Jiang Bian
Microsoft Research Asia & Microsoft Azure Speech
https://aka.ms/speechresearch
Abstract
Scaling text-to-speech (TTS) to large-scale, multi-speaker, and in-the-wild datasets
is important to capture the diversity in human speech such as speaker identities,
prosodies, and styles (e.g., singing). Current large TTS systems usually quantize
speech into discrete tokens and use language models to generate these tokens one
by one, which suffer from unstable prosody, word skipping/repeating issue, and
poor voice quality. In this paper, we develop NaturalSpeech 2, a TTS system
that leverages a neural audio codec with residual vector quantizers to get the
quantized latent vectors and uses a diffusion model to generate these latent vectors
conditioned on text input. To enhance the zero-shot capability that is important
to achieve diverse speech synthesis, we design a speech prompting mechanism to
facilitate in-context learning in the diffusion model and the duration/pitch predictor.
We scale NaturalSpeech 2 to large-scale datasets with 44K hours of speech and
singing data and evaluate its voice quality on unseen speakers. NaturalSpeech 2
outperforms previous TTS systems by a large margin in terms of prosody/timbre
similarity, robustness, and voice quality in a zero-shot setting, and performs novel
zero-shot singing synthesis with only a speech prompt. Audio samples are available
at https://speechresearch.github.io/naturalspeech2.
Figure 1: The overview of NaturalSpeech 2, with an audio codec encoder/decoder and a latent
diffusion model conditioned on a prior (a phoneme encoder and a duration/pitch predictor). The
details of in-context learning in the duration/pitch predictor and diffusion model are shown in Figure 3.
∗The first three authors contributed equally to this work, and their names are listed in random order.
Corresponding author: Xu Tan, [email protected]
Preprint. Work in progress.
1 Introduction
Human speech is full of diversity, with different speaker identities (e.g., gender, accent, timbre),
prosodies, styles (e.g., speaking, singing), etc. Text-to-speech (TTS) [1, 2] aims to synthesize
natural and human-like speech with both good quality and diversity. With the development of neural
networks and deep learning, TTS systems [3, 4, 5, 6, 7, 8, 9, 10, 11] have achieved good voice
quality in terms of intelligibility and naturalness, and some systems (e.g., NaturalSpeech [11]) even
achieves human-level voice quality on single-speaker recording-studio benchmarking datasets (e.g.,
LJSpeech [12]). Given the great achievements in speech intelligibility and naturalness made by the
whole TTS community, now we enter a new era of TTS where speech diversity becomes more and
more important in order to synthesize natural and human-like speech.
Previous speaker-limited recording-studio datasets are not enough to capture the diverse speaker
identities, prosodies, and styles in human speech due to limited data diversity. Instead, we can train
TTS models on a large-scale corpus to learn these diversities, and as a by-product, these trained
models can generalize to the unlimited unseen scenarios with few-shot or zero-shot technologies.
Current large-scale TTS systems [13, 14, 15] usually quantize the continuous speech waveform
into discrete tokens and model these tokens with autoregressive language models. This pipeline
suffers from several limitations: 1) The speech (discrete token) sequence is usually very long (a 10s
speech usually has thousands of discrete tokens) and the autoregressive models suffer from error
propagation and thus unstable speech outputs. 2) There is a dilemma between the codec and language
model: on the one hand, the codec with token quantization (VQ-VAE [16, 17] or VQ-GAN [18])
usually has a low bitrate token sequence, which, although eases the language model generation,
incurs information loss on the high-frequency fine-grained acoustic details; on the other hand, some
improving methods [19, 20] use multiple residual discrete tokens to represent a speech frame, which
increases the length of the token sequence multiple times if flattened and incurs difficulty in language
modeling.
In this paper, we propose NaturalSpeech 2, a TTS system with latent diffusion models to achieve
expressive prosody, good robustness, and most importantly strong zero-shot ability for speech
synthesis. As shown in Figure 1, we first train a neural audio codec that converts a speech waveform
into a sequence of latent vectors with a codec encoder, and reconstructs the speech waveform from
these latent vectors with a codec decoder. After training the audio codec, we use the codec encoder
to extract the latent vectors from the speech in the training set and use them as the target of the
latent diffusion model, which is conditioned on prior vectors obtained from a phoneme encoder, a
duration predictor, and a pitch predictor. During inference, we first generate the latent vectors from
the text/phoneme sequence using the latent diffusion model and then generate the speech waveform
from these latent vectors using the codec decoder.
Table 1: The comparison between NaturalSpeech 2 and previous large-scale TTS systems.
Methods                           Previous Systems [13, 14, 15]       NaturalSpeech 2
Representations                   Discrete Tokens                     Continuous Vectors
Generative Models                 Autoregressive Models               Non-Autoregressive/Diffusion
In-Context Learning               Both Text and Speech are Needed     Only Speech is Needed
Stability/Robustness?             ✗                                   ✓
One Acoustic Model?               ✗                                   ✓
Beyond Speech (e.g., Singing)?    ✗                                   ✓
We elaborate on some design choices in NaturalSpeech 2 (shown in Table 1) as follows.
• Continuous vectors instead of discrete tokens. To ensure the speech reconstruction quality of the
neural codec, previous works usually quantize speech with multiple residual quantizers. As a
result, the obtained discrete token sequence is very long (e.g., if using 8 residual quantizers for
each speech frame, the resulting flattened token sequence will be 8 times longer), and puts much
pressure on the acoustic model (autoregressive language model). Therefore, we use continuous
vectors instead of discrete tokens, which can reduce the sequence length and increase the amount
of information for fine-grained speech reconstruction (see Section 3.1).
• Diffusion models instead of autoregressive models. We leverage diffusion models to learn the
complex distributions of continuous vectors in a non-autoregressive manner and avoid error
propagation in autoregressive models (see Section 3.2).
• Speech prompting mechanisms for in-context learning. To encourage the diffusion models to follow
the characteristics in the speech prompt and enhance the zero-shot capability, we design speech
prompting mechanisms to facilitate in-context learning in the diffusion model and pitch/duration
predictors (see Section 3.3).
Benefiting from these designs, NaturalSpeech 2 is more stable and robust than previous autoregressive
models, and only needs one acoustic model (the diffusion model) instead of two-stage token prediction
as in [21, 13], and can extend the styles beyond speech (e.g., singing voice) due to the duration/pitch
prediction and non-autoregressive generation.
We scale NaturalSpeech 2 to 400M model parameters and 44K hours of speech data, and generate
speech with diverse speaker identities, prosody, and styles (e.g., singing) in zero-shot scenarios
(given only a few seconds of speech prompt). Experiment results show that NaturalSpeech 2 can
generate natural speech in zero-shot scenarios and outperform the previous strong TTS systems.
Specifically, 1) it achieves more similar prosody with both the speech prompt and ground-truth
speech; 2) it achieves comparable or better naturalness (in terms of CMOS) than the ground-truth
speech on LibriSpeech and VCTK test sets; 3) it can generate singing voices in a novel timbre
either with a short singing prompt, or interestingly with only a speech prompt, which unlocks
the truly zero-shot singing synthesis (without a singing prompt). Audio samples can be found in
https://speechresearch.github.io/naturalspeech2.
2 Background
We introduce some background of NaturalSpeech 2, including the journey of text-to-speech synthesis
on pursuing natural voice with high quality and diversity, neural audio codec models, and generative
models for audio synthesis.
2.1 TTS for Natural Voice: Quality and Diversity
Text-to-speech systems [2, 3, 4, 5, 6, 8, 9, 22, 10, 11] aim to generate natural voice with both high
quality and diversity. While previous neural TTS systems can synthesize high-quality voice on
single-speaker recording-studio datasets (e.g., LJSpeech [12]) and even achieve human-level quality
(e.g., NaturalSpeech [11]), they cannot generate diverse speech with different speaker identities,
prosodies, and styles, which are critical to ensure the naturalness of the synthesized speech. Thus,
some recent works [13, 14, 15] attempt to scale the TTS systems to large-scale, multi-speaker, and
in-the-wild datasets to pursue diversity.
These systems usually leverage a neural codec to convert speech waveform into discrete token
sequence and an autoregressive language model to generate discrete tokens from text, which suffers
from a dilemma as shown in Table 2: 1) If the audio codec quantizes each speech frame into a
single token with vector-quantizer (VQ) [16, 17, 18], this could ease the token generation in the
language model due to short sequence length, but will affect the waveform reconstruction quality
due to large compression rate or low bitrate. 2) If the audio codec quantizes each speech frame into
multiple tokens with residual vector-quantizer (RVQ) [19, 20], this will ensure high-fidelity waveform
reconstruction, but will cause difficulty in autoregressive model generation (error propagation and
robust issues) due to the increased length in the token sequence. Thus, previous works such as
AudioLM [21] leverage two-stage language models to first generate some coarse-grained tokens in
each frame and then generate the remaining fine-grained tokens, which are complicated and incur
cascaded errors. To avoid the above dilemma, we leverage a neural codec with continuous vectors
and a latent diffusion model with non-autoregressive generation.
2.2 Neural Audio Codec
Neural audio codec [23, 24, 19, 20] refers to a kind of neural network model that converts audio
waveform into compact representations with a codec encoder and reconstructs audio waveform
from these representations with a codec decoder. Since audio codec is traditionally used for audio
Table 2: The dilemma in the pipeline of discrete audio codec and autoregressive language model.
The Dilemma in Previous Systems                        Single Token (VQ)   Multiple Tokens (RVQ)
Waveform Reconstruction (Discrete Audio Codec)         Hard                Easy
Token Generation (Autoregressive Language Model)       Easy                Hard
compression and transmission, the compression rate is a critical metric and thus discrete tokens with
low bitrate are usually chosen as the compact representations. For example, SoundStream [19] and
Encodec [20] leverage vector-quantized variational auto-encoders (VQ-VAE) with multiple residual
vector-quantizers to compress speech into multiple tokens, and have been used as the intermediate
representations for speech/audio generation [21, 25, 13, 14, 15].
Although good reconstruction quality and low bitrate can be achieved by residual vector quantizers,
they are mainly designed for compression and transmission purposes and may not be suitable to serve
as the intermediate representation for speech/audio generation. The discrete token sequence generated
by residual quantizers is usually very long (R times longer if R residual quantizers are used), which
is difficult for the language models to predict. Inaccurate predictions of discrete tokens will cause
word skipping, word repeating, or speech collapse issues when reconstructing speech waveforms
from these tokens. In this paper, we design a neural audio codec to convert speech waveform into
continuous vectors instead of discrete tokens, which can maintain enough fine-grained details for
precise waveform reconstruction without increasing the length of the sequence.
2.3 Generative Models for Speech Synthesis
Different generative models have been applied to speech or audio synthesis, and among these, autore-
gressive models and diffusion models are the two most prominent methods. Autoregressive models
have long been used in speech synthesis for waveform generation [23] or acoustic feature genera-
tion [3]. Inspired by the success of autoregressive models in language generation [26, 27, 28], autore-
gressive models have been applied in speech and audio generation [21, 25, 13, 14, 15]. Meanwhile,
diffusion models have also been widely used in speech synthesis for waveform generation [29, 30]
and acoustic feature generation [31, 32].
Although both models are based on iterative computation (following the left-to-right process or the
denoising process), autoregressive models are more sensitive to sequence length and error propagation,
which cause unstable prosody and robustness issues (e.g., word skipping, repeating, and collapse).
Considering text-to-speech has a strict monotonic alignment and strong source-target dependency, we
leverage diffusion models enhanced with duration prediction and length expansion, which are free
from such robustness issues.
3 NaturalSpeech 2
In this section, we introduce NaturalSpeech 2, a TTS system for natural and zero-shot voice synthesis
with high fidelity/expressiveness/robustness on diverse scenarios (various speaker identities, prosodies,
and styles). As shown in Figure 1, NaturalSpeech 2 consists of a neural audio codec (an encoder and
a decoder) and a diffusion model with a prior (a phoneme encoder and a duration/pitch predictor).
Since speech waveform is complex and high-dimensional, following the paradigm of regeneration
learning [33], we first convert speech waveform into latent vectors using the audio codec encoder and
reconstruct speech waveform from the latent vectors using the audio codec decoder. Next, we use a
diffusion model to predict the latent vectors conditioned on text/phoneme input. We introduce the
detailed designs of neural audio codec in Section 3.1 and the latent diffusion model in Section 3.2, as
well as the speech prompting mechanism for in-context learning in Section 3.3.
3.1 Neural Audio Codec with Continuous Vectors
We use a neural audio codec to convert speech waveform into continuous vectors instead of discrete
tokens, as analyzed in Section 2.1 and 2.2. Audio codec with continuous vectors enjoys several
Figure 2: The neural audio codec consists of an encoder, a residual vector-quantizer (RVQ), and a
decoder. The encoder extracts the frame-level speech representations from the audio waveform, the
RVQ leverages multiple codebooks to quantize the frame-level representations, and the decoder takes
the quantized vectors as input and reconstructs the audio waveform. The quantized vectors also serve
as the training target of the latent diffusion model.
benefits: 1) Continuous vectors have a lower compression rate and higher bitrate than discrete tokens2,
which can ensure high-quality audio reconstruction. 2) Each audio frame only has one vector instead
of multiple tokens as in discrete quantization, which will not increase the length of the hidden
sequence.
As shown in Figure 2, our neural audio codec consists of an audio encoder, a residual vector-quantizer
(RVQ), and an audio decoder: 1) The audio encoder consists of several convolutional blocks with a
total downsampling rate of 200 for 16KHz audio, i.e., each frame corresponds to a 12.5ms speech
segment. 2) The residual vector-quantizer converts the output of the audio encoder into multiple
residual vectors following [19]. The sum of these residual vectors is taken as the quantized vectors,
which are used as the training target of the diffusion model. 3) The audio decoder mirrors the structure
of the audio encoder, which generates the audio waveform from the quantized vectors. The working
flow of the neural audio codec is as follows.
Audio Encoder: $h = f_{\mathrm{enc}}(x)$,
Residual Vector Quantizer: $\{e^i_j\}_{j=1}^{R} = f_{\mathrm{rvq}}(h^i), \quad z^i = \sum_{j=1}^{R} e^i_j, \quad z = \{z^i\}_{i=1}^{n}$,   (1)
Audio Decoder: $x = f_{\mathrm{dec}}(z)$,
where fenc, frvq, and fdec denote the audio encoder, residual vector quantizer, and audio decoder. x
is the speech waveform, h is the hidden sequence obtained by the audio encoder with a frame length
of n, and z is the quantized vector sequence with the same length as h. i is the index of the speech
frame, j is the index of the residual quantizer, R is the total number of residual quantizers, and $e^i_j$
is the embedding vector of the codebook ID obtained by the j-th residual quantizer on the i-th hidden
frame (i.e., $h^i$). The training of the neural codec follows the loss function in [19].
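As an illustration of Equation (1), the snippet below sketches the residual quantization step, with randomly initialized codebooks standing in for the learned ones; the sizes are illustrative rather than the exact configuration.

    import torch

    R, V, D = 8, 1024, 256                                # quantizers, codebook size, latent dim (illustrative)
    codebooks = [torch.randn(V, D) for _ in range(R)]     # learned codebooks in practice

    def rvq(h):
        """h: (frames, D) encoder output -> (z, ids), where z is the sum of residual embeddings."""
        residual = h.clone()
        z = torch.zeros_like(h)
        ids = []
        for cb in codebooks:
            dists = torch.cdist(residual, cb)             # L2 distance to every codebook entry
            idx = dists.argmin(dim=-1)                    # nearest entry per frame
            e = cb[idx]                                   # e^i_j in Equation (1)
            z = z + e                                     # running sum of residual embeddings
            residual = residual - e                       # quantize what is left over
            ids.append(idx)
        return z, torch.stack(ids, dim=-1)                # z: (frames, D), ids: (frames, R)

    z, ids = rvq(torch.randn(100, D))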
Actually, to obtain continuous vectors, we do not need vector quantizers, but just an autoencoder
or variational autoencoder. However, for regularization and efficiency purposes, we use residual
vector quantizers with a very large number of quantizers (R in Figure 2) and codebook tokens (V
in Figure 2) to approximate the continuous vectors. By doing this, we have two benefits: 1) When
training latent diffusion models, we do not need to store the continuous vectors, which are memory-costly.
Instead, we just store the codebook embeddings and the quantized token IDs, which are used to derive
the continuous vectors using Equation 1. 2) When predicting the continuous vectors, we can add
an additional regularization loss on discrete classification based on these quantized token IDs (see
Lce−rvq in Section 3.2).
3.2 Latent Diffusion Model with Non-Autoregressive Generation
We leverage a diffusion model to predict the quantized latent vector z conditioned on the text sequence
y. We leverage a prior model that consists of a phoneme encoder, a duration predictor, and a pitch
2Since our task is not speech compression but speech synthesis, we do not need a high compression rate or a
low bitrate.
predictor to process the text input and provide a more informative hidden vector c as the condition of
the diffusion model.
Diffusion Formulation We formulate the diffusion (forward) process and denoising (reverse)
process as a stochastic differential equation (SDE) [34], respectively. The forward SDE transforms
the latent vectors z0 obtained by the neural codec (i.e., z) into Gaussian noises:
$$dz_t = -\frac{1}{2}\beta_t z_t\,dt + \sqrt{\beta_t}\,dw_t, \quad t \in [0, 1], \qquad (2)$$
where wt is the standard Brownian motion, t ∈ [0, 1], and βt is a non-negative noise schedule
function. Then the solution is given by:
$$z_t = e^{-\frac{1}{2}\int_0^t \beta_s\,ds}\,z_0 + \int_0^t \sqrt{\beta_s}\,e^{-\frac{1}{2}\int_s^t \beta_u\,du}\,dw_s. \qquad (3)$$
By properties of Ito’s integral, the conditional distribution of $z_t$ given $z_0$ is Gaussian: $p(z_t|z_0) \sim \mathcal{N}(\rho(z_0, t), \Sigma_t)$, where $\rho(z_0, t) = e^{-\frac{1}{2}\int_0^t \beta_s\,ds}\,z_0$ and $\Sigma_t = I - e^{-\int_0^t \beta_s\,ds}$.
The reverse SDE transforms the Gaussian noise back to data $z_0$ with the following process:
$$dz_t = -\Big(\frac{1}{2} z_t + \nabla \log p_t(z_t)\Big)\beta_t\,dt + \sqrt{\beta_t}\,d\tilde{w}_t, \quad t \in [0, 1], \qquad (4)$$
where ˜w is the reverse-time Brownian motion. Moreover, we can consider an ordinary differential
equation (ODE) [34] in the reverse process:
$$dz_t = -\frac{1}{2}\big(z_t + \nabla \log p_t(z_t)\big)\beta_t\,dt, \quad t \in [0, 1]. \qquad (5)$$
We can train a neural network sθ to estimate the score ∇ log pt(zt) (the gradient of the log-density
of noisy data), and then we can sample data z0 by starting from Gaussian noise z1 ∼ N (0, 1) and
numerically solving the SDE in Equation 4 or ODE in Equation 5. In our formulation, the neural
network sθ(zt, t, c) is based on WaveNet [23], which takes the current noisy vector zt, the time step
t, and the condition information c as input, and predicts the data ˆz0 instead of the score, which we
found results in better speech quality. Thus, ˆz0 = sθ(zt, t, c). The loss function to train the diffusion
model is as follows.
$$\mathcal{L}_{\mathrm{diff}} = \mathbb{E}_{z_0, t}\Big[\|\hat{z}_0 - z_0\|_2^2 + \|\Sigma_t^{-1}\big(\rho(\hat{z}_0, t) - z_t\big) - \nabla \log p_t(z_t)\|_2^2 + \lambda_{\mathrm{ce\text{-}rvq}}\,\mathcal{L}_{\mathrm{ce\text{-}rvq}}\Big], \qquad (6)$$
where the first term is the data loss, the second term is the score loss, and the predicted score is calculated by $\Sigma_t^{-1}(\rho(\hat{z}_0, t) - z_t)$, which is also used for reverse sampling based on Equation 4 or 5 in inference. The third term $\mathcal{L}_{\mathrm{ce\text{-}rvq}}$ is a novel cross-entropy (CE) loss based on the residual vector-quantizer (RVQ). Specifically, for each residual quantizer $j \in [1, R]$, we first get the residual vector $\hat{z}_0 - \sum_{i=1}^{j-1} e_i$, where $e_i$ is the ground-truth quantized embedding in the $i$-th residual quantizer ($e_i$ is also introduced in Equation 1). Then we calculate the L2 distance between the residual vector and each codebook embedding in quantizer $j$, obtain a probability distribution with a softmax function, and then calculate the cross-entropy loss between the ID of the ground-truth quantized embedding $e_j$ and this probability distribution. $\mathcal{L}_{\mathrm{ce\text{-}rvq}}$ is the mean of the cross-entropy losses over all $R$ residual quantizers, and $\lambda_{\mathrm{ce\text{-}rvq}}$ is set to 0.1 during training.
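To make the cross-entropy RVQ loss above concrete, here is a hedged sketch of the described computation (an illustration, not the released implementation); in the total diffusion loss it would be weighted by λce-rvq = 0.1.

    import torch
    import torch.nn.functional as F

    def ce_rvq_loss(z0_hat, gt_ids, codebooks):
        """z0_hat: (frames, D) predicted latents; gt_ids: (frames, R) ground-truth codebook IDs;
        codebooks: list of R tensors of shape (V, D)."""
        residual = z0_hat
        losses = []
        for j, cb in enumerate(codebooks):
            logits = -torch.cdist(residual, cb)           # smaller L2 distance -> larger logit (softmax inside CE)
            losses.append(F.cross_entropy(logits, gt_ids[:, j]))
            residual = residual - cb[gt_ids[:, j]]        # subtract the ground-truth embedding e_j
        return torch.stack(losses).mean()

    codebooks = [torch.randn(1024, 256) for _ in range(8)]
    loss = ce_rvq_loss(torch.randn(50, 256), torch.randint(0, 1024, (50, 8)), codebooks)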
Prior Model: Phoneme Encoder and Duration/Pitch Predictor The phoneme encoder consists
of several Transformer blocks [35, 6], where the standard feed-forward network is modified as a
convolutional network to capture the local dependency in phoneme sequence. Both the duration and
pitch predictors share the same model structure with several convolutional blocks but with different
model parameters. The ground-truth duration and pitch information is used as the learning target to
train the duration and pitch predictors, with an L1 duration loss Ldur and pitch loss Lpitch. During
training, the ground-truth duration is used to expand the hidden sequence from the phoneme encoder
to obtain the frame-level hidden sequence, and then the ground-truth pitch information is added
to the frame-level hidden sequence to get the final condition information c. During inference, the
corresponding predicted duration and pitch are used.
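The sketch below illustrates how the condition c is assembled from the prior model: phoneme-level hidden states are repeated according to the (ground-truth or predicted) durations and a pitch embedding is added at the frame level. Treating pitch as a quantized embedding lookup here is an assumption for illustration; names and sizes are not from the released code.

    import torch
    import torch.nn as nn

    hidden_dim, num_pitch_bins = 512, 256
    pitch_embedding = nn.Embedding(num_pitch_bins, hidden_dim)

    def build_condition(phoneme_hidden, durations, pitch_ids):
        """phoneme_hidden: (phonemes, hidden); durations: (phonemes,) frames per phoneme;
        pitch_ids: (total_frames,) quantized pitch per frame."""
        frame_hidden = torch.repeat_interleave(phoneme_hidden, durations, dim=0)  # length regulation
        return frame_hidden + pitch_embedding(pitch_ids)                          # add frame-level pitch info

    phoneme_hidden = torch.randn(5, hidden_dim)
    durations = torch.tensor([3, 2, 4, 1, 2])
    pitch_ids = torch.randint(0, num_pitch_bins, (int(durations.sum()),))
    c = build_condition(phoneme_hidden, durations, pitch_ids)                     # (12, 512)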
The total loss function for the diffusion model is as follows:
$$\mathcal{L} = \mathcal{L}_{\mathrm{diff}} + \mathcal{L}_{\mathrm{dur}} + \mathcal{L}_{\mathrm{pitch}}. \qquad (7)$$
Figure 3: The speech prompting mechanism in the duration/pitch predictor and the diffusion model
for in-context learning. During training, we use a random segment zu:v of the target speech z as
the speech prompt zp and use the diffusion model to only predict z\u:v. During inference, we use a
reference speech of a specific speaker as the speech prompt zp. Note that the prompt is the speech
latent obtained by the codec encoder instead of the speech waveform.
3.3 Speech Prompting for In-Context Learning
To facilitate in-context learning for better zero-shot generation, we design a speech prompting
mechanism to encourage the duration/pitch predictor and the diffusion model to follow the diverse
information (e.g., speaker identities) in the speech prompt. For a speech latent sequence z, we
randomly cut off a segment zu:v with frame index from u to v as the speech prompt, and concatenate
the remaining speech segments z1:u and zv:n to form a new sequence z\u:v as the learning target of
the diffusion model. As shown in Figure 3, we use a Transformer-based prompt encoder to process
the speech prompt zu:v (zp in the figure) to get a hidden sequence. To leverage this hidden sequence
as the prompt, we have two different strategies for the duration/pitch predictor and the diffusion
model: 1) For the duration and pitch predictors, we insert a Q-K-V attention layer in the convolution
layer, where the query is the hidden sequence of the convolution layer, and the key and value is the
hidden sequence from the prompt encoder. 2) For the diffusion model, instead of directly attending to
the hidden sequence from the prompt encoder that exposes too many details to the diffusion model
and may harm the generation, we design two attention blocks: in the first attention block, we use
m randomly initialized embeddings as the query sequence to attend to the prompt hidden sequence,
and get a hidden sequence with a length of m as the attention results [36, 37, 38]; in the second
attention block, we leverage the hidden sequence in the WaveNet layer as the query and the m-length
attention results as the key and value. We use the attention results of the second attention block as the
conditional information of a FiLM layer [39] to perform affine transform on the hidden sequence
of the WaveNet in the diffusion model. Please refer to Appendix B for the details of WaveNet
architecture used in the diffusion model.
3.4 Connection to NaturalSpeech
NaturalSpeech 2 is an advanced edition of the NaturalSpeech Series [11, 40]. Compared to its previous
version NaturalSpeech [11], NaturalSpeech 2 has the following connections and distinctions. First,
goal. While both NaturalSpeech 1 and 2 aim at synthesizing natural voices (with good speech quality
and diversity), their focuses are different. NaturalSpeech focuses on speech quality by synthesizing
voices that are on par with human recordings and only tackling single-speaker recording-studio
datasets (e.g., LJSpeech). NaturalSpeech 2 focuses on speech diversity by exploring the zero-shot
synthesis ability based on large-scale, multi-speaker, and in-the-wild datasets. Second, architecture.
NaturalSpeech 2 keeps the basic components in NaturalSpeech, such as the encoder and decoder
for waveform reconstruction, and the prior module (phoneme encoder, duration/pitch predictor).
However, it leverages 1) a diffusion model to increase the modeling power to capture the complicated
and diverse data distribution in large-scale speech datasets, 2) a residual vector quantizer to regularize
the latent vectors to trade off the reconstruction quality and prediction difficulty, and 3) a speech
prompting mechanism to enable zero-shot ability that is not covered in single-speaker synthesis
system.
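Before turning to the experimental settings, the sketch below illustrates the two-stage prompt attention of Section 3.3: m learnable query vectors first summarize the prompt hidden sequence, the WaveNet hidden states then attend to that m-length summary, and the result drives a FiLM transform. The module layout and dimensions are illustrative assumptions, not the released architecture.

    import torch
    import torch.nn as nn

    class PromptConditioner(nn.Module):
        def __init__(self, dim=512, m=32, heads=8):
            super().__init__()
            self.query_tokens = nn.Parameter(torch.randn(m, dim))      # m learnable query embeddings
            self.attn1 = nn.MultiheadAttention(dim, heads, batch_first=True)
            self.attn2 = nn.MultiheadAttention(dim, heads, batch_first=True)
            self.to_film = nn.Linear(dim, 2 * dim)                     # FiLM scale and shift

        def forward(self, wavenet_hidden, prompt_hidden):
            """wavenet_hidden: (B, T, dim); prompt_hidden: (B, Tp, dim) from the prompt encoder."""
            q = self.query_tokens.unsqueeze(0).expand(prompt_hidden.size(0), -1, -1)
            summary, _ = self.attn1(q, prompt_hidden, prompt_hidden)   # (B, m, dim) prompt summary
            cond, _ = self.attn2(wavenet_hidden, summary, summary)     # (B, T, dim) conditioning signal
            scale, shift = self.to_film(cond).chunk(2, dim=-1)
            return scale * wavenet_hidden + shift                      # FiLM-modulated WaveNet hidden

    out = PromptConditioner()(torch.randn(2, 100, 512), torch.randn(2, 240, 512))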
4 Experimental Settings
In this section, we introduce the experimental settings to train and evaluate NaturalSpeech 2, including
the dataset, model configuration, baselines for comparison, training and inference, and evaluation
metrics.
4.1 Datasets
Training Dataset To train the neural audio codec and the diffusion model, we use the English
subset of Multilingual LibriSpeech (MLS) [41] as the training data, which contains 44K hours of
transcribed speech data derived from LibriVox audiobooks. The number of distinct speakers is 2742
for males and 2748 for females respectively. The sample rate is 16KHz for all speech data. The input
text sequence is first converted into a phoneme sequence using grapheme-to-phoneme conversion [42]
and then aligned with speech using our internal alignment tool to obtain the phoneme-level duration.
The frame-level pitch sequence is extracted from the speech using PyWorld3.
Evaluation Dataset We employ two benchmark datasets for evaluation: 1) LibriSpeech [43]
test-clean, which contains 40 distinct speakers and 5.4 hours of annotated speech data. 2) VCTK
dataset [44], which contains 108 distinct speakers. For LibriSpeech test-clean, we randomly sample
15 utterances for each speaker and form a subset of 600 utterances for evaluation. For VCTK, we
randomly sample 5 utterances for each speaker, resulting in a subset of 540 utterances for evaluation.
Specifically, to synthesize each sample, we randomly select a different utterance of the same speaker
and crop it into a σ-second audio segment to form a σ-second prompt. Note that both the speakers in
LibriSpeech test-clean and VCTK are not seen during training. Thus, we aim to conduct zero-shot
speech synthesis.
The singing datasets follow a similar process in the speech dataset, and the details are shown in
Section 5.6.
4.2 Model Configuration and Comparison
Model Configuration The phoneme encoder is a 6-layer Transformer [35] with 8 attention heads,
512 embedding dimensions, 2048 1D convolution filter size, 9 convolution 1D kernel size, and 0.1
dropout rate. The pitch and duration predictor share the same architecture of 30-layer 1D convolution
with ReLU activation and layer normalization, 10 Q-K-V attention layers for in-context learning,
which have 512 hidden dimensions and 8 attention heads and are placed every 3 1D convolution
layers. We set the dropout to 0.5 in both duration and pitch predictors. For the speech prompt encoder,
we use a 6-layer Transformer with 512 hidden size, which has the same architecture as the phoneme
encoder. As for the m query tokens in the first Q-K-V attention in the prompting mechanism in the
diffusion model (as shown in Figure 3), we set the token number m to 32 and the hidden dimension
to 512.
The diffusion model contains 40 WaveNet layers [23], which consist of 1D dilated convolution layers
with 3 kernel size, 1024 filter size, and 2 dilation size. Specifically, we use a FiLM layer [39] at every
3 WaveNet layers to fuse the condition information processed by the second Q-K-V attention in the
prompting mechanism in the diffusion model. The hidden size in WaveNet is 512, and the dropout
rate is 0.2.
More details of the model configurations are shown in Appendix A.
Model Comparison We choose the previous zero-shot TTS model YourTTS [45] as the baseline,
with the official code and pre-trained checkpoint4, which is trained on VCTK [44], LibriTTS [46]
and TTS-Portuguese [47]. We also choose VALL-E [13] that is based on discrete audio codec
and autoregressive language model for comparison, which can help demonstrate the advantages of
the designs in NaturalSpeech 2. We directly collect some audio samples from its demo page for
comparison.
3https://github.com/JeremyCCHsu/Python-Wrapper-for-World-Vocoder
4https://github.com/Edresson/YourTTS
4.3 Model Training and Inference
We first train the audio codec using 8 NVIDIA TESLA V100 16GB GPUs with a batch size of
200 audios per GPU for 440K steps. We follow the implementation and experimental setting of
SoundStream [19] and adopt Adam optimizer with 2e−4 learning rate. Then we use the trained codec
to extract the quantized latent vectors for each audio to train the diffusion model in NaturalSpeech 2.
The diffusion model in NaturalSpeech 2 is trained using 16 NVIDIA TESLA V100 32GB GPUs with
a batch size of 6K frames of latent vectors per GPU for 300K steps (our model is still underfitting
and longer training will result in better performance). We optimize the models with the AdamW
optimizer with 5e − 4 learning rate, 32k warmup steps following the inverse square root learning
schedule.
During inference, for the diffusion model, we find it beneficial to use a temperature τ and sample the
terminal condition zT from N (0, τ −1I) [32]. We set τ to 1.22. To balance the generation quality
and latency, we adopt the Euler ODE solver and set the diffusion steps to 150.
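The sketch below illustrates this inference procedure: starting from z_1 ~ N(0, τ⁻¹I) and taking Euler steps backwards along the ODE in Equation (5), with the score computed from the model's predicted clean latent as in Section 3.2. The linear beta schedule, its bounds, and the function signature are assumptions for illustration, not the paper's exact configuration.

    import math
    import torch

    def euler_ode_sample(model, cond, shape, steps=150, tau=1.22,
                         beta_min=0.05, beta_max=20.0):
        def beta(t):
            return beta_min + t * (beta_max - beta_min)                  # illustrative linear schedule

        def int_beta(t):
            return beta_min * t + 0.5 * (beta_max - beta_min) * t ** 2   # \int_0^t beta_s ds

        z = torch.randn(shape) / tau ** 0.5                              # z_1 ~ N(0, tau^{-1} I)
        dt = 1.0 / steps
        for i in range(steps, 0, -1):
            t = i / steps
            z0_hat = model(z, t, cond)                                   # network predicts the clean latent
            rho = math.exp(-0.5 * int_beta(t)) * z0_hat
            sigma = 1.0 - math.exp(-int_beta(t))
            score = (rho - z) / sigma                                    # Sigma_t^{-1}(rho(z0_hat, t) - z_t)
            drift = -0.5 * (z + score) * beta(t)                         # right-hand side of Equation (5)
            z = z - drift * dt                                           # Euler step from t to t - dt
        return z

    dummy_model = lambda z, t, c: torch.zeros_like(z)                    # stand-in for the trained model
    latents = euler_ode_sample(dummy_model, cond=None, shape=(1, 100, 256))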
4.4 Evaluation Metrics
We use both objective and subjective metrics to evaluate the zero-shot synthesis ability of Natural-
Speech 2 and compare it with baselines.
Objective Metrics We evaluate the TTS systems with the following objective metrics:
• Prosody Similarity with Prompt. We evaluate the prosody similarity (in terms of pitch and duration)
between the generated speech and the prompt speech, which measures how well the TTS model
follows the prosody in speech prompt in zero-shot synthesis. We calculate the prosody similarity
with the following steps: 1) we extract phoneme-level duration and pitch from the prompt and the
synthesized speech; 2) we calculate the mean, standard deviation, skewness, and kurtosis [7] of the
pitch and duration in each speech sequence; 3) we calculate the difference of the mean, standard
deviation, skewness, and kurtosis between each paired prompt and synthesized speech and average
the differences over the whole test set (a code sketch of this computation is given at the end of this subsection).
• Prosody Similarity with Ground Truth. We evaluate the prosody similarity (in terms of pitch and
duration) between the generated speech and the ground-truth speech, which measures how well the
TTS model matches the prosody in the ground truth. Since there is correspondence between two
speech sequences, we calculate the Pearson correlation and RMSE of the pitch/duration between
the generated and ground-truth speech, and average them on the whole test set.
• Word Error Rate. We employ an ASR model to transcribe the generated speech and calculate
the word error rate (WER). The ASR model is a CTC-based HuBERT [48] pre-trained on Libri-
light [49] and fine-tuned on the 960 hours training set of LibriSpeech. We use the official code and
checkpoint5.
Subjective Metrics We conduct human evaluation and use the intelligibility score and mean opinion
score as the subjective metrics:
• Intelligibility Score. Neural TTS models often suffer from the robustness issues such as word
skipping, repeating, and collapse issues, especially for autoregressive models. To demonstrate
the robustness of NaturalSpeech 2, following the practice in [6], we use the 50 particularly hard
sentences (see Appendix C) and conduct an intelligibility test. We measure the number of repeating
words, skipping words, and error sentences as the intelligibility score.
• CMOS and SMOS. Since synthesizing natural voices is one of the main goals of NaturalSpeech 2,
we measure naturalness using comparative mean option score (CMOS) with 12 native speakers as
the judges. We also use the similarity mean option score (SMOS) between the synthesized and
prompt speech to measure the speaker similarity, with 6 native speakers as the judges.
5https://huggingface.co/facebook/hubert-large-ls960-ft
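As a concrete illustration of the prosody-similarity computation in the objective metrics above, the sketch below compares per-utterance moments of phoneme-level pitch (or duration) sequences between prompt and synthesized speech; the arrays are placeholders and the function names are not from the paper's toolchain.

    import numpy as np
    from scipy.stats import skew, kurtosis

    def prosody_moments(values):
        return np.array([np.mean(values), np.std(values), skew(values), kurtosis(values)])

    def prosody_similarity(prompt_seqs, synth_seqs):
        """Each argument is a list of 1-D arrays (e.g., phoneme-level pitch per utterance).
        Returns the mean absolute difference of (Mean, Std, Skew, Kurt) over the test set."""
        diffs = [np.abs(prosody_moments(p) - prosody_moments(s))
                 for p, s in zip(prompt_seqs, synth_seqs)]
        return np.mean(diffs, axis=0)

    prompts = [np.random.rand(40) * 100 for _ in range(10)]
    synths = [np.random.rand(40) * 100 for _ in range(10)]
    print(prosody_similarity(prompts, synths))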
5 Results on Natural and Zero-Shot Synthesis
In this section, we conduct a series of experiments to compare the NaturalSpeech 2 with the baselines
from the following aspects: 1) Generation Quality, by evaluating the naturalness of the synthe-
sized audio; 2) Generation Similarity, by evaluating how well the TTS system follows prompts; 3)
Robustness, by calculating the WER and an additional intelligibility test.
5.1 Generation Quality
Table 3: The CMOS results (v.s. Natural-
Speech 2) on LibriSpeech and VCTK.
We conduct CMOS test to evaluate the generation qual-
ity (i.e., naturalness). We randomly select 20 utterances
from the LibriSpeech and VCTK tests and crop the
prompt speech to 3s. To ensure high-quality generation,
we use a speech scoring model [50] to filter the multiple
samples generated by the diffusion model with different
starting Gaussian noises z1. Table 3 shows a compar-
ison of NaturalSpeech 2 against baseline YourTTS and
the ground truth. We have several observations: 1) Nat-
uralSpeech 2 is comparable to the ground-truth recording in LibriSpeech (+0.04 is regarded as on
par) and achieves much better quality on VCTK datasets (−0.30 is a large gap), which demonstrates
the naturalness of the speech generated by NaturalSpeech 2 is high enough. 2) NaturalSpeech shows
0.65 and 0.58 CMOS gain over YourTTS in LibriSpeech and VCTK, respectively, which shows the
superiority of NaturalSpeech 2 over this baseline.
YourTTS
NaturalSpeech 2
LibriSpeech VCTK
−0.58
0.00
−0.65
0.00
Ground Truth
Setting
−0.30
+0.04
5.2 Generation Similarity
Table 4: The prosody similarity between synthesized and prompt speech in terms of the difference in
mean (Mean), standard variation (Std), skewness (Skew), and kurtosis (Kurt) of pitch and duration.
LibriSpeech
Pitch
Duration
Mean↓
Std↓
Skew↓ Kurt↓ Mean↓
Std↓
Skew↓ Kurt↓
YourTTS
NaturalSpeech 2
10.52
10.11
7.62
6.18
0.59
0.50
1.18
1.01
0.84
0.65
0.66
0.70
0.75
0.60
3.70
2.99
VCTK
Pitch
Duration
Mean↓
Std↓
Skew↓ Kurt↓ Mean↓
Std↓
Skew↓ Kurt↓
YourTTS
NaturalSpeech 2
13.67
13.29
6.63
6.41
0.72
0.68
1.54
1.27
0.72
0.79
0.85
0.76
0.84
0.76
3.31
2.65
We use two metrics to evaluate the speech similarity: 1) prosody similarity between the synthesized
and prompt speech. 2) SMOS test. To evaluate the prosody similarity, we randomly sample one
sentence for each speaker for both LibriSpeech test-clean and VCTK dataset to form the test sets.
Specifically, to synthesize each sample, we randomly and independently sample the prompt speech
with σ = 3 seconds. Note that YourTTS has seen 97 speakers in VCTK in training, but we still
compare NaturalSpeech 2 with YourTTS on all the speakers in VCTK (i.e., the 97 speakers are seen
to YourTTS but unseen to NaturalSpeech 2).
We apply the alignment tool to obtain phoneme-level duration and pitch and calculate the prosody similarity metrics between the synthesized speech and the prompt speech as described in Section 4.4. The results are shown in Table 4. We have the following observations: 1) NaturalSpeech 2 consistently outperforms the baseline YourTTS in both LibriSpeech and VCTK on all metrics, which demonstrates that our proposed NaturalSpeech 2 can mimic the prosody of the prompt speech much better. 2) Although YourTTS has seen 97 of the 108 speakers in the VCTK dataset, our model can still outperform it by a large margin, which demonstrates the advantages of NaturalSpeech 2.
Table 5: The SMOS on LibriSpeech and VCTK.
Setting            LibriSpeech   VCTK
YourTTS            2.03          2.43
NaturalSpeech 2    3.28          3.20
GroundTruth        3.33          3.86
Furthermore, we also compare prosody similarity between synthesized and ground-truth speech in
Appendix D.
We further evaluate the speaker similarity using SMOS test. We randomly select 10 utterances from
LibriSpeech and VCTK datasets respectively, following the setting in the CMOS test. The length
of the prompt speech is set to 3s. The results are shown in Table 5. NaturalSpeech 2 outperforms
YourTTS by 1.25 and 0.77 SMOS scores for LibriSpeech and VCTK, respectively, which shows that
NaturalSpeech 2 is significantly better in speaker similarity.
5.3 Robustness
We use the full test set of LibriSpeech and VCTK as described in Section 4.1 to synthesize the speech and compute the word error rate (WER) between the transcribed text and the ground-truth text. To synthesize each sample, we use a 3-second prompt by randomly cropping the whole prompt speech. The results are shown in Table 6. We observe that: 1) NaturalSpeech 2 significantly outperforms YourTTS on LibriSpeech and VCTK, indicating better synthesis of high-quality and robust speech. 2) Our synthesized speech is comparable to the ground-truth speech on LibriSpeech and surpasses it on VCTK. The higher WER on VCTK may stem from a noisy recording environment and the lack of ASR model fine-tuning on that dataset.
Table 6: Word error rate on LibriSpeech and VCTK.
Setting            LibriSpeech   VCTK
YourTTS            7.10          14.80
NaturalSpeech 2    2.26          6.99
Ground Truth       1.94          9.49
Table 7: The robustness of NaturalSpeech 2 and other autoregressive/non-autoregressive models on
50 particularly hard sentences. We conduct an intelligibility test on these sentences and measure the
number of word repeating, word skipping, and error sentences. Each kind of word error is counted at
most once per sentence.

AR/NAR   Model                 Repeats   Skips   Error Sentences   Error Rate
AR       Tacotron [3]          4         11      12                24%
AR       Transformer TTS [5]   7         15      17                34%
NAR      FastSpeech [6]        0         0       0                 0%
NAR      NaturalSpeech [11]    0         0       0                 0%
NAR      NaturalSpeech 2       0         0       0                 0%
Autoregressive TTS models often suffer from alignment mismatch between phoneme and speech,
resulting in severe word repeating and skipping. To further evaluate the robustness of the diffusion-based
TTS model, we adopt the 50 particularly hard sentences from FastSpeech [6]. We find that
non-autoregressive models such as FastSpeech [6], NaturalSpeech [11], and NaturalSpeech 2 are robust
on these 50 hard cases, without any intelligibility issues. In comparison, autoregressive models such as
Tacotron [3], Transformer TTS [5], and VALL-E [13] have a high error rate on these hard sentences.
The comparison results are provided in Table 7.
5.4 Comparison with Other TTS Systems
In this section, we compare NaturalSpeech 2 with the zero-shot TTS model VALL-E [13]. We directly
download the first 16 utterances from the VALL-E demo page6, which consist of 8 samples from
LibriSpeech and 8 samples from VCTK. We report the CMOS and SMOS results in Table 8.
From the results, we find that NaturalSpeech 2 outperforms VALL-E by 0.30 in SMOS and 0.31 in
CMOS. The SMOS results show that NaturalSpeech 2 is significantly better in speaker similarity, and
the CMOS results demonstrate that the speech generated by NaturalSpeech 2 is much more natural
and of higher quality.

Table 8: SMOS and CMOS results between NaturalSpeech 2 and VALL-E.

Setting            SMOS   CMOS
VALL-E             3.53   −0.31
NaturalSpeech 2    3.83   0.00
GroundTruth        4.09   -

6https://valle-demo.github.io/
5.5 Ablation Study
Table 9: The ablation study of NaturalSpeech 2. The prosody similarity between the synthesized
and prompt speech in terms of the difference in the mean (Mean), standard variation (Std), skewness
(Skew), and kurtosis (Kurt) of pitch and duration. "-" denotes that the model cannot converge.

                        Pitch                                 Duration
                        Mean↓   Std↓    Skew↓   Kurt↓         Mean↓   Std↓    Skew↓   Kurt↓
NaturalSpeech 2         10.11   6.18    0.50    1.01          0.65    0.70    0.60    2.99
w/o. diff prompt        -       -       -       -             -       -       -       -
w/o. dur/pitch prompt   21.69   19.38   0.63    1.29          0.77    0.72    0.70    3.70
w/o. CE loss            10.69   6.24    0.55    1.06          0.71    0.72    0.74    3.85
w/o. query attn         10.78   6.29    0.62    1.37          0.67    0.71    0.69    3.59
In this section, we perform ablation experiments. 1) To study the effect of the speech prompt, we
remove the Q-K-V attention layers in the diffusion model (abbr. w/o. diff prompt) and in the duration and
pitch predictors (abbr. w/o. dur/pitch prompt), respectively. 2) To study the effect of the cross-entropy
(CE) loss L_{ce-rvq} based on RVQ, we disable the CE loss by setting λ_{ce-rvq} to 0 (abbr. w/o. CE loss).
3) To study the effectiveness of the two Q-K-V attention modules in speech prompting for diffusion in
Section 3.3, we remove the first attention module, which uses m randomly initialized query tokens to
attend to the prompt hidden sequence, and instead use a single Q-K-V attention that attends to the
prompt hidden sequence directly (abbr. w/o. query attn). We report the prosody similarity metrics
between synthesized and prompt speech in Table 9. More ablation results between synthesized and
ground-truth speech are included in Appendix E.
We have the following observations: 1) Disabling the speech prompt in the diffusion model significantly
degrades prosody similarity (e.g., the mean pitch difference grows from 10.11 to 21.69, or the model
even cannot converge), highlighting its importance for high-quality TTS synthesis. 2) Disabling the
cross-entropy loss worsens performance, as the layer-wise cross-entropy on the residual vector quantizer
provides regularization for precise latent representations. 3) Disabling the query attention strategy also
degrades prosody similarity. In practice, we find that applying cross-attention directly to the prompt
hidden sequence leaks fine-grained details and thus misleads generation.
In addition, since the prompt length is an important hyper-parameter for zero-shot TTS, we investigate
its effect. We follow the setting of prosody similarity between synthesized and prompt speech in
Section 5.2. Specifically, we vary the prompt length with σ = {3, 5, 10} seconds and report the prosody
similarity metrics of NaturalSpeech 2. The results are shown in Table 10. We observe that the longer
the prompt, the higher the similarity between the generated speech and the prompt for NaturalSpeech 2.
This indicates that a longer prompt reveals more details of the prosody, which helps the TTS model
generate more similar speech.
5.6 Zero-Shot Singing Synthesis
In this section, we explore NaturalSpeech 2 for synthesizing singing voices in a zero-shot setting, given
either a singing prompt or only a speech prompt.
For singing data collection, we crawl a number of singing voices and their paired lyrics from the Web.
For preprocessing, we utilize a speech processing model to remove the backing vocals and accompaniment
in each song, and an ASR model to filter out samples with misalignments. The dataset is then constructed
using the same process as the speech data, ultimately containing around 30 hours of singing data. The
singing data is upsampled and mixed with speech data for the singing experiments.
We use speech and singing data together to train NaturalSpeech 2 with a learning rate of 5e-5. In
inference, we set the number of diffusion steps to 1000 for better performance. To synthesize a singing
voice, we use the ground-truth pitch and duration from another singing voice, and use different singing
prompts to generate singing voices with different singer timbres. Interestingly, we find that NaturalSpeech 2
can generate a novel singing voice using speech as the prompt. See the demo page7 for zero-shot
singing synthesis with either singing or speech as the prompt.
7https://speechresearch.github.io/naturalspeech2
Table 10: The NaturalSpeech 2 prosody similarity between the synthesized and prompt speech with
different lengths in terms of the difference in the mean (Mean), standard variation (Std), skewness
(Skew), and kurtosis (Kurt) of pitch and duration.

LibriSpeech   Pitch                                 Duration
              Mean↓    Std↓    Skew↓   Kurt↓        Mean↓   Std↓    Skew↓   Kurt↓
3s            10.11    6.18    0.50    1.01         0.65    0.70    0.60    2.99
5s            6.96     4.29    0.42    0.77         0.69    0.60    0.53    2.52
10s           6.90     4.03    0.48    1.36         0.62    0.45    0.56    2.48

VCTK          Pitch                                 Duration
              Mean↓    Std↓    Skew↓   Kurt↓        Mean↓   Std↓    Skew↓   Kurt↓
3s            13.29    6.41    0.68    1.27         0.79    0.76    0.76    2.65
5s            14.46    5.47    0.63    1.23         0.62    0.67    0.74    3.40
10s           10.28    4.31    0.41    0.87         0.71    0.62    0.76    3.48
5.7 Extension to Voice Conversion and Speech Enhancement
In this section, we extend NaturalSpeech 2 to another two speech synthesis tasks: 1) voice conver-
sion and 2) speech enhancement. See the demo page8 for zero-shot voice conversion and speech
enhancement examples.
5.7.1 Voice Conversion
Besides zero-shot text-to-speech and singing synthesis, NaturalSpeech 2 also supports zero-shot
voice conversion, which aims to convert the source audio z_source into the target audio z_target using
the voice of the prompt audio z_prompt. Technically, we first convert the source audio z_source into an
informative Gaussian noise z_1 using a source-aware diffusion process, and then generate the target
audio z_target using a target-aware denoising process, as described below.
Source-Aware Diffusion Process. In voice conversion, it is helpful to carry over some information
from the source audio in order to ease the generation of the target audio. Thus, instead of directly
diffusing the source audio with Gaussian noise, we diffuse the source audio into a starting point that
still maintains some information of the source audio. Specifically, inspired by the stochastic encoding
process in Diffusion Autoencoder [51], we obtain the starting point z_1 from z_source as follows:
z_1 = z_0 + \int_0^1 -\frac{1}{2}\left(z_t + \Sigma_t^{-1}\big(\rho(\hat{s}_\theta(z_t, t, c), t) - z_t\big)\right)\beta_t \, dt,    (8)
where \Sigma_t^{-1}(\rho(\hat{s}_\theta(z_t, t, c), t) - z_t) is the predicted score at t. We can think of this process as the
reverse of the ODE (Equation 5) in the denoising process.
Target-Aware Denoising Process. Different from TTS, which starts from random Gaussian noise, the
denoising process of voice conversion starts from the z_1 obtained by the source-aware diffusion process.
We run the standard denoising process as in the TTS setting to obtain the final target audio z_target,
conditioned on c and the prompt audio z_prompt, where c is obtained from the phoneme and duration
sequences of the source audio and the predicted pitch sequence.
As a consequence, we observe that NaturalSpeech 2 is capable of producing speech that exhibits
similar prosody to the source speech, while also replicating the timbre specified by the prompt.
8https://speechresearch.github.io/naturalspeech2
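The source-aware diffusion step in Equation 8 can be sketched with a simple Euler discretization as
follows. This is an illustrative sketch rather than the released implementation: score_fn stands for
ρ(ŝ_θ(·, t, c), t), while beta and sigma_inv are placeholders for the noise schedule β_t and Σ_t^{-1}.

import torch

def source_aware_encode(z0, score_fn, beta, sigma_inv, c, n_steps=100):
    """Euler integration of z1 = z0 + integral over [0,1] of -0.5*(z_t + sigma_inv(t)*(score - z_t))*beta(t) dt."""
    z, dt = z0.clone(), 1.0 / n_steps
    for i in range(n_steps):
        t = i * dt
        drift = -0.5 * (z + sigma_inv(t) * (score_fn(z, t, c) - z)) * beta(t)
        z = z + drift * dt
    return z

# Toy usage with placeholder schedules and a dummy score function.
beta = lambda t: 0.1 + 19.9 * t    # assumed linear schedule, for illustration only
sigma_inv = lambda t: 1.0          # placeholder for the inverse covariance term
score_fn = lambda z, t, c: -z      # dummy stand-in for the trained score model
z1 = source_aware_encode(torch.randn(1, 128, 100), score_fn, beta, sigma_inv, c=None)
print(z1.shape)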
5.7.2 Speech Enhancement
NaturalSpeech 2 can be extended to speech enhancement, which is similar to the extension to voice
conversion. In this setting, we assume that we have the source audio z′_source which contains background
noise (z′ denotes audio with background noise), the prompt with background noise z′_prompt for the
source-aware diffusion process, and the prompt without background noise z_prompt for the target-aware
denoising process. Note that z′_source and z′_prompt have the same background noise.
To remove the background noise, we first apply the source-aware diffusion process to z′_source and
z′_prompt and obtain z_1 as in Equation 8. The source audio's duration and pitch are utilized in this
procedure. Secondly, we run the target-aware denoising process to obtain the clean audio from z_1 and
the clean prompt z_prompt. Specifically, we use the phoneme sequence, duration sequence, and pitch
sequence of the source audio in this procedure. As a result, we find that NaturalSpeech 2 can effectively
eliminate background noise while simultaneously preserving crucial aspects such as prosody and timbre.
6 Conclusion and Future Work
In this paper, we develop NaturalSpeech 2, a TTS system that leverages a neural audio codec with
continuous latent vectors and a latent diffusion model with non-autoregressive generation to enable
natural and zero-shot text-to-speech synthesis. To facilitate in-context learning for zero-shot synthesis,
we design a speech prompting mechanism in the duration/pitch predictor and the diffusion model.
By scaling NaturalSpeech 2 to 400M model parameters, 44K hours of speech, and 5K speakers, it
can synthesize speech with high expressiveness, robustness, fidelity, and strong zero-shot ability,
outperforming previous TTS systems. For future work, we will explore efficient strategies such as
consistency models [52, 53] to speed up the diffusion model and explore large-scale speaking and
singing voice training to enable more powerful mixed speaking/singing capability.
Broader Impacts: Since NaturalSpeech 2 can synthesize speech that maintains speaker identity, it
may carry potential risks of misuse, such as spoofing voice identification or impersonating a specific
speaker. We conducted the experiments under the assumption that the user agrees to be the target
speaker in speech synthesis. If the model is generalized to unseen speakers in the real world, it should be
accompanied by a protocol to ensure that the speaker approves the use of their voice, as well as a
synthesized speech detection model.
References
[1] Paul Taylor. Text-to-speech synthesis. Cambridge university press, 2009.
[2] Xu Tan, Tao Qin, Frank Soong, and Tie-Yan Liu. A survey on neural speech synthesis. arXiv
preprint arXiv:2106.15561, 2021.
[3] Yuxuan Wang, RJ Skerry-Ryan, Daisy Stanton, Yonghui Wu, Ron J Weiss, Navdeep Jaitly,
Zongheng Yang, Ying Xiao, Zhifeng Chen, Samy Bengio, et al. Tacotron: Towards end-to-end
speech synthesis. Proc. Interspeech 2017, pages 4006–4010, 2017.
[4] Jonathan Shen, Ruoming Pang, Ron J Weiss, Mike Schuster, Navdeep Jaitly, Zongheng Yang,
Zhifeng Chen, Yu Zhang, Yuxuan Wang, RJ Skerry-Ryan, et al. Natural TTS synthesis by
conditioning WaveNet on mel spectrogram predictions. In 2018 IEEE International Conference
on Acoustics, Speech and Signal Processing (ICASSP), pages 4779–4783. IEEE, 2018.
[5] Naihan Li, Shujie Liu, Yanqing Liu, Sheng Zhao, and Ming Liu. Neural speech synthesis
with Transformer network. In Proceedings of the AAAI Conference on Artificial Intelligence,
volume 33, pages 6706–6713, 2019.
[6] Yi Ren, Yangjun Ruan, Xu Tan, Tao Qin, Sheng Zhao, Zhou Zhao, and Tie-Yan Liu. FastSpeech:
Fast, robust and controllable text to speech. In NeurIPS, 2019.
[7] Yi Ren, Chenxu Hu, Xu Tan, Tao Qin, Sheng Zhao, Zhou Zhao, and Tie-Yan Liu. FastSpeech
2: Fast and high-quality end-to-end text to speech. In International Conference on Learning
Representations, 2021.
[8] Yanqing Liu, Zhihang Xu, Gang Wang, Kuan Chen, Bohan Li, Xu Tan, Jinzhu Li, Lei He, and
Sheng Zhao. DelightfulTTS: The Microsoft speech synthesis system for Blizzard challenge
2021. arXiv preprint arXiv:2110.12612, 2021.
[9] Yanqing Liu, Ruiqing Xue, Lei He, Xu Tan, and Sheng Zhao. DelightfulTTS 2: End-
to-end speech synthesis with adversarial vector-quantized auto-encoders. arXiv preprint
arXiv:2207.04646, 2022.
[10] Jaehyeon Kim, Jungil Kong, and Juhee Son. Conditional variational autoencoder with adversar-
ial learning for end-to-end text-to-speech. arXiv preprint arXiv:2106.06103, 2021.
[11] Xu Tan, Jiawei Chen, Haohe Liu, Jian Cong, Chen Zhang, Yanqing Liu, Xi Wang, Yichong
Leng, Yuanhao Yi, Lei He, et al. NaturalSpeech: End-to-end text to speech synthesis with
human-level quality. arXiv preprint arXiv:2205.04421, 2022.
[12] Keith Ito. The LJ speech dataset. https://keithito.com/LJ-Speech-Dataset/, 2017.
[13] Chengyi Wang, Sanyuan Chen, Yu Wu, Ziqiang Zhang, Long Zhou, Shujie Liu, Zhuo Chen,
Yanqing Liu, Huaming Wang, Jinyu Li, et al. Neural codec language models are zero-shot text
to speech synthesizers. arXiv preprint arXiv:2301.02111, 2023.
[14] Eugene Kharitonov, Damien Vincent, Zalán Borsos, Raphaël Marinier, Sertan Girgin, Olivier
Pietquin, Matt Sharifi, Marco Tagliasacchi, and Neil Zeghidour. Speak, read and prompt:
High-fidelity text-to-speech with minimal supervision. arXiv preprint arXiv:2302.03540, 2023.
[15] Ruiqing Xue, Yanqing Liu, Lei He, Xu Tan, Linquan Liu, Edward Lin, and Sheng Zhao.
Foundationtts: Text-to-speech for asr customization with generative language model. arXiv
preprint arXiv:2303.02939, 2023.
[16] Aaron van den Oord, Oriol Vinyals, and Koray Kavukcuoglu. Neural discrete representation
learning. In Proceedings of the 31st International Conference on Neural Information Processing
Systems, pages 6309–6318, 2017.
[17] Ali Razavi, Aaron van den Oord, and Oriol Vinyals. Generating diverse high-fidelity images
with VQ-VAE-2. In Advances in neural information processing systems, pages 14866–14876,
2019.
[18] Patrick Esser, Robin Rombach, and Bjorn Ommer. Taming transformers for high-resolution
image synthesis. In Proceedings of the IEEE/CVF conference on computer vision and pattern
recognition, pages 12873–12883, 2021.
[19] Neil Zeghidour, Alejandro Luebs, Ahmed Omran, Jan Skoglund, and Marco Tagliasacchi.
SoundStream: An end-to-end neural audio codec. IEEE/ACM Transactions on Audio, Speech,
and Language Processing, 2021.
[20] Alexandre Défossez, Jade Copet, Gabriel Synnaeve, and Yossi Adi. High fidelity neural audio
compression. arXiv preprint arXiv:2210.13438, 2022.
[21] Zalán Borsos, Raphaël Marinier, Damien Vincent, Eugene Kharitonov, Olivier Pietquin, Matt
Sharifi, Olivier Teboul, David Grangier, Marco Tagliasacchi, and Neil Zeghidour. Audiolm: a
language modeling approach to audio generation. arXiv preprint arXiv:2209.03143, 2022.
[22] Jinglin Liu, Chengxi Li, Yi Ren, Feiyang Chen, and Zhou Zhao. DiffSinger: Singing voice
synthesis via shallow diffusion mechanism. In Proceedings of the AAAI Conference on Artificial
Intelligence, volume 36, pages 11020–11028, 2022.
[23] Aäron van den Oord, Sander Dieleman, Heiga Zen, Karen Simonyan, Oriol Vinyals, Alex
Graves, Nal Kalchbrenner, Andrew Senior, and Koray Kavukcuoglu. WaveNet: A generative
model for raw audio. arXiv preprint arXiv:1609.03499, 2016.
[24] Jean-Marc Valin and Jan Skoglund. LPCNet: Improving neural speech synthesis through linear
prediction. In ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and
Signal Processing (ICASSP), pages 5891–5895. IEEE, 2019.
[25] Felix Kreuk, Gabriel Synnaeve, Adam Polyak, Uriel Singer, Alexandre Défossez, Jade Copet,
Devi Parikh, Yaniv Taigman, and Yossi Adi. Audiogen: Textually guided audio generation.
arXiv preprint arXiv:2209.15352, 2022.
[26] Alec Radford, Karthik Narasimhan, Tim Salimans, Ilya Sutskever, et al. Improving language
understanding by generative pre-training.
[27] Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al.
Language models are unsupervised multitask learners. OpenAI blog, 1(8):9, 2019.
[28] Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal,
Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are
few-shot learners. Advances in neural information processing systems, 33:1877–1901, 2020.
[29] Zhifeng Kong, Wei Ping, Jiaji Huang, Kexin Zhao, and Bryan Catanzaro. DiffWave: A versatile
diffusion model for audio synthesis. In ICLR, 2021.
[30] Nanxin Chen, Yu Zhang, Heiga Zen, Ron J Weiss, Mohammad Norouzi, and William Chan.
WaveGrad: Estimating gradients for waveform generation. In ICLR, 2021.
[31] Myeonghun Jeong, Hyeongju Kim, Sung Jun Cheon, Byoung Jin Choi, and Nam Soo Kim.
Diff-TTS: A denoising diffusion model for text-to-speech. arXiv preprint arXiv:2104.01409,
2021.
[32] Vadim Popov, Ivan Vovk, Vladimir Gogoryan, Tasnima Sadekova, and Mikhail Kudinov. Grad-
TTS: A diffusion probabilistic model for text-to-speech. arXiv preprint arXiv:2105.06337,
2021.
[33] Xu Tan, Tao Qin, Jiang Bian, Tie-Yan Liu, and Yoshua Bengio. Regeneration learning: A
learning paradigm for data generation. arXiv preprint arXiv:2301.08846, 2023.
[34] Yang Song, Jascha Sohl-Dickstein, Diederik P Kingma, Abhishek Kumar, Stefano Ermon, and
Ben Poole. Score-based generative modeling through stochastic differential equations. In
International Conference on Learning Representations, 2020.
[35] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez,
Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. In Advances in Neural Informa-
tion Processing Systems, pages 5998–6008, 2017.
[36] Yequan Wang, Minlie Huang, Xiaoyan Zhu, and Li Zhao. Attention-based lstm for aspect-level
sentiment classification. In Proceedings of the 2016 conference on empirical methods in natural
language processing, pages 606–615, 2016.
[37] Yuxuan Wang, Daisy Stanton, Yu Zhang, RJ Skerry-Ryan, Eric Battenberg, Joel Shor, Ying
Xiao, Ye Jia, Fei Ren, and Rif A Saurous. Style tokens: Unsupervised style modeling, control
and transfer in end-to-end speech synthesis. In International Conference on Machine Learning,
pages 5180–5189. PMLR, 2018.
[38] Dacheng Yin, Chuanxin Tang, Yanqing Liu, Xiaoqiang Wang, Zhiyuan Zhao, Yucheng Zhao,
Zhiwei Xiong, Sheng Zhao, and Chong Luo. Retrievertts: Modeling decomposed factors for
text-based speech insertion. arXiv preprint arXiv:2206.13865, 2022.
[39] Ethan Perez, Florian Strub, Harm De Vries, Vincent Dumoulin, and Aaron Courville. Film:
Visual reasoning with a general conditioning layer. In Proceedings of the AAAI Conference on
Artificial Intelligence, volume 32, 2018.
[40] Kai Shen, Zeqian Ju, Xu Tan, Yanqing Liu, Yichong Leng, Lei He, Tao Qin, Sheng Zhao, and
Jiang Bian. Naturalspeech 2: Latent diffusion models are natural and zero-shot speech and
singing synthesizers. arXiv preprint arXiv:2304.09116, 2023.
[41] Vineel Pratap, Qiantong Xu, Anuroop Sriram, Gabriel Synnaeve, and Ronan Collobert. MLS: A
large-scale multilingual dataset for speech research. Proc. Interspeech 2020, pages 2757–2761,
2020.
[42] Hao Sun, Xu Tan, Jun-Wei Gan, Hongzhi Liu, Sheng Zhao, Tao Qin, and Tie-Yan Liu. Token-
level ensemble distillation for grapheme-to-phoneme conversion. In INTERSPEECH, 2019.
[43] Vassil Panayotov, Guoguo Chen, Daniel Povey, and Sanjeev Khudanpur. LibriSpeech: an
ASR corpus based on public domain audio books. In 2015 IEEE International Conference on
Acoustics, Speech and Signal Processing (ICASSP), pages 5206–5210. IEEE, 2015.
[44] Christophe Veaux, Junichi Yamagishi, Kirsten MacDonald, et al. Superseded - CSTR VCTK
corpus: English multi-speaker corpus for CSTR voice cloning toolkit. 2016.
[45] Edresson Casanova, Julian Weber, Christopher D Shulby, Arnaldo Candido Junior, Eren Gölge,
and Moacir A Ponti. Yourtts: Towards zero-shot multi-speaker tts and zero-shot voice conversion
for everyone. In International Conference on Machine Learning, pages 2709–2720. PMLR,
2022.
[46] Heiga Zen, Viet Dang, Rob Clark, Yu Zhang, Ron J Weiss, Ye Jia, Zhifeng Chen, and Yonghui
Wu. LibriTTS: A corpus derived from librispeech for text-to-speech. Proc. Interspeech 2019,
pages 1526–1530, 2019.
[47] Edresson Casanova, Arnaldo Candido Junior, Christopher Shulby, Frederico Santos de Oliveira,
João Paulo Teixeira, Moacir Antonelli Ponti, and Sandra Aluísio. Tts-portuguese corpus: a
corpus for speech synthesis in brazilian portuguese. Language Resources and Evaluation,
56(3):1043–1055, 2022.
[48] Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia, Ruslan Salakhutdinov,
and Abdelrahman Mohamed. Hubert: Self-supervised speech representation learning by
masked prediction of hidden units. IEEE/ACM Transactions on Audio, Speech, and Language
Processing, 29:3451–3460, 2021.
[49] Jacob Kahn, Morgane Riviere, Weiyi Zheng, Evgeny Kharitonov, Qiantong Xu, Pierre-
Emmanuel Mazaré, Julien Karadayi, Vitaliy Liptchinsky, Ronan Collobert, Christian Fuegen,
et al. Libri-light: A benchmark for asr with limited or no supervision. In ICASSP 2020-2020
IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages
7669–7673. IEEE, 2020.
[50] Sanyuan Chen, Chengyi Wang, Zhengyang Chen, Yu Wu, Shujie Liu, Zhuo Chen, Jinyu Li,
Naoyuki Kanda, Takuya Yoshioka, Xiong Xiao, et al. Wavlm: Large-scale self-supervised pre-
training for full stack speech processing. IEEE Journal of Selected Topics in Signal Processing,
16(6):1505–1518, 2022.
[51] Konpat Preechakul, Nattanat Chatthee, Suttisak Wizadwongsa, and Supasorn Suwajanakorn.
Diffusion autoencoders: Toward a meaningful and decodable representation. In Proceedings of
the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10619–10629,
2022.
[52] Yang Song, Prafulla Dhariwal, Mark Chen, and Ilya Sutskever. Consistency models. arXiv
preprint arXiv:2303.01469, 2023.
[53] Zhen Ye, Wei Xue, Xu Tan, Jie Chen, Qifeng Liu, and Yike Guo. Comospeech: One-step
speech and singing voice synthesis via consistency model. arXiv preprint arXiv:2305.06908,
2023.
A Model Details
Table 11: The detailed model configurations of NaturalSpeech 2.

Module                  Configuration                   Value   #Parameters
Audio Codec             Number of Residual VQ Blocks    16      27M
                        Codebook size                   1024
                        Codebook Dimension              256
                        Hop Size                        200
                        Similarity Metric               L2
Phoneme Encoder         Transformer Layer               6       72M
                        Attention Heads                 8
                        Hidden Size                     512
                        Conv1D Filter Size              2048
                        Conv1D Kernel Size              9
                        Dropout                         0.2
Duration Predictor      Conv1D Layers                   30      34M
                        Conv1D Kernel Size              3
                        Attention Layers                10
                        Attention Heads                 8
                        Hidden Size                     512
                        Dropout                         0.5
Pitch Predictor         Conv1D Layers                   30      50M
                        Conv1D Kernel Size              5
                        Attention Layers                10
                        Attention Heads                 8
                        Hidden Size                     512
                        Dropout                         0.5
Speech Prompt Encoder   Transformer Layer               6       69M
                        Attention Heads                 8
                        Hidden Size                     512
                        Conv1D Filter Size              2048
                        Conv1D Kernel Size              9
                        Dropout                         0.2
Diffusion Model         WaveNet Layer                   40      183M
                        Attention Layers                13
                        Attention Heads                 8
                        Hidden Size                     512
                        Query Tokens                    32
                        Query Token Dimension           512
                        Dropout                         0.2
Total                                                           435M
B The Details of WaveNet Architecture in the Diffusion Model
As shown in Figure 4, the WaveNet consists of 40 blocks. Each block consists of 1) a dilated CNN
with kernel size 3 and dilation 2, 2) a Q-K-V attention, and 3) a FiLM layer. In detail, we use Q-K-V
attention to attend to the key/value obtained from the first Q-K-V attention module (from the speech
prompt encoder) as shown in Figure 3. Then, we use the attention results to generate the scale and
bias terms, which are used as the conditional information of the FiLM layer. Finally, we average the
skip output results of each layer and calculate the final WaveNet output.
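To make the block structure concrete, the following is a simplified PyTorch sketch of one such block
(dilated Conv1d, Q-K-V attention over the prompt hidden states, and FiLM conditioning). The gating,
residual/skip wiring, and exact sizes of the real model are not reproduced, and all names here are
illustrative.

import torch
import torch.nn as nn

class WaveNetBlockSketch(nn.Module):
    def __init__(self, hidden=512, heads=8):
        super().__init__()
        self.dilated_conv = nn.Conv1d(hidden, hidden, kernel_size=3, dilation=2, padding=2)
        self.attn = nn.MultiheadAttention(hidden, heads, batch_first=True)
        self.to_film = nn.Linear(hidden, 2 * hidden)  # attention output -> FiLM scale and bias

    def forward(self, x, prompt_kv):
        # x: (B, hidden, T) latent sequence; prompt_kv: (B, M, hidden) prompt key/value states.
        h = self.dilated_conv(x).transpose(1, 2)          # (B, T, hidden)
        attn_out, _ = self.attn(h, prompt_kv, prompt_kv)  # attend to the speech prompt
        scale, bias = self.to_film(attn_out).chunk(2, dim=-1)
        h = scale * h + bias                              # FiLM conditioning
        return h.transpose(1, 2)                          # back to (B, hidden, T)

block = WaveNetBlockSketch()
x, prompt = torch.randn(2, 512, 100), torch.randn(2, 32, 512)
print(block(x, prompt).shape)  # torch.Size([2, 512, 100])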
Figure 4: Overview of the WaveNet architecture in the diffusion model.
C The 50 Particularly Hard Sentences
The 50 particularly hard sentences used in Section 5.3 are listed below:
01. a
02. b
03. c
04. H
05. I
06. J
07. K
08. L
09. 22222222 hello 22222222
10. S D S D Pass zero - zero Fail - zero to zero - zero - zero Cancelled - fifty nine to three - two -
sixty four Total - fifty nine to three - two -
11. S D S D Pass - zero - zero - zero - zero Fail - zero - zero - zero - zero Cancelled - four hundred
and sixteen - seventy six -
12. zero - one - one - two Cancelled - zero - zero - zero - zero Total - two hundred and eighty six -
nineteen - seven -
13. forty one to five three hundred and eleven Fail - one - one to zero two Cancelled - zero - zero to
zero zero Total -
14. zero zero one , MS03 - zero twenty five , MS03 - zero thirty two , MS03 - zero thirty nine ,
15. 1b204928 zero zero zero zero zero zero zero zero zero zero zero zero zero zero one seven ole32
16. zero zero zero zero zero zero zero zero two seven nine eight F three forty zero zero zero zero zero
six four two eight zero one eight
17. c five eight zero three three nine a zero bf eight FALSE zero zero zero bba3add2 - c229 - 4cdb -
18. Calendaring agent failed with error code 0x80070005 while saving appointment .
19. Exit process - break ld - Load module - output ud - Unload module - ignore ser - System error -
ignore ibp - Initial breakpoint -
20. Common DB connectors include the DB - nine , DB - fifteen , DB - nineteen , DB - twenty five ,
DB - thirty seven , and DB - fifty connectors .
21. To deliver interfaces that are significantly better suited to create and process RFC eight twenty
one , RFC eight twenty two , RFC nine seventy seven , and MIME content .
22. int1 , int2 , int3 , int4 , int5 , int6 , int7 , int8 , int9 ,
23. seven _ ctl00 ctl04 ctl01 ctl00 ctl00
24. Http0XX , Http1XX , Http2XX , Http3XX ,
25. config file must contain A , B , C , D , E , F , and G .
26. mondo - debug mondo - ship motif - debug motif - ship sts - debug sts - ship Comparing local
files to checkpoint files ...
27. Rusbvts . dll Dsaccessbvts . dll Exchmembvt . dll Draino . dll Im trying to deploy a new topology
, and I keep getting this error .
28. You can call me directly at four two five seven zero three seven three four four or my cell four two
five four four four seven four seven four or send me a meeting request with all the appropriate
information .
29. Failed zero point zero zero percent < one zero zero one zero zero zero zero Internal . Exchange .
ContentFilter . BVT ContentFilter . BVT_log . xml Error ! Filename not specified .
30. C colon backslash o one two f c p a r t y backslash d e v one two backslash oasys backslash
legacy backslash web backslash HELP
31. src backslash mapi backslash t n e f d e c dot c dot o l d backslash backslash m o z a r t f one
backslash e x five
32. copy backslash backslash j o h n f a n four backslash scratch backslash M i c r o s o f t dot S h a r
e P o i n t dot
33. Take a look at h t t p colon slash slash w w w dot granite dot a b dot c a slash access slash email
dot
34. backslash bin backslash premium backslash forms backslash r e g i o n a l o p t i o n s dot a s p x
dot c s Raj , DJ ,
35. Anuraag backslash backslash r a d u r five backslash d e b u g dot one eight zero nine underscore
P R two h dot s t s contains
36. p l a t f o r m right bracket backslash left bracket f l a v o r right bracket backslash s e t u p dot e x
e
37. backslash x eight six backslash Ship backslash zero backslash A d d r e s s B o o k dot C o n t a c
t s A d d r e s
38. Mine is here backslash backslash g a b e h a l l hyphen m o t h r a backslash S v r underscore O f
f i c e s v r
39. h t t p colon slash slash teams slash sites slash T A G slash default dot aspx As always , any
feedback , comments ,
40. two thousand and five h t t p colon slash slash news dot com dot com slash i slash n e slash f d
slash two zero zero three slash f d
41. backslash i n t e r n a l dot e x c h a n g e dot m a n a g e m e n t dot s y s t e m m a n a g e
42. I think Rich’s post highlights that we could have been more strategic about how the sum total of
XBOX three hundred and sixtys were distributed .
43. 64X64 , 8K , one hundred and eighty four ASSEMBLY , DIGITAL VIDEO DISK DRIVE ,
INTERNAL , 8X ,
44. So we are back to Extended MAPI and C++ because . Extended MAPI does not have a dual
interface VB or VB .Net can read .
45. Thanks , Borge Trongmo Hi gurus , Could you help us E2K ASP guys with the following issue ?
46. Thanks J RGR Are you using the LDDM driver for this system or the in the build XDDM driver ?
47. Btw , you might remember me from our discussion about OWA automation and OWA readiness
day a year ago .
48. empidtool . exe creates HKEY_CURRENT_USER Software Microsoft Office Common
QMPersNum in the registry , queries AD , and the populate the registry with MS employment ID
if available else an error code is logged .
49. Thursday, via a joint press release and Microsoft AI Blog, we will announce Microsoft’s
continued partnership with Shell leveraging cloud, AI, and collaboration technology to drive
industry innovation and transformation.
50. Actress Fan Bingbing attends the screening of ’Ash Is Purest White (Jiang Hu Er Nv)’ during the
71st annual Cannes Film Festival
D Prosody Similarity with Ground Truth
To further investigate the quality of prosody, we follow the generation quality evaluation of prosody
similarity between synthesized and prompt speech in Section 5.2 and compare the generated speech
with the ground-truth speech. We use Pearson correlation and RMSE to measure the prosody
matching between generated and ground-truth speech. The results are shown in Table 12. We observe
that NaturalSpeech 2 outperforms the baseline YourTTS by a large margin, which shows that our
NaturalSpeech 2 is much better in prosody similarity.
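As a small illustration (not the evaluation script used in the paper), the two matching metrics can be
computed per utterance as follows, given aligned phoneme-level prosody sequences.

import numpy as np
from scipy import stats

def prosody_match(generated, ground_truth):
    """Pearson correlation and RMSE between two aligned 1-D prosody sequences."""
    g, t = np.asarray(generated, dtype=float), np.asarray(ground_truth, dtype=float)
    corr, _ = stats.pearsonr(g, t)
    rmse = float(np.sqrt(np.mean((g - t) ** 2)))
    return corr, rmse

print(prosody_match([110, 140, 130, 150], [112, 150, 128, 155]))  # dummy pitch values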
Table 12: The prosody similarity between the synthesized and ground-truth speech in terms of the
correlation and RMSE on pitch and duration.

LibriSpeech        Pitch                          Duration
                   Correlation↑   RMSE↓           Correlation↑   RMSE↓
YourTTS            0.77           51.78           0.52           3.24
NaturalSpeech 2    0.81           47.72           0.65           2.72

VCTK               Pitch                          Duration
                   Correlation↑   RMSE↓           Correlation↑   RMSE↓
YourTTS            0.82           42.63           0.55           2.55
NaturalSpeech 2    0.87           39.83           0.64           2.50
E Ablation Study
In this section, we also compare the prosody similarity between audio generated by the ablation model
and the ground-truth speech in Table 13. Similar to the results of comparing the audio generated
by the ablation model and prompt speech, we also have the following observations. 1) The speech
prompt is most important to the generation quality. 2) The cross-entropy and the query attention
strategy are also helpful in high-quality speech synthesis.
Table 13: The ablation study of NaturalSpeech 2. The prosody similarity between the synthesized
and ground-truth speech in terms of the correlation and RMSE on pitch and duration. "-" denotes that
the model cannot converge.

                        Pitch                          Duration
                        Correlation↑   RMSE↓           Correlation↑   RMSE↓
NaturalSpeech 2         0.81           47.72           0.65           2.72
w/o. diff prompt        -              -               -              -
w/o. dur/pitch prompt   0.80           55.00           0.59           2.76
w/o. CE loss            0.79           50.69           0.63           2.73
w/o. query attn         0.79           50.65           0.63           2.73
|
synthetic_cpt | 2 | NeKo_Toward_Post_Recognition_Generative_Correction_Large_Language_Models_with_Task-Oriented_Experts.pdf | Neko: a Library for Exploring Neuromorphic Learning Rules
Zixuan Zhao
University of Chicago
Nathan Wycoff
Virginia Tech
Neil Getty
Argonne National Laboratory
Rick Stevens
Argonne National Laboratory &
University of Chicago
Fangfang Xia
Argonne National Laboratory &
University of Chicago
arXiv:2105.00324v2 [cs.LG] 13 Aug 2021
Figure 1: Neko overview. Key components in the neuromorphic learning library.
ABSTRACT
The field of neuromorphic computing is in a period of active explo-
ration. While many tools have been developed to simulate neuronal
dynamics or convert deep networks to spiking models, general
software libraries for learning rules remain underexplored. This
is partly due to the diverse, challenging nature of efforts to de-
sign new learning rules, which range from encoding methods to
gradient approximations, from population approaches that mimic
the Bayesian brain to constrained learning algorithms deployed
on memristor crossbars. To address this gap, we present Neko, a
modular, extensible library with a focus on aiding the design of
new learning algorithms. We demonstrate the utility of Neko in
three exemplar cases: online local learning, probabilistic learning,
and analog on-device learning. Our results show that Neko can
replicate the state-of-the-art algorithms and, in one case, lead to
significant outperformance in accuracy and speed. Further, it offers
tools including gradient comparison that can help develop new
algorithmic variants. Neko is an open source Python library that
supports PyTorch and TensorFlow backends.
CCS CONCEPTS
• Computing methodologies → Machine learning algorithms;
• Hardware → Neural systems.
Publication rights licensed to ACM. ACM acknowledges that this contribution was
authored or co-authored by an employee, contractor or affiliate of the United States
government. As such, the Government retains a nonexclusive, royalty-free right to
publish or reproduce this article, or to allow others to do so, for Government purposes
only.
ICONS ’21, July 27–29, 2021, PREPRINT
© 2021 Copyright held by the owner/author(s). Publication rights licensed to ACM.
ACM ISBN 978-1-4503-8691-3/21/07. . . $15.00
https://doi.org/10.1145/3477145.3477155
KEYWORDS
Neuromorphic computing, learning rules, approximate gradients,
Bayesian inference, Manhattan rule, open-source library
ACM Reference Format:
Zixuan Zhao, Nathan Wycoff, Neil Getty, Rick Stevens, and Fangfang Xia.
2021. Neko: a Library for Exploring Neuromorphic Learning Rules. In
PREPRINT . ACM, New York, NY, USA, 5 pages. https://doi.org/10.1145/
3477145.3477155
1 INTRODUCTION
Deep learning is the prevailing paradigm for machine learning. Over
the course of its meteoric rise, its many differences from human
learning have become increasingly clear. Chief among these are
gaps in data efficiency, robustness, generalizability, and energy effi-
ciency — all unlikely to narrow with growing computation power
alone. This has motivated a renewed search for brain-inspired learn-
ing algorithms. However, the current software infrastructure needs
improvement to support productive exploration.
Two common choices today for designing novel learning algo-
rithms are TensorFlow [1] and PyTorch [32]. These general deep
learning frameworks provide powerful abstractions for calculating
gradients and building deep neural networks, but there is no inter-
mediate layer between these two levels. For high-level development,
backpropagation is the only learning algorithm offered and is in
fact coupled with the training process.
Software in neuromorphic computing, on the other hand, has
traditionally focused more on simulating neurons and spiking neu-
ral networks [6, 8, 16, 41], interfacing with neuromorphic hardware
[11, 28, 35, 39], and converting pre-trained deep learning models
to spiking neural networks for inference [36, 37]. Learning has not
been a key part of these libraries. The few supported learning rules
such as spike-timing-dependent plasticity are not competitive on
large problems. As a result, new learning algorithms are developed
in independent codebases that are not easily reusable.
In this work, we present Neko, a software library under active
development for exploring learning rules. We build on the popular
autograd frameworks, and our goal is to implement key building
blocks to boost researcher productivity. By decoupling the learning
rules from the training process, we aim to provide an abstraction
model that enables mixing and matching of various design ideas.
To arrive at the right abstraction level, we need to sample a wide
range of learning algorithm research. Below are the three directions
and exemplars we have prioritized in this initial code release.
The first class of learning rules are gradient-based methods. They
approximate backpropagation with various levels of biological plau-
sibility [3, 24, 26, 27, 29, 31, 38, 40, 45]. From this category, we study
the e-prop algorithm [7] in detail and provide a complete reimple-
mentation. The second direction is based on the hypothesis that the
brain keeps track of probabilistic distributions over weights and
rewards [2, 10]. This line of exploration may offer important clues
towards achieving learning efficiency and robustness in the face of
uncertainty. We develop a sampling-based learning rule on spiking
neural networks (SNN). The third class is concerned with hardware
constraints on plasticity mechanisms. For this class, we include the
classic example of Manhattan rule training for memristive crossbar
circuits. In all three exemplars, we seek consistent implementation
in the Neko library.
2 LIBRARY DESIGN
The Neko library is designed to be modular, extensible, and easy
to use. Users can select from a collection of neuron models and
encoding methods to build a spiking or regular artificial neural
network, and train it with one of the implemented learning rules.
Alternatively, they could supply their own networks from PyTorch
or Keras [9] or develop new learning algorithms based on the pro-
vided intrinsics. The following code snippet provides an example
of solving MNIST [23] with the e-prop algorithm on a recurrent
network of 128 hidden adaptive leaky integrate-and-fire (ALIF)
neurons.
from neko.backend import pytorch_backend as backend

rsnn = ALIF(128, 10, backend, task_type='classification')
model = Evaluator(rsnn, loss='categorical_crossentropy',
                  metrics=['accuracy', 'firing_rate'])
learning_rule = Eprop(model, mode='symmetric')
trainer = Trainer(learning_rule)
trainer.train(x_train, y_train, epochs=30)
Listing 1: Train an SNN model of ALIF neurons with e-prop.
The training process illustrated in this example can be broken
down into a series of high-level Neko modules: the layer includes
pre-implemented recurrent SNNs and adaptors for existing Keras
and PyTorch models; the evaluator associates a model with a loss
function and optional metrics; the learning rule implements back-
propagation and a growing list of neuromorphic learning rules; and
the trainer handles training logistics as well as special logic to apply
multiple learning rules for gradient comparison between models.
Besides these core components, auxiliary modules include the data
loader, spike encoder, optimizer, and functions for loss, activation,
and pseudo-derivatives calculations.
To help users define custom algorithms, Neko also provides
a unified API for accessing frequently used features in Tensor-
Flow and PyTorch such as low-level tensor operations. Switching
the backend is straightforward. This feature can detect occasional
framework-dependent behavior and is useful for code verification
and performance analysis. The multi-backend support is reminis-
cent of the earlier Keras framework. However, Neko is different in
that it provides more fine-grained abstraction layers such that users
can replace the learning algorithm by changing a single line of code.
Taken together, these features also simplify the process of porting
code to hardware accelerators, since implementing a backend for
the hardware is sufficient to run all models in Neko on it.
3 USE CASES
In this section, we present results on the three representative learn-
ing rules introduced earlier. We also provide gradient analysis as
an example of Neko’s cross-cutting utilities that we are building to
help design, debug, and compare new learning algorithms.
3.1 Credit assignment with local signals
A key mystery in the brain is how it implements credit assignment.
The standard backpropagation through time (BPTT) algorithm is
unrealistic as we cannot expect a biological neuron to be aware of
all past synaptic strengths. Bellec et al. [7] proposed e-prop, a local
online learning algorithm for recurrent SNNs. The method exploits
the mathematical formula of BPTT, deriving an approximation
which only requires a recursive accumulative eligibility trace and a
local learning signal. These properties make the algorithm one step
closer to biologically realistic on-chip learning.
In Neko, we implemented full-featured e-prop algorithms includ-
ing the three variants: symmetric, random, and adaptive. Whereas
the paper manually derived the e-prop formulas for some networks,
we took a different approach: separating the model from the learn-
ing rules. In the layer module, the regular recurrent neural networks
and recurrent SNNs, with leaky integrate-and-fire (LIF) or ALIF
neurons, were all defined as standard models. Meanwhile, they
inherited from an Epropable class, which defined general symbolic
gradient formulas according to recurrent cell dynamics. Specifying
this extra information was all it took to perform e-prop, and in a
network-agnostic way. This design enabled the error-prone formula
derivation to be automated. It also sped up experiments with new
network architectures or e-prop variants.
We compared the Neko implementation of e-prop to the original
implementation on the TIMIT benchmark [15] for framewise speech
recognition. The authors reported the results on a hybrid network
of 100 ALIF and 300 LIF neurons [7]. In our experiment, we used
an ALIF-only network of 200 neurons and otherwise kept the setup
identical. We report close reproduction accuracy in Fig. 2. Notably,
Neko’s error rate dropped by 27%, after tuning regularization and
batch size, while keeping the firing rate low at 10 Hz. To the best
of our knowledge, this is the best SNN accuracy obtained with
a local learning rule, which in fact reaches the level of an LSTM
baseline trained with the precise gradients from BPTT ([7] Fig. S4).
Additionally, Neko is faster (training time from Nvidia V100) and
convenient for iterative development.
Figure 2: TIMIT results. We reproduce e-prop accuracy on speech
recognition in Neko with a smaller network. Neko is faster with
slight tuning and reduces error by 27% to reach the nonspiking
baseline performance of a BPTT-trained LSTM model.
3.2 Probabilistic learning
Bayesian statistics has captured much attention in the computa-
tional neuroscience community, both as an explanation for neural
behavior [22] as well as a means of performing inference in neural
networks. In Neko, we develop a Hybrid Monte Carlo, or HMC
[30], algorithm to perform Bayesian inference on spiking neural
networks based on Metropolis-adjusted Langevin diffusion [34].
Fundamentally, HMC algorithms are simply Metropolis-Hastings
samplers [19] where the proposal distribution is based on the gra-
dient. Though spiking neurons are non-differentiable by definition,
surrogate gradients can be defined by considering smoothed ver-
sions of the spiking activation function [31]. State of the art learning
algorithms for spiking neurons have used these surrogate gradients
successfully, and we also find success in deploying them in HMC
to form our proposal. In fact, this two-stage approach is especially
appealing for spiking neurons, since the theoretical underpinnings
of HMC place only very weak restrictions on what the proposal
direction should be, and certainly do not require an exact gradient
to be satisfied. Thus, from a theoretical perspective, running our
algorithm for sufficiently long will result in a sample from our true
posterior. Empirically, of course, it is not practical to explore the
entire nonconvex, high-dimensional posterior. We therefore verify
our implementation numerically.
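To make the proposal-plus-accept/reject loop concrete, the following is a bare-bones sketch of one
Metropolis-adjusted Langevin step on a generic parameter vector. Here log_post and grad stand in for
the (surrogate-gradient) log posterior of the spiking network; they are placeholders for illustration, not
Neko's API.

import numpy as np

def mala_step(theta, log_post, grad, step=1e-2, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    mean = lambda x: x + 0.5 * step**2 * grad(x)      # Langevin proposal mean
    prop = mean(theta) + step * rng.standard_normal(theta.shape)
    # log densities of the asymmetric Gaussian proposals q(prop|theta) and q(theta|prop)
    log_q_fwd = -np.sum((prop - mean(theta))**2) / (2 * step**2)
    log_q_rev = -np.sum((theta - mean(prop))**2) / (2 * step**2)
    log_alpha = log_post(prop) + log_q_rev - log_post(theta) - log_q_fwd
    return prop if np.log(rng.uniform()) < log_alpha else theta

# Toy example: sampling from a standard normal "posterior".
log_post, grad = lambda x: -0.5 * np.sum(x**2), lambda x: -x
theta = np.zeros(3)
for _ in range(1000):
    theta = mala_step(theta, log_post, grad, step=0.5)
print(theta)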
The MNIST-1D [18] data is a derivative of the popular MNIST
dataset of handwritten digits which transforms the image recog-
nition problem into a sequence learning problem (See Figure 3,
Left). We train a spiking neural network with 1,000 hidden neurons
using our proposed HMC algorithm1, and recorded the posterior
mean as well as uncertainty for the train set examples. As shown
in Figure 3 (Right), we find that the model displayed significantly
more uncertainty on test examples for which its best guess was
incorrect than when it was correct. This validates our algorithm,
as we would like errors to be associated with high uncertainty.
1Using an adaptive step size [5] with a diffusion standard deviation of 0.01 scaled by
the norm of the surrogate gradient, which was obtained via standard backpropagation.
Figure 3: Uncertainty Quantification. Left: An example input
representing the number 3 for the MNIST-1D data. Right: Poste-
rior uncertainty among test examples which were correctly versus
incorrectly predicted. Uncertainty is higher when errors are made.
As future work, we intend to compare HMC and other MCMC
algorithms to other probabilistic learning approaches such as Vari-
ational Bayes [17] and Monte Carlo Dropout [14] within the Neko
framework.
3.3 Analog neural network training
Memristors have emerged as a new platform for neuromorphic
learning [20, 42]. These devices represent the synapse weights
in the tunable conductance states of large crossbar architectures.
Compared with digital implementations of neural networks, these
analog circuits offer promising advantages in parallel processing,
in-situ learning, and energy efficiency [13, 25]. However, they also
place constraints on how the weights can be updated.
A classic way to train these networks is with the Manhattan rule
learning algorithm [44]. Although training with backpropagation
on device is theoretically possible, the time consumption of tuning
individual weights with feedback algorithm can be prohibitive, es-
pecially for larger scale neural networks [4]. As an alternative, the
Manhattan rule simply updates network weights by a fixed amount
according to the sign of the gradients, where the actual change
magnitude may depend on the state of the material. This learn-
ing rule has been applied successfully to simple machine learning
benchmarks in simulated or fully hardware-implemented analog
neural networks [43].
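In code, the basic rule amounts to moving every weight by a fixed step against the sign of its gradient,
optionally clipped to a conductance range. The sketch below is illustrative; the step size and clipping
bounds are our assumptions rather than a specific device model.

import numpy as np

def manhattan_update(weights, grads, step=0.01, w_min=-1.0, w_max=1.0):
    """Fixed-magnitude update using only the sign of the gradient."""
    updated = weights - step * np.sign(grads)
    return np.clip(updated, w_min, w_max)  # optional conductance-range constraint

w = np.array([0.2, -0.5, 0.9])
g = np.array([0.03, -0.7, 0.0001])
print(manhattan_update(w, g))  # [ 0.19 -0.49  0.89]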
Neko implements a family of Manhattan rules to simulate the
training process. It includes the basic algorithm and an extended
version that supports a specified range of material conductance
constraints. Because these learning rules do not have special re-
quirements for the network architecture, users can directly supply
existing Keras and PyTorch models with Neko’s adaptors. Our pre-
liminary results show that both the simple Manhattan rule and
the constrained version could train the MNIST dataset up to 96%
accuracy on a simple 2-layer (with 64, 32 neurons) multi-layer per-
ceptron, which is 2% lower than backpropagation.
3.4 Gradient comparison analysis
Many learning rules depend on gradients explicitly or implicitly.
Yet, gradient estimates are not intuitive to developers. Debugging
learning rules sometimes requires noticing the subtle differences
Table 1: Testing two classification exemplars using temporal spike encoding schemes

Encoding     Surgery1   ECG2
None         0.563      0.685
TC           0.687      0.699
SF           0.675      0.813
MW           0.620      0.763
Benchmark    0.766      0.811

1 A surgery kinematic dataset measuring the positions and orientations of surgical instruments during
labeled simulated exercises. Data available upon request.
2 A public ECG heartbeat categorization dataset [21] subsampled for class balance.
Figure 4: Gradient analysis tool. This example illustrates the
differences in approximate gradients among e-prop variants for
training MNIST: (top) a snapshot of the distributions of gradient
deviations, (bottom) how the gradient deviations change over time.
in gradient estimates and following their trends over the course of
training. In Neko, we have designed a gradient comparison tool
that can enumerate the gradients or weight changes for multiple
learning rules with the same model state and input data. It can also
track this information batch by batch. Visualizing this information
can help inspect approximation quality differences caused by algo-
rithm tweaks and identify equivalence in formula transformations.
Outside the context of debugging, the change in gradient estimates
throughout the training process can also reveal potential biases and
other properties of the learning algorithm.
The gradient comparison tool is made possible by Neko’s separa-
tion of the learning algorithm and trainer module. It is implemented
as a special trainer that takes multiple learning rules and clones of
the same model. While the primary model follows the usual training
process, the others’ parameters are synced with the primary at each
training step, and the weight changes are saved. The equivalence of
gradient changes and weight changes can be established using the
built-in naive optimizer which applies gradients directly without
learning rate.
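The core idea can be sketched as follows; the compute_update interface and the dictionary-of-weights
model are assumptions made for illustration and do not reflect Neko's actual API.

import copy

def compare_updates(primary_model, rules, batch):
    """Collect the weight changes each learning rule would apply from the same starting weights."""
    deltas = {}
    for name, rule in rules.items():
        clone = copy.deepcopy(primary_model)              # sync the clone with the primary model
        deltas[name] = rule.compute_update(clone, batch)  # record the would-be weight change
    return deltas

class DummyRule:  # toy rule for demonstration only
    def __init__(self, scale):
        self.scale = scale
    def compute_update(self, model, batch):
        return {k: self.scale * v for k, v in model.items()}

print(compare_updates({"w": 1.0}, {"ruleA": DummyRule(0.1), "ruleB": DummyRule(0.2)}, batch=None))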
Gradient analysis offers insights into how learning rules behave
relative to each other and backpropagation. Fig. 4 illustrates this
with an example of training spiking MNIST models with three vari-
ants of e-prop. While symmetric e-prop was the best at gradient
approximation, the relationship between random and adaptive ver-
sions was somewhat unexpected. The adaptive version produced
gradients with larger deviation and bias, which could explain its
weaker performance on the benchmark (not shown).
4 SUPPORTING UTILITIES
To further enable neuromorphic centric exploration, we integrate
the SpikeCoding toolbox [12] which enables simple encoding of
continuous value sequences into spikes with nearly a dozen algo-
rithms. We present experimental results (Table 1) on two temporal
data applications using three encoding schemes [33]:
• Temporal contrast (TC) encoding compares the absolute value
of a signal with a threshold derived by the derivative and
standard deviation of the full sequence multiplied by a tun-
able parameter.
• Step-forward (SF) encoding generates positive/negative spikes
by comparing values in a sequence to a moving baseline plus
a tunable threshold, which is initially the first value of the
sequence and updated at each spike (see the sketch after this list).
• Moving window (MW) encoding uses a similar moving base-
line and threshold to determine spiking but which is set to
the mean of values in a tunable time window.
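The step-forward scheme, for example, can be written in a few lines; the sketch below follows the
description above with parameter names of our choosing.

import numpy as np

def step_forward_encode(signal, threshold):
    baseline, spikes = signal[0], np.zeros(len(signal), dtype=int)
    for i, value in enumerate(signal[1:], start=1):
        if value > baseline + threshold:
            spikes[i], baseline = 1, baseline + threshold
        elif value < baseline - threshold:
            spikes[i], baseline = -1, baseline - threshold
    return spikes

print(step_forward_encode(np.array([0.0, 0.2, 0.5, 0.4, -0.1]), threshold=0.25))  # [ 0  0  1  0 -1]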
All models were trained with e-prop learning except for the
Benchmark RNN model trained with BPTT. While we note that
there was often a sizable decrease in accuracy using these encod-
ings, the sparsity of the input signal was significantly increased.
Spike encodings may enable the use and development of learning
algorithms more suited to or dependent on event based input.
5 CONCLUSIONS
We presented the design of a coding library for researching learning
algorithms. Through three examples, we demonstrated its capability
and ease of use in diverse scenarios. Our reference implementa-
tions introduced a new state-of-the-art in local temporal credit
assignment with SNNs, a sampling-based learning rule for esti-
mating weight and prediction posteriors, as well as simulations for
constrained training of analog neural networks on memristive hard-
ware. Additionally, we showed a cross-cutting example to support
learning rule inspection with gradient comparison analysis.
Two directions emerge for future work. First, we will extend
learning rules to complex neuron models (e.g., dendritic computa-
tion, structured neurons) and network architecture. Second, we will
port learning algorithms to emerging hardware platforms. Both pro-
cesses will be facilitated by the abstraction of learning algorithms
and the multi-backend support in the Neko library2.
2https://github.com/cortical-team/neko
ACKNOWLEDGMENTS
We thank Sihong Wang and Shilei Dai for helpful discussions. This
work is partially supported by Laboratory Directed Research and
Development (LDRD) funding from Argonne National Laboratory,
provided by the Director, Office of Science, of the U.S. Department
of Energy under Contract No. DE-AC02-06CH11357.
REFERENCES
|
synthetic_cpt | 8 | Generating_Training_Data_with_Language_Models_Towards_Zero-Shot_Language_Understanding.pdf |
Self-Training Vision Language BERTs with a
Unified Conditional Model
Xiaofeng Yang, Fengmao Lv, Fayao Liu, Guosheng Lin
Abstract—Natural language BERTs are trained on language corpora in a self-supervised manner. Unlike natural
language
BERTs, vision language BERTs need paired data to train,
which restricts the scale of VL-BERT pretraining. We propose
a self-training approach that allows training VL-BERTs from
unlabeled image data. The proposed method starts with our
unified conditional model – a vision language BERT model that
can perform zero-shot conditional generation. Given different
conditions, the unified conditional model can generate captions,
dense captions, and even questions. We use the labeled image data
to train a teacher model and use the trained model to generate
pseudo captions on unlabeled image data. We then combine the
labeled data and pseudo labeled data to train a student model.
The process is iterated by putting the student model as a new
teacher. By using the proposed self-training approach and only
300k extra unlabeled images, we are able to get competitive or even better performance compared to models of similar size trained with 3 million extra image data.
I. INTRODUCTION
Large scale pretraining has become the dominating approach
in various natural language processing tasks. The success of
large scale pretraining is due to a large amount of language
training data available everywhere and the self-training algo-
rithm. Unlike language pretraining, vision language pretraining
requires paired image and language data, which restricts the
scale of vision language BERTs’ pretraining. In this paper, we
propose a self-training approach that allows pretraining VL-BERTs using unlabeled image data.
Self-training is usually done by iterating the following three
steps: 1) training with labeled data, 2) generating pseudo labels
for unlabeled data, 3) mixing the labeled data and unlabeled
data with pseudo labels to retrain the network. However, the
self-training of vision language BERTs is nontrivial due to
the following reasons. First, although auto-encoding models
(e.g., BERTs [1], [2]) perform well on the natural language
understanding and image language understanding tasks, they
cannot be directly applied to the generation task without fine-
tuning [3]. In practice, it is difficult to generate pseudo labels
for unlabeled data using pretrained BERTs in the zero-shot
Corresponding author: Guosheng Lin.
Xiaofeng Yang and Guosheng Lin are with School of Computer Science and
Engineering, Nanyang Technological University (NTU), Singapore 639798
(email: [email protected], [email protected])
Fengmao Lv is with School of Computing and Artificial Intelligence,
feng-
Southwest Jiaotong University, Chengdu 611756, China (email:
[email protected])
Fayao Liu is with Agency for Science, Technology and Research
(A*STAR), Singapore 138632 (email: [email protected])
Fig. 1. An example of generated image descriptions. The original image
is selected from Conceptual Caption. Given different condition flags, our
proposed UCM model is able to generate diverse image descriptions, such
as COCO caption, dense caption, and questions. It’s clear that the generated
contents have different styles. Compared with the originally provided captions,
the generated ones could better describe the picture contents.
setting. Although these models can be finetuned to perform
generation tasks, the zero-shot generation of pseudo labels
is important since it saves the time of extra finetuning and
avoids adding additional bias from the finetuning datasets.
Second, current common practice in vision language BERT
pretraining uses various image descriptions to train, such as
image captions, dense captions and questions. Those image
descriptions have significant differences, making it difficult for
an unconditional model to learn to generate adequate pseudo
captions for unlabeled images. Hence, although self-training
has shown its effectiveness in various tasks [4], [5], how to
use it effectively in training vision language BERTs is not yet
studied.
To this end, we propose the Unified Conditional Model
(UCM) and a set of vision language BERT self-training
methods to tackle the above issues. Compared with previous
methods, our model has the following advantages: First, our
method combines auto-encoding training [1], [2] and auto-
regressive training [6] in a unified framework, which enables
our method to perform well on natural language understanding
tasks and at the same time effectively generate pseudo labels.
Second, we propose a novel conditional training method that
enables our model to conditional generate various types of
captions,
including COCO style captions, VG style dense
captions and questions.
Unified Conditional Model (UCM). Compared with tra-
ditional vision language BERTs, our proposed UCM has
two unique properties. First, the model is able to generate
different contents based on a condition flag input, such as
image captions, dense captions, and questions. Second, the
condition flag can be used as an identifier to help down-stream
finetuning. The proposed UCM shares similar model structures
with existing 2-stream vision language BERT models [2],
[7]. Specifically, it contains one image encoder, two language
encoders with shared weights, and several layers of cross
attention layers. In training, different data types are assigned
to their condition signals. The model is trained with both
bi-directional prediction masks and one-directional predic-
tion masks in parallel. For bi-directional prediction masks,
the model performs conditional masked language modeling
prediction, masked object prediction,
image-text matching,
and an auxiliary question answering loss. For one-directional
prediction masks, the model performs one-directional masked
conditional language modeling and masked object prediction
tasks. When the model is used to generate pseudo labels for
unlabeled images, the model will run forward propagation
with one-directional prediction masks only. The condition
signal enables the model
to generate diverse descriptions
for pictures. Fig. 1 shows an example of generated image
descriptions using different condition flags. When the model is
used for finetuning image language understanding tasks, only
the bi-directional mask is used. During finetuning, we use the
condition flag as prior knowledge for finetuning. For example,
when finetuning VQA tasks, the input is given an additional
condition flag to show the input is a question. Results show
that the presence of condition flags improves down-stream
finetuning performance.
Vision Language BERT Self-Training. The self-training
method is used in the pretraining stage to further enlarge
the scale of data that can be used in pretraining. Our self-
training approach follows the self-training pipeline with extra
optimization for vision language BERTs. Generally, the self-
training process is done in three steps. First, we use the labeled
image data to train a teacher model and then use the trained
model to generate pseudo labels on unlabeled image data. We
then combine the labeled data and pseudo labeled data to train
a student model. Finally, the process is iterated by putting
the student model as a new teacher. In our task, the pseudo
labels are COCO style image captions and VG style dense captions generated by UCM. In order to generate high quality
and diverse pseudo labels, we propose three methods. First,
we randomly mask object regions when generating captions.
This method makes sure the model can focus on different
visual areas when describing the images. Second, we randomly
sample a word from the top-K predictions in each prediction
step, such that even for the same image, the model could
generate various outputs. Finally, we use the condition flag
to control the contents generated. We show both qualitative
and quantitative comparisons in experiments section.
Experimentally, besides the commonly used COCO and
VG datasets, we train our model with only 300k extra un-
labeled data from Conceptual Caption [8] by removing the
provided captions. The original Conceptual Caption dataset
provides machine-generated captions. They are noisy [9] and
often used as out-of-domain training data [10]. The model
could out-perform the model trained with the whole three
million extra data in various down-stream finetuning tasks.
Also, we provide comprehensive ablation studies of the train-
ing settings. To summarize our contributions:
• We propose the first Unified Conditional BERT model
that could perform zero-shot conditional
image-based
language generation. Traditional bi-directional vision lan-
guage models cannot be used to generate language directly, and they are not conditional, so users cannot control the generation style.
• We propose a self-training method for using unlabeled
images in vision language pretraining. To the best of our
knowledge, this is the first work using self-training in
vision language pretraining.
• With only 300k extra image data, we achieve competitive or better performance compared with models of similar size trained with 3 million extra data.
II. RELATED WORK
A. Vision Language Pretraining
Traditional vision language methods build stand-alone mod-
els to solve VQA [11]–[16], captioning [17]–[19], naviga-
tion [20] and grounding [21] tasks. The success of large scale
pretraining in NLP [1] motivates the attempts of developing
similar models in vision language. Original pretrained lan-
guage models [1] use a single transformer [22] to encode
language words and positions. In the situations of vision + lan-
guage, there are usually two common choices: the one-stream
methods and two-stream methods. Two-stream methods, for
example ViLBERT [2], LXMERT [7] and 12in1 [23], use
two transformers to encode images and languages separately.
After that, there will usually be a cross-attention transformer
to combine the features from the two branches. One-stream
methods, for example VisualBERT [24], Unicoder-VL [25]
, Uniter [10] and Oscar [26], process vision and language
features with a single transformer encoder. In this case, the
visual and language information share the same attention
weights. Compared with two-stream methods, one-stream methods require more working memory and usually perform better, while typically having a smaller model size. Our work follows
the two-stream network design of LXMERT [7] and extends
the single language encoder of LXMERT [7] to two shared-
weight
language encoders that process the one-directional
mask and two-directional mask at the same time. This network
design allows our network to generalize better on generation
tasks.
Although BERT is a form of language model, the above vision language BERTs, like natural language BERTs [1], [3], cannot be used directly to generate language. The
most straightforward reason is that BERT learns bidirectional
contexts, while generation is one-directional. VLP [27] pro-
poses to train vision language models with both bi-directional
and one-directional masks, such that the model can be used
for both VQA and image captioning. Compared to previous
work, our model has two unique properties: First, it is able
to perform conditional generation, namely generating specific
Fig. 2. A detailed illustration of proposed UCM. During training, given image regional features and language embeddings, we process the language embeddings
through the bi-directional language encoder and the one-directional language encoder. The two language encoders share the same weights. The bi-direction and
one-directional branches are conditioned by using a normal mask and a triangular mask. The images are processed by the image encoder. Finally, the cross-
attention layers merge visual features with the outputs from both language encoders. We use a rectangle mask for one-directional prediction in cross-attention
layers, such that only the positions before [MASK] token could see visual features.
contents based on a condition signal. Second, we use the
pretrained model to perform zero-shot language generation,
without extra finetuning.
B. Self-Training
Self-training methods [4], [28] first use labeled data to train
a teacher model, then use the teacher model to generate pseudo
labels for unlabeled data and finally use the labeled data and
pseudo labeled data to jointly train a student model. [4] identifies the importance of adding noise in the self-training of image
classification tasks. Self-training also improves object detec-
tion and semantic segmentation results compared with pre-
training [5]. In machine translation [29]–[31], self-training has shown its effectiveness on various datasets.
We provide a set of self-training algorithms and give de-
tailed ablation studies.
III. METHOD
In this section, we describe our method in two folds:
our proposed unified conditional model and the self-training
algorithms. For the unified conditional model subsection, we
first introduce the model overview including model structures
and important tokens. Then we introduce the training tasks and
training losses. For the self-training algorithms subsection, we
introduce the technical details of our proposed self-training
algorithms for vision language models.
A. Unified Conditional Model (UCM)
1) Model Overview: The overall structure of our model
is illustrated in Figure 2. For the base network, we briefly
follow the 2-stream model as used in [7] and extend it to
our unified conditional model. Specifically, the model contains
9 layers of language transformers, 5 layers of object region
transformers, and 5 layers of cross transformers. Given a
sentence description, we tokenize it into WordPiece tokens,
pad them with Classification [CLS] and Separation [SEP]
tokens and randomly mask them with Mask [MASK] tokens.
We add a condition token [CND] after the [CLS] token. The
masked tokens are then passed to an embedding layer. We
process the language embeddings through the bi-directional
language encoder and the one-directional language encoder,
same as previous works [27], [32]. The two language en-
coders share the same weights. The bi-direction and one-
directional branches are distinguished by using an empty mask
(bi-directional mask) and a triangular mask (one-directional
mask) [33]. Given a one-directional mask, the tokens can only
observe the tokens before themselves in the attention modules,
which makes the module more capable of doing generation
tasks. Given a bi-directional mask, the tokens can observe
both the tokens after and before themselves. Experiments in
BERT [1] prove that this design works better on understanding
tasks, for example VQA tasks. The images are processed by
the image encoder. After that, the bi-directional output and
one-directional output are merged with image output through
cross-attention layers [2], [7]. For the cross-attention layer, we
use a rectangle mask for the one-directional prediction branch,
such that only the position before [MASK] could attend to
image features. During inference, our model is identical to
traditional 2-stream vision language BERT models without
any extra computational cost. When doing image-language
understanding tasks, for example finetuning visual question
answering, the model runs forward propagation using the bi-directional mask only. When performing image-language generation tasks, for example generating pseudo labels for unannotated images, the model runs forward propagation using the one-directional mask only.
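The two masks described above can be made concrete with a short sketch. The snippet below is illustrative only (it is not the authors' code and the tensor names are assumptions): it builds the all-visible mask used by the bi-directional branch and the triangular mask used by the one-directional branch.

```python
import torch

def build_attention_masks(seq_len: int):
    # Bi-directional branch: every token may attend to every other token.
    bidirectional = torch.ones(seq_len, seq_len).bool()
    # One-directional branch: a lower-triangular (causal) mask, so that
    # position i only attends to positions <= i, as needed for generation.
    one_directional = torch.tril(torch.ones(seq_len, seq_len)).bool()
    return bidirectional, one_directional

bi_mask, uni_mask = build_attention_masks(6)
print(uni_mask.int())  # ones on and below the diagonal
```

Because the two language encoders share weights, switching between understanding and generation only changes which of these masks is passed to the attention layers.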
2) Training Tasks: Conditional Masked Language Mod-
eling (CMLM) The CMLM task is for both bi-directional
prediction and one-directional prediction. Given the image
regional features v = {v_1, ..., v_K}, the language words w = {w_1, ..., w_T} and the condition c, for bi-directional prediction
task, we randomly mask the language words at a ratio of 0.15.
Once a position is selected, similar to BERT [1], we replace it with the [MASK] token 80% of the time, replace it with a random word 10% of the time, and keep the original word otherwise. The loss of bi-directional CMLM is defined as the
negative log-likelihood of predicting the masked words given
conditions and all other words except the mask words:
\mathcal{L}_{\text{CMLM-Bi}}(\theta) = -\mathbb{E}_{(w,v)} \log P_\theta(w_m \mid w_{\setminus m}, v, c) \qquad (1)
For the one-directional CMLM task, we randomly mask one word from each sentence; the masked word can also be the period symbol. The prediction of the masked word is based only on the words before the current position:
\mathcal{L}_{\text{CMLM-One}}(\theta) = -\mathbb{E}_{(w,v)} \log P_\theta(w_m \mid w_{<m}, v, c) \qquad (2)
where θ in the above two equations denotes the model parameters and w_{<m} denotes all words before position m. The bi-directional CMLM and the one-directional CMLM share the same model parameters.
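As a rough illustration of the two CMLM objectives, the sketch below computes the negative log-likelihood of Eq. (1)/(2) over the masked positions only; it assumes the model has already produced per-token vocabulary logits, and all names are placeholders rather than the authors' implementation.

```python
import torch
import torch.nn.functional as F

def cmlm_loss(logits, target_ids, masked_positions):
    # logits: (batch, seq_len, vocab); target_ids: (batch, seq_len)
    # masked_positions: (batch, seq_len) boolean, True where a word was masked.
    vocab = logits.size(-1)
    token_nll = F.cross_entropy(
        logits.view(-1, vocab), target_ids.view(-1), reduction="none"
    ).view_as(target_ids)
    mask = masked_positions.float()
    # Average the negative log-likelihood over masked positions only.
    return (token_nll * mask).sum() / mask.sum().clamp(min=1.0)
```

The same loss applies to both branches; only the attention mask fed to the shared encoder, and hence the conditioning context, differs.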
Image-Text Matching The matching task is only done for
the bi-directional prediction branch. With 50% probability, we assign a mismatched sentence to the image. The fake sentence is
generated by randomly sampling a caption from other images.
Specifically, we use the final feature at position [CLS] to
represent the summary of current visual and language input.
We use this feature to classify whether the current input text
and image are matched.
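A minimal sketch of the negative sampling described above follows; the 50/50 split is taken from the text, while the batch format and helper name are assumptions.

```python
import random

def make_itm_batch(images, captions, neg_prob=0.5):
    # Pair each image with its own caption (label 1) or, with probability
    # 0.5, with a caption sampled from another image (label 0).
    batch = []
    for i, (img, cap) in enumerate(zip(images, captions)):
        if len(captions) > 1 and random.random() < neg_prob:
            j = random.choice([k for k in range(len(captions)) if k != i])
            batch.append((img, captions[j], 0))
        else:
            batch.append((img, cap, 1))
    return batch
```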
Auxiliary Question Answering The QA task is only done
for the bi-directional prediction branch. If the sampled text is
a question, we use the feature at position [CLS] and a QA head to predict its answer and compute a classification loss.
Masked Object and Attributes Modeling (MOAM) The
MOAM task is for both bi-directional prediction branch and
one-directional prediction branch. The object features are
always bidirectionally visible for both branches. We randomly mask the visual regions at a ratio of 0.15. Once a region is selected, we zero out its feature 80% of the time, replace it with a random feature sampled from the dataset 10% of the time, and keep the original feature otherwise. The
loss of MOAM is defined as the negative log-likelihood of
predicting the masked regions’ class and attributes given all
words except the masked position:
\mathcal{L}_{\text{MOAM}}(\theta) = -\mathbb{E}_{(v,w)} \log P_\theta(v_m \mid v_{\setminus m}, w) \qquad (3)
Fig. 3. The self-training algorithm. Our self-training approach is done by
first training a UCM model with the labeled annotations and then iterating
two steps: generating pseudo labels on unlabeled data and retraining with mixed data. When generating pseudo labels, random sampling of language words, random masking of image regions, and the condition flag are used as data augmentations.
Here, the ground-truth regional classes and attribute classes
are hard labels generated by Faster-RCNN [11] prediction. The
MOAM losses from the two prediction branches are averaged
when calculating gradients.
Masked Feature Regression (MFR) For each masked
region, besides predicting the labels and attributes of that
region, we also perform masked feature regression to recover
its original visual feature:
\mathcal{L}_{\text{MFR}}(\theta) = \lVert v_m - \hat{v}_m \rVert_2^2 \qquad (4)
where \hat{v}_m denotes the ground-truth regional feature.
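The visual-side corruption and the regression target of Eq. (4) could look roughly like the sketch below; details beyond the text, such as how the random replacement features are drawn, are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def corrupt_regions(features, mask_prob=0.15):
    # features: (batch, num_regions, dim) Faster R-CNN region features.
    batch, num_regions, dim = features.shape
    selected = torch.rand(batch, num_regions) < mask_prob
    action = torch.rand(batch, num_regions)
    zero = selected & (action < 0.8)                     # 80%: zero out the feature
    swap = selected & (action >= 0.8) & (action < 0.9)   # 10%: random feature
    corrupted = features.clone()
    corrupted[zero] = 0.0
    pool = features.reshape(-1, dim)
    corrupted[swap] = pool[torch.randint(pool.size(0), (int(swap.sum()),))]
    return corrupted, selected                           # remaining 10%: keep as is

def mfr_loss(predicted, original, selected):
    # Eq. (4): L2 regression between predicted and original features
    # evaluated at the masked regions.
    return F.mse_loss(predicted[selected], original[selected])
```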
B. Self-Training Algorithms for Vision Language BERTs
In this section, we talk about the self-training algorithm.
Figure 3 illustrates the training process of our algorithm. We
first train a UCM model with the human labeled data from
COCO and VG datasets. Then we repeat two steps: generating
pseudo labels on Conceptual Caption unlabeled data and
retraining the UCM model with mixed data from COCO, VG
and Conceptual Caption.
Train UCM with labeled data. We first
train UCM
using captioning annotations in MSCOCO, dense captioning
annotations in Visual Genome and questions in VQA [34] and
GQA dataset [35]. The trained UCM is able to generate the
above three different styles of content.
Annotate unlabeled data with trained UCM. We then use
the trained UCM to generate pseudo labels on images from
Conceptual Captions dataset. Conceptual Caption dataset pro-
vides one caption for each image by default, while the default
captions are machine-generated, not of good quality [9] and
are often used as out-of-domain training data [10]. Therefore,
we remove the original captions and use the data as unlabeled
image data. To boost the performance of self-training, the
generated captions need to be diverse. We introduce 3 methods
to generate diverse image captions for each image. First, we
perform image-level augmentations. We randomly mask object
regions when generating captions. Empirically, each image
regional feature is masked with a ratio of 0.5. This method
makes sure the model can focus on different visual areas when
generating outputs. Second, we perform augmentations when
sampling language words. We randomly sample a word from
the top-K predictions in each prediction step, such that even
for the same image input, the model could generate different
captioning outputs. We choose K = 5 based on the common
design choice of Image Captioning [11]. This is especially
useful when generating dense captions. The result shows that
the generated dense captions usually focus on one object
region. Given one fixed image, the generated dense captions
will always be the same without sampling. Compared with
beam-search method, the top-K sampling method is faster but
may potentially generate noisier captions. In experiments, we
observe the performance differences between the two methods
are negligible according to finetuning accuracy. One possible
reason is that the generated captions are only used as pseudo
labels and pseudo labels are noisy labels by default. Some
works [4] even purposely add noise to the generation process
of pseudo labels. Therefore, we use top-K sampling method
to speed up the generation process. We discard the generated
questions because they usually contain less information than
the captions. Finally, we use the condition flag to control the
contents generated. For each image, we generate 5 captions
with MSCOCO flag and 10 captions with VG Dense caption
flag. The condition flag conditions the style of the generated
contents.
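Putting these ingredients together, pseudo-caption generation could be sketched as below. The helper step_logits_fn stands in for a forward pass of the one-directional branch conditioned on the (partially masked) image features and a chosen [CND] flag; it is a placeholder, not an API from the paper.

```python
import torch

def sample_pseudo_caption(step_logits_fn, start_ids, max_len=20, top_k=5):
    # Top-K sampling: at each step keep the K most likely tokens and sample
    # one of them, so repeated calls yield diverse captions for one image.
    ids = list(start_ids)                                # e.g. [CLS], [CND]
    for _ in range(max_len):
        logits = step_logits_fn(torch.tensor(ids))       # (vocab,)
        topk_vals, topk_idx = logits.topk(top_k)
        probs = torch.softmax(topk_vals, dim=-1)
        ids.append(topk_idx[torch.multinomial(probs, 1)].item())
    return ids
```

Region masking and the choice of [CND] flag happen outside this loop, before the image features are encoded.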
Train new model by mixing labeled and unlabeled data.
After pseudo labels are generated for unlabeled images, we
mix the pseudo labeled data and original labeled data to train
a new model. Unlike self-training methods in image classifica-
tion [4], which train new models from scratch, we propose to
initialize the model with the last pretrained weight. The design
choice is based on 2 considerations. First, vision language
BERT pretraining takes a long time. Loading pretrained weight
can help to reduce the training time of new models. Second, for
image classification tasks, if we directly use soft classification
labels to describe the unlabeled image and load previously
trained weights,
the loss will be zero on pseudo labeled
data because the labels are exactly from the model itself.
Compared with soft classification labels, generated captions
are generated from sampling and do not directly describe the
output distribution of previous models. This property reduces
the risk that loading the previous model will result in zero
loss on pseudo labeled data. This design choice also shares
the same spirit as previous self-training works [36], [37],
where the teacher models’ weights are derived from the student
models.
Iterating the previous steps. Following the common
practice of self-training, we iterate the “Annotate unlabeled
data with trained UCM” and “Train new model by mixing
labeled and unlabeled data” steps a few times to get better
performance. A detailed ablation study is shown in Section
IV.
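At a high level, the iterated pipeline of Fig. 3 reduces to a few lines of pseudo-Python; train_fn and generate_fn are placeholders for pretraining and pseudo-labeling, and the re-initialization from the previous weights follows the design choice discussed above.

```python
def self_train(train_fn, generate_fn, labeled_data, unlabeled_images, rounds=2):
    model = train_fn(labeled_data, init=None)            # teacher on labeled data
    for _ in range(rounds):
        pseudo = [(img, cap)
                  for img in unlabeled_images
                  for cap in generate_fn(model, img)]    # pseudo captions
        # The student is initialized from the previous model, not from scratch.
        model = train_fn(labeled_data + pseudo, init=model)
    return model
```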
TABLE I
TOTAL NUMBER OF PRETRAINING IMAGE-LANGUAGE PAIRS
                     COCO    VG      VQA    GQA   CC
Total # of pairs     533k    5.06m   444k   1m    -
Used # of pairs      533k    1m      444k   1m    4m
A. Data Input
We first use pretrained Faster R-CNN [11], [38] to extract
regional visual features. The spatial information of each re-
gional object is represented by its relative position, height, and width. The spatial embedding is then calculated with
an embedding layer. The final representation of each object
is represented by adding the spatial embedding and visual
features. For languages, we follow BERT [1] and tokenize the
input sentences into WordPiece tokens. After tokenizing, we pad them with [CLS] and [SEP] tokens. Unlike the original BERT,
here we use [CLS] token to denote both start of sentence
and classification position and we use [SEP] to denote end of
sentence. Finally, we add the condition flag [CND] after [CLS]
token. The condition flag [CND] represents a set of certain
flags. In this work, the [CND] flag has three types: COCO
type caption [39], visual genome [40] type dense caption and
questions.
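The input construction described in this subsection can be summarized by the sketch below; the spatial feature dimension, hidden size and token ids are illustrative assumptions rather than values reported in the paper.

```python
import torch.nn as nn

class RegionEmbedding(nn.Module):
    """Region feature plus an embedding of its spatial layout (relative
    position, height, width), summed to form the final object representation."""
    def __init__(self, feat_dim=2048, spatial_dim=5, hidden=768):
        super().__init__()
        self.feat_proj = nn.Linear(feat_dim, hidden)
        self.spatial_proj = nn.Linear(spatial_dim, hidden)

    def forward(self, region_feats, spatial_boxes):
        return self.feat_proj(region_feats) + self.spatial_proj(spatial_boxes)

def build_text_input(token_ids, cnd_id, cls_id=101, sep_id=102):
    # [CLS] [CND] w_1 ... w_T [SEP]; the concrete token ids are placeholders.
    return [cls_id, cnd_id] + token_ids + [sep_id]
```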
B. Pretrain Details
Pretraining Datasets. Our pretraining dataset contains la-
beled data and unlabeled image data. For labeled data, we
follow the same setting as in [7]. The labeled data is collected
from MSCOCO [39], Visual Genome [40], VQA [34] and
GQA datasets [35], which contain around 180k images in total.
Although the VG dataset contains more than 5 million image-
text pairs, most of them are dense captions and some of them
are repeated entries. In experiments, we remove the repeated
dense captions, and sample 10 dense captions for each image.
For unlabeled images, we use the first 300k images from
Conceptual Caption dataset and remove the original captions.
Within the 300k unlabeled images, we further filter the data
by object detection results. We remove the images with the
top 36 objects’ average confidence below 0.3. Thus only 280k
unlabeled images are left and we use them in self-training.
The total numbers of pretraining image-language pairs are listed in Table I. Compared with LXMERT [7], which uses 9 million image-language pairs, we use about 7 million pairs in total.
Self Training Setting. In experiments, we iterate the self-
training process 2 times. When training UCM, we use the same
parameter settings. We use AdamW optimizer with learning
rate 5e-5 and batch size 256. Each time we train the model
for 10 epochs. We use warm-up for the first 10% of iterations.
We also use fp16 mixed precision to speed up training.
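For reference, the optimization setup described above could be written as follows; the shape of the schedule after the first 10% of steps is an assumption, since the text only specifies the optimizer, learning rate, batch size and warm-up fraction.

```python
import torch

def make_optimizer(model, total_steps, warmup_frac=0.1):
    optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
    warmup_steps = max(1, int(warmup_frac * total_steps))

    def lr_lambda(step):
        # Linear warm-up over the first 10% of iterations, then constant.
        return min(1.0, step / warmup_steps)

    scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda)
    return optimizer, scheduler
```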
IV. EXPERIMENTS
In this section, we describe the pretraining details, ablation
experiments, visualizations, and experimental results on down-
stream datasets.
C. Finetuning Settings
We present our finetuning settings for VQAv2 [34],
GQA [35], NLVR2 [42] and COCO Caption [39].
TABLE II
COMPARISON WITH OTHER VISION-LANGUAGE PRE-TRAINING MODELS ON VQAV2, GQA, NLVR2 AND COCO CAPTION. OUR MODEL COULD
ACHIEVE COMPETITIVE OR BETTER PERFORMANCE AMONG ALL MODELS GIVEN FEWER TRAINING IMAGES. EVALUATION METRICS: FOR VQA, GQA,
NLVR2, RESULTS ARE PRESENTED BASED ON THE ACCURACY. FOR COCO CAPTION, WE FOLLOW THE COMMON STANDARDS TO COMPARE THE BLEU
(B@4), METEOR (M), CIDER (C) AND SPICE (S) SCORES.
Tasks                               ViLBert [2]   LXMERT [7]   UNITER-base [10]   ERNIE-VIL-base [41]   VLP [27]   UCM (Ours)
Pretrain Images                     3m            180k         4.2m               4.2m                  3m         480k
VQA             test-dev            70.55         72.42        72.70              72.62                 70.5       72.9
                test-std            70.92         72.5         72.91              72.85                 70.7       72.9
GQA             test-dev            -             60.00        -                  -                     -          61.3
                test-std            -             60.30        -                  -                     -          61.5
NLVR2           dev                 -             74.9         75.85 (77.18)      -                     -          75.6
                test-P              -             74.5         75.80 (77.85)      -                     -          75.5
COCO Caption    B@4                 -             -            -                  -                     36.5       37.4
                M                   -             -            -                  -                     28.4       28.8
                C                   -             -            -                  -                     117.7      119.4
                S                   -             -            -                  -                     21.3       21.2
COCO Caption    B@4                 -             -            -                  -                     39.5       39.0
(CIDEr          M                   -             -            -                  -                     29.3       28.8
Optimization)   C                   -             -            -                  -                     129.3      130.1
                S                   -             -            -                  -                     23.2       22.7
1) VQAv2: The VQAv2 task [34] is to answer questions about a given image. The answering process is usually formatted as a classification task over all candidate answers. In
our experiments, the VQA questions are appended with the
question condition flag before input to the model. We use the
final features at position [CLS] to answer the question. We
add a two-layer MLP to the final output of [CLS] and use
the feature to perform classification. In ablation experiments,
we only use default provided data. In the final experiments,
following [10], we use extra QA data from Visual Genome
for data augmentation.
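A sketch of the VQA head on top of the [CLS] feature might look like the following; the hidden size and the answer-vocabulary size (3129 is a commonly used value for VQAv2, not a number from the paper) are assumptions.

```python
import torch.nn as nn

class VQAHead(nn.Module):
    """Two-layer MLP over the final [CLS] feature, classifying over the
    candidate answers."""
    def __init__(self, hidden=768, num_answers=3129):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(hidden, hidden * 2),
            nn.GELU(),
            nn.Linear(hidden * 2, num_answers),
        )

    def forward(self, cls_feature):
        return self.mlp(cls_feature)
```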
2) GQA: Similar to VQAv2, for GQA dataset [35], we
format the problem as a classification task and use the output
feature at position [CLS] to answer the questions. Same as
VQA, the GQA questions are appended with the question con-
dition flag before input to the model. In ablation experiments,
we only use GQA balanced dataset for training. To further
help the model adapt to GQA style of questions, in the final
experiment, we follow other works [26] to pretrain the model
using GQA full set first and then finetune on GQA balanced
dataset.
3) NLVR2: The Natural Language for Visual Reasoning for Real dataset [42] requires deciding whether a description is correct given two images. The UCM model processes one image and one sentence by default. Therefore, we split each example into two sentence-image pairs and process each pair using
our proposed model. After getting the [CLS] features for both
of the pairs, we simply concatenate the 2 features and use the
concatenated feature to perform a binary classification. We
noted that in [10], a different finetuning process is proposed.
For a fair comparison, we compare the results with the same
finetuning setting. In NLVR2 experiments, no condition flag is
assigned to the sentence as the NLVR2 data does not belong
to any type of the pretrained conditions.
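The pairing scheme can be sketched as follows (sizes are illustrative): each of the two sentence-image pairs is encoded separately, and the two [CLS] features are concatenated for a binary decision.

```python
import torch
import torch.nn as nn

class NLVR2Head(nn.Module):
    def __init__(self, hidden=768):
        super().__init__()
        self.classifier = nn.Linear(hidden * 2, 2)

    def forward(self, cls_img1, cls_img2):
        # cls_img1 / cls_img2: [CLS] features from the two sentence-image pairs.
        return self.classifier(torch.cat([cls_img1, cls_img2], dim=-1))
```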
4) COCO Caption: We also finetune our model on gen-
eration tasks, e.g., COCO Caption [39] on the Karpathy split. During finetuning, we use the one-directional mask only to train
the model. During the generation process, the start token is set
to [CLS] and [CND] of COCO captioning type. We first use
cross-entropy loss to train the captioning model and then apply
CIDEr optimization [43] to further improve performance.
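Caption decoding at inference time, as described above, reduces to a simple left-to-right loop starting from [CLS] and the COCO-caption condition token; step_logits_fn is again a placeholder for the one-directional forward pass, and greedy decoding is shown only as one simple option.

```python
import torch

def decode_caption(step_logits_fn, cls_id, cnd_coco_id, sep_id, max_len=20):
    ids = [cls_id, cnd_coco_id]
    for _ in range(max_len):
        next_id = int(step_logits_fn(torch.tensor(ids)).argmax())
        if next_id == sep_id:
            break
        ids.append(next_id)
    return ids[2:]  # drop the [CLS] and [CND] control tokens
```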
D. Ablation Experiments
1) Step by Step Ablation Studies: In this section, we provide
step by step ablation studies of our proposed system. The
ablation experiments are done on VQAv2, GQA, NLVR2 test-
dev set. The results are shown in Table III. We start by training
a baseline model with the same network architecture by
only using bi-directional pretraining masks and bi-directional
training tasks. We then add 300k images and their original
annotations from Conceptual Caption to training data. Results
show that simply adding 300k extra image data pairs is
unable to improve down-stream finetuning performance much.
Moreover, we also try to use LXMERT [7] to generate pseudo
labels. As pointed out in previous sections and Fig 4, the
generation quality of LXMERT is not good. Therefore, we
observe a huge performance drop when using pseudo labels
generated by LXMERT. Furthermore, we perform experiments
using UCM with both labeled data and pseudo data generated
by the generative model VLP without self-training. Compared
with the results only using labeled data, we observe that there
is almost no performance improvement. One reason is that
the generative model VLP can only generate COCO style
captions, therefore the diversity of training data is still limited.
Compared with our self-training results, we observe that the
self-training method can improve the performance further.
After that, we train our proposed UCM model with labeled
data only and finetune using the question flag. The result
shows simply using UCM and condition flag could improve
down-stream finetuning performance. Also, we do one more
Fig. 4. An example of generated image descriptions with or without the condition signal. Generation results from a traditional VL-BERT model are not
good. For the unified model variations, if the model is not conditional, the generated results are biased to dense captioning style and tend to generate short
sentences. Our proposed UCM model could learn a conditional generation language model given different conditional flags.
TABLE III
ABLATION EXPERIMENTS OF OUR PROPOSED METHOD ON VQAV2, GQA AND NLVR2.
Method                                 VQAv2   GQA    NLVR2
Baseline                               72.4    60.0   74.9
Baseline + Conceptual Caption 300k     72.4    59.9   75.3
UCM with only labeled data             72.6    60.3   74.8
UCM with LXMERT Pseudo Caption         68.7    57.3   70.1
UCM with VLP Pseudo Caption            72.6    60.0   74.5
UCM w/o condition flag                 72.4    59.8   74.8
UCM + self training step 1             72.8    60.6   75.6
UCM + self training step 2             72.7    60.5   75.6
TABLE IV
ADDITIONAL ABLATION EXPERIMENTS ON COCO CAPTION
                                COCO Caption                COCO Caption (CIDEr Optimization)
Model                           B@4   M     C      S       B@4   M     C      S
Baseline                        34.9  27.1  109.4  19.9    34.0  26.8  117.7  19.6
UCM with only labeled data      36.9  28.5  117.8  21.2    38.1  28.5  129.9  22.5
UCM + self training step 1      37.4  28.8  119.4  21.2    39.0  28.8  130.1  22.7
experiment by removing the condition flag during finetuning.
The results drop a little bit if the condition flag is not used.
We then move to ablation studies of self-training algorithm.
We iterate the self-training process by 1 iteration and 2
iterations. We found that based on down-stream finetuning per-
formance, 1 iteration is good enough. Performing 2 iterations
is unable to improve performance much.
2) Ablation Experiments on Generation Tasks: The ablation
experiments on generation tasks are shown in Table IV.
The generation process is to predict the next word given the words before and can be formatted as applying a one-
directional mask on the sentence tokens. The baseline model
gives low performance on COCO captioning task because the
model is only trained with bi-directional tasks and is not
suitable for generation. We also witness slow convergence
speed and training instability. Compared with the baseline
model, the UCM model trained with only labeled data can
outperform the baseline by a large margin due to the existence
of one-directional mask during pretraining. The model also
converges faster during experiments. Our UCM model can be
finetuned within 40 GPU hours with Nvidia-2080ti GPUs on
COCO Caption task. However, the baseline model requires 3
times more finetuning GPU hours. Following the results in
previous section, we use self-training step 1 model as the best
model. The self-training process also proves its effectiveness
in captioning experiments. Compared with the model trained
with only labeled data, the self-training model achieves higher
accuracy with or without CIDEr optimization.
3) Conditional vs Unconditional: The experiments in the
last section show the effectiveness of the conditional model in
Fig. 5. Visualization of the attention map at condition tokens. The darkness of connections represents the attention weight. The darker the color, the
higher attention is assigned. Top: Attention masks at bi-directional branch. Bottom: Attention masks at one-directional branch. Left: An example of a dense
caption sentence. Right: An example of a question sentence. Based on the visualization results, we can have three conclusions: 1. [CND] mainly affects the
one-directional branch. 2. [CND] affects more in deeper layers. 3. [CND] has similar effects on questions and non-questions.
finetuning downstream tasks. In this section, we further study
how the condition flag affects the generation performance.
Results are shown in Figure 4. To study this problem, we start
by using an online available pretrained vision language BERT
model [7] to generate captions. Following [3], we format
the generation problem as a sampling problem from Markov
Random Field and try to generate languages based on this
setting. We found that the generation results are extremely bad
using a bi-directional pretrained model. The results are simply
repeating several high-frequency words. We then proceed to
train a UCM model without using the condition components.
We found that the generation results bias to dense captioning
styles. This is probably because the training data has much
more dense captions than COCO style captions. Finally, we
present the results of our UCM model. To further validate the
results, we calculate the average generated sentence length. A
model trained without condition flag generates sentences with
an average length of 4.8 words. Our proposed UCM model can
generate diverse image descriptions given different condition
flags. When given a condition flag dense caption, the model
generates sentences with an average length of 4.7 words. Given
a condition flag COCO style caption, our model can generate
long sentences with an average length of 10.2 words.
E. Comparison with other methods
We compare our best UCM model (Self-Training Step1)
on VQAv2, GQA, NLVR2 and COCO Caption with other
methods. The results are summarised in Table II. We compare
our model with similar-sized models (based on BERT base).
Our model could achieve competitive or better performance
among all models given fewer training images. The VLP
model achieves margin advantages compared with our method
in COCO Caption (CIDEr Optimization) based on 3 evaluation metrics. The reason is that the model is only pretrained with COCO style captions and VQA datasets, and no other noisy pseudo captions are included in the pretraining. When the model is used on understanding tasks like VQA, our method prevails with large margins. It proves that our model generalizes better on both generation tasks and understanding tasks.
F. Visualization
In this section, we give visualizations of the attention map
of special tokens and show how the data is generated.
1) Understanding the condition token: We visualize the
attention map at condition tokens. As shown in Figure 5, we
plot the attention weight attending to [CND] position. We plot
Fig. 6. How the captions are generated given different visual masks (e.g. when some visual regions are masked out). For each generated dense caption, the
masked feature region is plotted. Visualization results show that by masking some parts of the image regions, the UCM model could successfully focus on
different image areas.
both the bi-directional branch and the one-directional branch
and both a dense captioning style caption and a question. The
darker the color, the higher the attention weight. Based on the
weights, we could have the following conclusions:
[CND] mainly affects the one-directional branch. We
compare the bi-directional weights and one-directional weights
(top vs bottom). Although the [CND] flag is used for both
branches, the one-directional branch learns to assign higher
weights. One reason is that the generation process is more
sensitive to the condition flag than the language understanding
process. As illustrated in previous sections, our method can
generate sentences of different lengths given different condi-
tion flags. For the understanding tasks, intuitively the model
should focus more on the sentences as a whole.
[CND] affects deeper layers more. Compared with shal-
lower layers (language attention layer 9), the deeper layers
tend to assign higher weights to [CND] position. This is be-
cause the deeper layers are more directly related to producing
results, thus they rely more on the [CND] flag to control the
generation style.
[CND] has
similar effects on questions and non-
questions. We compare the visualization of captions and ques-
tions (left vs right). No obvious difference can be observed.
This implies that the condition flags work in similar ways for
caption style sentences and question style sentences.
2) Visualization of generation process:
In Figure 6, we
show how different visual masks affect the language gen-
eration. We could have a more obvious observation when
generating dense captions. Therefore, for each generated dense
caption, we show which feature region is masked. Visualiza-
tion results show that by masking some parts of the image
regions, the UCM model could successfully focus on different
image areas and finally produce diverse dense captioning
results. For example, in the first image, when nothing is masked out, the model focuses on the window. When part of the window is masked out, the model will focus on the bath mat and the tile. For COCO style captions, our model also benefits from applying visual masks. Although COCO style captions summarize the whole image, applying visual masks helps the model to look at different areas.
V. CONCLUSION AND FUTURE WORKS
The requirement of paired training data restricts the scale
of VL-BERT pretraining. We propose a self-training approach
that allows training VL-BERTs from unlabeled image data. First, we propose UCM – a vision language BERT that can perform conditional generation directly. Given different condition flags, the unified conditional model can generate dense captions, captions, and even questions. Then we introduce a set
of self-training methods for vision language BERT pretraining,
including how to generate diverse image descriptions and the
self-training pipeline. We also visualize the generation process
and the effectiveness of the condition flag.
In terms of performance, by using the proposed self-training approach and only 300k extra unlabeled images, we achieve competitive performance among models of similar size trained with 3 million extra image data.
Future Works. The use of the conditional model is not
restricted to self-training. Future works can be done by explor-
ing more use-cases of the proposed UCM. For example, given
an image, our method could be used to generate children's stories, advertisements, and copyright documents with
a single pretrained model.
Further extension of training scales could also be explored.
Our proposed methods enable training vision language BERTs
with unlimited data. One may perform a larger scale of pre-
training with more data collected from the Internet.
ACKNOWLEDGMENTS
This research is supported by the National Research Foun-
dation, Singapore under its AI Singapore Programme (AISG
Award No: AISG-RP-2018-003), the MOE AcRF Tier-1 re-
search grants: RG95/20, and the OPPO research grant.
Fengmao Lv’s participation was supported by the National
Natural Science Foundation of China (No. 62106204), the
Sichuan Natural Science Foundation (No. 2022NSFSC0911,
2022YFG0031), and the Fundamental Research Funds for
Central Universities of China (No. 2682022CX068).
REFERENCES
[1] J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova, “BERT: Pre-training of deep
bidirectional transformers for language understanding,” in Proceedings
of NAACL-HLT, 2019, pp. 4171–4186.
[2] J. Lu, D. Batra, D. Parikh, and S. Lee, “Vilbert: Pretraining task-
|
synthetic_cpt | 2 | How_to_Learn_a_New_Language_An_Efficient_Solution_for_Self-Supervised_Learning_Models_Unseen_Languages_Adaption_in_Low-Resource_Scenario.pdf | Teaching Embodied Reinforcement Learning Agents:
Informativeness and Diversity of Language Use
Jiajun Xi* Yinong He∗
Jianing Yang Yinpei Dai
Joyce Chai
University of Michigan
{jiajunxi, heyinong, jianingy, daiyp, chaijy}@umich.edu
arXiv:2410.24218v1 [cs.CL] 31 Oct 2024
Abstract
In real-world scenarios, it is desirable for em-
bodied agents to have the ability to leverage
human language to gain explicit or implicit
knowledge for learning tasks. Despite recent
progress, most previous approaches adopt sim-
ple low-level instructions as language inputs,
which may not reflect natural human commu-
nication. It’s not clear how to incorporate rich
language use to facilitate task learning. To
address this question, this paper studies dif-
ferent types of language inputs in facilitating
reinforcement learning (RL) embodied agents.
More specifically, we examine how different
levels of language informativeness (i.e., feed-
back on past behaviors and future guidance)
and diversity (i.e., variation of language ex-
pressions) impact agent learning and inference.
Our empirical results based on four RL bench-
marks demonstrate that agents trained with di-
verse and informative language feedback can
achieve enhanced generalization and fast adap-
tation to new tasks. These findings highlight
the pivotal role of language use in teaching em-
bodied agents new tasks in an open world. 1
1 Introduction
Developing embodied agents that can understand
and communicate with humans in natural language
to learn and accomplish tasks is a long-standing
goal in artificial intelligence. In recent years, the
integration of human language and reinforcement
learning (RL) has seen significant advancements.
Unlike traditional RL methods that typically rely
on numerical reward signals to guide agent learn-
ing, recent works (Cheng et al., 2023; Lin et al.,
2023) explore using language as an intuitive and
useful signal to shape an agent’s behaviors. For ex-
ample, when the agent is making mistakes during
the task completion, providing language feedback
*Equal contribution.
1Source code available at https://github.com/sled-group/Teachable_RL.
can largely improve the instantaneous performance
thus enhancing the overall agent learning efficiency
and effectiveness (McCallum et al., 2023).
However, existing methods generally employ
simple instructions, such as "turn left" and "put
the apple to the table" to teach/control an agent
(Hanjie et al., 2021; Zhang and Chai, 2021; Lin
et al., 2023; McCallum et al., 2023; Shridhar et al.,
2021). While useful, these instructions may not
fully reflect the flexibility of language use in task
learning and collaboration (Chai et al., 2018, 2019;
Zhang et al., 2022, 2023; Dai et al., 2024a). In
the real world, humans often express complex lan-
guage instructions that are more informative. For
instance, when a student makes a mistake, a teacher
may help them to retrospect on what went wrong
(i.e., hindsight instructions) and then guide them
on what should be done next to finish the goal (i.e.,
foresight instructions). In addition, humans are
likely to engage in conversations with more diverse
language patterns, describing the same goal with
different expressions and styles. Therefore, we ask
the following question:
How do the informativeness and diversity of
natural language used during RL training affect
an agent’s ability to learn tasks?
We take a popular offline RL model - decision
transformer (DT) (Chen et al., 2021) - as a back-
bone architecture and conduct a comprehensive
study to examine how informativeness and diver-
sity of language use may impact agents’ learning
ability. To control informativeness, we leverage
expert agents’ actions as a reference to generate
hindsight reflection and foresight guidance, using
hand-crafted language templates. To increase di-
versity, we construct a GPT-augmented language
pool, where GPT-4 (OpenAI, 2024) is used to aug-
ment hand-crafted templates into much more nat-
ural and richer expressions. We further extended
DT into a multi-modal Language-Teachable DT
(LTDT) and demonstrated that LTDT agents that
are trained with diverse and informative language
significantly outperform the counterpart agents that
are trained either with simple language alone or
with no language inputs. Notably, we found that
even with just one language template, combining
hindsight and foresight feedback together improves
agents’ performance by an average of 9.86 points
(from 37.95% to 47.81%) on four popular offline
RL benchmarks compared to agents trained without
language. When more language diversity is incor-
porated into training, an additional 10.14 points
(from 47.81% to 57.95%) are obtained.
The contributions of this paper can be summa-
rized as follows:
• We investigate in detail, for the first time,
how language informativeness and diversity
affect offline RL agents in task learning, and
demonstrate their important roles in improv-
ing agents’ performance, adaptability, and ro-
bustness.
• We show that training agents with informa-
tive and diverse instructions can intrinsically
improve the agent’s understanding of the task
and lead to better performance.
• We propose a simple framework to generate
both hindsight and foresight language feed-
back and enrich language variation without
any human annotators.
2 Related Work
Offline Reinforcement Learning Offline rein-
forcement learning (RL) has become a focal point
of research due to its ability to utilize pre-existing
datasets for training agents without real-time in-
teractions. Several algorithms address the unique
challenges of offline RL, such as mitigating extrap-
olation errors and ensuring robust policy evalua-
tion. A survey by Prudencio et al. (2023) outlines
the field’s taxonomy and open problems. Bench-
marking efforts by Fujimoto et al. (2019) assess
various batch deep RL algorithms. Key approaches
include Conservative Q-Learning (CQL) (Kumar
et al., 2020), Implicit Q-Learning (IQL) (Kostrikov
et al., 2021), and the Decision Transformer (DT)
(Chen et al., 2021), which treats RL as a sequence
modeling problem (Janner et al., 2021). Recent
work also explores generalization across tasks (Lee
et al., 2022; Reed et al., 2022; Schubert et al., 2023),
the use of exploratory data (Yarats et al., 2022), and
integrating large language models (LLMs) (Mir-
chandani et al., 2023). Efficient online RL lever-
aging offline data is also a focus (Ball et al., 2023;
Modhe et al., 2023). Our research builds on the De-
cision Transformer (DT) by integrating language
feedback, creating the Language-Teachable Deci-
sion Transformer (LTDT). This novel approach in-
corporates rich, human-like language instructions,
improving agent learning through enhanced infor-
mativeness and diversity of language inputs.
Language in Reinforcement Learning The in-
tersection of natural language and RL offers new
ways to develop intuitive and effective learning
paradigms for embodied agents. Initial works uti-
lized language for feedback and task instructions
(She and Chai, 2017; Nguyen et al., 2017; Shrid-
har et al., 2020). Recent studies have explored
various methods for incorporating language feed-
back in RL, such as the LTC paradigm (Wang
et al., 2023), lifelong robot learning with human-
assisted language planners (Parakh et al., 2023),
and frameworks for rich information requests (Dai
et al., 2020; Tseng et al., 2021; Nguyen et al., 2022).
Language for corrections (Sharma et al., 2022; Liu
et al., 2023) and as reward signals (Xie et al., 2023;
Goyal et al., 2019; Yu et al., 2023) has been shown
to enhance agent performance. Vision-language
joint training approaches, like CLIP (Radford et al.,
2021), BLIP-2 (Li et al., 2023), and InstructBLIP
(Dai et al., 2023), demonstrate the potential of com-
bining visual and language modalities for RL tasks
(Ma et al., 2023; Nguyen et al., 2019; Khandel-
wal et al., 2022). Further, multimodal prompts for
robotic manipulation (Jiang et al., 2023; Fan et al.,
2022) and LLMs for planning in robotics (Ahn
et al., 2022; Huang et al., 2022; Singh et al., 2023;
Yao et al., 2022; Dai et al., 2024b) highlight the
evolving role of language in RL. Other works, like
(Mehta et al., 2023), focus on generating problem-
specific language feedback templates. In contrast,
our work focuses on the informativeness and diver-
sity of language instructions, two problem-agnostic
yet easy-to-implement properties. By using both
hindsight and foresight language templates and en-
hancing diversity through GPT-4, we demonstrate
notable improvements in agent performance and
generalizability, showcasing the impact of complex
language inputs in offline RL training.
3 Problem Setting
In this section, we outline the problem setting by defining the offline reinforcement learning problem (Sec. 3.1), and a taxonomy of language feedback (Sec. 3.2). Then we describe the instantiation of such definitions in four different RL environments we used for experiments (Sec. 3.3).
Figure 1: An overview of four environments used for experiments. It shows tasks to be learned in each environment; examples of hindsight (marked H) and foresight (F) language feedback (next to the gear icon are hand-crafted templates and next to the GPT icon are GPT-4 generated feedback); as well as low-level actions in each environment.
3.1 Offline Reinforcement Learning
To support a systematic study of language use, we
formulate the problem in the offline reinforcement
learning (RL) setting. At each time step t, the
agent receives an observation ot, a reward rt, and
a language feedback lt for its previous action. The
agent then executes an action at according to a
policy π, which is conditioned on the entire in-
teraction history ht up to time t, i.e., π(at | ht),
where ht = {o≤t, r≤t, l≤t, a<t} represents the his-
tory of observations, rewards, language feedback,
and past actions up to time t. The agent’s goal is
to complete the task by maximizing the expected
discounted sum of rewards E[Σ_{t=1}^T γ^t r_t], where T
is the episode length, and γ is the discount fac-
tor. In offline RL, the training trajectories are pre-
collected with an expert agent (a well-trained agent
or a planner-based expert with privileged informa-
tion). The trained agents are evaluated interactively
with the environment.
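For illustration, this interaction protocol can be sketched in a few lines of Python; the env and policy objects below are hypothetical placeholders rather than any benchmark's actual API.

# Minimal sketch of the history-conditioned interaction loop described above.
def rollout(env, policy, max_steps=100, gamma=0.99):
    history = {"obs": [], "rewards": [], "language": [], "actions": []}
    obs, reward, feedback = env.reset()           # o_1, r_1, l_1 (hypothetical interface)
    discounted_return, discount = 0.0, 1.0
    for t in range(max_steps):
        history["obs"].append(obs)
        history["rewards"].append(reward)
        history["language"].append(feedback)
        action = policy(history)                  # a_t sampled from pi(a_t | h_t)
        history["actions"].append(action)
        obs, reward, feedback, done = env.step(action)
        discounted_return += discount * reward    # accumulates sum over t of gamma^t r_t
        discount *= gamma
        if done:
            break
    return discounted_return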
3.2 Language Feedback: Informativeness and Diversity
We aim to investigate how the informativeness and diversity of language instructions used during the training of an offline RL agent affect the agent's performance on seen tasks and adaptation to unseen tasks.
3.2.1 Informativeness
Informativeness refers to the richness of infor-
mation content in language feedback. Following
Cheng et al. (2023), we categorize feedback into
two types: hindsight and foresight. Hindsight
feedback involves comments or critiques about
the agent’s past actions. For example, "Excellent,
you are moving towards the goal!" encourages the
agent to continue its current path, while "You are
getting too close to the enemy." alerts the agent
about a mistake. Hindsight feedback reflects on
incorrect actions taken in previous steps, which can
guide agents toward success by narrowing down the
search space for correct actions (See Appendix E
for more analysis). Conversely, foresight feedback
guides potential future actions. For instance, "You
should go right to get closer to the target." directs
the agent towards the goal, and "You should go left
to avoid the enemy on the right." helps the agent
make strategic decisions to avoid threats. Language
feedback is considered most informative when it
includes both hindsight and foresight elements, and
least informative when neither is present.
3.2.2 Diversity
Diversity in language feedback refers to the vari-
ety of ways the same information is conveyed. If
feedback is provided using only one template, it
is less diverse. It becomes more diverse when the
same information is expressed in many different
ways. The goal is to expose the RL agent to vari-
ous expressions of the same feedback to enhance
its ability to generalize.
Algorithm 1 Offline Data Collection
1: Initialize D ← ∅
2: for each episode with seed_i do
3:   Initialize D_i ← ∅
4:   Initialize environment env with seed_i
5:   Append task description Td to D_i
6:   Initialize the non-expert agent with a sub-optimal policy π
7:   Initialize the expert agent with policy π∗
8:   for each time step do
9:     a_t ← π(h_t)
10:    a∗_t ← π∗(h_t)
11:    r_t, s_t, l^hind_t, l^fore_t ← env(a_t, a∗_t | h_t)
12:    if use GPT-augmented pool then
13:      l^hind_t ← GPT-augmented(l^hind_t)
14:      l^fore_t ← GPT-augmented(l^fore_t)
15:    end if
16:    l_t ← l^hind_t + l^fore_t (if H + F); l^hind_t (if only H); l^fore_t (if only F); <empty> (if No Lang)
17:    Append (r_t, s_t, a_t, l_t) to D_i
18:  end for
19:  Aggregate datasets D ← D ∪ D_i
20: end for
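For readers who prefer code, a compact Python rendering of Algorithm 1 is sketched below; the environment wrapper, expert planner, perturbed policy, and GPT-augmentation pool are hypothetical names used only for illustration.

import random

# Illustrative sketch of Algorithm 1; all objects below are hypothetical placeholders.
def collect_dataset(env, perturbed_policy, expert, gpt_pool, seeds,
                    use_hindsight=True, use_foresight=True, use_gpt_pool=True):
    dataset = []
    for seed in seeds:
        episode = {"task": env.reset(seed=seed), "steps": []}
        history, done = [], False
        while not done:
            action = perturbed_policy(history)          # sub-optimal agent action a_t
            expert_action = expert(history)             # reference expert action a*_t
            reward, state, hind, fore, done = env.step(action, expert_action)
            if use_gpt_pool:                            # diversify the hand-crafted templates
                hind = random.choice(gpt_pool.get(hind, [hind]))
                fore = random.choice(gpt_pool.get(fore, [fore]))
            feedback = ""
            if use_hindsight:
                feedback += hind
            if use_foresight:
                feedback += (" " if feedback else "") + fore
            episode["steps"].append((reward, state, action, feedback))
            history.append((state, action, feedback))
        dataset.append(episode)
    return dataset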
3.3 Environments
As shown in Figure 1, we conduct experiments
across four environments—HomeGrid, ALFWorld,
Messenger, and MetaWorld—each featuring dis-
crete action spaces, with hand-crafted hindsight
and foresight language instructions. More informa-
tion and examples of languages for each environ-
ment can be found in Appendix A.
HomeGrid (Lin et al., 2023) is a multitask grid
world designed to evaluate how well agents can
understand and use various types of language to
complete tasks. It includes five task types (FIND,
GET, CLEAN UP, REARRANGE, OPEN), involving
interaction with objects and trash bins with a total
of 38 tasks. The agent receives a reward of 1 when
the task is completed and receives a reward of 0.5
if a subgoal is completed.
ALFWorld (Shridhar et al., 2021) is a text-game
environment that aligns with the embodied AL-
FRED benchmark (Shridhar et al., 2020) and pro-
vides simulation for household tasks. It includes six
types of tasks which require the agent to navigate
and interact with household objects by following
language instructions. The agent gets a reward
of 1 when the task is completed. We adopt the
hindsight and foresight language templates from
LLF-ALFWorld introduced in (Cheng et al., 2023),
which adds an extra language wrapper to the origi-
nal ALFWorld environment.
Messenger (Hanjie et al., 2021) is a grid world
with several entities. The agent’s task is to retrieve
a message from one entity and deliver it to another
goal entity, while avoiding enemies. At the start of
each episode, the agent is provided with a manual
describing the randomized roles of the entities and
their movement dynamics. The agent receives a
reward of 1 when the task is completed.
MetaWorld (Yu et al., 2019) is a benchmark that
consists of a variety of manipulation tasks per-
formed by a simulated Sawyer robot arm. It in-
cludes 50 types of common robot manipulation
tasks. We select two of them in our experiments:
ASSEMBLY and HAMMER. The agent receives a
reward of 1 when completing a task.
4 Data Generation
To train an agent that can understand language feed-
back in an offline reinforcement learning manner,
we construct an offline dataset D consisting of two
parts:
• Agent trajectory consisting of task description
T d and the tuples ( ˆRt, st, at), where ˆRt rep-
resents the reward, st is the state, and at is the
action.
• language feedback lt conveying hindsight and
foresight information at each time step.
Algorithm 1 outlines the data generation process,
and we explain the algorithm in detail in the fol-
lowing sections.
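One possible in-memory layout for such a dataset is sketched below; the field names are illustrative assumptions rather than the released data format.

from dataclasses import dataclass, field
from typing import Any, List

# Illustrative container for one offline trajectory; field names are assumptions.
@dataclass
class Trajectory:
    task_description: str                                  # Td
    rewards: List[float] = field(default_factory=list)     # R_t values
    states: List[Any] = field(default_factory=list)        # s_t
    actions: List[Any] = field(default_factory=list)       # a_t
    feedback: List[str] = field(default_factory=list)      # l_t (hindsight/foresight text)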
4.1 Trajectory Generation
To improve model generalization and avoid overfit-
ting, it is essential to train on diverse, sub-optimal
trajectories rather than relying solely on optimal
ones generated by an expert agent (Kumar et al.,
2020; Chen et al., 2021). We achieve this by in-
troducing perturbations to an expert planner (see
Appendix B), allowing the non-expert agent to
produce sub-optimal trajectories. This promotes
broader exploration of the state-action space, en-
hancing the model’s ability to generalize to unseen
scenarios (Kumar et al., 2020; Chen et al., 2021).
During data collection, we begin by appending
the task description T d to the trajectory sequence
and initializing the environment with a fixed seed.
Figure 2: A demonstration of hindsight and foresight language feedback generation. In our framework, the agent π
executes the trajectory, while the expert agent π∗, with access to privileged ground truth knowledge, is used solely
to provide information for generating language feedback to π. At time step t, hindsight language is generated by
comparing the agent's action a_{t−1} with the expert agent's action a∗_{t−1}, whereas foresight language is generated by referring to the expert agent's action a∗_t to guide the agent on the next step. To increase the diversity of language
feedback, we construct a pool of language templates comprising GPT-augmented languages, and sample candidate
instructions as online language feedback.
A non-expert agent, using a sub-optimal policy π
derived from the expert agent’s optimal policy π∗,
interacts with the environment. At each time step,
the environment state ot, reward ˆRt, and the non-
expert agent’s action at are recorded to form the tra-
jectory sequence: (T d, ˆR1, s1, a1, . . . , ˆRt, st, at).
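One simple way to realize such perturbations is epsilon-style corruption of the expert planner's action; the sketch below is an assumed illustration, not the exact scheme detailed in Appendix B.

import random

# Hypothetical sketch: derive a sub-optimal policy by randomly corrupting expert actions.
def make_perturbed_policy(expert_policy, action_space, epsilon=0.2):
    def policy(history):
        if random.random() < epsilon:
            return random.choice(action_space)   # occasional random (sub-optimal) action
        return expert_policy(history)            # otherwise follow the expert planner
    return policy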
4.2 Language Feedback Generation
For the second part of the dataset D, we collect
the language feedback along the non-expert agent’s
trajectory. As shown in Figure 2, we follow a struc-
tured process to generate diverse and informative
language feedback. For the state at time step t, the
expert agent π∗ proposes an expert action a∗_t (e.g.
"down") at this state, which is further transformed
into a foresight template l^fore_t (e.g. "Turn back.")
by the environment simulator, guiding the agent on
what should be done at this state. After the non-
expert agent π steps the environment (into time step
t + 1) with its generated action at (e.g. "down"),
the environment simulator generates a hindsight
template l^hind_{t+1} (e.g. "You are doing well so far.")
based on the comparison between agent action at
and expert agent action a∗_t at the last time step t,
reflecting on whether the agent is on the right track.
For each foresight/hindsight template, we use
GPT-4 to augment it into more natural and varied
expressions. (e.g. We can augment "You are doing
well so far." into "Up until now, you’re doing won-
derfully." or "So far, so good, you’re doing great!".)
We compile all the rewritten sentences into a set
called the GPT-augmented language pool. At each
step of the non-expert agent, we randomly select
one candidate from the pool as the language instruc-
tion. This process ensures the feedback provided
to the agent has a high level of diversity and enriches
the learning experience.
The level of informativeness and diversity of
the language feedback depends on the inclusion of
hindsight and foresight (e.g. concatenated when
both are required) and the use of GPT-augmented
language pool. The language feedback at each time
step will finally get concatenated with the trajectory
sequence into (T d, ˆR1, s1, a1, l1, . . . ˆRt, st, at, lt).
Algorithm 1 summarizes the data collection pro-
cess.
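The per-step feedback construction can be summarized with a few helper functions; the template wording mirrors the examples above, while the function names and the gpt_pool mapping are assumptions for illustration.

import random

# Illustrative feedback generation for one step; helper names and pool are assumptions.
def hindsight(prev_action, prev_expert_action):
    if prev_action == prev_expert_action:
        return "You are doing well so far."
    return "You have gone to the wrong direction."

def foresight(expert_action):
    return f"Take {expert_action} in the next step."

def diversify(template, gpt_pool):
    # gpt_pool maps each hand-crafted template to GPT-4 paraphrases collected offline.
    return random.choice(gpt_pool.get(template, [template]))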
5 Model
Architecture. We extend the Decision Trans-
former (DT) architecture (Chen et al., 2021) to
create the Language-Teachable Decision Trans-
former (LTDT) by augmenting the input to in-
clude language feedback. This architecture is a
decoder-only transformer, similar to GPT-2 (Rad-
unseen tasks after fine-tuning with few-shot sam-
ples.
6.1 Experimental Setup
Setup for RQ 1. We compare performance on
seen tasks between agents trained with varying lev-
els of language informativeness and diversity: 1)
the No Language agent is trained without any lan-
guage instructions; 2) the Template Foresight
agent is trained with hand-crafted foresight lan-
guage templates; 3) the Template Hindsight
agent is trained with hand-crafted hindsight lan-
guage templates; 4) the Template Hindsight +
Foresight agent is trained with hand-crafted fore-
sight and hindsight language templates; and 5) the
GPT-augmented Hindsight + Foresight agent
is trained with hindsight and foresight languages
from the GPT-augmented language pool. We train
on 100, 1,000, 20,000, and 10,000 trajectories
for HomeGrid, ALFWorld, Messenger, and Meta-
World environments, respectively. Evaluation is
performed over 5 runs, with 100 random seeds for
each run.
Setup for RQ 2. We pre-train different agents
on seen tasks and then compare adaptability
(how well an agent performs after few-shot learn-
ing) on unseen tasks: 1) the No Language
pre-trained agent is pre-trained without any
language instructions; 2) the GPT-augmented
hindsight pre-trained agent is pre-trained with
hindsight language from the GPT-augmented lan-
guage pool; 3) the GPT-augmented foresight
pre-trained agent is pre-trained with foresight
language from the GPT-augmented language pool;
4) the GPT-augmented hindsight + foresight
pre-trained agent is pre-trained with both hind-
sight and foresight
language from the GPT-
augmented language pool. During the few-shot
adaptation stage, we choose to fine-tune the pre-
trained agents with both hindsight + foresight lan-
guage from the GPT-augmented language pool for
all settings, since this mimics a real-world few-shot
learning scenario, where humans likely provide di-
verse feedback, including both hindsight and fore-
sight, to guide the agent in new tasks. We pretrain
on 6,432, 1,000, 20,000, and 10,000 trajectories
for HomeGrid, ALFWorld, Messenger, and Meta-
World, respectively. For all environments, we adapt
on 5, 10, and 20 trajectories to 1 new task. Evalua-
tion is performed over 5 runs, with 100 seeds per
run.
Further details on task setup of RQ 1 and RQ
Figure 3: Language-Teachable Decision Transformer.
ford et al., 2019), and models a trajectory sequence
(T d, ˆR1, s1, a1, l1, . . . , ˆRt, st, at, lt), with the lan-
guage feedback input appended at each step and
a task description (TD) input prefixed at the be-
ginning of the sequence. Like the original DT,
the embeddings of these inputs are passed through
the Causal Transformer, which encodes positional
information to maintain sequence order. The trans-
former’s output is used to predict the next action
in the sequence, conditioned on the state, return-
to-go, action, and language feedback in the last K
time steps, with the task description as the prefix
(4K + 1 tokens in total), as shown in Figure 3.
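A sketch of this (4K + 1)-token input layout is given below; the module names, hidden size, and embedding choices are assumptions and do not reproduce the released implementation.

import torch
import torch.nn as nn

# Sketch of the (4K + 1)-token input layout; dimensions and module names are assumptions.
class LTDTInputs(nn.Module):
    def __init__(self, state_dim, act_dim, lang_dim, hidden=128):
        super().__init__()
        self.embed_td = nn.Linear(lang_dim, hidden)     # task description prefix
        self.embed_rtg = nn.Linear(1, hidden)           # return-to-go
        self.embed_state = nn.Linear(state_dim, hidden)
        self.embed_action = nn.Linear(act_dim, hidden)
        self.embed_lang = nn.Linear(lang_dim, hidden)   # frozen sentence embeddings of l_t

    def forward(self, td, rtg, states, actions, lang):
        # td: (B, lang_dim); rtg: (B, K, 1); states/actions/lang: (B, K, ·)
        B, K, _ = states.shape
        tokens = torch.stack(
            [self.embed_rtg(rtg), self.embed_state(states),
             self.embed_action(actions), self.embed_lang(lang)], dim=2
        ).reshape(B, 4 * K, -1)                          # interleave (R, s, a, l) per step
        prefix = self.embed_td(td).unsqueeze(1)          # one task-description token
        return torch.cat([prefix, tokens], dim=1)        # (B, 4K + 1, hidden)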
Training. Similar to the original DT training,
given an offline dataset of trajectory sequences, we
sample a sub-sequence of length K (with 4K +
1 tokens), and the prediction head is trained to
predict discrete actions with the cross-entropy loss
or continuous actions with the MSE loss. More
training details can be found in Appendix G.
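Under the same assumptions, a single training step on a sampled length-K sub-sequence with discrete actions reduces to teacher-forced action prediction:

import torch.nn.functional as F

# Sketch: one training step on a sampled length-K sub-sequence (discrete actions);
# `model` is a hypothetical full LTDT returning per-step action logits.
def training_step(model, optimizer, batch):
    logits = model(batch["td"], batch["rtg"], batch["states"],
                   batch["actions"], batch["lang"])      # (B, K, num_actions)
    loss = F.cross_entropy(logits.reshape(-1, logits.size(-1)),
                           batch["targets"].reshape(-1))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()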
Language Embeddings. We use language em-
beddings from a frozen Sentence-BERT model
(Reimers and Gurevych, 2019) in all environments.
We find Sentence-BERT more sensitive to language
feedback changes, capturing nuanced semantic dif-
ferences better.
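Feedback strings can be embedded once with a frozen encoder via the sentence-transformers package; the specific checkpoint below is an assumption, as no checkpoint is named here.

from sentence_transformers import SentenceTransformer

# Frozen sentence encoder for language feedback; checkpoint name is an assumption.
encoder = SentenceTransformer("all-MiniLM-L6-v2")
feedback = ["You have gone to the wrong direction.",
            "Pedal to open the recycling bin."]
embeddings = encoder.encode(feedback)   # one vector per feedback sentence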
6 Experiment
In this section, we design experiments to answer
the following two research questions (RQs):
• RQ 1: How do the informativeness and diver-
sity of language affect agents’ performance on
seen tasks?
• RQ 2: How does the informativeness of the
language feedback affect pre-trained agents’
adaptability on unseen tasks?
For RQ1, we control agents trained with hind-
sight information, foresight information, or both
to investigate the function of informativeness. We
compare agents trained with language from both
hand-crafted templates and the GPT-augmented
language pool to examine the function of language
diversity.
For RQ2, agents are taught in languages from
the GPT-augmented language pool and tested on
Figure 4: Comparison of agent performance in four environments (averaged across 100 seeds in each environment)
under varying levels of language feedback informativeness and diversity. Agents trained with more informative lan-
guage feedback exhibit progressively higher performance. Furthermore, given the same informativeness (Hindsight
+ Foresight), increasing diversity with the GPT-augmented language pool leads to the highest performance.
Figure 5: Comparison of agent performance on unseen tasks in four environments (averaged across 100 seeds
in each environment) under varying language informativeness in agent pre-training. Agent trained with more
informative language adapts to new tasks faster and better.
2 can be found in Appendix C. Additional results
when training and adapting on same types of lan-
guage can be found in Appendix D.
Evaluation. At inference time, an agent is given a
short task description before it starts to act, and lan-
guage feedback along its execution. The language
feedback should ideally come from real humans,
who provide feedback varying in informativeness,
diversity, and frequency (how often feedback is pro-
vided). However, recruiting and moderating real
humans to generate online feedback is expensive
and difficult to scale. Therefore, we employ GPT-4
to provide online language feedback to mimic real
humans. Specifically, at each time step, we provide
all necessary context information to GPT-4 in its
prompt and let it decide “whether to speak” (fre-
quency), “what to speak” (informativeness), and
“how to speak” (diversity). The context informa-
tion, in this case, consists of the ground-truth envi-
ronment states, action/state history, and template-
based hindsight and foresight short text description
generated by comparing the actions of the expert
agent and the trained agent. GPT-4 then has the
freedom to rephrase, combine, shorten, and discard
such context information to utter diverse, coherent,
and natural language feedback, mimicking a real
human. See Appendix H for an example of such
GPT-generated online feedback.
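A sketch of how the per-step context could be packaged for this GPT-4 feedback provider is shown below; the prompt wording and helper names are illustrative only, and the actual API call is omitted.

# Illustrative prompt assembly for the GPT-4 online feedback provider.
def build_feedback_prompt(task, state_history, action_history,
                          hindsight_template, foresight_template):
    return (
        f"Task: {task}\n"
        f"Recent states: {state_history[-3:]}\n"
        f"Recent actions: {action_history[-3:]}\n"
        f"Hindsight summary: {hindsight_template}\n"
        f"Foresight summary: {foresight_template}\n"
        "Decide whether to speak at this step. If you speak, rephrase, combine, "
        "shorten, or discard the summaries into one natural feedback sentence."
    )
# The resulting string is sent to GPT-4 at every step; an empty reply means staying silent.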
Metric. We use the reward value as our main met-
ric. Agents receive a reward of 1 upon task com-
pletion for all environments and receive additional
rewards for achieving specific sub-goals for the
HomeGrid and ALFWorld environments.
6.2 Experimental Results
Results for RQ 1. As we can see in Figure 4,
agents trained with both diverse and informative
language feedback (GPT-augmented Hindsight
+ Foresight) consistently achieve the highest per-
formance across all environments. The varied and
paraphrased instructions generated from GPT pro-
vide a richer set of linguistic inputs, enabling the
agents to develop a more robust language under-
standing for task execution during evaluation.
When examining the impact of informativeness,
we observe that agents trained with both hindsight
and foresight information (Template Hindsight +
Foresight) consistently achieve higher performance
across all environments compared to those trained
with only hindsight or foresight information. This
indicates that integrating both types of feedback
enhances the informativeness of the language, en-
abling the agents to develop a more comprehen-
sive understanding and leading to better decision-
making and overall performance. The only excep-
tion is in the Messenger environment, where the
no-language agent exhibits a surprisingly strong
performance. However, upon further investigation
of this exception, we find that if the hindsight-
only or foresight-only feedback is from the GPT-
augmented pool, the agent can still outperform the
No Language agent (refer to Appendix F).
In terms of diversity, the results show that agents
trained with diverse language feedback, as indi-
0.10.20.30.40.5RewardHomeGrid0.20.30.40.50.6ALFWorld0.20.40.60.8Messenger0.40.50.6MetaworldNo LanguageTemplate Hindsight + ForesightTemplate HindsightGPT-augmented Hindsight + ForesightTemplate Foresight5 shot10 shot20 shot0.00.20.40.6RewardHomeGrid5 shot10 shot20 shot0.00.20.4ALFWorld5 shot10 shot20 shot0.00.20.40.6Messenger5 shot10 shot20 shot0.00.20.40.6MetaWorldNo Language PretrainedGPT-augmented Hindsight PretrainedGPT-augmented Foresight PretrainedGPT-augmented Hindsight + Foresight PretrainedFigure 6: Efficiency gain vs. task difficulty. We fit the scatter plots with a second-degree polynomial to visualize the
overall trend. As task difficulty increases, the general trend of the efficiency gain is to rise initially and then decline,
suggesting: (1) for tasks that are too easy or too hard, language feedback does not improve efficiency; (2) language
feedback is most helpful in increasing efficiency for moderate tasks.
different tasks? To answer this question, we de-
fine efficiency gain as the difference in efficiency
between an agent trained with informative and di-
verse GPT languages, and an agent trained without
any languages. Efficiency is measured by a path-
weighted reward, as introduced in ALFRED (Shrid-
har et al., 2020). This reward, rp, is calculated as
max(L,L∗) , where r is the total reward, L
rp = r ×
is the agent’s trajectory length, and L∗ is the ex-
pert agent’s trajectory length. Higher rp indicates
successful task completion with fewer steps.
L∗
We define task difficulty for each configuration
by calculating the average success rates of agents
trained without language feedback, ranking these
from lowest to highest. Configurations with lower
success rates are considered more difficult, indi-
cating greater challenges for agents learning from
these configurations without language assistance.
As shown in Figure 6, the efficiency gain gener-
ally rises with increasing learning difficulty, then
declines. This suggests that: (1) for tasks that are
too easy or too hard, language feedback does not
improve efficiency; (2) language feedback is most
helpful in increasing efficiency for moderate tasks.
Performance vs. Language Frequency. In the
main experiments, we utilize an online GPT model
to determine whether to provide language feedback
at each time step. However, it is important to ex-
plore how varying the frequency of language feed-
back influences agent performance. To investigate
this, we control the feedback frequency by sam-
pling according to pre-defined probabilities (e.g.,
20%, 40%). The language feedback is extracted
from the GPT-augmented language pool; if no lan-
guage is sampled, an empty string is provided to
the agent. The evaluation is conducted on agents
trained with both hindsight and foresight feedback
derived from the GPT-augmented language pool.
As illustrated in Figure 7, agents’ performance im-
Figure 7: Performance vs. language frequency. Agents
perform better with more frequent language feedback
across four environments.
cated by the ‘GPT-augmented’ bars, consistently
outperform those trained with less varied language
input. The rich set of augmented instructions gen-
erated by GPT helps agents develop a more flexible
and nuanced understanding of task instructions,
which translates to better performance during eval-
uation. This highlights the critical role of linguistic
diversity in enhancing the robustness and adapt-
ability of the agents’ language comprehension, ul-
timately leading to improved task execution across
different environments.
Results for RQ 2. The results in Figure 5 re-
veal that agents pre-trained with more informa-
tive language can adapt to unseen tasks faster and
better. “Adapting faster” is evident by the fact
that agents pre-trained with GPT-augmented Hind-
sight + Foresight language in 5 or 10 shots can
already achieve a similar performance 20-shot per-
formance of agents trained with less informative
language. “Adapting better” is evident by the fact
that, at a given number of shots available for adap-
tation, the agent trained with the most informative
language performs the best compared to its less
informatively-pretrained counterparts. These re-
sults indicate that agents pre-trained with more
informative language can adapt and generalize to
new tasks faster and better.
6.3 Ablation Study
Efficiency Gain vs. Task Difficulty. Can lan-
guage feedback help the agent to achieve more
0204060801001.000.750.500.250.000.250.500.751.00HomeGrid0204060801000.750.500.250.000.250.500.751.00ALFWorld0204060801000.60.40.20.00.20.40.60.8Messenger0204060801001.000.750.500.250.000.250.500.751.00MetaWorldTask DifficultyEfficiency GainEfficiency GainFitted Efficiency Gain Trend0%20%40%60%80%100%Language Frequency0.30.40.50.60.70.8RewardMessengerMetaworldHomeGridALFWorldFigure 8: We investigate two special evaluation settings:
(1) no language feedback is provided during evaluation
and (2) disturbed language feedback is given at every
step. Results show that agents trained with the GPT-
augmented language still outperform the no-language
agent (the black dotted line) in the disturbed setting, and
also achieve better performance in some environments
while no language is given.
proves steadily across all environments with more
frequent language feedback during evaluation. This
finding suggests that agents trained with informa-
tive and diverse language feedback can continually
absorb and leverage new information when addi-
tional feedback is provided, leading to enhanced
performance.
Performance under Corrupted Language. This
ablation aims to evaluate how agents perform when
provided with incorrect instructions. We assess
the performance of an agent trained with GPT-4-
augmented informative and diverse language under
two conditions: (1) Empty Feedback: the absence
of language feedback during testing, and (2) Dis-
turbed Feedback: the provision of disturbed lan-
guage at each step. The disturbed language consists
of redundant, irrelevant, or misleading informa-
tion (e.g., incorrect actions or objects) and is gen-
erated using GPT-augmented templates with dis-
rupted content. The results in Figure 8 reveal two
interesting findings: (1) When tested without any
language feedback, the agent trained with informa-
tive and diverse language performs comparably or
even exceeds the performance of the agent trained
without any language (represented by the black dot-
ted line). This indicates that the agent develops a
robust intrinsic understanding of the task, demon-
strating that it does not overly rely on language
feedback; (2) When exposed to disturbed feedback,
the agent trained with informative and diverse lan-
guage maintains performance levels comparable to
the no-language agent. This showcases the agent’s
ability to withstand misleading information, a criti-
cal trait for real-world applications where human
feedback may be unreliable.
7 Conclusion
In this paper, we investigate how the informative-
ness and diversity of language feedback affect
embodied agents. We introduce the Language-
Teachable Decision Transformer (LTDT), which
makes decisions based on human language feed-
back. To facilitate the training of LTDT agents,
we propose an easy-to-use pipeline for collecting
offline hindsight and foresight GPT templates. We
compare the performance of agents by varying the
informativeness and diversity of the training lan-
guages across four reinforcement learning environ-
ments and evaluate the agents’ ability to understand
real-world human language using online GPT as a
proxy. Our results demonstrate that training with
more informative and diverse language feedback
significantly enhances agent performance and en-
ables fast adaptation to unseen tasks.
Limitations
Our study has several limitations. First, the investi-
gated environments are primarily game-based and
do not test the agents’ ability to incorporate real-life
visual inputs. Future work will focus on evaluating
agents in more realistic and complex environments
that involve real-world visual inputs and challenges.
Second, while GPT language outputs can produce
diverse and contextually relevant language, they
may not fully cover all human language styles and
nuances. Specifically, GPT models might miss
certain idioms, dialects, or culturally specific ref-
erences that are prevalent in human communica-
tion. Future work will aim to incorporate a broader
spectrum of language variations and test agents in
scenarios involving more diverse linguistic inputs.
Ethical Impacts
Our study, conducted entirely within simulated en-
vironments, does not present immediate ethical
concerns. The teachable nature of our Language-
Teachable Decision Transformer (LTDT) method
is designed to make AI agents more controllable
and better aligned with human values, promoting
safer and more ethical interactions. By enhancing
agent performance through informative and diverse
language instructions, we aim to foster AI systems
that are more transparent and responsive to human
guidance, addressing ethical considerations in the
deployment of artificial intelligence. As AI be-
comes more mainstream, these considerations are
increasingly pertinent, and our work strives to ad-
vance AI technology responsibly.
Acknowledgements
This work was supported by NSF IIS-1949634 and
has benefited from the Microsoft Accelerate Foun-
0.10.20.30.40.5RewardHomeGrid0.20.30.40.50.6ALFWorld0.40.50.60.70.8Messenger0.20.30.40.50.60.7MetaworldEmpty feedbackDisturbed feedbackNormal feedbackBaseline trained without languagesdation Models Research (AFMR) grant program.
We would like to thank the anonymous reviewers
for their valuable comments and suggestions.
References
Michael Ahn, Anthony Brohan, Noah Brown, Yev-
gen Chebotar, Omar Cortes, Byron David, Chelsea
Finn, Chuyuan Fu, Keerthana Gopalakrishnan, Karol
Hausman, Alex Herzog, Daniel Ho, Jasmine Hsu,
Julian Ibarz, Brian Ichter, Alex Irpan, Eric Jang,
Rosario Jauregui Ruano, Kyle Jeffrey, Sally Jes-
month, Nikhil Joshi, Ryan Julian, Dmitry Kalash-
nikov, Yuheng Kuang, Kuang-Huei Lee, Sergey
Levine, Yao Lu, Linda Luu, Carolina Parada, Pe-
ter Pastor, Jornell Quiambao, Kanishka Rao, Jarek
Rettinghouse, Diego Reyes, Pierre Sermanet, Nico-
las Sievers, Clayton Tan, Alexander Toshev, Vincent
Vanhoucke, Fei Xia, Ted Xiao, Peng Xu, Sichun Xu,
Mengyuan Yan, and Andy Zeng. 2022. Do as i can
and not as i say: Grounding language in robotic af-
fordances. In arXiv preprint arXiv:2204.01691.
Philip J Ball, Laura Smith, Ilya Kostrikov, and Sergey
Levine. 2023. Efficient online reinforcement learning
with offline data. arXiv preprint arXiv:2302.02948.
Joyce Chai, Maya Cakmak, and Candy Sidner. 2019.
Teaching robots new tasks through natural interac-
tion. In K. A. Cluck and J. E. Laird, editors, Inter-
active Task Learning: Agents, Robots, and Humans
Acquiring New Tasks through Natural Interactions.
MIT Press.
Joyce Chai, Qiaozi Gao, Lanbo She, Shaohua Yang,
Sari Saba-Sadiya, and Guangyue Xu. 2018. Lan-
guage to action: Towards interactive task learning
with physical agents. In Proceedings of the Twenty-
Seventh International Joint Conference on Artificial
Intelligence, IJCAI 2018, July 13-19, 2018, Stock-
holm, Sweden, pages 2–9. ijcai.org.
Lili Chen, Kevin Lu, Aravind Rajeswaran, Kimin Lee,
Aditya Grover, Misha Laskin, Pieter Abbeel, Aravind
Srinivas, and Igor Mordatch. 2021. Decision trans-
former: Reinforcement learning via sequence mod-
eling. Advances in neural information processing
systems, 34:15084–15097.
Ching-An Cheng, Andrey Kolobov, Dipendra Misra,
Allen Nie, and Adith Swaminathan. 2023. Llf-bench:
Benchmark for interactive learning from language
feedback. arXiv preprint arXiv:2312.06853.
Wenliang Dai, Junnan Li, Dongxu Li, Anthony
Meng Huat Tiong, Junqi Zhao, Weisheng Wang,
and Steven Hoi.
Boyang Li, Pascale Fung,
2023. Instructblip: Towards general-purpose vision-
language models with instruction tuning. Preprint,
arXiv:2305.06500.
Yinpei Dai, Jayjun Lee, Nima Fazeli, and Joyce Chai.
2024a. Racer: Rich language-guided failure recov-
ery policies for imitation learning. arXiv preprint
arXiv:2409.14674.
Yinpei Dai, Hangyu Li, Chengguang Tang, Yongbin
Li, Jian Sun, and Xiaodan Zhu. 2020. Learning low-
resource end-to-end goal-oriented dialog for fast and
reliable system deployment. In Proceedings of the
58th Annual Meeting of the Association for Compu-
tational Linguistics, pages 609–618.
Yinpei Dai, Run Peng, Sikai Li, and Joyce Chai. 2024b.
Think, act, and ask: Open-world interactive person-
alized robot navigation. In 2024 IEEE International
Conference on Robotics and Automation (ICRA),
pages 3296–3303. IEEE.
Linxi Fan, Guanzhi Wang, Yunfan Jiang, Ajay Man-
dlekar, Yuncong Yang, Haoyi Zhu, Andrew Tang,
De-An Huang, Yuke Zhu, and Anima Anandkumar.
2022. Minedojo: Building open-ended embodied
agents with internet-scale knowledge. Advances in
Neural Information Processing Systems, 35:18343–
18362.
Scott Fujimoto,
Edoardo Conti, Mohammad
Ghavamzadeh, and Joelle Pineau. 2019. Benchmark-
ing batch deep reinforcement learning algorithms.
arXiv preprint arXiv:1910.01708.
Prasoon Goyal, Scott Niekum, and Raymond J Mooney.
2019. Using natural language for reward shap-
arXiv preprint
ing in reinforcement
arXiv:1903.02020.
learning.
Austin W. Hanjie, Victor Zhong,
and Karthik
Narasimhan. 2021. Grounding language to entities
and dynamics for generalization in reinforcement
learning. In Proceedings of the 38th International
Conference on Machine Learning, ICML 2021, 18-24
July 2021, Virtual Event, volume 139 of Proceedings
of Machine Learning Research, pages 4051–4062.
PMLR.
Peter E Hart, Nils J Nilsson, and Bertram Raphael. 1968.
A formal basis for the heuristic determination of min-
imum cost paths. IEEE transactions on Systems Sci-
ence and Cybernetics, 4(2):100–107.
Wanwei He, Yinpei Dai, Binyuan Hui, Min Yang, Zheng
Cao, Jianbo Dong, Fei Huang, Luo Si, and Yongbin
Li. 2022a. Space-2: Tree-structured semi-supervised
contrastive pre-training for task-oriented dialog un-
derstanding. arXiv preprint arXiv:2209.06638.
Wanwei He, Yinpei Dai, Min Yang, Jian Sun, Fei Huang,
Luo Si, and Yongbin Li. 2022b. Unified dialog model
pre-training for task-oriented dialog understanding
In Proceedings of the 45th Inter-
and generation.
national ACM SIGIR Conference on Research and
Development in Information Retrieval, pages 187–
200.
Wanwei He, Yinpei Dai, Yinhe Zheng, Yuchuan Wu,
Zheng Cao, Dermot Liu, Peng Jiang, Min Yang, Fei
Huang, Luo Si, et al. 2022c. Galaxy: A generative
pre-trained model for task-oriented dialog with semi-
supervised learning and explicit policy injection. In
Proceedings of the AAAI conference on artificial in-
telligence, volume 36, pages 10749–10757.
Wenlong Huang, Fei Xia, Ted Xiao, Harris Chan, Jacky
Liang, Pete Florence, Andy Zeng, Jonathan Tompson,
Igor Mordatch, Yevgen Chebotar, Pierre Sermanet,
Noah Brown, Tomas Jackson, Linda Luu, Sergey
Levine, Karol Hausman, and Brian Ichter. 2022. In-
ner monologue: Embodied reasoning through plan-
In arXiv preprint
ning with language models.
arXiv:2207.05608.
Michael Janner, Qiyang Li, and Sergey Levine. 2021.
Offline reinforcement learning as one big sequence
modeling problem. Advances in neural information
processing systems, 34:1273–1286.
Yunfan Jiang, Agrim Gupta, Zichen Zhang, Guanzhi
Wang, Yongqiang Dou, Yanjun Chen, Li Fei-Fei, An-
ima Anandkumar, Yuke Zhu, and Linxi Fan. 2023.
Vima: General robot manipulation with multimodal
prompts. In Fortieth International Conference on
Machine Learning.
Apoorv Khandelwal, Luca Weihs, Roozbeh Mottaghi,
and Aniruddha Kembhavi. 2022. Simple but effec-
tive: Clip embeddings for embodied ai. In Proceed-
ings of the IEEE/CVF Conference on Computer Vi-
sion and Pattern Recognition, pages 14829–14838.
Ilya Kostrikov, Ashvin Nair, and Sergey Levine.
2021. Offline reinforcement learning with implicit
q-learning. In International Conference on Learning
Representations.
Aviral Kumar, Aurick Zhou, George Tucker, and Sergey
Levine. 2020. Conservative q-learning for offline
reinforcement learning. Advances in Neural Informa-
tion Processing Systems, 33:1179–1191.
Kuang-Huei Lee, Ofir Nachum, Mengjiao Sherry Yang,
Lisa Lee, Daniel Freeman, Sergio Guadarrama, Ian
Fischer, Winnie Xu, Eric Jang, Henryk Michalewski,
et al. 2022. Multi-game decision transformers. Ad-
vances in Neural Information Processing Systems,
35:27921–27936.
Junnan Li, Dongxu Li, Silvio Savarese, and Steven Hoi.
2023. Blip-2: Bootstrapping language-image pre-
training with frozen image encoders and large lan-
guage models. Preprint, arXiv:2301.12597.
Jessy Lin, Yuqing Du, Olivia Watkins, Danijar Hafner,
Pieter Abbeel, Dan Klein, and Anca Dragan. 2023.
Learning to model the world with language. arXiv
preprint arXiv:2308.01399.
Zeyi Liu, Arpit Bahety, and Shuran Song. 2023.
Reflect: Summarizing robot experiences for fail-
arXiv preprint
ure explanation and correction.
arXiv:2306.15724.
Yecheng Jason Ma, William Liang, Vaidehi Som,
Vikash Kumar, Amy Zhang, Osbert Bastani, and Di-
nesh Jayaraman. 2023. Liv: Language-image repre-
sentations and rewards for robotic control. Preprint,
arXiv:2306.00958.
Sabrina McCallum, Max Taylor-Davies, Stefano Al-
brecht, and Alessandro Suglia. 2023. Is feedback all
you need? leveraging natural language feedback in
goal-conditioned rl. In NeurIPS 2023 Workshop on
Goal-Conditioned Reinforcement Learning.
Nikhil Mehta, Milagro Teruel, Patricio Figueroa Sanz,
Xin Deng, Ahmed Hassan Awadallah, and Julia Kisel-
eva. 2023. Improving grounded language understand-
ing in a collaborative environment by interacting
with agents through help feedback. arXiv preprint
arXiv:2304.10750.
Suvir Mirchandani, Fei Xia, Pete Florence, Danny
Driess, Montserrat Gonzalez Arenas, Kanishka Rao,
Dorsa Sadigh, Andy Zeng, et al. 2023. Large lan-
guage models as general pattern machines. In 7th
Annual Conference on Robot Learning.
Nirbhay Modhe, Qiaozi Gao, Ashwin Kalyan, Dhruv
Batra, Govind Thattai, and Gaurav Sukhatme. 2023.
Exploiting generalization in offline reinforcement
arXiv
learning via unseen state augmentations.
preprint arXiv:2308.03882.
Khanh Nguyen, Hal Daumé III, and Jordan Boyd-
Graber. 2017. Reinforcement learning for bandit
neural machine translation with simulated human
feedback. arXiv preprint arXiv:1707.07402.
Khanh Nguyen, Debadeepta Dey, Chris Brockett, and
Bill Dolan. 2019. Vision-based navigation with
language-based assistance via imitation learning
In Proceedings of the
with indirect intervention.
IEEE/CVF Conference on Computer Vision and Pat-
tern Recognition, pages 12527–12537.
Khanh X Nguyen, Yonatan Bisk, and Hal Daumé Iii.
2022. A framework for learning to request rich and
contextually useful information from humans. In In-
ternational Conference on Machine Learning, pages
16553–16568. PMLR.
OpenAI. 2024. Gpt-4 technical report. Preprint,
arXiv:2303.08774.
Meenal Parakh, Alisha Fong, Anthony Simeonov, Tao
Chen, Abhishek Gupta, and Pulkit Agrawal. 2023.
Lifelong robot learning with human assisted language
planners. In CoRL 2023 Workshop on Learning Ef-
fective Abstractions for Planning (LEAP).
Rafael Figueiredo Prudencio, Marcos ROA Maximo,
and Esther Luna Colombini. 2023. A survey on of-
fline reinforcement learning: Taxonomy, review, and
open problems. IEEE Transactions on Neural Net-
works and Learning Systems.
Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya
Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sas-
try, Amanda Askell, Pamela Mishkin, Jack Clark,
et al. 2021. Learning transferable visual models from
natural language supervision. In International confer-
ence on machine learning, pages 8748–8763. PMLR.
Alec Radford, Jeffrey Wu, Rewon Child, David Luan,
Dario Amodei, Ilya Sutskever, et al. 2019. Language
models are unsupervised multitask learners. OpenAI
Blog.
Scott Reed, Konrad Zolna, Emilio Parisotto, Ser-
gio Gómez Colmenarejo, Alexander Novikov,
Gabriel Barth-maron, Mai Giménez, Yury Sulsky,
Jackie Kay, Jost Tobias Springenberg, et al. 2022. A
generalist agent. Transactions on Machine Learning
Research.
Nils Reimers and Iryna Gurevych. 2019. Sentence-bert:
Sentence embeddings using siamese bert-networks.
In Proceedings of the 2019 Conference on Empirical
Methods in Natural Language Processing. Associa-
tion for Computational Linguistics.
Ingmar Schubert, Jingwei Zhang, Jake Bruce, Sarah
Bechtle, Emilio Parisotto, Martin Riedmiller, Jost To-
bias Springenberg, Arunkumar Byravan, Leonard
Hasenclever, and Nicolas Heess. 2023. A gener-
alist dynamics model for control. arXiv preprint
arXiv:2305.10912.
Pratyusha Sharma, Balakumar Sundaralingam, Valts
Blukis, Chris Paxton, Tucker Hermans, Antonio Tor-
ralba, Jacob Andreas, and Dieter Fox. 2022. Cor-
recting robot plans with natural language feedback.
Preprint, arXiv:2204.05186.
Lanbo She and Joyce Chai. 2017. Interactive learning
of grounded verb semantics towards human-robot
communication. In Proceedings of the 55th Annual
Meeting of the Association for Computational Lin-
guistics, ACL 2017, Vancouver, Canada, July 30 -
August 4, Volume 1: Long Papers, pages 1634–1644.
Association for Computational Linguistics.
Mohit Shridhar, Jesse Thomason, Daniel Gordon,
Yonatan Bisk, Winson Han, Roozbeh Mottaghi, Luke
Zettlemoyer, and Dieter Fox. 2020. Alfred: A bench-
mark for interpreting grounded instructions for every-
day tasks. Preprint, arXiv:1912.01734.
Mohit Shridhar, Xingdi Yuan, Marc-Alexandre Côté,
Yonatan Bisk, Adam Trischler, and Matthew
Hausknecht. 2021. ALFWorld: Aligning Text and
Embodied Environments for Interactive Learning.
In Proceedings of the International Conference on
Learning Representations (ICLR).
Ishika Singh, Valts Blukis, Arsalan Mousavian, Ankit
Goyal, Danfei Xu, Jonathan Tremblay, Dieter Fox,
Jesse Thomason, and Animesh Garg. 2023. Prog-
prompt: Generating situated robot task plans using
large language models. In 2023 IEEE International
Conference on Robotics and Automation (ICRA),
pages 11523–11530. IEEE.
Bo-Hsiang Tseng, Yinpei Dai, Florian Kreyssig, and
Bill Byrne. 2021. Transferable dialogue systems
and user simulators. In Proceedings of the 59th An-
nual Meeting of the Association for Computational
Linguistics and the 11th International Joint Confer-
ence on Natural Language Processing (Volume 1:
Long Papers), pages 152–166, Online. Association
for Computational Linguistics.
Kuan Wang, Yadong Lu, Michael Santacroce, Yeyun
Gong, Chao Zhang, and Yelong Shen. 2023. Adapt-
ing llm agents through communication. Preprint,
arXiv:2310.01444.
Tianbao Xie, Siheng Zhao, Chen Henry Wu, Yitao Liu,
Qian Luo, Victor Zhong, Yanchao Yang, and Tao
Yu. 2023. Text2reward: Automated dense reward
function generation for reinforcement learning. arXiv
preprint arXiv:2309.11489.
Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak
Shafran, Karthik R Narasimhan, and Yuan Cao. 2022.
React: Synergizing reasoning and acting in language
models. In The Eleventh International Conference
on Learning Representations.
Denis Yarats, David Brandfonbrener, Hao Liu, Michael
Laskin, Pieter Abbeel, Alessandro Lazaric, and Ler-
rel Pinto. 2022. Don’t change the algorithm, change
the data: Exploratory data for offline reinforcement
learning. In ICLR 2022 Workshop on Generalizable
Policy Learning in Physical World.
Tianhe Yu, Deirdre Quillen, Zhanpeng He, Ryan Julian,
Karol Hausman, Chelsea Finn, and Sergey Levine.
2019. Meta-world: A benchmark and evaluation
for multi-task and meta reinforcement learning. In
Conference on Robot Learning (CoRL).
Wenhao Yu, Nimrod Gileadi, Chuyuan Fu, Sean Kir-
mani, Kuang-Huei Lee, Montse Gonzalez Arenas,
Hao-Tien Lewis Chiang, Tom Erez, Leonard Hasen-
clever, Jan Humplik, Brian Ichter, Ted Xiao, Peng Xu,
Andy Zeng, Tingnan Zhang, Nicolas Heess, Dorsa
Sadigh, Jie Tan, Yuval Tassa, and Fei Xia. 2023. Lan-
guage to rewards for robotic skill synthesis. Arxiv
preprint arXiv:2306.08647.
Yichi Zhang and Joyce Chai. 2021. Hierarchical task
learning from language instructions with unified transformers and self-monitoring. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, pages 4202–4213, Online. Association for Computational Linguistics.
Yichi Zhang, Jianing Yang, Jiayi Pan, Shane Storks,
Nikhil Devraj, Ziqiao Ma, Keunwoo Yu, Yuwei Bao,
and Joyce Chai. 2022. DANLI: Deliberative agent for
following natural language instructions. In Proceed-
ings of the 2022 Conference on Empirical Methods
in Natural Language Processing, pages 1280–1298,
Abu Dhabi, United Arab Emirates. Association for
Computational Linguistics.
Yichi Zhang, Jianing Yang, Keunwoo Yu, Yinpei Dai,
Shane Storks, Yuwei Bao, Jiayi Pan, Nikhil Devraj,
Ziqiao Ma, and Joyce Chai. 2023. Seagull: An em-
bodied agent for instruction following through situ-
ated dialog. In Alexa Prize SimBot Challenge Pro-
ceedings.
A Environments and Language Feedback
A.1 Environments Overview
The Appendix Table 1 lists the information that is inherently available within each environment. All models, regardless of whether they are trained with language input or not, have access to this environmental information.
Env          Image Observation   Instruction Manual Text   State Description
HomeGrid     Yes                 No                        No
AlfWorld     No                  No                        Yes
Messenger    No                  Yes                       No
MetaWorld    No                  No                        No
Table 1: Information provided by each environment.
A.2 Language Feedback for Different
Environments
For each environment, we design multiple templates conveying different meanings, and then apply GPT-4 to augment the languages into a GPT-augmented language pool. The number of templates and the corresponding GPT-augmented sentences for each template are shown in Appendix Table 2.
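As a concrete illustration of this augmentation step, the sketch below expands a hand-written template into a pool of paraphrases. The `call_llm` helper, the prompt wording, and the toy stand-in LLM are assumptions for illustration only; they are not taken from the paper's code.

```python
from typing import Callable, Dict, List

def augment_templates(templates: List[str], n_variants: int,
                      call_llm: Callable[[str], str]) -> Dict[str, List[str]]:
    """Expand each hand-written template into a pool of unique paraphrases."""
    pool: Dict[str, List[str]] = {}
    for template in templates:
        prompt = ("Paraphrase the following robot-feedback sentence. "
                  "Keep placeholders such as {action} unchanged:\n" + template)
        variants: List[str] = []
        while len(variants) < n_variants:
            candidate = call_llm(prompt).strip()
            if candidate and candidate not in variants:
                variants.append(candidate)
        pool[template] = variants
    return pool

# Toy stand-in for the online LLM, so the sketch runs end-to-end.
_counter = {"n": 0}
def _fake_llm(prompt: str) -> str:
    _counter["n"] += 1
    return f"Paraphrase {_counter['n']}: you seem to be heading the wrong way."

print(augment_templates(["You have gone to the wrong direction."], 3, _fake_llm))
```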
Env          # Hind Templates   # Fore Templates   # AUG
HomeGrid     20                 9                  70
AlfWorld     4                  4                  200
Messenger    4                  4                  80
MetaWorld    2                  6                  180
Table 2: Number of templates and augmented sentences
for each environment, where ’# Hind Templates’ refers
to the number of hindsight templates, ’# Fore Templates’
refers to the number of foresight templates, and ’# AUG’
refers to the number of GPT-augmented sentences per
template.
A.2.1 HomeGrid
HomeGrid is a multitask grid world designed to evaluate how well agents can understand and use various types of language to complete tasks. Agents receive both task specifications and language hints, providing prior knowledge about world dynamics, information about world states, or corrections to assist the agents. We adopt the language hints in HomeGrid as foresight and further extend the environment with hindsight feedback that comments on the agent's past performance. Agents are expected to ground both hindsight and foresight to the environment to achieve higher performance. The environment includes five task types involving interaction with objects and bins (find, get, clean up, rearrange, open), with a total of 38 tasks. Object locations, bin locations, and bin dynamics are randomized. The agent receives a reward of 1 when the task is completed, and a reward of 0.5 if a subgoal exists (e.g., getting the object in the clean-up task) and is completed. Each template language is augmented to 70 sentences in the GPT template pool. Examples of hindsight and foresight languages are as follows:
• Hindsight Examples:
Template:
▷ "You have gone to the wrong direc-
tion."
▷ "You are doing well so far."
GPT Template:
▷ "You seem to be heading away from
the right route."
▷ "So far, so good, you are doing
great!"
• Foresight Examples:
Template:
▷ "Turn back."
▷ "Pedal to open the recycling bin."
GPT Template:
▷ "Make a 180-degree turn right now."
▷ "To access the recycling bin, you’ll
need to pedal."
Language instructions are generated by comparing the agent's action with the expert planner's action, considering the distance, relative location, and interaction between the agent and the target objects.
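A minimal sketch of such a rule is shown below; the thresholds, pool entries, and function names are ours and only illustrate the kind of comparison described above.

```python
import random

# Illustrative mini-pools; the real pools are the GPT-augmented templates above.
HINDSIGHT = {"good": ["You are doing well so far."],
             "bad":  ["You have gone to the wrong direction."]}

def manhattan(a, b):
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def homegrid_feedback(prev_pos, cur_pos, target_pos,
                      last_agent_action, last_expert_action, expert_next_action):
    """Compare the agent's last step with the expert plan and emit (hindsight, foresight)."""
    moved_closer = manhattan(cur_pos, target_pos) < manhattan(prev_pos, target_pos)
    matched_expert = last_agent_action == last_expert_action
    hindsight = random.choice(HINDSIGHT["good" if (moved_closer or matched_expert) else "bad"])
    foresight = f"Please {expert_next_action} in the next step."
    return hindsight, foresight

print(homegrid_feedback((1, 1), (1, 0), (4, 4), "up", "right", "turn right"))
```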
A.2.2 ALFWorld
ALFWorld is a text-game environment that aligns
with the embodied ALFRED benchmark (Shridhar
et al., 2020) and provides simulation for house-
hold tasks. It includes six types of tasks where
agents need to navigate and interact with house-
hold objects through text actions. The task objects are randomly placed among 50 locations in each episode, which makes it challenging for the agent to plan and complete the subgoals. For the experiments, we adopt LLF-ALFWorld (Cheng et al., 2023), which provides an extra language wrapper for hindsight and foresight language generation on top of the original ALFWorld. The languages are generated based on both the agent's past actions and the optimal trajectory for the current episode. The agent receives a reward of 1 when the task is completed. Each template is augmented to 200 sentences in the GPT template pool. Examples of hindsight and foresight languages are as follows:
• Hindsight Examples:
Template:
▷ "You made a mistake by taking the
bad action {action}."
▷ "It was a right decision to not take
the bad action {action}."
GPT Template:
▷ "The choice to implement {action}
was misguided."
▷ "You made a sensible choice by not
committing to the {avoid action}."
• Foresight Examples:
Template:
▷ "You should now take the {action}
action."
▷ "Take {action} in the next step."
GPT Template:
▷ "Consider taking the {action} as your
next step."
▷ "Moving on, consider the {action}
action."
Language instructions are generated based on the expert's next action and on whether the agent's past actions align with the expert's past actions, considering whether the agent has moved to the target position and interacted correctly with the objects.
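For illustration, the sketch below fills the {action} placeholders of the templates above from the agent/expert action pair. The positive-case wording and the helper name are ours; this is not part of the LLF-ALFWorld API.

```python
def alfworld_feedback(agent_action: str, expert_action: str, expert_next_action: str):
    """Fill hindsight/foresight templates from the last actions and the expert plan."""
    if agent_action == expert_action:
        hindsight = f"It was a right decision to take the action {agent_action}."
    else:
        hindsight = f"You made a mistake by taking the bad action {agent_action}."
    foresight = f"You should now take the {expert_next_action} action."
    return hindsight, foresight

print(alfworld_feedback("go to shelf 1", "go to desk 1", "open drawer 2"))
```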
A.2.3 Messenger
Messenger is a grid world with several entities.
The agent’s primary task is to retrieve a message
from one entity and deliver it to another goal entity,
all while avoiding enemies. At the start of each
episode, the agent is provided with a manual de-
scribing the randomized roles of the entities and
their movement dynamics. The challenge lies in
the fact that the agent does not have access to the
true identity of each entity and must ground the
text manual to the observed dynamics, which requires multi-hop reasoning (for example, grounding "an approaching queen is a deadly enemy" to the observed entity movements) (Lin et al., 2023). The agent receives a sparse reward of 1 when the task is completed. Each template language is augmented to 80 sentences in the GPT template pool. Examples of hindsight and foresight languages are as follows:
• Hindsight Examples:
Template:
▷ "It’s good that you are getting close
to the {target} at {target direction}
by moving {direction}!"
▷ "Stepping {action direction}, yet you
ran into {enemy name}. Be more
cautious."
GPT Template:
▷ "Good job on approaching the {tar-
get} to the {target direction} by mov-
ing {direction}! "
▷ "Stepping {action direction} directly
met {enemy name}. Needs strategic
thinking."
• Foresight Examples:
Template:
▷ "Move {optimal direction} to ap-
proach the {target name} located at
the {target direction}. "
▷ "Rest assured, there are no enemies
around."
GPT Template:
▷ "To get to the {target name} at {tar-
get direction}, go {optimal direc-
tion}. "
▷ "Not detecting any danger, it’s safe."
When generating the language instructions, we compare the agent's actions with the expert's actions, take into account the locations of the target and the nearest enemy, calculate the corresponding distances, and generate the hindsight reflections based on engineered rules.
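A sketch of one such engineered rule for Messenger, based on Manhattan distances to the target and the nearest enemy, is given below; thresholds and wording are illustrative, not the paper's exact rules.

```python
def messenger_hindsight(prev_pos, cur_pos, target_pos, enemies, direction, enemy_name="the enemy"):
    """Choose a hindsight reflection by comparing distances before and after the move."""
    def dist(a, b):
        return abs(a[0] - b[0]) + abs(a[1] - b[1])
    if any(cur_pos == e for e in enemies):
        return f"Stepping {direction}, yet you ran into {enemy_name}. Be more cautious."
    if dist(cur_pos, target_pos) < dist(prev_pos, target_pos):
        return f"It's good that you are getting close to the target by moving {direction}!"
    return f"Moving {direction} took you farther from the target."

print(messenger_hindsight((2, 2), (2, 3), (5, 3), enemies=[(0, 0)], direction="down"))
```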
A.2.4 MetaWorld
MetaWorld is a simulated benchmark that includes
a variety of manipulation tasks performed using a
Sawyer robot arm. It includes 50 types of robot
manipulation tasks common in daily life. Since
our main goal is not meta-learning, we select the
"assembly" and "hammer" tasks for pretraining and
adaptation in our experiments. This requires the
agent to pick up the tool and aim at the specific tar-
get with high precision. To increase the challenge
of the tasks, we introduce random disturbances at
random steps. This requires the robot to actively re-
cover and return to its normal trajectory whenever
it deviates. The agent receives a sparse reward of 1
when completing the task. Each template language is augmented to 180 sentences in the GPT template pool. Examples of hindsight and foresight languages are shown in the following:
• Hindsight Examples:
Template:
▷ "It’s excellent to raise the gripper."
▷ "You are making mistakes for not
opening your gripper."
GPT Template:
▷ "Good job for raising your gripper."
▷ "You make a regrettable mistake
since your gripper is closing."
• Foresight Examples:
Template:
▷ "It’s time to grasp the wrench now."
▷ "Please raise the hammer."
GPT Template:
▷ "Can you grab the wrench with your
gripper?"
▷ "I think the hammer should be raised
now."
We compare the agent's actions with the expert's actions, tell the agent whether its decision at the previous step matches the expert's action, and inform it of what the expert would do at the next step.
B Agent for Offline Data Collection and
Language Feedback Generation
We use an expert agent and a non-expert agent with sub-optimal policies during data collection. The sub-optimal policy introduces errors or perturbations into the training data and then lets the expert policy recover from them. This helps agents learn to recover from potential failures using hindsight reflections and foresight instructions. In our experiments, we introduce 10-20% random noise in each trajectory as the sub-optimal policy. We found that this level of perturbation aids learning, but excessive disturbance (e.g., >50% per trajectory) significantly degrades performance as agents start learning suboptimal behaviors.
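A minimal sketch of this collection loop, assuming Gym-style `env`/`expert_policy` interfaces (the interface names and the 15% default are our assumptions):

```python
import random

def collect_trajectory(env, expert_policy, noise_rate=0.15, max_steps=100):
    """Roll out the expert while injecting 10-20% random actions, so the dataset
    contains mistakes that the expert subsequently recovers from."""
    trajectory = []
    obs = env.reset()
    for _ in range(max_steps):
        if random.random() < noise_rate:
            action = env.action_space.sample()   # sub-optimal perturbation
        else:
            action = expert_policy(obs)          # expert (and recovery) behaviour
        next_obs, reward, done, info = env.step(action)
        trajectory.append((obs, action, reward))
        obs = next_obs
        if done:
            break
    return trajectory
```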
B.1 HomeGrid
For the HomeGrid environment, we design an expert planner to work as the expert agent. We first divide the task into several sub-tasks (e.g., divide "open the recycling bin" into 1. "navigate to the bin" and 2. "open the bin"). For navigation (move to some place) sub-tasks, we implement breadth-first search to find the optimal path; for interaction sub-tasks (interact with an object), we output the corresponding action. We implement the non-expert agent by adding "perturbation" to the expert planner: for example, we randomly reverse the next step of the expert action and let the expert planner recover from the error.
B.2 ALFWorld
For the ALFWorld environment, we use the pre-built expert planner from LLF-Bench (Cheng et al., 2023) as both the expert agent and the data-collection agent.

B.3 Messenger
For the Messenger environment, we implement an expert agent using the A* algorithm (Hart et al., 1968). We define the cost by the distance to the target and the distance to the nearest enemies, and then heuristically search in the grid environment. The non-expert agent used in data collection is implemented by adding random disturbance to the expert agent.
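An A*-style sketch of such a planner on a small grid is shown below; the enemy-penalty shape and the function name are our choices, used only to illustrate the cost terms described above.

```python
import heapq

def astar_first_move(grid_size, start, target, enemies):
    """Return the first move of a path to `target` that penalises passing near enemies."""
    def h(p):                      # admissible heuristic: Manhattan distance
        return abs(p[0] - target[0]) + abs(p[1] - target[1])
    def enemy_penalty(p):          # extra cost within two cells of the nearest enemy
        if not enemies:
            return 0
        nearest = min(abs(p[0] - e[0]) + abs(p[1] - e[1]) for e in enemies)
        return max(0, 3 - nearest)
    moves = {"up": (0, -1), "down": (0, 1), "left": (-1, 0), "right": (1, 0)}
    frontier = [(h(start), 0, start, "")]
    visited = {start}
    while frontier:
        _, g, pos, first = heapq.heappop(frontier)
        if pos == target:
            return first
        for name, (dx, dy) in moves.items():
            nxt = (pos[0] + dx, pos[1] + dy)
            if not (0 <= nxt[0] < grid_size and 0 <= nxt[1] < grid_size) or nxt in visited:
                continue
            visited.add(nxt)
            cost = g + 1 + enemy_penalty(nxt)
            heapq.heappush(frontier, (cost + h(nxt), cost, nxt, first or name))
    return ""

print(astar_first_move(10, (0, 0), (5, 5), enemies=[(2, 2)]))
```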
B.4 MetaWorld
We build the expert agent on the pre-defined policy from the original MetaWorld codebase (Yu et al., 2019) and adapt the policy to handle random disturbances, so that the expert planner can recover to a normal trajectory from any situation.
C Task Settings for RQ 1 and 2
Task Setting for RQ 1. We evaluate the agents’
performance using the same tasks as in the train-
ing phase (but with different initialization of the
agents and object layout for different episodes).
Concretely, 1) in HomeGrid, we train and evalu-
ate on multi-tasks, including FIND, GET, REAR-
RANGE and OPEN; 2) in ALFWorld, we train and
evaluate on multi-tasks including PICK&PLACE,
CLEAN&PLACE and HEAT&PLACE tasks; 3) in
Messenger, we train and evaluate on the task goal
“first retrieve the message and then deliver to target
entity”; and 4) in MetaWorld, we train and evalu-
ate on the ASSEMBLY task, in which the robot arm
needs to pick up the wrench and put it on the peg.
Task Setting for RQ 2. We evaluate agents’ perfor-
mance on unseen tasks by first pre-training agents
on certain tasks and then adapting agents to un-
seen tasks with few-shot episodes. Specifically, 1)
in HomeGrid, we take FIND, GET, REARRANGE,
OPEN tasks for pre-training and the CLEAN-UP task
for adaptation and evaluation; 2) in ALFWorld, we
take PICK&PLACE and CLEAN&PLACE for pre-
training and HEAT&PLACE tasks for adaptation
and evaluation; 3) in Messenger, we take “first re-
trieve the message and then deliver to target entity"
as the pretraining task and “first get to the target
entity and then retrieve the message" (where the
order of the goal is reversed compared to the pre-
training tasks) for adaptation and evaluation; 4) in
MetaWorld, we take the ASSEMBLY task for pre-
training, and the HAMMER task for adaptation and
evaluation.
D Performance under aligned language
type with training.
As stated in Section 6.1, we use online GPT for
all evaluations in RQ 1 and 2 to mimic real-life
human language environments. In this section, we
align the evaluation language type (and adaptation
language type in RQ 2) with each agent’s corre-
sponding training language type for further investi-
gation (e.g. No Language Agent is evaluated with
empty language; Template Hindsight Agent is
evaluated with Template Hindsight). Experiments
on RQ 1 and 2 are conducted on HomeGrid and
Messenger respectively, with the results presented
in Table 3.
HomeGrid Env on RQ 1
Training Language        Aligned Eval   Online GPT Eval
No Lang                  0.235          0.212
Template H               0.260          0.246
Template F               0.305          0.262
Template H + F           0.325          0.285
GPT-augmented H + F      0.472          0.442

Messenger Env on RQ 2 (20 Shots)
Training Language        Aligned Adapt & Eval   Online GPT Eval
No Lang                  0.323                  0.270
GPT-augmented H          0.450                  0.378
GPT-augmented F          0.512                  0.464
GPT-augmented H + F      0.623                  0.608
Table 3: Comparison of agents’ performance adapted
(for RQ 2) and evaluated with aligned language type in
HomeGrid environment on RQ 1 and Messenger envi-
ronment on RQ 2. ‘Aligned (Adapt &) Eval’ refers to
(adaptation &) evaluation with same type of language
in training and ‘Online GPT Eval’ refers to online GPT
evaluation (results in Section 6.2). The results show that
GPT-augmented Hindsight + Foresight evaluated with
online GPT still outperforms other training settings even
with aligned language evaluation, indicating higher lan-
guage informativeness and diversity enhance intrinsic
task understanding.
The results Table 3 show that:
(1) aligning
the informativeness and diversity levels between
training, adaptation and evaluation improves the
final performance for all types; (2) more impor-
tantly, even with aligned evaluation and adaptation
language, no other settings have outperformed
GPT-augmented Hindsight + Foresight evalu-
ated with online GPT. This further demonstrates
that high informativeness and diversity in training
language help agents intrinsically understand tasks
to achieve better performance.
E Impact of hindsight on future steps
Compared to foresight feedback, which provides
instructions for the correct action in the next step,
hindsight feedback reflects on incorrect actions
taken in previous steps. This retrospective analysis
can still guide agents toward success by narrow-
ing down the search space for corrective actions.
To demonstrate the effectiveness of hindsight feed-
back, we conduct a quick comparative study be-
tween the No Language agent and the Template
Hindsight agent in HomeGrid. The study was
designed as follows:
1. Both agents are driven to the same state using
an expert policy.
2. A deliberate mistake is introduced for both
agents. Three types of mistakes are designed:
• Navigation Mistake: The agent moves
in the opposite direction compared to the
expert action.
• Object Pick/Drop Mistake: The agent
picks or drops an object when the expert
action is to drop or pick, respectively.
• Bin Manipulation Mistake:
The
agent chooses the wrong action among
pedal/lift/grasp to open a specific trash
bin.
3. We use expert actions as the ground truth (GT) actions and compare the performance of both agents over 500 runs (a minimal sketch of this protocol is given below).
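The sketch below outlines the protocol under assumed Gym-style interfaces; `env`, `expert_policy`, `agent.act`, and `mistake_fn` are placeholders rather than names from the released code.

```python
def run_mistake_study(env, expert_policy, agent, mistake_fn, n_runs=500, warmup_steps=5):
    """Drive the agent to a state with the expert, inject one deliberate mistake,
    then check whether the agent's next action matches the expert (GT) action."""
    matches = 0
    for _ in range(n_runs):
        obs = env.reset()
        for _ in range(warmup_steps):              # reach a common state via the expert
            obs, _, done, _ = env.step(expert_policy(obs))
            if done:
                break
        obs, _, _, _ = env.step(mistake_fn(expert_policy(obs)))   # deliberate mistake
        if agent.act(obs) == expert_policy(obs):   # ground-truth comparison
            matches += 1
    return 100.0 * matches / n_runs                # accuracy in percent, as in Table 4
```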
The results are shown in Appendix Table 4.
Mistake Type         No Lang (%)    Template Hindsight (%)
Navigation           37.6 ± 0.3     46.2 ± 0.2
Object Pick/Drop     37.4 ± 2.5     41.8 ± 1.6
Bin manipulation     23.5 ± 1.2     24.8 ± 0.9
Table 4: Comparison of performance between No
Language Agent and Template Hindsight Agent on
different Mistake Types.
The results indicate that for the navigation and object
pick/drop mistakes, hindsight feedback is highly
beneficial. This is because identifying a wrong ac-
tion usually directly implies the correct action for
those mistakes (e.g., if "turn left" is wrong, "turn
right" is correct; if "pick the object" is wrong, "drop
the object" is correct). However, for the bin manip-
ulation mistake, hindsight feedback is less helpful
since the action space grows larger (pedal/lift/grasp,
compared to binary opposite actions in Navigation
and Object Pick/Drop), and there are no clear im-
plications for the correct action.
F More results on the Messenger
environment
In the Messenger environment, models trained
with only template foresight or hindsight languages
struggle to generalize to diverse languages during
testing. Without exposure to diverse languages dur-
ing training, these models fail to extract the learned
hindsight or foresight information from mixed and
diverse languages. However, Figure 9 demonstrates
that models trained with more diverse hindsight or
foresight languages can overcome the generaliza-
tion problem, and outperform those trained without
language feedback, showcasing the importance of
diversity in the training languages. Furthermore,
the agents trained with both hindsight and foresight
information still perform the best, aligning with
results in other environments.
Figure 9: In the Messenger environment, when trained with more diverse foresight and hindsight languages, the agents can perform better than those trained without languages. Furthermore, agents trained with more informative languages demonstrate stronger performance.

G Models and Training
We build our Language-Teachable Decision Transformer based on the code of the original Decision Transformer (Chen et al., 2021). In this section, we show our training setup and model hyperparameters for each environment.
When selecting the data size, we prioritize the efficient use of a small-scale dataset and examine the impact of language feedback within the constraints of a limited budget and scarce data, as is common in the field of robotics.

G.1 HomeGrid
Estimated parameter size of the model: 12.191 MB. For research question 1, we train the model with 100 trajectories. For research question 2, the pretraining stage uses 6432 trajectories. The models are trained on one Nvidia RTX A6000. For research question 1, training takes 3 GPU hours. For research question 2, pretraining takes 4 GPU hours and adaptation takes 3 GPU hours. Hyperparameters are shown in Appendix Table 5.

G.2 ALFWorld
Estimated parameter size of the model: 6.5 MB. For research question 1, we train the model with 1000 trajectories. For research question 2, the pretraining stage uses 10000 trajectories. The models are trained on one Nvidia RTX A6000. For research question 1, training takes 3 GPU hours. For research question 2, pretraining takes 4 GPU hours and adaptation takes 3 GPU hours. Hyperparameters are shown in Appendix Table 6.

G.3 Messenger
Estimated parameter size of the model: 289.681 MB. We train the models with 10000 data trajectories during the pretraining stage for seen tasks. The pretraining stage for seen tasks takes 5 GPU hours on one Nvidia RTX A6000. The adaptation stage for unseen tasks takes 1 GPU hour. Hyperparameters are shown in Appendix Table 7.

G.4 MetaWorld
Estimated parameter size of the model: 289.681 MB. We train the models with 20000 data trajectories during the pretraining stage for seen tasks. The pretraining stage for seen tasks takes 2.5 GPU hours on one Nvidia RTX A6000. The adaptation stage for unseen tasks takes 1 GPU hour. Hyperparameters are shown in Appendix Table 8.
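For reference, the HomeGrid hyperparameters of Appendix Table 5 can be summarised in a small config object; the field names below are ours, not identifiers from the released code.

```python
from dataclasses import dataclass

@dataclass
class HomeGridDTConfig:
    """Mirrors Appendix Table 5 for the Language-Teachable Decision Transformer."""
    n_layers: int = 3
    n_heads: int = 1
    embed_dim: int = 128
    activation: str = "relu"
    batch_size: int = 64
    context_length: int = 10
    return_to_go: float = 1.5
    dropout: float = 0.1
    optimizer: str = "adamw"
    learning_rate: float = 1e-4
    grad_norm_clip: float = 0.25
    weight_decay: float = 1e-4
    warmup_steps: int = 100_000   # linear warmup over the first 1e5 training steps

print(HomeGridDTConfig())
```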
H Examples for Language Feedback in Evaluation
As discussed in Section 6.1, we feed template hindsight (l_hind) and template foresight (l_fore) into an online GPT to generate language feedback as a proxy for real-world human feedback, which can be further extended into multi-turn human-machine dialogue systems in task-oriented settings (He et al., 2022a,b,c). In Figure 10, we demonstrate three examples of the GPT outcome. In example 1, we find GPT can concatenate both hindsight and foresight and integrate them into a new fluent sentence. In the second example, we observe that GPT decides to discard the hindsight part and provides only foresight as the outcome. In example 3, GPT chooses not to respond when it thinks the current agent doesn't need help.
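A sketch of how the two template sentences could be handed to the online GPT is given below; the exact system prompt used in the paper is not public, so this wording is an assumption.

```python
def build_feedback_prompt(hindsight: str, foresight: str) -> str:
    """Compose the instruction that asks the LLM to merge hindsight and foresight."""
    return ("You are a human coach watching a household robot.\n"
            f"Hindsight comment: {hindsight}\n"
            f"Foresight hint: {foresight}\n"
            "Rewrite these as one short, natural sentence of feedback. "
            "You may drop an unhelpful part, or return an empty string "
            "if no feedback is needed.")

# Example corresponding to the first case in Figure 10.
print(build_feedback_prompt("Your efforts up to now haven't gone unnoticed.",
                            "The fruit is in the kitchen area."))
```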
Hyperparameter                  Value
Number of transformer layers    3
Number of attention heads       1
Embedding dimension             128
Nonlinearity function           ReLU
Batch size                      64
Context length K                10
Return-to-go conditioning       1.5
Dropout                         0.1
Optimizer                       AdamW
Learning rate                   1e-4
Grad norm clip                  0.25
Weight decay                    1e-4
Learning rate decay             Linear warmup for first 1e5 training steps
Table 5: Hyperparameters of Language-Teachable Deci-
sion Transformer for HomeGrid experiments.
Hyperparameter                  Value
Number of transformer layers    3
Number of attention heads       1
Embedding dimension             128
Nonlinearity function           ReLU
Batch size                      64
Context length K                10
Return-to-go conditioning       1.5
Dropout                         0.1
Optimizer                       AdamW
Learning rate                   1e-3
Grad norm clip                  0.25
Weight decay                    1e-4
Learning rate decay             Cosine annealing with minimum lr = 1e-5
Table 6: Hyperparameters of Language-Teachable Deci-
sion Transformer for ALFWorld experiments.
Hyperparameter                  Value
Number of transformer layers    5
Number of attention heads       2
Embedding dimension             128
Nonlinearity function           ReLU
Batch size                      128 for pretraining and 1 for adaptation
Context length K                10
Return-to-go conditioning       1.5
Dropout                         0.1
Optimizer                       AdamW
Learning rate                   1e-3 for pretraining and 1e-4 for adaptation
Grad norm clip                  0.25
Weight decay                    1e-4
Learning rate decay             Linear warmup for first 1e5 training steps
Table 7: Hyperparameters of Language-Teachable Deci-
sion Transformer for Messenger experiments.
Hyperparameter                  Value
Number of transformer layers    5
Number of attention heads       2
Embedding dimension             256
Nonlinearity function           ReLU
Batch size                      128 for pretraining and 5 for adaptation
Context length K                12
Return-to-go conditioning       20
Return scale                    10
Dropout                         0.1
Optimizer                       AdamW
Learning rate                   1e-5 for pretraining and 1e-6 for adaptation
Weight decay                    1e-4
Learning rate decay             Linear warmup for first 1e5 training steps
Table 8: Hyperparameters of Language-Teachable Deci-
sion Transformer for MetaWorld experiments.
Figure 10: Examples for language feedback generated
by online GPT in evaluation.
Example 1. H: "Your efforts up to now haven't gone unnoticed." F: "The fruit is in the kitchen area." GPT output: "Good effort, but the fruit is in the kitchen area." (concatenates H and F into a fluent sentence)
Example 2. H: "You seem to be veering off the right track." F: "Could you swivel to face the opposite way?" GPT output: "You should turn around and face the opposite way." (discards the hindsight)
Example 3. H: "So far, you're showing a lot of promise." F: "Check the living room for the plates." GPT output: (empty) (decides not to respond)
synthetic_cpt | 7 | ELLE_Efficient_Lifelong_Pre-training_for_Emerging_Data.pdf | CERN-TH-2018-127
arXiv:1806.05941v1 [hep-ph] 15 Jun 2018
ZZ production at the LHC:
NNLO predictions for 2(cid:96)2ν and 4(cid:96) signatures
Stefan Kallweit and Marius Wiesemann
TH Division, Physics Department, CERN, CH-1211 Geneva 23, Switzerland
[email protected]
[email protected]
Abstract
We consider QCD radiative corrections to ZZ production for all experimentally relevant leptonic processes. We report on a novel computation of next-to-next-to-leading-order (NNLO) corrections to the diboson signature with two charged leptons and missing transverse energy (ℓℓ+E_T^miss). All relevant final states are considered: ℓℓν_ℓν_ℓ, ℓℓν_ℓ′ν_ℓ′ and ℓν_ℓℓ′ν_ℓ′. We also study processes with four charged leptons: ℓℓℓℓ and ℓℓℓ′ℓ′. For the first time NNLO accuracy is achieved for a process mixing two double-resonant diboson topologies (ZZ/W+W− → ℓℓν_ℓν_ℓ). We find good agreement with ATLAS data at 8 TeV. NNLO corrections are large (5–20% and more), and interference effects between ZZ and W+W− resonances turn out to be negligible in most cases.
Diboson processes play a major role in the rich physics programme of the LHC. The intriguing
nature of these processes combined with their rather clean experimental signatures and relatively
large cross sections render them ideal for Standard Model (SM) precision measurements. The
precise knowledge of diboson rates and distributions provides a strong test of the gauge-symmetry
structure of electroweak (EW) interactions and the mechanism of EW symmetry breaking. They
also serve as important probes of new physics phenomena in direct and indirect searches. Diboson
final states, in particular ZZ and W +W −, are also extensively used in Higgs-boson measurements.
The production of ZZ pairs yields the smallest cross section among the diboson processes. Never-
theless, its pure experimental signature with four charged leptons in the final state facilitates a
clean measurement so that it has already been used in a combination of ATLAS and CMS data to
constrain anomalous trilinear gauge couplings [1]. ZZ production at the LHC has been measured at
7 TeV [2–4], 8 TeV [5–9], and 13 TeV [10–13]. Also searches for new heavy ZZ resonances involving
both charged leptons and neutrinos have been performed, see Ref. [14] for example.
Theoretical predictions for ZZ production at next-to-leading order (NLO) QCD were obtained
a long time ago for both on-shell Z bosons [15, 16] and their fully leptonic final states [17–20].
Perturbative corrections beyond NLO QCD are indispensable to reach the precision demanded
by present ZZ measurements. NLO EW corrections are known for stable Z bosons [21–23] and
including their full off-shell treatment for leptonic final states [24–26]. ZZ+jet production was
computed at NLO QCD [27]. The loop-induced gg → ZZ + X subprocess, which provides a
separately finite O(α2
S) contribution, is known at leading order (LO) [28–37] and was recently
computed at NLO considering only gg-initiated partonic channels [38–40], using the two-loop
1
Figure 1: Born-level Feynman diagrams for ZZ production with four charged final-state leptons.
(a)
(b)
helicity amplitudes for gg → V V (cid:48) of Refs. [41, 42]. NNLO QCD corrections to on-shell ZZ
production were first evaluated in Ref. [43], and later in Ref. [44]. Using the two-loop helicity
amplitudes for q ¯q → V V (cid:48) [45–47], differential predictions in the four-lepton channels ((cid:96)(cid:96)(cid:96)(cid:96) and
(cid:96)(cid:96)(cid:96)(cid:48)(cid:96)(cid:48)) were presented in Ref. [48].
In this paper we complete NNLO QCD corrections to ZZ production by considering all experi-
mentally relevant leptonic final states. Our computations are fully differential in the momenta of
the final-state leptons, and we account for off-shell effects and spin correlations by consistently
including all resonant and non-resonant topologies. For the first time, we obtain NNLO-accurate
predictions for the (same-flavour) dilepton plus missing transverse energy signature ((cid:96)(cid:96)+Emiss
),
which involves all processes with two opposite-charge leptons and two neutrinos in the final state
((cid:96)(cid:96)ν(cid:96)ν(cid:96), (cid:96)(cid:96)ν(cid:96)(cid:48)ν(cid:96)(cid:48) and (cid:96)ν(cid:96)(cid:96)(cid:48)ν(cid:96)(cid:48)). The process (cid:96)(cid:96)ν(cid:96)ν(cid:96) is particularly interesting as it mixes ZZ and
W +W − topologies, which will be studied in detail. For completeness we also compute NNLO
corrections to the four-lepton channels ((cid:96)(cid:96)(cid:96)(cid:96) and (cid:96)(cid:96)(cid:96)(cid:48)(cid:96)(cid:48)). Phenomenological predictions at NNLO
for all of the aforementioned leptonic processes are compared to LHC data at 8 TeV.
We employ the computational framework Matrix [49]. All tree-level and one-loop amplitudes
are evaluated with OpenLoops1 [54, 55]. At two-loop level we use the q ¯q → V V (cid:48) amplitudes of
Ref. [47], and implement the leptonic final states with two charged leptons and two neutrinos as
well as with four charged leptons. NNLO accuracy is achieved by a fully general implementation of
the qT -subtraction formalism [56] within Matrix. The NLO parts therein (for ZZ and ZZ+1-jet)
are performed by Munich2 [59], which employs the Catani–Seymour dipole subtraction method
[60, 61]. The Matrix framework features NNLO QCD corrections to a large number of colour-
singlet processes at hadron colliders, and has already been used to obtain several state-of-the-art
NNLO predictions [43, 48, 62–69].3
T
We consider all leptonic signatures relevant for ZZ measurements at the LHC. On the one hand,
we compute the four-lepton (4(cid:96)) processes
pp → (cid:96)+(cid:96)− (cid:96)(cid:48)+(cid:96)(cid:48)− + X,
with different-flavour (DF) leptons ((cid:96) (cid:54)= (cid:96)(cid:48)), denoted as (cid:96)(cid:96)(cid:96)(cid:48)(cid:96)(cid:48), and same-flavour (SF) leptons
((cid:96) = (cid:96)(cid:48)), denoted as (cid:96)(cid:96)(cid:96)(cid:96). Representative LO diagrams are shown in Figure 1. They involve both
double-resonant t-channel ZZ production (panel a) and single-resonant s-channel Drell–Yan (DY)
topologies (panel b). On the other hand, we compute processes with two charged leptons and two
1OpenLoops relies on the fast and stable tensor reduction of Collier [50, 51], supported by a rescue system
based on quad-precision CutTools[52] with OneLOop[53] to deal with exceptional phase-space points.
2The Monte Carlo program Munich features a general implementation of an efficient, multi-channel based
phase-space integration and computes both NLO QCD and NLO EW [57, 58] corrections to arbitrary SM processes.
3It was also used in the NNLL+NNLO computation of Ref. [70], and in the NNLOPS computation of Ref. [71].
2
q¯qℓ+ℓ−ℓ′−ℓ′+Z/γqZ/γq¯qℓ+ℓ−ℓ′−ℓ′+Z/γℓ−Z/γ(a)
(b)
(c)
(d)
(e)
Figure 2: Born-level Feynman diagrams for the production of two charged leptons and two
neutrinos: (a-b) topologies of ZZ production contributing to the process pp → (cid:96)+(cid:96)− ν(cid:96)(cid:48) ¯ν(cid:96)(cid:48) ((cid:96) (cid:54)= (cid:96)(cid:48));
(c-e) topologies of W +W − production contributing to the process pp → (cid:96)+ν(cid:96) (cid:96)(cid:48)−¯ν(cid:96)(cid:48) ((cid:96) (cid:54)= (cid:96)(cid:48)); for
(cid:96) = (cid:96)(cid:48) all diagrams contribute to the process pp → (cid:96)+(cid:96)− ν(cid:96)¯ν(cid:96), thereby mixing ZZ and W +W −
topologies.
neutrinos (2(cid:96)2ν) in the final state,
pp → (cid:96)+(cid:96)− ν(cid:96)(cid:48) ¯ν(cid:96)(cid:48) + X,
pp → (cid:96)+ν(cid:96) (cid:96)(cid:48)−¯ν(cid:96)(cid:48) + X, and pp → (cid:96)+(cid:96)− ν(cid:96)¯ν(cid:96) + X,
with (cid:96) (cid:54)= (cid:96)(cid:48).
Representative LO diagrams are shown in Figure 2. In the first process the flavour of the neutrinos
does not match the flavour of the charged leptons, and it features double-resonant ZZ contribu-
tions (panel a) as well as DY-type topologies (panel b). In the second process the two charged
leptons are of different flavours, and it features double-resonant W +W − contributions (panels c
and d) as well as DY-type topologies (panel e). In the third process all leptons and neutrinos are
of the same flavour, and the topologies of the first two processes mix in the matrix elements.
All of the aforementioned processes with charged leptons (cid:96), (cid:96)(cid:48) ∈ {e, µ} and neutrinos ν(cid:96), ν(cid:96)(cid:48) ∈
{νe, νµ, ντ } are studied. The loop-induced gg component is part of the NNLO corrections to these
processes and therefore included. The same is true for resonant Higgs-boson topologies, which also
start contributing at O(α2
s).
A significant complication of the processes pp → (cid:96)+ν(cid:96) (cid:96)(cid:48)−¯ν(cid:96)(cid:48) and pp → (cid:96)+(cid:96)− ν(cid:96)¯ν(cid:96) is posed by
the contamination from resonant top-quark contributions with t → W b decays, which enters
radiative corrections through diagrams featuring external bottom quarks. In the context of W +W −
production [64, 65] two approaches were followed: A top-free W +W − cross section can be obtained
in the four-flavour scheme (4FS) by dropping all contributions with real bottom quarks, which
are separately finite due to the bottom-quark mass. Since in the five-flavour scheme (5FS) real
and virtual contributions of massless bottom quarks are inevitably tied together, the resonance
structure of top-quark contributions is exploited to determine a top-free cross section. Neither of
the two approaches is required in the case of the ZZ measurements presented here. Since W +W −
and top-quark processes are both treated as backgrounds in the respective experimental analyses,
we introduce the following procedure: First, we compute the SF process pp → (cid:96)+(cid:96)− ν(cid:96)¯ν(cid:96) including
In order to keep only ZZ topologies (and interferences), we then
all resonant contributions.
3
q¯qℓ+ℓ−νℓ′¯νℓ′Z/γqZq¯qℓ+ℓ−νℓ′¯νℓ′Zℓ−Z/γu¯uℓ+νℓℓ′−¯νℓ′W+dW−q¯qℓ+νℓℓ′−¯νℓ′W+W−Z/γq¯qℓ+νℓℓ′−¯νℓ′W−ℓ−Z/γsubtract the DF process pp → (cid:96)+ν(cid:96) (cid:96)(cid:48)−¯ν(cid:96)(cid:48). This removes W +W − and top-quark backgrounds from
our predictions, as desired, while their interference with ZZ production, which is not accounted for
in the background predictions and thus considered part of the ZZ signal, is kept. Its impact will be
studied in detail below. If W +W − or top-quark topologies yield much larger contributions than ZZ
to the SF process, sizeable cancellations in the subtraction could diminish the numerical accuracy
of our predictions. However, for typical ZZ signal cuts, as considered here, a Z-mass window
suppresses the W +W − contribution, and a jet veto the top-quark background. The presented
procedure applies in all flavour schemes, and we conveniently use the 5FS throughout.
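Schematically, and in our own notation rather than the paper's, the procedure amounts to the subtraction

```latex
\sigma^{ZZ\text{-like}}_{\,\ell\ell+E_T^{\mathrm{miss}}}
  \;=\; \sigma\big(pp\to\ell^+\ell^-\,\nu_\ell\bar\nu_\ell\big)
  \;-\; \sigma\big(pp\to\ell^+\nu_\ell\,\ell'^-\bar\nu_{\ell'}\big),
  \qquad \ell\neq\ell' ,
```

which removes the W+W− and top-quark backgrounds while retaining their interference with the ZZ topologies.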
2 Gµm2
W = (m2
W − iΓW mW )/(m2
We present predictions for the 8 TeV LHC. For the EW parameters we employ the Gµ scheme
and compute the EW mixing angle as cos θ2
Z − iΓZ mZ) and α =
√
W sin2 θW /π, using the complex-mass scheme [72] throughout. The EW inputs are set
to the PDG [73] values: GF = 1.16639 × 10−5 GeV−2, mW = 80.385 GeV, ΓW = 2.0854 GeV,
mZ = 91.1876 GeV, ΓZ = 2.4952 GeV, mH = 125 GeV, and ΓH = 0.00407. The branching ratio of
the Z-boson decay into massless charged leptons, (cid:96) ∈ {e, µ}, is BR(Z → (cid:96)(cid:96)) = 0.033631, which
is used below to compute the cross section in the total phase space. The on-shell top-quark
mass is set to mt = 173.2 GeV, and Γt = 1.44262 is used. For each perturbative order we use
the corresponding set of Nf = 5 NNPDF3.0 [74] parton distributions with αS(mZ) = 0.118.
Renormalization (µR) and factorization (µF ) scales are set to half of the invariant mass of the ZZ
pair, µR = µF = µ0 ≡ 1
2 mZZ. Residual uncertainties are estimated from customary 7-point scale
variations by a factor of two, with the constraint 0.5 ≤ µR/µF ≤ 2.
We start by comparing phenomenological predictions to the ATLAS 8 TeV measurement of Ref. [9].
The corresponding phase-space cuts are summarized in Table 1 for both the four-lepton and the
(cid:96)(cid:96)+Emiss
signatures. The total phase space is defined by a Z-mass window in the invariant mass
of each reconstructed Z boson. The reconstruction is unambiguous in the DF channel (cid:96)(cid:96)(cid:96)(cid:48)(cid:96)(cid:48),
T
definition of the total phase space for pp → ZZ + X:
  66 GeV ≤ m_{Z_a/b^rec} ≤ 116 GeV

definition of the fiducial volume for pp → ℓ+ℓ− ℓ′+ℓ′− + X,  ℓ, ℓ′ ∈ {e, µ}:
  p_T,ℓ > 7 GeV;  one electron with |η_e| < 4.9, the others |η_e| < 2.5;  |η_µ| < 2.7;
  ∆R_ℓℓ > 0.2, ∆R_ℓℓ′ > 0.2;  66 GeV ≤ m_{Z_a/b^rec} ≤ 116 GeV;
  anti-kT jets with R = 0.4, p_T,j > 25 GeV, |η_j| < 4.5;
  lepton identification in the SF channel: minimizing the differences of the invariant masses of OSSF lepton pairs and m_Z

definition of the fiducial volume for pp → ℓ+ℓ− ν ν̄ + X,  ℓ ∈ {e, µ} and ν ∈ {ν_e, ν_µ, ν_τ}:
  p_T,ℓ > 25 GeV;  |η_ℓ| < 2.5;  ∆R_ℓℓ > 0.3;  76 GeV ≤ m_ℓ+ℓ− ≤ 106 GeV;
  Axial-p_T^miss > 90 GeV;  p_T-balance < 0.4;  N_jets = 0;
  anti-kT jets with R = 0.4, p_T,j > 25 GeV, |η_j| < 4.5 and ∆R_ej > 0.3
anti-kT jets with R = 0.4, pT,j > 25 GeV, |ηj| < 4.5 and ∆Rej > 0.3
Table 1: Phase-space definitions of the ZZ measurements by ATLAS at 8 TeV [9].
4
channel       σLO [fb]                 σNLO [fb]                σNNLO [fb]               σATLAS [fb]
e+e−µ+µ−      8.188(1) +2.4% −3.2%     11.30(0) +2.5% −2.0%     12.92(1) +2.8% −2.2%     12.4 +1.0 −1.0 (stat) +0.6 −0.5 (syst) +0.3 −0.2 (lumi)
e+e−e+e−      4.654(0) +2.3% −3.1%     6.410(2) +2.5% −2.0%     7.310(8) +2.7% −2.1%     5.9 +0.8 −0.8 (stat) +0.4 −0.4 (syst) ± 0.1 (lumi)
µ+µ−µ+µ−      3.565(0) +2.6% −3.5%     4.969(5) +2.5% −2.0%     5.688(6) +2.9% −2.2%     4.9 +0.6 −0.5 (stat) +0.3 −0.2 (syst) ± 0.1 (lumi)
e+e−νν        5.558(0) +0.1% −0.5%     4.806(1) +3.5% −3.9%     5.083(8) +1.9% −0.6%     5.0 +0.8 −0.7 (stat) +0.5 −0.4 (syst) ± 0.1 (lumi)
µ+µ−νν        5.558(0) +0.1% −0.5%     4.770(4) +3.6% −4.0%     5.035(9) +1.8% −0.5%     4.7 +0.7 −0.7 (stat) +0.5 −0.4 (syst) ± 0.1 (lumi)
total rate    4982(0) +1.9% −2.7%      6754(2) +2.4% −2.0%      7690(5) +2.7% −2.1%      7300 +400 −400 (stat) +300 −300 (syst) +200 −100 (lumi)
total rate
Table 2: Predictions for fiducial and total rates compared to ATLAS 8 TeV data [9].
Z_a^rec = ℓ+ℓ− and Z_b^rec = ℓ′+ℓ′−, which we employ for the predicted cross sections in the total phase space. The fiducial cuts involve standard requirements on the transverse momenta and pseudo-rapidities of the leptons, a separation in ∆R = √(∆η² + ∆φ²) between the leptons, and a window in the invariant mass of reconstructed Z bosons around the Z-pole. In the SF channel ℓℓℓℓ, Z bosons are reconstructed by identifying the combination of opposite-sign same-flavour (OSSF) lepton pairings (Z_a = ℓ_a^+ ℓ_a^− and Z_b = ℓ_b^+ ℓ_b^−, or Z_a = ℓ_a^+ ℓ_b^− and Z_b = ℓ_b^+ ℓ_a^−) that minimizes |m_{Z_a} − m_Z| + |m_{Z_b} − m_Z|, with the reconstructed Z bosons Z_a^rec = Z_a and Z_b^rec = Z_b. A rather special feature in the fiducial phase spaces of the four-lepton channels is the fact that ATLAS measures one of the electrons up to very large pseudo-rapidities (|η_e| < 4.9). The measurement of the ℓℓ+E_T^miss signature applies two additional requirements, which force the two Z bosons closer to back-to-back-like configurations to suppress backgrounds such as Z+jets: There is a lower cut on the axial missing transverse momentum, Axial-p_T^miss ≡ −p_T^miss · cos(∆φ_{ℓℓ,νν}), where p_T^miss = p_{T,νν} and ∆φ_{ℓℓ,νν} is the azimuthal angle between the dilepton and the neutrino pair. Furthermore, the two Z-boson momenta are balanced by putting an upper cut on p_T-balance = |p_T^miss − p_{T,ℓℓ}|/p_{T,ℓℓ}. Finally, the ℓℓ+E_T^miss signature requires a jet veto to suppress top-quark backgrounds. Note that jets close to electrons (∆R_ej < 0.3) are not vetoed.
In Table 2 we report cross-section predictions and compare them against ATLAS 8 TeV results [9].
Central predictions are stated with the numerical error on the last digit quoted in round brackets.
The relative uncertainties quoted in percent are estimated from scale variations as described above.
Results reported for e+e−µ+µ−, e+e−e+e−, µ+µ−µ+µ−, e+e−ν ¯ν, and µ+µ−ν ¯ν production are cross
sections in the respective fiducial volumes defined in Table 1. The prediction in the last line of the
table is obtained from the computation of pp → e+e−µ+µ− + X in the total phase space defined
in Table 1, by dividing out the branching ratio BR(Z → (cid:96)(cid:96)) for each Z-boson decay. The main
conclusions that can be drawn from these results are the following:
• Radiative corrections are large and have a marked dependence on the event selection: They
range between +35% to +40% at NLO and +14% to +17% at NNLO in cases without a jet
veto, i.e. for all but the 2(cid:96)2ν results. Roughly half (45%–55%) of the O(α2
s) terms are due to
the loop-induced gg component in these cases. For the 2(cid:96)2ν processes the situation is quite
different: Due to the jet veto NLO corrections turn negative and yield about −14%. NNLO
corrections are roughly +6%. However, the positive effect is entirely due to loop-induced gg
5
contributions, which are not affected by the jet veto. Omitting the loop-induced gg terms, the
genuine NNLO corrections to the q ¯q channel are actually negative and about −5%. Hence,
despite the jet veto, full O(α2
s) corrections are crucial for the (cid:96)(cid:96)+Emiss
signature.
T
• For channels with four charged leptons we find good agreement between theory and data.
This is particularly true for the DF process (e+e− µ+µ−), where NNLO corrections clearly
improve the comparison. In the SF channels (e+e− e+e− and µ+µ− µ+µ−) NNLO predictions
are slightly larger than the measurements, but remain within 1σ for muons and 2σ for
electrons. One should not forget that EW corrections reduce the rates by a few percent [25],
while NLO corrections to the loop-induced gg channel have a positive effect [38].
• For the (cid:96)(cid:96)+Emiss
T
signatures excellent agreement is found between NNLO predictions and
It is worth noting that fixed-order results describe the data
measured cross sections.
significantly better than the Powheg [75–78] Monte Carlo prediction used in Ref. [9].
This could be caused by the jet-veto requirement: As pointed out in Ref. [79] for W +W −
production, in presence of a jet veto the fiducial rate predicted by Powheg is rather small.
• The NNLO prediction in the last line of the table agrees perfectly (< 1σ) with the experimental
result in the total phase space, with NNLO corrections being crucial for this level of agreement.
• At LO scale uncertainties clearly underestimate the actual size of higher-order corrections,
since only the q ¯q channel contributes and the cross section is µR-independent. Given large
NLO corrections, also the scale uncertainties of 2%–4% at NLO cannot be trusted as an
estimate of missing higher-order terms. However, at NNLO all partonic channels are included,
and the corrections to the q ¯q channel, which are much smaller than at NLO, are of the same
order as the respective scale variations. Therefore, NNLO uncertainties may be expected
to reflect the size of yet un-calculated perturbative corrections to this channel. Only the
loop-induced gg component underestimates the uncertainty due to its LO nature, which is
known from the sizable NLO contributions to the gg channel [38].
We now turn to discussing differential distributions. Figure 3 shows results for the production of
four charged leptons in the total phase space. Theoretical predictions in these plots are obtained
from the DF process pp → e+e− µ+µ− + X, divided by the branching ratio BR(Z → (cid:96)(cid:96)) for each
Z-boson decay. The measured results are extrapolated to the total phase space, as presented by
ATLAS at 8 TeV [9]. Given that one electron is measured up to absolute pseudo-rapidities of 4.9,
the extrapolation factor, and possibly the ensuing uncertainty, is smaller than in other four-lepton
measurements. Nevertheless, we reckon that a direct comparison against unfolded distributions in
the fiducial volume is preferable, as it is less affected by the lower perturbative accuracy of the
Monte Carlo generator used for the extrapolation. However, since no such experimental results are
available in the four-lepton channel from ATLAS at 8 TeV, we perform the comparison in the total
phase space. We have normalized the ATLAS distributions to the measured total cross section in
the last line of Table 2.
Despite the fact that the comparison is done in the total phase space, theory predictions and
measured cross sections are in reasonable agreement for the observables shown in Figure 3, which
are the rapidity difference of the reconstructed Z bosons, ∆yZ1,Z2 (panel a), the azimuthal angle
between the two leptons of the harder Z boson, ∆φ(cid:96)+
(panel b), the transverse momentum
Z1
of the leading Z boson, pT,Z1 (panel c), and the number of jets, Njets (panel d). Overall, NNLO
predictions provide the best description of data, although NLO results are similarly close, while LO
is far off. Note that for the jet multiplicity the effective perturbative accuracy of the (fixed-order)
,(cid:96)−
Z1
6
(a)
(b)
(c)
(d)
Figure 3: Differential distributions for the four-lepton processes in the total phase space at
LO (black, dotted), NLO (red, dashed) and NNLO (blue, solid), compared to ATLAS 8 TeV
data extrapolated to the total phase space [9] (green points with error bars); for (a) ∆yZ1,Z2, (b)
∆φ(cid:96)+
Z1
, (c) pT,Z1, and (d) Njets; the lower frames show the ratio over NLO.
,(cid:96)−
Z1
7
dσ/d|ΔyZ1,Z2| [pb]ℓℓℓ(')ℓ(')@LHC 8 TeV (ATLAS data)LONLONNLOdata 0 1 2 3 4 5 6produced with MATRIX|ΔyZ1,Z2| dσ/dσNLO 0 0.5 1 1.5 20.40.81.24 0dσ/dΔϕℓ+Z1ℓ-Z1[pb/rad]ℓℓℓ(')ℓ(')@LHC 8 TeV (ATLAS data)LONLONNLOdata 0.5 1 1.5 2 2.5 3 3.5 4 4.5 5 5.5 6produced with MATRIXΔϕℓ+Z1ℓ-Z1 [rad]dσ/dσNLO 0 0.5 1 1.5 201.31.92.32.7πdσ/dpT,Z1 [pb/GeV]ℓℓℓ(')ℓ(')@LHC 8 TeV (ATLAS data)LONLONNLOdata10-510-410-310-210-1produced with MATRIXpT,Z1 [GeV]dσ/dσNLO 0 0.5 1 1.5 2030601002001500σ [pb]ℓℓℓ(')ℓ(')@LHC 8 TeV (ATLAS data)LONLONNLOdata 0 1 2 3 4 5 6 7 8produced with MATRIX dσ/dσNLO 0.4 0.6 0.8 1 1.2total rate0 jets1 jet2-10 jets(a)
(b)
(c)
Figure 4: Differential distributions of the 2(cid:96)2ν processes with fiducial cuts at LO (black, dotted),
NLO (red, dashed) and NNLO (blue, solid), compared to ATLAS 8 TeV data [9] (green points
with error bars); for (a) pT,(cid:96)(cid:96), (b) mT,ZZ, and (c) ∆φ(cid:96)(cid:96); the lower frame shows the ratio over NLO.
predictions is degraded by one order for each added jet. NNLO effects on other distributions are
large, but primarily affect the normalization and not the shapes.
We continue our discussion of differential results with the (cid:96)(cid:96)+Emiss
signature in Figure 4, which
shows the distributions in the transverse momentum of the dilepton pair, pT,(cid:96)(cid:96) (panel a), the
transverse mass of the ZZ pair, defined as4
T
(cid:115)(cid:18)(cid:113)
T,(cid:96)(cid:96) + m2
p2
Z +
(cid:113)
(pmiss
T
)2 + m2
Z
(cid:19)2
mT,ZZ =
− (pT,(cid:96)(cid:96) + pmiss
T )2
(panel b), and the azimuthal angle between the two leptons, ∆φ(cid:96)(cid:96) (panel c). The results correspond
to the sum of all channels including both SF ((cid:96)(cid:96) ν(cid:96)ν(cid:96)) and DF ((cid:96)(cid:96) ν(cid:96)(cid:48)ν(cid:96)(cid:48)) processes ((cid:96) ∈ {e, µ}, ν(cid:96)(cid:48) ∈
{νe, νµ, ντ }, (cid:96) (cid:54)= (cid:96)(cid:48)). We recall that SF contributions are computed by subtracting W +W − and
top-quark backgrounds as outlined before. For all three distributions in Figure 4 we find excellent
agreement between theory and data. At NNLO, differences hardly exceed the 1σ level. Although
NNLO corrections change the cross section in certain bins, the experimental uncertainties are still
too large for more distinct conclusions. Similar to our previous observations for fiducial rates, the
agreement found here at fixed order is a significant improvement over the comparison with the
Monte Carlo prediction shown in Ref. [9]. As pointed out before, we expect a poor modelling of
the jet veto by the Powheg generator to be the main source of these differences, see also Ref. [79].
In the remainder of this paper we focus on the (cid:96)(cid:96)+Emiss
signature, with the same fiducial setup
as before. In Figure 5 we have picked three out of many observables where the importance of
NNLO corrections is evident. The NLO(cid:48)+gg result in the ratio frame denotes the sum of the NLO
and the loop-induced gg cross section, both evaluated with NNLO PDFs, which was the best
prediction available in the past. Its difference compared to the complete NNLO QCD result shows
the size of the genuine O(α2
S) corrections to the q ¯q channel, computed for the first time in this
T
4Boldface is used to indicate the vectorial sum of the dilepton and missing transverse momentum.
8
dσ/dpT,ℓℓ [fb/GeV]2ℓ2ν@LHC 8 TeV (ATLAS data)LONLONNLOdata 0 0.02 0.04 0.06 0.08 0.1 0.12 0.14produced with MATRIXpT,ℓℓ [GeV]dσ/dσNLO 0 0.5 1 1.5 2601001501500dσ/dmT,ZZ [fb/GeV]2ℓ2ν@LHC 8 TeV (ATLAS data)LONLONNLOdata 0 0.01 0.02 0.03 0.04 0.05 0.06 0.07 0.08produced with MATRIXmT,ZZ [GeV]dσ/dσNLO 0 0.5 1 1.5 22202803304001500dσ/dΔϕℓℓ [fb/rad]2ℓ2ν@LHC 8 TeV (ATLAS data)LONLONNLOdata 0 2 4 6 8 10 12produced with MATRIXΔϕℓℓ [rad]dσ/dσNLO 0 0.5 1 1.5 200.81.21.6π(a)
(b)
(c)
Figure 5: Same as Figure 4, but without data and for the distributions (a) ∆φ(cid:96)(cid:96), (b) pT,(cid:96)1, and (c)
pmiss
; for reference, also the NLO(cid:48)+gg result (green, dash-dotted) is shown in the ratio frame.
T
paper. For example, the ∆φ(cid:96)(cid:96) distribution in Figure 5 (panel a) develops a sizable NNLO/NLO
K-factor up to 1.6 for large separations. From the considerable differences between NNLO and
NLO(cid:48)+gg curves, which also concern their shapes, it is clear that this effect stems directly from
the newly computed O(α2
S) contributions. In this phase-space region (large ∆φ(cid:96)(cid:96)) the perturbative
accuracy is effectively diminished by one order due to the phase-space cuts which force the two Z
bosons to be boosted and approximately back-to-back, so that the two decay leptons disfavour
large separations. This manifests itself also in a widening of the scale uncertainty bands. Also
the transverse-momentum spectrum of the hardest lepton, pT,(cid:96)1 in Figure 5 (panel b) features a
significant shape distortion at NNLO, when compared to both NLO and NLO(cid:48)+gg. The same is
true for the missing transverse momentum, pmiss
in Figure 5 (panel c). In all cases perturbative
uncertainties are clearly reduced upon inclusion of higher-order corrections.
T
We complete our discussion of phenomenological results by studying the size of ZZ, W +W −,
and interference contributions entering the SF process pp → (cid:96)+(cid:96)− ν(cid:96)¯ν(cid:96). We recall that W +W −
contributions also involve resonant top-quark topologies. In contrast to our previous discussion,
W +W − and top-quark contributions are not subtracted from the SF process in the following.
We focus on the contamination of the (cid:96)(cid:96)+Emiss
signature through interference with W +W − and
top-quark diagrams. To this end, Figure 6 compares the NNLO cross section for the full process of
two OSSF leptons and two neutrinos, σ((cid:96)(cid:96) νe/µ/τ νe/µ/τ ) = σ((cid:96)(cid:96) ν(cid:96)ν(cid:96)) + 2 · σ((cid:96)(cid:96) ν(cid:96)(cid:48)ν(cid:96)(cid:48)) for (cid:96) ∈ {e, µ}
and (cid:96) (cid:54)= (cid:96)(cid:48) with the same NNLO cross section, where the SF channel is approximated by the
incoherent sum of the two DF processes, σ((cid:96)(cid:96) νe/µ/τ νe/µ/τ ) ≈ 3 · σ((cid:96)(cid:96) ν(cid:96)(cid:48)ν(cid:96)(cid:48)) + σ((cid:96)ν(cid:96) (cid:96)(cid:48)ν(cid:96)(cid:48)). The
difference of the two is precisely the remaining interference contribution of ZZ with W +W − (and
top-quark) topologies which we want to study. For completeness, also the individual DF ZZ and
DF W +W − cross sections, 3 · σ((cid:96)(cid:96) ν(cid:96)(cid:48)ν(cid:96)(cid:48)) and σ((cid:96)(cid:96)(cid:48) ν(cid:96)ν(cid:96)(cid:48)), respectively, are shown, whose sum is the
approximated cross section.
T
It is instructive to consider the invariant mass of the charged leptons, m(cid:96)+(cid:96)−, in Figure 6 (panel
a), which nicely illustrates the nature of the different results: Only ZZ topologies feature a
resonance at m(cid:96)+(cid:96)− = mZ, while the DF W +W − prediction is almost flat in this range of m(cid:96)+(cid:96)−.
9
dσ/dΔϕℓℓ [fb/rad]2ℓ2ν@LHC 8 TeVLONLONNLO10-410-310-210-1100101produced with MATRIXΔϕℓℓ [rad]dσ/dσNLONLO'+gg 0.8 1 1.2 1.4 1.6 1.8 0 0.5 1 1.5 2 2.5 3dσ/dpT,ℓ1 [fb/GeV]2ℓ2ν@LHC 8 TeVLONLONNLO10-410-310-210-1100produced with MATRIXpT,ℓ1 [GeV]dσ/dσNLONLO'+gg 0.8 0.9 1 1.1 1.2 1.3 1.4 1.5 1.6 0 100 200 300 400 500dσ/dpTmiss [fb/GeV]2ℓ2ν@LHC 8 TeVLONLONNLO10-610-510-410-310-210-1100produced with MATRIXpTmiss [GeV]dσ/dσNLONLO'+gg 0.7 0.8 0.9 1 1.1 1.2 0 100 200 300 400 500 600 700 800 900 1000(a)
(b)
(c)
Figure 6: Comparison of NNLO cross sections for the full process σ((cid:96)(cid:96) νe/µ/τ νe/µ/τ ) (blue, solid),
the individual ZZ contributions 3 · σ((cid:96)(cid:96) ν(cid:96)(cid:48)ν(cid:96)(cid:48)) with (cid:96) (cid:54)= (cid:96) (orange, dash-dotted), the individual
W +W − contributions σ((cid:96)ν(cid:96) (cid:96)(cid:48)ν(cid:96)(cid:48)) with (cid:96) (cid:54)= (cid:96) (black, dotted), and the approximation of the full
result by the incoherent sum of ZZ and W +W − contributions 3 · σ((cid:96)(cid:96) ν(cid:96)(cid:48)ν(cid:96)(cid:48)) + σ((cid:96)(cid:96)(cid:48) ν(cid:96)ν(cid:96)(cid:48)) (red,
dashed); for (a) m(cid:96)+(cid:96)−, (b) pT,(cid:96)−ν(cid:96), and (c) pT,(cid:96)1; the lower frames show the ratio to the full result.
It is clear from the first ratio frame that almost the entire cross section around the peak stems
from ZZ contributions. Only away from the peak W +W − production becomes larger than ZZ
production. It is also clear that it is the m(cid:96)+(cid:96)− cut in the fiducial definition which significantly
enhances ZZ contributions and suppresses the W +W − process. The relative difference between
the approximated and the full result, which is enlarged in the second ratio frame, is very small,
in particular in the peak region. This demonstrates that interference effects of ZZ with W +W −
(and top-quark) topologies are negligible, and that an incoherent sum of the two DF channels is an
excellent approximation of the SF process. This also implies that in our previous definition of the
(cid:96)(cid:96)+Emiss
signature the remaining interference effects after subtraction of W +W − and top-quark
backgrounds are small. In fact, we hardly found any distribution with larger interference effects.
The most pronounced example is the “pseudo”-observable in Figure 6 (panel b) that shows the
transverse-momentum spectrum of a W − boson reconstructed as (cid:96)−ν(cid:96), and even in this case the
differences do not exceed a few percent, although the shape is slightly deformed. With interference
effects being generally small, it is interesting to analyse the different behaviour of ZZ and W +W −
In the pT,(cid:96)1 distribution in Figure 6 (panel c), for example, the relative W +W −
topologies.
contribution increases around pT,(cid:96)1 = 90 GeV. This feature is already present at LO, and it is
caused by purely kinematic effects that allow the two W bosons to become resonant simultaneously
only in this part of phase space. The region below pT,(cid:96)1 = 45 GeV is populated only beyond LO.
T
We have presented NNLO QCD corrections to ZZ production for all leptonic processes. The
(cid:96)(cid:96)+Emiss
signature has been studied for the first time at this level of accuracy, and we have
T
10
dσNNLO/dmℓℓ [fb/GeV]2ℓ2ν@LHC 8 TeVfull (ℓℓνν+2·ℓℓν'ν')ZZ DF (3·ℓℓν'ν')WW DF (ℓνℓ'ν')sum (3·ℓℓν'ν'+ℓνℓ'ν')10-310-210-1100101produced with MATRIXdσNNLO/σNNLO(full)0%20%40%60%80%100%120%mℓℓ [GeV]dσNNLO/σNNLO(full) - 1-10%-5%0%+5%+10% 80 85 90 95 100 105dσNNLO/dpT,ℓ−ν [fb/GeV]2ℓ2ν@LHC 8 TeVfull (ℓℓνν+2·ℓℓν'ν')ZZ DF (3·ℓℓν'ν')WW DF (ℓνℓ'ν')sum (3·ℓℓν'ν'+ℓνℓ'ν')10-610-510-410-310-210-1100produced with MATRIXdσNNLO/σNNLO(full)0%20%40%60%80%100%120%pT,ℓ−ν [GeV]dσNNLO/σNNLO(full) - 1-10%-5%0%+5%+10% 0 100 200 300 400 500dσNNLO/dpT,ℓ1 [fb/GeV]2ℓ2ν@LHC 8 TeVfull (ℓℓνν+2·ℓℓν'ν')ZZ DF (3·ℓℓν'ν')WW DF (ℓνℓ'ν')sum (3·ℓℓν'ν'+ℓνℓ'ν')10-610-510-410-310-210-1100produced with MATRIXdσNNLO/σNNLO(full)0%20%40%60%80%100%120%pT,ℓ1 [GeV]dσNNLO/σNNLO(full) - 1-10%-5%0%+5%+10% 0 100 200 300 400 500introduced a procedure to compute results consistently in the five-flavour scheme without contribu-
tions from W +W − or top-quark backgrounds. We also computed state-of-the-art predictions for
signatures involving four charged leptons. Our results are compared to ATLAS data at 8 TeV, and
we find good agreement for both fiducial cross sections and distributions. NNLO QCD corrections
are sizable, even in presence of a jet veto used in the (cid:96)(cid:96)+Emiss
T measurement. By and large, they are
of the order of 5%–20%, but can reach even 60% in certain phase-space regions. Most importantly,
such effects do not only stem from the loop-induced gg contribution, but are also due to the newly
computed genuine O(α2
S) corrections to the q ¯q channel. Not least, we have shown that all remaining
interference effects of ZZ topologies with W +W − and top-quark backgrounds in 2(cid:96)2ν production
are negligible. The availability of fully differential NNLO predictions for all leptonic channels of
ZZ production will play a crucial role in the rich physics programme that is based on precision
studies of ZZ signatures at the LHC. Along with the paper we provide an updated version of
Matrix, featuring all processes with the fiducial setup, cuts and distributions considered here.
Acknowledgements. We would like to thank Massimiliano Grazzini and Jochen Meyer for
useful discussions and comments on the manuscript. The work of MW is supported by the ERC
Consolidator Grant 614577 HICCUP.
References
[1] ATLAS and CMS, ATLAS-CONF-2016-036, CMS-PAS-SMP-15-001.
[2] G. Aad et al. (ATLAS), Phys. Rev. Lett. 108, 041804 (2012), arXiv:1110.5016 [hep-ex].
[3] S. Chatrchyan et al. (CMS), JHEP 01, 063 (2013), arXiv:1211.4890 [hep-ex].
[4] G. Aad et al. (ATLAS), JHEP 03, 128 (2013), arXiv:1211.6096 [hep-ex].
[5] S. Chatrchyan et al. (CMS), Phys. Lett. B721, 190 (2013), arXiv:1301.4698 [hep-ex].
[6] V. Khachatryan et al. (CMS), Phys. Lett. B740, 250 (2015), arXiv:1406.0113 [hep-ex].
[7] V. Khachatryan et al. (CMS), Eur. Phys. J. C75, 511 (2015), arXiv:1503.05467 [hep-ex].
[8] G. Aad et al. (ATLAS), Phys. Lett. B753, 552 (2016), arXiv:1509.07844 [hep-ex].
[9] M. Aaboud et al. (ATLAS), JHEP 01, 099 (2017), arXiv:1610.07585 [hep-ex].
[10] G. Aad et al. (ATLAS), Phys. Rev. Lett. 116, 101801 (2016), arXiv:1512.05314 [hep-ex].
[11] V. Khachatryan et al. (CMS), Phys. Lett. B763, 280 (2016), [Erratum: Phys. Lett. B772,
884 (2017)], arXiv:1607.08834 [hep-ex].
[12] M. Aaboud et al. (ATLAS), Phys. Rev. D97, 032005 (2018), arXiv:1709.07703 [hep-ex].
[13] A. M. Sirunyan et al. (CMS), Eur. Phys. J. C78, 165 (2018), arXiv:1709.08601 [hep-ex].
[14] M. Aaboud et al. (ATLAS), Eur. Phys. J. C78, 293 (2018), arXiv:1712.06386 [hep-ex].
[15] J. Ohnemus and J. Owens, Phys. Rev. D43, 3626 (1991).
[16] B. Mele, P. Nason and G. Ridolfi, Nucl. Phys. B357, 409 (1991).
[17] J. Ohnemus, Phys. Rev. D50, 1931 (1994), hep-ph/9403331.
[18] J. M. Campbell and R. K. Ellis, Phys. Rev. D60, 113006 (1999), hep-ph/9905386.
[19] L. J. Dixon, Z. Kunszt and A. Signer, Phys. Rev. D60, 114037 (1999), hep-ph/9907305.
[20] L. J. Dixon, Z. Kunszt and A. Signer, Nucl. Phys. B531, 3 (1998), hep-ph/9803250.
11
[21] E. Accomando, A. Denner and A. Kaiser, Nucl. Phys. B706, 325 (2005), hep-ph/0409247.
[22] A. Bierweiler, T. Kasprzik and J. H. K¨uhn, JHEP 1312, 071 (2013), arXiv:1305.5402 [hep-ph].
[23] J. Baglio, L. D. Ninh and M. M. Weber, Phys. Rev. D88, 113005 (2013), arXiv:1307.4331
[hep-ph].
[24] B. Biedermann, A. Denner, S. Dittmaier, L. Hofer and B. J¨ager, Phys. Rev. Lett. 116, 161803
(2016), arXiv:1601.07787 [hep-ph].
[25] B. Biedermann, A. Denner, S. Dittmaier, L. Hofer and B. J¨ager, JHEP 01, 033 (2017),
arXiv:1611.05338 [hep-ph].
[26] S. Kallweit, J. M. Lindert, S. Pozzorini and M. Sch¨onherr, JHEP 11, 120 (2017),
arXiv:1705.00598 [hep-ph].
[27] T. Binoth, T. Gleisberg, S. Karg, N. Kauer and G. Sanguinetti, Phys. Lett. B683, 154 (2010),
arXiv:0911.3181 [hep-ph].
[28] E. W. N. Glover and J. J. van der Bij, Nucl. Phys. B321, 561 (1989).
[29] D. A. Dicus, C. Kao and W. W. Repko, Phys. Rev. D36, 1570 (1987).
[30] T. Matsuura and J. van der Bij, Z. Phys. C51, 259 (1991).
[31] C. Zecher, T. Matsuura and J. van der Bij, Z. Phys. C64, 219 (1994), hep-ph/9404295.
[32] T. Binoth, N. Kauer and P. Mertsch, Proceedings DIS 2008, 142 (2008), arXiv:0807.0024
[hep-ph].
[33] J. M. Campbell, R. K. Ellis and C. Williams, JHEP 1107, 018 (2011), arXiv:1105.0020
[hep-ph].
[34] N. Kauer, JHEP 12, 082 (2013), arXiv:1310.7011 [hep-ph].
[35] F. Cascioli, S. H¨oche, F. Krauss, P. Maierh¨ofer, S. Pozzorini and F. Siegert, JHEP 1401, 046
(2014), arXiv:1309.0500 [hep-ph].
[36] J. M. Campbell, R. K. Ellis and C. Williams, JHEP 04, 060 (2014), arXiv:1311.3589 [hep-ph].
[37] N. Kauer, C. O’Brien and E. Vryonidou, JHEP 10, 074 (2015), arXiv:1506.01694 [hep-ph].
[38] F. Caola, K. Melnikov, R. R¨ontsch and L. Tancredi, Phys. Rev. D92, 094028 (2015),
arXiv:1509.06734 [hep-ph].
[39] F. Caola, M. Dowling, K. Melnikov, R. R¨ontsch and L. Tancredi, JHEP 07, 087 (2016),
arXiv:1605.04610 [hep-ph].
[40] S. Alioli, F. Caola, G. Luisoni and R. R¨ontsch, Phys. Rev. D95, 034042 (2017),
arXiv:1609.09719 [hep-ph].
[41] F. Caola, J. M. Henn, K. Melnikov, A. V. Smirnov and V. A. Smirnov, JHEP 1506, 129
(2015), arXiv:1503.08759 [hep-ph].
[42] A. von Manteuffel and L. Tancredi, JHEP 1506, 197 (2015), arXiv:1503.08835 [hep-ph].
[43] F. Cascioli, T. Gehrmann, M. Grazzini, S. Kallweit, P. Maierh¨ofer, A. von Manteuffel, S. Poz-
zorini, D. Rathlev, L. Tancredi and E. Weihs, Phys. Lett. B735, 311 (2014), arXiv:1405.2219
[hep-ph].
[44] G. Heinrich, S. Jahn, S. P. Jones, M. Kerner and J. Pires, JHEP 03, 142 (2018),
arXiv:1710.06294 [hep-ph].
[45] T. Gehrmann, A. von Manteuffel, L. Tancredi and E. Weihs, JHEP 1406, 032 (2014),
arXiv:1404.4853 [hep-ph].
12
[46] F. Caola, J. M. Henn, K. Melnikov, A. V. Smirnov and V. A. Smirnov, JHEP 1411, 041
(2014), arXiv:1408.6409 [hep-ph].
[47] T. Gehrmann, A. von Manteuffel and L. Tancredi, JHEP 09, 128 (2015), arXiv:1503.04812
[hep-ph].
[48] M. Grazzini, S. Kallweit and D. Rathlev, Phys. Lett. B750, 407 (2015), arXiv:1507.06257
[hep-ph].
[49] M. Grazzini, S. Kallweit and M. Wiesemann, arXiv:1711.06631 [hep-ph].
[50] A. Denner, S. Dittmaier and L. Hofer, PoS LL2014, 071 (2014), arXiv:1407.0087 [hep-ph].
[51] A. Denner, S. Dittmaier and L. Hofer, Comput. Phys. Commun. 212, 220 (2017),
arXiv:1604.06792 [hep-ph].
[52] G. Ossola, C. G. Papadopoulos and R. Pittau, JHEP 0803, 042 (2008), arXiv:0711.3596
[hep-ph].
[53] A. van Hameren, Comput. Phys. Commun. 182, 2427 (2011), arXiv:1007.4716 [hep-ph].
[54] F. Cascioli, P. Maierh¨ofer and S. Pozzorini, Phys. Rev. Lett. 108, 111601 (2012),
arXiv:1111.5206 [hep-ph].
[55] F. Buccioni, S. Pozzorini and M. Zoller, Eur. Phys. J. C78, 70 (2018), arXiv:1710.11452
[hep-ph].
[56] S. Catani and M. Grazzini, Phys. Rev. Lett. 98, 222002 (2007), hep-ph/0703012.
[57] S. Kallweit, J. M. Lindert, P. Maierh¨ofer, S. Pozzorini and M. Sch¨onherr, JHEP 04, 012
(2015), arXiv:1412.5157 [hep-ph].
[58] S. Kallweit, J. M. Lindert, P. Maierh¨ofer, S. Pozzorini and M. Sch¨onherr, JHEP 04, 021
(2016), arXiv:1511.08692 [hep-ph].
[59] Munich is the abbreviation of “MUlti-chaNnel Integrator at Swiss (CH) precision”—an
automated parton level NLO generator by S. Kallweit. In preparation.
[60] S. Catani and M. Seymour, Phys. Lett. B378, 287 (1996), hep-ph/9602277.
[61] S. Catani and M. Seymour, Nucl. Phys. B485, 291 (1997), hep-ph/9605323.
[62] M. Grazzini, S. Kallweit, D. Rathlev and A. Torre, Phys. Lett. B731, 204 (2014),
arXiv:1309.7000 [hep-ph].
[63] M. Grazzini, S. Kallweit and D. Rathlev, JHEP 07, 085 (2015), arXiv:1504.01330 [hep-ph].
[64] T. Gehrmann, M. Grazzini, S. Kallweit, P. Maierh¨ofer, A. von Manteuffel, S. Pozzorini,
D. Rathlev and L. Tancredi, Phys. Rev. Lett. 113, 212001 (2014), arXiv:1408.5243 [hep-ph].
[65] M. Grazzini, S. Kallweit, S. Pozzorini, D. Rathlev and M. Wiesemann, JHEP 08, 140 (2016),
arXiv:1605.02716 [hep-ph].
[66] M. Grazzini, S. Kallweit, D. Rathlev and M. Wiesemann, Phys. Lett. B761, 179 (2016),
arXiv:1604.08576 [hep-ph].
[67] M. Grazzini, S. Kallweit, D. Rathlev and M. Wiesemann, JHEP 05, 139 (2017),
arXiv:1703.09065 [hep-ph].
[68] D. de Florian, M. Grazzini, C. Hanga, S. Kallweit, J. M. Lindert, P. Maierh¨ofer, J. Mazzitelli
and D. Rathlev, JHEP 09, 151 (2016), arXiv:1606.09519 [hep-ph].
[69] M. Grazzini, G. Heinrich, S. Jones, S. Kallweit, M. Kerner, J. M. Lindert and J. Mazzitelli,
JHEP 05, 059 (2018), arXiv:1803.02463 [hep-ph].
13
[70] M. Grazzini, S. Kallweit, D. Rathlev and M. Wiesemann, JHEP 08, 154 (2015),
arXiv:1507.02565 [hep-ph].
[71] E. Re, M. Wiesemann and G. Zanderighi, arXiv:1805.09857 [hep-ph].
[72] A. Denner, S. Dittmaier, M. Roth and L. H. Wieders, Nucl. Phys. B724, 247 (2005), [Erratum:
Nucl. Phys. B854, 504 (2012)], hep-ph/0505042.
[73] C. Patrignani et al. (Particle Data Group), Chin. Phys. C40, 100001 (2016).
[74] R. D. Ball et al. (NNPDF), JHEP 1504, 040 (2015), arXiv:1410.8849 [hep-ph].
[75] P. Nason, JHEP 11, 040 (2004), hep-ph/0409146.
[76] S. Frixione, P. Nason and C. Oleari, JHEP 11, 070 (2007), arXiv:0709.2092 [hep-ph].
[77] S. Alioli, P. Nason, C. Oleari and E. Re, JHEP 06, 043 (2010), arXiv:1002.2581 [hep-ph].
[78] T. Melia, P. Nason, R. R¨ontsch and G. Zanderighi, JHEP 11, 078 (2011), arXiv:1107.5051
[hep-ph].
[79] P. F. Monni and G. Zanderighi, JHEP 1505, 013 (2015), arXiv:1410.4745 [hep-ph].
14
|
synthetic_cpt | 1 | Semi-Automated_Construction_of_Food_Composition_Knowledge_Base.pdf | 6
1
0
2
r
a
M
7
]
T
A
.
h
t
a
m
[
1
v
9
6
9
1
0
.
3
0
6
1
:
v
i
X
r
a
Semi-homotopy and semi-fundamental groups
Ayhan ERC˙IYES∗a, Ali AYTEK˙INb and Tunçar ¸SAHANb
aDepartment of Elementary Mathematics Education, Aksaray University, Aksaray, TURKEY
bDepartment of Mathematics, Aksaray University, Aksaray, TURKEY
Abstract
In this study we introduce the notions of semi-homotopy of semi-continuous maps
and of semi-paths. We also construct a group structure, which will be called semi-
fundamental group, using semi-loops and explore some properties of semi-homotopy
and semi-fundamental groups.
Key Words: Semi-open sets, semi-closed sets, homotopy, fundamental groups
Classification: 54C08, 14F35, 55Q05, 57M05
1
Introduction
Homotopy theory studies topological objects up to homotopy equivalence. Homotopy
equivalence is a weaker relation than topological equivalence, i.e., homotopy classes of
spaces are larger than homeomorphism classes. Therefore, homotopy equivalence plays
a more important role than homeomorphism. Homotopy theory is a subdomain of topol-
ogy. Instead of considering the category of topological spaces and continuous maps, one
may prefer to consider as morphisms only the continuous maps up to homotopy. On the
other hand the concept of homotopy groups is a way to interpret topological problems to
algebraic problems which could be solve much easier. For this reason, homotopy groups,
especially fundamental groups, are very powerful tools for this purpose. To obtain further
insights on applications of homotopy groups, see for example the books of Brown [2] and of
Rotman [9].
The concept of semi-open set in topological spaces was introduced in 1963 by Levine [7].
He defined a set A to be semi-open in a topological space if and only if A is between an open
∗Correspondence: [email protected]
1
subset and the closure of that open. Further, Levine investigated a notion of semi-continuity.
After the works of Levine on semi-open sets, various mathematician turned their attention
to the generalisations of various concepts of topology by considering semi-open sets instead
of open sets. New results are obtained in some occasions and in other occasions substantial
generalisations are exibited, by replacing open sets with semi-open sets
In 1971, S. Gene Crossley and S. K. Hildebrand [4] introduce semi-closed sets, semi-
interior, and semi-closure in a manner analogous to the corresponding concepts of closed
sets, interior, and closure. Further, a year later, they defined that a property of topological
spaces is a semi-topological property if there is a semi-homeomorphism which preserves
that property [5]. Also, they shown that Hausdorff, separable, and connected properties of
topological spaces were semi-topological properties.
S.M.N. Maheshawari and R. Prasad [8] used semi-open sets to define and investigate
three new separation axiom called Semi-T0, Semi-T1 and Semi-T2.
Recently, P. Bhattacharyya and B.K. Lahiri [1] generalised the concept of closed sets to
semi-generalised closed sets with the help of semi-openness.
In the light of these works, the main purpose of this paper is to introduce the notions of
semi-homotopy and semi-fundamental group using the semi-open sets, to obtain different
group structures from topological spaces.
2 Preliminaries
The notion of semi-open sets in a topological space was introduced by Levine [7] as
follows.
Definition 2.1 [7] Let X be a topological space and A ⊆ X. A is called semi-open provided that
there exists an open set U such that U ⊆ A ⊆ U, where U denotes the closure of the set U in X.
Here is a concrete example of semi-open sets.
Example 2.2 Let τ = {X, ∅, {a}, {a, b}} be the topology on the set X = {a, b, c, d}. Therefore we
have semi-open subsets of X as follows:
SO(X) = {X, ∅, {a}, {a, c}, {a, d}, {a, b}, {a, b, c}, {a, c, d}, {a, b, d}}.
Following proposition is a well known result for semi-open sets. Hence we omit the
proof.
Proposition 2.3 [10, 7] Union of any collection of semi-open sets in a topological space is also semi-
open.
2
Example 2.4 Consider the space of the real numbers with the usual topology. It is easy to see that
intervals of the form (a, b), (a, b], [a, b) and [a, b] and their arbitrary unions are semi-open.
Proposition 2.5 Let X be a topological space and A ⊆ X. Then A is semi-open if and only if for
each point x in A there exist a semi-open subset Bx of X such that x ∈ Bx ⊆ A.
Proof: Let A be a semi-open set in X. Thus we can choose the set Bx as A for all x ∈ A.
Conversely assume that for each point x in A there exist a semi-open subset Bx of X such
that x ∈ Bx ⊆ A. Then
Bx = A
[
x∈A
and by Proposition 2.3 A is a semi-open subset of X.
(cid:4)
The notion of semi-closedness is introduced in [4]. Now we will recall the definition of
semi-closed sets and some-properties of semi-closed sets from [4].
Definition 2.6 [4] Let X be a topological space and C ⊆ X. C is called semi-closed if there exists
a closed set K such that K ◦ ⊆ C ⊆ K where K ◦ is the interior of K.
Example 2.7 Let τ = {X, ∅, {a}, {a, b}} be the topology on the set X = {a, b, c, d}. Therefore we
have semi-closed subsets of X as follows:
SC(X) = {X, ∅, {b}, {c}, {d}, {c, d}, {b, c}, {b, d}, {b, c, d}}.
Proposition 2.8 [4] In a topological space the complement of a semi-open set is semi-closed and
vice-versa.
Now we will recall the definitions of semi-continuities and some properties of them from
[10].
Definition 2.9 Let X and Y be two topological spaces, f : X → Y a function and p a point of X.
Then f is called
(i) so-1-continuous at p provided for each open set V containing f (p) in Y , there exists a semi-
open set A in X that contains p and f (A) ⊆ V ,
(ii) so-2-continuous at p provided for each semi-open set B containing f (p) in Y , there exists a
semi-open set A in X that contains p and f (A) ⊆ B, and
(iii) so-3-continuous at p provided for each semi-open set B containing f (p) in Y , there exists an
open set U in X that contains p and f (U) ⊆ B.
3
If f is so-i-continuous at every point of X for a fixed i then f is called so-i-continuous.
Relations between so-i-continuous functions, constant functions and continuous func-
tions are given with the following figure.
constant
3 so − 3
so − 1
;♣♣♣♣♣♣♣♣♣♣
♣♣♣♣♣♣♣♣♣♣
❖❖❖❖❖❖❖❖❖❖
❖❖❖❖❖❖❖❖❖❖
so − 2
◆◆◆◆◆◆◆◆◆◆
◆◆◆◆◆◆◆◆◆◆
;♦♦♦♦♦♦♦♦♦♦
♦♦♦♦♦♦♦♦♦♦
continuous
This figure says that every constant map is so-3-continuous, every so-3-continuous func-
tion is both so-2-continuous and continuous, every so-2-continuous function and every con-
tinuous function is so-1-continuous.
Following proposition gives a criteria for so-i-continuous functions similar to one in clas-
sical topology. The proof is also similar, hence we omit.
Proposition 2.10 Let X and Y be topological spaces and f : X → Y a function. Then f is
(i) so-1-continuous iff for each open set V ⊆ Y , f −1(V ) is semi-open in X,
(ii) so-2-continuous iff for each semi-open set B ⊆ Y , f −1(B) is semi-open in X,
(iii) so-3-continuous iff for each semi-open set B ⊆ Y , f −1(B) is open in X.
This proposition could be given by using semi-closed sets as follows.
Proposition 2.11 Let X and Y be topological spaces and f : X → Y a function. Then f is
(i) so-1-continuous iff for each closed set K ⊆ Y , f −1(K) is semi-closed in X,
(ii) so-2-continuous iff for each semi-closed set M ⊆ Y , f −1(M) is semi-closed in X,
(iii) so-3-continuous iff for each semi-closed set M ⊆ Y , f −1(M) is closed in X.
so-1-continuous functions are called semi-continuous and so-2-continuous functions are
called irresolute [4]. In this paper the unit interval [0, 1] will be denoted by I, as a subspace
of reel numbers R with the usual topology.
Remark 2.12 Let X be a topological space. Then it is easy to see that the identity function 1X : X →
X is so-1-continuous and so-2-continuous but not so-3-continuous. Moreover usual composition of
so-2-continuous (resp. so-3-continuous) functions are again so-2-continuous (resp. so-3-continuous).
Thus we obtain the category s-Top of topological spaces with morphisms so-2-continuous (irreso-
lute) functions. On the other hand composition of so-1-continuous functions need not to be so-1-
continuous.
4
#
+
+
3
#
+
3
3 Semi-Homotopy
In this section we will introduce the notions of so-i-homotopy of so-i-continuous func-
tions, so-2-homotopy type, so-i-paths and so-i-homotopy of so-i-paths, and give some prop-
erties. From now on i will symbolize of a fixed element of the set {1, 2, 3} for each item.
Definition 3.1 Let X and Y be two topological spaces and f, g : X → Y be two so-i-continuous
functions. If there exist a function H : X × I → Y such that for all t ∈ I the restrictions of H
Ht
: X −→ Y
x 7−→ Ht(x) = H(x, t)
are so-i-continuous with H0 = f and H1 = g, then we say that f and g are so-i-homotopic. In this
case H is called an so-i-homotopy from f to g and this will be denoted by H : f ≃i g or briefly, by
f ≃i g.
Theorem 3.2 The relation being so-i-homotopic on the set of all so-i-continuous functions between
two topological spaces is an equivalence relation.
Proof: Let X and Y be two topological spaces and f, g, h : X → Y be so-i-continuous
functions.
Reflexivity: If f : X → Y define
H : X × I −→ Y
(x, t)
7−→ H(x, t) = f (x)
for all x ∈ X and all t ∈ I. It is clear that F : f ≃i f .
Symmetry: Assume that H : f ≃i g, so there is a function H : X × I → Y with H(x, 0) =
f (x) and H(x, 1) = g(x) for all x ∈ X. Define
G : X × I −→ Y
(x, t)
7−→ G(x, t) = H(x, 1 − t)
for all x ∈ X and all t ∈ I. Since H is so-i-continuous,
Gt(x) = G(x, t) = H(x, 1 − t)
is so-i-continuous, and G0 = g and G1 = f . Therefore G : g ≃i f .
5
Transitivity: Assume that F : f ≃i g and G : g ≃i h. Define
H(x, t) =
F (x, 2t),
G(x, 2t − 1),
t ∈ [0, 1/2]
t ∈ [1/2, 1].
Therefore H : f ≃i h. Thus ≃i is an equivalence relation.
Let X and Y be two topological spaces and f : X → Y be an so-i-continuous function.
Then the set of all so-i-continuous functions from X to Y which are so-i-homotopic to f is
called the equivalence class (so-i-homotopy class) of f and denoted by [f ]i.
(cid:4)
[f ]i = {g | g : X → Y so-i-continuous, f ≃i g}
Similar to classical theory, using the new homotopy defined above we will introduce the
notion of so-i-homotopy equivalence and so-i-homotopy type just for the case i = 2 since
the composition of so-2-continuous functions is again so-i-continuous.
Definition 3.3 Let X and Y be two topological spaces. An irresolute function f : X → Y is called
a irresolute homotopy equivalence if there exist an irresolute function g : Y → X such that
gf ≃2 1X and f g ≃2 1Y . If there is an irresolute homotopy equivalence between two topological
spaces then we say that these spaces have the same irresolute homotopy type.
Now we will give the definition of so-i-paths which is the special case of so-i-continuous
functions. Further we will give a more stronger version of so-i-homotopy for so-i-paths.
Definition 3.4 Let X be a topological space, α : I → X be an so-i-continuous function and α(0) =
a and α(1) = b . Then α is called an so-i-path from a to b in X. If a = b then α is called an
so-i-loop at a.
Definition 3.5 Let α, β : I → X be two so-i-path in X with α(1) = β(0). Then the function
(α ∗ β)(t) =
α(2t),
t ∈ [0, 1/2]
β(2t − 1),
t ∈ [1/2, 1]
is an so-i-path and is called the composition of so-i-paths α and β in X. α ∗ β will be denoted by
αβ for short.
6
Definition 3.6 Let X be a topological space and α : I → X be an so-i-path in X. Then the function
α : I −→ X
defined by α(t) = α(1 − t) is an so-i-path in X and is called the inverse of α.
Definition 3.7 Let X be a topological space and α, β : I → X be two so-i-paths where α(0) = β(0)
and α(1) = β(1). If there is an so-i-continuous function F : I × I → X such that
(i) for all t ∈ I the restrictions of F
Ft
:
I −→ Y
s 7−→ Ft(s) = F (s, t)
are so-i-continuous and
(ii) F (s, 0) = α(s), F (0, t) = a, F (s, 1) = β(s), and F (1, t) = b
then we say that F is so-i-homotopy of so-i-paths from α to β relative to endpoints and denoted
by F : α ≃i β rel Î. We will denote this by α ≃i β where no confusion arise.
Theorem 3.8 The relation being so-i-homotopic relative to endpoints on the set of all so-i-paths in a
topological space is an equivalence relation.
Proof: This can be proved by a similar way to the proof of Theorem 3.2.
(cid:4)
Definition 3.9 Let X be a topological space and α : I → X an so-i-path in X. Then the set
[α]i = {β | α ≃i β rel Î}
is called equivalence class (so-i-homotopy class) of α.
4 Semi-Fundamental groups
In this section, using the so-i-loops, we will construct a group structure on the set of all
so-i-homotopy classes of so-i-loops at a base point of a topological space. Following lemma
is a very useful tool to construct this group structure.
Lemma 4.1 Let X be a topological space, a, b ∈ X and α be an so-i-path from a to b. If there is an
so-i-continuous function ρ : [0, 1] → [0, 1] such that ρ(0) = 0 and ρ(1) = 1 then αρ ≃i α.
7
Proof: First of all note that αρ is an so-i-path from a to b. Now we define the so-i-
homotopy F : αρ ≃i α as follows:
F :
I × I −→ X
(s, t)
7−→ F (s, t) = α ((1 − t)s + tρ(s))
It is easy to see that F is an so-i-homotopy from αρ to α.
(cid:4)
Proposition 4.2 Let X be a topological space and α, β, α′, β′ : I → X be so-i-paths such that α(0) =
α′(0), α(1) = α′(1) = β(0) = β′(0) and β(1) = β′(1). If α ≃i α′ and β ≃i β′ then αβ ≃i α′β′.
Proof: Let F and G be two so-i-homotopy from α to α′ and from β to β′, respectively.
Then the function H : I × I −→ X defined by
H(s, t) =
F (2s, t),
s ∈ [0, 1/2]
G(2s − 1, t),
s ∈ [1/2, 1]
is so-i-continuous and defines an so-i-homotopy from αβ to α′β′.
(cid:4)
Proposition 4.3 Let X be a topological space and α, β, γ : I → X be three so-i-paths with α(1) =
β(0) and β(1) = γ(0). Then
α(βγ) ≃i (αβ)γ.
Proof: By the Definition 3.5 compositions α(βγ) and (αβ)γ are defined as follows:
and
α(βγ)(t) =
(αβ)γ(t) =
Now let define a function ρ : I → I by
α(2t),
t ∈ [0, 1/2]
β(4t − 2),
t ∈ [1/2, 3/4]
γ(4t − 3),
t ∈ [3/4, 1]
α(4t),
t ∈ [0, 1/4]
β(4t − 1),
t ∈ [1/4, 1/2]
γ(2t − 1),
t ∈ [1/2, 1].
ρ(t) =
t ∈ [0, 1/4]
t ∈ [1/4, 1/2]
t ∈ [1/2, 1].
2t,
t + 1
4,
t+1
2 ,
8
One can see that ρ is an so-i-continuous function and ρ(0) = 0, ρ(1) = 1. Moreover (α(βγ))ρ =
(cid:4)
(αβ)γ. Then by Lemma 4.1 α(βγ) ≃i (αβ)γ.
Proposition 4.4 Let X be a topological space, x, y ∈ X and α : I → X be an so-i-path from x to y.
Then
1xα ≃i α ≃i α1y
where 1x and 1y are the constant maps at x and y, respectively.
Proof: First of all let define a function ρ : I → I by
0,
t ∈ [0, 1/2]
2t − 1,
t ∈ [1/2, 1].
ρ(t) =
This function satisfies the conditions of Lemma 4.1 and 1xα = αρ. Hence 1xα ≃i α. Similarly
by taking ρ as
one can show that α ≃i α1y.
ρ(t) =
2t,
t ∈ [0, 1/2]
1,
t ∈ [1/2, 1]
(cid:4)
Proposition 4.5 Let X be a topological space, x, y ∈ X and α : I → X be an so-i-path in X from x
to y. Then
αα ≃i 1x and αα ≃i 1y.
Proof: Let define a function F : I × I → X for all t ∈ I by
F (s, t) =
α(2s),
α(s),
s ∈ [0, t/2]
s ∈ [t/2, 1 − t/2]
α(2 − 2s),
s ∈ [1 − t/2, 1].
This function defines an so-i-homotopy from 1x to αα. Similarly, one can show that αα ≃i 1y.
(cid:4)
Theorem 4.6 Let X be a topological space and x ∈ X. Then the set
πi
1(X, x) = {[α]i | α : I → X so-i-loop at x}
9
of all so-i-homotopy classes of so-i-loops at x has a group structure with the operation
∗ :
1(X, x) × πi
πi
([α]i, [β]i)
1(X, x) −→ πi
1(X, x)
7−→ [α]i ∗ [β]i = [α ∗ β]i.
Proof: Proposition 4.2 shows that the operation ∗ is well defined. By Proposition 4.3 the
operation is associative. The so-i-homotopy class of constant map 1x at x acts as the identity
element, i.e. for all [α]i ∈ πi
1(X, x)
[1x]i ∗ [α]i = [α]i ∗ [1x]i = [α]i
by Proposition 4.4. Finally according to Proposition 4.5 for all [α]i ∈ πi
[α]i up to the operation ∗ is [α]−1
i = [α]i ∈ πi
1(X, x).
1(X, x) the inverse of
(cid:4)
This group will be called the so-i-fundamental group of X at x. In particular π1
1(X, x)
1(X, x) will be called irresolute fundamental
will be called semi-fundamental group and π2
group.
Proposition 4.7 Let X be a topological space, x, y ∈ X and γ : I → X be an so-i-path from x to y.
Then
1(X, x) ∼= πi
πi
1(X, y).
Proof: The claimed isomorphism is
γ⋆
:
1(X, x) −→ πi
πi
[α]i
7−→ [γ]−1
1(X, y)
i ∗ [α]i ∗ [γ]i.
(cid:4)
Corollary 4.8 In a topological space whose topology is so-i-path-connected, i.e.
elements there exist an so-i-path between them, every so-i-fundamental group is isomorphic.
for each pair of
Proposition 4.9 Let s − Top∗ be the category of pointed topological spaces with morphisms so-2-
continuous (irresolute) functions and Grp be the category of groups with morphisms group homo-
morphisms. Then
π2
1
:
s − Top∗ −→ Grp
7−→ π2
(X, x)
1(X, x)
is a functor.
Corollary 4.10 Let X and Y be two topological spaces. If f : X → Y is a homeomorphism then
1(X, x) ∼= π2
π2
1(Y, f (x)).
10
5 Conclusion
It seems that according to these results one can define a more general notion semi-
fundamental groupoid following the way in [2] and [9]. Further, using the results of the
paper [3] of Császár it could be possible to develop more generic homotopy types and ho-
motopy groups. Hence parallel results of this paper could be obtained for generalized open
sets and for generalized continuity.
References
[1] Bhattacharyya, P. and Lahiri, B.K., Semi-generalized closed sets in topology, lnd. Jr. Math.,
29 (1987), 375–382.
[2] Brown, R., Topology and groupoids, BookSurge LLC, North Carolina, 2006.
[3] Császár, Á., Generalized open sets, Acta Mathematica Hungarica, 75(1), (1997), 65–87.
[4] Crossley, S. and Hildebrand, S.K., Semi-closure, Texas J. Sci. 22 (1971), 99–112.
[5] Crossley, S. and Hildebrand, S.K., Semi-topological properties, Fundamenta Mathemati-
cae 74(3) (1972), 233–254.
[6] Hatcher, A., Algebraic topology, Cambridge University Press, 2002.
[7] Levine, N., Semi-open sets and semi-continuity in topological spaces, Amer. Math. Monthly
70 (1963), 36–41.
[8] Maheshawari, S.M.N. and Prasad, R., Some new separation axioms, Ann. Soco. Sci. Brux-
elles 89 (1975), 395–402.
[9] Rotman, J.J., An introduction to algebraic topology, Springer, 1988.
[10] Scheers, J.M., An exploration of semi-open sets in topological spaces, M.Sc. Thesis, Stephen
F. Austin State University, 2011.
11
|
synthetic_cpt | 5 | Automatic_Document_Selection_for_Efficient_Encoder_Pretraining.pdf | Automatic Document Selection for Efficient Encoder Pretraining
Yukun Feng1 Patrick Xia1 Benjamin Van Durme1
João Sedoc2
1Johns Hopkins University
2New York University
{yfeng55, paxia, vandurme}@jhu.edu, [email protected]
2
2
0
2
t
c
O
6
2
]
L
C
.
s
c
[
2
v
1
5
9
0
1
.
0
1
2
2
:
v
i
X
r
a
Abstract
Building pretrained language models is con-
sidered expensive and data-intensive, but must
we increase dataset size to achieve better
performance? We propose an alternative to
larger training sets by automatically identify-
ing smaller yet domain-representative subsets.
We extend Cynical Data Selection, a statistical
sentence scoring method that conditions on a
representative target domain corpus. As an ex-
ample, we treat the OntoNotes corpus as a tar-
get domain and pretrain a RoBERTa-like en-
coder from a cynically selected subset of the
Pile. On both perplexity and across several
downstream tasks in the target domain, it con-
sistently outperforms random selection with
20x less data, 3x fewer training iterations, and
2x less estimated cloud compute cost, validat-
ing the recipe of automatic document selection
for LM pretraining.
1
Introduction
Large pretrained language models have achieved
state-of-the-art performance in NLP tasks (Devlin
et al., 2019; Liu et al., 2019, i.a.). These studies
find that increasing pretraining data size usually
leads to better task performance. For many tasks,
additional task (in-domain) data helps improve the
performance further (Gururangan et al., 2020; Dery
et al., 2021; Li et al., 2022). Several studies have
found that directly pretraining on task data is more
effective : science texts (Beltagy et al., 2019),
tweets (Nguyen et al., 2020), legal texts (Chalkidis
et al., 2020) or code (Tabassum et al., 2020; Chen
et al., 2021). Notably, these domains are known
a priori, and identifying data sources for curation
is straightforward. In other instances where the
domain is less clear, like “offensive online content”
(Bai et al., 2021), more complicated data sampling
is employed to guess at the desired data distribution
suitable for training a downstream classifier.
To address such scenarios, we propose automat-
ically identifying relevant domain-specific train-
Figure 1: This figure highlights the efficiency of the au-
tomatic cynical selection of documents in the target do-
main. Scores are averaged from 8 Edge Probing tasks.
Cynically selected 2.5GB data achieves the best score.
ing data for a large corpus and subsequently pre-
training a model on the selected data. Specifi-
cally, we use Cynical Data Selection (Axelrod,
2017), an approach that advanced Moore-Lewis
sampling (Moore and Lewis, 2010), to select data
from the Pile dataset (Gao et al., 2021). This auto-
matic selection method can include possibly over-
looked yet relevant documents from domains that
may not be too close to the target domain. Figure 1
illustrates this method which achieves higher per-
formance on tasks in the target domain by using
only 2.5GB (0.5%) of cynically selected data.
Specifically, we experiment with pretraining en-
coders with varying amounts of data sampled from
the Pile.1 With our “target corpus” of OntoNotes
(Weischedel et al., 2013), we compare language
models trained with cynical and random selection
at various data levels. We find that the cynically
selected encoder achieves consistently lower target
corpus perplexity than one trained with random
selection. We further finetune the encoders on a
suite of tasks, some of which are derived from
OntoNotes. Again, we find that models pretrained
with cynical selection perform best. We suggest
this as a viable method for inexpensively pretrain-
ing effective domain-specific encoders.
1The Pile consists of 800GB raw text but for this paper,
we refer to its “effective” size, which is 1250GB.
87.4586.2585.40ScorePile ~ 1250GBRandom ~ 60GBManual ~ 30GBCynical ~ 2.5 GB
2 Cynical Data Selection
Methods for data selection for language-related
tasks have been widely studied, usually to select
in-domain data (Axelrod et al., 2011; van der Wees
et al., 2017; Dai et al., 2020; Killamsetty et al.,
2020). One such method is Cynical Data Selection
(Axelrod, 2017). The intuition behind cynical se-
lection is greedily ranking sentences from the text
corpus, based on its score computed against text
representative of the target domain, which is based
on how much information gained by selecting it.
Concretely, given representative text from the
target domain, cynical selection uses the cross-
entropy of the selected text against the representa-
tive text and calculates the information gain of each
sentence in the general corpus. It then picks the
most useful sentence relative to what has already
been selected and its similarity to the representative
text. This also leads to a bias towards shorter sen-
tences and preferring sentences that contain words
with high probability in the representative text.
Our work extends the cynical selection to the
document level selection. Sentences are still scored
at the sentence level, but the average sentence-level
gain determines the information gain of a docu-
ment.2 We demonstrate its advantages in efficiently
selecting related documents to the target domain.
3 Experiments and Results
In this work, we set OntoNotes 5.0 (Weischedel
et al., 2013) as our target corpus, and we use a
smaller sample from the training corpus of the
CoNLL 2012 Shared Task (Pradhan et al., 2012)
as the representative corpus for data selection. We
first train an encoder based on the selected data and
use the Edge Probing suite (Tenney et al., 2019b)
for the downstream task evaluation, which has pre-
viously been used to probe and evaluate language
models (Clark et al., 2019; Tenney et al., 2019a;
Jiang et al., 2020; Zhang et al., 2021).
3.1 Data Selection
Dataset We adopt the Pile (Gao et al., 2021) for
data selection, which consists of 1250GB text from
22 domains. Cynical selection naturally prefers
text data based on the target corpus. To make a
more fair comparison, we exclude 100GB data
from “DM Mathematics” and “Github” to eliminate
the noise of non-text data in random selection.
Figure 2: Validation perplexity on held-out set (left),
and OntoNotes (right) at 100k training steps.
Selection Strategy Encoder pretraining is natu-
rally a document-level task, as context contributes
critically to improved representations. Thus, we
need to extend the sentence selection into the doc-
ument selection to achieve a better-contextualized
representation at the pretraining stage.3 We apply
our extended document-level cynical selection to
the Pile and extract the top {0.5%, 1%, 2%, 5%}
scored documents.4 We also randomly sample the
same percentage of documents from Pile to use as
a corresponding baseline. As a baseline for manual
selection, we use 30GB text from "Wikipedia" and
"BookCorpus" subsets, following Liu et al. (2019).
3.2 Encoder Pretraining
We set up a BERT-base model and follow the
pretraining objective and settings described in
RoBERTa(Liu et al., 2019).5 In Figure 2, we plot
the validation perplexity on both the representative
corpus (CoNLL 2012 Shared Task) and a held-out
set of the Pile. The perplexity on the held-out set
decreases when there is more training data for both
the cynical and random selection. Cynical selection
attains a higher perplexity, which shows that while
the selected documents are more adapted to the
target domain, it is not better adapted to the general
corpus. As each encoder needs different training
steps for different corpus sizes, we try to make a
fair comparison by assuming a fixed training bud-
get of 100k update steps. In Figure 2, we find that
at 100k steps, 2% of the cynically selected data
achieves the lowest perplexity, and more training
data does not help the adaptation to the target cor-
pus. Also, cynical selected documents consistently
outperforms the random selection, demonstrating
the effectiveness of adapting to the target domain.
3We unsurprisingly find that selection at the document-
level works better than at the sentence-level (Appendix A).
4Our code repository is publicly available at https://
github.com/jsedoc/DL-CynDS.
2A formal explanation of Cynical selection and its exten-
5We adopt the training scripts from FairSeq for encoder
sion is in Appendix B.
pretraining, https://github.com/facebookresearch/fairseq.
1%2%5%3.253.503.754.004.25PPL on Devppl-cynicalppl-random1%2%5%5.05.56.06.5PPL on OntoNotesPercentage of Training Data (%)Figure 3: Evaluation on 8 Edge Probing tasks (Tenney et al., 2019b). The cynical selection consistently outper-
forms both the random and manual selection in most cases, even with only 0.5% selected documents.
3.3 Edge Probing Evaluation
We evaluate the effectiveness of the pretrained en-
coders on 8 Edge Probing tasks (Tenney et al.,
2019b),6 for which the metric and architecture are
uniformed to evaluate the span-level contextual rep-
resentation of the language model, and it has been
widely studied in the past few years. Results are
plotted in Figure 3. We find:
Observation 1: Models trained on cynically
selected documents show consistent performance
gain on all tasks compared to the random selection.
Observation 2: In most tasks, even using only
0.5% (2.5GB) of cynically selected documents out-
performs the manually selected baseline (30GB).
Observation 3: Compared to random sampling,
the performance gain of the cynical selected doc-
uments is larger with only 0.5% to 1% of training
data, and decreases for larger training sets as ran-
dom selection catches up.
Observation 4: For some tasks, especially
"const" and "pos," which are two tasks exactly
based on the OntoNotes dataset, cynical selected
documents yield good task performance with only
0.5% data, and the scores decrease when increasing
the selection size to 2%, but increase again with
5%. This could suggest that in cynical selection,
the top-scored documents are strongly related and
helpful to the target task domain, while the others
may not contribute as much or even hurt. However,
more data ultimately does improve performance.
Overall, we could achieve promising results with
only 0.5% documents of the entire corpus, demon-
strating the effectiveness and efficiency of cynical
6We adopt the jiant for edge probing data processing and
finetuning, https://github.com/nyu-mll/jiant.
Figure 4: Data distribution over the Pile domains
selection in the adaptation to downstream tasks in
the target domain. We also notice the standard de-
viation of the runs for random selection is much
larger than cynical selection, indicating more stable
encoder results from cynically selected documents.
3.4 Discussion
Data Distribution We plot the domain distribu-
tion of the selected documents in Figure 4. While
random selection follows the distribution of the
original Pile dataset, cynical selection prefers news-
like articles such as the "Pile CC" and "OpenWeb-
Text2," rather than technical ones, like StackEx-
change. Also, since we consider the same number
of selected documents for each split, the actual se-
lected data size is not the same (Figure 5). We
notice that cynical selection prefers shorter docu-
ments, especially in the top-ranked samples. This
should be related to our scoring strategy since we
average the sentence scores as the final document
score. In the case for long documents, even though
there are sentences with higher scores, it is not
very likely to be selected since the final scores are
averaged by the total number of sentences. This
1%2%5%76788082Scoreconst1%2%5%97.297.497.697.8pos1%2%5%939495ner1%2%5%87888990coref1%2%5%7080Scoreud1%2%5%777879spr21%2%5%6070rel1%2%5%87888990srlcynicalrandommanualPile-CCWebTextStackExgPubMedWikipediaOthers010203040Percentage of Selection (%)Cynical SelectionRandom SelectionFigure 5: For each percentage of cynically and ran-
domly selected documents, we show the actual data
size (GB) and corresponding document length.
Figure 6: This figure shows the training loss for the
runs of 1% and 2% cynically selected subsets.
explains why the cynical selection prefers shorter
documents in the 0.5% and 1% selection but not
in the 5% selection. Therefore, when we bring
the actual selected data sizes into the comparison,
the cynical selection is much more efficient than
the random sampling. Future work can investigate
other methods of aggregating sentence-level scores.
Computational Trade-off Cynical selection en-
ables the language models to use less training data
and GPU time while achieving competitive results.
However, the data selection needs to be done be-
fore the training and pre-processing could be costly.
Cynical selection on the Pile can be parallelized
via sharding, because the specific order/ranking of
a document in the final selected subset is not impor-
tant. The intuition is that any good document will
be chosen early, regardless of which shard it is in.
So, we split the automatic document selection of
the Pile into 10,000 smaller jobs, each requiring a
single core CPU7 and 10GB of RAM and taking 2
hours to finish. In general, the cost of the selection
depends on the size of the general corpus that is be-
ing selected from. In our training environment with
8 RTX6000 GPUs, it takes 800+ GPU hours in total
to train an encoder with 60GB randomly selected
documents. To achieve comparable or even better
performance with cynical selected documents, we
only need 200 GPU hours for the 2.5GB of cyni-
cally selected data to converge. The market price
for a single RTX6000 is $1.50/hour, so we need
$1200+ to train with random selection but less than
$300 for cynical selection. On the Google Cloud
Platform, 20,000 hours on comparable or faster
CPUs can be obtained with $200. Overall, cynical
selected documents saves more than 50% of the
computational cost and achieves better task scores.
Overfitting Large language models have the abil-
ity to overfit or memorize small datasets (Kaplan
et al., 2020; Carlini et al., 2022). We inspect
the loss curves for two of the cynical selections
(1% and 2%) in Figure 6. While the 1% encoder
achieves a lower loss for most parts of the train-
ing, it is eventually surpassed by the 2% model.
This highlights a tradeoff between computing cost
and performance; given a limited compute budget
(in this example, under 50K steps), it is better to
use a smaller selection. While prior work suggests
scaling up models to fit dataset size (Kaplan et al.,
2020), we are successful in scaling down dataset
sizes so that they can be efficiently fit (and outper-
form larger datasets) in fewer steps.
4 Related Work
Due to the huge computational cost of training
large models, both researchers and engineers have
sought alternatives to using data more efficiently.
Some prior works use statistical methods to select
relevant data from a large corpus (Rousseau, 2013;
Kirchhoff and Bilmes, 2014; Eetemadi et al., 2015;
Xu and Koehn, 2017). Some other studies intro-
duce additional classifiers or language models to
help the data selection (Ruder and Plank, 2017; Qu
et al., 2019; Sun et al., 2021). Also, data selec-
tion could be more efficiently involved in the ac-
tive learning approaches (Shen et al., 2004; Lowell
et al., 2018; Erdmann et al., 2019; Shelmanov et al.,
2019; Margatina et al., 2022; Tsvigun et al., 2022).
This work applies a simple statistical method to
find the most related text to a target domain. It
incrementally constructs a dataset out of a large
corpus for the goal of training language models.
5 Conclusion
7Intel Xeon E5-2620 v3, a chip from 2014.
This work builds the connection from corpus subs-
election in statistical LM construction to neural
0.51.02.05.0Percentage of Selected Documents0204060Data Size in GB2.5512385112250cynicalrandom0200400600800Document Length203040506070Training Steps (K)2.02.22.42.62.8Training Loss1%2%LMs. We extend cynical data selection to effi-
ciently select task-related documents for encoder
pretraining and achieve lower perplexity in the tar-
get domain. We also demonstrate its effectiveness
on downstream tasks by achieving comparable or
even better results with 20x less data, 3x fewer
training iterations, and 2x less computational cost
on 8 Edge Probing tasks. We believe this fills the
gap in the literature on an important topic in train-
ing powerful LMs. We purposefully keep this work
in the space of methods used in the days of Stat
NLP to highlight their out-of-the-box applicability,
for which that line of research is still salient. Based
on our findings, this line is resurrected, suggesting
new novel approaches should be studied. We antic-
ipate that with this connection, researchers could
explore this topic, investigate various subselection
methods, and extend it to other domains.
Acknowledgements
We thank all reviewers for their valuable feed-
back. We also appreciate the helpful suggestions
from Marc Marone, Amittai Axelrod, and Alex
Warstadt. This work is supported by IARPA BET-
TER (#2019-19051600005). The findings con-
tained in this work are those of the authors and
should not be interpreted as necessarily represent-
ing the official policies, either expressed or implied,
or endorsements of IARPA or the U.S. Government.
The U.S. Government is authorized to reproduce
and distribute reprints for governmental purposes
notwithstanding any copyright annotation therein.
Limitations
Since pretraining encoders is expensive, our study
only experiments on one source corpus (Pile) and
one target task domain (OntoNotes). However, this
method could be demonstrated more effectively on
other datasets that are more domain-specific. We
do not run multiple random selections with dif-
ferent seeds due to the time and cost of training
large models. We think the standard error for the
randomly selected data would be significant, espe-
cially for the subset of only 0.5% or 1% documents.
Also, we recognize that training our models longer
or scaling up the model size is an “easy” method of
improving performance (Liu et al., 2019; Kaplan
et al., 2020). Our results assume a fixed training
budget (max 100k steps). Thus with a larger budget,
the trade-offs will vary. Another concern is that we
do not experiment with other subselection meth-
ods (Gururangan et al., 2019) or other languages,
but we believe they should have similar trends.
References
Amittai Axelrod. 2017. Cynical selection of language
model training data. arXiv.
Amittai Axelrod, Xiaodong He, and Jianfeng Gao.
2011. Domain adaptation via pseudo in-domain data
selection. In Proceedings of the 2011 Conference on
Empirical Methods in Natural Language Processing,
pages 355–362, Edinburgh, Scotland, UK. Associa-
tion for Computational Linguistics.
Fan Bai, Alan Ritter, and Wei Xu. 2021. Pre-train
or annotate? domain adaptation with a constrained
budget. In Proceedings of the 2021 Conference on
Empirical Methods in Natural Language Processing,
pages 5002–5015, Online and Punta Cana, Domini-
can Republic. Association for Computational Lin-
guistics.
Iz Beltagy, Kyle Lo, and Arman Cohan. 2019. SciB-
ERT: A pretrained language model for scientific text.
In Proceedings of the 2019 Conference on Empirical
Methods in Natural Language Processing and the
9th International Joint Conference on Natural Lan-
guage Processing (EMNLP-IJCNLP), pages 3615–
3620, Hong Kong, China. Association for Computa-
tional Linguistics.
Nicholas Carlini, Daphne Ippolito, Matthew Jagielski,
Katherine Lee, Florian Tramèr, and Chiyuan Zhang.
2022. Quantifying memorization across neural lan-
guage models. ArXiv, abs/2202.07646.
Ilias Chalkidis, Manos Fergadiotis, Prodromos Malaka-
siotis, Nikolaos Aletras, and Ion Androutsopoulos.
2020. LEGAL-BERT: The muppets straight out of
law school. In Findings of the Association for Com-
putational Linguistics: EMNLP 2020, pages 2898–
2904, Online. Association for Computational Lin-
guistics.
Mark Chen, Jerry Tworek, Heewoo Jun, Qiming
Yuan, Henrique Ponde de Oliveira Pinto, Jared Ka-
plan, Harri Edwards, Yuri Burda, Nicholas Joseph,
Greg Brockman, Alex Ray, Raul Puri, Gretchen
Krueger, Michael Petrov, Heidy Khlaaf, Girish Sas-
try, Pamela Mishkin, Brooke Chan, Scott Gray,
Nick Ryder, Mikhail Pavlov, Alethea Power, Lukasz
Kaiser, Mohammad Bavarian, Clemens Winter,
Philippe Tillet, Felipe Petroski Such, Dave Cum-
mings, Matthias Plappert, Fotios Chantzis, Eliza-
beth Barnes, Ariel Herbert-Voss, William Hebgen
Guss, Alex Nichol, Alex Paino, Nikolas Tezak, Jie
Tang, Igor Babuschkin, Suchir Balaji, Shantanu Jain,
William Saunders, Christopher Hesse, Andrew N.
Carr, Jan Leike, Josh Achiam, Vedant Misra, Evan
Morikawa, Alec Radford, Matthew Knight, Miles
Brundage, Mira Murati, Katie Mayer, Peter Welin-
der, Bob McGrew, Dario Amodei, Sam McCandlish,
Ilya Sutskever, and Wojciech Zaremba. 2021. Eval-
uating large language models trained on code.
Kevin Clark, Urvashi Khandelwal, Omer Levy, and
Christopher D. Manning. 2019. What does bert look
at? an analysis of bert’s attention.
Xiang Dai, Sarvnaz Karimi, Ben Hachey, and Cecile
Paris. 2020. Cost-effective selection of pretraining
data: A case study of pretraining bert on social me-
dia.
Lucio M. Dery, Paul Michel, Ameet Talwalkar, and
Graham Neubig. 2021. Should we be pre-training?
an argument for end-task aware training as an alter-
native. CoRR, abs/2109.07437.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and
Kristina Toutanova. 2019. BERT: Pre-training of
deep bidirectional transformers for language under-
In Proceedings of the 2019 Conference
standing.
of the North American Chapter of the Association
for Computational Linguistics: Human Language
Technologies, Volume 1 (Long and Short Papers),
pages 4171–4186, Minneapolis, Minnesota. Associ-
ation for Computational Linguistics.
Sauleh Eetemadi, William Lewis, Kristina Toutanova,
and Hayder Radha. 2015. Survey of data-selection
methods in statistical machine translation. Machine
Translation, 29.
Alexander Erdmann, David Joseph Wrisley, Benjamin
Allen, Christopher Brown, Sophie Cohen-Bodénès,
Micha Elsner, Yukun Feng, Brian Joseph, Béatrice
Joyeux-Prunel, and Marie-Catherine de Marneffe.
2019. Practical, efficient, and customizable active
learning for named entity recognition in the digital
humanities. In Proceedings of the 2019 Conference
of the North American Chapter of the Association
for Computational Linguistics: Human Language
Technologies, Volume 1 (Long and Short Papers),
pages 2223–2234, Minneapolis, Minnesota. Associ-
ation for Computational Linguistics.
Leo Gao, Stella Biderman, Sid Black, Laurence Gold-
ing, Travis Hoppe, Charles Foster, Jason Phang,
Horace He, Anish Thite, Noa Nabeshima, Shawn
Presser, and Connor Leahy. 2021. The pile: An
800gb dataset of diverse text for language modeling.
CoRR, abs/2101.00027.
Suchin Gururangan, Tam Dang, Dallas Card, and
Noah A. Smith. 2019. Variational pretraining for
semi-supervised text classification. In Proceedings
of the 57th Annual Meeting of the Association for
Computational Linguistics, pages 5880–5894, Flo-
rence, Italy. Association for Computational Linguis-
tics.
Suchin Gururangan, Ana Marasovi´c,
Swabha
Swayamdipta, Kyle Lo, Iz Beltagy, Doug Downey,
and Noah A. Smith. 2020. Don’t stop pretraining:
In
Adapt language models to domains and tasks.
the
the 58th Annual Meeting of
Proceedings of
Association for Computational Linguistics, pages
8342–8360, Online. Association for Computational
Linguistics.
Iris Hendrickx, Su Nam Kim, Zornitsa Kozareva,
Preslav Nakov, Diarmuid Ó Séaghdha, Sebastian
Padó, Marco Pennacchiotti, Lorenza Romano, and
Stan Szpakowicz. 2010.
SemEval-2010 task 8:
Multi-way classification of semantic relations be-
In Proceedings of the
tween pairs of nominals.
5th International Workshop on Semantic Evalua-
tion, pages 33–38, Uppsala, Sweden. Association
for Computational Linguistics.
Zhengbao Jiang, Frank F. Xu, Jun Araki, and Graham
Neubig. 2020. How Can We Know What Language
Models Know? Transactions of the Association for
Computational Linguistics, 8:423–438.
Jared Kaplan, Sam McCandlish, Tom Henighan,
Tom B. Brown, Benjamin Chess, Rewon Child,
Scott Gray, Alec Radford, Jeffrey Wu, and Dario
Amodei. 2020. Scaling laws for neural language
models.
KrishnaTeja Killamsetty, Durga Sivasubramanian,
Ganesh Ramakrishnan, and Rishabh K. Iyer. 2020.
GLISTER: generalization based data subset selec-
CoRR,
tion for efficient and robust
abs/2012.10630.
learning.
Katrin Kirchhoff and Jeff Bilmes. 2014. Submodu-
larity for data selection in machine translation.
In
Proceedings of the 2014 Conference on Empirical
Methods in Natural Language Processing (EMNLP),
pages 131–141, Doha, Qatar. Association for Com-
putational Linguistics.
Belinda Li, Jane Yu, Madian Khabsa, Luke Zettle-
moyer, Alon Halevy, and Jacob Andreas. 2022.
Quantifying adaptability in pre-trained language
models with 500 tasks. In Proceedings of the 2022
Conference of the North American Chapter of the
Association for Computational Linguistics: Human
Language Technologies, pages 4696–4715, Seattle,
United States. Association for Computational Lin-
guistics.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Man-
dar Joshi, Danqi Chen, Omer Levy, Mike Lewis,
Luke Zettlemoyer, and Veselin Stoyanov. 2019.
Roberta: A robustly optimized bert pretraining ap-
proach. ArXiv, abs/1907.11692.
David Lowell, Zachary Chase Lipton, and Byron C.
Wallace. 2018. How transferable are the datasets col-
lected by active learners? ArXiv, abs/1807.04801.
Katerina Margatina, Loic Barrault, and Nikolaos Ale-
tras. 2022. On the importance of effectively adapt-
ing pretrained language models for active learning.
In Proceedings of the 60th Annual Meeting of the
Association for Computational Linguistics (Volume
2: Short Papers), pages 825–836, Dublin, Ireland.
Association for Computational Linguistics.
Robert C. Moore and William Lewis. 2010. Intelligent
In Pro-
selection of language model training data.
ceedings of the ACL 2010 Conference Short Papers,
pages 220–224, Uppsala, Sweden. Association for
Computational Linguistics.
Dat Quoc Nguyen, Thanh Vu, and Anh Tuan Nguyen. 2020. BERTweet: A pre-trained language model for English tweets. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 9–14, Online. Association for Computational Linguistics.
Sameer Pradhan, Alessandro Moschitti, Nianwen Xue,
Olga Uryupina, and Yuchen Zhang. 2012. CoNLL-
2012 shared task: Modeling multilingual unre-
stricted coreference in OntoNotes. In Joint Confer-
ence on EMNLP and CoNLL - Shared Task, pages
1–40, Jeju Island, Korea. Association for Computa-
tional Linguistics.
Chen Qu, Feng Ji, Minghui Qiu, Liu Yang, Zhiyu Min, Haiqing Chen, Jun Huang, and W. Bruce Croft. 2019. Learning to selectively transfer: Reinforced transfer learning for deep text matching. In Proceedings of the Twelfth ACM International Conference on Web Search and Data Mining, WSDM '19, page 699–707, New York, NY, USA. Association for Computing Machinery.
Natalia Silveira, Timothy Dozat, Marie-Catherine de Marneffe, Samuel Bowman, Miriam Connor, John Bauer, and Chris Manning. 2014. A gold standard dependency corpus for English. In Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14), pages 2897–2904, Reykjavik, Iceland. European Language Resources Association (ELRA).
Ming Sun, Haoxuan Dou, Baopu Li, Junjie Yan, Wanli
Ouyang, and Lei Cui. 2021. Autosampling: Search
In Proceed-
for effective data sampling schedules.
ings of the 38th International Conference on Ma-
chine Learning, volume 139 of Proceedings of Ma-
chine Learning Research, pages 9923–9933. PMLR.
Jeniya Tabassum, Mounica Maddela, Wei Xu, and Alan
Ritter. 2020. Code and named entity recognition in
StackOverflow. In Proceedings of the 58th Annual
Meeting of the Association for Computational Lin-
guistics, pages 4913–4926, Online. Association for
Computational Linguistics.
Ian Tenney, Dipanjan Das, and Ellie Pavlick. 2019a.
Bert rediscovers the classical nlp pipeline.
Ian Tenney, Patrick Xia, Berlin Chen, Alex Wang, Adam Poliak, R Thomas McCoy, Najoung Kim, Benjamin Van Durme, Sam Bowman, Dipanjan Das, and Ellie Pavlick. 2019b. What do you learn from context? Probing for sentence structure in contextualized word representations. In International Conference on Learning Representations.
Anthony Rousseau. 2013. Xenc: An open-source tool
for data selection in natural language processing.
The Prague Bulletin of Mathematical Linguistics,
(100):73–82.
Akim Tsvigun, Artem Shelmanov, Gleb Kuzmin,
Leonid Sanochkin, Daniil Larionov, Gleb Gusev,
Manvel Avetisian, and Leonid Zhukov. 2022. To-
wards computationally feasible deep active learning.
Sebastian Ruder and Barbara Plank. 2017. Learning to
select data for transfer learning with bayesian opti-
mization.
Rachel Rudinger, Adam Teichert, Ryan Culkin, Sheng Zhang, and Benjamin Van Durme. 2018. Neural-Davidsonian semantic proto-role labeling. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 944–955, Brussels, Belgium. Association for Computational Linguistics.
Artem Shelmanov, Vadim Liventsev, Danil Kireev,
Nikita Khromov, Alexander Panchenko, Irina Fed-
ulova, and Dmitry V. Dylov. 2019. Active learn-
ing with deep pre-trained models for sequence tag-
In 2019
ging of clinical and biomedical texts.
IEEE International Conference on Bioinformatics
and Biomedicine (BIBM), pages 482–489.
Dan Shen, Jie Zhang, Jian Su, Guodong Zhou, and
Chew-Lim Tan. 2004. Multi-criteria-based active
learning for named entity recognition. In Proceed-
ings of the 42nd Annual Meeting of the Association
for Computational Linguistics (ACL-04), pages 589–
596, Barcelona, Spain.
Marlies van der Wees, Arianna Bisazza, and Christof Monz. 2017. Dynamic data selection for neural machine translation. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1400–1410, Copenhagen, Denmark. Association for Computational Linguistics.
Ralph Weischedel, Martha Palmer, Mitchell Marcus,
Eduard Hovy, Sameer Pradhan, Lance Ramshaw, Ni-
anwen Xue, Ann Taylor, Jeff Kaufman, Michelle
Franchini, Mohammed El-Bachouti, Robert Belvin,
and Ann Houston. 2013. OntoNotes Release 5.0.
Hainan Xu and Philipp Koehn. 2017. Zipporah: a
fast and scalable data cleaning system for noisy web-
crawled parallel corpora. In Proceedings of the 2017
Conference on Empirical Methods in Natural Lan-
guage Processing, pages 2945–2950, Copenhagen,
Denmark. Association for Computational Linguis-
tics.
Yian Zhang, Alex Warstadt, Xiaocheng Li, and
Samuel R. Bowman. 2021. When do you need bil-
In Proceed-
lions of words of pretraining data?
ings of the 59th Annual Meeting of the Association
for Computational Linguistics and the 11th Interna-
tional Joint Conference on Natural Language Pro-
cessing (Volume 1: Long Papers), pages 1112–1125,
Online. Association for Computational Linguistics.
A Appendix

A.1 Detailed Distribution

A detailed data distribution is shown in Table 2.

B Formalization of Cynical Data Selection

The aim of CynDS is to incrementally construct W by scoring each sentence by the information gained relative to the already selected data (Equation 1). Given a REPresentative corpus from the target domain, CynDS is an effective and efficient method to identify the most relevant subset of sentences from a large corpus. Formally, we can define a cross-entropy between REP and some set of tokens as

$$H(REP) = -\sum_{v \in V_{REP}} \frac{C_{REP}(v)}{W_{REP}} \log \frac{C(v)}{|W|},$$

where W is the set of tokens, V is the vocabulary, and C indicates the count of word type v. C_REP(v) is the count within REP and C(v) is the count within W.

Let W_1, ..., W_n be the incrementally selected corpus. We can define the cross-entropy after selecting n sentences as

$$H_n(REP) = -\sum_{v \in V_{REP}} \frac{C_{REP}(v)}{W_{REP}} \log \frac{C_n(v)}{W_n}$$

and minimize H_n. This can be rewritten recursively as

$$H_{n+1} = H_n + \min_{s} \Delta H_{n \to n+1}(s),$$

where ΔH_{n→n+1}(s) is the delta (effect) of a given sentence s. To find the (n+1)-th sentence that minimizes ΔH_{n→n+1}, we can rewrite it as

$$\Delta H_{n \to n+1} = Penalty_{n \to n+1} + Gain_{n \to n+1}. \quad (1)$$

Here, the penalty refers to how similar the sentence is to the already selected text, and the gain refers to how similar the sentence is to the representative corpus. Axelrod (2017) derives the Penalty and Gain as

$$Penalty_{n \to n+1} = \log \frac{|W_n + w_{n+1}|}{|W_n|},$$
$$Gain_{n \to n+1} = \sum_{v \in V_{REP}} \frac{C_{REP}(v)}{W_{REP}} \log \frac{C_n(v)}{C_n(v) + c_{n+1}(v)}.$$

A proof of this derivation is given in Axelrod (2017).

In our work, we still let W_1, ..., W_n represent the first n sentences, and H(REP) is unchanged. However, we use the scores ΔH_{n→n+1}(s) of each sentence and compute document-level scores for each document,

$$Score(D) = \frac{1}{|D|} \sum_{s \in D} \Delta H_{n \to n+1}(s).$$

These document-level scores can then be ranked, and we select the top k% of the documents. Note that while there are many alternatives for selecting documents, our goal is to pick one method and evaluate whether automatic data selection is effective for LM pretraining, rather than to compare different methods, which can be future work.

B.1 Sentence vs Document Selection

Results are shown below in Table 1.

Data            ppl on OntoNotes
Cynical Sent    102.21
Cynical Doc       4.98
Random Doc        8.77

Table 1: Each subset consists of 15GB text.

B.2 Edge Probing tasks

The tasks are constituent labeling, part-of-speech tagging (POS), named entity labeling (NER), coreference labeling (coref), semantic role labeling (SRL), dependency labeling (Silveira et al., 2014), semantic proto-role labeling (SPR2) (Rudinger et al., 2018), and relation classification (Hendrickx et al., 2010). The first 5 tasks listed are derived from OntoNotes (Weischedel et al., 2013).
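To make the document-level variant of cynical selection in Appendix B more concrete, the following is a minimal illustrative sketch, not the authors' implementation: it scores every sentence against a fixed selected state rather than performing the full incremental update of CynDS, and the tokenization, smoothing, and seeding of the initial corpus are simplifying assumptions.

```python
import math
from collections import Counter

def cynical_document_selection(documents, rep_corpus, top_percent=0.05):
    """Simplified sketch of document-level cynical data selection.

    documents:  list of documents, each a list of sentences (each a list of tokens)
    rep_corpus: list of tokens from the REPresentative target-domain corpus
    Returns the top `top_percent` documents ranked by the mean per-sentence delta.
    """
    rep_counts = Counter(rep_corpus)
    w_rep = sum(rep_counts.values())

    # State of the already-selected corpus W_n (kept fixed here for brevity;
    # the real algorithm updates it as sentences are selected).
    selected_counts = Counter()
    selected_size = 1  # avoids log(0) at the start; a real implementation seeds W_0 properly

    def delta_h(sentence):
        # Penalty: relative growth of the selected corpus when adding this sentence.
        penalty = math.log((selected_size + len(sentence)) / selected_size)
        # Gain: improvement in coverage of REP's vocabulary (terms with no new counts are 0).
        sent_counts = Counter(sentence)
        gain = 0.0
        for v, c_new in sent_counts.items():
            if v in rep_counts:
                c_n = selected_counts[v]
                gain += (rep_counts[v] / w_rep) * math.log((c_n + 1e-9) / (c_n + c_new + 1e-9))
        return penalty + gain

    doc_scores = [
        sum(delta_h(s) for s in doc) / max(len(doc), 1)  # Score(D)
        for doc in documents
    ]
    k = max(1, int(len(documents) * top_percent))
    ranked = sorted(range(len(documents)), key=lambda i: doc_scores[i])  # lower delta = better
    return [documents[i] for i in ranked[:k]]
```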
Domain
Pile-CC
OpenWebText2
StackExchange
PubMed Abstracts
Wikipedia (en)
USPTO Backgrounds
PubMed Central
FreeLaw
ArXiv
NIH ExPorter
HackerNews
Enron Emails
OpenSubtitles
YoutubeSubtitles
Books3
EuroParl
Gutenberg (PG-19)
PhilPapers
BookCorpus2
Ubuntu IRC
Random Cynical-0.5% Cynical-1% Cynical-2% Cynical-5%
42.35%
27.44%
32.20%
16.95%
3.56%
15.51%
5.58%
15.40%
11.65%
8.90%
2.26%
5.84%
0.24%
2.98%
0.51%
2.66%
0.06%
1.25%
0.39%
0.94%
0.55%
0.82%
0.48%
0.49%
0.02%
0.33%
0.13%
0.17%
0.004%
0.15%
0.01%
0.07%
0.002%
0.04%
0.003%
0.03%
0.001%
0.01%
0.004%
0.01%
42.06%
32.53%
3.65%
5.51%
12.03%
2.00%
0.19%
0.38%
0.05%
0.39%
0.54%
0.51%
0.009%
0.13%
0.002%
0.01%
0.001%
0.002%
0.0005%
0.006%
43.30%
31.35%
3.39%
4.79%
11.09%
2.55%
0.53%
1.12%
0.12%
0.36%
0.68%
0.43%
0.05%
0.15%
0.015%
0.024%
0.008%
0.013%
0.005%
0.003%
43.03%
31.79%
3.36%
5.17%
11.24%
2.47%
0.38%
0.81%
0.08%
0.37%
0.60%
0.46%
0.03%
0.14%
0.009%
0.02%
0.005%
0.008%
0.003%
0.004%
Table 2: Detailed Domain Distribution for the selection under different sizes.
| synthetic_cpt | 3 | Enhancing_Tool_Retrieval_with_Iterative_Feedback_from_Large_Language_Models.pdf |
Large Language Models
Qiancheng Xu, Yongqi Li†, Heming Xia, Wenjie Li
Department of Computing, The Hong Kong Polytechnic University, China
{qiancheng.xu, he-ming.xia}@connect.polyu.hk
[email protected]
[email protected]
arXiv:2406.17465v2 [cs.CL] 29 Sep 2024
Abstract
Tool learning aims to enhance and expand large
language models’ (LLMs) capabilities with ex-
ternal tools, which has gained significant atten-
tion recently. Current methods have shown that
LLMs can effectively handle a certain amount
of tools through in-context learning or fine-
tuning. However, in real-world scenarios, the
number of tools is typically extensive and ir-
regularly updated, emphasizing the necessity
for a dedicated tool retrieval component. Tool
retrieval is nontrivial due to the following chal-
lenges: 1) complex user instructions and tool
descriptions; 2) misalignment between tool re-
trieval and tool usage models. To address the
above issues, we propose to enhance tool re-
trieval with iterative feedback from the large
language model. Specifically, we prompt the
tool usage model, i.e., the LLM, to provide
feedback for the tool retriever model in multi-
round, which could progressively improve the
tool retriever’s understanding of instructions
and tools and reduce the gap between the two
standalone components. We build a unified
and comprehensive benchmark to evaluate tool
retrieval models. The extensive experiments
indicate that our proposed approach achieves
advanced performance in both in-domain eval-
uation and out-of-domain evaluation1.
1
Introduction
Large language models (LLMs) have demonstrated
remarkable success in language-related tasks and
are considered a potential pathway to achieving
artificial general intelligence (Zhao et al., 2023).
However, despite their powerful capabilities, LLMs
are still limited in many aspects, such as knowledge
update and mathematical reasoning. A promising
way to overcome these limitations is to empower
LLMs with external tools, known as tool learn-
ing (Qin et al., 2023a; Qu et al., 2024a). Tool
†Corresponding author.
1Code available at https://github.com/travis-xu/TR-Feedback.
Figure 1: Illustration of two tool-learning approaches
in LLMs: (a) in-context learning and (b) fine-tuning.
The challenges posed by the extensive and frequently
updated tools require the external tool retrieval compo-
nent.
learning not only enhances LLMs’ performance on
existing tasks but also allows them to tackle tasks
that were previously beyond their reach. Besides,
the ability to use tools is a crucial hallmark on the
path to advanced intelligence.
Existing tool learning methods have preliminar-
ily demonstrated that LLMs could effectively uti-
lize specific tools to complete corresponding tasks.
They either leverage LLMs’ in-context learning
ability to facilitate tool usage with tool descrip-
tions (Shen et al., 2023) or fine-tune LLMs to in-
tegrate tool learning capabilities into parameters,
e.g., Toolformer (Schick et al., 2023). However, as
illustrated in Figure 1, existing methods still face
significant challenges in real-world scenarios due
to the following reasons. 1) The number of tools
is usually vast, making it impossible for LLMs to
handle them all with the limited input length of
in-context learning. 2) Tools would frequently and
irregularly update, rendering finetuning-based ap-
proaches costly and impractical. Therefore, a tool
retrieval component, which aims to select appropri-
ate tools from a large-scale tool set, is essential for
LLMs.
Despite the practicality and necessity, tool re-
trieval has been inadequately studied. Some ap-
proaches have adopted traditional document re-
trieval methods to retrieve tools for LLMs (Li et al.,
provide more appropriate tools for the tool-usage
model. In this manner, the comprehension capa-
bility and tool preference of LLMs could be pro-
gressively incorporated into the retriever, and thus
the tool retriever’s performance could be continu-
ously enhanced. We build a comprehensive tool
retrieval benchmark, named TR-bench. The bench-
mark takes into account real-world practices with
updated tools, and therefore encompasses both in-
domain and out-of-domain settings. The experi-
mental results show our approach achieves the best
performance among the current methods with both
in-domain and out-of-domain settings.
The key contributions are summarized:
• We identify the importance of tool retrieval
in tool learning and present the distinct chal-
lenges of tool retrieval.
• We propose to enhance tool retrieval with iter-
ative feedback from the LLM. By leveraging
iterative feedback, the tool retriever model
gets continual improvements, ultimately re-
ducing the misalignment between them.
• We build a comprehensive tool retrieval bench-
mark with in-domain and out-of-domain set-
tings, which will also aid future tool retrieval
research. The extensive experiments demon-
strate superior performance of our approach.
2 Related Work
2.1 Tool Learning in LLMs
Tool learning aims to equip LLMs with exter-
nal tools to enhance and expand their capabili-
ties (Ruan et al., 2023; Wang et al., 2024b; Huang
et al., 2024c). Generally, existing tool learning
methods could be categorized into in-context learn-
ing and fine-tuning approaches. The former ap-
proach encourages LLMs to use tools with descrip-
tions, documentation, or demonstrations (Yuan
et al., 2024; Du et al., 2024; Mu et al., 2024), while
the latter one trains the parameters of LLMs us-
ing specially created tool-use datasets (Hao et al.,
2023; Tang et al., 2023; Gao et al., 2024). However, both the in-context learning and the fine-tuning approaches encounter severe challenges in real-world scenarios, where the candidate tools are extensive and frequently updated. There-
fore, it is crucial to equip LLMs with a tool re-
trieval component to select appropriate tools from
a large-scale tool set. Recent works have proposed
Figure 2: Comparison between the document retrieval
and tool retrieval datasets. Tool retrieval presents more
challenges due to the complex instructions (in the left
figure) and the lower repetition rate (in the right figure).
2023; Qin et al., 2023b). However, we argue that
they overlook the unique challenges of tool re-
trieval for LLMs: 1) Complex user instructions and tool descriptions. As illustrated in Figure 2, compared with document retrieval, user instructions are usually ambiguous and complex, and the repetition rate between instructions and corresponding tool descriptions is much lower. Un-
fortunately, the retriever model is typically limited
in its capacities because of the efficiency require-
ments, which makes tool retrieval more difficult
and challenging. 2) Misalignment between tool re-
trieval and tool usage models. Previous approaches
deploy the tool retriever separately from the down-
stream tool-usage model, which hinders the LLM
from knowing which tools are really useful from
the tool-usage perspective. Thus, it will result in
a tool recognition gap between the tool retriever
and tool usage model, degrading the tool-use per-
formance further.
To address the above issues, we propose to en-
hance tool retrieval with iterative feedback. Our
motivation is to utilize the LLM to enhance the
comprehension ability of the tool retriever and
bridge the gap between the two independent mod-
els. At each iteration, we conduct a feedback gen-
eration process by asking the LLM to provide feed-
back step-by-step, conditioned on the user instruc-
tion and retrieved tools from the retriever. The
LLM will first comprehend the instruction and tool
functionalities thoroughly, and then assess the ef-
fectiveness of those retrieved tools. According to
the assessment, the LLM will refine the user in-
struction to improve the tool retrieval process. The
refined instruction will substitute previous user in-
struction and be used to retrieve a new list of tools
from the tool set. In the next iteration, the new
candidate tool list will be fed into the LLM for
a new round of LLMs’ feedback. During this it-
erative process, the tool retriever is expected to
MS MARCOToolBench0102030405060query/ins lengthMS MARCOToolBench0102030405060query-doc/ins-tool rate (%)a stopgap measure through traditional document
retrieval (Patil et al., 2023; Qin et al., 2023b; Zheng
et al., 2024), task decomposition (Anantha et al.,
2023; Huang et al., 2024b) and graph-based meth-
ods (Qu et al., 2024b). In this work, we aim to
develop a method specialized for enhancing the
tool retriever.
2.2 Document Retrieval
Early popular document retrieval methods rely on
sparse retrieval that calculates the relevance of doc-
uments to a query based on the frequency of query
terms in each document, e.g., BM25 (Robertson
and Zaragoza, 2009). With the development of
language models (Devlin et al., 2019), the dense
retrieval (Zhao et al., 2024; Mitra and Craswell,
2017) paradigm has gained considerable attention
in the research community. By encoding queries
and documents into high-dimensional vector rep-
resentations and computing their relevance scores
through inner product calculations, the paradigm
can capture semantic relationships between queries
and documents, thereby enhancing retrieval per-
formance (Karpukhin et al., 2020). However, tool
retrieval presents unique challenges, rendering tra-
ditional document retrieval methods suboptimal.
We address these challenges by harnessing LLMs’
feedback to iteratively refine the tool retrieval pro-
cess.
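For reference, the following is a compact sketch of the BM25 scoring that sparse retrieval relies on; the k1 and b values are the usual defaults, not parameters reported in this paper.

```python
import math
from collections import Counter

def bm25_score(query_terms, doc_terms, doc_freqs, num_docs, avg_doc_len, k1=1.5, b=0.75):
    """Classic Okapi BM25 relevance of one document (e.g., a tool description) to a query."""
    tf = Counter(doc_terms)
    score = 0.0
    for term in query_terms:
        if term not in tf:
            continue
        df = doc_freqs.get(term, 0)                      # number of documents containing the term
        idf = math.log((num_docs - df + 0.5) / (df + 0.5) + 1.0)
        denom = tf[term] + k1 * (1 - b + b * len(doc_terms) / avg_doc_len)
        score += idf * tf[term] * (k1 + 1) / denom
    return score
```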
3 Preliminaries
3.1 Task Definition
Given a user’s instruction, tool retrieval aims to
select a small number of tools, which could aid the
LLM in answering the instruction, from a large-
scale tool set. Formally, we define the user instruc-
tion as q and the tool set as D = {d1, d2, ..., dN },
where di represents the description of each tool and
N is the total number of tools. The retriever model
R needs to measure the relevance R(q, di) between
the instruction q and each tool description di, and
return K tools, denoted as D = {d1, d2, ..., dK}.
3.2 Dense Retriever
Dense retriever usually leverages the encoder-
based LLM to encode the user instruction q and a
tool description d into dense embeddings E(q) and
E(d), respectively. Then, it could measure the rele-
vance between q and d by calculating the similarity
score between these two embeddings, denoted as
R(q, d) = sim(E(q), E(d)).
Dense retriever is trained via the contrastive learn-
ing objective, which is designed to minimize the
distance between the instruction embedding and
embeddings of positive tools (the instruction’s
ground-truth tools) while maximizing the distance
between the instruction embedding and embed-
dings of negative tools. The objective can be for-
mulated as follows,
$$\mathcal{L} = -\frac{1}{B}\sum_{i=1}^{B}\log\frac{e^{R(q_i, d_i^{+})}}{e^{R(q_i, d_i^{+})} + \sum_{j} e^{R(q_i, d_{ij}^{-})}}, \quad (1)$$
where B denotes the batch size, d_i^+ denotes the positive tool, and d_ij^- represents the j-th negative tool to the instruction q_i.
However, due to the efficiency requirements,
dense retrieval utilizes a dual-encoder architecture,
which has limited ability to understand instructions.
In this study, our goal is to improve the tool re-
trieval process with the feedback from the tool-
usage model, i.e., the LLM.
4 Methodology
4.1 Overview
Recent studies have found that LLMs show a great
capability in acting as a critic (Zheng et al., 2023)
and could provide comprehensive feedback to im-
prove performance across a range of tasks (Madaan
et al., 2023; Asai et al., 2023). Inspired by those
observations, we propose an innovative framework
that leverages the LLM’s feedback to improve the
tool retrieval process iteratively. Different from ap-
proaches which focus on feedback from execution
results after tool execution step (Yao et al., 2023;
Wang et al., 2024a), we obtain LLMs’ feedback be-
fore the actual tool execution step, i.e., right after
the tool retrieval step.
As illustrated in Figure 3, at each iteration, the
LLM will provide feedback on the current-turn re-
trieval results. Specifically, the LLM will first com-
prehend the user instruction and tool functionalities
thoroughly. Then, it will assess the effectiveness
of those retrieved tools for handling the instruction.
Based on the assessment, the LLM could provide
a refinement to the retrieval model, refining the
user instruction if necessary. To ensure that the
retriever model is aware of the iteration round, we
conduct an iteration-aware feedback training pro-
cess to adapt the retriever model with continuously
refined user instructions.
Figure 3: Illustration of our proposed iterative tool retrieval method. At each iteration, the LLM follows a three-step
feedback generation process, which includes comprehension, assessment, and refinement, to improve the instruction.
4.2 Feedback Generation
Assuming at the iteration step t, given the refined instruction q^t, we could utilize the retriever model R to retrieve a list of top-K tools {d_1^t, ..., d_K^t}. We then conduct a three-step feedback generation process by feeding those retrieved tools and associated tool descriptions into the LLM as follows.
Comprehension. Firstly, the LLM is prompted
to give comprehension on both the given instruction
and retrieved tools. The prompt provided to LLM
includes two parts: (1) summarize the abstract user
goals by ignoring detailed entity information in the
given instruction; (2) understand the functionalities
of retrieved tools, focusing on the category, name,
description, input and output parameters of given
tools. This step can be formulated as,
$$F_C = \mathrm{LLM}(P_C,\; q^t,\; \{d_1^t, \ldots, d_K^t\}), \quad (2)$$
where FC denotes LLM’s comprehension output
and PC denotes the prompt provided to LLM.
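The exact wording of P_C is not given here, so the template below is only an illustrative stand-in that follows the two-part structure described above; the field names for tools are assumptions.

```python
def build_comprehension_prompt(instruction, tools):
    """Illustrative stand-in for the comprehension prompt P_C (Eq. 2)."""
    tool_lines = "\n".join(
        f"- [{t['category']}] {t['name']}: {t['description']} "
        f"(inputs: {t['inputs']}; outputs: {t['outputs']})"
        for t in tools
    )
    return (
        "1) Summarize the user's abstract goals, ignoring detailed entity "
        "information in the instruction.\n"
        "2) Explain the functionality of each retrieved tool, focusing on its "
        "category, name, description, and input/output parameters.\n\n"
        f"Instruction: {instruction}\n\nRetrieved tools:\n{tool_lines}"
    )
```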
Assessment. The LLM will assess the effective-
ness of retrieved tools for handling the instruction
based on its comprehension of the user's intent and
tool functionalities. The assessment is conducted
from two perspectives: 1) identify which of the
user’s goals could and could not be solved by the
retrieved tools with corresponding reasons; and 2)
analyze whether the ranked order of retrieved tools
corresponds with their significance in addressing
the user’s intent with specific reasons. The step can
be formulated as,
$$F_A = \mathrm{LLM}(P_A,\; q^t,\; \{d_1^t, \ldots, d_K^t\},\; F_C), \quad (3)$$
where FA denotes the LLM’s assessment output.
Refinement. Lastly, the LLM will refine user in-
struction based on its assessment. Specifically, we
ask the LLM to determine whether the refinement
is necessary based on the two following questions:
1) Whether all the user’s goals have been solved by
currently retrieved tools, 2) and whether all existing
appropriate tools are given the highest ranking pri-
orities by the retriever. If one of the answers is not
“yes”, we prompt the LLM to provide a potential
refinement for retrieval improvement. Otherwise,
the LLM will directly return a special token “N/A”
without conducting any refinement.
The feedback from the LLM is finally made on the current user instruction q^t. Specifically, we
prompt the LLM to generate refined instruction
with enriched information in two dimensions: 1)
more detailed and personalized content about those
user’s intent which have not been solved by current
tools, helping the retriever explore other relevant
tools; (2) more scenario-specific tool-usage infor-
mation about existing appropriate tools, helping
the retriever give higher ranking priority to those
tools. This step can be formulated as,
$$F_R = \mathrm{LLM}(P_R,\; q^{t-1},\; \{d_1^{t-1}, \ldots, d_K^{t-1}\},\; F_A), \quad (4)$$
where PR is the corresponding prompt and FR de-
notes LLM’s refinement output, i.e., the new re-
fined instruction qt+1.
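Putting the three steps together, a hedged sketch of one feedback round is shown below; the prompt texts and the `llm` wrapper are illustrative rather than the prompts used in the paper.

```python
def llm_feedback_round(llm, instruction, retrieved_tools):
    """One round of the comprehension -> assessment -> refinement loop (Eqs. 2-4).

    `llm` is any callable mapping a prompt string to a completion string,
    e.g., a thin wrapper around a chat API.
    """
    tools_text = "\n".join(f"{t['name']}: {t['description']}" for t in retrieved_tools)

    # Step 1: comprehension of the instruction and the retrieved tools (F_C).
    comprehension = llm(
        f"Summarize the user's goals and the functionality of each tool.\n"
        f"Instruction: {instruction}\nTools:\n{tools_text}"
    )

    # Step 2: assessment of whether the retrieved tools cover the goals (F_A).
    assessment = llm(
        f"Given this analysis:\n{comprehension}\n"
        f"Which goals are (not) solved by the tools, and is the ranking appropriate?"
    )

    # Step 3: refinement of the instruction, or 'N/A' if no refinement is needed (F_R).
    refinement = llm(
        f"Based on this assessment:\n{assessment}\n"
        f"Rewrite the instruction to help retrieve better tools, or answer 'N/A' "
        f"if all goals are solved and correctly ranked.\nInstruction: {instruction}"
    )
    return instruction if refinement.strip() == "N/A" else refinement
```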
4.3
Iteration-Aware Feedback Training
We concatenate a special token “Iteration t” in front
of the instruction, where t is the instruction’s it-
eration step (e.g., “Iteration t − 1” for qt−1 and
“Iteration t” for qt).
training. Concretely, for each given instruction,
we randomly sample an incorrect tool from the re-
trieved top-K tool list. The high similarity scores
of those tools indicate that they are prone to be mis-
taken as correct tools by the retriever. In feedback
training, we utilize those tool-instruction pairs as
hard negative samples. Then the loss function for
each iteration could be calculated as,
scenarios
# instructions
# tool set
Training
Set
In-domain
Evaluation
ToolBench-I1
ToolBench-I2
ToolBench-I3
ToolBench-All
ToolBench-I1
ToolBench-I2
ToolBench-I3
ToolBench-All
Out-of-domain
Evaluation
T-Eval
UltraTools
86,643
84,270
25,044
195,937
796
573
218
1,587
553
1,000
-
-
-
-
10,439
13,142
1,605
13,954
50
498
$$\mathcal{L} = -\frac{1}{B}\sum_{i=1}^{B}\log\frac{e^{R(q_i, d_i^{+})}}{e^{R(q_i, d_i^{+})} + \sum_{j \neq i} e^{R(q_i, d_{ij}^{-})} + \sum e^{R(q_i, d_{ij}^{H})}}, \quad (5)$$
where dH
ij denotes the hard negative sample. By
distinguishing the subtle differences in the tool de-
scriptions, the retriever could achieve a deeper un-
derstanding of the tool functionalities and their re-
lation with user instructions.
Then the final training objective could be for-
mulated as the sum of losses in each iteration as
follows,
Table 1: Statistics of the TR-bench, which is conducted
from ToolBench (Qin et al., 2023b), T-Eval (Chen et al.,
2023), and UltraTools (Huang et al., 2024a).
The retrieved D^0_test and q^0_test will be fed to the LLM for feedback generation, including instruction refinement, as discussed in Section 4.2. After obtaining the refined instruction q^1_test, we add a token "Iteration 1" to it and then input it to R for the next-round tool retrieval. Then, we can get an updated tool list D^1_test for a new round of feedback generation. As such, we could obtain a final tool list D^T_test after T iterations.
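The inference-time loop can be summarized with the following sketch; the `retriever.retrieve` interface and the iteration-token format are assumptions for illustration, and `llm_feedback_round` refers to the earlier sketch of Section 4.2.

```python
def iterative_tool_retrieval(retriever, llm, instruction, tool_set, k=10, max_iters=3):
    """Sketch of the inference procedure in Section 4.4 (names are illustrative)."""
    tools = []
    for t in range(max_iters):
        # Prefix the iteration-aware token before retrieving.
        tagged = f"Iteration {t}: {instruction}"
        tools = retriever.retrieve(tagged, tool_set, top_k=k)   # assumed retriever API
        refined = llm_feedback_round(llm, instruction, tools)   # comprehension/assessment/refinement
        if refined == instruction:   # the LLM answered 'N/A': no further refinement needed
            break
        instruction = refined
    return tools
```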
$$\mathcal{L}_{\mathrm{feedback}} = \sum_{t=1}^{T} \alpha_t \mathcal{L}(q^t), \quad (6)$$
5 Experiments
5.1 Setup
where αt is a balancing factor and L(qt) is the loss
function calculated by Equation 5 based on the re-
fined user instructions qt in the tth iteration. In
this way, the LLM’s comprehensive knowledge of
the user requirements could be injected into the
retriever through those refined instructions. Be-
sides, with the aid of iteration-aware tokens and
joint-training manner, the retriever could maintain
a balance between newly learned knowledge and
previously acquired knowledge.
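A minimal sketch of this joint, iteration-aware training objective is given below; the data layout (one hard negative per instruction, instructions already prefixed with their iteration token) and the generic `encoder` interface are assumptions rather than the paper's exact training code.

```python
import torch
import torch.nn.functional as F

def iteration_aware_feedback_loss(encoder, batches_per_iteration, alphas):
    """Sketch of Eq. (6): sum of per-iteration contrastive losses with hard negatives (Eq. 5).

    batches_per_iteration[t] holds, for iteration t, a tuple of
    (tagged_instruction_texts, positive_tool_texts, hard_negative_tool_texts).
    `encoder` maps a list of strings to an (N, d) embedding tensor.
    """
    total = 0.0
    for t, (ins, pos, hard) in enumerate(batches_per_iteration):
        q = encoder(ins)    # (B, d) instruction embeddings
        p = encoder(pos)    # (B, d) ground-truth tool embeddings
        h = encoder(hard)   # (B, d) one hard negative per instruction
        # Columns 0..B-1: in-batch tools; last column: the instruction's own hard negative.
        scores = torch.cat([q @ p.T, (q * h).sum(-1, keepdim=True)], dim=1)  # (B, B+1)
        labels = torch.arange(q.size(0), device=q.device)   # diagonal = the positive tool
        total = total + alphas[t] * F.cross_entropy(scores, labels)
    return total
```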
4.4
Inference
At the time of inference, the feedback generation process keeps working while the feedback training process is halted. The retriever updates the candidate tool list based on the refined user instruction from the LLM's feedback iteratively, until it outputs the final retrieved tools.
Concretely, assume that we have obtained a retriever R after the feedback training. For each initial test instruction q^0_test, we add a special token "Iteration 0" in front of the instruction. Then we use the trained retriever R to retrieve an initial tool list D^0_test, containing K candidate tools {d_1, d_2, ..., d_K}.
Datasets and evaluation. To assess the tool re-
trieval performance of models, we conduct an ex-
periment on tool retrieval benchmark, referred to
as TR-bench, based on three datasets, including
ToolBench (Qin et al., 2023b), T-Eval (Chen et al.,
2023), and UltraTools (Huang et al., 2024a). To ad-
dress real-world requirements, we conduct evalua-
tions in both in-domain and out-of-domain settings.
Specifically, the training set is from ToolBench,
while the test set of ToolBench is employed for
in-domain evaluation, and the test sets from T-Eval
and UltraTools are used for out-of-domain evalua-
tion. The statistics of TR-bench are summarized in
Table 1.
Following ToolBench, we adopt the Normalized
Discounted Cumulative Gain (NDCG) (Järvelin
and Kekäläinen, 2002), an ideal metric for tool re-
trieval to evaluate the quality of retrieved tools.
In our evaluation, we report NDCG@m (m =
1, 3, 5, 10), calculated according to the position of
each golden tool among top-m candidates tools
retrieved by the tool retriever. Thus, the more ac-
curately the tool retriever can retrieve correct tools,
the higher the NDCG@m score will be.
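For clarity, the sketch below computes NDCG@m with binary relevance for the golden tools, which is an assumption about the exact relevance grading rather than a detail stated here.

```python
import math

def ndcg_at_m(ranked_tool_ids, gold_tool_ids, m):
    """NDCG@m for tool retrieval with binary relevance of the golden tools.

    ranked_tool_ids: tools returned by the retriever, best first.
    gold_tool_ids:   the instruction's ground-truth tools.
    """
    gold = set(gold_tool_ids)
    dcg = sum(
        1.0 / math.log2(rank + 2)                  # rank is 0-based
        for rank, tool in enumerate(ranked_tool_ids[:m])
        if tool in gold
    )
    ideal = sum(1.0 / math.log2(rank + 2) for rank in range(min(len(gold), m)))
    return dcg / ideal if ideal > 0 else 0.0

# e.g. ndcg_at_m(["t3", "t1", "t9"], ["t1", "t2"], m=3) -> 0.387...
```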
Baselines. We compare our method against rep-
resentative retrieval methods. 1) BM25 (Robertson
Methods
BM25
Ada Embedding
ToolRetriever
Ours
% improve
SINGLE-TOOL (I1)
CATEGORY (I2)
COLLECTION (I3)
ALL
N@1
N@3
N@5
N@1
N@3
N@5
N@1
N@3
N@5
N@1
N@3
N@5
15.63
18.37
43.95
57.52
83.06
84.20
88.83
90.70
7.72% 1.52% 3.15% 30.44% 10.37% 11.81% 12.36% 0.80% 3.52% 16.90% 4.58% 6.95%
20.37
46.83
87.13
90.20
10.95
30.68
77.90
87.10
13.98
41.06
83.19
87.00
19.65
58.83
89.65
92.47
17.97
54.90
89.59
90.95
9.85
28.83
77.43
85.46
18.95
42.55
87.24
87.94
25.23
54.59
81.65
91.74
11.97
36.82
68.24
89.01
15.84
46.59
75.73
88.53
Table 2: In-domain evaluation on TR-bench in terms of NDCG@m under scenarios including single-tool (I1),
intra-category multi-tool (I2), intra-collection multi-tool (I3), and the whole data (All). % improve represents the
relative improvement achieved by our method over the previously best tool retrieval method.
Methods
BM25
Ada Embedding
ToolRetriever
Ours
% improve
T-EVAL
ULTRATOOLS
N@1
N@3
N@5
N@10
N@1
N@3
N@5
N@10
18.34
52.12
46.40
80.11
58.93
82.10
84.45
59.92
2.86% 1.78% 0.40% -0.06% 2.28% -0.48% 2.43% 1.68%
14.13
33.75
47.73
47.50
45.23
71.95
74.15
74.45
43.19
69.11
72.03
73.31
16.03
39.91
53.01
54.30
15.10
31.46
48.20
49.30
52.91
79.62
80.76
80.25
Table 3: Out-of-domain evaluation on TR-bench in terms of NDCG@m under two scenarios, T-Eval (Chen et al.,
2023) and UltraTools (Huang et al., 2024a). % improve represents the relative improvement achieved by our method
over the previously best tool retrieval method.
and Zaragoza, 2009): the classical sparse retrieval
method; 2) Ada Embedding: the closed-sourced
OpenAI’s text-embedding-ada-002 model2; 3)
ToolRetriever (Qin et al., 2023b): a dense retrieval
approach specifically finetuned on tool retrieval
datasets.
Implementation details. We employ Sentence-
BERT (Reimers and Gurevych, 2019) to train our
retriever model based on BERT-base (Devlin et al.,
2019). We set the learning rate to 2e−5 with 500
warm-up steps. The batch size in training is set
to 64. We utilize ChatGPT (gpt-3.5-turbo-0125)3
as the LLM for giving feedback. The number of
tool candidates K, the balancing factor α, and the
iteration round T are set to 10, 1, and 3, respec-
tively. We have trained the model several times
to confirm that the improvement is not a result of
random chance and present the mid one. Our ex-
periments were conducted on four NVIDIA A6000
GPUs with 48 GB of memory.
5.2 Main Results
In-domain evaluation. The results of the in-
domain evaluation are reported in Table 2. It is
observed that non-finetuned retrieval methods, i.e.,
BM25 and Ada Embedding, perform much worse
than other finetuned methods. This is reasonable
2https://platform.openai.com/docs/guides/
embeddings/embedding-models.
3https://openai.com/index/
introducing-chatgpt-and-whisper-apis/.
since non-finetuned methods have not been specif-
ically adopted for tool retrieval. While Tool Re-
triever outperforms non-finetuned methods, the
performance is still not satisfying.
In compari-
son, our proposed method consistently outperforms
all finetuned and non-finetuned baselines. Signifi-
cantly, our method maintains strong performance in
the intra-category multi-tool (I2) scenario, even as
other methods’ performance declines, demonstrat-
ing the robustness of our proposed method across
different scenarios. The above results prove the
effectiveness of our method in enhancing tool re-
trieval accuracy, particularly in challenging scenar-
ios with multi-tools.
Out-of-domain evaluation. Since the tools are
usually frequently updated in real-world, we fur-
ther test all methods in the out-of-domain setting,
where the training data from ToolBench and the
test data from T-Eval and UltraTools are used. The
experimental results are shown in Table 3. We
could observe that our method significantly outper-
forms other baselines across both scenarios. This
demonstrates that our method not only excels in
in-domain benchmarks but also maintains robust
performance across varied scenarios, revealing its
generalization ability of tool retrieval.
We further compare the tool usage performance
of our method with ToolRetriever in the I2 scenario.
We adopt ToolLLaMA (Qin et al., 2023b) which is
trained on LLM-annotated solution path as the tool
N@1 N@3 N@5 N@10
Methods
N@1
N@3
N@5
Methods
Ours
89.01
85.46
87.10
88.41
w/o warm-up
w/o hard-negative
w/o joint
w/o joint & hard-neg
85.51
86.04
85.38
83.77
81.36
80.41
81.55
77.67
84.47
84.00
83.79
81.21
86.92
85.98
86.20
83.69
Table 4: Ablation study of our method under the intra-
category multi-tool (I2) scenario.
Iteration
N@1
N@3
N@5 N@10
Efficiency
1
2
3
85.69
87.78
89.01
80.48
83.48
85.46
83.94
86.31
87.10
86.27
88.26
88.41
6.12s
8.59s
10.30s
Table 5: Analysis on iteration round under the intra-
category multi-tool (I2) scenario. The efficiency is mea-
sured by the time consumption to complete one user
instruction.
usage model, and use “pass rate” and “win rate” as
evaluation metrics. Our method achieves 75.6% for
pass rate compared to ToolRetriever’s 68.5%, and
65.9% for win rate compared to ToolRetriever’s
60.8%. The results demonstrates the performance
improvement in tool usage, benefiting the entire
tool learning process.
5.3 Ablation Study
We conduct ablation studies to investigate the ef-
ficacy of different components in our methods.
First, we remove the warm-up training by directly
conducting our method on an retriever based on
Sentence-BERT. Then, we analyze the contribu-
tion of hard negative sampling in our method by
removing the hard-to-distinguish samples from the
training.
In addition, we assess the efficacy of
joint training in our method, by substituting it
with a loss Lf eedback = L(qt), with respect to
only the refined instructions qt at current iteration
t. Table 4 reports the ablation test performance
(i.e., NDCG@m (m = 1, 3, 5, 10)) under the intra-
category multi-tool instructions (I2) scenario on
ToolBench.
From the results, we can observe that our method
achieves comparably high NDCG scores even with-
out warm-up training, indicating that it does not
heavily rely on prior tool-use knowledge. When
hard negative sampling is removed, the perfor-
mance degradation illustrates that hard negative
sampling could enable the model to discriminate
between similar tool functionalities. Besides, the
model’s performance further declines when joint
ToolRetriever (BERT-based)
Ours (BERT-based)
ToolRetriever (RoBERTa-based)
Ours (RoBERTa-based)
68.24
89.01
76.61
88.13
77.43
85.46
69.81
85.41
77.90
87.10
74.99
86.75
Table 6: Analysis on different base models under the
intra-category multi-tool (I2) scenario.
Embedding Size
N@1
N@3
N@5 N@10
300
512
768
1024
2048
87.61
87.61
89.01
88.66
88.74
83.49
82.85
85.46
83.91
83.95
85.20
84.67
87.10
85.94
85.98
86.50
85.81
88.41
87.04
87.43
Table 7: Analysis on embedding sizes under the intra-
category multi-tool (I2) scenario.
training is removed, demonstrating that the model
could balance new and previous knowledge in this
joint-training manner.
5.4
In-depth Analysis
Analysis on iteration round. The iteration round
is an important factor in our method. We conduct
experiments to investigate changes in effectiveness
and efficiency with different iteration round T . The
results are presented in Table 5, and the efficiency
is measured by the cost of time to complete one
user instruction on average.
By analyzing the results in Table 5, we gain
two findings. 1) We could observe a continuous
improvement as the iteration round increases. This
shows that the tool retriever progressively enhances
its performance with the aid of LLMs’ feedback. 2)
In terms of time efficiency, we find that adding one
additional round of refinement takes an average
of 6.12s/instruction, primarily resulting from the
time waiting for LLM’s feedback when calling the
OpenAI API. As the number of iterations increases,
we can see that the extra inference time required
for each instruction decreases. This is due to the
fact that there will be fewer instructions requiring
refinement as retrieval performance improves.
Analysis on base models. We further analyze
the impact of different base models on the perfor-
mance. Specifically, we replace the base model
BERT in our method with another classic language
model, RoBERTa (Liu et al., 2019). The results are
shown in Table 6. As we can see, our method still
achieves significant improvement over the baseline
with the same RoBERTa model. Another observa-
Figure 4: Case study on the effect of user instruction refinement through 3 iterations. The original instruction is
revised step-by-step, leading to improved retrieval results.
tion is that RoBERTa is more effective in serving
as a base model for the retrieval application, which
benefits from its effective training strategies. The
improvements demonstrate the robustness of our
method with different base models.
Analysis on embedding sizes. Since the re-
triever model R encodes the textual instruction and
tool description into dense vectors, we explore the
impact of the embedding size on retrieval perfor-
mance. as shown in Table 7. From the table, we can
find that larger embedding sizes result in greater
performance improvements compared to smaller
embedding sizes. This is probably due to the fact
that embeddings with larger sizes could accommo-
date more knowledge. However, when the embed-
ding size increases from 768 to 2048, there is a
slight decrease in performance. This suggests that
a specific embedding size is sufficient, and larger
embedding sizes may pose challenges to training.
It is worth noting that larger embedding sizes neces-
sitate higher training costs and increased inference
memory. Therefore, we recommend an optimal
embedding size of 768.
a closer look at the effect of our method.
In the 1st iteration, we can observe that the re-
fined instruction has included more detailed infor-
mation (i.e., “total number”) about the user’s re-
quirements than the original instruction, enabling
the retriever to identify more appropriate tools (e.g.,
Check residential proxies service status). This re-
veals that the comprehension capabilities of LLMs
could be instilled into the retrieval process through
feedback. In the 2nd iteration, our method further
refines the instruction by omitting irrelevant con-
tent (i.e., “information”) which may mislead the
retriever into retrieving incorrect tools (e.g., Re-
trieve Proxy Information). Another benefit of the
refinement is that some correct tools (e.g., Bash
Code Compiler) will move up in positions of the
top-K rankings, improving the overall retrieval per-
formance. In the 3rd iteration, our method show-
cases great decision-aware capabilities, where the
iterative process could be terminated if no further
refinement is deemed necessary.
6 Conclusion and Future Work
5.5 Case Study
As shown in Figure 4, we conduct case study by
using an example of instruction refinement to take
In this study, we concentrate on the crucial tool
retrieval in the tool learning of LLMs. We have
identified the bottleneck in the tool retrieval-usage
pipeline as the limited tool retrieval model. We
propose the unique challenges of the tool retrieval
compared with document retrieval. To improve
the current tool retrieval process, we propose lever-
aging the LLM’s feedback to assess the retrieval
results and provide detailed suggestions for refin-
ing user instructions. In order to integrate the re-
triever model into this iterative process, we imple-
ment iteration-aware feedback training. This will
improve the tool retriever’s capabilities and close
the gap between tool retrieval and usage models.
We conduct the TR-benchmark to comprehensively
evaluate the models’ ability in real-world tool re-
trieval scenarios. Our method demonstrates the best
performance in both in-domain and out-of-domain
settings.
In the future, we aim to improve this work from
the following aspects. 1) Limited by the training
speed, we have applied the offline feedback gen-
eration, where feedback is generated before train-
ing the tool retriever. We will also assess whether
online feedback generation yields further improve-
ments in the future. 2) Furthermore, as the tool
retriever serves the subsequent tool usage model
in tool learning, we intend to conduct further eval-
uations of the tool retriever models based on the
subsequent tool usage results.
Limitations
1) Undoubtedly, our iterative refinement will re-
duce the inference speed of the tool retrieval. The
efficiency issue is inherent in approaches involving
LLMs’ interaction. We have evaluated the effi-
ciency as the number of iterative rounds increases.
Fortunately, we observed that the retrieval model
can achieve a significant performance improvement
after just a single round of LLMs’ feedback com-
pared to without feedback. Furthermore, the perfor-
mance enhancement of the tool retrieval is crucial
for the subsequent tool usage model, ensuring that
the correct tools are retrieved and lays the founda-
tion for all subsequent steps of tool usage. There-
fore, we believe that performance improvement is
worthwhile despite some efficiency loss. We will
also pay more attention to this issue in the future.
2) Similar to document retrieval, the used datasets
in our work also contain “false negative” samples.
For instance, some tools may be capable of han-
dling the user’s instruction but are not labeled as
positive. This can disrupt the training and evalua-
tion of tool retrieval and is a common limitation in
many retrieval scenarios.
Ethics Statement
The datasets used in our experiment are publicly
released and labeled through interaction with hu-
mans in English. In this process, user privacy is
protected, and no personal information is contained
in the dataset. The scientific artifacts that we used
are available for research with permissive licenses.
And the use of these artifacts in this paper is consis-
tent with their intended use. Therefore, we believe
that our research work meets the ethics of the con-
ference.
References
Raviteja Anantha, Bortik Bandyopadhyay, Anirudh
Kashi, Sayantan Mahinder, Andrew W Hill, and
Protip: Progressive
Srinivas Chappidi. 2023.
arXiv preprint
tool retrieval improves planning.
arXiv:2312.10332.
Akari Asai, Zeqiu Wu, Yizhong Wang, Avirup Sil, and
Hannaneh Hajishirzi. 2023. Self-rag: Learning to
retrieve, generate, and critique through self-reflection.
arXiv preprint arXiv:2310.11511.
Zehui Chen, Weihua Du, Wenwei Zhang, Kuikun
Liu, Jiangning Liu, Miao Zheng, Jingming Zhuo,
Songyang Zhang, Dahua Lin, Kai Chen, et al. 2023.
T-eval: Evaluating the tool utilization capability step
by step. arXiv preprint arXiv:2312.14033.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and
Kristina Toutanova. 2019. BERT: Pre-training of
deep bidirectional transformers for language under-
standing.
In Proceedings of the 2019 Conference
of the North American Chapter of the Association
for Computational Linguistics: Human Language
Technologies, Volume 1 (Long and Short Papers),
pages 4171–4186. Association for Computational
Linguistics.
Yu Du, Fangyun Wei, and Hongyang Zhang. 2024.
Anytool: Self-reflective, hierarchical agents for
large-scale API calls.
In Forty-first International
Conference on Machine Learning.
Shen Gao, Zhengliang Shi, Minghang Zhu, Bowen Fang,
Xin Xin, Pengjie Ren, Zhumin Chen, Jun Ma, and
Zhaochun Ren. 2024. Confucius: Iterative tool learn-
ing from introspection feedback by easy-to-difficult
curriculum. In Proceedings of the AAAI Conference
on Artificial Intelligence, pages 18030–18038.
Shibo Hao, Tianyang Liu, Zhen Wang, and Zhit-
ing Hu. 2023. Toolkengpt: Augmenting frozen
language models with massive tools via tool em-
beddings.
In Advances in Neural Information
Processing Systems, volume 36, pages 45870–45894.
Curran Associates, Inc.
Shijue Huang, Wanjun Zhong, Jianqiao Lu, Qi Zhu, Ji-
ahui Gao, Weiwen Liu, Yutai Hou, Xingshan Zeng,
Yasheng Wang, Lifeng Shang, et al. 2024a. Planning,
creation, usage: Benchmarking llms for comprehen-
sive tool utilization in real-world complex scenarios.
arXiv preprint arXiv:2401.17167.
Tenghao Huang, Dongwon Jung, Vaibhav Kumar, Mo-
hammad Kachuee, Xiang Li, Puyang Xu, and Muhao
Chen. 2024b. Planning and editing what you re-
trieve for enhanced tool learning. In Findings of the
Association for Computational Linguistics: NAACL
2024, pages 975–988. Association for Computational
Linguistics.
Yue Huang, Jiawen Shi, Yuan Li, Chenrui Fan, Siyuan
Wu, Qihui Zhang, Yixin Liu, Pan Zhou, Yao Wan,
Neil Zhenqiang Gong, and Lichao Sun. 2024c. Meta-
tool benchmark for large language models: Decid-
ing whether to use tools and which to use.
In
The Twelfth International Conference on Learning
Representations.
Kalervo Järvelin and Jaana Kekäläinen. 2002. Cu-
mulated gain-based evaluation of ir techniques.
ACM Transactions
Information Systems,
20(4):422–446.
on
Vladimir Karpukhin, Barlas Oguz, Sewon Min, Patrick
Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and
Wen-tau Yih. 2020. Dense passage retrieval for open-
domain question answering. In Proceedings of the
2020 Conference on Empirical Methods in Natural
Language Processing (EMNLP), pages 6769–6781.
Association for Computational Linguistics.
Minghao Li, Yingxiu Zhao, Bowen Yu, Feifan Song,
Hangyu Li, Haiyang Yu, Zhoujun Li, Fei Huang,
and Yongbin Li. 2023. API-bank: A compre-
hensive benchmark for tool-augmented LLMs. In
Proceedings of the 2023 Conference on Empirical
Methods in Natural Language Processing, pages
3102–3116. Association for Computational Linguis-
tics.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Man-
dar Joshi, Danqi Chen, Omer Levy, Mike Lewis,
Luke Zettlemoyer, and Veselin Stoyanov. 2019.
Roberta: A robustly optimized bert pretraining ap-
proach. arXiv preprint arXiv:1907.11692.
Aman Madaan, Niket Tandon, Prakhar Gupta, Skyler
Hallinan, Luyu Gao, Sarah Wiegreffe, Uri Alon,
Nouha Dziri, Shrimai Prabhumoye, Yiming Yang,
Shashank Gupta, Bodhisattwa Prasad Majumder,
Katherine Hermann, Sean Welleck, Amir Yazdan-
bakhsh, and Peter Clark. 2023. Self-refine: Itera-
tive refinement with self-feedback. In Advances in
Neural Information Processing Systems, volume 36,
pages 46534–46594. Curran Associates, Inc.
Feiteng Mu, Yong Jiang, Liwen Zhang, Chu Liu, Wenjie
Li, Pengjun Xie, and Fei Huang. 2024. Adaptive
selection for homogeneous tools: An instantiation in
the rag scenario. arXiv preprint arXiv:2406.12429.
Shishir G Patil, Tianjun Zhang, Xin Wang, and
Joseph E Gonzalez. 2023. Gorilla: Large language
model connected with massive apis. arXiv preprint
arXiv:2305.15334.
Yujia Qin, Shengding Hu, Yankai Lin, Weize Chen,
Ning Ding, Ganqu Cui, Zheni Zeng, Yufei Huang,
Chaojun Xiao, Chi Han, et al. 2023a.
Tool
learning with foundation models.
arXiv preprint
arXiv:2304.08354.
Yujia Qin, Shihao Liang, Yining Ye, Kunlun Zhu, Lan
Yan, Yaxi Lu, Yankai Lin, Xin Cong, Xiangru Tang,
Bill Qian, et al. 2023b. Toolllm: Facilitating large
language models to master 16000+ real-world apis.
arXiv preprint arXiv:2307.16789.
Changle Qu, Sunhao Dai, Xiaochi Wei, Hengyi Cai,
Shuaiqiang Wang, Dawei Yin, Jun Xu, and Ji-Rong
Wen. 2024a. Tool learning with large language mod-
els: A survey. arXiv preprint arXiv:2405.17935.
Changle Qu, Sunhao Dai, Xiaochi Wei, Hengyi Cai,
Shuaiqiang Wang, Dawei Yin, Jun Xu, and Ji-Rong
Wen. 2024b. Towards completeness-oriented tool re-
trieval for large language models. In Proceedings
of the 33rd ACM International Conference on
Information and Knowledge Management.
Nils Reimers and Iryna Gurevych. 2019. Sentence-
BERT: Sentence embeddings using Siamese BERT-
networks. In Proceedings of the 2019 Conference on
Empirical Methods in Natural Language Processing
and the 9th International Joint Conference on
Natural Language Processing (EMNLP-IJCNLP),
pages 3982–3992. Association for Computational
Linguistics.
Stephen Robertson and Hugo Zaragoza. 2009. The prob-
abilistic relevance framework: Bm25 and beyond.
Foundations and Trends in Information Retrieval,
3(4):333–389.
Jingqing Ruan, Yihong Chen, Bin Zhang, Zhiwei Xu,
Tianpeng Bao, Hangyu Mao, Ziyue Li, Xingyu Zeng,
Rui Zhao, et al. 2023. Tptu: Task planning and
tool usage of large language model-based ai agents.
In NeurIPS 2023 Foundation Models for Decision
Making Workshop.
Timo Schick, Jane Dwivedi-Yu, Roberto Dessi, Roberta
Raileanu, Maria Lomeli, Eric Hambro, Luke Zettle-
moyer, Nicola Cancedda, and Thomas Scialom. 2023.
Toolformer: Language models can teach themselves
to use tools.
In Advances in Neural Information
Processing Systems, volume 36, pages 68539–68551.
Curran Associates, Inc.
Bhaskar Mitra and Nick Craswell. 2017. Neural
arXiv preprint
models for information retrieval.
arXiv:1705.01509.
Yongliang Shen, Kaitao Song, Xu Tan, Dongsheng Li,
Weiming Lu, and Yueting Zhuang. 2023. Hugging-
gpt: Solving ai tasks with chatgpt and its friends
in hugging face. In Advances in Neural Information
Processing Systems, volume 36, pages 38154–38180.
Curran Associates, Inc.
Qiaoyu Tang, Ziliang Deng, Hongyu Lin, Xianpei Han,
Qiao Liang, and Le Sun. 2023. Toolalpaca: Gener-
alized tool learning for language models with 3000
simulated cases. arXiv preprint arXiv:2306.05301.
Boshi Wang, Hao Fang, Jason Eisner, Benjamin
Van Durme, and Yu Su. 2024a. LLMs in the imag-
inarium: Tool learning through simulated trial and
error. In Proceedings of the 62nd Annual Meeting
of the Association for Computational Linguistics
(Volume 1: Long Papers), pages 10583–10604. As-
sociation for Computational Linguistics.
Xingyao Wang, Zihan Wang, Jiateng Liu, Yangyi Chen,
Lifan Yuan, Hao Peng, and Heng Ji. 2024b. MINT:
Evaluating LLMs in multi-turn interaction with tools
and language feedback. In The Twelfth International
Conference on Learning Representations.
Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak
Shafran, Karthik Narasimhan, and Yuan Cao. 2023.
ReAct: Synergizing reasoning and acting in language
models.
In International Conference on Learning
Representations (ICLR).
Siyu Yuan, Kaitao Song, Jiangjie Chen, Xu Tan,
Yongliang Shen, Kan Ren, Dongsheng Li, and De-
qing Yang. 2024. EASYTOOL: Enhancing LLM-
based agents with concise tool instruction. In ICLR
2024 Workshop on Large Language Model (LLM)
Agents.
Wayne Xin Zhao, Jing Liu, Ruiyang Ren, and Ji-Rong
Wen. 2024. Dense text retrieval based on pretrained
language models: A survey. ACM Transactions on
Information Systems, 42(4):1–60.
Wayne Xin Zhao, Kun Zhou, Junyi Li, Tianyi Tang,
Xiaolei Wang, Yupeng Hou, Yingqian Min, Beichen
Zhang, Junjie Zhang, Zican Dong, et al. 2023. A
survey of large language models. arXiv preprint
arXiv:2303.18223.
Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan
Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin,
Zhuohan Li, Dacheng Li, Eric Xing, Hao Zhang,
Joseph E Gonzalez, and Ion Stoica. 2023.
Judg-
ing llm-as-a-judge with mt-bench and chatbot arena.
Information Processing
In Advances in Neural
Systems, volume 36, pages 46595–46623. Curran
Associates, Inc.
Yuanhang Zheng, Peng Li, Wei Liu, Yang Liu,
ToolR-
and Bin Wang. 2024.
Jian Luan,
erank: Adaptive and hierarchy-aware reranking
for tool retrieval.
In Proceedings of the 2024
Joint International Conference on Computational
Linguistics, Language Resources and Evaluation
(LREC-COLING 2024), pages 16263–16273. ELRA
and ICCL.
| synthetic_cpt | 4 | LiDA_Language-Independent_Data_Augmentation_for_Text_Classification.pdf |
Visualizations and Infographics using Large Language Models
Victor Dibia
Microsoft Research
[email protected]
3
2
0
2
n
u
J
6
]
I
A
.
s
c
[
3
v
7
2
9
2
0
.
3
0
3
2
:
v
i
X
r
a
Abstract
Systems that support users in the automatic
creation of visualizations must address sev-
eral subtasks - understand the semantics of
data, enumerate relevant visualization goals
and generate visualization specifications. In
this work, we pose visualization generation as
a multi-stage generation problem and argue that
well-orchestrated pipelines based on large lan-
guage models (LLMs) and image generation
models (IGMs) are suitable to addressing these
tasks. We present LIDA, a novel tool for gen-
erating grammar-agnostic visualizations and
infographics. LIDA comprises of 4 modules
- A SUMMARIZER that converts data into a
rich but compact natural language summary,
a GOAL EXPLORER that enumerates visualiza-
tion goals given the data, a VISGENERATOR
that generates, refines, executes and filters visu-
alization code and an INFOGRAPHER module
that yields data-faithful stylized graphics using
IGMs. LIDA provides a python api, and a hy-
brid USER INTERFACE (direct manipulation
and multilingual natural language) for interac-
tive chart, infographics and data story genera-
tion. Code and demo are available at this url -
https://microsoft.github.io/lida/
1
Introduction
Visualizations make data accessible by reducing
the cognitive burden associated with extracting in-
sights from large tabular datasets. However, vi-
sualization authoring is a complex creative task,
involving multiple steps. First the user must build
familiarity with the dataset (content and semantics)
and enumerate a set of relevant goals or hypothe-
ses that can be addressed using the data. Next,
users must select the right visualization representa-
tion (marks, transformations and layout) for each
goal. Finally, the user must implement the visu-
alization either as code or using available direct
manipulation interfaces. Each of these steps re-
quire expertise, and can be tedious as well as error
prone for users with limited visualization experi-
ence (novices). Existing research has sought to
address these challenges by automating the visual-
ization (AUTOVIZ) creation process, given a dataset
(Podo et al., 2023). Automation may occur in two
modes: i.) fully automated - the system automati-
cally generates visualizations relevant to the data
ii.) semi-automated - the user specifies their goals
and the system generates visualizations that address
these goals. The former mode is valuable for users
unfamiliar with the data and the latter is valuable
for users with some familiarity with the data and
the visualization task.
Consequently, a successful AUTOVIZ tool must
excel at each of several subtasks - understand the
semantics of the data, enumerate relevant visual-
ization goals and generate visualization specifica-
tions that meet syntax, design, task and perceptual
requirements of these goals (Podo et al., 2023).
Furthermore, given the target demographic (novice
users), such a tool must support the user by offering
natural language (NL) interaction modalities (Mitra et al., 2022;
Narechania et al., 2020; Chen et al., 2022), affor-
dances to control system behavior and sense mak-
ing tools to understand and debug/verify system
behavior. While related work has addressed aspects
of the AUTOVIZ task, there are several known limi-
tations (Podo et al., 2023) such as they: (i) rely on
heuristics that are limited in coverage, challenging
to craft and tedious to maintain (Wongsuphasawat
et al., 2017). (ii) require significant user interac-
tion to generate visualizations (Wongsuphasawat
et al., 2017; Moritz et al., 2018). (iii) implement
automated approaches that offer limited control
over system input and output (Dibia and Demiralp,
2019) (iv) require grammar (or chart type) specific
training data and model architectures (Dibia and
Demiralp, 2019; Luo et al., 2018) for each sub task,
(v) do not consider alternative chart representation
formats such as infographics.
Concurrently, advances in large foundation mod-
Figure 1: LIDA generates visualizations and infographics across 4 modules - data summarization, goal exploration,
visualization generation and infographics generations. Example output from each module is shown.
Figure 2: Example data-faithful infographics and associated style prompts generated with LIDA.
els (Bommasani et al., 2021) have shown state of
the art performance on a variety of creative tasks
such as multilingual text generation, code genera-
tion, image captioning, image generation, and im-
age editing. In this work, we argue that the vast
capabilities of these models can be assembled to ad-
dress the AUTOVIZ task, whilst addressing the lim-
itations of existing approaches. This work makes
the following contributions:
• We present a novel multi-stage, modular ap-
proach (Fig 1) for the automatic generation
of data visualization and infographics using
LLMs1. Specifically, we (i) Efficiently represent
datasets as NL summaries, suitable as ground-
ing context for an LLM to address visualization
tasks. (ii) Generate a set of visualization goals
using LLMs. Importantly, we leverage prompt
engineering to steer the model towards generat-
1This work primarily utilizes the OpenAI gpt-3.5-turbo-x
line of models for text and code generation.
ing correct visualization that follow best prac-
tices (see Appendix C). (iii) Apply LLMs to
generate grammar-agnostic visualization speci-
fication based on generated (or human provided)
goals. (iv) Provide a hybrid interface that sup-
ports traditional direct manipulation controls
(e.g., manually select which fields to explore)
and a rich multilingual NL interface to sup-
port user’s with varied skill/experience. (v) Ap-
ply text-conditioned image generation models
(IGM) in generating stylized infograph-
ics that are both informative (generally faithful
to data), aesthetically pleasing, memorable and
engaging (see section 2.3).
• We introduce metrics for evaluating LLM-
enabled visualization tools, including a metric
for pipeline reliability (visualization error rate -
VER), and visualization quality (self-evaluated
visualization quality - SEVQ) (see section 4).
• We implement our approach in an Open Source
library - LIDA2. LIDA provides a python api,
a web api and a rich web interface useful for
research and practical applications.
Compared to existing AUTOVIZ approaches, LIDA
proposes an implementation that is simplified
(eliminates the need for subtask-specific mod-
els), general (can be adapted to generate visual-
izations in any programming language or gram-
mar), flexible (individual modules can be opti-
mized) and scalable (the system performance will
improve with advances in the underlying LLM).
Taken together, these contributions provide build-
ing blocks towards complex workflows such as
visualization translation, chart question answering
(with applications in accessibility of charts), auto-
mated data exploration and automated data stories.
To the best of our knowledge, LIDA is the first
tool to formulate visualization/infographic genera-
tion as a multi-step generation task and demonstrate
an end-to-end pipeline that addresses a variety of
subtasks.
2 Related Work
LIDA is informed by research on large foundation
models applied to creative tasks across modalities
such as text and images, and advances in automated
generation of visualizations and infographics.
2.1 Foundation Models for Creative Tasks
Advances in large transformer-based (Vaswani
et al., 2017) models trained on massive amounts
of data (terabytes of text and images) have led
to a paradigm shift where a single model demon-
strates state of the art task performance across mul-
tiple data modalities such as text, images, audio
and video. These models, also known as founda-
tion models (Bommasani et al., 2021), have been
shown to be effective for a variety of human cre-
ativity tasks. LLMs like the GPT3 series (Brown
et al., 2020), OPT (Zhang et al., 2022), PALM
(Chowdhery et al., 2022), LAMBDA (Cohen et al.,
2022) learn complex semantics of language allow-
ing them to be effective in tasks such as text sum-
marization, question answering. Code LLMs such
as Codex (Chen et al., 2021), AlphaCode (Li et al.,
2022), InCoder (Fried et al., 2022) show state of
the art performance on a suite of code intelligence
tasks. Finally, models such as CLIP (Radford et al.,
2https://microsoft.github.io/lida/.
2021), DALLE (Ramesh et al., 2022, 2021) and La-
tent Diffusion (Rombach et al., 2022) have shown
state of the art capabilities on image generation
tasks such as image captioning, image editing, and
image generation.
In this work, we adopt insights from Program-
Aided Language models (Gao et al., 2022) - a setup
where LLMs generate programs as the intermedi-
ate reasoning steps, but offload the solution step to
a runtime such as a python interpreter. We lever-
age the language modeling capabilities of LLMs in
generating semantically meaningful visualization
goals, and their code writing capabilities in gener-
ating visualization code which is compiled to yield
visualizations. These visualizations (images) are
then used as input to image generation models in
generating stylized infographics.
2.2 Automated Visualization (AUTOVIZ)
Extant AUTOVIZ research have explored multiple
approaches such as heuristics, task decomposition
or learning based approaches. Heuristics-based ap-
proaches explore properties of data in generating
a search space of potential visualizations (Wong-
suphasawat et al., 2017), ranking these visualiza-
tions based on quality attributes (Luo et al., 2018;
Moritz et al., 2018) and presenting them to the
user. For example, DeepEye (Luo et al., 2018)
enumerates all possible visualizations and classi-
fies/ranks them as “good” or “bad” using a binary
decision tree classifier while Voyager (Wongsupha-
sawat et al., 2017) uses heuristics to enumerate the
space of visualizations. However, heuristics can
be tedious to maintain, may have poor coverage
of the visualization space and does not leverage
information encoded in existing datasets. More
recent work has explored a task decomposition
approach where the AUTOVIZ process is decom-
posed into multiple tasks that are solved individu-
ally via specialized tools and aggregated to yield
visualizations (Narechania et al., 2020; Chen et al.,
2022; Wang et al., 2022b). For example NL4DV
(Narechania et al., 2020) implements a custom
query engine that parses natural language queries,
identifies attributes/tasks and generates Vega-Lite
specifications. A limitation of task decomposition
approaches is that they are bottlenecked by the
implementation performance for each step (e.g.,
limitations with models for disambiguating natural
language queries as seen in NL4DV (Narechania
et al., 2020)). Finally, end-to-end learning-based
approaches seek to automatically learn mappings
from data directly to generated visualizations. For
example, Data2Vis (Dibia and Demiralp, 2019)
(the most relevant work to this study) uses a se-
quence to sequence model that implicitly addresses
AUTOVIZ subtasks by learning a mapping from
raw JSON data sampled from datasets to Vega-Lite
(Satyanarayan et al., 2017) specifications. Some
limitations of current learning approaches is that
they are limited to a single grammar, require cus-
tom models, custom paired training data and train-
ing objectives (Dibia and Demiralp, 2019; Luo
et al., 2018; Chen et al., 2022) for each supported
grammar, and do not provide a path to generating
infographics. Furthermore, they do not provide
mechanisms for fine-grained control of visualiza-
tion output or provide robust error detection and
recovery strategies.
LIDA addresses these limitations in several ways:
(i) Leverages patterns learned by LLMs from mas-
sive language and code dataset, applying this
knowledge to subtasks. (ii) Provides a single gram-
mar-agnostic pipeline that generates visualization
in multiple programming languages and visualiza-
tion grammars.
(iii) Supports natural language
based control of generated visualizations. (iv) lever-
age emergent capabilities of large language models
such chain of thought reasoning to improve reliabil-
ity of generated text/code (Kojima et al., 2022; Wei
et al., 2022; Shi et al., 2022a), model calibration
(Kadavath et al., 2022) (predictions on correctness
probabilities of visualizations) as well as self-con-
sistency (Wang et al., 2022a) in ranking/filtering
results. (v) provides a mechanism for generating
infographics that are data-faithful and aesthetically
pleasing.
(vi) supports a fully automatic mode
where an LLM is used to discover meaningful goal-
s/hypotheses (fields to visualize, questions to ask)
or a semi automatic mode where the user provides
a hypothesis and it generates a visualization.
By choosing to cast visualization/infographic gen-
eration as generation tasks that offloads core prob-
lem solving to LLMs and IGMs, LIDA simplifies the
design and maintenance of such systems.
2.3
Infographics Generation
Infographics (information graphics) are visual arti-
facts that seek to convey complex data-driven nar-
ratives using visual imagery and embellishments
(Harrison et al., 2015). Existing research has shown
that infographics are aesthetically pleasing, engag-
ing and more memorable (Tyagi et al., 2021; Harri-
son et al., 2015; Haroz et al., 2015), at no additional
cost to the user (Haroz et al., 2015). These prop-
erties have driven their applications in domains
like fashion, advertisement, business and general
communications. However, the creation of info-
graphics that convey data insights can be a tedious
process for content creators, often requiring skills
across multiple tools and domains. Research on
infographic generation have mainly explored the
creation of pictographs (Haroz et al., 2015) - replac-
ing the marks on traditional charts with generated
images and learning to extract/transfer styles from
existing pictographs (Shi et al., 2022b). In this
work, we extend this domain to exploring the gener-
ation of both visual marks as well as generating the
entire infographic based on natural language style
descriptions using large image generation models
such as DALLE (Ramesh et al., 2022, 2021) and
Latent Diffusion (Rombach et al., 2022). This ap-
proach also enables user-generated visual styles
and personalization of visualizations to fit user pref-
erences such as color palettes, visual styles, fonts
etc.
3 The LIDA System
LIDA comprises 4 core modules - a SUMMA-
RIZER, a GOAL EXPLORER, a VISGENERATOR
and an INFOGRAPHER (see Fig 1). Each module is
implemented in the LIDA github repo as a python li-
brary with an optional user interface (see Appendix
A).
3.1
SUMMARIZER
Figure 3: The SUMMARIZER module constructs a NL
summary from extracted data properties (atomic types,
field statistics) and an optional LLM enrichment (pre-
dicted field descriptions, semantic types).
LLMs are capable zero shot predictors, able to solve
multiple tasks with little or no guiding examples.
However, they can suffer from hallucination e.g.,
generating text that is not grounded in training data
or the current task. One way to address this is to
augment (Mialon et al., 2023) the LLM with ground-
ing context. Thus, the goal of the summarizer is to
produce an information dense but compact 3 sum-
mary for a given dataset that is useful as grounding
context for visualization tasks. A useful context is
defined as one that contains information an ana-
lyst would need to understand the dataset and the
tasks that can be performed on it. The summary is
implemented in two stages (see Fig 3)
Stage 1 - Base summary generation: We ap-
ply rules in extracting dataset properties includ-
ing atomic types (e.g., integer, string, boolean) us-
ing the pandas library (McKinney, 2010), general
statistics (min, max, # unique values) and a random
non-null list of n samples for each column.
Stage 2 - Summary enrichment: The base sum-
mary is optionally enriched by an LLM or a user
via the LIDA ui to include semantic description of
the dataset (e.g., a dataset on the technical specifi-
cation of cars), and fields (e.g., miles per gallon for
each car) as well as field semantic type prediction
(Zhang et al., 2019).
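To make Stage 1 concrete, the sketch below shows how such a base summary could be assembled with pandas; the function name and the exact keys of the summary dictionary are illustrative assumptions (chosen to mirror the keys shown in Figure 3) rather than LIDA's actual API.

import pandas as pd

def base_summary(csv_path, n_samples=3):
    # Stage 1: rule-based extraction of atomic types, simple statistics and samples.
    df = pd.read_csv(csv_path)
    fields = []
    for col in df.columns:
        series = df[col].dropna()
        properties = {
            "dtype": str(series.dtype),                  # atomic type
            "num_unique_values": int(series.nunique()),
            "samples": series.sample(min(n_samples, len(series))).tolist(),
        }
        if pd.api.types.is_numeric_dtype(series):
            properties["min"], properties["max"] = series.min(), series.max()
        fields.append({"column": col, "properties": properties})
    return {"name": csv_path, "fields": fields}

Stage 2 then passes this dictionary to an LLM (or the user) to add a dataset description, per-field descriptions and semantic types, as illustrated in Figure 3.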
3.2 GOAL EXPLORER
This module generates data exploration goals,
given a summary generated by the SUMMARIZER.
We express goal generation as a multitask genera-
tion problem where the LLM must generate a ques-
tion (hypothesis), a visualization that addresses the
question and rationale (see Fig 4). We find that
requiring the LLM to produce a rationale leads to
more semantically meaningful goals.
Figure 4: A goal generated by LIDA is a JSON data
structure that contains a question, a visualization and a
rationale.
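Spelling out the record in Figure 4, a single generated goal is a small JSON object of the following shape (the values are from the cars example used throughout the paper):

{
  "question": "What is the distribution of Miles_per_Gallon?",
  "visualization": "Histogram of Miles_per_Gallon",
  "rationale": "This tells us about the fuel efficiency of the cars in the dataset and how it is distributed."
}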
3.2.1 VISGENERATOR
The VISGENERATOR generates visualization speci-
fications and is comprised of 3 submodules - a code
scaffold constructor, a code generator and a code
executor.
Code scaffold constructor: Implements a library
of code scaffolds that correspond to programming
3Note: the summary must be compact in order to maximize
the limited context token budget of LLMs.
Figure 5: The VISGENERATOR module constructs vi-
sualization code scaffolds, fills a constrained section
(< stub >) and executes the scaffold.
languages and visualization grammars e.g., python
scaffolds support grammars such as Matplotlib,
GGPlot, Plotly, Altair, Seaborn, and Bokeh. Each
scaffold is an executable program that i.) imports
relevant dependencies ii.) defines an empty func-
tion stub which returns a visualization specification
(see Fig 5a).
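For illustration, an Altair scaffold along the lines of Figure 5 could look as follows; the <stub> marker is the constrained section the code generator is asked to fill in, while the imports and function signature stay fixed.

import altair as alt

def plot(data):
    chart = '''<stub>'''   # the LLM fills this section, e.g. with
                           # alt.Chart(data).mark_point().encode(
                           #     x='Miles_per_Gallon', y='Cylinders')
    return chart

chart = plot(data)         # `data` is supplied by the code executor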
Code generator: Takes a scaffold, a dataset sum-
mary, a visualization goal, and builds a prompt. An
LLM (applied in fill-in-the-middle mode (Bavarian
et al., 2022)) is then used to generate n candidate
visualization code specifications.
Code executor: Post-processes and executes4 the
code specifications as well as filters the results.
LIDA implements several filtering mechanisms to
detect errors, each with latency tradeoffs: (i) gener-
ates a large sample for n with high temperature, dis-
card candidates that do not compile. (ii) apply self
consistency (Wang et al., 2022a) in LLMs where
multiple candidates are generated and the solution
with the highest consensus is selected. (iii) gener-
ate correctness probabilities (Kadavath et al., 2022)
for all candidates and selects the one with the high-
est probability. Note that the last two approaches
are computationally expensive (require multiple
forward passes through an LLM) and are not suit-
able for real time applications. The final output
is a list of visualization specifications (code) and
associated raster images.
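A minimal sketch of the simplest filtering strategy (discard candidates that fail to execute) is shown below; the function and variable names are illustrative, and execution should happen inside a sandbox.

def execute_and_filter(candidate_codes, data):
    charts = []
    for code in candidate_codes:
        scope = {"data": data}
        try:
            exec(code, scope)               # run the generated program (sandbox this in practice)
            charts.append(scope["chart"])   # scaffolds assign the result to a `chart` variable
        except Exception:
            continue                        # discard candidates that do not compile or run
    return charts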
3.2.2 VIZOPS - Operations on Generated
Visualizations
Given that LIDA represents visualizations as code,
the VISGENERATOR also implements submodules
to perform operations on this representation.
Natural language based visualization refine-
ment: Provides a conversational api to iteratively
4Execution in a sandbox environment is recommended.
refine generated code (e.g., translate chart to Hindi
. . . zoom in by 50% etc) which can then be exe-
cuted to generate new visualizations.
Visualization explanations and accessibility:
Generates natural language explanations (valuable
for debugging and sensemaking) as well as acces-
sibility descriptions (valuable for supporting users
with visual impairments).
Visualization code self-evaluation and repair:
Applies an LLM to self-evaluate generated code on
multiple dimensions (see section 4.1.2).
Visualization recommendation: Given some con-
text (goals, or an existing visualization), recom-
mend additional visualizations to the user (e.g., for
comparison, or to provide additional perspectives).
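Each of these operations amounts to another LLM call over the stored visualization code. The sketch below is purely illustrative; the prompt templates and the llm callable are assumptions, not LIDA's documented interface.

def vizops(llm, operation, code, instruction=""):
    # One prompt template per supported operation; the wording is illustrative only.
    prompts = {
        "refine": f"Modify the following visualization code so that: {instruction}\n\n{code}",
        "explain": f"Explain, step by step, what the following visualization code does:\n\n{code}",
        "evaluate": f"Critique the following visualization code for accuracy, encoding and aesthetics:\n\n{code}",
        "recommend": f"Suggest three additional visualizations related to the one produced by this code:\n\n{code}",
    }
    return llm(prompts[operation])

# Example: vizops(llm, "refine", chart_code, "translate the chart labels to Hindi")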
3.3
INFOGRAPHER
This module is tasked with generating stylized
graphics based on output from the VISGENERATOR
module (see Fig 2). It implements a library of vi-
sual styles described in NL that are applied directly
to visualization images. Note that the style library
is editable by the user. These styles are applied in
generating infographics using the text-conditioned
image-to-image generation capabilities of diffusion
models (Rombach et al., 2022), implemented using
the Peacasso library api (Dibia, 2022). An optional
post processing step is then applied to improve the
resulting image (e.g., replace axis with correct val-
ues from visualization, removing grid lines, and
sharpening edges).
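A rough sketch of the image-to-image step, written against the open-source diffusers library rather than the Peacasso api used by LIDA, is shown below; the model id, style prompt and strength value are stand-ins (the low strength follows the range reported in Appendix C.2).

import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16).to("cuda")

base_chart = Image.open("visualization.png").convert("RGB")  # raster output of the VISGENERATOR
infographic = pipe(
    prompt="oil on canvas, impasto",  # a style description from the editable style library
    image=base_chart,
    strength=0.35,                    # low strength keeps the chart structure intact
).images[0]
infographic.save("infographic.png")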
3.4 USER INTERFACE
LIDA implements a user interface that communi-
cates with the core modules over a REST and Web-
socket api. The user interface implements several
views.
Data upload and summarization: This view al-
lows the user to upload a dataset and explore a
sample of rows in the dataset via a table view. A
data upload event triggers a call to the SUMMA-
RIZER and GOAL EXPLORER module and displays
a summary of the dataset and a list of potential
goals. This view also allows the user to option-
ally annotate and refine the generated summary or
curate fields used in the dataset.
Visualization view: This view allows the user to
optionally provide a visualization goal in NL (e.g.,
"what is the fuel efficiency per country?") or se-
lect a generated goal and then displays a generated
visualization . For each visualization, intermedi-
ate output from the models (underlying data sum-
mary, visualization specification, code scaffold) are
shown as explanations to aid in sensemaking, and
debugging(see Fig 9). This view also implements
the VIZOPS capabilities described in Section 3.2.2
(e.g., See the interface for visualization evaluation
in Fig 10). Note that the NL interface inherits the
multilingual language capabilities of the underly-
ing LLM, enabling multilingual NL interaction.
Overall, the combination of these modules result
in a system that is able to implicitly address an
array of data visualization operations such as data
transformation, encoding, mark selection, styling,
layout, and annotation (Wang et al., 2022b).
4 Evaluation
4.1 Evaluation Metrics
Our initial evaluation of LIDA focuses on two high
level metrics - visualization error rates (VER) to pro-
vide signals on the reliability of the LIDA pipeline,
and self-evaluated visualization quality (SEVQ) to
assess the quality of generated visualizations.
4.1.1 Visualization Error Rate (VER)
Visualization error rate is computed as the percent-
age of generated visualizations that result in code
compilation errors. This metric provides critical
insights into the reliability of the LIDA pipeline
and impact of changes to the system (e.g., prompt
engineering or scaffold update).
VER = (E / T) * 100
where E = number of generated visualizations with code compilation errors, and T = total number of generated visualizations.
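As a trivial sketch, the metric is just:

def visualization_error_rate(num_errors, num_total):
    # Percentage of generated visualizations whose code fails to compile/execute.
    return 100.0 * num_errors / num_total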
4.1.2 Self-Evaluated Visualization Quality
(SEVQ)
Recent work shows LLMs like GPT-4 encode broad
world knowledge (OpenAI, 2023), can assess the
quality of their output (Kadavath et al., 2022; Lin
et al., 2022) and can approximate human judge-
ments for tasks such as summarization (Liu et al.,
2023). Our observations applying GPT3.5/GPT-
4 to visualization tasks suggest similar results.
Specifically, GPT-4 has learned to encode some
visualization best practices and can apply these in
generating critiques of visualization code across
multiple dimensions. Thus, to evaluate visualiza-
tion quality, we compute an SEVQ metric by ap-
plying GPT-4 in assessing the quality of gener-
ated visualizations. Specifically, we task GPT-4
with scoring generated visualization code (a nu-
meric value from 1-10 and a rationale) across 6
dimensions - code accuracy, data transformation,
goal compliance, visualization type, data encoding,
and aesthetics. These dimensions are informed by
existing literature on visualization generation/rec-
ommendation e.g., Wang et al. (2022b) outline 6
visualization tasks including data transformation,
encoding, marks, styling, layout and annotation,
while (Moritz et al., 2018) codify constraints for
visualization quality across expressivity (does it
convey the facts of the data) and effectiveness (is
the information more readily perceived compared
to other visualizations) criteria. Additional details
on prompts used for each dimension are provided
in Appendix B.
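A hedged sketch of how SEVQ could be computed is given below; the actual prompts live in the project repository, so the prompt wording, model name and response parsing here are assumptions.

from statistics import mean
from openai import OpenAI

DIMENSIONS = ["code accuracy", "data transformation", "goal compliance",
              "visualization type", "data encoding", "aesthetics"]

def sevq(code, goal, model="gpt-4"):
    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
    scores = []
    for dim in DIMENSIONS:
        prompt = (f"Given the visualization goal '{goal}', score the following "
                  f"visualization code on {dim} from 1 to 10. Reply with only the number.\n\n{code}")
        reply = client.chat.completions.create(
            model=model, messages=[{"role": "user", "content": prompt}])
        scores.append(float(reply.choices[0].message.content.strip()))
    return mean(scores)  # SEVQ is the average over the six dimensions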
4.2 Evaluation Benchmark Settings
Our initial benchmark is based on 57 datasets
sourced from the vega datasets repository5. For
each dataset, LIDA is tasked with generating 5 goals
and 1 visualization per goal across multiple gram-
mars6. For reproducibility, we set temperature = 0
and number of samples n = 1 for the LLM. A
gallery of the generated evaluation visualizations
can be viewed on the LIDA project page.
4.3 Evaluation and Ablation Study Results
Figure 6: Results from an ablation study on the impact
of data summarization strategies on visualization error
rate (VER) metric.
Overall, we find that LIDA is able to generate
visualizations with a low error rate (VER = 3.5%).
We also conduct an ablation study to inform on
the impact of the SUMMARIZER across the fol-
5https://github.com/vega/vega-datasets
6 LIDA is given a single try for each step. In theory, the
error rates can be driven to zero, by recursively applying the
visualization self-evaluation and self-repair modules.
lowing conditions - (i) no_enrich: a base sum-
mary with no enrichment (see Section 3.1), (ii) en-
rich: summary with LLM enrichment, (iii) schema:
only field names, i.e., schema as summary, and
(iv) no_summary: no summary. Results show that
including a summary leads to reduced error rate
compared to simply adding field names (schema)
as summary. We also find that enriching the base
summary with an LLM has less of an effect on
VER (with variations across visualization grammar),
and an expressive, well-represented grammar like
Seaborn having lower VER. These results are sum-
marized in Figure 6. We also find that the SEVQ
metric is valuable in identifying semantic quality
issues with generated visualizations. For example,
Fig 10 shows an example where the user has re-
quested a pie chart, and the LIDA self-evaluation
module critiques this visualization using the SEVQ
metric, providing a rationale for why a bar chart
is more effective (see Fig 10), with the option to
automatically repair the visualization.
5 Conclusion
In this work, we formulate visualization generation
as a multi-stage text (and code) generation problem
that can be addressed using large language mod-
els. We present LIDA - a tool for the automatic
generation of grammar-agnostic visualizations and
infographics. LIDA addresses limitations of cur-
rent automatic visualization systems - automatic
generation of hypothesis/goals given datasets, con-
versational interface for controllable visualization
generation and refinement, support for multiple vi-
sualization grammars using the same pipeline and
the ability to generate infographics. LIDA is effec-
tive compared to state of the art systems (see ex-
ample gallery of generated visualizations); it offers
a simplified system implementation and leverages
the immense language modeling and code genera-
tion capabilities of LLMs in implicitly solving com-
plex visualization subtasks. Finally, we introduce
metrics for assessing reliability (visualization error
rate - VER) and visualization quality (self-evaluated
visualization quality -SEVQ) for LLM-enabled vi-
sualization tools. We hope modules implemented
in LIDA will serve as useful building blocks in en-
abling complex creative workflows such as visual-
ization translation, chart question answering(with
applications in accessibility of charts), automated
data exploration and automated storytelling.
(Figure 6 reports VER per summary type for matplotlib and seaborn; GPT-3.5, n=2280.)
6 Limitations
While LIDA demonstrates clear advances in how
we can support users in authoring visualizations
and infographics, there are several limitations that
offer a natural avenue for future research.
Low Resource Grammars: The problem formu-
lation introduced in LIDA depends on the under-
lying LLMs having some knowledge of visualiza-
tion grammars as represented in text and code
in its training dataset (e.g., examples of Altair,
Vega, Vega-Lite, GGPLot, Matplotlib, represented
in Github, Stackoverflow, etc.). For visualization
grammars not well represented in these datasets
(e.g., tools like Tableau, PowerBI, etc., that have
graphical user interfaces as opposed to code repre-
sentations), the performance of LIDA may be lim-
ited without additional model fine-tuning or transla-
tion. Furthermore, performance may be limited for
complex tasks (e.g., tasks requiring complex data
transformations) beyond the expressive capabilities
of specific grammars. Further research is needed
to: i.) study effects of strategies like task disam-
biguation ii.) impact of task complexity and choice
of programming language/grammar on performance.
Deployment and Latency: Large language mod-
els (e.g., GPT3.5 used in this work) are computa-
tionally expensive and require significant compute
resources to deploy at low latency. These costs can
prove to be impractical for real-world application.
In addition, the current setup includes a code ex-
ecution step which is valuable for verification but
increases deployment complexity (requires a sand-
box). Thus, there is opportunity to: i.) train smaller
capable LLMs (Touvron et al., 2023) finetuned on
a curated dataset of programming languages and
visualization grammars .ii) design vulnerability mit-
igation approaches such as limiting program scope
or generating only input parameters for visualiza-
tion grammar compilers.
Explaining System Behavior: The approach dis-
cussed in this paper simplifies the design of vi-
sualization authoring systems, but also inherits
interpretability challenges associated with large
language models. While LIDA offers intermedi-
ate outputs of the model (e.g., generated code and
specifications) as explanations, as well as post-hoc
explanations of generated code (see section 3.2.2),
there is a need for further research in explaining
system behavior (conditions when they are needed)
and providing actionable feedback to the user.
System Evaluation: Benchmarking LLM’s on cre-
ativity tasks can be challenging. While the current
study introduces metrics for evaluating reliability
(VER) and visualization quality (SEVQ) (see section
4), there is a need for more comprehensive bench-
marks on a variety of datasets and visualization
grammars. Furthermore, there are research oppor-
tunities to i.) study and quantify the capabilities of
LLMs in encoding and applying visualization best
practices ii.) conduct empirical studies that evalu-
ate model behavior, mapping out failure cases and
proposing mitigations iii.) qualitatively study the
impact of tools like LIDA on user creativity while
authoring visualizations.
Acknowledgements
This manuscript has benefited from comments
and discussions with members of the HAX group
(Saleema Amershi, Adam Fourney, Gagan Bansal),
VIDA group (Steven Drucker, Dan Marshall),
Bongshing Lee, Rick Barraza and others at Mi-
crosoft Research.
References
Mohammad Bavarian, Heewoo Jun, Nikolas Tezak,
John Schulman, Christine McLeavey, Jerry Tworek,
and Mark Chen. 2022. Efficient training of lan-
guage models to fill in the middle. arXiv preprint
arXiv:2207.14255.
Rishi Bommasani, Drew A Hudson, Ehsan Adeli,
Russ Altman, Simran Arora, Sydney von Arx,
Michael S Bernstein, Jeannette Bohg, Antoine Bosse-
lut, Emma Brunskill, et al. 2021. On the opportuni-
ties and risks of foundation models. arXiv preprint
arXiv:2108.07258.
Tom Brown, Benjamin Mann, Nick Ryder, Melanie
Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind
Neelakantan, Pranav Shyam, Girish Sastry, Amanda
Askell, et al. 2020. Language models are few-shot
learners. Advances in neural information processing
systems, 33:1877–1901.
Mark Chen, Jerry Tworek, Heewoo Jun, Qiming
Yuan, Henrique Ponde de Oliveira Pinto, Jared Ka-
plan, Harri Edwards, Yuri Burda, Nicholas Joseph,
Greg Brockman, et al. 2021. Evaluating large
language models trained on code. arXiv preprint
arXiv:2107.03374.
Qiaochu Chen, Shankara Pailoor, Celeste Barnaby,
Abby Criswell, Chenglong Wang, Greg Durrett, and
I¸sil Dillig. 2022. Type-directed synthesis of vi-
sualizations from natural language queries. Pro-
ceedings of the ACM on Programming Languages,
6(OOPSLA2):532–559.
Aakanksha Chowdhery, Sharan Narang, Jacob Devlin,
Maarten Bosma, Gaurav Mishra, Adam Roberts,
Paul Barham, Hyung Won Chung, Charles Sutton,
Sebastian Gehrmann, et al. 2022. Palm: Scaling
language modeling with pathways. arXiv preprint
arXiv:2204.02311.
Aaron Daniel Cohen, Adam Roberts, Alejandra Molina,
Alena Butryna, Alicia Jin, Apoorv Kulshreshtha,
Ben Hutchinson, Ben Zevenbergen, Blaise Hilary
Aguera-Arcas, Chung ching Chang, Claire Cui,
Cosmo Du, Daniel De Freitas Adiwardana, De-
hao Chen, Dmitry (Dima) Lepikhin, Ed H. Chi,
Erin Hoffman-John, Heng-Tze Cheng, Hongrae Lee,
Igor Krivokon, James Qin, Jamie Hall, Joe Fen-
ton, Johnny Soraker, Kathy Meier-Hellstern, Kris-
ten Olson, Lora Mois Aroyo, Maarten Paul Bosma,
Marc Joseph Pickett, Marcelo Amorim Menegali,
Marian Croak, Mark Díaz, Matthew Lamm, Maxim
Krikun, Meredith Ringel Morris, Noam Shazeer,
Quoc V. Le, Rachel Bernstein, Ravi Rajakumar, Ray
Kurzweil, Romal Thoppilan, Steven Zheng, Taylor
Bos, Toju Duke, Tulsee Doshi, Vincent Y. Zhao,
Vinodkumar Prabhakaran, Will Rusch, YaGuang Li,
Yanping Huang, Yanqi Zhou, Yuanzhong Xu, and
Zhifeng Chen. 2022. Lamda: Language models for
dialog applications. In arXiv.
Victor Dibia. 2022. Interaction design for systems that
integrate image generation models: A case study with
peacasso.
Victor Dibia and Ça˘gatay Demiralp. 2019. Data2vis:
Automatic generation of data visualizations us-
ing sequence-to-sequence recurrent neural networks.
IEEE computer graphics and applications, 39(5):33–
46.
Daniel Fried, Armen Aghajanyan, Jessy Lin, Sida Wang,
Eric Wallace, Freda Shi, Ruiqi Zhong, Wen-tau Yih,
Luke Zettlemoyer, and Mike Lewis. 2022. Incoder:
A generative model for code infilling and synthesis.
arXiv preprint arXiv:2204.05999.
Luyu Gao, Aman Madaan, Shuyan Zhou, Uri Alon,
Pengfei Liu, Yiming Yang, Jamie Callan, and Gra-
ham Neubig. 2022. Pal: Program-aided language
models. arXiv preprint arXiv:2211.10435.
Steve Haroz, Robert Kosara, and Steven L Franconeri.
2015. Isotype visualization: Working memory, per-
formance, and engagement with pictographs. In Pro-
ceedings of the 33rd annual ACM conference on hu-
man factors in computing systems, pages 1191–1200.
Lane Harrison, Katharina Reinecke, and Remco Chang.
2015. Infographic aesthetics: Designing for the first
impression. In Proceedings of the 33rd Annual ACM
conference on human factors in computing systems,
pages 1187–1190.
Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yu-
taka Matsuo, and Yusuke Iwasawa. 2022. Large lan-
guage models are zero-shot reasoners. arXiv preprint
arXiv:2205.11916.
Yujia Li, David Choi, Junyoung Chung, Nate Kushman,
Julian Schrittwieser, Rémi Leblond, Tom Eccles,
James Keeling, Felix Gimeno, Agustin Dal Lago,
et al. 2022. Competition-level code generation with
alphacode. arXiv preprint arXiv:2203.07814.
Stephanie Lin, Jacob Hilton, and Owain Evans. 2022.
Teaching models to express their uncertainty in
words. arXiv preprint arXiv:2205.14334.
Yang Liu, Dan Iter, Yichong Xu, Shuohang Wang,
Ruochen Xu, and Chenguang Zhu. 2023. Gpteval:
Nlg evaluation using gpt-4 with better human align-
ment. arXiv preprint arXiv:2303.16634.
Yuyu Luo, Xuedi Qin, Nan Tang, Guoliang Li, and
Xinran Wang. 2018. Deepeye: Creating good data
visualizations by keyword search. In Proceedings of
the 2018 International Conference on Management
of Data, SIGMOD, pages 1733–1736.
Wes McKinney. 2010. Data structures for statistical computing in Python. In Proceedings of the 9th Python in Science Conference, pages 51–56.
Grégoire Mialon, Roberto Dessì, Maria Lomeli, Christo-
foros Nalmpantis, Ram Pasunuru, Roberta Raileanu,
Baptiste Rozière, Timo Schick, Jane Dwivedi-Yu,
Asli Celikyilmaz, et al. 2023. Augmented language
models: a survey. arXiv preprint arXiv:2302.07842.
Rishab Mitra, Arpit Narechania, Alex Endert, and John
Stasko. 2022. Facilitating conversational interaction
in natural language interfaces for visualization. In
2022 IEEE Visualization and Visual Analytics (VIS),
pages 6–10. IEEE.
Dominik Moritz, Chenglong Wang, Greg L Nelson,
Halden Lin, Adam M Smith, Bill Howe, and Jef-
frey Heer. 2018. Formalizing visualization design
knowledge as constraints: Actionable and extensible
models in draco. IEEE transactions on visualization
and computer graphics, 25(1):438–448.
Arpit Narechania, Arjun Srinivasan, and John Stasko.
2020. NL4DV: A toolkit for generating analytic specifications for data visualization from natural language queries. IEEE Transactions on Visualization and Computer Graphics, 27(2):369–379.
OpenAI. 2023. Gpt-4 technical report.
Saurav Kadavath, Tom Conerly, Amanda Askell, Tom Henighan, Dawn Drain, Ethan Perez, Nicholas Schiefer, Zac Hatfield Dodds, Nova DasSarma, Eli Tran-Johnson, et al. 2022. Language models (mostly) know what they know. arXiv preprint arXiv:2207.05221.
Luca Podo, Bardh Prenkaj, and Paola Velardi. 2023. Ma-
chine learning for visualization recommendation sys-
tems: Open challenges and future directions. arXiv
preprint arXiv:2302.00569.
Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya
Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sas-
try, Amanda Askell, Pamela Mishkin, Jack Clark,
et al. 2021. Learning transferable visual models from
natural language supervision. In International confer-
ence on machine learning, pages 8748–8763. PMLR.
Aditya Ramesh, Prafulla Dhariwal, Alex Nichol, Casey
Chu, and Mark Chen. 2022. Hierarchical text-
conditional image generation with clip latents. arXiv
preprint arXiv:2204.06125.
Aditya Ramesh, Mikhail Pavlov, Gabriel Goh, Scott
Gray, Chelsea Voss, Alec Radford, Mark Chen, and
Ilya Sutskever. 2021. Zero-shot text-to-image generation. In International Conference on Machine Learning, pages 8821–8831. PMLR.
Robin Rombach, Andreas Blattmann, Dominik Lorenz,
Patrick Esser, and Björn Ommer. 2022. High-resolution image synthesis with latent diffusion models. In 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 10674–10685.
Arvind Satyanarayan, Dominik Moritz, Kanit Wong-
suphasawat, and Jeffrey Heer. 2017. Vega-lite: A
grammar of interactive graphics. IEEE TVCG (Proc.
InfoVis).
Freda Shi, Mirac Suzgun, Markus Freitag, Xuezhi Wang,
Suraj Srivats, Soroush Vosoughi, Hyung Won Chung,
Yi Tay, Sebastian Ruder, Denny Zhou, et al. 2022a.
Language models are multilingual chain-of-thought
reasoners. arXiv preprint arXiv:2210.03057.
Yang Shi, Pei Liu, Siji Chen, Mengdi Sun, and Nan
Cao. 2022b. Supporting expressive and faithful pic-
torial visualization design with visual style transfer.
IEEE Transactions on Visualization and Computer
Graphics, 29(1):236–246.
Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier
Martinet, Marie-Anne Lachaux, Timothée Lacroix,
Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal
Azhar, Aurelien Rodriguez, Armand Joulin, Edouard
Grave, and Guillaume Lample. 2023. Llama: Open
and efficient foundation language models.
Anjul Tyagi, Jian Zhao, Pushkar Patel, Swasti Khu-
rana, and Klaus Mueller. 2021. User-centric semi-
automated infographics authoring and recommenda-
tion. arXiv preprint arXiv:2108.11914.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob
Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz
Kaiser, and Illia Polosukhin. 2017. Attention is all
you need. Advances in neural information processing
systems, 30.
Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le, Ed Chi, and Denny Zhou. 2022a. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:2203.11171.
Yun Wang, Zhitao Hou, Leixian Shen, Tongshuang Wu,
Jiaqi Wang, He Huang, Haidong Zhang, and Dong-
mei Zhang. 2022b. Towards natural language-based
visualization authoring. IEEE Transactions on Visu-
alization and Computer Graphics, 29(1):1222–1232.
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten
Bosma, Ed Chi, Quoc Le, and Denny Zhou. 2022.
Chain of thought prompting elicits reasoning in large
language models. arXiv preprint arXiv:2201.11903.
Kanit Wongsuphasawat, Zening Qu, Dominik Moritz,
Riley Chang, Felix Ouk, Anushka Anand, Jock
Mackinlay, Bill Howe, and Jeffrey Heer. 2017. Voy-
ager 2: Augmenting visual analysis with partial view
specifications. In ACM CHI.
Dan Zhang, Yoshihiko Suhara, Jinfeng Li, Madelon
Hulsebos, Ça˘gatay Demiralp, and Wang-Chiew Tan.
2019. Sato: Contextual semantic type detection in
tables. arXiv preprint arXiv:1911.06311.
Susan Zhang, Stephen Roller, Naman Goyal, Mikel
Artetxe, Moya Chen, Shuohui Chen, Christopher De-
wan, Mona Diab, Xian Li, Xi Victoria Lin, et al. 2022.
Opt: Open pre-trained transformer language models.
arXiv preprint arXiv:2205.01068.
A The LIDA Library
LIDA is implemented as a python library with mod-
ules for each of the components described in Sec-
tion 3. The library is available on github7 and can
be installed using pip - pip install lida. The library
provides a python api, web api for integration into
other applications, and a command line interface. It
also provides a web-based user interface for users
to interact with LIDA (Fig 10, 9).
Figure 7: Example usage of LIDA shows how to generate
a summary, visualization goals, code specifications and
execute the code to generate visualizations.
B Self-Evaluated Visualization Quality
(SEVQ) Prompts
For the SEVQ metric, we use GPT-4 to assess visu-
alization quality by scoring generated visualization
7https://github.com/microsoft/lida
# pip install lida
from lida.modules import Manager

lida = Manager()
summary = lida.summarize("data/cars.csv")
goals = lida.generate_goals(summary, n=1)
vis_specs = lida.generate_viz(summary=summary, goal=goals[0])
charts = lida.execute_viz(code_specs=vis_specs, data=lida.data, summary=summary)
print(charts)
Figure 8: In the data upload section of the LIDA UI, users can select a grammar of choice and upload a dataset. A
dataset upload event triggers a goal generation as well as visualization generation tasks.
Figure 9: The visualization generation section of the LIDA UI enables the user to i.) specify their overall goal in
natural language and generate visualizations ii.) inspect, edit and execute generated code iii.) view the generated
visualization. iv.) perform operations on generated code e.g., refine, explain, evaluate and recommend visualizations.
Figure 10: The self-evaluation module in LIDA is used to evaluate/critique a generated visualization, providing
scores across 6 dimensions with rationale. In this case, the visualization contains a pie chart, and a bar chart is
recommended as an alternative.
code across the 6 task dimensions - code accuracy,
data transformation, goal compliance, visualization
type, data encoding, and aesthetics. These dimen-
sions are implemented as prompts to an LLM 8,
which then generates a score between 1-10 for each
dimension. The final SEVQ score is the average of
the 6 scores. A sketch of the prompts used for each
dimension are enumerated in table 1.
C Design Reflections
Building a system that leverages foundation models
(text and images) involves engineering decisions
across a wide design space. In this section, we
briefly reflect on some of the design choices we
made for LIDA components and the tradeoffs we
considered.
C.1 Prompt Engineering
We explored multiple approaches to building
prompts that maximized the probability of the LLM
solving each subtask.
• SUMMARIZER: We found that improving the
richness of the summary (qualitative NL de-
scription, including semantic types) was criti-
cal to improved quality of generated goals and
visualization code. Implementation-wise, we began with a manually crafted summary of the data (see Section 3.1), and then enriched it via calls to an LLM and optional user refinement of the summary.

Dimension: Prompt
Code accuracy: Does the code contain bugs, logic errors, syntax errors or typos? How serious are the bugs? How should they be fixed?
Data transformation: Is the data transformed appropriately for the visualization type?
Goal compliance: How well does the code meet the specified visualization goals?
Visualization type: Considering best practices, is the visualization type appropriate for the data and intent? Is there a visualization type that would be more effective in conveying insights?
Data encoding: Is the data encoded appropriately for the visualization type?
Aesthetics: Are the aesthetics of the visualization appropriate and effective for the visualization type and the data?
Table 1: Summary of the evaluation dimensions and the corresponding prompt sketches.
8Exact prompts can be found at the project repository
https://github.com/microsoft/lida.
• GOAL EXPLORER: Providing few shot exam-
ples in the prompts where fields and rationale
are linked via symbols (e.g., plot a histogram of field X vs Y to show relationship between X and Y) nudges the model to use exact dataset field names, and minimizes the occurrence of hallucinated fields. Prompt engineering also provides mechanisms to bake in visualization best practices e.g. avoid pie charts, apply visualization best practices, Imagine you are a highly experienced visualization specialist and data analyst.
• VISGENERATOR: Casting visualization code
generation as a fill-in-the-middle problem (as opposed to free-form completion) ensures the model generates executable code focused on the task. For example, in Fig 5, the model is instructed to generate only the < stub > portion of the code scaffold. We also note that the degrees of freedom allotted to the model (e.g., specifying how much of the scaffold to complete) can influence its ability to address tasks of varied complexity. For example, a scaffold that allows the model to generate data preprocessing code (and includes libraries like statsmodels etc.) allows the model to address tasks that require steps such as data transformation, sampling and statistical analysis before generating visualizations.
• Overall, we found that setting a low temper-
ature (t = 0; generating the most likely visu-
alization) coupled with a per-grammar code
scaffold provided the best results in terms
of yielding code that correctly compiles into
visualization specifications and faithfully ad-
dresses the subtask. We also explored prompt
formulations that addressed multiple tasks to
minimize costs (latency and compute). For
example, summary enrichment is a single call
where the LLM must generate dataset descrip-
tions, field descriptions and semantic types.
C.2
Infographic Generation
We found that setting a low strength parameter
(0.25 < strength < 0.45) for the latent diffusion
model (image-to-image mode) and using parsimo-
nious style prompts resulted in stylized images that
were faithful to the general structure of the origi-
nal visualization, minimizing distorted or irrelevant
imagery. This sort of controlled generation is nec-
essary to avoid the distraction (Haroz et al., 2015)
that can arise from superfluous imagery in info-
graphics.
C.3 Natural Language Interaction
(i) HYBRID INTERFACE: Providing a hybrid interface that allows traditional direct manipulation steps in creating visualizations (e.g., selecting which fields to use), paired with a NL interface, allows users to leverage existing mental models with traditional visualization tools as well as the NL affordances of LIDA. (ii) NL INTERACTION MODES: Beyond generating a base visualization, we also enable operations on generated visualization code (e.g., refinement, explanation, evaluation, recommendation). This builds on insights from Mitra et al. (2022) who propose multi-turn dialog interfaces for visualization authoring towards resolving ambiguities.
Figure 11: The LIDA infographer module supports the generation of data-faithful infographics. Each infographic is
conditioned on a generated visualization as well as natural language style tags which can be used to customize the
appearance of the chart.
|
synthetic_cpt | 1 | A_systematic_evaluation_of_large_language_models_for_biomedical_natural_language_processing_benchmarks_baselines_and_recommendations.pdf | arXiv:2405.08151v2 [cs.CL] 16 May 2024
BENCHMARKING RETRIEVAL-AUGMENTED LARGE
LANGUAGE MODELS IN BIOMEDICAL NLP: APPLICA-
TION, ROBUSTNESS, AND SELF-AWARENESS
Mingchen Li, Zaifu Zhan, Han Yang, Yongkang Xiao, Jiatan Huang, Rui Zhang
University of Minnesota Twin Cities
{li003378,zhan8023,yang8597,xiao0290,huan2460, zhan1386}@umn.edu
ABSTRACT
Large language models (LLMs) have demonstrated remarkable capabilities in vari-
ous biomedical natural language processing (NLP) tasks, leveraging the demon-
stration within the input context to adapt to new tasks. However, LLM is sensitive
to the selection of demonstrations. To address the hallucination issue inherent in
LLM, retrieval-augmented LLM (RAL) offers a solution by retrieving pertinent in-
formation from an established database. Nonetheless, existing research work lacks
rigorous evaluation of the impact of retrieval-augmented large language models on
different biomedical NLP tasks. This deficiency makes it challenging to ascertain
the capabilities of RAL within the biomedical domain. Moreover, the outputs from
RAL are affected by retrieving the unlabeled, counterfactual, or diverse knowledge
that is not well studied in the biomedical domain. However, such knowledge is
common in the real world. Finally, exploring the self-awareness ability is also
crucial for the RAL system. So, in this paper, we systematically investigate the
impact of RALs on 5 different biomedical tasks (triple extraction, link prediction,
classification, question answering, and natural language inference). We analyze the
performance of RALs in four fundamental abilities, including unlabeled robustness,
counterfactual robustness, diverse robustness, and negative awareness. To this
end, we proposed an evaluation framework to assess the RALs’ performance on
different biomedical NLP tasks and establish four different testbeds based on the
aforementioned fundamental abilities. Then, we evaluate 3 representative LLMs
with 3 different retrievers on 5 tasks over 9 datasets. The evaluation indicates
that while RALs enhance the performance of most biomedical datasets used and
demonstrate a degree of counterfactual robustness, they still encounter significant
challenges with unlabeled and counterfactual retrieval information, as well as
negative awareness.
Lately, significant progress has been made in large language models (LLMs) such as ChatGPT 1. To
adapt the LLM to the biomedical domain, several LLMs have been developed, such as MedLLaMA-
13B Wu et al. (2023) and Med-PaLM 2 Singhal et al. (2023). Despite demonstrating impressive
general capabilities Li & Zhang (2023), these models still face significant challenges, including factual
hallucination Ji et al. (2023); Zhang et al. (2023) , and absence of newly uploaded knowledge Ovadia
et al. (2023).
Retrieval-augmented language models Li & Zhang (2023); Lewis et al. (2020); Li & Huang (2023),
in contrast, can retrieve knowledge from an external datastore when needed, potentially reducing
hallucination and improving the new knowledge adaptation ability. The most common method is to use
the designed retriever to retrieve the knowledge that is relevant to the input sentence, subsequently,
the retrieved knowledge, along with the input sentence, is fed into the LLM to assist in generating the
expected output.
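A minimal sketch of this retrieve-then-generate pattern, using BM25 as the retriever, is shown below; the prompt template and the generate() callable are placeholders for whichever LLM is being evaluated.

from rank_bm25 import BM25Okapi

corpus = ["labeled instance 1 ...", "labeled instance 2 ..."]   # the corpus being retrieved from
bm25 = BM25Okapi([doc.lower().split() for doc in corpus])

def retrieval_augmented_generate(query, generate, k=3):
    # Retrieve the k most relevant corpus entries and prepend them to the input.
    retrieved = bm25.get_top_n(query.lower().split(), corpus, n=k)
    prompt = "Retrieved knowledge:\n" + "\n".join(retrieved) + f"\n\nInput: {query}\nOutput:"
    return generate(prompt)   # e.g., a call to LLaMA2-13B or MedLLaMA-13B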
In the question answering (QA) task, a retrieval-augmented language model can access knowledge
from unlabeled corpus2 like PubMed. The QA format allows the unlabeled corpus to potentially
1https://chat.openai.com/
2In this work, corpus refers to the knowledge base that needs to be retrieved.
1
Figure 1: BIORAB features queries on different types of corpora to test the awareness ability and generation ability of RAL.
furnish answers to questions. However, for tasks like triple extraction, incorporating the unlabeled
corpus may yield adverse effects. Counterfactual information, such as error annotations, is prevalent
in labeled corpora, presenting challenges for retrievers in obtaining useful information. Additionally,
LLMs still grapple with generating unreliable information retrieved incorrectly. Incorporating diverse
knowledge holds promise for improving model performance. For example, question answering relies
on extracting information from extensive contexts, thus potentially impacting information extraction
performance. Moreover, the influence of retrieval information from various tasks or datasets on RAL
performance remains underexplored. Self-awareness is crucial for RALs, if RALs can distinguish
between positive retrieval knowledge and negative knowledge, they have the opportunity to rectify
their actions.
These challenges hinder RALs from consistently producing reliable and accurate responses. Unfor-
tunately, in the biomedical domain, only a few studies, such as Almanac Zakka et al. (2024), have
explored the RAL performance in QA, leaving a gap in understanding how these factors affect RAL
across various biomedical NLP tasks. Consequently, there is a pressing need for a comprehensive
evaluation of RALs with different LLMs across biomedical NLP tasks. To this end, this paper
conducts a comprehensive evaluation of RAL for different LLMs on 5 biomedical NLP tasks over 9
datasets. Specifically, we create a new RAL benchmark for biomedical NLP, namely BioRAB, as
shown in Figure 1, and create 4 testbeds to evaluate the mentioned fundamental abilities.
• Unlabeled Robustness denotes the ability of RALs to extract valuable information from
unlabeled retrieval corpus, especially on label-intensive tasks, such as triple extraction,
and classification. For instance, in tasks like relation extraction, the corpus could be a
labeled dataset (such as the training dataset) or unlabeled (training dataset without labels).
If the RAL achieves comparable or superior performance by retrieving the unlabeled dataset
compared to retrieving the labeled dataset, it indicates that labeled databases may not
be necessary for RALs. In the testbed of the Unlabeled Robustness, the corpus contains
instances without labels.
• Counterfactual Robustness denotes whether the RAL could retrieve the right information
from the counterfactual corpus, in our work, the counterfactual instance refers to the
mislabeled annotation. In the testbed of Counterfactual Robustness, the corpus consists of
instances with a certain proportion of incorrect labels.
• Diverse Robustness evaluating whether RALs can achieve better performance by integrating
information from multiple tasks. For instance, the corpus for the classification task is sourced
from relation extraction and question-answering tasks. In the testbed of Diverse Robustness,
the corpus comprises instances from various tasks.
• Negative Awareness refers to the RAL’s ability to discern whether retrieved knowledge
positively or negatively impacts the final output. In the testbed of Negative Awareness, the
corpus comprises instances that are 100% counterfactual instances.
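As a concrete illustration, the unlabeled and counterfactual corpora described above could be derived from a labeled training split roughly as follows; the data format and the flip ratio are assumptions made for this sketch.

import random

def build_testbeds(labeled, label_set, counterfactual_ratio=0.2, seed=0):
    # labeled: list of (text, label) pairs, e.g. a task's training split.
    rng = random.Random(seed)
    unlabeled_corpus = [text for text, _ in labeled]        # Unlabeled Robustness testbed
    counterfactual_corpus = []
    for text, label in labeled:                             # Counterfactual Robustness testbed
        if rng.random() < counterfactual_ratio:
            label = rng.choice([l for l in label_set if l != label])   # inject a mislabeled annotation
        counterfactual_corpus.append((text, label))
    return unlabeled_corpus, counterfactual_corpus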
Utilizing BioRAB, we evaluate its performance across 5 tasks (triple extraction, link prediction,
text classification, question answering, and natural language inference) using 9 biomedical NLP
2
query Retrieval tool labeled corpusdiversity labeled corpus unlabeled corpuscounterfactual corpusBIORABLLMawareness abilitygeneration abilityretrieveUserOutputpossesses or lacksqueryretrieved knowledgedatasets. Furthermore, BioRAB undergoes evaluation with three widely used LLMs: LLaMA2-
13B, MedLLaMA-13B, and LLaMA3-8B, utilizing three commonly employed retrievers (BM25,
Contriever, and MedCPT).
We observed that although RALs can enhance response accuracy in the majority of the biomedical NLP
tasks we evaluated, they encounter notable challenges. Particularly in the question-answering task,
we noted that RALs did not yield significant improvements on the datasets we used. We speculate
that this could be attributed to the limitations of the retrieval corpus, as the training set
(which we used as the retrieval corpus) may not provide sufficient information compared to
Wikipedia or PubMed. Moreover, RALs struggle to generate the desired output when the corpus
lacks labels, compared to a labeled corpus. An interesting finding is that on datasets like
ChemProt and Hetionet, RALs with unlabeled corpora still exhibit improved performance over
the source LLM. Besides, RALs largely lack the capability to extract useful information from counterfactual
corpora and struggle to discern the most relevant information; however, this is not universal, since on
datasets such as ADE and Hetionet the RAL could handle counterfactual
instances. Additionally, when presented with a diverse labeled corpus, RALs do not achieve optimal
performance across tasks, except for the natural language inference task. Finally, we found that despite
being provided with counterfactual examples during training, the LLM was still able to generate correct outputs
in some instances. However, RALs struggle with self-awareness, as they lack the ability to determine
which examples could help improve model performance. The experimental results mentioned above
underscore the necessity of further addressing these issues for RALs. Our contributions are the
following:
• We propose four abilities essential for evaluating retrieval-augmented large language models
in the biomedical domain and introduce a benchmark called BioRAB to assess these
capabilities. To our knowledge, this is the first benchmark tailored specifically to evaluate
these four abilities for RALs in the biomedical domain.
• We evaluate LLMs under the retrieval-augmented setting and identify limitations in the
four key abilities.
• We evaluate RALs on 5 different biomedical tasks over 9 datasets, using 3 LLMs with
3 retrievers.
1 RELATED WORK
1.1 RETRIEVAL-AUGMENTED LANGUAGE MODELS (RALMS)
Many studies Li & Zhang (2023); Lewis et al. (2020); Guu et al. (2020); Ram et al. (2023); Li
et al. (2024a;b), have been proposed to use retrieved information from various knowledge stores
to better understand the text or generate the expected output. For example, KIEST Li & Huang
(2023) dynamically injects retrieved entity and attribute knowledge from the knowledge graph when
generating the entity or attribute in the task of entity stage changes. Lewis et al. (2020) uses the
maximum inner product search (MIPS) to find the top-K documents which are combined with a query
to predict the answers. To enhance retrieval capability, BiomedRAG Li et al. (2024a) proposes a
learned retriever to retrieve chunk information from a built database and improve model
performance, while CTLP Li et al. (2024b) aims to create a condensed transition graph to improve
link prediction performance, where sampled paths between two entities are retrieved to construct
the condensed transition graph. RT Li & Zhang (2023) employs a chain of thought and retrieves
pertinent labeled sentences to enhance few-shot biomedical named entity recognition (NER).
However, no prior work systematically evaluates the effectiveness of retrieval-augmented LLMs on
different biomedical NLP tasks.
1.2 EVALUATION OF RAL
Evaluating RALs has received significant attention due to their remarkable general capability. It
enables researchers to gain a deeper understanding of the limitations and abilities of LLMs. However,
there are few studies Zakka et al. (2024); Xiong et al. (2024) focusing on the evaluation of RALs in
the biomedical domain, primarily centered around question answering tasks. For example, Xiong
et al. (2024) evaluated RAG models on five biomedical QA datasets using the zero-shot
Figure 2: Overview of the four testbeds in BioRAB. n refers to the specific dataset for each task, such
as ade-corpus-v2 (text classification) or PHarmKG (link prediction). In (d), the corpus of n refers
to the set that includes the task datasets but excludes the training set of n. In (e), to distinguish the
difference between "Output" and "True/False", the "Output" is defined as the expected output for
different tasks, for example, in the triple extraction task, the output is the triple. "True/False" refers to
"the retrieved example is a negative example or the retrieved example is not a negative example." In
our work, the corpus of n refers to the training set of n.
setting. Almanac Zakka et al. (2024) evaluates ChatGPT with one retriever and one QA dataset. In
contrast to these prior works, we offer a broader evaluation across four testbeds and 5 tasks spanning
9 datasets.
2 BIORAB: BIOMEDICAL RETRIEVAL-AUGMENTED GENERATION
BENCHMARK
In this section, we begin by outlining the operational flow of the RALs. Following this, we introduce
the four proposed abilities and the construction process of the four corresponding testbeds. Finally, we introduce
the evaluation metrics employed to assess performance.
2.1 RAL WORKING FLOW
To alleviate the hallucination problem, RALs are proposed to retrieve external knowledge from a
corpus and improve LLM performance. Generally, as shown in Figure 2(a), the retrieval corpus
needs to be constructed first. In numerous question-answering RAL models, the corpus primarily
originates from unlabeled open sources such as PubMed or textbooks. However, for some label-
sensitive tasks, such as triple extraction, an unlabeled open source may be insufficient. In our work, the
corpus is defined as the training set of the relevant task. For instance, as illustrated in Figure 2(a), if
"n" denotes PHarmKG, each key corresponds to a sentence (e.g., what is the relationship between the
head entity and tail entity?) in its training set, while the corresponding value denotes the relevant
relation label for that key. In the second step, the retriever is used to obtain the relevant (key,
value) pairs from the corpus based on the input sentence. Finally, the retrieved (key, value) pairs,
together with the input sentence, are fed into the LLM to generate the expected output. For each instance X
of each "n", there are three components: Instruction I, context C, and response R. For example, in
the training dataset of ade-corpus-v2 (classification task), if the label of a sentence S: She had been
administered tacrolimus for prophylaxis of graft-versus-host reaction is False, then in X, I = You are
an excellent linguist. The task is to predict whether this sentence is True or False. Examples: context:
The hemangioma regressed markedly 6 weeks after the procedure and serous retinal detachment
showed marked resolution. response: False, C = S, and R = False.
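As a rough illustration of this retrieve-then-prompt flow, the sketch below builds a (key, value) corpus from a training set, retrieves the most relevant pairs with a sparse BM25 retriever, and assembles a prompt from the instruction I, the retrieved examples, and the context C. This is only a minimal sketch under assumed field names ("sentence", "label"); the llm_generate call stands in for whichever LLM is used, and it is not the authors' implementation:

from rank_bm25 import BM25Okapi

def build_corpus(training_set):
    # Each corpus entry is a (key, value) pair: the key is a training
    # sentence (or question) and the value is its gold label.
    return [(ex["sentence"], ex["label"]) for ex in training_set]

def retrieve(corpus, query, k=1):
    # Sparse BM25 retrieval over the keys; a dense retriever such as
    # Contriever or MedCPT could be substituted here.
    keys = [key for key, _ in corpus]
    bm25 = BM25Okapi([key.lower().split() for key in keys])
    scores = bm25.get_scores(query.lower().split())
    ranked = sorted(range(len(keys)), key=lambda i: scores[i], reverse=True)
    return [corpus[i] for i in ranked[:k]]

def build_prompt(instruction, retrieved, input_sentence):
    # Assemble Instruction I, the retrieved (key, value) examples, and context C.
    examples = "\n".join(f"context: {k}\nresponse: {v}" for k, v in retrieved)
    return f"{instruction}\nExamples:\n{examples}\ncontext: {input_sentence}\nresponse:"

# Usage (llm_generate stands in for the underlying LLM call):
# prompt = build_prompt(I, retrieve(corpus, S, k=1), S)
# output = llm_generate(prompt)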
2.2 FOUR DEFINED ABILITIES OF BIORAB
Although RALs have achieved considerable success in alleviating the hallucination problem, their
abilities in the biomedical domain remain underexplored. Firstly, not all tasks have vast labeled
corpora. While many research endeavors employ the training set as the corpus, they still encounter
limitations when contrasted with larger corpora. If a RAL can achieve similar performance to the
RAL that utilizes a labeled corpus, it would demonstrate the former's ability to operate effectively
without relying on labeled data. Secondly, the RAL may easily be misled by incorrectly labeled
information (as shown in Figure 2(c)). Furthermore, RALs may possess the capability to obtain
useful information from labeled corpora of other tasks (as shown in Figure 2(d)); however, retrieving
knowledge from labeled corpora of other tasks may introduce noise and potentially mislead the
generation process. Finally, when the retriever returns mislabeled (or counterfactual) information,
the RAL may possess the ability to discern that the retrieved knowledge is not conducive to output
generation (as shown in Figure 2(e)). To this end, we built the Biomedical Retrieval-Augmented
Generation Benchmark (BioRAB) to evaluate the abilities of RALs in the biomedical domain, and we
propose 4 testbeds to test these abilities. In the following, we detail these four abilities and how we
construct the testbeds.
2.2.1 UNLABELED ROBUSTNESS (UR)
Not all tasks have a vast labeled retrieval corpus; in that case, the retriever must gather
information from unlabeled corpora, while the RAL should ideally still be able to generate the
expected results. To evaluate the efficacy of RALs in this regard, we introduce our proposed UR
testbed. Specifically, as shown in Figure 2(b), the corpus of "n" is defined as the training set of "n"
without the values (labels). The retriever retrieves the relevant information from this unlabeled corpus.
After that, the retrieved key, together with the input sentence, is fed into the LLM. For example, in the
training dataset of ade-corpus-v2 (classification task), if the label of a sentence S: She had been
administered tacrolimus for prophylaxis of graft-versus-host reaction is False, then in X, I = You are
an excellent linguist. The task is to predict whether this sentence is True or False, retrieved sentence:
A macrophage activation syndrome, possibly related to methotrexate toxicity, developed in a boy with
systemic juvenile rheumatoid arthritis, C = S, and R = False.
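A corresponding sketch for this testbed (again with assumed field names, not the authors' code) simply drops the values when building the corpus, so that only retrieved keys appear in the prompt:

def build_unlabeled_corpus(training_set):
    # Keep only the keys (sentences); the values (labels) are dropped,
    # simulating a retrieval corpus without annotations.
    return [ex["sentence"] for ex in training_set]

def build_ur_prompt(instruction, retrieved_sentences, input_sentence):
    # The prompt contains retrieved sentences only, with no gold responses.
    examples = "\n".join(f"retrieved sentence: {s}" for s in retrieved_sentences)
    return f"{instruction}\n{examples}\ncontext: {input_sentence}\nresponse:"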
2.2.2 COUNTERFACTUAL ROBUSTNESS (CR)
Constructing a high-quality annotated corpus is challenging, as it often involves dealing with
incorrect data labeling. In our work, such mislabeled instances are called counterfactual instances. Under
a mislabeled corpus, the RAL should ideally be able to avoid such negative information.
To validate counterfactual robustness, we introduce our CR testbed. Specifically, as shown in
Figure 2(c), when constructing the corpus of n, we set the negative rate to 20%, 80%, or 100%,
corresponding to 20%, 80%, or 100% of the instances being wrongly labeled. As an example of an incorrect
annotation in a classification dataset with two labels, "True" and "False": if the true
class of an instance is "True," then its incorrect annotation would be "False". Subsequently, the
retriever is tasked with retrieving relevant information from this corpus. The retrieved information,
along with the input sentence, is fed into the LLM to generate the output.
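One way to build such a corrupted corpus, shown here as an illustrative sketch rather than the exact procedure used in this work, is to flip a fixed fraction of values to a different label from the task's label set:

import random

def corrupt_labels(corpus, label_set, rate, seed=0):
    # Flip the value of a fraction `rate` of the (key, value) pairs to a
    # different label, yielding a counterfactual retrieval corpus.
    rng = random.Random(seed)
    corrupted = []
    for key, value in corpus:
        if rng.random() < rate:
            wrong = rng.choice([label for label in label_set if label != value])
            corrupted.append((key, wrong))
        else:
            corrupted.append((key, value))
    return corrupted

# e.g. corrupt_labels(corpus, {"True", "False"}, rate=0.2) for the 20% testbed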
2.2.3 DIVERSE ROBUSTNESS (DR)
Diverse Robustness refers to the ability to incorporate diverse information from various task corpora.
On one hand, in numerous scenarios, the corpus from other tasks may contain valuable information
to aid in generation. For instance, in the task of triple extraction, if a suitable triple extraction corpus
is unavailable, the question-answering corpus may assist in extracting the necessary information. On
the other hand, different tasks may introduce noise that could potentially impede the performance
of the RAL. To generate better output, it is necessary for RAL to have the ability to retrieve diverse
information. Therefore, we introduce our DR testbed. As shown in Figure 2(d), when constructing the corpus
of "n", we incorporate corpora from other tasks. For instance, if "n" refers to ChemProt (triple
extraction task), the corpus of "n" includes corpora from tasks such as GIT (triple extraction task),
PHarmKG (link prediction task), and so on. Next, the retriever is required to extract the pertinent
information from the diverse corpus. Subsequently, the retrieved information, along with the input
sentence, is fed into the LLM to generate the output.
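A minimal sketch of this corpus construction (assuming the task corpora are available as lists of (key, value) pairs keyed by dataset name) pools every corpus except that of the target dataset:

def build_diverse_corpus(task_corpora, target_dataset):
    # Pool the labeled (key, value) pairs of every dataset except the target,
    # so retrieval draws only on knowledge from other tasks and datasets.
    merged = []
    for name, corpus in task_corpora.items():
        if name != target_dataset:
            merged.extend(corpus)
    return merged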
2.2.4 NEGATIVE AWARENESS (NA)
Negative Awareness evaluates the ability of LLMs to discern whether the retrieved information is
negative (i.e., not conducive to the expected output). In real-world scenarios, if the retriever obtains
negative information and the LLM can identify it as such, the LLM can then seek out more useful
information to aid in generation based on this feedback. Therefore, we introduce our NA testbed. As shown
in Figure 2(e), we designate all values in the corpus of "n" as incorrect labels. After obtaining the
retrieved documents from the corpus, the model is expected to produce two types of output. Firstly,
the task-based output: for example, in the triple extraction task, the output should be a triple. Secondly, the
model should also provide a judgment on whether the retrieved knowledge is negative or not.
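Schematically, and using the judging instruction quoted later in Section 2.3.2, the NA prompt can be assembled as below; this is an illustrative sketch, not the exact prompt template of the paper:

NA_JUDGE_INSTRUCTION = (
    "Please determine whether the retrieved example constitutes negative "
    "information. If it is negative, please output False; if it is not "
    "negative, please output True"
)

def build_na_prompt(task_instruction, retrieved_key, retrieved_value, input_sentence):
    # The model must return both the task output and a judgment on whether
    # the retrieved (fully counterfactual) example is negative.
    example = f"context: {retrieved_key}\nresponse: {retrieved_value}"
    return (f"{task_instruction}\nExamples:\n{example}\n"
            f"{NA_JUDGE_INSTRUCTION}\ncontext: {input_sentence}\nresponse:")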
2.3 EVALUATION METRICS
2.3.1 TASK-BASED METRICS
In the triple extraction task, following BiomedRAG Li et al. (2024a), a triple is regarded as correct
when its relation type, head entity, and tail entity are all correct. For example, in the sentence:
Infusion of prostacyclin (PGI2) reportedly attenuates renal ischemic injury in the dog and the rat.,
the triple <Infusion, treats, rat> is regarded as correct while <injury, treats, rat> is not. We evaluate all
the models and report Micro Precision, Recall, and F1-score. For
the text classification, link prediction, and question answering tasks, we follow the same evaluation
metrics as triple extraction. For the natural language inference task, we use the same evaluation
metric (Macro F1) as BioNLI.
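One way to compute these micro-averaged scores over exact triple matches is sketched below; normalization and matching details in the actual evaluation may differ:

def micro_prf(gold_triples, pred_triples):
    # gold_triples / pred_triples: one set of (head, relation, tail) tuples per
    # sentence. A predicted triple counts as correct only if head, relation,
    # and tail all match a gold triple exactly.
    tp = fp = fn = 0
    for gold, pred in zip(gold_triples, pred_triples):
        tp += len(gold & pred)
        fp += len(pred - gold)
        fn += len(gold - pred)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1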
2.3.2 NEGATIVE AWARENESS METRICS
To assess negative awareness in our study, we define a negative instance as a mislabeled instance.
First, we evaluate the model performance when it is given mislabeled examples. For instance, on
the ade-corpus-v2 classification data, with two labels "True" and "False", this evaluation gauges the
performance of "True" or "False" predictions.
Typically, in the RAL framework, if the retrieved example contains the input sentence and its expected
output, the LLM should achieve 100% performance when tested with the input sentence. Even though
all instances in the retrieval corpus are mislabeled, the LLM may still generate the correct output
when utilizing these examples. In our experiments, we also investigate this aspect. Building on this
observation, we delineate two types of negative instances:
• True negatives: the negative instance, when provided to the LLM along with the input
sentence, results in an incorrect output. In this scenario, the number of such input sentences is
denoted as lt.
• False negatives: the negative instance, when presented to the LLM alongside the input
sentence, leads to the correct output. In this case, the number of such input sentences is
represented as lf.
At the same time, we also expect the LLM to output True - the retrieved example is a negative
example or False - the retrieved example is not a negative example, by providing a specific instruction,
Please determine whether the retrieved example constitutes negative information. If it is negative,
please output False; if it is not negative, please output True, for each input sentence. For an input
sentence that has false negative examples, if the LLM outputs False - the retrieved example
is not a negative example, it demonstrates that the LLM recognizes the example as a false negative.
After the judgment of the LLM, the count of such input sentences with "false negative examples" is denoted
as f. For an input sentence that has true negative examples, if the LLM outputs True - the
retrieved example is a negative example, it demonstrates that the LLM recognizes the example as a
true negative. After the judgment of the LLM, the count of such input sentences with "true negative examples"
is denoted as t. Thus, the true negative awareness rate is calculated as t/lt, and the false negative
awareness rate as f/lf.
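Written compactly (the numeric example in the comment is purely hypothetical):

\mathrm{TNA} = \frac{t}{l_t}, \qquad \mathrm{FNA} = \frac{f}{l_f}
% e.g., with l_t = 200 true-negative cases of which the LLM flags t = 54, TNA = 54/200 = 27%.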
3 EXPERIMENTS
In this section, we assess RAL’s performance across various biomedical NLP tasks, analyze its
efficacy on four proposed testbeds, and discuss its abilities.
3.1 SETTINGS AND DATASET
We evaluated three state-of-the-art LLMs: LLaMA2-13B (Touvron et al., 2023), MedLLaMA-
13B (Wu et al., 2023), and LLaMA3-8B3, along with three retrievers: BM25 (Luo et al., 2023),
Contriever (Izacard et al., 2021), and MedCPT (Jin et al., 2023). We considered five biomedical
NLP tasks: triple extraction, link prediction, text classification, question answering, and natural
language inference, across nine datasets: ADE, ChemProt, GIT, PHarmKG, Hetionet, Ade-corpus-v2,
SemClass, MedMCQA, and BioNLI. The data statistics are shown in Table 1. The experiments
were conducted on A100 GPUs.
Task | Dataset | train | test | dev
Triple extraction | ADE (Gurulingappa et al., 2012b) | 4,970 | 2,130 | –
Triple extraction | ChemProt (Taboureau et al., 2010) | 4,001 | 3,355 | 2,366
Triple extraction | GIT (Li et al., 2023) | 3,734 | 465 | 492
Link prediction | PHarmKG (Zheng et al., 2021) | 4,000 | 500 | 500
Link prediction | Hetionet (Himmelstein et al., 2017) | 4,000 | 500 | 500
Text classification | Ade-corpus-v2 (Gurulingappa et al., 2012a) | 4,000 | 500 | 500
Text classification | SemClass (Vasilakes Jake A, 2018) | 2,400 | 600 | 600
Question answering | MedMCQA (Pal et al., 2022) | 34,994 | 4,183 | 4,183
Natural language inference | BioNLI (Bastan et al., 2022) | 5,544 | 6,308 | 12,807
Table 1: Data statistics for the datasets used in this work.
3.1.1 TRIPLE EXTRACTION DATASET
In this paper, we utilized ADE, Chemprot, and GIT as the foundational datasets.
• ADE (Gurulingappa et al., 2012a) is extended from a relation extraction task to a triple extrac-
tion task in this paper. All sentences either describe the effect of the drug or the dose of the
drug. Thus, the triplets consist of (head entity: drug, relation type: effect, tail entity: ef-
fect_description) and (head entity: drug, relation type: dosage, tail entity: dose_description).
Among all triplets, there are only two relation types: effect and dosage.
• ChemProt (Taboureau et al., 2010): The Chemical Protein Interaction Corpus comprises
2432 PubMed abstracts annotated with chemical-protein interactions, encompassing 23
distinct interaction relations. Building upon prior research (Sun et al., 2022), the corpus
exclusively considers sentence-level instances, with a particular focus on five prominent
interaction types for classification: CPR3, CPR4, CPR5, CPR6, CPR9.
• GIT (Li et al., 2023) is a high-quality biomedical triple extraction dataset for non-drug
therapies, characterized by its high-quality annotations and comprehensive coverage of
relation types. It includes 22 relation types from SemMedDB.
3.1.2 LINK PREDICTION
In this paper, we utilized PHarmKG and Hetionet as the foundational datasets in the link prediction
task.
• PHarmKG (Zheng et al., 2021) is a knowledge graph to describe the relationship among
genes, drugs, and diseases. In this work, we aim to predict the four mentioned relation
types (Interactions, Disease-Gene, Disease-Chemical, Chemical-Gene) between two entities.
types (Interactions, Disease-Gene, Disease-Chemical, Chemical-Gene) between two entities. Due to the huge quantity of triples in PHarmKG, we randomly select 4,000 samples
from the source training set for training, 500 samples from the source testing set for testing,
and 500 samples from the source validation set for validation.
• Hetionet (Himmelstein et al., 2017) is an integrative network of disease, which includes 46
relation types. In our paper, we randomly select 4,000 samples from the source training set
for training, 500 samples from the source testing set for testing, and 500 samples from the
source validation set for validation.
3https://github.com/meta-llama/llama3
3.1.3 TEXT CLASSIFICATION
In this paper, we utilized Ade-corpus-v2 and SemClass as the foundational datasets for the text
classification task.
• Ade-corpus-v2 (Gurulingappa et al., 2012a) dataset is designed for classifying whether a
sentence is ADE( Adverse Drug Reaction)-related (True) or not (False). In our paper, we
randomly select 4,000 instances for training, 500 for testing, and 500 for validation.
• SemClass (Vasilakes Jake A, 2018) aims to determine whether the provided triple
belongs to the given sentence or not. It includes two classes, False and True.
3.1.4 QUESTION ANSWERING AND NATURAL LANGUAGE INFERENCE
In this paper, we utilized MedMCQA as the foundational dataset for the question-answering task and
used BioNLI as the dataset for natural language inference.
• MedMCQA (Pal et al., 2022) is a multiple-choice question-answering dataset designed
to address medical entrance exam questions. In this work, we opt for the five-choice
version (A, B, C, D, E).
• BioNLI (Bastan et al., 2022) aims to determine whether the provided hypothesis is consis-
tent or adversarial with respect to the premise.
3.2 COMPARISON BETWEEN RALS WITH BACKBONE LLMS
We first benchmark various LLMs and RALs on the 9 datasets; the results are shown in Table 2 and
Table 3. In the triple extraction task, we observed that RALs outperformed the plain LLMs (i.e., RALs
without a retriever). For example, the RAL (MedLLaMA-13B with
Contriever) enhanced the original MedLLaMA-13B by 22.37% in terms of F1 score on the ADE
dataset. However, RALs still face challenges in entity recognition; for example, on ADE, the plain LLaMA2-
13B achieves the best entity recognition performance compared to LLaMA2-13B with retrievers. Another interesting
finding is that models with more parameters do not necessarily yield the best performance.
For instance, on ChemProt, LLaMA3-8B with Contriever outperforms other RALs with larger
parameter sizes.
RALs also proved effective in improving the performance of LLMs on tasks such
as link prediction, text classification, and natural language inference: LLaMA2-13B with
Contriever enhanced the original LLaMA2-13B by 0.40% F1 on the PHarmKG
dataset; MedLLaMA-13B with BM25 enhanced the original MedLLaMA-13B by 11.86%
F1 on the Hetionet dataset; LLaMA2-13B with MedCPT enhanced the original
LLaMA2-13B by 0.40% F1 on the Ade-corpus-v2 dataset; LLaMA2-13B
with Contriever enhanced the original LLaMA2-13B by 1.67% F1 on the SemClass
dataset; and LLaMA2-13B with MedCPT enhanced the original LLaMA2-13B by 6.59%
Macro-avg F1 on the BioNLI dataset.
On MedMCQA, our findings differ from other works such as Xiong et al. (2024), as we observed that plain LLMs
outperform RALs. We speculate that the reason for this discrepancy
lies in the nature of label-insensitive tasks, where RALs can retrieve from large corpora
such as PubMed White (2020) or other relevant datasets. In our study, however, the corpus is derived
solely from the training set, which may limit the breadth of knowledge accessible to the RALs.
3.3 RESULTS AND DISCUSSION ON TESTBED1: UNLABELED ROBUSTNESS
We evaluate the model performance based on the unlabeled corpus, and the results are shown in
Table 4 and Table 5. We have the following observations:
(1) RALs utilizing the unlabeled corpus exhibit lower performance compared to RALs utilizing
the labeled corpus. RALs demonstrate a strong dependence on the labeled corpus, especially
on label-intensive tasks. For instance, with the labeled corpus, the performance of the RAL surpasses
that of the RAL with the unlabeled corpus by 26.41% on ADE.
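Here and throughout, such gains are absolute differences in F1 points; for ADE, using the F1 values reported in Table 4 (36.06 with the labeled corpus vs. 9.65 with the unlabeled corpus):

\Delta\mathrm{F1} = 36.06 - 9.65 = 26.41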
Dataset
LLM
LLaMA2-13B
triple
Approach
BM25 (Luo et al., 2023)
Contriever (Izacard et al., 2021)
MedCPT (Jin et al., 2023)
No Retriever
Precision Recall
30.88
36.06
30.80
34.79
30.99
36.07
30.81
34.94
ADE
MedLLaMA-13B
LLaMA3-8B
LLaMA2-13B
ChemProt
MedLLaMA-13B
LLaMA3-8B
LLaMA2-13B
GIT
MedLLaMA-13B
LLaMA3-8B
BM25 (Luo et al., 2023)
Contriever (Izacard et al., 2021)
MedCPT (Jin et al., 2023)
No Retriever
BM25 (Luo et al., 2023)
Contriever (Izacard et al., 2021)
MedCPT (Jin et al., 2023)
No Retriever
BM25 (Luo et al., 2023)
Contriever (Izacard et al., 2021)
MedCPT (Jin et al., 2023)
No Retriever
BM25 (Luo et al., 2023)
Contriever (Izacard et al., 2021)
MedCPT (Jin et al., 2023)
No Retriever
BM25 (Luo et al., 2023)
Contriever (Izacard et al., 2021)
MedCPT (Jin et al., 2023)
No Retriever
BM25 (Luo et al., 2023)
Contriever (Izacard et al., 2021)
MedCPT (Jin et al., 2023)
No Retriever
BM25 (Luo et al., 2023)
Contriever (Izacard et al., 2021)
MedCPT (Jin et al., 2023)
No Retriever
BM25 (Luo et al., 2023)
Contriever (Izacard et al., 2021)
MedCPT (Jin et al., 2023)
No Retriever
33.77
35.66
33.30
12.26
27.88
34.44
31.70
9.85
49.44
85.75
86.25
78.58
54.78
86.15
81.33
52.10
70.23
87.13
86.62
23.21
60.81
74.78
75.64
61.76
58.59
65.95
75.65
42.60
62.72
74.76
66.20
75.81
33.76
33.57
29.72
12.16
27.70
31.13
27.04
5.49
48.78
85.05
85.40
76.41
49.53
85.22
80.08
49.04
69.98
86.68
83.07
19.72
54.73
72.37
73.44
56.45
57.20
65.81
74.84
41.51
62.58
67.53
50.97
72.80
head entity
Precision Recall
73.71
79.72
76.71
83.29
73.96
79.76
76.75
83.64
77.06
79.15
79.01
81.87
72.87
78.91
77.99
83.84
65.77
98.42
98.44
98.12
73.30
97.97
98.14
97.65
83.86
97.92
98.06
98.63
76.94
89.44
92.03
84.12
79.30
85.34
90.22
89.18
79.31
89.76
87.15
87.79
77.02
74.51
70.52
81.22
72.39
71.31
66.53
46.76
64.89
97.61
97.47
95.40
66.29
96.92
96.63
91.91
83.57
97.41
94.04
83.80
69.25
86.56
89.35
76.88
77.42
85.16
89.25
86.88
79.14
81.08
67.10
84.30
F1
73.84
79.74
76.73
83.46
77.04
76.76
74.52
81.55
72.63
74.92
71.80
60.04
65.33
98.01
97.95
96.74
69.62
97.44
97.38
94.69
83.71
97.67
96.01
90.61
72.89
87.98
90.67
80.34
78.35
85.25
89.73
88.02
79.22
85.20
75.82
86.01
F1
30.93
36.06
30.81
34.86
33.77
34.58
31.41
12.21
27.79
32.70
29.19
7.05
49.11
85.40
85.82
77.48
52.02
85.69
80.70
50.52
70.10
86.91
84.81
21.32
57.61
73.55
74.52
58.99
57.89
65.88
75.24
42.05
62.65
70.96
57.59
74.27
relation
Precision Recall
94.54
93.76
94.37
94.88
94.85
93.80
94.41
95.29
94.82
94.36
95.48
95.69
94.61
93.14
94.00
96.30
75.56
91.58
91.19
90.70
78.10
91.38
89.60
90.82
84.30
91.26
92.54
91.75
76.70
83.22
83.06
73.53
72.91
75.86
82.39
75.28
79.74
82.14
82.96
85.55
94.77
88.83
85.21
94.93
93.99
84.18
80.19
53.71
74.55
90.84
90.29
88.19
70.62
90.40
88.22
85.48
84.00
90.78
88.74
77.95
69.03
80.54
80.65
67.20
71.18
75.70
81.51
73.33
79.57
74.19
63.87
82.15
F1
94.70
93.78
94.39
95.08
94.80
91.51
90.05
95.31
94.30
88.43
86.55
68.96
75.05
91.21
90.73
89.43
74.17
90.89
88.91
88.07
84.15
91.02
90.60
84.29
72.67
81.86
81.83
70.22
72.03
75.78
81.95
74.29
79.66
77.97
72.17
83.82
tail entity
Precision Recall
48.94
48.97
43.85
41.92
49.10
48.99
43.87
42.10
51.20
49.18
45.77
15.66
45.79
48.57
45.35
13.05
65.51
94.02
95.53
88.21
74.40
95.18
91.96
57.91
91.54
95.56
94.06
26.09
77.78
87.11
89.04
82.94
78.63
83.84
90.00
52.76
77.80
88.33
78.77
86.67
51.18
46.29
40.85
15.54
45.49
43.90
38.69
7.28
64.63
93.25
94.59
85.78
67.28
94.15
90.55
54.50
91.22
95.06
90.20
22.16
70.00
84.30
86.45
75.81
76.77
83.66
89.03
51.40
77.63
79.78
60.65
83.23
F1
49.02
48.98
43.86
42.01
51.19
47.69
43.17
15.60
45.64
46.12
41.75
9.34
65.07
93.63
95.06
86.98
70.66
94.66
91.25
56.15
91.38
95.30
92.09
23.97
73.68
85.68
87.73
79.21
77.69
83.75
89.51
52.07
77.72
83.84
68.53
84.91
Table 2: Results of various approaches for triple extraction on ADE, ChemProt, and GIT. Underline
with shade (green, pink, and blue) indicates the best performance on ADE, ChemProt, and GIT
separately.
Link Prediction
Text Classification
Question Answering Natural Language Inference
LLM
Approach
PHarmKG
Precision Recall
F1
Hetionet
Precision Recall
Ade-corpus-v2
F1
Precision Recall
F1
SemClass
Precision Recall
MedMCQA
F1
Precision Recall
F1
BioNLI
Macro-avg F1
LLaMA2-13B
MedLLaMA-13B
LLaMA3-8B
BM25 (Luo et al., 2023)
Contriever (Izacard et al., 2021)
MedCPT (Jin et al., 2023)
No Retriever
BM25 (Luo et al., 2023)
Contriever (Izacard et al., 2021)
MedCPT (Jin et al., 2023)
No Retriever
BM25 (Luo et al., 2023)
Contriever (Izacard et al., 2021)
MedCPT (Jin et al., 2023)
No Retriever
97.60
98.00
97.40
97.60
95.00
97.00
97.40
97.20
96.80
96.60
97.00
97.20
97.60 97.60
98.00 98.00
97.40 97.40
97.60 97.60
95.00 95.00
97.00 97.00
97.40 97.40
97.20 97.20
96.80 96.80
96.60 96.60
97.00 97.00
97.20 97.20
82.37
77.00
81.60
80.80
90.40
77.20
84.40
78.54
81.80
73.40
83.00
81.80
82.37
77.00
81.60
80.80
90.40
77.20
84.40
78.54
81.80
73.40
83.00
81.80
82.37
77.00
81.60
80.80
90.40
77.20
84.40
78.54
81.80
73.40
83.00
81.80
95.40
96.60
96.80
96.40
95.60
95.60
95.40
95.40
94.80
94.60
95.40
93.80
95.40 95.40
96.60 96.60
96.80 96.80
96.40 96.40
95.60 95.60
95.60 95.60
95.40 95.40
95.40 95.40
94.80 94.80
94.60 94.60
95.40 95.40
93.80 93.80
75.50
79.33
78.50
77.66
72.67
77.66
76.16
64.00
75.50
75.83
74.67
73.16
75.50 75.50
79.33 79.33
78.50 78.50
77.66 77.66
72.67 72.67
77.66 77.66
76.16 76.16
64.00 64.00
75.50 75.50
75.83 75.83
74.67 74.67
73.16 73.16
40.38
35.53
36.78
41.63
37.81
29.82
33.86
46.79
37.73
28.11
31.57
56.93
40.49 40.42
35.52 35.52
36.93 36.80
41.52 41.52
37.96 37.86
29.75 29.77
34.04 33.88
46.41 46.47
38.90 37.79
28.12 28.11
31.82 31.56
55.43 55.91
45.10
35.12
69.21
62.62
48.81
53.07
53.68
61.07
19.17
63.85
56.89
6.71
Table 3: Results of various approaches for link prediction, text classification, question answering,
and natural language inference. Underline with green shade indicates the best performance on each
dataset.
(2) Even without a labeled corpus, RALs still contribute to improving LLM performance on
certain tasks. As shown in Table 4, on ChemProt and Hetionet, a RAL utilizing an unlabeled corpus
could enhance the original LLM's performance by 30.16% and 0.06%, respectively. We speculate that
the LLMs may already possess sufficient knowledge of these specific
datasets, so even unlabeled retrieved context helps.
3.3.1 ERROR ANALYSIS
To better understand the impact of the unlabeled corpus on model generation, this section primarily
analyzes the RAL performance on ADE, GIT, and BioNLI, which exhibited the poorest performance
ADE
Corpus
Unlabeled corpus
Labeled corpus
None
F1
Precision Recall
9.53
9.65
36.06 36.06
34.79 34.86
9.76
36.07
34.94
ChemProt
Precision Recall
F1
75.56
87.13
42.60
69.14 72.21
86.68 86.91
41.51 42.05
GIT
F1
Precision Recall
0.65
1.01
74.84 75.24
41.51 42.05
2.29
75.65
42.60
PHarmKG
Precision Recall
F1
Hetionet
Precision Recall
F1
97.20
98.00
97.60
97.20 97.20
98.00 98.00
97.60 97.60
78.60
90.40
78.54
78.60 78.60
90.40 90.40
78.54 78.54
Table 4: RAL performance on ADE, ChemProt, GIT, PHarmKG and Hetionet on Testbed 1: Unla-
beled Robustness. On Testbed 1, the RAL used is the one demonstrating the best performance on
each dataset; for example, on ADE, the RAL is LLaMA2-13B with Contriever. The labeled corpus is the training
set of each dataset. The unlabeled corpus refers to the training set devoid of labels for each dataset.
Green shade refers to the best performance.
Corpus
Unlabeled corpus
Labeled corpus
None
Ade-corpus-v2
Precision Recall
F1
94.80
96.80
96.40
94.80 94.80
96.80 96.80
96.40 96.40
SemClass
F1
Precision Recall
6.83
6.83
79.33 79.33
77.66 77.66
6.83
79.33
77.66
MedMCQA
Precision Recall F1
BioNLI
F1
35.19
40.38
41.63
35.27 35.19 10.91
40.49 40.42
69.21
41.52 41.52 62.62
Table 5: RAL Performance of Ade-corpus-v2, SemClass, MedMCQA, and BioNLI on Testbed1:
Unlabeled Robustness.
among the nine datasets used. We primarily summarize two error types as shown in Table 6. We
observed that with the unlabeled corpus, RAL tends to generate redundant information and struggles
to accurately predict the output, such as the head entity or relation type in the triple extraction task.
Error type
Dataset
Redundant
information
ADE
GIT
Input sentence
the fourth patient showed rls symptoms
that were initially caused by a 20-mg
daily olanzapine dosage and were later
mitigated when olanzapine was
reduced and ropinirole was administered.
inactivation kinetics of vacterial
glycerol dehydratase (ec 4.2.1.30) in
the course of its reaction with
adenosylcobalamin (adocbl) and
its analogs were investigated..
Expected output
Error output
{olanzapine,
dosage,
20-mg daily}
ground tail entity:
glycerol dehydratase
{olanzapine,
dosage,
rls symptoms that
were initially caused by
a 20-mg dail}
generated tail entity:
adenosylcobalamin..
retrieved sentence:
glycerol dehydratase
BIONLI
–
negative
negative retrieved sentence..
Error
generation
ADE
GIT
four patients receiving high-dose
tamoxifen for greater than
1 year have demonstrated
similar retinal changes.
(tamoxifen,
dosage,
high-dose)
(tamoxifen,
effect,
retinal changes.)
inactivation of serum alkaline
phosphatase by adrenaline
and related substances
.
(adrenaline,
inhibits,
alkaline phosphatase)
(alkaline phosphatase,
interacts with,
adrenaline)
BIONLI
—
positive
negative
Table 6: Error cases of Unlabeled Robustness. In BioNLI, we have not included the input sentence in
this table due to the excessive length of the sentences.
3.4 RESULTS AND DISCUSSION ON TESTBED2: COUNTERFACTUAL ROBUSTNESS
We evaluate the model performance based on different counterfactual rates, and the results are shown
in Table 7 and Table 8. We have the following observations:
(1) A counterfactual corpus poses a challenge for RALs. On ChemProt, counterfactual instances
significantly influence the model performance. For instance, when the counterfactual rate is set to
80%, the triple F1 drops to 47.79%, showcasing a considerable disparity compared to the triple F1
on the labeled corpus. Similar observations are noted on GIT, PHarmKG, Ade-corpus-v2,
SemClass, and BioNLI. This suggests that RALs can be easily misled by a counterfactual corpus.
Corpus
Counterfactual corpus (20%)
Counterfactual corpus (80%)
Counterfactual corpus (100%)
Labeled corpus
None
ADE
ChemProt
Precision Recall F1 Precision Recall
GIT
F1
Precision Recall
F1
PHarmKG
Precision Recall
F1
Hetionet
Precision Recall
F1
41.41
41.94
25.60
36.07
34.94
41.31 41.36
36.90 39.26
25.59 25.59
36.06 36.06
34.79 34.86
74.22
82.96
84.12
87.13
42.60
66.90 70.37
33.57 47.79
83.07 83.59
86.68 86.91
41.51 42.05
74.50
74.84
75.22
75.65
42.60
71.61 73.03
74.84 74.84
74.41 74.81
74.84 75.24
41.51 42.05
97.40
97.80
97.60
98.00
97.60
97.40 97.40
97.80 97.80
97.60 97.60
98.00 98.00
97.60 97.60
94.80
85.26
76.60
90.40
78.60
94.80 94.80
85.26 85.26
76.60 76.60
90.40 90.40
78.60 78.60
Table 7: RAL performance on ADE, ChemProt, GIT, PHarmKG and Hetionet on Testbed 2: Counter-
factual Robustness. On Testbed 2, the RAL used is the one demonstrating the best performance on
each dataset; for example, on ADE, the RAL is LLaMA2-13B with Contriever. The labeled corpus is the training
set of each dataset. The 20/80/100% denotes a labeled corpus where 20/80/100% of the instances are
mislabeled for each dataset. Green shade refers to the best model performance.
Corpus
Counterfactual corpus (20%)
Counterfactual corpus (80%)
Counterfactual corpus (100%)
Labeled corpus
None
Ade-corpus-v2
Precision Recall
F1
SemClass
Precision Recall
F1
MedMCQA
Precision Recall F1
BioNLI
F1
95.80
95.00
96.80
96.80
96.40
95.80 95.80
95.00 95.00
96.80 96.80
96.80 96.80
96.40 96.40
73.33
75.66
77.66
79.33
77.66
73.33 73.33
75.66 75.66
77.66 77.66
79.33 79.33
77.66 77.66
34.93
36.67
37.27
40.38
35.19
35.03 34.94 47.76
36.44 36.47 64.63
37.40 37.28 46.46
69.21
40.49 40.42
35.27 35.19 62.62
Table 8: RAL performance on Ade-corpus-v2, SemClass, MedMCQA, and BioNLI on Testbed 2:
Counterfactual Robustness.
(2) A lower counterfactual rate may have a reduced impact on RALs. On ADE and Hetionet, we
observed that when the counterfactual rate is set to 20%, the model performance is better than with the
factual corpus. We speculate that retrievers have a greater chance of obtaining useful information
when the counterfactual rate is lower.
(3) A counterfactual corpus can still contribute to improving LLM performance. On ADE,
ChemProt, GIT, PHarmKG, Hetionet, Ade-corpus-v2, SemClass, MedMCQA, and BioNLI, the interest-
ing finding is that even with a counterfactual corpus, the RAL performance often surpasses the original
LLM. We speculate that the counterfactual corpus may still have a beneficial effect on LLMs: despite the
content of the instances being counterfactual, the provided templates still aid in generation.
(4) Counterfactual rates and model performance are not inversely proportional. This finding
contradicts human intuition. On some datasets, such as SemClass, when the counterfactual rate is
higher, the model performance also improves. This suggests that RALs possess a certain ability to
handle counterfactual facts.
3.5 RESULTS AND DISCUSSION ON TESTBED3: DIVERSE ROBUSTNESS
ADE
ChemProt
GIT
PHarmKG
Hetionet
Corpus
Diverse corpus
Labeled corpus
None
F1
Precision Recall
9.15
10.08
36.06 36.06
34.79 34.86
11.21
36.07
34.94
Precision Recall F1 Precision Recall
F1
Precision Recall F1 Precision Recall F1
78.56
87.13
42.60
76.41 77.47
86.68 86.91
41.51 42.05
74.88
75.65
42.60
65.38 69.80
74.84 75.24
41.51 42.05
97.20
98.00
97.60
97.20 97.20
98.00 98.00
97.60 97.60
75.41
90.40
78.60
75.41 75.41
90.40 90.40
78.60 78.60
Table 9: RAL performance on ADE, ChemProt, GIT, PHarmKG and Hetionet on Testbed 3: Diverse
Robustness. On Testbed 3, the RAL used is the one demonstrating the best performance on each
dataset; for example, on ADE, the RAL is LLaMA2-13B with Contriever. The labeled corpus is the training set
of each dataset. The diverse labeled corpus of n refers to the collection of the corpora of the other datasets (excluding n).
We evaluate model performance on diverse robustness, and the results are shown in Table 9 and
Table 10. We have the following observation:
The diverse labeled corpus makes it challenging to improve RALs. We found that RALs tend to treat
the knowledge in the diverse corpus as noise, which could potentially impact RAL performance,
Corpus
Diversity labeled corpus
Labeled corpus
None
Ade-corpus-v2
SemClass
Precision Recall F1 Precision Recall F1 Precision Recall F1
MedMCQA
BioNLI
F1
96.20
96.80
96.40
96.20 96.20
96.80 96.80
96.40 96.40
75.33
79.33
77.66
75.33 75.33
79.33 79.33
77.66 77.66
25.01
40.38
41.63
25.02 24.87 80.32
40.49 40.42 69.21
41.52 41.52 62.62
Table 10: RAL performance on Ade-corpus-v2, SemClass, MedMCQA, and BioNLI on Testbed 3:
Diverse Robustness.
particularly evident on the ADE and MedMCQA datasets. However, on BioNLI, the diverse labeled
corpus could contribute to enhancing the model performance. We speculate that one reason is that the
retrievers we used could not retrieve useful information, while another reason could be that the corpus
lacks the necessary information.
3.5.1 ERROR ANALYSIS
On ADE, we discovered that the diverse labeled corpus also leads to redundancy in RAL generation;
for instance, in the sentence easily reversible hypoxemia and hypotension induced by nimodipine., the
expected tail entity is hypotension, while the RAL regarded hypoxemia and hypotension induced
by nimodipine. as the entity. The RAL also struggles with extracting complex entities. For example, in
the sentence clinical, spectroscopic, and imaging abnormalities resolved with discontinuation of
metronidazole, clinical, spectroscopic, and imaging abnormalities is considered the ground truth,
while the RAL regards the entire sentence clinical, spectroscopic, and imaging abnormalities resolved
with discontinuation of metronidazole as a single entity. In summary, we find that the primary
challenge lies in entity recognition, especially the recognition of tail entities. On MedMCQA,
we observed that erroneous generations primarily stemmed from misjudgment. For instance, for the question
Question: All of the following muscles are elevators of the mandible EXCEPT: Options: (A) Digastric;
(B) Masseter; (C) Medial pterygoid; (D) Temporalis, the ground truth is A, while the RAL generates
D.
3.6 RESULTS AND DISCUSSION ON TESTBED4: NEGATIVE AWARENESS
We evaluate the model performance on negative awareness, and the results are shown in Table 11. We
have the following observation:
Negative awareness remains a challenge for RALs. The true negative awareness rate on PHarmKG
and BioNLI was zero, and it was only 1.07% on ADE. Interestingly, the overall
false negative awareness is better than the true negative awareness. This suggests that RALs
still struggle with self-awareness regarding which examples could provide useful information for
generation.
Task | Dataset | True negative awareness rate | False negative awareness rate
Triple extraction | ADE | 1.07 | 9.15
Triple extraction | ChemProt | 19.24 | 77.49
Triple extraction | GIT | 27.73 | 69.75
Link prediction | PHarmKG | 0.00 | 63.11
Link prediction | Hetionet | 1.71 | 31.33
Text classification | Ade-corpus-v2 | 68.75 | 70.45
Text classification | SemClass | 1.49 | 99.35
Question answering | MedMCQA | 0.26 | 3.92
Natural language inference | BioNLI | 0.00 | 0.38
Table 11: RAL performance on ADE, ChemProt, GIT, PHarmKG, Hetionet, Ade-corpus-v2, Sem-
Class, MedMCQA and BioNLI on Testbed 4: Negative Awareness.
4 CONCLUSION
In this paper, we assess the performance of RALs on five distinct biomedical NLP tasks, while
also evaluating their robustness and self-awareness abilities. To conduct the evaluation, we build
a biomedical retrieval-augmented generation benchmark (BioRAB), which mainly includes four
testbeds.
5 LIMITATIONS
In this study, we utilized the training set as the retrieval corpus for the question-answering task.
However, several studies utilize larger corpora with richer knowledge for question answering,
such as PubMed and Wikidata. For other tasks such as link prediction, augmenting the size of the
labeled corpus remains a formidable challenge. Additionally, the three retrievers select only the most relevant
instance for each input sentence as an example. We strive to ensure the validity of our comparisons, but
it is important to note that our findings and results are confined to the datasets and RALs we utilized.
6 ACKNOWLEDGEMENTS
This work was supported by the National Institutes of Health’s National Center for Complementary
and Integrative Health grant number R01AT009457 and National Institute on Aging grant number
R01AG078154. The content is solely the responsibility of the authors and does not represent the
official views of the National Institutes of Health.
REFERENCES
Mohaddeseh Bastan, Mihai Surdeanu, and Niranjan Balasubramanian. Bionli: Generating a
biomedical nli dataset using lexico-semantic constraints for adversarial examples. arXiv preprint
arXiv:2210.14814, 2022.
Harsha Gurulingappa, Abdul Mateen Rajput, Angus Roberts, Juliane Fluck, Martin Hofmann-Apitius,
and Luca Toldo. Development of a benchmark corpus to support the automatic extraction of
drug-related adverse effects from medical case reports. Journal of Biomedical Informatics, 45
(5):885 – 892, 2012a. ISSN 1532-0464. doi: https://doi.org/10.1016/j.jbi.2012.04.008. URL
http://www.sciencedirect.com/science/article/pii/S1532046412000615. Text Mining
and Natural Language Processing in Pharmacogenomics.
Harsha Gurulingappa, Abdul Mateen Rajput, Angus Roberts, Juliane Fluck, Martin Hofmann-Apitius,
and Luca Toldo. Development of a benchmark corpus to support the automatic extraction of
drug-related adverse effects from medical case reports. Journal of biomedical informatics, 45(5):
885–892, 2012b.
Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat, and Mingwei Chang. Retrieval augmented
language model pre-training. In International conference on machine learning, pp. 3929–3938.
PMLR, 2020.
Daniel Scott Himmelstein, Antoine Lizee, Christine Hessler, Leo Brueggeman, Sabrina L Chen,
Dexter Hadley, Ari Green, Pouya Khankhanian, and Sergio E Baranzini. Systematic integration of
biomedical knowledge prioritizes drugs for repurposing. Elife, 6:e26726, 2017.
Gautier Izacard, Mathilde Caron, Lucas Hosseini, Sebastian Riedel, Piotr Bojanowski, Armand
Joulin, and Edouard Grave. Unsupervised dense information retrieval with contrastive learning.
arXiv preprint arXiv:2112.09118, 2021.
Ziwei Ji, Nayeon Lee, Rita Frieske, Tiezheng Yu, Dan Su, Yan Xu, Etsuko Ishii, Ye Jin Bang,
Andrea Madotto, and Pascale Fung. Survey of hallucination in natural language generation. ACM
Computing Surveys, 55(12):1–38, 2023.
Qiao Jin, Won Kim, Qingyu Chen, Donald C Comeau, Lana Yeganova, W John Wilbur, and Zhiyong
Lu. Medcpt: Contrastive pre-trained transformers with large-scale pubmed search logs for zero-shot
biomedical information retrieval. Bioinformatics, 39(11):btad651, 2023.
Patrick Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal,
Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, et al. Retrieval-augmented genera-
tion for knowledge-intensive nlp tasks. Advances in Neural Information Processing Systems, 33:
9459–9474, 2020.
Mingchen Li and Lifu Huang. Understand the dynamic world: An end-to-end knowledge informed
framework for open domain entity state tracking. arXiv preprint arXiv:2304.13854, 2023.
Mingchen Li and Rui Zhang. How far is language model from 100% few-shot named entity
recognition in medical domain. arXiv preprint arXiv:2307.00186, 2023.
Mingchen Li, M Chen, Huixue Zhou, and Rui Zhang. Petailor: Improving large language model by
tailored chunk scorer in biomedical triple extraction. arXiv preprint arXiv:2310.18463, 2023.
Mingchen Li, Halil Kilicoglu, Hua Xu, and Rui Zhang. Biomedrag: A retrieval augmented large
language model for biomedicine. arXiv preprint arXiv:2405.00465, 2024a.
Mingchen Li, Chen Ling, Rui Zhang, and Liang Zhao. A condensed transition graph framework for
zero-shot link prediction with large language models. arXiv preprint arXiv:2402.10779, 2024b.
Man Luo, Xin Xu, Zhuyun Dai, Panupong Pasupat, Mehran Kazemi, Chitta Baral, Vaiva Imbra-
saite, and Vincent Y Zhao. Dr. icl: Demonstration-retrieved in-context learning. arXiv preprint
arXiv:2305.14128, 2023.
Oded Ovadia, Menachem Brief, Moshik Mishaeli, and Oren Elisha. Fine-tuning or retrieval?
comparing knowledge injection in llms. arXiv preprint arXiv:2312.05934, 2023.
Ankit Pal, Logesh Kumar Umapathi, and Malaikannan Sankarasubbu. Medmcqa: A large-scale
multi-subject multi-choice dataset for medical domain question answering. In Conference on
health, inference, and learning, pp. 248–260. PMLR, 2022.
Ori Ram, Yoav Levine, Itay Dalmedigos, Dor Muhlgay, Amnon Shashua, Kevin Leyton-Brown, and
Yoav Shoham. In-context retrieval-augmented language models. arXiv preprint arXiv:2302.00083,
2023.
Karan Singhal, Tao Tu, Juraj Gottweis, Rory Sayres, Ellery Wulczyn, Le Hou, Kevin Clark, Stephen
Pfohl, Heather Cole-Lewis, Darlene Neal, et al. Towards expert-level medical question answering
with large language models. arXiv preprint arXiv:2305.09617, 2023.
Cong Sun, Zhihao Yang, Lei Wang, Yin Zhang, Hongfei Lin, and Jian Wang. Mrc4bioer: joint
extraction of biomedical entities and relations in the machine reading comprehension framework.
Journal of Biomedical Informatics, 125:103956, 2022.
Olivier Taboureau, Sonny Kim Nielsen, Karine Audouze, Nils Weinhold, Daniel Edsgärd, Francisco S
Roque, Irene Kouskoumvekaki, Alina Bora, Ramona Curpan, Thomas Skøt Jensen, et al. Chemprot:
a disease chemical biology database. Nucleic acids research, 39(suppl_1):D367–D372, 2010.
Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée
Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. Llama: Open and
efficient foundation language models. arXiv preprint arXiv:2302.13971, 2023.
Jake A. Vasilakes, Rubina Rizvi, and Rui Zhang. Bionli: Generating a biomedical nli dataset using lexico-
semantic constraints for adversarial examples. https://conservancy.umn.edu/handle/11299/194965,
2018.
Jacob White. Pubmed 2.0. Medical reference services quarterly, 39(4):382–387, 2020.
Chaoyi Wu, Xiaoman Zhang, Ya Zhang, Yanfeng Wang, and Weidi Xie. Pmc-llama: Further
finetuning llama on medical papers. arXiv preprint arXiv:2304.14454, 2023.
Guangzhi Xiong, Qiao Jin, Zhiyong Lu, and Aidong Zhang. Benchmarking retrieval-augmented
generation for medicine. arXiv preprint arXiv:2402.13178, 2024.
Cyril Zakka, Rohan Shad, Akash Chaurasia, Alex R Dalal, Jennifer L Kim, Michael Moor, Robyn
Fong, Curran Phillips, Kevin Alexander, Euan Ashley, et al. Almanac—retrieval-augmented
language models for clinical medicine. NEJM AI, 1(2):AIoa2300068, 2024.
Yue Zhang, Yafu Li, Leyang Cui, Deng Cai, Lemao Liu, Tingchen Fu, Xinting Huang, Enbo Zhao,
Yu Zhang, Yulong Chen, et al. Siren’s song in the ai ocean: a survey on hallucination in large
language models. arXiv preprint arXiv:2309.01219, 2023.
Shuangjia Zheng, Jiahua Rao, Ying Song, Jixian Zhang, Xianglu Xiao, Evandro Fei Fang, Yuedong
Yang, and Zhangming Niu. Pharmkg: a dedicated knowledge graph benchmark for bomedical data
mining. Briefings in bioinformatics, 22(4):bbaa344, 2021.
|
synthetic_cpt | 8 | BERTtime_Stories_Investigating_the_Role_of_Synthetic_Story_Data_in_Language_pre-training.pdf | 4
2
0
2
c
e
D
8
]
L
C
.
s
c
[
3
v
5
6
3
5
1
.
0
1
4
2
:
v
i
X
r
a
BERTtime Stories: Investigating the Role of Synthetic Story Data in
Language Pre-training
Nikitas Theodoropoulos, Giorgos Filandrianos, Vassilis Lyberatos,
Maria Lymperaiou and Giorgos Stamou
Artificial Intelligence and Learning Systems Laboratory
School of Electrical and Computer Engineering
National Technical University of Athens
[email protected],
{geofila, vaslyb, marialymp}@ails.ece.ntua.gr,
[email protected]
Abstract
We describe our contribution to the Strict and
Strict-Small tracks of the 2nd iteration of
the BabyLM Challenge. The shared task is cen-
tered around efficient pre-training given data
constraints motivated by human development.
In response, we study the effect of synthetic
story data in language pre-training using TinyS-
tories: a recently introduced dataset of short
stories.
Initially, we train GPT-Neo models
on subsets of TinyStories, while varying the
amount of available data. We find that, even
with access to less than 100M words, the mod-
els are able to generate high-quality, original
completions to a given story, and acquire sub-
stantial linguistic knowledge. To measure the
effect of synthetic story data, we train LTG-
BERT encoder models on a combined dataset
of: a subset of TinyStories, story completions
generated by GPT-Neo, and a subset of the
BabyLM dataset. Our experimentation reveals
that synthetic data can occasionally offer mod-
est gains, but overall have a negative influence
on linguistic understanding. Our work offers an
initial study on synthesizing story data in low
resource settings and underscores their poten-
tial for augmentation in data-constrained lan-
guage modeling. We publicly release our mod-
els and implementation on our GitHub: https://github.com/nikitas-theo/BERTtimeStories.
1 Introduction
As the performance of modern Language Models
(LMs) increases, enabling remarkable feats of lan-
guage understanding and reasoning, so do their
demands in computational resources and training
data (Hoffmann et al., 2022). For example, the
recently released Llama 3 (Dubey et al., 2024) has
405B parameters and was pre-trained on 15.6T to-
kens, on 6K H100 GPUs. In contrast, children are
only exposed to no more than 100 million words
by age 13 (Gilkerson et al., 2017), demonstrating
exceptional learning efficiency compared to state-
of-the-art LMs. This need for ever-increasing data
and compute casts doubts on the cognitive plausi-
bility of the current LM training regimes, and raises
ecological and ethical concerns, such as democratic
access to research for industry and research groups
with modest resources.
To address these issues, the BabyLM challenge
(Warstadt et al., 2023a; Choshen et al., 2024) in-
vites participants to work on cognitive modeling
and efficient LM pre-training, given data limita-
tions inspired by human development. This year’s
iteration of the challenge features three experimen-
tal tracks: a Strict track with a budget of 100M
words, a Strict-Small track with a budget of
10M words, and a Multimodal track with a word
budget of 100M words and unlimited visual input.
A major change compared to last year’s challenge is
allowing participants to construct their own training
data. In the following sections, we present our con-
tributions to the Strict and Strict-Small tracks.
Our research draws inspiration from recent ad-
vancements in Small Language Models (SLMs) for
text generation, as explored in TinyStories (Eldan
and Li, 2023). In this influential work, the authors
demonstrate that training on a synthetic dataset of
simple stories can enable SLMs to produce cre-
ative, high-quality generations, which are novel
with respect to the original training dataset. We
hypothesize that for the small data regimes of the
BabyLM challenge, augmenting the initial training
corpus with synthetic data of high quality can pro-
vide models with unseen linguistic contexts, and as
a result improve language understanding. To test
our hypothesis, we first extend previous work by
Figure 1: Illustration of our proposed methodology for BERTtime Stories. We use a subset of the TinyStories
dataset (Dtiny) (Eldan and Li, 2023), to train a decoder transformer for data augmentation. We prompt the decoder
with the short stories from Dtiny and create a dataset of model generations (Dgen): each story (green) is truncated
and used as a prompt (yellow), with the model generating an alternate completion (blue). We supplement the two
datasets with a subset of the BabyLM dataset (Dbaby), released by Choshen et al. (2024), and train an encoder model
on the combined data. Finally, we evaluate the linguistic proficiency of the encoder using the challenge benchmarks.
Eldan and Li (2023), investigating generative per-
formance with limited training data. We then train
encoder transformer models on a diverse dataset,
and measure the effect of synthetic data on linguis-
tic proficiency.
In technical terms, following Eldan and Li
(2023), we propose to train a GPT-Neo decoder
(Black et al., 2021) on TinyStories, and then use it
to generate data for the training of a final encoder
model. This poses some initial challenges, as the
size of the original TinyStories dataset exceeds the
challenge limits, with around 373M words. As a
result, we experiment with training GPT-Neo mod-
els while varying the amount of available data and
evaluating their generative performance, keeping
the model architecture fixed in the process. Our
investigation of generative performance is comple-
mentary to the original work, which measures the
effects of varying the depth and size of the model
architectures. Our findings demonstrate that even
in low data regimes of ≤ 100M words, GPT-Neo
models can acquire impressive grammatical under-
standing, and generate stories of high quality and
diversity, comparable to models trained on the full
dataset. For our evaluation see Section 4.1.
Next, we investigate the effect of the synthetic
story data on language pre-training. Figure 1 illus-
trates our methodology. We select a small subset
of TinyStories, train a GPT-Neo model, and use it
to generate alternate completions to the stories in
the training set. We then create a combined dataset
consisting of: 1 the subset of TinyStories used
for GPT-Neo training, 2 the generated data, 3
a sample of the BabyLM dataset (Choshen et al.,
2024). With the combined dataset, we train an
LTG-BERT (Samuel et al., 2023) model, choosing
this architecture as it demonstrated superior per-
formance in the text-only tracks of last year. We
compare the performance of our models against a
variety of baselines, trained with and without the
use of synthetic data. Our results show that a sim-
ple application of synthetic story data for language
pre-training results in marginal or even negative
gains. Nevertheless, given the high generative per-
formance of the GPT-Neo models, we believe that
more research is needed to fully explore and exploit
their potential.
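As a rough sketch of this truncate-and-complete generation step (using the HuggingFace transformers API; the checkpoint name, truncation fraction, and sampling settings below are illustrative assumptions, not the exact configuration used in this work):

from transformers import AutoModelForCausalLM, AutoTokenizer

def complete_story(story, model, tokenizer, prompt_fraction=0.5, max_new_tokens=200):
    # Truncate the story and let the decoder sample an alternate ending.
    words = story.split()
    prompt = " ".join(words[: int(len(words) * prompt_fraction)])
    inputs = tokenizer(prompt, return_tensors="pt")
    output = model.generate(
        **inputs,
        max_new_tokens=max_new_tokens,
        do_sample=True,
        top_p=0.95,
        pad_token_id=tokenizer.eos_token_id,
    )
    return tokenizer.decode(output[0], skip_special_tokens=True)

# Illustrative usage with a public TinyStories checkpoint (not the models trained here):
# tokenizer = AutoTokenizer.from_pretrained("roneneldan/TinyStories-33M")
# model = AutoModelForCausalLM.from_pretrained("roneneldan/TinyStories-33M")
# d_gen = [complete_story(s, model, tokenizer) for s in tiny_subset]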
Contribution We list our contributions below:
• We investigate the generative and linguistic
abilities of GPT-Neo models trained on TinyS-
tories while varying the amount of available
data. We show that even with limited data,
these models can produce generations of high
quality, offering new insights into the capabil-
ities of SLMs in low data regimes.
• We investigate the effect of generated data
on the pre-training of encoder LMs in a con-
strained data setting. We conduct an extensive
evaluation with different training schemes and
baselines. Our experiments demonstrate the
potential of data augmentation to enhance the
linguistic capabilities of low resource LMs.
2 Related work
Previous BabyLM Iteration Data Augmenta-
tion techniques were shown to be beneficial in the
previous year’s challenge (Warstadt et al., 2023b).
Specifically, ChapGTP (Jumelet et al., 2023) uses
regex patterns to extract common phrases from
GLUE tasks, and then harnesses these patterns to
generate follow-up questions that serve as addi-
tional training data. In the Contextualizer paper
(Xiao et al., 2023), extra training samples are cre-
ated by dynamically combining chunks of texts
from different contexts during training. Another
approach named Baby’s CoThought (Zhang et al.,
2023) utilizes a Large Language Model (LLM)
to reformat unrelated sentences from the corpus
into coherent paragraphs, thereby improving per-
formance, albeit in defiance of data constraints.
Language Models for Data Augmentation In
recent years, LLMs have been increasingly lever-
aged for data augmentation in various domains
(Ding et al., 2024). Notably, Dai et al. (2023)
introduced ChatGPT as a tool for generating re-
alistic text samples from a combination of real and
artificial data, enhancing training datasets. Simi-
larly, transformer architectures, including decoder
(GPT-2, Radford et al., 2019), encoder (BERT,
Devlin et al., 2019), and seq2seq (BART, Lewis
et al., 2020) models have been explored for aug-
mentation (Kumar et al., 2020). In the work of
Yoo et al. (2021), GPT-3 (Brown et al., 2020) was
used to mix real and synthetic text samples for ro-
bust data augmentation. Moreover, decoder models
have been successfully employed to generate train-
ing data for encoders, yielding significant improve-
ments in zero-shot learning (Meng et al., 2022).
Small Language Models The recent study by
Eldan and Li (2023) highlighted that Small Lan-
guage Models (SLMs) can outperform larger ones
by leveraging high-quality synthetic training data,
demonstrating fluency, coherence, and creativity
despite having fewer parameters. This trend is fur-
ther supported by work in sequential recommenda-
tion, where small models are effectively employed
for task-specific purposes (Xu et al., 2024). Addi-
tionally, Bergner et al. (2024) utilize a pre-trained
LLM to encode prompt tokens, using these repre-
sentations to guide a smaller LM for more efficient
response generation.
3 Methods
We describe our data augmentation method using
synthetic story data, as illustrated in Figure 1.
3.1 Datasets
Our work is built on two datasets: (1) TinyStories – denoted as Dtiny, a collection of synthetic short stories with simple language, (2) the BabyLM dataset
– denoted as Dbaby, created to be a developmentally
plausible pre-training corpus. For any dataset Ddata,
we also denote a version of the data with m million
words as Ddata-m. We describe the datasets below:
BabyLM dataset The BabyLM dataset (Dbaby),
released by Warstadt et al. (2023a); Choshen et al.
(2024), consists of a diverse set of texts and is con-
structed with the goal of simulating the linguistic in-
put that a child receives throughout its development.
It contains a high proportion of spoken language
and includes, among others, excerpts from chil-
dren’s books, dialogue, child-directed speech, and
Wikipedia articles. Both 100M and 10M versions
of the dataset were released, for the Strict and
Strict-Small tracks respectively. Details about
the dataset structure are provided in Appendix A.
TinyStories dataset
Introduced by Eldan and Li
(2023), TinyStories (Dtiny) is a synthetic dataset,
featuring a collection of short stories constructed
by prompting GPT-3.5 and GPT-4 (OpenAI et al.,
2024). The dataset was created to preserve all the
core elements of natural language, such as grammar
and reasoning, while exhibiting limited diversity
and size. More specifically, the stories are 2-3 para-
graphs long and follow simple plots and themes. In
addition, the dataset contains a restricted vocabu-
lary and in general is intended to be on the level
of understanding of 3-4 year old children. The
initial version of the dataset (V1), generated by
both GPT-3.5 and GPT-4, contains approximately
373M words. A second version (V2) was later re-
leased, with stories generated only by GPT-4 and
around 440M words. We use this version in all our
experiments.
3.2 Data Generation
We describe the creation of the synthetic story
dataset Dgen. To generate the data, we first train a
decoder model (GPT-Neo) on a subset of TinySto-
ries denoted as Dtiny-m. We truncate the stories in
Dtiny-m to construct prompts and generate alternate
completions using our model.
We start by restricting the size m of the subset,
taking into account two factors: the need for ade-
quate diversity in the final corpus, and the need to
ensure high-quality generations. Given the assump-
tion that generation quality scales with dataset size,
we want to select a big enough size m for Dtiny-m
to enable high-quality generations from our trained
models. At the same time, we want to leave the
necessary room in our word budget for including a
sufficiently large portion of the BabyLM dataset in
the final training. This will ensure that our models
are exposed to both a large vocabulary and a variety
of word contexts. Intuitively, we aim to ensure that
our pre-training data is diverse, as children learn
from multiple sources of input.
To address this trade-off, we sample from TinyS-
tories, creating a collection of subsets of vary-
ing sizes, Dtiny-m : m ∈ {5, 10, 25, 50, 75, 100}M
(millions of words). For each subset, we train a
GPT-Neo model and evaluate its generative and
linguistic abilities.
In our evaluation, we lever-
age metrics for grammatical understanding, diver-
sity, and generation quality; our metrics are intro-
duced in Section 3.4. For each of the Strict and
Strict-Small tracks, we select a subset Dtiny-m
and a corresponding GPT-Neo model trained on it,
based on our evaluation metrics and the above crite-
ria. To construct Dgen, for each story in Dtiny-m, we
truncate the story to 15%-30% of its size and use
it to prompt the model for generation. We opt for
using a smaller proportion of the original story to
avoid duplication, given that stories in Dtiny-m will already be in the combined corpus for the training of the encoder transformer.
Regarding the generation process, we experiment with two methods: greedy decoding and nucleus sampling (Holtzman et al., 2020). During sampling, we generate k completions from our models for each prompt. To limit repetition between the k generations (and avoid wasting FLOPs), we calculate Self-BLEU (Section 3.4) for a set of values of k, and select the ones that best balance diversity and the total amount of additional training data.
3.3 Final Corpus Creation
For each of the Strict and Strict-Small tracks,
we have created Dtiny-m, and Dgen as previously
described. We now create the combined dataset
Dcomb, used to train the encoder transformer. We
allocate our remaining word budget to a subset
of the BabyLM dataset (Dbaby-b), created by sam-
pling randomly from BabyLM on the document
level. We leave sampling methods that account
for the content of the documents for future work.
For the Strict / Strict-Small tracks, the size b
of Dbaby-b is chosen such that: b + m ≤ 100M /
10M. We now construct Dcomb by combining all
the datasets Dcomb = (Dtiny-m, Dbaby-b, Dgen). We
employ a masked language modeling objective to
train an encoder transformer on Dcomb, with the
LTG-BERT architecture (Samuel et al., 2023).
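As a concrete illustration of the budget constraint, the following minimal sketch shows one possible way to implement the document-level sampling described above; function and variable names are ours, and the generated data Dgen is assumed to be added on top of the b + m budget, consistent with the total sizes reported in Section 5.

```python
# Minimal sketch of the word-budget bookkeeping; names are illustrative. The subset
# Dtiny-m and the BabyLM sample share the budget (b + m <= 10M or 100M words), while
# the generated data Dgen is added on top.
import random

def count_words(doc: str) -> int:
    return len(doc.split())

def build_combined_corpus(tiny_docs, gen_docs, babylm_docs, budget_words):
    combined = list(tiny_docs) + list(gen_docs)
    used = sum(count_words(d) for d in tiny_docs)      # the m words of Dtiny-m
    babylm_docs = list(babylm_docs)
    random.shuffle(babylm_docs)                        # document-level random sampling
    for doc in babylm_docs:
        n = count_words(doc)
        if used + n > budget_words:
            break
        combined.append(doc)                           # becomes part of Dbaby-b
        used += n
    return combined
```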
3.4 Evaluation
For evaluating the encoder transformers we use the
evaluation suite of the challenge, consisting of three
evaluation benchmarks: BLiMP, (Super)GLUE,
and EWoK, each broadly evaluating language profi-
ciency, general language understanding, and world
knowledge. We note that the challenge benchmarks
constitute filtered versions (Warstadt et al., 2023b),
rendering our results incomparable with full data
evaluations. For the decoder models, we use EWoK
and BLiMP, and also introduce some additional
evaluation procedures: specifically, Self-BLEU
evaluates diversity, and an LLM-assisted evalua-
tion measures generation quality. We explain each
of the evaluation benchmarks below.
BLiMP The Benchmark of Linguistic Minimal
Pairs (BLiMP),
introduced by Warstadt et al.
(2019), is a set of tasks designed to evaluate the
linguistic knowledge of LMs. It consists of pairs
of minimally different sentences covering various
grammatical phenomena in syntax, morphology,
and semantics. The model under evaluation has to
assign a higher probability to the correct sentence
in each pair. We also evaluate on BLiMP Supple-
ment (Supp.), released by Warstadt et al. (2023a),
which includes additional grammatical phenom-
ena. For both BLiMP and BLiMP Supplement, we
measure performance by calculating the average
accuracy across all of their evaluation tasks.
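For illustration, a minimal pair can be scored with a causal LM by comparing summed token log-probabilities. The sketch below is not the official challenge pipeline; it uses the publicly released TinyStories-33M checkpoint and an invented example pair as placeholders.

```python
# Illustrative sketch of minimal-pair scoring with a causal LM; the checkpoint and the
# example pair are placeholders, not the official evaluation setup.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roneneldan/TinyStories-33M")
model = AutoModelForCausalLM.from_pretrained("roneneldan/TinyStories-33M")
model.eval()

def sentence_logprob(sentence: str) -> float:
    # Sum of token log-probabilities of the sentence under the language model.
    ids = tokenizer(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits
    log_probs = torch.log_softmax(logits[:, :-1], dim=-1)
    targets = ids[:, 1:]
    return log_probs.gather(2, targets.unsqueeze(-1)).sum().item()

good, bad = "The cats annoy Tim.", "The cats annoys Tim."
is_correct = sentence_logprob(good) > sentence_logprob(bad)  # counts toward accuracy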
(Super)GLUE The General Language Under-
standing Evaluation (GLUE) benchmark (Wang,
2018) assesses model performance across a wide range of natural language understanding (NLU) tasks. SuperGLUE (Wang et al., 2019) was later
introduced to offer a more challenging set of tasks.
We employ a total of 10 text classification tasks
from both benchmarks, which include: question
answering (BoolQ, MultiRC), sentiment classi-
fication (SST-2), paraphrase detection (MRPC,
QQP), linguistic acceptability (CoLA), common-
sense reasoning (WSC), and natural language in-
ference (MNLI, QNLI, RTE). Performance on (Su-
per)GLUE is calculated by averaging accuracies
across all tasks except for QQP and MRPC, where
we use the F1-score, and CoLA, where we use the
Matthews Correlation Coefficient – MCC.
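A minimal sketch of how this aggregate could be computed from per-task predictions is given below; the results dictionary and the scikit-learn metric choices are illustrative of the scheme described above, not the official scoring code.

```python
# Minimal sketch of the (Super)GLUE aggregate; `results` is a placeholder mapping from
# task name to (gold labels, predictions) produced by the fine-tuned encoder.
from sklearn.metrics import accuracy_score, f1_score, matthews_corrcoef

def task_score(task, y_true, y_pred):
    if task in ("QQP", "MRPC"):
        return f1_score(y_true, y_pred)
    if task == "CoLA":
        return matthews_corrcoef(y_true, y_pred)
    return accuracy_score(y_true, y_pred)

def glue_aggregate(results):
    scores = [task_score(task, y_true, y_pred) for task, (y_true, y_pred) in results.items()]
    return sum(scores) / len(scores)
```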
EWoK Elements of World Knowledge (EWoK)
(Ivanova et al., 2024) assesses an LM’s ability to
understand and model world knowledge. It evalu-
ates how well a model can connect a target text to
either an appropriate or mismatched context, em-
phasizing key concepts such as social dynamics
and spatial relationships. Both the contexts and
targets are framed as minimally contrasting pairs,
with customizable elements like objects, agents,
and locations. During evaluation, the model needs
to assign a higher probability to the correct context
and target text pair. We report average accuracy
across all the benchmark’s tasks.
Self-BLEU To measure the diversity of gener-
ated stories, we utilize the Self-BLEU score (Zhu
et al., 2018). Given a generated collection, we cal-
culate the BLEU score with one generation as the
hypothesis and the others as reference, evaluating
how similar it is to the rest. We define Self-BLEU
as the average of all the BLEU scores in the corpus.
The metric is defined on a continuous scale within
[0, 1], where higher scores indicate less diversity.
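A possible implementation using NLTK's sentence-level BLEU is sketched below; the original Texygen code of Zhu et al. (2018) may use different tokenization and smoothing settings.

```python
# A possible Self-BLEU implementation with NLTK; tokenization and smoothing choices
# here are assumptions and may differ from the original Texygen implementation.
from nltk.translate.bleu_score import SmoothingFunction, sentence_bleu

def self_bleu(generations: list[str]) -> float:
    smooth = SmoothingFunction().method1
    tokenized = [g.split() for g in generations]
    scores = []
    for i, hypothesis in enumerate(tokenized):
        references = tokenized[:i] + tokenized[i + 1:]
        scores.append(sentence_bleu(references, hypothesis, smoothing_function=smooth))
    return sum(scores) / len(scores)  # in [0, 1]; higher means less diverse
```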
LLM Evaluation To provide a comprehensive
evaluation of our decoder models’ generative abili-
ties, we follow the approach of Eldan and Li (2023)
and employ a LLM, prompting it with the story
completions, and asking it to assess them in terms
of Grammar, Creativity, and Consistency with the
story’s beginning, on a scale from 1 to 10. The orig-
inal evaluation by Eldan and Li (2023) used GPT-4,
we instead leverage Claude-3.5 Sonnet (Anthropic,
2024)2, which better aligned with our available re-
sources. Evaluation details are presented in Section
4.1, while the prompt is included in Appendix E.
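A hedged sketch of the scoring call with the Anthropic SDK is shown below; the full instruction is the prompt reported in Appendix E, the shortened prompt string here is only a placeholder, and parsing of the returned grades is omitted.

```python
# Hedged sketch of the LLM-based scoring step; the prompt shown is a simplified
# placeholder for the one in Appendix E, and grade parsing is not included.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

def judge_completion(story_beginning: str, completion: str) -> str:
    prompt = (
        "In the following exercise, the student is given a beginning of a story. "
        "The student needs to complete it into a full story. The symbol *** marks the "
        "separator between the prescribed beginning and the student's completion:\n\n"
        f"{story_beginning} *** {completion}\n\n"
        "Grade the completion from 1-10 for Grammar, Creativity, and Consistency."
    )
    response = client.messages.create(
        model="claude-3-5-sonnet-20240620",
        max_tokens=512,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.content[0].text  # grades are extracted from this text downstream
```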
4 Experiments
Experimental Setup We conduct our experi-
ments on a shared GPU cluster of 8 Nvidia V100
16 GB GPUs, and additionally evaluate our models
on an Nvidia RTX-3090 24 GB GPU. All our mod-
els are trained using the PyTorch (Paszke et al.,
2019) and HuggingFace (Wolf et al., 2019) li-
braries. For our evaluations of BLiMP, EWoK, and
(Super)GLUE we build upon the official evaluation
pipeline released by the challenge organizers (Gao
et al., 2023; Choshen et al., 2024).
4.1 TinyStories & GPT-Neo Evaluation
Regarding the decoder used for the generation, we
select one of the best-performing GPT-Neo archi-
tectures from Eldan and Li (2023) 3. All our trained
GPT-Neo models share the same hyperparameters,
except for weight decay, dropout, and vocabulary
size, which are tuned to the specific data size. We
build upon a training scheme similar to that of the authors,
with added regularization for our low data regime.
Hyperparameters and details about the architecture
are included in Appendix C. We opt to train on the
latest version of the TinyStories data (V2), gen-
erated by prompting GPT-4; the full unsampled
dataset contains ∼ 440M words. Throughout our
evaluation, we also report results for the original
model released by the authors, trained on the first
version of the dataset (V1) with ∼ 373M words.
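The following sketch outlines the decoder training setup with the HuggingFace Trainer; the dataset slice, tokenizer checkpoint, and several arguments are assumptions made for illustration, while the layer, head, batch, and step values follow the hyperparameters reported in Appendix C.

```python
# Rough sketch of the GPT-Neo training setup; dataset slice and tokenizer checkpoint
# are placeholders, other values follow Appendix C (Table 7).
from datasets import load_dataset
from transformers import (AutoTokenizer, DataCollatorForLanguageModeling, GPTNeoConfig,
                          GPTNeoForCausalLM, Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("roneneldan/TinyStories-33M")
tokenizer.pad_token = tokenizer.eos_token

config = GPTNeoConfig(
    vocab_size=len(tokenizer), num_layers=4, num_heads=16, hidden_size=768,
    attention_types=[[["global", "local"], 2]], max_position_embeddings=512,
)
model = GPTNeoForCausalLM(config)

dataset = load_dataset("roneneldan/TinyStories", split="train[:5%]")  # a small subset
tokenized = dataset.map(
    lambda x: tokenizer(x["text"], truncation=True, max_length=512),
    batched=True, remove_columns=dataset.column_names,
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="gpt-neo-tinystories", per_device_train_batch_size=24,
                           gradient_accumulation_steps=32, max_steps=15_000,
                           learning_rate=5e-4, lr_scheduler_type="cosine", report_to=[]),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```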
In the following paragraphs, we conduct a thor-
ough analysis of the relationship between the lin-
guistic competency of GPT-Neo models trained
on subsets of TinyStories, and the size of their
training dataset |Dtiny-m|. We experiment with var-
ious sizes for the TinyStories subsets Dtiny-m :
m ∈ {5, 10, 25, 50, 75, 100}M (millions of words).
From our experiments we draw insights about the
abilities of generative LMs on low data regimes.
This evaluation will also motivate our selection of
2Model version: claude-3-5-sonnet-20240620.
3https://huggingface.co/roneneldan/TinyStories-33M
the TinyStories subset Dtiny used for generating the
dataset Dgen and for training the final encoder.
As an initial proxy of the language competency
of the GPT-Neo decoders, we measure perfor-
mance on BLiMP, its supplement (Supp.), and
EWoK. Results are presented in Table 1. We
notice that 50M words appear to be a cutoff
point, with notable drops in performance for data
sizes less than that. Based on this, we select
Dtiny-50M for the Strict track, and Dtiny-5M for the
Strict-Small track. Importantly, we do not in-
clude the LLM evaluation (presented below) in
this decision process, as it would invalidate our
imposed data constraints. We leave further ex-
perimentation on the subset data sizes for the
Strict-Small track for future work. A second ob-
servation concerns the 100M words model, which
achieves the top score on BLiMP, shared by the
373M model by Eldan and Li (2023). This result
agrees with the findings of Zhang et al. (2021),
demonstrating that 100M words are enough to at-
tain substantial grammatical knowledge.
Train Data | BLiMP ↑ | Supp. ↑ | EWoK ↑
5M | 55.5 | 53.8 | 51.1
10M | 58.4 | 51.6 | 51.9
25M | 59.9 | 55.1 | 52.4
50M | 62.8 | 52.8 | 53.0
75M | 64.0 | 54.8 | 53.4
100M | 64.8 | 50.8 | 53.1
440M (V2) | 64.6 | 55.0 | 53.9
373M (V1) 4 | 64.8 | 60.9 | 54.0
Table 1: Evaluation results for GPT-Neo models trained
on TinyStories with various amounts of data. We re-
port accuracy for all benchmarks. As the amount of
data decreases, the BLiMP and EWoK scores generally
decrease as well. In contrast, the BLiMP supplement
score demonstrates more variance.
The aforementioned scores give us evidence
about the grammatical understanding (BLiMP) and
world knowledge (EWoK) of our models, but leave
out two important areas of generative performance,
namely: (1) the diversity and (2) the quality of generations. We focus on these two metrics in the
following paragraphs. Apart from the quantitative
scores, in Appendix B we also include the genera-
tions of all the GPT-Neo models for the TinyStories
example illustrated in Figure 1.
Evaluating Generation Quality Evaluating the
quality of generations for open-ended generation
tasks is challenging, as most common evaluation
paradigms expect structured output, and measure
fidelity towards a set of reference texts. To address
this, we adopt the evaluation method proposed by
Eldan and Li (2023), and prompt an LLM to eval-
uate the stories generated by our models. In our
experiments, we use Claude-3.5 Sonnet.
We harness a set of 44 manually constructed
prompts 5 containing the beginning of a story, and
generate 10 completions for each of our models,
sampling with a temperature of 1. We then provide
the LLM with the beginning of the story and the
model’s completion, and ask it in turn to evaluate
the model’s response along three axes: (a) Gram-
mar, (b) Creativity, and (c) Consistency with the
beginning of the story. Additionally, we ask it to
classify the story in different age groups, ranging
from 3 (or under) to 16 years old. Scores are given
on a scale of 1 to 10, and are averaged across stories
and completions. The final results are presented in
Table 2: we notice that limiting the training data,
up to even 25M words, results in a minor decrease
of performance across all three metrics. This indi-
cates that the quality of the model generations is
retained in the small data regime. Additionally, the
100M words decoder achieves impressive scores in
all categories, and outperforms all other models in
the Consistency metric – demonstrating that 100M
words is enough for robust generative performance.
Evaluating Generation Diversity To measure
diversity, we utilize Self-BLEU (Zhu et al., 2018),
which has been used before as a measure of the
diversity of generated data (Holtzman et al., 2020).
For each model, we sample 100 stories from the
training set and truncate them to 15%-30%, prompt-
ing the model to generate an alternate completion
to the story’s beginning. When sampling from the
model, a greedy decoding strategy is employed.
We report Self-BLEU scores, scaled to [0, 100], for
the set of 100 completions in Table 2 (higher scores
correspond to less diverse generations). Our results
indicate that models with limited training data can
achieve high diversity, while at the same time main-
taining generation quality, as demonstrated by the
scores of models trained on 25M and 50M words.
4.2 Data Generation
We now describe the creation of the combined dataset Dcomb = (Dtiny-m, Dbaby-b, Dgen), leveraged for training an encoder LM. For
4Model released by Eldan and Li (2023).
5https://huggingface.co/datasets/roneneldan/TinyStories
Train Data | Gr. ↑ | Cr. ↑ | Cons. ↑ | SB ↓
5M | 4.56 | 4.99 | 3.37 | 38.6
10M | 5.31 | 5.34 | 3.98 | 38.3
25M | 6.00 | 5.65 | 4.55 | 34.6
50M | 6.01 | 5.53 | 4.54 | 33.0
75M | 6.08 | 5.50 | 4.49 | 37.1
100M | 6.17 | 5.57 | 4.78 | 39.8
440M (V2) | 5.88 | 5.53 | 4.49 | 37.3
373M (V1) | 6.24 | 5.73 | 4.70 | 29.6
Table 2: Results on the evaluation of our models by
Claude-3.5 Sonnet. We instruct the LLM to assess gen-
erative performance along three categories: Grammar
(Gr.), Creativity (Cr.), Consistency (Cons.). We also in-
clude Self-BLEU (SB), measuring generation diversity.
brevity, details are given below only for the
Strict-Small track; the same process is followed
for the Strict track. As discussed in Section 4.1,
we choose a subset of 5M words from Tinys-
tories (Dtiny-5M), and use it to train a GPT-Neo
model. This model is then employed to generate
the dataset Dgen. We adapt the beginning of each
story (15%-30%) in the training set Dtiny-5M as a
prompt, and task the decoder to generate alterna-
tive completions. We experiment with different
generation techniques, including greedy generation
– Dgen-greedy, and nucleus sampling – Dgen-nucleus-k,
where k is the number of generations per prompt.
Finally, the two datasets are combined with a sub-
set of the BabyLM dataset (Dbaby-5M), ensuring
a total size within the 10M word limit, to form
Dcomb-10M = (Dtiny-5M, Dbaby-5M, Dgen).
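The prompt-and-generate step could look roughly as follows; the checkpoint name is a placeholder, and the decoding arguments mirror the settings described above (greedy decoding or nucleus sampling with p = 0.95 and k completions per prompt).

```python
# Sketch of prompt construction and generation; checkpoint name is a placeholder.
import random
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roneneldan/TinyStories-33M")
model = AutoModelForCausalLM.from_pretrained("roneneldan/TinyStories-33M")
model.eval()

def make_prompt(story: str) -> str:
    # Truncate the story to 15%-30% of its words, as described in Section 3.2.
    words = story.split()
    cut = max(1, int(len(words) * random.uniform(0.15, 0.30)))
    return " ".join(words[:cut])

def generate_completions(story: str, k: int = 1, greedy: bool = False) -> list[str]:
    ids = tokenizer(make_prompt(story), return_tensors="pt").input_ids
    gen_kwargs = dict(max_new_tokens=300, pad_token_id=tokenizer.eos_token_id)
    if greedy:
        gen_kwargs.update(do_sample=False)
    else:
        gen_kwargs.update(do_sample=True, top_p=0.95, num_return_sequences=k)
    with torch.no_grad():
        out = model.generate(ids, **gen_kwargs)
    return tokenizer.batch_decode(out, skip_special_tokens=True)
```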
In order to select k for nucleus sampling, we
leverage the Self-BLEU score. We sample 100 sto-
ries from Dtiny-5M and use their beginning (15%-
30%) to generate 50 completions for each prompt
with p = 0.95. For each value of k ∈ {2, 3, ..., 50}
we calculate Self-BLEU among the group of gener-
ations Sk. Our goal is to examine how diverse the
different generations are for the same prompt, as
the number of generations (k) increases. Figure 2
depicts the average Self-BLEU across all prompts.
Based on the presented results, we choose to ex-
periment with k = 5 and k = 10, as a satisfactory
balance between diversity and added dataset size.
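Schematically, this selection can be computed as sketched below, assuming a self_bleu function such as the one illustrated in Section 3.4 and a dictionary holding the 50 pre-sampled completions per prompt.

```python
# Schematic k-selection: for every k, compute Self-BLEU over the first k of the 50
# sampled completions per prompt and average across prompts. Assumes a self_bleu
# function like the one sketched earlier.
def average_self_bleu_per_k(prompt_completions: dict, k_values=range(2, 51)):
    curve = {}
    for k in k_values:
        scores = [self_bleu(completions[:k]) for completions in prompt_completions.values()]
        curve[k] = sum(scores) / len(scores)
    return curve  # low values indicate diverse generations for the same prompt
```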
4.3 Training LTG-BERT
Following the creation of the combined corpus
Dcomb, we employ it to train an LTG-BERT
(Samuel et al., 2023) encoder module. Our training
procedure is based on the source code released by
Figure 2: We generate 50 completions for 100 prompts
with the GPT-Neo models trained on Dtiny-5M, Dtiny-50M.
We plot the average self-BLEU score across prompts,
as the number of generations per prompt (k) increases.
the authors6, prompting our selection of similar hy-
perparameters (Appendix C), adapted for our spe-
cific infrastructure and available compute. More-
over, our experiments are conducted with minimal
hyperparameter optimization. In order to assess the
effect of data augmentation on final performance,
we train a variety of baselines, ablating over the
pre-training dataset of our models and keeping all
other training conditions constant. Specifically, for
a given track, all the models share the same hyper-
parameters and amount of FLOPs, ensuring a fair
comparison. Our baselines are described below.
Baselines For the Strict-Small track, we es-
tablish baselines by training LTG-BERT models
using 10M words from the BabyLM – Dbaby-10M
and Tinystories – Dtiny-10M datasets respectively.
Additionally, we train an encoder using a combina-
tion of 5M words from each one of the two datasets
– Dbaby-5M+Dtiny-5M. These models serve as bench-
marks against which we assess the performance of
models trained with various configurations of gen-
erated data, aiming to evaluate the effectiveness
of data augmentation. The same methodology is
applied consistently to the Strict track as well.
Here, we train encoders with 100M words from
each dataset separately, as well as in a combined
setting, utilizing 50M words from each dataset. We
also include results for the challenge baselines –
LTG-BERT (Samuel et al., 2023) and BabyLlama
(Timiryasov and Tastet, 2023). We emphasize that
these models are trained with different hyperparam-
eters than those in our controlled setting. Notably,
the LTG-BERT model released by the organizers
was trained for ∼ 20 epochs on the Strict track,
6https://github.com/ltgoslo/ltg-bert
Model | Training Data | Total | BLiMP | Supp. | EWoK | GLUE | Avg.
LTG-BERT | Dbaby-10M | 10M | 60.6 | 60.8 | 47.6 | 60.3 | 57.3
BabyLlama | Dbaby-10M | 10M | 69.8 | 59.5 | 50.7 | 63.3 | 60.8
LTG-BERT (ours) | Dbaby-10M | 10M | 62.8 | 63.7 | 51.2 | 71.0 | 62.2
 | Dtiny-10M | 10M | 59.8 | 54.2 | 52.2 | 67.0 | 58.3
 | Dtiny-10M + Dgen-greedy | 20M | 58.7 | 57.8 | 48.9 | 67.1 | 58.1
 | Dbaby-5M + Dtiny-5M | 10M | 62.6 | 60.7 | 51.5 | 71.2 | 61.5
 | Dbaby-5M + Dtiny-5M + Dgen-greedy | 15M | 62.1 | 60.2 | 50.4 | 70.6 | 60.8
 | Dbaby-5M + Dtiny-5M + Dgen-nucleus-1 | 15M | 62.5 | 62.3 | 48.8 | 69.5 | 60.8
 | Dbaby-5M + Dtiny-5M + Dgen-nucleus-1 † ⋆ | 15M | 63.2 | 59.3 | 50.4 | 71.1 | 61.0
 | Dbaby-5M + Dtiny-5M + Dgen-nucleus-5 | 33M | 62.4 | 60.1 | 50.7 | 69.4 | 60.6
 | Dbaby-5M + Dtiny-5M + Dgen-nucleus-10 | 56M | 61.0 | 58.4 | 50.1 | 69.5 | 59.8
Table 3: Model performance for the 10M word Strict-Small track.
compared to our setting of ∼ 27 epochs (20K steps
for both tracks).
Balanced Training While increasing the num-
ber of generated texts in the LTG-BERT train-
ing set (Dcomb), we also modify the distribution of
TinyStories and BabyLM samples that the model
encounters during training. This could affect the
model’s performance, as it becomes more finely
tuned to TinyStories. To counter this effect, we
experiment with a training variation where we bal-
ance the number of samples from both datasets.
Specifically, samples in each batch are drawn with
equal probability from both TinyStories – which
includes both original and generated texts – and
BabyLM. This method ensures that the model is
exposed to an equal number of samples from each
dataset throughout training. The dagger symbol †
in the results denotes use of this strategy.
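One possible realization of this balancing with PyTorch is sketched below; the dataset objects and batch size are placeholders, and the actual implementation may differ.

```python
# Possible realization of the balanced scheme: samples are weighted so that, in
# expectation, half of each batch comes from the TinyStories pool (original plus
# generated) and half from BabyLM, regardless of the pools' sizes.
from torch.utils.data import ConcatDataset, DataLoader, WeightedRandomSampler

def balanced_loader(tiny_dataset, babylm_dataset, batch_size=80):
    combined = ConcatDataset([tiny_dataset, babylm_dataset])
    weights = ([0.5 / len(tiny_dataset)] * len(tiny_dataset)
               + [0.5 / len(babylm_dataset)] * len(babylm_dataset))
    sampler = WeightedRandomSampler(weights, num_samples=len(combined), replacement=True)
    return DataLoader(combined, batch_size=batch_size, sampler=sampler)
```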
5 Results
We present the final evaluation results for the
Strict-Small and Strict tracks at Table 3 and
Table 4, respectively. The ⋆ symbol denotes the
submitted model for this track.
Strict-Small Track In the Strict-Small track,
comparing the results of Dbaby-10M with Dtiny-10M
reveals, as expected, that the BabyLM dataset is
more beneficial for language pre-training compared
to TinyStories. The performance metrics for TinyS-
tories are consistently lower, except in the case of
EWoK. Interestingly, replacing half of the BabyLM
dataset with data from TinyStories only slightly af-
fects the model’s performance. However, as we add
more instances of the synthetic story data, the posi-
tive impact of the BabyLM dataset begins to wane,
leading performance to approach that of Dtiny-10M
where BabyLM was not used at all. This suggests
that training is over-influenced by the increased
amount of TinyStories data. To mitigate this ef-
fect, we experimented with equally distributing the
samples from the two datasets in a batch. This ap-
proach positively impacts the model’s performance.
Notably for BLiMP, this setup slightly surpasses
the performance of the model trained solely on
Dbaby-10M, resulting in the best score overall. Fur-
ther, when compared to other data augmentation
scenarios, the performance on GLUE is increased.
Moreover, an interesting observation concerns
the sampling technique used for augmenting the
data. Changing the sampling strategy from greedy
decoding to nucleus sampling positively affects the
model’s performance on the BLiMP and BLiMP
Supp. benchmarks, while negatively impacting per-
formance on EWoK and GLUE. This discrepancy
is likely due to the nature of the datasets them-
selves. BLiMP focuses on evaluating grammatical
understanding, while the increased diversity from
nucleus sampling exposes the model to a wider
range of linguistic structures and syntactic varia-
tions, thereby improving performance. Conversely,
EWoK and GLUE require semantic coherence and
factual consistency, where the increased diversity
from nucleus sampling may introduce noise and
less coherent narratives, potentially confusing the
model, and degrading performance. Therefore,
while more diverse stories benefit syntactic evalu-
ation tasks such as those in BLiMP, they may not
be as useful for semantic or knowledge-based tasks
such as those included in EWoK and GLUE.
Strict Track Interestingly, for the Strict track
we notice that data augmentation has a positive
effect on the BLiMP and EWoK benchmarks.
Specifically, adding the Dgen-greedy dataset, results
in increased performance compared to the base-
Model | Training Data | Total | BLiMP | Supp. | EWoK | GLUE | Avg.
LTG-BERT | Dbaby-100M | 100M | 69.2 | 66.5 | 50.2 | 68.4 | 63.6
BabyLlama | Dbaby-100M | 100M | 73.1 | 60.6 | 52.1 | 69.0 | 63.7
LTG-BERT (ours) | Dbaby-100M | 100M | 64.0 | 67.6 | 47.3 | 74.0 | 63.2
 | Dtiny-100M | 100M | 61.2 | 63.2 | 48.0 | 70.6 | 60.8
 | Dtiny-100M + Dgen-greedy | 200M | 61.1 | 59.6 | 48.7 | 69.1 | 59.6
 | Dtiny-50M + Dbaby-50M | 100M | 65.5 | 65.6 | 47.2 | 71.0 | 62.3
 | Dtiny-50M + Dbaby-50M + Dgen-greedy | 150M | 66.6 | 63.3 | 49.7 | 71.8 | 62.8
 | Dtiny-50M + Dbaby-50M + Dgen-nucleus-1 ⋆ | 150M | 65.6 | 65.0 | 49.3 | 72.7 | 63.1
 | Dtiny-50M + Dbaby-50M + Dgen-nucleus-1 † | 150M | 65.2 | 63.5 | 49.0 | 72.6 | 62.6
 | Dtiny-50M + Dbaby-50M + Dgen-nucleus-5 | 350M | 65.4 | 64.4 | 45.9 | 69.8 | 61.4
 | Dtiny-50M + Dbaby-50M + Dgen-nucleus-10 | 600M | 63.7 | 63.3 | 49.2 | 69.5 | 61.4
Table 4: Model performance for the 100M word Strict track.
lines trained on Dtiny-100M and Dbaby-100M, as well
as a mixture of the two (Dtiny-50M + Dbaby-50M).
Additionally, the Dtiny-50M + Dbaby-50M combina-
tion is outperformed by both the Dgen-greedy and
Dgen-nucleus-1 models, suggesting that synthetic data
can offer modest gains in the Strict scenario.
As with the Strict-Small track, increasing the
size of the TinyStories dataset negatively affects
the performance of the models, approaching that of
the model trained solely on Dtiny-100M. However, in
this case, balancing the datasets does not improve
the model’s performance. In the larger 100M word
dataset, even with balancing, the sheer volume of
TinyStories data may overwhelm the influence of
the BabyLM data. The model is exposed to a much
larger quantity of TinyStories content, which could
dominate learning and reduce the effectiveness of
balancing. Additionally, while the nucleus sam-
pling strategy once again improves performance on
the BLiMP Supp. dataset, it does not assist with
BLiMP as it did in the Strict-Small track.
6 Conclusion
In this work, we explore data augmentation for lan-
guage pre-training in a limited data setting. Using
the TinyStories dataset we train GPT-Neo mod-
els and probe the relationship between generative
ability and dataset size. To measure the effect of
augmentation with synthetic data, we train LTG-
BERT models on a diverse set of data configura-
tions. Our experiments indicate that while synthe-
sizing high quality data is possible in small data
regimes, effectively utilizing it for pre-training can
be challenging. Some modest gains are observed
in the Strict track, while careful balancing shows
promise for the Strict-Small track. Overall, our
evaluation highlights the intricate balance required
between data quantity, quality, and integration for
effective training. Future work suggests investi-
gation of different data domains, mixtures, and
proportions, while precise calibration of hyperpa-
rameters may prove critical in exploiting the full
benefit of synthetic data in low data pre-training.
7 Limitations
A limitation of our study is the exclusive use of a
single LM architecture for both the encoder and
decoder components. Our experiments are also
limited to specific datasets, employing only TinyS-
tories for synthetic data generation and a combi-
nation of TinyStories and BabyLM for encoder
training. While these choices are made to ensure
experimental control and draw solid conclusions,
they limit the generalizability of our results.
Another limitation concerns the creation of the
combined dataset. We investigated only a single
configuration of the two datasets – including them
in equal proportion – and the documents within a
dataset were sampled randomly. We posit that more
fine control over the mixture of datasets could fur-
ther enhance the benefits of our data augmentation
technique. Additionally, with regard to generation,
the prompting strategy and truncation ratio could
be more finely calibrated, in order to improve the
balance between data quality and redundancy.
By acknowledging these limitations, we aim to
encourage further research in this area, focusing on
the impact of data augmentation in size constrained
and cognitively plausible language pre-training.
Acknowledgments
The research work was supported by the Hellenic
Foundation for Research and Innovation (HFRI)
under the 3rd Call for HFRI PhD Fellowships (Fel-
lowship Number 5537).
References
Anthropic. 2024. Claude. https://www.anthropic.
com/claude. Artificial Intelligence Model.
Benjamin Bergner, Andrii Skliar, Amelie Royer, Tij-
men Blankevoort, Yuki Asano, and Babak Ehteshami
Bejnordi. 2024. Think big, generate quick: Llm-to-
slm for fast autoregressive decoding. arXiv preprint
arXiv:2402.16844.
Sid Black, Leo Gao, Phil Wang, Connor Leahy, and
Stella Biderman. 2021. Gpt-neo: Large scale autore-
gressive language modeling with mesh-tensorflow.
Tom Brown, Benjamin Mann, Nick Ryder, Melanie
Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind
Neelakantan, Pranav Shyam, Girish Sastry, Amanda
Askell, Sandhini Agarwal, Ariel Herbert-Voss,
Gretchen Krueger, Tom Henighan, Rewon Child,
Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens
Winter, Chris Hesse, Mark Chen, Eric Sigler, Ma-
teusz Litwin, Scott Gray, Benjamin Chess, Jack
Clark, Christopher Berner, Sam McCandlish, Alec
Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. In Advances in Neural Information Processing Systems, volume 33, pages 1877–1901. Curran Associates, Inc.
Leshem Choshen, Ryan Cotterell, Michael Y. Hu,
Tal Linzen, Aaron Mueller, Candace Ross, Alex
Warstadt, Ethan Wilcox, Adina Williams, and
Chengxu Zhuang. 2024. [Call for papers] The 2nd BabyLM challenge: Sample-efficient pretraining on a developmentally plausible corpus. Preprint, arXiv:2404.06214.
Haixing Dai, Zhengliang Liu, Wenxiong Liao, Xiaoke
Huang, Yihan Cao, Zihao Wu, Lin Zhao, Shaochen
Xu, Wei Liu, Ninghao Liu, et al. 2023. Auggpt:
Leveraging chatgpt for text data augmentation. arXiv
preprint arXiv:2302.13007.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and
Kristina Toutanova. 2019. BERT: Pre-training of
deep bidirectional transformers for language under-
standing. In Proceedings of the 2019 Conference of
the North American Chapter of the Association for
Computational Linguistics: Human Language Tech-
nologies, Volume 1 (Long and Short Papers), pages
4171–4186, Minneapolis, Minnesota. Association for
Computational Linguistics.
Bosheng Ding, Chengwei Qin, Ruochen Zhao, Tianze
Luo, Xinze Li, Guizhen Chen, Wenhan Xia, Junjie
Hu, Anh Tuan Luu, and Shafiq Joty. 2024. Data aug-
mentation using LLMs: Data perspectives, learning
paradigms and challenges. In Findings of the Associ-
ation for Computational Linguistics ACL 2024, pages
1679–1705, Bangkok, Thailand and virtual meeting.
Association for Computational Linguistics.
Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey,
Abhishek Kadian, Ahmad Al-Dahle, Aiesha Let-
man, Akhil Mathur, Alan Schelten, Amy Yang, An-
gela Fan, et al. 2024. The llama 3 herd of models.
Preprint, arXiv:2407.21783.
Ronen Eldan and Yuanzhi Li. 2023. Tinystories: How
small can language models be and still speak coherent
english? Preprint, arXiv:2305.07759.
Leo Gao, Jonathan Tow, Baber Abbasi, Stella Biderman,
Sid Black, Anthony DiPofi, Charles Foster, Laurence
Golding, Jeffrey Hsu, Alain Le Noac’h, Haonan Li,
Kyle McDonell, Niklas Muennighoff, Chris Ociepa,
Jason Phang, Laria Reynolds, Hailey Schoelkopf,
Aviya Skowron, Lintang Sutawika, Eric Tang, An-
ish Thite, Ben Wang, Kevin Wang, and Andy Zou.
2023. A framework for few-shot language model
evaluation.
Martin Gerlach and Francesc Font-Clos. 2018. A stan-
dardized project gutenberg corpus for statistical anal-
ysis of natural language and quantitative linguistics.
Preprint, arXiv:1812.08092.
Jill Gilkerson, Jeffrey A. Richards, Steven F. Warren, Ju-
dith K. Montgomery, Charles R. Greenwood, D. Kim-
brough Oller, John H. L. Hansen, and Terrance D.
Paul. 2017. Mapping the early language environ-
ment using all-day recordings and automated analy-
sis. American Journal of Speech-Language Pathol-
ogy, 26(2):248–265.
Jordan Hoffmann, Sebastian Borgeaud, Arthur Mensch,
Elena Buchatskaya, Trevor Cai, Eliza Rutherford,
Diego de Las Casas, Lisa Anne Hendricks, Johannes
Welbl, Aidan Clark, Tom Hennigan, Eric Noland,
Katie Millican, George van den Driessche, Bogdan
Damoc, Aurelia Guy, Simon Osindero, Karen Si-
monyan, Erich Elsen, Jack W. Rae, Oriol Vinyals,
and Laurent Sifre. 2022. Training compute-optimal
large language models. Preprint, arXiv:2203.15556.
Ari Holtzman, Jan Buys, Li Du, Maxwell Forbes, and
Yejin Choi. 2020. The curious case of neural text
degeneration. Preprint, arXiv:1904.09751.
Anna A Ivanova, Aalok Sathe, Benjamin Lipkin, Un-
nathi Kumar, Setayesh Radkani, Thomas H Clark,
Carina Kauf, Jennifer Hu, RT Pramod, Gabriel Grand,
et al. 2024. Elements of world knowledge (ewok):
A cognition-inspired framework for evaluating basic
world knowledge in language models. arXiv preprint
arXiv:2405.09605.
Jaap Jumelet, Michael Hanna, Marianne De Heer
Kloots, Anna Langedijk, Charlotte Pouw, and Oskar
Van Der Wal. 2023. Chapgtp, illc’s attempt at raising
a babylm: Improving data efficiency by automatic
task formation. arXiv preprint arXiv:2310.11282.
Varun Kumar, Ashutosh Choudhary, and Eunah Cho.
2020. Data augmentation using pre-trained trans-
former models. arXiv preprint arXiv:2003.02245.
Mike Lewis, Yinhan Liu, Naman Goyal, Marjan
Ghazvininejad, Abdelrahman Mohamed, Omer Levy,
Veselin Stoyanov, and Luke Zettlemoyer. 2020.
BART: Denoising sequence-to-sequence pre-training
for natural language generation, translation, and com-
prehension. In Proceedings of the 58th Annual Meet-
ing of the Association for Computational Linguistics,
pages 7871–7880, Online. Association for Computa-
tional Linguistics.
Pierre Lison and Jörg Tiedemann. 2016. OpenSub-
titles2016: Extracting large parallel corpora from
movie and TV subtitles. In Proceedings of the Tenth
International Conference on Language Resources
and Evaluation (LREC’16), pages 923–929, Portorož,
Slovenia. European Language Resources Association
(ELRA).
Brian MacWhinney. 2014. The Childes Project. Psy-
chology Press.
Yu Meng, Jiaxin Huang, Yu Zhang, and Jiawei Han.
2022. Generating training data with language models:
Towards zero-shot language understanding. Preprint,
arXiv:2202.04538.
OpenAI, Josh Achiam, Steven Adler, Sandhini Agarwal,
Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman,
Diogo Almeida, Janko Altenschmidt, Sam Altman,
Shyamal Anadkat, et al. 2024. Gpt-4 technical report.
Preprint, arXiv:2303.08774.
Adam Paszke, Sam Gross, Francisco Massa, Adam
Lerer, James Bradbury, Gregory Chanan, Trevor
Killeen, Zeming Lin, Natalia Gimelshein, Luca
Antiga, Alban Desmaison, Andreas Köpf, Edward Z.
Yang, Zach DeVito, Martin Raison, Alykhan Tejani,
Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Jun-
jie Bai, and Soumith Chintala. 2019. Pytorch: An
imperative style, high-performance deep learning li-
brary. CoRR, abs/1912.01703.
Alec Radford, Jeff Wu, Rewon Child, David Luan,
Dario Amodei, and Ilya Sutskever. 2019. Language
models are unsupervised multitask learners.
David Samuel. 2023. Mean BERTs make erratic language teachers: the effectiveness of latent bootstrapping in low-resource settings. In Proceedings of the BabyLM Challenge at the 27th Conference on Computational Natural Language Learning, pages 221–237, Singapore. Association for Computational Linguistics.
David Samuel, Andrey Kutuzov, Lilja Øvrelid, and Erik
Velldal. 2023. Trained on 100 million words and still
in shape: BERT meets British National Corpus. In
Findings of the Association for Computational Lin-
guistics: EACL 2023, pages 1954–1974, Dubrovnik,
Croatia. Association for Computational Linguistics.
Andreas Stolcke, Klaus Ries, Noah Coccaro, Eliza-
beth Shriberg, Rebecca Bates, Daniel Jurafsky, Paul
Taylor, Rachel Martin, Carol Van Ess-Dykema, and
Marie Meteer. 2000. Dialogue act modeling for au-
tomatic tagging and recognition of conversational
speech. Computational Linguistics, 26(3):339–374.
Inar Timiryasov and Jean-Loup Tastet. 2023. Baby llama: knowledge distillation from an ensemble of teachers trained on a small dataset with no performance penalty. In Proceedings of the BabyLM Challenge at the 27th Conference on Computational Natural Language Learning, pages 279–289, Singapore. Association for Computational Linguistics.
Alex Wang. 2018. Glue: A multi-task benchmark and
analysis platform for natural language understanding.
arXiv preprint arXiv:1804.07461.
Alex Wang, Yada Pruksachatkun, Nikita Nangia, Aman-
preet Singh, Julian Michael, Felix Hill, Omer Levy,
and Samuel Bowman. 2019. SuperGLUE: A stickier benchmark for general-purpose language understanding systems. In Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc.
Alex Warstadt, Leshem Choshen, Aaron Mueller, Ad-
ina Williams, Ethan Wilcox, and Chengxu Zhuang.
2023a. Call for papers – the babylm challenge:
Sample-efficient pretraining on a developmentally
plausible corpus. Preprint, arXiv:2301.11796.
Alex Warstadt, Aaron Mueller, Leshem Choshen, Ethan
Wilcox, Chengxu Zhuang, Juan Ciro, Rafael Mos-
quera, Bhargavi Paranjabe, Adina Williams, Tal
Linzen, and Ryan Cotterell, editors. 2023b. Proceed-
ings of the BabyLM Challenge at the 27th Conference
on Computational Natural Language Learning. As-
sociation for Computational Linguistics, Singapore.
Alex Warstadt, Alicia Parrish, Haokun Liu, Anhad Mo-
hananey, Wei Peng, Sheng-Fu Wang, and Samuel R.
Bowman. 2019. Blimp: A benchmark of linguistic
minimal pairs for english. CoRR, abs/1912.00582.
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien
Chaumond, Clement Delangue, Anthony Moi, Pier-
ric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz,
and Jamie Brew. 2019. Huggingface’s transformers:
State-of-the-art natural language processing. CoRR,
abs/1910.03771.
Chenghao Xiao, G Thomas Hudson, and Noura
Al Moubayed. 2023. Towards more human-like lan-
guage models based on contextualizer pretraining
strategy. In Proceedings of the BabyLM Challenge
at the 27th Conference on Computational Natural
Language Learning, pages 317–326, Singapore. As-
sociation for Computational Linguistics.
Wujiang Xu, Zujie Liang, Jiaojiao Han, Xuying Ning,
Wenfang Lin, Linxun Chen, Feng Wei, and Yongfeng
Zhang. 2024.
Slmrec: Empowering small lan-
guage models for sequential recommendation. arXiv
preprint arXiv:2405.17890.
Kang Min Yoo, Dongju Park, Jaewook Kang, Sang-
Woo Lee, and Woomyeong Park. 2021. Gpt3mix:
Leveraging large-scale language models for text aug-
mentation. arXiv preprint arXiv:2104.08826.
Yian Zhang, Alex Warstadt, Xiaocheng Li, and
Samuel R. Bowman. 2021. When do you need bil-
lions of words of pretraining data? In Proceedings
of the 59th Annual Meeting of the Association for
Computational Linguistics and the 11th International
Joint Conference on Natural Language Processing
(Volume 1: Long Papers), pages 1112–1125, Online.
Association for Computational Linguistics.
Zheyu Zhang, Han Yang, Bolei Ma, David Rügamer,
and Ercong Nie. 2023. Baby’s CoThought: Lever-
aging large language models for enhanced reasoning
in compact models. In Proceedings of the BabyLM
Challenge at the 27th Conference on Computational
Natural Language Learning, pages 158–170, Singa-
pore. Association for Computational Linguistics.
Yaoming Zhu, Sidi Lu, Lei Zheng, Jiaxian Guo, Weinan
Zhang, Jun Wang, and Yong Yu. 2018. Texygen: A
benchmarking platform for text generation models.
In The 41st international ACM SIGIR conference
on research & development in information retrieval,
pages 1097–1100.
A BabyLM dataset
Table 5 contains a detailed overview of the BabyLM dataset. For our experiments, we preprocess the data
using the methodology from Samuel (2023). The text is normalized and cleaned up in order to ensure
a unified format. We cast direct speech in double quotes, remove arbitrary and semantically irrelevant
tokens and conserve formatting, where necessary, with a special [PAR] symbol.
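A rough sketch of this kind of normalization is given below; the exact rules and token inventory of the Samuel (2023) pipeline are not reproduced here, and the regular expressions and helper names are illustrative only.

```python
# Illustrative sketch of text normalization with a [PAR] formatting symbol; the rules
# are assumptions, not the original preprocessing code.
import re

PAR = "[PAR]"

def normalize_line(line: str) -> str:
    line = re.sub(r"\s+", " ", line.strip())                 # collapse whitespace
    line = line.replace("“", '"').replace("”", '"')          # cast direct speech in double quotes
    line = line.replace("‘", "'").replace("’", "'")
    line = re.sub(r"\[[A-Z_]+\]", "", line)                  # drop semantically irrelevant markup
    return line.strip()

def normalize_document(paragraphs: list[str]) -> str:
    # Conserve paragraph structure with the special [PAR] symbol.
    return f" {PAR} ".join(normalize_line(p) for p in paragraphs if p.strip())
```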
Dataset | Domain | # Words (Strict-Small) | # Words (Strict)
CHILDES (MacWhinney, 2014) | Child-directed speech | 2.84M | 28.90M
British National Corpus (BNC), dialogue portion 1 | Dialogue | 0.93M | 7.76M
Project Gutenberg (children's stories) (Gerlach and Font-Clos, 2018) | Written English | 2.54M | 26.37M
OpenSubtitles (Lison and Tiedemann, 2016) | Movie subtitles | 2.04M | 19.96M
Simple Wikipedia 2 | Written Simple English | 1.45M | 14.67M
Switchboard Dialog Act Corpus (Stolcke et al., 2000) | Dialogue | 0.15M | 1.34M
Total | | 9.95M | 99.01M
Table 5: Contents of the BabyLM datasets for the Strict and Strict-Small tracks, including the domain and
word counts. 1http://www.natcorp.ox.ac.uk/, 2https://dumps.wikimedia.org/simplewiki/20241001/.
B TinyStories - Detailed Evaluation
In order to demonstrate a tangible example of the augmentation process, and provide the opportunity to
directly judge the quality of the generations, we include sample generations for all our GPT-Neo models:
{5M, 10M, 25M, 50M, 75M, 100M, 440M (V2)}, as well as the model released by Eldan and Li (2023) –
373M (V1). We sample a story from the training set, truncate it to around 15% to 30% of its length, and
ask the models to generate a completion with greedy decoding. The results are shown in Table 6. The
50M words model generation is also illustrated in Figure 1. We notice that even for the smaller models,
the quality of the generation remains good in terms of grammar, coherence to the story’s beginning, and
creativity. Additionally, there is little similarity to the original completion, even though the prompt is
taken from the training set of the models.
C Training Hyperparameters
We list all the hyperparameters used to train the GPT-Neo models on subsets of TinyStories in Table
7. Where appropriate, parameters that vary per group of models are listed separately. We also list the
hyperparameters used to train the LTG-BERT transformer encoder models in Table 8. We follow the
hyperparameter and training regime of Samuel et al. (2023), with some modifications, partly to adjust to
the scale of our computational resources and infrastructure.
D GLUE Evaluation Hyperparameters
We list the hyperparameters utilized for fine-tuning the LTG-BERT models on GLUE in Table 9, largely
also based on Samuel et al. (2023). We find that it’s critical for the stability of training to initialize the
added classification head according to the original implementation released by the authors. Specifically,
we use a truncated normal distribution N(µ, σ²) in the range [a, b] where µ = 0, σ = √(2.0 / (5.0 · dmodel)), a = −2σ, b = 2σ and dmodel is the hidden size. The bias of the layer is also initialized to zero. Lastly,
one notable difference affecting the downstream performance of our models is that we train with mixed
precision. We use the fp16 implementation provided by the HuggingFace Trainer module, which reduces
the numerical precision of certain variables from fp32 to fp16, increasing computational efficiency.
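In PyTorch, this initialization can be expressed as in the sketch below; the hidden size 768 and the two-class output are only examples.

```python
# Sketch of the described head initialization: truncated normal with
# sigma = sqrt(2 / (5 * d_model)), truncated at +/- 2 sigma, and zero bias.
import math
import torch.nn as nn

def init_classifier_head(head: nn.Linear, d_model: int) -> None:
    sigma = math.sqrt(2.0 / (5.0 * d_model))
    nn.init.trunc_normal_(head.weight, mean=0.0, std=sigma, a=-2 * sigma, b=2 * sigma)
    nn.init.zeros_(head.bias)

head = nn.Linear(768, 2)           # e.g., a binary classification task
init_classifier_head(head, d_model=768)
```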
E LLM Evaluation
For the LLM evaluation, we employ Claude-3.5 Sonnet. Our prompt is a modified version of the one
introduced in Eldan and Li (2023). We present our prompt in Table 10.
Prompt: One day, a little boy named Tim went to the park. He saw a big tiger. The tiger
Original: was not mean, but very easy to play with. Tim and the tiger played all day. They had lots of fun. Then, something
unexpected happened. The tiger started to shake. Tim was scared. He did not know what was going on. But then,
the tiger turned into a nice dog. Tim was very surprised. Tim and the dog played together now. They were very
happy. The dog was easy to play with too. At the end of the day, Tim went home with his new friend.
5M: was very big and had a long tail. Tim wanted to be friends with the tiger. Tim said, "Hi tiger, I am Tim. Do you
want to play with me?" The tiger looked at Tim and said, "Yes, I want to play with you." Tim and the tiger played
all day. They ran, jumped, and laughed. The tiger was very happy. But then, the tiger got tired. It wanted to rest.
Tim said, "I am tired. Let’s rest." The tiger yawned and said, "Okay, Tim. Let’s rest." Tim and the tiger sat under
a tree. They talked and laughed. The tiger was happy to have a new friend. They both had a fun day at the park.
10M: was very big and had a long tail. Tim was scared, but he wanted to be brave. He walked up to the tiger and said,
"Hi, tiger! I am Tim. Do you want to play with me?" The tiger looked at Tim and said, "Yes, I want to play. But I
am scared of the big tiger." Tim was not scared. He thought the tiger was nice. Tim and the tiger played together.
They ran and jumped. They had a lot of fun. Then, something unexpected happened. The tiger started to change.
It became a tiger! The tiger was not a tiger anymore. It was a tiger! Tim was very surprised. The tiger was not a
tiger anymore. It was a big tiger! Tim and the tiger became best friends. They played together every day.
25M: was very big and had a long tail. Tim was scared, but he wanted to be brave. Tim said, "Hi, tiger! I am Tim. Do
you want to play with me?" The tiger looked at Tim and said, "Yes, I want to play. But I am very hungry. Can
you help me find some food?" Tim thought for a moment and said, "Yes, I can help you. Let’s look for food
together." They looked and looked, but they could not find any food. Then, something unexpected happened.
The tiger started to talk! "I am not a tiger, I am a magic tiger. I can make your wishes come true," said the tiger.
Tim was very surprised. He wished for a big ice cream cone. The magic tiger made his wish come true. Tim and
the magic tiger became good friends and played in the park every day.
50M: was very nice. Tim wanted to play with the tiger. He said, "Hi, tiger! Let's play!" The tiger did not say anything.
Tim was sad. He wanted to find a way to make the tiger happy. Tim had an idea. He found a ball and threw it.
The tiger saw the ball and started to play with it. Tim was happy. He said, "Now, the tiger is happy too!" They
played together all day. The tiger was not sad anymore. Tim and the tiger became best friends.
75M: was very big and had a long tail. Tim was not scared. He wanted to be friends with the tiger. Tim said, "Hi, tiger!
My name is Tim. Do you want to play with me?" The tiger looked at Tim and said, "Yes, I want to play. My
name is Sam." Tim and Sam played all day. They ran, jumped, and laughed. They had so much fun. Tim was
happy to have a new friend. At the end of the day, Tim said, "Bye, Sam! I will come back to play with you
again." Sam said, "Bye, Tim! I will wait for you here." Tim went home and told his mom about his new friend.
100M: was sad. Tim asked, "Why are you sad, Mr. Tiger?" The tiger said, "I lost my toy. Can you help me find it?" Tim
wanted to help the tiger. They looked for the toy together. They looked under the trees and behind the bushes.
Then, they saw a big box. The tiger said, "I think my toy is in the box." Tim opened the box. Inside, there was a
big cake! The tiger was happy. But, the cake was not a real tiger. It was a toy tiger! Tim laughed and said, "Silly
tiger, you are not a real tiger!" They played with the toy tiger and had a fun day at the park.
440M (V2): was very hungry. Tim wanted to help the tiger. He thought, "I will feed the tiger some food." Tim went home
and got some food. He came back to the park and gave the food to the tiger. The tiger ate the food and was very
happy. Tim felt good because he helped the tiger. The tiger said, "Thank you, Tim. You are a good friend." Tim
and the tiger played in the park all day. They had lots of fun. Tim learned that it is good to help others. The tiger
learned that it is good to be kind and help others. And they both lived happily ever after.
373M (V1): was sad. Tim wanted to help the tiger. He thought the tiger needed a friend. Tim went to the tiger and said, "Hi, I
am Tim. I want to help you." The tiger looked at Tim and said, "Thank you, Tim. I am sad because I am alone. I
need a friend." Tim thought for a moment and said, "I will be your friend, tiger. We can play together." The tiger
was happy. Tim and the tiger played all day. They had lots of fun. The moral of the story is: Helping others can
make you happy too.
Table 6: Story completions for models trained on different amounts of TinyStories data. The original prompt and
the training set completion are also included.
Hyperparameter | GPT-Neo – 5 / 10 / 25 / 50, 75, 100, 440 (M)
Number of Parameters | 41M
Number of Layers | 4
Attention Heads | 16
Hidden size | 768
Layer norm ϵ | 1.0e-5
Sequence Length | 512
Max position embeddings | 512
Attention Dropout | 0.50 / 0.40 / 0.25 / 0.20
Classifier Dropout | 0.50 / 0.40 / 0.25 / 0.20
Embed. Dropout | 0.50 / 0.40 / 0.25 / 0.20
Resid. Dropout | 0.50 / 0.40 / 0.25 / 0.20
Summary first Dropout | 0.40 / 0.30 / 0.15 / 0.10
Weight decay | 0.20 / 0.20 / 0.20 / 0.10
Vocab Size | 6411 / 6411 / 16384 / 16384
Context length | 512
Batch size | 24
Gradient accumulation steps | 32
Gradient clipping | 2.0
Training steps | 15 000
Optimizer | AdamW
Adam β1 | 0.9
Adam β2 | 0.95
Adam ϵ | 1.0e-8
Initial learning rate | 5.0e-4
Final learning rate | 5.0e-5
Learning rate scheduler schedule | cosine
Warmup ratio | 1.6%
Table 7: Hyperparameters used for training GPT-Neo models on TinyStories.
Hyperparameter | Strict | Strict-Small
Number of parameters | 24M | 98M
Number of layers | 12 | 12
Attention heads | 6 | 12
Hidden size | 384 | 768
FF intermediate size | 1024 | 2048
Position Bucket size | 32 | 32
Layer norm ϵ | 1e-7 | 1e-7
Vocabulary size | 6 144 | 16 384
Sequence length | 128 | 128
Max position embeddings | 512 | 512
Hidden dropout | 0.1 | 0.1
Attention dropout | 0.1 | 0.1
Training steps | 20 000 | 20 000
Batch size | 80 | 80
Gradient Accumulation Steps | 32 | 32
Warmup ratio | 1.6% | 1.6%
Initial learning rate | 6e-4 | 6e-4
Final learning rate | 6e-5 | 6e-5
Learning rate scheduler | cosine | cosine
Weight decay | 0.1 | 0.1
Optimizer | AdamW | AdamW
Adam ϵ | 1e-6 | 1e-6
Adam β1 | 0.9 | 0.9
Adam β2 | 0.98 | 0.98
Gradient clipping | 2.0 | 2.0
Table 8: Hyperparameters used to train all LTG-BERT models for the different tracks. With max position embedding
we refer to the architectural capacity of the model – the model was trained with sequence length = 128.
Hyperparameter | BoolQ, MNLI, MNLI-mm, MRPC, MultiRC, QNLI, QQP, SST-2 | CoLA, RTE, WSC
Batch size | 32 | 16
Number of epochs | 10 | 10
Dropout | 0.1 | 0.1
Warmup proportion | 0.1 | 0.1
Learning Rate | 3.0e-5 | 3.0e-5
Decay type | linear | linear
Weight decay | 0.01 | 0.01
Optimizer | AdamW | AdamW
Adam ϵ | 1.0e-6 | 1.0e-6
Adam β1 | 0.9 | 0.9
Adam β2 | 0.999 | 0.999
Max length | 128 | 128
Gradient clip | 2.0 | 2.0
Mixed precision | True | True
Table 9: Hyperparameters used for finetuning the LTG-BERT models on GLUE.
In the following exercise, the student is given a beginning of a story. The student needs to complete it into a full story. The
exercise tests the student’s language abilities and creativity. The symbol *** marks the separator between the prescribed
beginning and the student’s completion:
Once upon a time, there was tiger who liked to play the guitar. One day, a bunny heard the guitar from a distance and *** asked,
"Hello tiger, can you play the guitar for me?" The tiger said, "Yes! I can!" So the bunny plucked up the guitar and started to play.
But then, the tiger roared and kicked the guitar too hard. The guitar made a big mess. The panda was so sad that he hopped away
and never came back. The end.
Please provide your general assessment about the part written by the student (the one after the *** symbol). Is it gramatically
correct? Is it consistent with the beginning of the story? Pay special attention to whether the student manages to complete the
sentence which is split in the middle by the separator ***.
I’ll provide an assessment of the student’s completion of the story:
Grammar: The student’s portion is generally grammatically correct, with a few minor issues:
...
...
In summary, while the student completes the split sentence correctly, the overall story lacks coherence and consistency with the
given beginning.
Now, grade the student’s completion in terms of 1. Grammar, 2. Creativity, 3. Consistency with the story’s beginning and
whether the plot makes sense. Please provide grades from a scale of 1-10 for each of the requested categories, namely: 1.
Grammar, 2. Creativity, 3. Consistency. Moreover, please provide your best guess of what the age of the student might be, as
reflected from the completion. Choose from possible age groups: A: 3 or under. B: 4-5. C: 6-7. D: 8-9. E: 10-12. F: 13-16.
Here are the grades for the student’s completion on a scale of 1-10:
1. Grammar: 7/10 ... 2. Creativity: 5/10 ... Consistency: 3/10 ...
Age group estimate: B: 4-5 years old ...
Table 10: A sample interaction with Claude-3.5 Sonnet. The story is generated by the GPT-Neo model trained
on 50M words of TinyStories. We include the prompt which also contains the model’s generation. The LLM’s
responses is stylized in italic. For brevity, part of the LLM response was omitted and replaced with ‘...’.
|
synthetic_cpt | 2 | LLM-Adapters_An_Adapter_Family_for_Parameter-Efficient_Fine-Tuning_of_Large_Language_Models.pdf | 4
2
0
2
n
u
J
3
1
]
E
S
.
s
c
[
1
v
0
0
3
0
1
.
6
0
4
2
:
v
i
X
r
a
Large Language Models as Software Components:
A Taxonomy for LLM-Integrated Applications
Irene Weber
Kempten University of Applied Sciences, Germany
[email protected]
Abstract
Large Language Models (LLMs) have become widely adopted recently. Research explores their use both
as autonomous agents and as tools for software engineering. LLM-integrated applications, on the other
hand, are software systems that leverage an LLM to perform tasks that would otherwise be impossible or
require significant coding effort. While LLM-integrated application engineering is emerging as a new discipline,
its terminology, concepts and methods need to be established. This study provides a taxonomy for LLM-
integrated applications, offering a framework for analyzing and describing these systems. It also demonstrates
various ways to utilize LLMs in applications, as well as options for implementing such integrations.
Following established methods, we analyze a sample of recent LLM-integrated applications to identify rel-
evant dimensions. We evaluate the taxonomy by applying it to additional cases. This review shows that
applications integrate LLMs in numerous ways for various purposes. Frequently, they comprise multiple
LLM integrations, which we term “LLM components”. To gain a clear understanding of an application’s
architecture, we examine each LLM component separately. We identify thirteen dimensions along which to
characterize an LLM component, including the LLM skills leveraged, the format of the output, and more.
LLM-integrated applications are described as combinations of their LLM components. We suggest a concise
representation using feature vectors for visualization.
The taxonomy is effective for describing LLM-integrated applications. It can contribute to theory building in
the nascent field of LLM-integrated application engineering and aid in developing such systems. Researchers
and practitioners explore numerous creative ways to leverage LLMs in applications. Though challenges
persist, integrating LLMs may revolutionize the way software systems are built.
Keywords: large language model, LLM-integrated, taxonomy, copilot, architecture, AI agent, LLM component
1. Introduction
Large Language Models (LLMs) have significantly impacted various sectors of economy and society [47]. Due to their proficiency in text understanding, creative work, communication, knowledge work, and code writing, they have been adopted in numerous fields, such as medicine, law, marketing, education, human resources, etc.
Public discussions often focus on the ethical aspects
and societal consequences of these systems [36, 39].
Meanwhile, research investigates Artificial General
Intelligences and autonomous AI agents that can use
services, data sources, and other tools, and collabo-
rate to solve complex tasks [11, 62, 57, 21]. In addi-
tion, LLMs offer many opportunities to enhance soft-
ware systems. They enable natural language interac-
tion [59], automate complex tasks [19], and provide
supportive collaboration, as seen with recent LLM-
based assistant products often branded as “copilots” 1.
This paper addresses the potential of LLMs for soft-
ware development by integrating their capabilities as
components into software systems. This contrasts
with current software engineering research, which
views LLMs as tools for software development rather
than as software components [14, 22], and with the
considerable body of research examining LLMs as au-
tonomous agents within multiagent systems [21].
Software systems that invoke an LLM and process
its output are referred to as “LLM-integrated appli-
cations”, “LLM-integrated systems”, “LLM-based ap-
plications”, etc. [32, 13, 57]. LLMs are versatile, mul-
tipurpose tools capable of providing functionalities
that would otherwise be unfeasible or require sub-
stantial development efforts [15, 24]. By significantly
expediting system development, they have the poten-
tial to revolutionize not only the way users interact
with technology, but also the fundamental processes
of software development.
LLM-integrated applications engineering is emerging
as a research field. E.g.,
[10] proposes LLM Sys-
tems Engineering (LLM-SE) as a novel discipline, and
[44, 8, 7] discuss experiences and challenges that de-
velopers of such systems encounter in practice.
This study develops a taxonomy that provides a
structured framework for categorizing and analyzing
LLM-integrated applications across various domains.
To develop and evaluate the taxonomy, we collected
a sample of LLM-integrated applications, concentrat-
ing on technical and industrial domains. These ap-
plications showcase a broad range of opportunities
to leverage LLMs, often integrating LLMs in mul-
tiple ways for distinct purposes.
In developing the
taxonomy, we found that examining each of these in-
tegrations, termed “LLM components”, separately is
crucial for a clear understanding of an application’s
architecture.
The taxonomy adopts an original architectural per-
spective, focusing on how the application interacts
with the LLM while abstracting from the specifics
of application domains. For researchers, the taxon-
omy contributes to shaping a common understanding
and terminology, thus aiding theory building in this
emerging domain [29, 50, 18]. For practitioners, the
taxonomy provides inspiration for potential uses of
LLMs in applications, presents design options, and
helps identify challenges and approaches to address
them.
Objectives. In this study, a taxonomy is understood
as a set of dimensions divided into characteristics.
The objective is to identify dimensions that are useful
for categorizing the integration of LLMs in applica-
tions from an architectural perspective. To be most
effective, the taxonomy should be easy to understand
and apply, yet distinctive enough to uncover the es-
sential aspects. Additionally, we aim to develop a
visual representation tailored to the taxonomy’s in-
tended purposes.
Overview. The following section 2 provides back-
ground on LLMs and introduces relevant concepts.
Section 3 presents an overview of related work. The
study design adheres to a Design Science Research
approach [46]. We apply established methods for tax-
onomy design [42, 48] as described in Section 4. This
section also presents the sample of LLM-integrated
applications used for this study. The developed tax-
onomy is presented, demonstrated and formally eval-
uated in section 5. In section 6, we discuss its usabil-
ity and usefulness. Section 7 summarizes the contri-
butions, addresses limitations, and concludes.
2. Large Language Models
2.1. Background
1E.g., https://docs.github.com/en/copilot,
https://copilot.cloud.microsoft/en-us/copilot-excel,
https://www.salesforce.com/einsteincopilot
State-of-the-art LLMs such as GPT-3.5, GPT-4,
Llama, PALM2, etc., are artificial neural networks consisting of neurons, i.e., very simple processing units, that are organized in layers and connected by
weighted links. Training a neural network means
adapting these weights such that the neural network
shows a certain desired behavior. Specifically, an
LLM is trained to predict the likelihoods of pieces of text, termed tokens, to occur as continuations of
a given text presented as input to the LLM. This in-
put is referred to as prompt. The prompt combined
with the produced output constitutes the context of
an LLM. It may comprise more than 100k tokens in
state-of-the-art LLMs2. Still, its length is limited and
determines the maximum size of prompts and outputs
that an LLM is capable of processing and generating
at a time.
Training of an LLM optimizes its parameters such
that its computed likelihoods align with real text ex-
amples. The training data is a vast body of text snip-
pets extracted, processed, and curated from sources
such as Wikipedia, Github code repositories, common
websites, books, or news archives. An LLM trained
on massive examples is termed a foundation model
or pre-trained model. During training, an LLM not
only learns to produce correct language but also ab-
sorbs and stores information and factual knowledge.
However, it is well known that LLMs frequently pick
up biases, leading to ethical problems. They may
also produce factually incorrect outputs that sound
plausible and convincing, termed hallucinations.
Recent findings show that LLMs can be applied to
a wide range of tasks by appropriately formulating
prompts. Different prompt patterns succeed in dif-
ferent tasks. Basic approaches rely on instructing
the LLM to solve a task described or explained in
the prompt. In few-shot prompting (also known as
few-shot learning), the prompt is augmented with ex-
ample input-output pairs illustrating how to solve the
task, e.g., the requested output format. The number
of examples can vary. Prompting with one example is
called one-shot prompting, while prompting without
any examples is called zero-shot prompting. One-shot
and few-shot prompting fall under the broader cat-
egory of in-context learning. Prompt patterns such as chain-of-thought and thinking-aloud aim to elicit advanced reasoning capabilities from LLMs.
2 https://platform.openai.com/docs/models
As effective prompts are crucial for unlocking the di-
verse capabilities of an LLM, the discipline of prompt
engineering is evolving, focusing on the systematic
design and management of prompts [66, 9, 53, 31].
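To make these prompt patterns concrete, the following Python sketch contrasts zero-shot and few-shot prompt construction for a simple classification task. The task, labels, and examples are invented for illustration and are not taken from any of the systems discussed later.

def zero_shot_prompt(text: str) -> str:
    # Zero-shot: the instruction alone, without solved examples.
    return (
        "Classify the sentiment of the following review as positive or negative.\n"
        f"Review: {text}\n"
        "Sentiment:"
    )

def few_shot_prompt(text: str) -> str:
    # Few-shot (in-context learning): the instruction is augmented with
    # input-output pairs that illustrate the task and the expected output format.
    examples = [
        ("The battery lasts forever, great value.", "positive"),
        ("Stopped working after two days.", "negative"),
    ]
    lines = ["Classify the sentiment of the following review as positive or negative.", ""]
    for review, label in examples:
        lines += [f"Review: {review}", f"Sentiment: {label}", ""]
    lines += [f"Review: {text}", "Sentiment:"]
    return "\n".join(lines)

print(few_shot_prompt("The screen is dim and the speakers rattle."))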
2.2. Definitions
Invoking an LLM results in an input-processing-
output sequence: Upon receiving a prompt, the LLM
processes it and generates an output. We refer to an
individual sequence of input-processing-output per-
formed by the LLM as LLM invocation, and define
an LLM-integrated application as a system in which
the software generates the prompt for the LLM and
processes its output. The concept of an application
is broad, encompassing service-oriented architectures
and systems with components loosely coupled via
API calls.
Given an LLM’s versatility, an application can uti-
lize it for different tasks, each demanding a specific
approach to create the prompt and handle the re-
sult. This paper defines a particular software compo-
nent that accomplishes this as an LLM-based software
component or, simply, LLM component. An LLM-
integrated application can comprise several LLM
components. The study develops a taxonomy for
LLM components. LLM-integrated applications are
described as combinations of their LLM components.
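As a minimal sketch of these definitions, the following Python fragment shows an application generating the prompt for an LLM invocation and processing its output. The names are invented and the LLM call is stubbed out, since no concrete API is prescribed here.

from typing import Callable

LLM = Callable[[str], str]  # maps a prompt to the generated output text

class LLMComponent:
    """One LLM integration: builds the prompt, invokes the LLM, handles the result."""

    def __init__(self, llm: LLM, instruction: str):
        self.llm = llm
        self.instruction = instruction  # fixed when the component is developed

    def invoke(self, task: str) -> str:
        prompt = f"{self.instruction}\n\n{task}"  # the application generates the prompt
        output = self.llm(prompt)                 # one input-processing-output sequence
        return output.strip()                     # the application processes the output

def stub_llm(prompt: str) -> str:
    # Stand-in for a call to a real LLM API.
    return "stubbed output"

summarizer = LLMComponent(stub_llm, "Summarize the following text in one sentence.")
print(summarizer.invoke("LLM-integrated applications embed LLM invocations in software."))

An LLM-integrated application would combine several such components, each with its own instruction and output handling.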
3. Related Work
With the recent progress in generative AI and LLMs,
the interest in these techniques has increased, and
numerous surveys have been published, providing an
extensive overview of technical aspects of LLMs [72],
reviewing LLMs as tools for software engineering [22],
and discussing the technical challenges of applying
LLMs across various fields [25]. Further studies ad-
dress the regulatory and ethical aspects of Genera-
tive AI and ChatGPT, with a particular focus on
AI-human collaboration [41], and Augmented Lan-
guage Models (ALMs), which are LLMs that enhance
their capabilities by querying tools such as APIs,
databases, and web search engines [38].
Taxonomies related to LLMs include a taxonomy for
prompts designed to solve complex tasks [49] and a
taxonomy of methods for cost-effectively invoking a
remote LLM [60]. A comparative analysis of stud-
ies on applications of ChatGPT is provided by [27],
whereas LLMs are compared based on their applica-
tion domains and the tasks they solve in [20]. Most
closely related to the taxonomy developed here is a
taxonomy for LLM-powered multiagent architectures
[21] which focuses on autonomous agents with less
technical detail. Taxonomies of applications of AI in
enterprises [48] and applications of generative AI, in-
cluding but not limited to LLMs [52], are developed
using methods similar to those in our study.
Several taxonomies in the field of conversational
agents and task-oriented dialog (TOD) systems ad-
dress system architecture [1, 40, 12, 3]. However, they
omit detailed coverage of the integration of generative
language models.
4. Methods
We constructed the taxonomy following established
guidelines [42, 48, 29], drawing from a sample of
LLM-integrated applications. These applications are
detailed in section 4.1.
4.1. Development
Taxonomy. We derived an initial taxonomy from the
standard architecture of conversational assistants de-
scribed in [3], guided by the idea that conversational
assistants are essentially “chatbots with tools”, i.e.,
language-operated user interfaces that interact with
external systems. This approach proved unsuccessful.
The second version was based on the classical three-
tier software architecture, and then extended over
several development cycles. By repeatedly apply-
ing the evolving taxonomy to the example instances,
we identified dimensions and characteristics using an
“empirical-to-conceptual” approach. When new di-
mensions emerged, additional characteristics were de-
rived in a “conceptual-to-empirical” manner. After
five major refinement cycles, the set of dimensions
and characteristics solidified. In the subsequent eval-
uation phase, we applied the taxonomy to a new set
of example instances that were not considered while
constructing the taxonomy. As the dimensions and
characteristics remained stable, the taxonomy was
considered complete. In the final phase, we refined
the wording and visual format of the taxonomy.
Visualization. Developing a taxonomy involves cre-
ating a representation that effectively supports its
intended purpose [29]. Taxonomies can be repre-
sented in various formats, with morphological boxes
[54, 55] or radar charts [21] being well-established
approaches. We evaluated morphological boxes, be-
cause they effectively position categorized instances
within the design space. However, we found that they
make it difficult to perceive a group of categorized in-
stances as a whole since they occupy a large display
area. This drawback is significant for our purposes,
as LLM-integrated applications often comprise mul-
tiple LLM components. Therefore, we developed a
more condensed visualization of the taxonomy based
on feature vectors.
Example instances. We searched for instances of
LLM-integrated applications for taxonomy develop-
ment that should meet the following criteria:
• The application aims for real-world use rather
than focusing on research only (such as testbeds
for experiments or proofs-of-concept). It demon-
strates efforts towards practical usability and ad-
dresses challenges encountered in real-world sce-
narios.
• The application’s architecture, particularly its
LLM components, is described in sufficient de-
tail for analysis.
• The sample of instances covers a diverse range
of architectures.
• The example instances are situated within indus-
trial or technical domains, as we aim to focus on
LLM-integrated applications beyond well-known
fields like law, medicine, marketing, human re-
sources, and education.
The search revealed a predominance of theoretical re-
search on LLM-integrated applications while papers
focusing on practically applied systems were scarce.
Searching non-scientific websites uncovered commer-
cially advertised AI-powered applications, but their
internal workings were typically undisclosed, and reli-
able evaluations were lacking. Furthermore, the het-
erogeneous terminology and concepts in this emerging field make a comprehensive formal literature search unfeasible.
Instead, by repeatedly search-
ing Google Scholar and non-scientific websites using
terms “LLM-integrated applications”, “LLM-powered
applications”, “LLM-enhanced system”, “LLM” and
“tools”, along similar variants, we selected six suitable
instances. Some of them integrate LLMs in multiple
ways, totaling eleven distinct LLM components.
For a thorough evaluation, we selected new instances
using relaxed criteria, including those intended for
research. Additionally, we included a real-world ex-
ample lacking explicit documentation to broaden the
diversity of our sample and assess the taxonomy’s
coverage. Within the five selected instances, we iden-
tified ten LLM components.
4.2. Sample of LLM-integrated applications
Table 1 gives an overview of the sample. Names of ap-
plications and LLM components are uniformly writ-
ten as one CamelCase word and typeset in small caps,
deviating from the format chosen by the respective
authors.
LowCode. LowCode is a web-based application
consisting of a prompt-definition section and a di-
alogue section. The prompt-definition section sup-
ports the design of prompts for complex tasks, such
as composing extensive essays, writing resumes for
job applications or acting as a hotel service chatbot
[5]. In the dialogue section, users converse with an
LLM to complete the complex task based on the de-
fined prompt.
LowCode comprises two LLM components termed
Planning and Executing. Planning operates in
the prompt-definition section, where a user roughly
describes a complex task, and Planning designs a
workflow for solving it. The prompt-definition section
offers a low-code development environment where the
LLM-generated workflow is visualized as a graphi-
cal flowchart, allowing a user to edit and adjust the
logic of the flow and the contents of its steps. For
instance, in essay-writing scenarios, this involves in-
serting additional sections, rearranging sections, and
refining the contents of sections. Once approved by
the user, LowCode translates the modified work-
flow back into natural language and incorporates it
into a prompt for Executing. In the dialogue sec-
tion, users converse in interactive, multi-turn dia-
logues with Executing. As defined in the prompt, it
acts as an assistant for tasks such as writing an essay
or resume, or as a hotel service chatbot. While the
idea of the LLM planning a workflow might suggest
using the LLM for application control, LowCode
Planning actually serves as a prompt generator that
supports developing prompts for complex tasks.
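The following rough sketch illustrates this division of labour; the function names and prompt wording are our own simplifications of the pattern described above, and the LLM call is stubbed.

def call_llm(prompt: str) -> str:
    # Stand-in for the real LLM invocation.
    return "1. Collect requirements\n2. Draft an outline\n3. Write the sections"

def plan_workflow(task_description: str) -> str:
    # "Planning": drafts a workflow that the application renders as a flowchart
    # and that the user may edit before approving it.
    return call_llm(f"Design a step-by-step workflow for the task: {task_description}")

def executing_prompt(approved_workflow: str, dialogue_history: list) -> str:
    # "Executing": the approved workflow is translated back into the prompt,
    # together with the running dialogue.
    history = "\n".join(dialogue_history)
    return ("You assist the user by following this workflow:\n"
            f"{approved_workflow}\n\nDialogue so far:\n{history}\nAssistant:")

workflow = plan_workflow("write an application essay")
print(executing_prompt(workflow, ["User: Please start with the outline."]))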
Honeycomb. Honeycomb is an observability plat-
form collecting data from software applications in
distributed environments for monitoring.
Users
define queries to retrieve information about the
observed software systems through Honeycomb’s
Query Builder UI. The recently added LLM-based
QueryAssistant allows users to articulate inquiries
in plain English, such as “slow endpoints by status
code” or “which service has the highest latency?”
The QueryAssistant converts these into queries in
Honeycomb’s format, which users can execute and
manually refine [7, 8].
MyCrunchGpt. MyCrunchGpt acts as an ex-
pert system within the engineering domain, specif-
ically for airfoil design and calculations in fluid me-
chanics. These tasks require complex workflows com-
prising several steps such as preparing data, param-
eterizing tools, and evaluating results, using vari-
ous software systems and tools. The aim of My-
CrunchGpt is to facilitate the definition of these
workflows and automate their execution [28].
MyCrunchGpt offers a web interface featuring a
dialogue window for inputting commands in plain
English, along with separate windows displaying the
Table 1: Example instances selected for development (top 6) and evaluation (bottom 5)

Application | References | LLM components
Honeycomb | [7, 8] | QueryAssistant
LowCode | [5], [35] | Planning, Executing
MyCrunchGpt | [28] | DesignAssistant, SettingsEditor, DomainExpert
MatrixProduction | [69] | Manager, Operator
WorkplaceRobot | [37] | TaskPlanning
AutoDroid | [64] | TaskExecutor, MemoryGenerator

ProgPrompt | [51] | ActionPlanning, ScenarioFeedback
FactoryAssistants | [26] | QuestionAnswering
SgpTod | [71] | DstPrompter, PolicyPrompter
TruckPlatoon | [70] | Reporting
ExcelCopilot | [16, 44] | ActionExecutor, Advisor, IntentDetector, Explainer
output and results of software tools invoked by My-
CrunchGpt in the backend. MyCrunchGpt relies
on predefined workflows, not supporting deviations
or cycles. By appending a specific instruction to the
dialogue history in the prompt for each step of the
workflow, it uses the LLM as a smart parser to ex-
tract parameters for APIs and backend tools from
user input. APIs and tools are called in the prede-
fined order [28, p. 56].
MyCrunchGpt is still in development. The paper
[28] explains the domain as well as the integration of
the LLM, but does not fully detail the implementa-
tion of the latter. Still, MyCrunchGpt illustrates
innovative applications of an LLM in a technical do-
main. We categorize three LLM components solving
tasks within MyCrunchGpt: a DesignAssistant
guiding users through workflows and requesting pa-
rameters for function and API calls; a SettingsEd-
itor updating a JSON file with settings for a back-
end software tool; and a DomainExpert which helps
evaluating results by comparing them to related re-
sults, e.g., existing airfoil designs, which it derives
from its trained knowledge.
MatrixProduction. MatrixProduction
em-
ploys an LLM for controlling a matrix production
system [69]. While in a classical line production
setup, workstations are arranged linearly and the
manufacturing steps follow a fixed sequence, matrix
production is oriented towards greater flexibility.
Autonomous transport vehicles carry materials
and intermediate products to workstations, termed
automation modules, each offering a spectrum of
manufacturing skills that it can contribute to the
production process. Compared to line production,
matrix production is highly adaptable and can
manufacture a variety of personalized products with
full automation. This requires intelligent production
management to (a) create workplans that orchestrate
and schedule the automation modules’ skills, and (b)
program the involved automation modules such that
they execute the required processing steps.
MatrixProduction incorporates two LLM compo-
nents: Manager creates workplans as sequences of
skills (a), while Operator generates programs for
the involved automation modules (b).
MatrixProduction prompts Manager and Op-
erator to provide textual explanations in addition
to the required sequences of skills or automation
module programs. The LLM output is processed
by a parser before being used to control the physi-
cal systems. Manager relies on built-in production-
specific knowledge of the LLM such as “a hole is pro-
duced by drilling”.
Noteworthy in this approach is its tight integra-
tion into the system landscape of Industry 4.0.
The few-shot Manager and Operator prompts
are generated automatically using Asset Adminis-
tration Shells, which are standardized, technology-
independent data repositories storing digital twins of
manufacturing assets for use in Industry 4.0 [2].
WorkplaceRobot. An experimental robot system
is enhanced with LLM-based task planning in [37].
The robot operates in a workplace environment fea-
turing a desk and several objects. It has previously
been trained to execute basic operations expressed
in natural language such as “open the drawer” or
“take the pink object and place it in the drawer”.
LLM-based task planning enables the robot to per-
form more complex orders like “tidy up the work area
and turn off all the lights”. To this end, an LLM is
prompted to generate a sequence of basic operations
that accomplish the complex order.
Although the robot expects operations phrased in natural language, the LLM is prompted with a Python coding task. For instance, the basic opera-
tion “turn on the green light” corresponds to a Python
command push_button(’green’). The prompt for
the LLM includes several examples each consisting
of a description of an environment state, a complex
order formatted as a comment, and a sequence of
Python robot commands that accomplish the com-
plex order. When invoking the LLM to generate the
Python program for a new order, the prompt is aug-
mented with a description of the environment’s cur-
rent state and the new order as a comment.
The Python code produced by the LLM is trans-
lated back to a sequence of basic operations in nat-
ural language. When the robot executes these oper-
ations, there is no feedback about successful comple-
tion. Rather, the system assumes that all basic op-
erations require a fixed number of timesteps to com-
plete.
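A simplified sketch of this prompting scheme is shown below. The example, the environment description, and command names such as open_drawer are placeholders of our own and not the actual vocabulary used in [37].

EXAMPLES = """\
# state: drawer closed, pink object on desk, green light off
# order: put the pink object away
open_drawer()
pick('pink object')
place_in('drawer')
close_drawer()
"""

def planning_prompt(state: str, order: str) -> str:
    # The current environment state and the new complex order are appended as
    # comments; the LLM is expected to continue with Python-style robot commands,
    # which the application translates back into basic natural language operations.
    return f"{EXAMPLES}\n# state: {state}\n# order: {order}\n"

print(planning_prompt("drawer open, desk cluttered, all lights on",
                      "tidy up the work area and turn off all the lights"))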
AutoDroid. The goal of mobile task automation is
hands-free user interaction for smartphones through
voice commands. AutoDroid is a voice control sys-
tem for smartphones that can automatically execute
complex orders such as “remind me to do laundry on
May 11th” or “delete the last photo I took” [64, 65].
Such complex orders are fulfilled by performing sequences of basic operations in an Android app, such as “scroll down, then press button x” in the calendar app. AutoDroid employs an LLM component
TaskExecutor to plan these sequences of opera-
tions. The challenge is that the next operation to ex-
ecute depends on the current state of the Android app
which continuously changes as the app is operated.
AutoDroid solves this by invoking the TaskEx-
ecutor repeatedly after each app operation with the
prompt comprising the updated state of the Graph-
ical User Interface (GUI) along with the user’s com-
plex order.
Before executing irrevocable operations, such as per-
manently deleting data or calling a contact, Auto-
Droid prompts the user to confirm or adjust the op-
eration. TaskExecutor is instructed to include a
“confirmation needed” hint in its output for such op-
erations.
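The iterative invocation pattern can be sketched as follows. The control loop, prompt wording, and helper functions are our own schematic reconstruction rather than AutoDroid's actual code, and the LLM call is stubbed.

def call_llm(prompt: str) -> str:
    return "FINISH"  # stub; a real invocation would return the next GUI action

def confirm_with_user(action: str) -> bool:
    return False  # stub: ask before irrevocable operations (deleting data, calling contacts)

def run_task(order: str, get_gui_state, execute_action, max_steps: int = 20) -> None:
    for _ in range(max_steps):
        prompt = ("You operate an Android app step by step.\n"
                  f"Current GUI state:\n{get_gui_state()}\n"
                  f"User order: {order}\n"
                  "Reply with the next action, 'FINISH', or mark it 'confirmation needed'.")
        action = call_llm(prompt)  # re-invoked after every executed action
        if action == "FINISH":
            return
        if "confirmation needed" in action and not confirm_with_user(action):
            continue
        execute_action(action)  # operating the app changes the GUI state

run_task("delete the last photo I took",
         get_gui_state=lambda: "<gallery screen>",
         execute_action=lambda action: None)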
The prompt for TaskExecutor comprises an ex-
tract from a knowledge base which is built automati-
cally in an offline learning phase as follows: In a first
step, a “UI Automator” (which is not an LLM com-
ponent) automatically and randomly operates the
GUI elements of an Android app to generate a UI
Transition Graph (UTG). The UTG has GUI states
as nodes and the possible transitions between GUI
states as edges. As next steps, AutoDroid invokes
two LLM components referred to as MemoryGen-
erators to analyze the UTG.
The first MemoryGenerator is prompted repeat-
edly for each GUI state in the UTG. Its task is to
explain the functionality of the GUI elements. Be-
sides instructions and examples of the table format
desired as output, its prompt includes an HTML rep-
resentation of the GUI state, the GUI actions preced-
ing this state, and the GUI element operated next.
Its output consists of tuples explaining the function-
ality of a GUI element by naming the derived func-
tionality (e.g., “delete all the events in the calendar
app”) and the GUI states and GUI element actions in-
volved. Similarly, the second MemoryGenerator
is prompted to output a table listing GUI states and
explanations of their functions. These tables consti-
tute AutoDroid’s knowledge base.
ProgPrompt. ProgPrompt [51] is an approach to LLM-based robot task planning similar to WorkplaceRobot. Its robot is controlled by
Python code and works in a real and a simulated
household environment.
ProgPrompt comprises two LLM components. Ac-
tionPlanning generates Python scripts for tasks
such as “microwave salmon” using basic opera-
tions
like grab(’salmon’), open(’microwave’),
and putin(’salmon’, ’microwave’), notably with-
out considering the current state of the environment.
To establish a feedback loop with the environment,
ActionPlanning adds assert statements. These
statements verify the preconditions of basic opera-
tions and trigger remedial actions when preconditions
are not met. For instance, a script for “microwave
salmon” comprises the following code fragment:
if assert(’microwave’ is ’opened’)
else: open(’microwave’)
putin(’salmon’, ’microwave’)
When operating in the simulated environment,
ProgPrompt can verify an assert statement
through its second LLM component, Scenario-
Feedback. Prompted with the current state of the
environment and the assert statement, Scenario-
Feedback evaluates it and outputs True or False.
FactoryAssistants. FactoryAssistants advise
workers on troubleshooting production line issues in
two manufacturing domains: detergent production
and textile production [26]. The assistants leverage
domain knowledge from FAQs and documented prob-
lem cases to answer user queries. The required do-
main knowledge is provided as a part of the prompt.
SgpTod. SgpTod employs an LLM to implement a
chatbot, specifically, a task-oriented dialogue (TOD)
system [71]. TOD systems are also known as conver-
sational assistants. In contrast to open-domain dia-
logue (ODD) systems, which engage users in goalless
conversations, they are designed for assisting users in
specific tasks.
In general, TOD systems require the following
components [3]: Natural Language Understanding
(NLU), analyzing the user’s input to classify intents
and extract entities; Dialogue Management (DM) for
deciding on a system action that is appropriate in
a given dialogue state (e.g., ask for more informa-
tion or invoke a hotel booking service); and Natu-
ral Language Generation (NLG) for producing a re-
sponse that the TOD system can present to the user.
Intent classification, also known as intent detection,
matches free-text user input to one of several tasks a
TOD system can perform (e.g., book a hotel). Entity
extraction isolates situational values, called entities,
from the user input (e.g., the town and the date of
the hotel booking). The TOD system may require
several dialogue turns to elicit all necessary entities
from the user.
In TOD research, the system’s in-
ternal representation of the user’s intentions and the
entity values is commonly referred to as its “belief
state”. For example, in the restaurant search domain,
the belief state may include attribute-value pairs like
cuisine:Indian and pricerange:medium.
SgpTod is a multi-domain TOD system, concur-
rently handling multiple task domains found in stan-
dard TOD evaluation datasets, such as recommend-
ing restaurants or finding taxis. Similar to other ex-
perimental TOD systems [23], SgpTod accesses a
database that stores information from the task do-
mains, such as available hotels and restaurants.
SgpTod comprises two LLM components, called
DstPrompter and PolicyPrompter, that are
both invoked in every dialogue turn between SgpTod
and the user. The DstPrompter handles the NLU
aspect, analyzing the user’s input and populating the
system’s belief state.
Its output is an SQL query
suited to extract the database entries that match the
current belief state. Upon retrieving the database en-
tries, SgpTod invokes its PolicyPrompter which
covers both DM and NLG. Prompted with the dia-
logue history and the database entries retrieved, it
produces a two-part output: a natural language re-
sponse for NLG and a system action for DM.
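A condensed sketch of this two-stage interplay is given below. The prompts are omitted, the two LLM components are stubbed, and an in-memory SQLite table stands in for the task-domain database; all names and values are illustrative.

import sqlite3

def dst_prompter(dialogue_history: str) -> str:
    # DstPrompter: returns an SQL query reflecting the current belief state.
    return "SELECT name FROM restaurants WHERE cuisine='Indian' AND pricerange='medium'"  # stub

def policy_prompter(dialogue_history: str, db_rows: list) -> str:
    # PolicyPrompter: returns a system action plus a natural language response.
    return f"inform(name) | I found {len(db_rows)} matching restaurants."  # stub

# Toy database standing in for the TOD domain data.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE restaurants (name TEXT, cuisine TEXT, pricerange TEXT)")
con.execute("INSERT INTO restaurants VALUES ('Curry Corner', 'Indian', 'medium')")

history = "User: I'd like a moderately priced Indian restaurant."
sql = dst_prompter(history)            # output consumed by an SQL engine
rows = con.execute(sql).fetchall()     # Data: Read
print(policy_prompter(history, rows))  # system action and response for the user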
TruckPlatoon. The concept of truck platooning
means that trucks travel closely together for bet-
ter fuel efficiency and traffic flow. TruckPla-
toon comprises an algorithmic control loop which
autonomously maintains a consistent distance be-
tween trucks. It invokes an LLM to generate natural-
language reports on the platoon’s performance and
stability from measurements tracked by the control
algorithm, providing easily understandable informa-
tion for engineers involved in monitoring and opti-
mizing the truck platooning system.
ExcelCopilot. ExcelCopilot is an example of
a recent trend where software companies integrate
LLM-based assistants, often termed “copilots”, into
their products [44]. These copilots not only provide
textual guidance but also perform actions within the
software environment, constituting a distinctive type
of LLM-integrated application. We chose Excel-
Copilot as an example for evaluating our taxonomy.
Since its implementation is undisclosed, we infer its
architecture from indirect sources, including a screen-
cast and a report on insights and experiences from
copilot developers [16, 44]. This inferred architecture
may deviate from the actual implementation.
ExcelCopilot is accessible in a task bar along-
side the Excel worksheet.
It features buttons with
context-dependent suggestions of actions and a text
box for users to type in commands in natural lan-
guage. ExcelCopilot only works with data tables,
so its initial suggestion is to convert the active work-
sheet’s data into a data table. Copilot functions ac-
tivate when a data table or part of it is selected. It
then presents buttons for four top-level tasks: “add
formula columns”, “highlight”, “sort and filter”, and
“analyze”. The “analyze” button triggers the copilot
to display more buttons, e.g., one that generates a
pivot chart from the selected data. ExcelCopilot
can also add a formula column to the data table and
explain the formula in plain language.
When a user inputs a free-text command, Excel-
Copilot may communicate its inability to fulfill
it. This constantly occurs with commands requiring
multiple steps, indicating that ExcelCopilot lacks
a planning LLM component as seen in, for example,
MatrixProduction. This observation, along with
its mention in [44], suggests that ExcelCopilot em-
ploys an intent detection-skill routing architecture.
This architecture includes an LLM component that
maps free-text user commands to potential intents
and then delegates to other LLM components tasked
with generating actions to fulfill those intents. Ac-
cordingly, ExcelCopilot comprises several types of
LLM components:
• Several distinct Action Executors generate
code for specific application actions, such as cre-
ating a pivot table, designing a worksheet for-
mula, inserting a diagram, and so on.
• An Advisor suggests meaningful next actions.
Its outputs serve to derive button captions and
prompts for ActionExecutors.
• When a user inputs a free-text command, the
IntentDetector is invoked to determine and
trigger a suitable ActionExecutor. The In-
tentDetector communicates its actions to
users and informs them when it cannot devise
a suitable action.
• The Explainer generates natural language ex-
planations of formulae designed by ExcelCopi-
lot. It is unclear whether under the hood, the
ActionExecutor is generating both the for-
mula and the explanation, or if two separate
LLM components are being invoked. We assume
the latter, i.e., that a separate Explainer LLM
component exists.
While users interact repeatedly with ExcelCopi-
lot, each interaction adheres to a single-turn pat-
tern, with the user providing a command and Ex-
celCopilot executing it [44].
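The inferred intent detection-skill routing architecture can be sketched as follows. Since the actual implementation is undisclosed, all names, prompts, and the routing table are our own assumptions, and the LLM call is stubbed.

def call_llm(prompt: str) -> str:
    return "add_formula_column"  # stub; a real IntentDetector would classify the command

ACTION_EXECUTORS = {
    # In the real system each executor would itself be an LLM component
    # generating the code for one application action.
    "add_formula_column": lambda command: f"formula-column action derived from: {command}",
    "create_pivot_chart": lambda command: f"pivot-chart action derived from: {command}",
}

def handle_user_command(command: str) -> str:
    intents = ", ".join(ACTION_EXECUTORS)
    intent = call_llm("Map the user command to one of the intents "
                      f"[{intents}] or answer 'unknown'.\nCommand: {command}")
    if intent not in ACTION_EXECUTORS:
        return "Sorry, I cannot do that yet."  # the copilot communicates its inability
    return ACTION_EXECUTORS[intent](command)   # delegate to the matching executor

print(handle_user_command("add a column with the profit margin per row"))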
5. A Taxonomy for LLM Components and
LLM-Integrated Applications
When developing the taxonomy, it emerged that an-
alyzing an LLM-integrated application should begin
with identifying and describing its distinct LLM com-
ponents. Analyzing each LLM component separately
helps capture details and provides a clear understand-
ing of how the application utilizes LLM capabili-
ties. The LLM-integrated application can then be
described as a combination of the LLM components
it employs.
Table 2: Dimensions and characteristics of the taxonomy. Codes of characteristics are printed in uppercase. “Meta” means “metadimension”. “MuEx” means “mutual exclusiveness”.

Meta | Dimension | Characteristics | MuEx
Invocation | Interaction | App, Command, Dialog | enforced
Invocation | Frequency | Single, Iterative | yes
Function | Logic | cAlculate, Control | yes
Function | UI | none, Input, Output, Both | yes
Function | Data | none, Read, Write, Both | yes
Prompt | Instruction | none, User, LLM, Program | enforced
Prompt | State | none, User, LLM, Program | enforced
Prompt | Task | none, User, LLM, Program | yes
Prompt | Check | none, User, LLM, Program | enforced
Skills | Skills | reWrite, Create, conVerse, Inform, Reason, Plan | no
Output | Format | FreeText, Item, Code, Structure | no
Output | Revision | none, User, LLM, Program | enforced
Output | Consumer | User, LLM, Program, Engine | enforced
5.1. Overview and demonstration
The taxonomy identifies 13 dimensions for LLM com-
ponents, grouped into five metadimensions as shown
in table 2. It comprises both dimensions with gen-
uinely mutually exclusive characteristics and those
with non-exclusive characteristics. For dimensions
related to the technical integration of LLMs within
applications, mutual exclusiveness is enforced. Given
the open nature of software architecture, the inte-
gration of LLMs allows for significant diversity.
In
practice, LLM components may show multiple char-
acteristics within these dimensions. Nonetheless, the
taxonomy requires categorizing each component with
a predominant characteristic, enforcing a necessary
level of abstraction to effectively organize and struc-
ture the domain.
We applied the taxonomy to categorize each of the
example instances described in section 4.2. The re-
sults are depicted in figure 1. The dimensions and
their characteristics are detailed and illustrated with
examples in section 5.2.
The taxonomy visualizes an LLM component by a
feature vector comprising binary as well as multi-
valued features. Non-mutually exclusive dimensions
are represented by a set of binary features. The re-
maining dimensions are encoded as n-valued features
where n denotes the number of characteristics. For
compactness, we use one-letter codes of the charac-
teristics as feature values in the visualizations.
In
table 2, these codes are printed in upper case in the
respective characteristic’s name.
A feature vector representing an LLM component
is visualized in one line. For dimensions with non-
mutually exclusive characteristics, all possible codes
are listed, with the applicable ones marked. The re-
maining dimensions are represented by the code of
the applicable characteristic, with the characteris-
tic none shown as an empty cell. We shade feature
values with different tones to support visual percep-
tion. LLM components within the same application
are grouped together, visualizing an LLM-integrating
application in a tabular format.
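To illustrate the feature-vector representation, the following sketch encodes a hypothetical LLM component using the dimensions and one-letter codes of table 2. The concrete component and its values are invented and do not correspond to any instance in figure 1.

from dataclasses import dataclass, field

@dataclass
class LLMComponentProfile:
    # Invocation
    interaction: str          # A(pp), C(ommand), D(ialog)
    frequency: str            # S(ingle), I(terative)
    # Function
    ui: str                   # '' (none), I(nput), O(utput), B(oth)
    logic: str                # A (cAlculate), C(ontrol)
    data: str                 # '' (none), R(ead), W(rite), B(oth)
    # Prompt
    prompt_instruction: str   # '' (none), U(ser), L(LM), P(rogram)
    prompt_state: str
    prompt_task: str
    prompt_check: str
    # Output
    output_revision: str      # '' (none), U, L, P
    output_consumer: str      # U, L, P, E(ngine)
    # Non-mutually exclusive dimensions, represented as sets of binary features
    skills: set = field(default_factory=set)         # subset of {W, C, V, I, R, P}
    output_format: set = field(default_factory=set)  # subset of {F, I, C, S}

    def vector(self) -> list:
        return [self.interaction, self.frequency, self.ui, self.logic, self.data,
                self.prompt_instruction, self.prompt_state, self.prompt_task,
                self.prompt_check, "".join(sorted(self.skills)),
                "".join(sorted(self.output_format)),
                self.output_revision, self.output_consumer]

example = LLMComponentProfile(
    interaction="C", frequency="S", ui="I", logic="A", data="R",
    prompt_instruction="P", prompt_state="P", prompt_task="U", prompt_check="P",
    output_revision="P", output_consumer="E",
    skills={"R"}, output_format={"C"})
print(example.vector())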
5.2. Dimensions and characteristics
5.2.1. Invocation dimensions
Two Invocation dimensions address the way the LLM
is invoked within the application.
Interaction describes how the user interacts with the
LLM with three characteristics:
App: Users never converse with the LLM directly
in natural language, rather the application invokes
the LLM automatically. E.g., users do not interact
Figure 1: Categorized example instances. See table 2 for a legend. ∗, 2: multiple LLM components.
directly with ExcelCopilot ActionExecutor or
with MatrixProduction Operator.
Command : Users input single natural
language
commands. E.g., users interact with AutoDroid
TaskExecutor through single natural
language
commands.
Dialog: Users engage in multi-turn dialogues with the
LLM component to achieve a use goal. E.g., users
repeatedly prompt LowCode Executing or My-
CrunchGpt DesignAssistant in multi-turn dia-
logues to obtain an essay or an airfoil design, respec-
tively.
Frequency addresses how often the application in-
vokes a specific LLM component to fulfill a goal:
Single: A single invocation of an LLM component
is sufficient to produce the result. E.g.,
in My-
CrunchGpt, the application internally invokes dis-
tinct LLM components once for each user input by
injecting varying prompt instructions.
Iterative: The LLM component is invoked repeatedly
to produce the result. E.g., AutoDroid TaskEx-
ecutor is invoked multiple times to fulfill a com-
mand with an updated environment description in
the State prompt; LowCode Executing is repeat-
edly prompted by the user to achieve the use goal
while the application updates the dialogue history.
5.2.2. Function dimensions
The Function dimensions are derived from the classi-
cal three-tier software architecture model which seg-
regates an application into three distinct layers: pre-
sentation, logic and data [17]. The presentation layer
implements the UI. On the input side, it allows users
to enter data and commands that control the appli-
cation. On the output side, it presents information
and provides feedback on the execution of commands.
The logic layer holds the code that directly realizes
the core objectives and processes of an application
such as processing data, performing calculations, and
making decisions. The data layer of an application
manages the reading and writing of data from and
to persistent data storage. Due to its versatility, an
LLM component can simultaneously implement func-
tionality for all three layers. The taxonomy addresses
this with three Function dimensions.
UI indicates whether an LLM component contributes
significantly to the user interface of an application,
avoiding the need to implement graphical UI controls
or display elements:
none: No UI functionality is realized by the LLM.
E.g., in ExcelCopilot, the LLM does not replace
any UI elements.
Input: Input UI is (partially) implemented by the LLM. E.g., in MatrixProduction Manager, users input their order in natural language, obviating a product configuration GUI.
Output: Output UI is (partially) implemented by the LLM. E.g., in TruckPlatoon, the output generated by the LLM component can replace a data cockpit with gauges and other visuals displaying numerical data.
Both: Input and output UI are (partially) implemented by the LLM. E.g., in MyCrunchGpt, the DesignAssistant provides a convenient conversational interface for parameterization of APIs and tools and feedback on missing values, which other-
wise might require a complex GUI.
Logic indicates whether the LLM component deter-
mines the control flow of the application. It discerns
two characteristics:
cAlculate: The output does not significantly impact
the control flow of the application, i.e., the output
is processed like data. E.g., MyCrunchGpt Set-
tingsEditor modifies a JSON file, replacing a pro-
grammed function; MyCrunchGpt DesignAssis-
tant asks the user for parameters, but the sequence
of calling APIs and tools follows a predefined work-
flow; the workflow computed by LowCode Plan-
ning is displayed without influencing the applica-
tion’s control flow.
Control : The output of the LLM is used for con-
trolling the application. E.g., the plans generated
by MatrixProduction Manager serve to sched-
ule and activate production modules; the actions pro-
posed by AutoDroid TaskExecutor are actually
executed and determine how the control flow of the
app proceeds.
Since an LLM invocation always computes a result,
cAlculate is interpreted as “calculate only”, making
cAlculate and Control mutually exclusive.
Data addresses whether the LLM contributes to read-
ing or writing persistent data:
none: The LLM does not contribute to reading or
writing persistent data. This characteristic applies
to most sample instances.
Read : The LLM is applied for reading from persistent
data store. E.g., SgpTod DstPrompter generates
SQL queries which the application executes; Honey-
comb QueryAssistant devises analytical database
queries.
Write and Both: No LLM component among the
samples generates database queries for creating or
updating persistent data.
5.2.3. Prompt-related dimensions
Integrating an LLM into an application poses spe-
cific requirements for prompts, such as the need for
prompts to reliably elicit output in the requested
form [68]. While a broad range of prompt patterns
have been identified and investigated [66], there is
still a lack of research on successful prompt pat-
terns specifically for LLM-integrated applications, on
which this taxonomy could build. Developing prompt
taxonomies is a challenging research endeavor in itself
[49] and is beyond the scope of this research. There-
fore, the taxonomy does not define a dimension with
specific prompt patterns as characteristics, but rather
focuses on how the application generates the prompt
for an LLM component from a technical perspective.
Prompts generally consist of several parts with dis-
tinct purposes, generated by different mechanisms.
Although many authors explore the concepts, a com-
mon terminology has yet to be established. This is
illustrated in table 3, showing terms from an ad-hoc
selection of recent papers addressing prompt gener-
In the table, italics indicate
ation in applications.
that the authors refrain from introducing an abstract
term and instead use a domain-specific description.
The term “examples” indicates a one-shot or few-shot
prompt pattern. The terms that are adopted for the
taxonomy are underlined.
The taxonomy distinguishes three prompt parts re-
ferred to as Prompt Instruction, Prompt State, and
Prompt Task. These parts can occur in any order,
potentially interleaved, and some parts may be ab-
sent.
• Instruction is the part of a prompt that outlines
how to solve the task. Defined during LLM com-
ponent development, it remains static through-
out an application’s lifespan.
• State is the situation-dependent part of the
prompt that is created dynamically every time
the LLM is invoked. The taxonomy opts for the
term State instead of “context” in order to avoid
confusion with the “LLM context” as explained
in section 2. The State may include the current
dialogue history, an extract of a knowledge base
needed specifically for the current LLM invoca-
tion, or a state or scene description, etc.
• Task is the part of the prompt conveying the
task to solve in a specific invocation.
Prompt Instruction, State and Task describe the ori-
gins of the prompt parts by uniform characteristics:
none: The prompt part is not present. E.g., Prog-
Prompt ActionPlanning has no State prompt,
nor does LowCode Planning (except the dialogue
history when planning a subprocess).
Instruction
and Task prompt parts are present in all sample in-
stances.
User : The user phrases the prompt part. E.g., the
Task for ExcelCopilot IntentDetector or for
LowCode Planning is phrased by the user. There
are no sample instances where the user provides the
Instruction or State prompt parts.
LLM : The prompt part is generated by an LLM. E.g.,
LowCode Planning generates the State for Low-
Code Executing and ExcelCopilot IntentDe-
tector generates the Task for ExcelCopilot Ac-
tionExecutors.
Program: Application code generates the prompt
part. E.g., AutoDroid programmatically generates
the State and the Task parts for its MemoryGen-
erators in the knowledge base building phase.
The Prompt Instruction dimension is always gener-
ated by Program. While a user and possibly an LLM
have defined this prompt part during application de-
velopment, this falls outside the scope of this taxon-
omy. Therefore, the Prompt Instruction dimension is
not discriminating and categorizes all cases as Pro-
gram. It is retained in the taxonomy for completeness
and better understandability.
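A minimal sketch of assembling these parts might look as follows; the wording is invented, and only the split into Instruction, State, and Task follows the taxonomy.

INSTRUCTION = ("You are an assistant controlling an Android app. "
               "Answer with exactly one GUI action.")  # static, defined at development time

def build_prompt(state: str, task: str) -> str:
    # State: regenerated for every invocation (e.g., current GUI description,
    # dialogue history, or retrieved knowledge-base entries).
    # Task: what to solve in this particular invocation (here: a user command).
    return f"{INSTRUCTION}\n\nCurrent state:\n{state}\n\nTask: {task}"

print(build_prompt("<settings screen, toggle 'Wi-Fi' is off>", "turn on Wi-Fi"))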
Prompt Check describes whether the application em-
ploys a review mechanism to control and modify the
prompt before invoking the LLM. The same charac-
teristics as for the prompt parts are applicable:
none: The prompt is used without check.
User : The user checks and revises the prompt.
LLM : Another LLM component checks or revises the
prompt.
Program: The application comprises code to check
or revise the prompt. E.g., AutoDroid removes
personal data, such as names, to ensure privacy
before invoking the TaskExecutor; Honeycomb
QueryAssistant incorporates a coded mechanism
against prompt injection attacks.
Table 3: Terms used for prompt parts. Expressions specific to a domain are printed in italics, “examples” indicates a one-shot or few-shot prompt pattern. Terms adopted for the taxonomy are underlined.

Source | Instruction | State | Task
[72] | task description + examples | | test instance
[34] | instruction prompt | | data prompt
[32] | predefined prompt | | user prompt
[45] | prompt template + examples | DB schema | user input question
[45] | examples | | SQL query result
[37] | prompt context, i.e., examples | environment state, scene description | input task commands
[5] | education prompt | dialogue history | user input task prompt
[5] | education prompt | dialogue history + provided workflow | (circumscribed)
[69] | role and goal + instruction + examples | context | current task
[26] | predefined system instruction + domain-specific information | query results from knowledge graph | the user’s request
Most example instances omit prompt checks. There
are no examples where a Check is performed by a
User or an LLM.
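As an illustration of a Check performed by Program, the following sketch redacts obvious personal data and rejects inputs resembling prompt injection before the LLM is invoked. It is our own simplified example, not the mechanism used by Honeycomb or AutoDroid.

import re

INJECTION_MARKERS = ("ignore previous instructions", "disregard the system prompt")

def check_prompt(prompt: str) -> str:
    lowered = prompt.lower()
    if any(marker in lowered for marker in INJECTION_MARKERS):
        raise ValueError("possible prompt injection, refusing to invoke the LLM")
    # crude redaction of e-mail addresses and phone numbers as a stand-in
    # for a real privacy filter
    prompt = re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", "[email]", prompt)
    prompt = re.sub(r"\+?\d[\d\s()/-]{7,}\d", "[phone]", prompt)
    return prompt

print(check_prompt("Remind me to mail [email protected] about the May 11th deadline"))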
5.2.4. Skills dimensions
The Skills dimension captures the types of LLM ca-
pabilities that an application utilizes. It is designed
as a dimension with six non-mutually exclusive char-
acteristics.
Skills is decomposed into six specific capabilities:
reWrite: The LLM edits or transforms data or
text, such as rephrasing, summarizing, reformat-
ting, correcting, or replacing values. E.g., My-
CrunchGpt SettingsEditor replaces values in
JSON files; TruckPlatoon converts measurements
into textual explanations.
Create: The LLM generates novel output. E.g.,
LowCode Executing generates substantial bodies
of text for tasks like essay writing.
conVerse: The application relies on the LLM’s capa-
bility to engage in purposeful dialogues with humans.
E.g., MyCrunchGpt DesignAssistant asks users
for missing parameters; SgpTod PolicyPrompter
decides how to react to user inputs and formulates
chatbot responses.
Inform: The application depends on knowledge that
the LLM has acquired during its training, unlike
applications that provide all necessary information
within the prompt. E.g., MyCrunchGpt Domain-
Expert provides expert knowledge on airfoil designs;
MatrixProduction relies on built-in knowledge of
production processes, such as “a hole is produced
by drilling”; LowCode Executing uses its learned
knowledge for tasks like essay writing.
Reason: The LLM draws conclusions or makes log-
ical inferences. E.g., FormulaExplainer in Ex-
celCopilot explains the effects of Excel functions
in formulas; AutoDroid MemoryGenerators ex-
plain the effects of GUI elements in Android apps.
Plan: The LLM designs a detailed method or course of action to achieve a specific goal. E.g., AutoDroid TaskExecutor and WorkplaceRobot
TaskPlanning devise action plans to achieve goals.
The Plan and Reason characteristics are interrelated,
as planning also requires reasoning. The intended
handling of these characteristics is to categorize an
LLM component as Plan only and understand Plan
as implicitly subsuming Reason.
The effectiveness of LLMs as components of software
applications relies on their commonsense knowledge
and their ability to correctly interpret and handle a
broad variety of text inputs, including instructions,
examples, and code. It is reasonable to assume that a
fundamental capability, which might be termed Understand, is leveraged by every LLM component. As
it is not distinctive, the taxonomy does not list it
explicitly in the Skills dimension.
Applying this taxonomy dimension requires users to
determine which skills are most relevant and worth
highlighting in an LLM component. Given the versa-
tility of LLMs, reducing the focus to few predominant
skills is necessary to make categorizations distinctive
and expressive.
5.2.5. Output-related dimensions
Output Format characterizes the format of the LLM’s
output. As an output may consist of several parts in
diverse formats, this dimension is designed as non-
mutually exclusive, same as the Skills dimension. It
distinguishes four characteristics that are distinctive
and well discernible:
FreeText: unstructured natural language text out-
put. E.g., TruckPlatoon and MyCrunchGpt
DomainExpert generate text output in natural lan-
guage; MatrixProduction Manager and Ma-
trixProduction Operator produce FreeText ex-
planations complementing output in custom formats
to be parsed by the application.
Item: a single text item from a predefined set of
items, such as a class in a classification task. E.g.,
ProgPrompt ScenarioFeedback outputs either
True or False.
Code: source code or other highly formalized output
that the LLM has learned during its training, such
as a programming language, XML, or JSON. E.g.,
AutoDroid TaskExecutor produces code to steer
an Android app; MyCrunchGpt SettingsEditor
outputs JSON.
Structure: structured, formalized output adhering to
a custom format. E.g., LowCode Planning out-
puts text in a format that can be displayed as a flow
chart; MatrixProduction Manager and Oper-
ator produce output in custom formats combined
with FreeText explanations.
Output Revision indicates whether the application
checks or revises the LLM-generated output before
utilization. These characteristics and their interpre-
tations mirror those in the Prompt Check dimension:
none: There is no revision of the LLM output.
User : The user revises the LLM output. E.g.,
the user improves the plan generated by LowCode
Planning.
LLM : A further LLM component checks or revises
the output of the LLM component under considera-
tion.
Program: Programmed code checks or revises the
LLM output. E.g., Honeycomb QueryAssistant
corrects the query produced by the LLM before exe-
cuting it [7].
There are no instances in the sample set where an-
other LLM revises or checks the output of the LLM.
Most sample applications do not check or revise the
LLM’s output, though several of them parse and
transform it. The purpose of the Output Revision
dimension is to indicate whether the application in-
cludes control or correction mechanisms, rather than
just parsing it.
Output Consumer addresses the way of utilizing the
LLM output:
User signifies that the LLM output is presented to
a human user. E.g., the text output of TruckPla-
toon is intended for humans, as well as the output
of MyCrunchGPT DomainExpert.
LLM indicates that the output serves as a prompt
part in a further LLM invocation. E.g., the knowl-
edge base entries generated by an AutoDroid Mem-
oryGenerator become part of the prompt for
AutoDroid TaskExecutor; the plan output by
LowCode Planning serves as a part of the prompt
for LowCode Executing.
Program describes instances where the LLM output
is consumed and processed further by a software com-
ponent of the application. E.g., the output of Ma-
trixProduction Manager is handled by software
systems (including a Manufacturing Execution Sys-
tem) which use it to compute prompts for other LLM
components.
Engine covers scenarios where the LLM output is in-
tended for execution on a runtime engine. E.g., the
SQL query generated by SgpTod DstPrompter is
processed by a SQL interpreter; a part of the output
of MatrixProduction Operator is executed by
automation modules.
Although applications may parse and transform the
LLM output before use, the Output Consumer di-
mension is meant to identify the ultimate consumer,
such as an execution engine, rather than an interme-
diary parser or transformation code. When applica-
tions divide the LLM output into parts for different
consumers, users applying the taxonomy need to de-
termine which consumer is most relevant, since this
dimension is designed to be mutually exclusive.
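The following sketch combines these output-related dimensions in a compact example: a stubbed LLM produces an SQL string (Output Format: Code), a programmed revision step constrains it, and an execution engine consumes it. All names and the toy database are illustrative.

import sqlite3

def call_llm(prompt: str) -> str:
    return "SELECT service, latency FROM spans ORDER BY latency DESC LIMIT 5;"  # stub

def revise_output(sql: str) -> str:
    # Output Revision by Program: enforce a read-only query before execution.
    if not sql.lstrip().lower().startswith("select"):
        raise ValueError("only SELECT statements may be executed")
    return sql.rstrip(";")

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE spans (service TEXT, latency REAL)")
con.execute("INSERT INTO spans VALUES ('checkout', 412.0)")

sql = revise_output(call_llm("slowest services?"))  # Output Consumer: Engine
print(con.execute(sql).fetchall())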
5.3. Evaluation
Figure 2 displays the number of occurrences of characteristics within the example instances. It must be noted, however, that these do not reflect actual
frequencies, as similar LLM components within the
same application are aggregated together, indicated
by symbols ∗ and 2 in figure 1. Furthermore, Ex-
celCopilot likely includes occurrences of Prompt
Check and Output Revision which are not counted
due to insufficient system documentation.
We evaluate the taxonomy against commonly ac-
cepted quality criteria: comprehensiveness, robust-
ness, conciseness, mutual exclusiveness, explanatory
power, and extensibility [58, 42]. The taxonomy
encompasses all example instances including those
that were not considered during its development.
This demonstrates comprehensiveness. As figure 1
shows, all example instances have unique categoriza-
tions, supporting the taxonomy’s robustness. This
not only indicates that the dimensions and charac-
teristics are distinctive for the domain, but also high-
lights the wide variety possible in this field. Concise-
ness demands that the taxonomy uses the minimum
number of dimensions and characteristics. The tax-
onomy gains conciseness by identifying relatively few
and abstract characteristics within each dimension.
However, it does not adhere to the related subcri-
terion that each characteristic must be present in at
least one investigated instance [54]. Unoccupied char-
acteristics are retained for dimensions whose char-
acteristics were derived conceptually, specifically, for
the Prompt dimensions, the Output Revision dimen-
sion, and the Data Function dimension, enhancing
the taxonomy’s ability to illustrate design options
and inspire novel uses for LLM integrations in ap-
plications. Some dimensions are constructed in par-
allel, sharing common sets of characteristics. While
this affects conciseness, it makes the taxonomy easier
to understand and apply. As is often seen in tax-
onomy development [54], we deliberately waived the
requirement for mutual exclusiveness for some di-
mensions, specifically the Output Format and Skills
dimensions. In the context of this taxonomy, these
can equivalently be understood as a set of six
and four binary dimensions respectively, each divided
into characteristics “yes” and “no”. However, framing
them as a single dimension with non-mutually exclu-
sive characteristics seems more intuitive.
Metadimensions structure the taxonomy, and most
of the characteristics are illustrated through exam-
ples. These measures are recognized for enhancing
the explanatory power of a taxonomy [58]. The
taxonomy’s flat structure allows for the easy addition
of dimensions and characteristics, indicating that its
extensibility is good. Potential extensions and fur-
ther aspects of the taxonomy, including its usefulness
and ease of use, are discussed in section 6.
We visualize the taxonomy (or, strictly speaking, cat-
egorized instances) in a compact form using feature
vectors with characteristics abbreviated to single-
letter codes. This approach has a drawback, as
it requires referencing a legend. Additionally, non-
applicable characteristics in mutually exclusive di-
mensions are not visible, which means the design
space is not completely shown. However, the com-
pactness of the representation allows LLM compo-
nents within a common application to be grouped
closely, so that an LLM-integrated application can
be perceived as a unit without appearing convoluted.
This is a significant advantage for our purposes.
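As an illustration of the encoding principle (the letter legend below is hypothetical, not the exact coding used in figure 1), each mutually exclusive dimension contributes one single-letter code and the concatenation represents one LLM component:

# Illustrative sketch of the single-letter feature-vector idea; the legend is hypothetical.
def feature_vector(categorization, legend):
    # One letter per (mutually exclusive) dimension, in a fixed dimension order.
    return "".join(legend[dim][categorization[dim]] for dim in legend)

legend = {
    "Interaction": {"App": "A", "Command": "C", "Dialog": "D"},
    "Output Consumer": {"User": "U", "LLM": "L", "Program": "P", "Engine": "E"},
}
component = {"Interaction": "Command", "Output Consumer": "Engine"}
print(feature_vector(component, legend))  # -> "CE"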
6. Discussion
The discussion first focuses on the taxonomy’s appli-
cability and ease of use before considering its overall
usefulness.
[Figure 2 is a table of counts, organized by the dimensions Invocation (Interaction, Frequency, Logic, UI), Function (Data), Prompt (Instruction, State, Task, Check), Skills, Output Format, Output Revision, and Output Consumer; the per-characteristic counts are omitted here.]
Figure 2: Occurrences of characteristics in the sample set of LLM-integrated applications.
6.1. Applicability and ease of use
The taxonomy was effectively applied to LLM-
integrated applications based on research papers,
source code, blog posts, recorded software demonstra-
tions, and developer experiences. The analysis of
LowCode revealed it to be a prompt definition tool
combined with an LLM-based chatbot, which devi-
ates from the strict definition of an LLM-integrated
application. Still, the taxonomy provided an effective
categorization and led to a clear understanding of the
system’s architecture.
Obviously, the ease of categorization depends on the
clarity and comprehensiveness of the available infor-
mation, which varies across analyzed systems. An-
alyzing applications of LLMs in novel and uncom-
mon domains can be challenging. While these papers
present inspiring and innovative ideas for LLM inte-
gration, such as MyCrunchGpt and TruckPla-
toon, they may prioritize explaining the application
area and struggle to detail the technical aspects of the
LLM integration. A taxonomy for LLM-integrated
applications can guide and facilitate the writing pro-
cess and lead to more standardized and comparable
descriptions.
Applying the taxonomy is often more straightforward
for research-focused systems. Omitting the com-
plexities required for real-world applications, such as
prompt checks and output revisions, their architec-
tures are simpler and easier to describe. A taxonomy
can point out such omissions.
A fundamental challenge in applying the taxonomy
arises from the inherent versatility of LLMs, which
makes it possible to define LLM components serving multiple
purposes. This is exemplified by SgpTod Poli-
cyPrompter, where the prompt is designed to pro-
duce a structure with two distinct outcomes (a class
label and a chatbot response), and similarly by Ma-
trixProduction, as detailed in section 4.2. Draw-
ing an analogy to “function overloading” in classical
programming, such LLM components can be termed
“overloaded LLM components”.
A taxonomy can handle overloaded LLM components
in several ways: (1) define more dimensions as non-
mutually exclusive, (2) label overloaded LLM compo-
nents as “overloaded” without a more detailed catego-
rization, or (3) categorize them by their predominant
purpose or output. While the first approach allows
for the most precise categorization, it complicates the
taxonomy. Moreover, it will likely result in nearly all
characteristics being marked for some LLM compo-
nents, which is ultimately not helpful. The second
approach simplifies categorization but sacrifices much
detail. Our taxonomy adopts the third approach, en-
forcing simplification and abstraction in descriptions
of overloaded LLM components while retaining es-
sential detail. The taxonomy can easily be extended
to include approach (2) as an additional binary di-
mension.
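For illustration, an overloaded LLM component in the style of SgpTod PolicyPrompter could request both outcomes in one structured output; the prompt wording, field names, and JSON contract below are assumptions made for this sketch, not taken from the cited system.

# Hypothetical sketch of an "overloaded" LLM component returning two outcomes in one call.
import json

def policy_prompter(llm, dialogue_state: str, user_turn: str):
    prompt = (
        "Given the dialogue state and the user turn, return JSON with two fields:\n"
        '  "label": the dialogue-policy class, and "response": the next chatbot reply.\n'
        f"State: {dialogue_state}\nUser: {user_turn}\nJSON:"
    )
    raw = llm(prompt)          # llm is any text-completion callable
    parsed = json.loads(raw)   # format adherence is assumed here
    return parsed["label"], parsed["response"]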
6.2. Usefulness
The search for instances of LLM-integrated appli-
cations uncovered activities across various domains.
Substantial research involving LLM integrations, of-
ten driven by theoretical interests, is notable in robot
task planning [37, 51, 61, 33, 63] and in the TOD
field [23, 71, 4, 6, 56]. Research exploring LLM po-
tentials from a more practical perspective can be
found in novel domains, such as industrial produc-
tion [69, 26] and other technical areas [28, 70]. Fur-
thermore, developers of commercial LLM-based ap-
plications are beginning to communicate their efforts
and challenges [44, 7]. The taxonomy has been ap-
plied to example instances from these and additional
areas. This demonstrates its potential as a common,
unified framework for describing LLM-integrated ap-
plications, facilitating the comparison and sharing
of development knowledge between researchers and
practitioners across various domains.
When applying the taxonomy to the example in-
stances, it proved to be effective and useful as an
analytical lens. Descriptions of LLM-integrated ap-
plications commonly explain background information
and details of the application domain in addition to
its LLM integration. When used as an analytical
lens, the taxonomy quickly directs the analysis to-
wards the aspects of LLM integration, abstracting
from the specificities of the domain.
The taxonomy describes how LLM capabilities can be
leveraged in software systems, offers inspiration for
LLM-based functions, and outlines options for their
implementation as follows. The Skills dimension out-
lines the range of capabilities an LLM can contribute
to an application through a concise set of characteris-
tics, while the Function dimension suggests potential
uses, further supported by the Interaction dimension.
The Output Type dimension indicates options for en-
coding the output of an LLM in formats beyond plain
text, making it processable by software. The Output
Consumer dimension illustrates the diverse ways to
utilize or act upon LLM output. Thus, the taxonomy,
as intended, spans a design space for LLM integra-
tions.
The sampled LLM-integrated applications showcase
the creativity of researchers and developers in ap-
plying and exploiting the potentials of LLMs, rang-
ing from straightforward solutions (e.g., TruckPla-
toon) to highly sophisticated and technically com-
plex ones (e.g., AutoDroid). When using the tax-
onomy to inspire innovative uses of LLMs, we recom-
mend supplementing it with descriptions of example
applications to enhance its illustrativeness. The char-
acteristics of the Skills dimension are derived prag-
matically from the investigated example instances.
While they do not claim to be exhaustive or deeply
rooted in LLM theory or cognitive science, they add
relevant details to the categorizations and illustrate
design options and potentials for using LLMs as soft-
ware components.
It emerged as a key insight of this research that,
rather than analyzing an LLM-integrated application
in whole, analysis should start with the identifica-
tion and description of its distinct LLM components.
This is essential for gaining a clear understanding of
how the application utilizes the capabilities of LLMs.
The LLM-integrated application then manifests as a
combination of its LLM components. As shown in fig-
ure 1, the visualization effectively displays both the
quantity and the variety of LLM components in an
LLM-integrated application.
LLM components interact through prompt chaining,
where one LLM component’s output feeds into an-
other’s input [67]. When an LLM-integrated applica-
tion involves such an interaction, the taxonomy rep-
resents it as an LLM characteristic within a Prompt
dimension. The taxonomy can capture the variance
in these interactions. For instance, in AutoDroid
TaskExecutor and LowCode Executing, the
LLM characteristic appears in the Prompt State di-
mension, because their prompt components (knowl-
edge base excerpts and prompt definition, respec-
tively) are generated by other LLM components in a
preparatory stage. In contrast, the LLM character-
istic appears in the Prompt Task dimension for Ma-
trixProduction Operator, because its prompt
part is generated individually by the MatrixPro-
duction Manager almost immediately before use.
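A minimal sketch of such prompt chaining is shown below (function names and prompt texts are invented for illustration, loosely following the AutoDroid example): the output of a preparatory LLM component is inserted into the prompt of a later one.

# Hypothetical two-stage prompt chain; "llm" is any text-completion callable.
def memory_generator(llm, ui_description: str) -> str:
    # Preparatory stage: produce knowledge-base entries offline (later used as Prompt State).
    return llm(f"Summarize how to operate this UI as reusable how-to entries:\n{ui_description}")

def task_executor(llm, knowledge_base: str, task: str) -> str:
    # The earlier LLM output becomes part of this component's prompt.
    return llm(f"Knowledge base:\n{knowledge_base}\n\nExecute the task step by step: {task}")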
Taxonomy dimensions that cover entire LLM-integrated applications may be useful. Given their
complexity, these dimensions should be designed
based on a broader range of examples, which will only
become available as more LLM-integrated applica-
tions are developed and their architectures disclosed
in the future. Extensions to the taxonomy could
also include dimensions for describing the structure
of prompts in more detail, as well as dimensions ad-
dressing characteristics of the language models used.
Table 4: LLM usage in the sample instances. “Evals” indicates evaluations of various LLMs.
Application        Used or best LLM   Evals  Comments
Honeycomb          GPT-3.5            yes    GPT-4 far too slow
LowCode            GPT-3.5-turbo
MyCrunchGpt        GPT-3.5                   then awaiting the publication of GPT-4
MatrixProduction   text-davinci-003
WorkplaceRobot     GPT-3
AutoDroid          GPT-4              yes    GPT-4 best for tasks requiring many steps
ProgPrompt         GPT-3                     CODEX better, but access limits prohibitive
FactoryAssistants  GPT-3.5
SgpTod             GPT-3.5            yes    GPT-3.5 best more often than others combined
TruckPlatoon       GPT-3.5-turbo
ExcelCopilot       N/A                       combined LLMs in Copilot for Microsoft 365 [43]
7. Conclusion
This paper investigates the use of LLMs as soft-
ware components.
Its perspective differs from cur-
rent software engineering research, which investigates
LLMs as tools for software development [14, 22] and
from research examining LLMs as autonomous agents
[11, 62, 57, 21]. This paper defines the concept of an
LLM component as a software component that re-
alizes its functionality by invoking an LLM. While
LLM components implicitly appear in various works,
termed, for example, “prompters”, “prompted LLM”,
“prompt module”, or “module” [30, 71, 6, 7], to our
knowledge, this concept has not yet been formalized
or systematically investigated.
The main contribution of this study is a taxonomy
for the analysis and description of LLM components,
extending to LLM-integrated applications by charac-
terizing them as combinations of LLM components.
In addition to the dimensions and characteristics of
the taxonomy, the study contributes a taxonomy vi-
sualization based on feature vectors, which is more
compact than the established visualizations such as
morphological boxes [55] or radar charts.
It repre-
sents an LLM-integrated application as one visual en-
tity in a tabular format, with its LLM components
displayed as rows.
The taxonomy was constructed using established
methods, based on a set of example instances, and
evaluated with a new set of example instances. The
combined samples exhibit broad variation along the
identified dimensions. For some instances, informa-
tion was not available, necessitating speculative in-
terpretation. However, since the sample is used for
identifying options rather than quantitative analysis,
this issue and the representativeness of the sample
are not primary concerns. The evaluation was con-
ducted by the developer of the taxonomy, consistent
with recent related work [21, 52, 48]. Using a new
sample for evaluation strengthens the validity of the
results.
A further significant contribution of the paper is a
systematic overview of a sample of LLM-integrated
applications across various industrial and technical
domains, illustrating a spectrum of conceptual ideas
and implementation options.
As the examples show, LLM components can re-
place traditionally coded functions in software sys-
tems and enable novel use cases. However, practi-
cal challenges persist. Developers report that new
software engineering methods are required, e.g., for
managing prompts as software assets and for test-
ing and monitoring applications. For instance, the
costs of LLM invocations prohibit the extensive au-
tomated testing that is standard in software devel-
opment practice [44, 7]. Challenges also arise from
the inherent indeterminism and uncontrollability of
LLMs. Small variations in prompts can lead to differ-
ences in outputs, while automated output processing
in LLM-integrated applications requires the output
to adhere to a specified format.
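A common mitigation, sketched below under the assumption of a JSON output contract (the schema and retry policy are illustrative, not drawn from the surveyed systems), is to validate the output of an LLM component before passing it to its consumer and to re-invoke the LLM when validation fails.

# Illustrative guard between an LLM component and its Output Consumer.
import json

def invoke_with_format_check(llm, prompt: str, required_keys: set, max_retries: int = 2):
    for attempt in range(max_retries + 1):
        raw = llm(prompt)
        try:
            data = json.loads(raw)
            if required_keys.issubset(data):
                return data  # safe to hand over to program or engine
        except json.JSONDecodeError:
            pass
        # Tighten the instruction and retry.
        prompt += "\nReturn valid JSON with keys: " + ", ".join(sorted(required_keys))
    raise ValueError("LLM output did not conform to the expected format")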
Furthermore,
the deployment mode of LLMs,
whether local (on the same hardware as the ap-
plication) or remote, managed privately or offered
as Language-Models-as-a-Service (LMaaS), has an im-
pact on performance and usability. Table 4 gives an
overview of the LLMs used in our sample of appli-
cations. Where papers report evaluations of mul-
tiple LLMs, the table displays the chosen or best-
performing LLM. Although not representative, the
table provides some insights. LMaaS dominates,
likely due to its convenience, but more importantly,
due to the superior performance of the provided
LLMs.
Concerns regarding LMaaS include privacy, as sensi-
tive data might be transmitted to the LLM through
the prompt [64], and service quality, i.e., reliability,
availability, and costs. Costs typically depend on the
quantity of processed tokens. This quantity also af-
fects latency, which denotes the processing time of
an LLM invocation. A further important factor for
latency is the size of the LLM, with larger models
being slower [7].
When building LLM-based applications for real-
world use, the reliability and availability of an LMaaS
are crucial. Availability depends not only on the
technical stability of the service, but also on factors
such as increased latency during high usage periods
or usage restrictions imposed by the provider of an
LMaaS, as reported for ProgPrompt [51]. Beyond
technical aspects, the reliability of an LMaaS also en-
compasses its behavior. For instance, providers might
modify a model to enhance its security, potentially
impacting applications that rely on it.
Despite practical challenges, integrating LLMs into
systems has the potential to alter the way software
is constructed and the types of systems that can be
realized. Prompts are central to the functioning of
LLM components which pose specific requirements
such as strict format adherence. Therefore, an im-
portant direction for future research will be prompt
engineering specifically tailored for LLM-integrated
applications.
In future work, the taxonomy will be extended to
distinguish finer-grained parts of prompts, allowing a
more detailed description and comparison of prompts
and related experimental results. Initial studies share
results on the format-following behavior of LLMs [68]
as a subtopic of instruction-following [73], derived
with synthetic benchmark data.
It is necessary to
complement their results with experiments using data
and tasks from real application development projects
because, in the early stages of this field, synthetic
benchmarks may fail to cover relevant aspects within
the wide range of possible options. Another crucial
research direction involves exploring how LLM char-
acteristics correspond to specific tasks, such as de-
termining the optimal LLM size for intent detection
tasks. The taxonomy developed in this study can sys-
tematize such experiments and their outcomes. Ad-
ditionally, it provides a structured framework for de-
lineating design choices in LLM components, making
it a valuable addition to future training materials.
Acknowledgements
Special thanks to Antonia Weber and Constantin We-
ber for proofreading and providing insightful and con-
structive comments.
References
[1] Eleni Adamopoulou and Lefteris Moussiades. An
Overview of Chatbot Technology. In Ilias Ma-
glogiannis, Lazaros Iliadis, and Elias Pimeni-
dis, editors, Artificial Intelligence Applications
and Innovations, IFIP Advances in Information
and Communication Technology, pages 373–383,
Cham, 2020. Springer International Publishing.
doi:10.1007/978-3-030-49186-4_31.
[2] Sebastian Bader, Erich Barnstedt, Heinz Be-
denbender, Bernd Berres, Meik Billmann, and
Marko Ristin. Details of the asset adminis-
tration shell-part 1: The exchange of informa-
tion between partners in the value chain of in-
dustrie 4.0 (version 3.0 rc02). Working Paper,
Berlin: Federal Ministry for Economic Affairs
and Climate Action (BMWK), 2022. doi.org/
10.21256/zhaw-27075.
[3] Marcos Baez, Florian Daniel, Fabio Casati, and
Boualem Benatallah. Chatbot integration in few
patterns. IEEE Internet Computing, pages 1–1,
2020. doi:10.1109/MIC.2020.3024605.
[4] Tom Bocklisch, Thomas Werkmeister, Daksh Varshneya, and Alan Nichol. Task-Oriented Dialogue with In-Context Learning. (arXiv:2402.12234), February 2024. doi:10.48550/arXiv.2402.12234.
[5] Yuzhe Cai, Shaoguang Mao, Wenshan Wu, Ze-
hua Wang, Yaobo Liang, Tao Ge, Chenfei Wu,
Wang You, Ting Song, Yan Xia, Jonathan Tien,
and Nan Duan. Low-code LLM: Visual Pro-
gramming over LLMs. (arXiv:2304.08103), April
2023. doi:10.48550/arXiv.2304.08103.
[6] Lang Cao. DiagGPT: An LLM-based Chatbot
with Automatic Topic Management for Task-
Oriented Dialogue. (arXiv:2308.08043), August
2023. doi:10.48550/arXiv.2308.08043.
[7] Phillip Carter. All the Hard Stuff Nobody Talks About When Building Products with LLMs. Honeycomb, May 2023. https://www.honeycomb.io/blog/hard-stuff-nobody-talks-about-llm.
[8] Phillip Carter. So We Shipped an AI Product. Did It Work? Honeycomb, October 2023. https://www.honeycomb.io/blog/we-shipped-ai-product.
[9] Banghao Chen, Zhaofeng Zhang, Nicolas Langrené, and Shengxin Zhu. Unleashing the potential of prompt engineering in Large Language Models: A comprehensive review. (arXiv:2310.14735), October 2023. doi:10.48550/arXiv.2310.14735.
[10] Wang Chen, Yan-yi Liu, Tie-zheng Guo, Da-
peng Li, Tao He, Li Zhi, Qing-wen Yang,
Hui-han Wang, and Ying-you Wen.
Systems engineering issues for industry applications of large language model. Applied Soft Computing, 151:111165, January 2024. doi:10.1016/j.asoc.2023.111165.
[11] Yuheng Cheng, Ceyao Zhang, Zhengwen Zhang,
Xiangrui Meng, Sirui Hong, Wenhao Li, Zihao
Wang, Zekai Wang, Feng Yin, Junhua Zhao, and
Xiuqiang He. Exploring Large Language Model
based Intelligent Agents: Definitions, Methods,
and Prospects.
(arXiv:2401.03428), January
2024. doi:10.48550/arXiv.2401.03428.
[12] Silvia Colabianchi, Andrea Tedeschi,
and
Francesco Costantino. Human-technology in-
tegration with industrial conversational agents:
A conceptual architecture and a taxonomy for
manufacturing.
Journal of Industrial Infor-
mation Integration, 35:100510, October 2023.
doi:10.1016/j.jii.2023.100510.
[13] Jonathan Evertz, Merlin Chlosta, Lea Schön-
herr, and Thorsten Eisenhofer. Whispers in
the Machine: Confidentiality in LLM-integrated
Systems.
(arXiv:2402.06922), February 2024.
doi:10.48550/arXiv.2402.06922.
[14] Angela Fan, Beliz Gokkaya, Mark Harman,
Mitya Lyubarskiy, Shubho Sengupta, Shin Yoo,
and Jie M. Zhang. Large Language Models
for Software Engineering: Survey and Open
Problems. (arXiv:2310.03533), November 2023.
doi:10.48550/arXiv.2310.03533.
[15] Wenqi Fan, Zihuai Zhao, Jiatong Li, Yunqing
Liu, Xiaowei Mei, Yiqi Wang, Zhen Wen, Fei
Wang, Xiangyu Zhao, Jiliang Tang, and Qing
Li. Recommender Systems in the Era of Large
Language Models (LLMs). (arXiv:2307.02046),
August 2023. doi:10.48550/arXiv.2307.02046.
[16] David Fortin. Microsoft Copilot
in Excel:
What It Can and Can’t Do. YouTube, Jan-
uary 2024. https://www.youtube.com/watch?
v=-fsu9IXMZvo.
[17] Martin Fowler. Patterns of Enterprise Applica-
tion Architecture. 2002. ISBN 978-0-321-12742-
6.
[18] Shirley Gregor. The nature of theory in infor-
mation systems. MIS quarterly, pages 611–642,
2006. doi:10.2307/25148742.
[19] Yanchu Guan, Dong Wang, Zhixuan Chu, Shiyu
Wang, Feiyue Ni, Ruihua Song, Longfei Li, Jin-
jie Gu, and Chenyi Zhuang.
Intelligent Vir-
tual Assistants with LLM-based Process Au-
tomation. (arXiv:2312.06677), December 2023.
doi:10.48550/arXiv.2312.06677.
[20] Muhammad Usman Hadi, Qasem Al Tashi,
Rizwan Qureshi, Abbas Shah, Amgad Muneer,
Muhammad Irfan, Anas Zafar, Muhammad Bi-
lal Shaikh, Naveed Akhtar, Jia Wu, and Seyedali
Mirjalili. Large Language Models: A Compre-
hensive Survey of its Applications, Challenges,
Limitations, and Future Prospects, September
2023. doi:10.36227/techrxiv.23589741.v3.
[21] Thorsten Händler.
A Taxonomy for Au-
tonomous LLM-Powered Multi-Agent Architec-
tures:.
In Proceedings of the 15th Interna-
tional Joint Conference on Knowledge Discov-
ery, Knowledge Engineering and Knowledge
Management, pages 85–98, Rome, Italy, 2023.
SCITEPRESS - Science and Technology Publi-
cations. doi:10.5220/0012239100003598.
[22] Xinyi Hou, Yanjie Zhao, Yue Liu, Zhou Yang,
Kailong Wang, Li Li, Xiapu Luo, David Lo, John
Grundy, and Haoyu Wang. Large Language
Models for Software Engineering: A Systematic
Literature Review. (arXiv:2308.10620), Septem-
ber 2023. doi:10.48550/arXiv.2308.10620.
[23] Vojtěch Hudeček and Ondrej Dusek. Are Large Language Models All You Need for Task-Oriented Dialogue? In Svetlana Stoyanchev, Shafiq Joty, David Schlangen, Ondrej Dusek, Casey Kennington, and Malihe Alikhani, editors, Proceedings of the 24th Annual Meeting of the Special Interest Group on Discourse and Dialogue, pages 216–228, Prague, Czechia, September 2023. Association for Computational Linguistics. doi:10.18653/v1/2023.sigdial-1.21.
[24] Kevin Maik Jablonka, Qianxiang Ai, Alexander
Al-Feghali, Shruti Badhwar, Joshua D. Bocarsly,
Andres M. Bran, Stefan Bringuier, Catherine L.
Brinson, Kamal Choudhary, Defne Circi, Sam
Cox, Wibe A. de Jong, Matthew L. Evans, Nico-
las Gastellu, Jerome Genzling, María Victoria
Gil, Ankur K. Gupta, Zhi Hong, Alishba Im-
ran, Sabine Kruschwitz, Anne Labarre, Jakub
Lála, Tao Liu, Steven Ma, Sauradeep Majum-
dar, Garrett W. Merz, Nicolas Moitessier, Elias
Moubarak, Beatriz Mouriño, Brenden Pelkie,
Michael Pieler, Mayk Caldas Ramos, Bojana
Ranković, Samuel Rodriques, Jacob Sanders,
Philippe Schwaller, Marcus Schwarting, Jiale
Shi, Berend Smit, Ben Smith, Joren Van Herck,
Christoph Völker, Logan Ward, Sean War-
ren, Benjamin Weiser, Sylvester Zhang, Xiaoqi
Zhang, Ghezal Ahmad Zia, Aristana Scour-
tas, K. Schmidt, Ian Foster, Andrew White,
and Ben Blaiszik. 14 examples of how LLMs
can transform materials science and chem-
istry: A reflection on a large language model
hackathon. Digital Discovery, 2(5):1233–1250,
2023. doi:10.1039/D3DD00113J.
[25] Jean Kaddour,
Joshua Harris, Maximilian
Mozes, Herbie Bradley, Roberta Raileanu, and
Robert McHardy.
Challenges and Applica-
tions of Large Language Models, July 2023.
doi:10.48550/arXiv.2307.10169.
[26] Samuel Kernan Freire, Mina Foosherian, Chao-
fan Wang, and Evangelos Niforatos. Harnessing
Large Language Models for Cognitive Assistants
in Factories. In Proceedings of the 5th Interna-
tional Conference on Conversational User Inter-
faces, CUI ’23, pages 1–6, New York, NY, USA,
July 2023. Association for Computing Machin-
ery. doi:10.1145/3571884.3604313.
[27] Anis Koubaa, Wadii Boulila, Lahouari Ghouti,
Ayyub Alzahem, and Shahid Latif. Explor-
ing ChatGPT Capabilities and Limitations: A
Survey. IEEE Access, 11:118698–118721, 2023.
doi:10.1109/ACCESS.2023.3326474.
[28] Varun Kumar, Leonard Gleyzer, Adar Ka-
hana, Khemraj Shukla, and George Em Karni-
adakis. MyCrunchGPT: A LLM Assisted Frame-
work for Scientific Machine Learning.
Jour-
nal of Machine Learning for Modeling and
Computing, 4(4), 2023.
doi.org/10.1615/
JMachLearnModelComput.2023049518.
[29] Dennis Kundisch, Jan Muntermann, Anna Maria Oberländer, Daniel Rau, Maximilian Röglinger, Thorsten Schoormann, and Daniel Szopinski. An Update for Taxonomy Designers. Business & Information Systems Engineering, 64(4):421–439, August 2022. doi:10.1007/s12599-021-00723-x.
[30] Gibbeum Lee, Volker Hartmann, Jongho Park, Dimitris Papailiopoulos, and Kangwook Lee. Prompted LLMs as chatbot modules for long open-domain conversation. In Anna Rogers, Jordan Boyd-Graber, and Naoaki Okazaki, editors, Findings of the Association for Computational Linguistics: ACL 2023, pages 4536–4554, Toronto, Canada, July 2023. Association for Computational Linguistics. doi:10.18653/v1/2023.findings-acl.277.
[31] Pengfei Liu, Weizhe Yuan, Jinlan Fu, Zheng-
bao Jiang, Hiroaki Hayashi, and Graham Neu-
big. Pre-train, Prompt, and Predict: A Sys-
tematic Survey of Prompting Methods in Nat-
ural Language Processing.
ACM Comput-
ing Surveys, 55(9):195:1–195:35, January 2023.
doi:10.1145/3560815.
[32] Yi Liu, Gelei Deng, Yuekang Li, Kailong Wang,
Tianwei Zhang, Yepang Liu, Haoyu Wang, Yan
Zheng, and Yang Liu. Prompt Injection at-
tack against LLM-integrated Applications, June
2023. doi:10.48550/arXiv.2306.05499.
[33] Yuchen Liu, Luigi Palmieri, Sebastian Koch, Ilche Georgievski, and Marco Aiello. DELTA: Decomposed Efficient Long-Term Robot Task Planning using Large Language Models. (arXiv:2404.03275), April 2024. doi:10.48550/arXiv.2404.03275.
[34] Yupei Liu, Yuqi Jia, Runpeng Geng, Jinyuan
Jia, and Neil Zhenqiang Gong. Prompt Injec-
tion Attacks and Defenses in LLM-Integrated
Applications. (arXiv:2310.12815), October 2023.
doi:10.48550/arXiv.2310.12815.
[35] Shaoguang Mao, Qiufeng Yin, Yuzhe Cai, and Dan Qiao. LowCodeLLM. https://github.com/chenfei-wu/TaskMatrix/tree/main/LowCodeLLM, May 2023.
[36] Scott McLean, Gemma J. M. Read, Jason
Thompson, Chris Baber, Neville A. Stanton, and
Paul M. Salmon. The risks associated with Ar-
tificial General Intelligence: A systematic re-
view. Journal of Experimental & Theoretical
Artificial Intelligence, 35(5):649–663, July 2023.
doi:10.1080/0952813X.2021.1964003.
[37] Oier Mees, Jessica Borja-Diaz, and Wolfram
Burgard. Grounding Language with Visual Af-
In 2023
fordances over Unstructured Data.
IEEE International Conference on Robotics
and Automation (ICRA), pages 11576–11582,
London, United Kingdom, May 2023. IEEE.
doi:10.1109/ICRA48891.2023.10160396.
[38] Grégoire Mialon, Roberto Dessì, Maria
Lomeli, Christoforos Nalmpantis, Ram Pa-
sunuru, Roberta Raileanu, Baptiste Rozière,
Timo Schick,
Jane Dwivedi-Yu, Asli Ce-
likyilmaz, Edouard Grave, Yann LeCun,
and Thomas Scialom.
Augmented Lan-
guage Models: A Survey, February 2023.
doi:10.48550/arXiv.2302.07842.
[39] Melanie Mitchell. Debates on the nature of artificial general intelligence. Science, 383(6689):eado7069, March 2024. doi:10.1126/science.ado7069.
[40] Quim Motger, Xavier Franch, and Jordi Marco. Software-Based Dialogue Systems: Survey, Taxonomy, and Challenges. ACM Computing Surveys, 55(5):91:1–91:42, December 2022. doi:10.1145/3527450.
[41] Fiona Fui-Hoon Nah, Ruilin Zheng, Jingyuan
Cai, Keng Siau, and Langtao Chen. Gen-
erative AI and ChatGPT: Applications, chal-
lenges, and AI-human collaboration.
Jour-
nal of Information Technology Case and Ap-
plication Research, 25(3):277–304, July 2023.
doi:10.1080/15228053.2023.2233814.
[42] Robert C. Nickerson, Upkar Varshney, and Jan Muntermann. A method for taxonomy development and its application in information systems. European Journal of Information Systems, 22(3):336–359, May 2013. doi:10.1057/ejis.2012.26.
[43] Camille Pack, Cern McAtee, Samantha Robertson, Dan Brown, Aditi Srivastava, and Kweku Ako-Adjei. Microsoft Copilot for Microsoft 365 overview. https://learn.microsoft.com/en-us/copilot/microsoft-365/microsoft-365-copilot-overview, March 2024.
[44] Chris Parnin, Gustavo Soares, Rahul Pandita, Sumit Gulwani, Jessica Rich, and Austin Z. Henley. Building Your Own Product Copilot: Challenges, Opportunities, and Needs. (arXiv:2312.14231), December 2023. doi:10.48550/arXiv.2312.14231.
[45] Rodrigo Pedro, Daniel Castro, Paulo Car-
From Prompt In-
reira, and Nuno Santos.
jections to SQL Injection Attacks: How Pro-
tected is Your LLM-Integrated Web Appli-
cation?
(arXiv:2308.01990), August 2023.
doi:10.48550/arXiv.2308.01990.
[46] Ken Peffers, Tuure Tuunanen, Marcus A.
Rothenberger, and Samir Chatterjee. A De-
sign Science Research Methodology for Infor-
mation Systems Research.
Journal of Man-
agement Information Systems, 24(3):45–77, De-
cember 2007.
ISSN 0742-1222, 1557-928X.
doi:10.2753/MIS0742-1222240302.
[47] Mohaimenul Azam Khan Raiaan, Md. Sad-
dam Hossain Mukta, Kaniz Fatema, Nur Mo-
hammad Fahad, Sadman Sakib, Most Mar-
Jubaer Ahmad, Mo-
ufatul Jannat Mim,
hammed Eunus Ali, and Sami Azam. A Review
on Large Language Models: Architectures, Ap-
plications, Taxonomies, Open Issues and Chal-
lenges. IEEE Access, 12:26839–26874, 2024. doi:10.1109/ACCESS.2024.3365742.
[48] Jack Daniel Rittelmeyer and Kurt Sandkuhl.
Morphological Box for AI Solutions: Evalua-
tion and Refinement with a Taxonomy Develop-
ment Method. In Knut Hinkelmann, Francisco J.
López-Pellicer, and Andrea Polini, editors, Per-
spectives in Business Informatics Research, Lec-
ture Notes in Business Information Process-
ing, pages 145–157, Cham, 2023. Springer Na-
ture Switzerland. doi:10.1007/978-3-031-43126-
5_11.
[49] Shubhra Kanti Karmaker Santu and Dongji Feng. TELeR: A General Taxonomy of LLM Prompts for Benchmarking Complex Tasks. (arXiv:2305.11430), October 2023. doi:10.48550/arXiv.2305.11430.
[50] Thorsten Schoormann, Frederik Möller, and
Daniel Szopinski. Exploring Purposes of Us-
In Proceedings of the Inter-
ing Taxonomies.
national Conference on Wirtschaftsinformatik
(WI), Nuernberg, Germany, February 2022.
[51] Ishika Singh, Valts Blukis, Arsalan Mousa-
vian, Ankit Goyal, Danfei Xu, Jonathan Trem-
blay, Dieter Fox, Jesse Thomason, and Ani-
mesh Garg. ProgPrompt: Generating Situated
Robot Task Plans using Large Language Mod-
els. In 2023 IEEE International Conference on
Robotics and Automation (ICRA), pages 11523–
11530, London, United Kingdom, May 2023.
IEEE. doi:10.1109/ICRA48891.2023.10161317.
[52] Gero Strobel, Leonardo Banh, Frederik Möller,
and Thorsten Schoormann. Exploring Gener-
ative Artificial Intelligence: A Taxonomy and
Types. In Proceedings of the 57th Hawaii Inter-
national Conference on System Sciences, Hon-
olulu, Hawaii, January 2024.
https://hdl.
handle.net/10125/106930.
[53] Hendrik Strobelt, Albert Webson, Victor Sanh,
Benjamin Hoover, Johanna Beyer, Hanspeter
Pfister, and Alexander M. Rush.
Interac-
tive and Visual Prompt Engineering for Ad-
hoc Task Adaptation With Large Language
Models.
IEEE Transactions on Visualization
and Computer Graphics, pages 1–11, 2022.
doi:10.1109/TVCG.2022.3209479.
[54] Daniel Szopinski, Thorsten Schoormann, and
Dennis Kundisch. Criteria as a Prelude for Guid-
ing Taxonomy Evaluation. In Proceedings of the
53rd Hawaii International Conference on Sys-
tem Sciences, 2020. https://hdl.handle.net/
10125/64364.
[55] Daniel Szopinski, Thorsten Schoormann, and Dennis Kundisch. Visualize different: Towards researching the fit between taxonomy visualizations and taxonomy tasks. In Tagungsband der 15. Internationalen Tagung Wirtschaftsinformatik (WI 2020), Potsdam, 2020. doi:10.30844/wi_2020_k9-szopinski.
[56] Manisha Thakkar and Nitin Pise. Unified Ap-
proach for Scalable Task-Oriented Dialogue Sys-
tem.
International Journal of Advanced Com-
puter Science and Applications, 15(4), 2024.
doi:10.14569/IJACSA.2024.01504108.
[57] Oguzhan Topsakal and Tahir Cetin Akinci. Cre-
ating Large Language Model Applications Uti-
lizing Langchain: A Primer on Developing LLM
Apps Fast.
In International Conference on
Applied Engineering and Natural Sciences, vol-
ume 1, pages 1050–1056, 2023.
[58] Michael Unterkalmsteiner and Waleed Adbeen.
A compendium and evaluation of taxonomy
quality attributes.
Expert Systems, 40(1):
e13098, 2023. doi:10.1111/exsy.13098.
[59] Bryan Wang, Gang Li, and Yang Li. Enabling Conversational Interaction with Mobile UI using Large Language Models. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, CHI ’23, pages 1–17, New York, NY, USA, April 2023. Association for Computing Machinery. doi:10.1145/3544548.3580895.
[61] Jun Wang, Guocheng He, and Yiannis Kantaros. Safe Task Planning for Language-Instructed Multi-Robot Systems using Conformal Prediction. (arXiv:2402.15368), February 2024. doi:10.48550/arXiv.2402.15368.
[62] Lei Wang, Chen Ma, Xueyang Feng, Zeyu
Zhang, Hao Yang, Jingsen Zhang, Zhiyuan
Chen, Jiakai Tang, Xu Chen, Yankai Lin,
Wayne Xin Zhao, Zhewei Wei, and Jirong
Wen.
A survey on large language model
based autonomous agents. Frontiers of Com-
puter Science,
18(6):186345, March 2024.
doi:10.1007/s11704-024-40231-1.
[63] Shu Wang, Muzhi Han, Ziyuan Jiao, Zeyu
Zhang, Ying Nian Wu, Song-Chun Zhu, and
Hangxin Liu. LLM3:Large Language Model-
based Task and Motion Planning with Motion
Failure Reasoning.
(arXiv:2403.11552), March
2024. doi:10.48550/arXiv.2403.11552.
[64] Hao Wen, Yuanchun Li, Guohong Liu, Shan-
hui Zhao, Tao Yu, Toby Jia-Jun Li, Shiqi Jiang,
Yunhao Liu, Yaqin Zhang, and Yunxin Liu. Em-
powering LLM to use Smartphone for Intelligent
Task Automation. (arXiv:2308.15272), Septem-
ber 2023. doi:10.48550/arXiv.2308.15272.
[65] Hao Wen, Yuanchun Li, and Sean KiteFly-
Kid. MobileLLM/AutoDroid. Mobile LLM, Jan-
uary 2024. https://github.com/MobileLLM/
AutoDroid.
[66] Jules White, Quchen Fu, Sam Hays, Michael Sandborn, Carlos Olea, Henry Gilbert, Ashraf Elnashar, Jesse Spencer-Smith, and Douglas C. Schmidt. A Prompt Pattern Catalog to Enhance Prompt Engineering with ChatGPT. (arXiv:2302.11382), February 2023. doi:10.48550/arXiv.2302.11382.
[60] Can Wang, Bolin Zhang, Dianbo Sui, Zhiying Tu, Xiaoyu Liu, and Jiabao Kang. A Survey on Effective Invocation Methods of Massive LLM Services. (arXiv:2402.03408), February 2024. doi:10.48550/arXiv.2402.03408.
[67] Tongshuang Wu, Michael Terry, and Car-
rie Jun Cai. AI Chains: Transparent and
Controllable Human-AI Interaction by Chain-
ing Large Language Model Prompts.
In
Proceedings of
the 2022 CHI Conference on
Human Factors in Computing Systems, CHI
’22, pages 1–22, New York, NY, USA, April
2022. Association for Computing Machinery.
doi:10.1145/3491102.3517582.
[68] Congying Xia, Chen Xing, Jiangshu Du, Xinyi
Yang, Yihao Feng, Ran Xu, Wenpeng Yin,
and Caiming Xiong.
FOFO: A Benchmark
to Evaluate LLMs’ Format-Following Capa-
bility.
(arXiv:2402.18667), February 2024.
doi:10.48550/arXiv.2402.18667.
[69] Yuchen Xia, Manthan Shenoy, Nasser Jazdi,
and Michael Weyrich. Towards autonomous
system:
Flexible modular production sys-
language model
tem enhanced with large
agents. In 2023 IEEE 28th International Con-
ference on Emerging Technologies and Fac-
tory Automation (ETFA), pages 1–8, 2023.
doi:10.1109/ETFA54631.2023.10275362.
[70] I. de Zarzà, J. de Curtò, Gemma Roig,
and Carlos T. Calafate.
LLM Adaptive
PID Control for B5G Truck Platooning Sys-
tems.
Sensors, 23(13):5899, January 2023.
doi:10.3390/s23135899.
[71] Xiaoying Zhang, Baolin Peng, Kun Li, Jingyan
SGP-TOD: Build-
Zhou, and Helen Meng.
ing Task Bots Effortlessly via Schema-Guided
LLM Prompting. (arXiv:2305.09067), May 2023.
doi:10.48550/arXiv.2305.09067.
[72] Wayne Xin Zhao, Kun Zhou, Junyi Li, Tianyi
Tang, Xiaolei Wang, Yupeng Hou, Yingqian
Min, Beichen Zhang, Junjie Zhang, Zican Dong,
Yifan Du, Chen Yang, Yushuo Chen, Zhipeng
Chen, Jinhao Jiang, Ruiyang Ren, Yifan Li,
Xinyu Tang, Zikang Liu, Peiyu Liu, Jian-Yun
Nie, and Ji-Rong Wen. A Survey of Large Lan-
guage Models.
(arXiv:2303.18223), May 2023.
doi:10.48550/arXiv.2303.18223.
[73] Jeffrey Zhou, Tianjian Lu, Swaroop Mishra, Siddhartha Brahma, Sujoy Basu, Yi Luan, Denny Zhou, and Le Hou. Instruction-Following Evaluation for Large Language Models. (arXiv:2311.07911), November 2023. doi:10.48550/arXiv.2311.07911.
|
synthetic_cpt | 4 | Advancing_Large_Language_Model_Attribution_through_Self-Improving.pdf | Advancing Large Language Model Attribution through Self-Improving
Lei Huang1, Xiaocheng Feng1,2*, Weitao Ma1, Liang Zhao1, Yuchun Fan3,
Weihong Zhong1, Dongliang Xu4, Qing Yang4, Hongtao Liu4, Bing Qin1,2
1 Harbin Institute of Technology, Harbin, China
2 Peng Cheng Laboratory, Shenzhen, China
3 Northeastern University, Shenyang, China
4 Du Xiaoman Science Technology Co., Ltd., Beijing, China
{lhuang, xcfeng, wtma, lzhao, whzhong, qinb}@ir.hit.edu.cn
[email protected]
{xudongliang, yangqing, liuhongtao01}@duxiaoman.com
Abstract
Teaching large language models (LLMs) to gen-
erate text with citations to evidence sources
can mitigate hallucinations and enhance verifi-
ability in information-seeking systems. How-
ever, improving this capability requires high-
quality attribution data, which is costly and
labor-intensive. Inspired by recent advances
in self-improvement that enhance LLMs with-
out manual annotation, we present START, a
Self-Taught AttRibuTion framework for iter-
atively improving the attribution capability of
LLMs. First, to prevent models from stagnating
due to initially insufficient supervision signals,
START leverages the model to self-construct
synthetic training data for warming up. To
further improve the model’s attribution abil-
ity, START iteratively utilizes fine-grained pref-
erence supervision signals constructed from
its sampled responses to encourage robust,
comprehensive, and attributable generation.
Experiments on three open-domain question-
answering datasets, covering long-form QA
and multi-step reasoning, demonstrate signif-
icant performance gains of 25.13% on aver-
age without relying on human annotations and
more advanced models. Further analysis re-
veals that START excels in aggregating infor-
mation across multiple sources.
1
Introduction
The rapid development of large language models
(LLMs) (OpenAI, 2023; Zhao et al., 2023) has led
to their prosperity as indispensable tools for infor-
mation seeking. Despite their remarkable capabil-
ity to generate fluent and informative responses to
user queries, LLMs also struggle with hallucina-
tions (Huang et al., 2023). To facilitate factuality
verification, recent research (Bohnet et al., 2022)
has explored attributed text generation, a paradigm
that enables LLMs to generate responses with cita-
tions. By attributing models’ output to verifiable
sources, it can improve the explainability and cred-
ibility of LLM-generated content (Li et al., 2023).
While beneficial, the ability to attribute con-
textual sources is not inherent in LLMs. Most
work induces LLMs to generate text with citations
via in-context learning (Gao et al., 2023), which
is far from satisfactory (Liu et al., 2023). The
current winning recipe for accurate attribution in-
volves fine-tuning on high-quality attribution re-
sponses1 (Li et al., 2024). However, acquiring
such data typically requires either manual cura-
tion (Malaviya et al., 2023), or distilled from the
most advanced LLMs (Huang et al., 2024a,b), both
of which are costly and not scalable, thus limit-
ing the growth of models’ attribution capability.
One promising solution is self-improvement (Yuan
et al., 2023), which has demonstrated the poten-
tial to boost model performance by learning from
self-generated high-quality samples.
Inspired by this, we aim to explore the poten-
tial of self-improvement in bootstrapping the at-
tribution ability of LLMs. However, achieving
this goal presents several challenges. One sig-
nificant challenge lies in the risk of model stag-
nation during the self-improvement process, pri-
marily due to the insufficient supervision signals
obtained in the early stage. Concretely, consider-
ing the inferior performance of LLMs in handling
the attribution task (Gao et al., 2023), generating
sufficient high-quality attribution responses solely
through sampling proves difficult. This scarcity of
high-quality samples limits the opportunities for
LLMs to self-improve effectively. Another chal-
lenge stems from the limitation of weak supervi-
sion signals. Current self-improvement approaches
(Yuan et al., 2023) primarily involve supervised
fine-tuning on high-quality samples while discard-
ing low-quality ones. When applied to LLM attribu-
*Corresponding Author
1Attribution responses refer to “responses with in-line citations, e.g., [1][2]”.
tion, these high-quality samples provide only weak
supervision signals, mainly teaching LLMs on the
surface form of attribution (e.g., proper citation
format) (Li et al., 2024). Such practice may ne-
glect the potential of exploring fine-grained signals
from low-quality samples to learn what constitutes
a desirable attribution response.
To address these challenges, we present START,
a Self-Taught AttRibuTion framework designed
to bootstrap the attribution capabilities of LLMs.
To prevent models from stagnating early due to
insufficient supervision signals, we first leverage
the model to self-construct high-quality synthetic
attribution data (§3.1). The data synthesis process
follows reverse attribution thinking:
the model
initially generates a response to a given query,
then breaks it into atomic claims, and finally ran-
domly combines them to create synthetic docu-
ments. This process not only simulates multi-
source information-seeking scenarios but also en-
sures precise attribution, as each document can be
directly traced back to the specific claim it origi-
nated from. These high-quality synthetic data are
then utilized for warming up, providing a good
starting point for LLMs to self-improve. Further-
more, to better explore fine-grained supervision
signals for LLM attribution, we introduce an itera-
tive self-improving recipe (§3.2). Specifically, the
framework meticulously designs fine-grained re-
wards tailored for LLM attribution, covering robust-
ness, comprehensiveness, and attributability. By
scoring multiple candidates through sampling and
selecting those with the highest holistic rewards
for supervised fine-tuning, the framework subse-
quently utilizes low-quality samples to construct
fine-grained preference pairs with diverse optimiza-
tion rewards for preference optimization. This iter-
ative process further fosters the self-improvement
of attribution capabilities.
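As a rough illustration of this selection step (not code from this work; the reward functions are placeholders for the robustness, comprehensiveness, and attributability rewards defined later, and the preference-pair construction is simplified), the procedure can be sketched as:

# Hypothetical sketch of reward-guided candidate selection; reward functions are placeholders.
def select_candidates(candidates, attr_score, robust_score, compre_score):
    def holistic(y):
        # Only fully attributable candidates receive a non-zero holistic reward.
        return (attr_score(y) == 1.0) * compre_score(y) / robust_score(y)
    best = max(candidates, key=holistic)
    # Remaining candidates become the rejected side of fine-grained preference pairs.
    preference_pairs = [(best, y) for y in candidates if y is not best]
    return best, preference_pairs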
We conduct extensive experiments across three
open-domain question-answering datasets, cover-
ing long-form QA and multi-step reasoning. Re-
sults indicate that START achieves significant per-
formance gains of 25.13% on average in citation
quality. Moreover, START successfully achieves
self-improvement in LLM attribution, showing pro-
gressive improvements across iterations. Ablation
studies confirm that each component significantly
contributes to the improvement. Further analysis
shows that START not only excels in generating su-
perior attributable responses but also in effectively
aggregating information across multiple sources.
2 Related Work
2.1 Large Language Model Attribution
Attribution has gained significant attention for en-
hancing the interpretability and verifiability of
LLMs (Gao et al., 2023; Li et al., 2023). Recent
studies have focused on improving LLM attribu-
tion in a supervised way. Asai et al. (2023) first
distill GPT-4 to collect high-quality attribution data,
aiming to teach the model to generate grounded an-
swers with citations through self-reflecting. Simi-
larly, Huang et al. (2024a) develop a training frame-
work starting with distilling ChatGPT, followed
by designing reward models to teach the LLM to
generate highly supportive and relevant citations.
Additionally, Li et al. (2024) model the attribution
task from a preference learning perspective, where
they first fine-tune the model on human-labeled at-
tribution datasets and then perform preference op-
timization using synthesized preference data. Fur-
thermore, Huang et al. (2024b) take this further by
extending the attribution format to a fine-grained
citation level, primarily distilled from ChatGPT. It
enables the model to first ground the fine-grained
quotes within the context and then condition the
generation process on them. In contrast to these
methods, START aims to bootstrap attribution ca-
pability without relying on human-labeled data or
distilling from more capable LLMs.
2.2 Self-Improvement for LLMs
High-quality data either human-crafted or distilled
from advanced LLMs has proven effective in en-
hancing the performance of LLMs. However, ac-
quiring such high-quality data can be prohibitively
expensive. Recently, self-improvement approaches
(Gülçehre et al., 2023; Yuan et al., 2024), where
LLMs learn from self-generated samples have
emerged as a viable solution to compensate for
the scarcity of high-quality data. These methods
typically involve employing heuristic rules (Zelik-
man et al., 2022), self-critique (Tian et al., 2024), or
training additional verifiers (Hosseini et al., 2024)
to assess the quality of model-generated samples.
Such practices are particularly effective in rea-
soning tasks, e.g., mathematical reasoning, where
LLMs already demonstrate capable abilities and
can receive precise feedback on correctness. How-
ever, these advantages are absent in the attribution
task, due to its challenging nature. To bridge the
gap, we take an initial step towards exploring the
potential of self-improvement in LLM attribution.
Figure 1: The data synthesis pipeline consists of five steps: given a user query, the LLM first generates an informative
response without citations in a closed-book setting. Subsequently, the LLM decomposes this response into atomic
claims. These claims are then randomly grouped into specific sets, which serve as the basis for generating
documents that cover all included claims. Finally, we trace back to the initial response to relabel the citations.
3 Problem Formulation and Methodology
We follow a formulation of attributed text gener-
ation as described in Gao et al. (2023). This task
involves processing a user query q for information-
seeking, given a corpus of retrieved documents
D, to generate a response S with in-line cita-
tions. We assume the response S as consisting
of n statements, such that S = {s1, s2, . . . , sn}.
Each statement si ∈ S cites a list of passage
Ci = {ci1, ci2, . . .}, where cij ∈ D. Citations are
presented in the form of [1][2], which represent
the attribution to specific documents in D.
Next, we present an overview of START, a train-
ing framework designed to teach LLMs to self-
improve their attribution ability, as illustrated in
Figure 2. START consists of two essential stages:
synthetic data warm-up (§3.1) and self-improving
for LLM attribution (§3.2).
3.1 Synthetic Data Warm-Up
The core of self-improvement lies in generating
high-quality samples and iteratively learning from
Intuitively, a high-quality attribution re-
them.
sponse should not be distracted by irrelevant doc-
uments (robustness) and capture high coverage of
viewpoints across multiple documents (comprehen-
siveness) while maintaining high citation quality
(attributability). However, existing LLMs typically
show inferior performance in the attribution task,
significantly hindering their ability to generate such
high-quality samples. This limitation poses sub-
stantial challenges to enhancing their attribution
capabilities through self-improvement.
ing up, enabling the model to have the basic ability
to generate robust, comprehensive, and attributable
responses across multiple sources. The pipeline
consists of the following steps, shown in Figure 1.
More details can be found in Appendix A.
Step 1: Response Generation Given an arbitrary
model, we first sample a query q from seed ques-
tions Q and then generate a long-form answer S
utilizing the parametric knowledge of the model it-
self. The model is required to produce informative
answers that cover multiple perspectives.
Step 2: Claim Decomposition Prior work (Min
et al., 2023) has explored using atomic claims as
a fundamental unit in long-form text generation.
Thus, for the response S, we ask the model to
decompose it into atomic claims. Each atomic
claim represents a distinct piece of information.
Step 3: Claim Combination To ensure that the
response behaves as an aggregation of information
from multiple documents, we randomly combine
different claims into one claim set. This process
helps simulate the natural diversity of viewpoints
and sources, thus enhancing the comprehensiveness
and realism of the synthesized responses.
Step 4: Document Generation For each claim
set, we prompt the model to generate a synthetic
document D that provides a comprehensive dis-
cussion of the grouped claims. Additionally, to
enhance the robustness of the response, we intro-
duce irrelevant documents by uniformly sampling
documents generated from other queries.
In this stage, we propose utilizing the model to
self-construct high-quality synthetic data for warm-
Step 5: Attribution Relabel The final step in-
volves labeling the response with citations from
What is the difference between freshwater and potable water?Fresh water refers to water that is not salty orbrackish [1][2]. It may be unsuitable fordrinking without treatment [1]. Potable water,on the other hand, is water that is safe andsuitable for human consumption [2][3]. SeedQuestionsResponseQueryNote: "Step1 does notgenerate citations"Step1: Response GenerationStep2: Claim DecompositionStep3: Claim CombinationStep4: Document GenerationResponseInstruction: Decompose theresponse into claimsFew-shotexamplesFreshwater refers to water that is notsalty or brackish.Freshwater may be unsuitable fordrinking without treatment.Atomic ClaimsPotable water is safe and suitable forhuman consumption.Claim 1Claim 2Claim 3NoisyClaimClaimSet 2ClaimSet 1ClaimSet 3Random CombinationStep5: Attribution RelabelFreshwater, which includes (...), ischaracterized by its low salt content, (...)it unsafe for direct human consumption...Document 1Document 2Document 3123Add CitationsFreshwater, distinguished by its lowsalinity (...) suitable for humanconsumption, preventing health issues...Potable water is deemed safe for humanconsumption as it meets health standardsestablished by global health authorities...Claim-to-DocumentGenerationFigure 2: Overview of our self-improving framework, which consists of two stages. The model is first warmed
up using synthetic data (§3.1). This provides a good starting point to enable the model to generate high-quality
samples in the subsequent iterative training. Next, the model is further trained via rejection sampling fine-tuning
and fine-grained preference optimization iteratively (§3.2). This iterative process bootstraps the model’s attribution
capability by fully utilizing the supervision signals from its sampled generations.
the generated documents. This process ensures
that each claim within the response is explicitly
attributed to its source. In this way, for each query
q, and documents set D, we can obtain an infor-
mative and attributable response while maintaining
robustness against irrelevant documents.
Next, the model is fine-tuned for warming up
with the MLE objective on the synthesized dataset,
which consists of N data entries, each containing
a query qi, a document set Di, and a high-quality
attributable response yi:
L = −
N
(cid:88)
i=1
log P (yi|qi, Di; θ)
(1)
3.2 Self-Improving for LLM Attribution
In this stage, we propose to iteratively boost the
model’s attribution capability by exploring more
fine-grained supervision signals, rather than solely
relying on golden responses in synthetic data. This
involves leveraging rejection sampling for data
growing and fine-grained preference optimization
for capability evolution.
3.2.1 Rejection Sampling Fine-tuning
After warming up, we first sample N candidates
for each query in the synthetic dataset and then
score each candidate with fine-grained rewards that
cover three key dimensions: robustness, compre-
hensiveness, and attributability.
Attributability serves as the indispensable con-
dition for high-quality attributable generation. It
quantifies the extent to which a response is fully
supported by the cited documents. To accurately
measure attributability, we employ an off-the-
shelf Natural Language Inference (NLI) model2 by
checking whether each statement in the response is
entailed by the corresponding cited documents.
AttrScore =
1
S
i=1
S
(cid:88)
Entail(Docs, statementi)
(2)
where S is the total number of statements in the
response and Entail returns 1 if the statement i is
entailed by cited documents, and 0 otherwise.
Robustness measures the degree to which a
model-generated response is influenced by irrel-
evant contexts. Considering that we can identify
relevant documents dr within the document set D
for each query q, thus we quantify robustness by
calculating the probability difference of the model
M to generate the response y under different con-
texts. The robustness score is defined as follows:
RobustScore =
PM (y | q ⊕ dr)
PM (y | q ⊕ D)
(3)
Empirically, the closer the score is to 1, the less the
response is disturbed by irrelevant documents.
Comprehensiveness measures the extent
to
which a response captures all relevant informa-
tion from the source documents. As the golden
responses in the synthetic data are designed to
2huggingface.co/google/t5_xxl_true_nli_mixture
Rejection Sampling Fine-TuningFine-grained Preference OptimizationWhy do leaves change color in the fall?Sampling(N=16)As temperatures drop in fall, chlorophyll productiondecreases [1]. Anthocyanins become visible [3]. (...) colorleaves yellow, orange, red, and purple [1][3].Robust: 1.03Warm-UpSupervisedFine-tuningSynthetic DataQueryAttributionresponseIterativelyAttributability RewardComprehensiveness RewardLeaves change color in fall (...) to become visible,showcasing yellows and oranges primarily due tocarotenoids.The vehicles' increased awareness could aid thepolice (...) The future of passenger rail transportin the era of automated cars is not clear. Without chlorophyll, red and purple pigments calledanthocyanins become visible (...)purples seen inspecies like maples during autumn.SyntheticDocsWarmed upLLMAttributable: 5/5Comprehensive: 4/5123Robust: 2.83Attributable: 1/5Comprehensive: 2/5...In the fall, shorter days and cooler temperatures trigger areduction in chlorophyll production[1]. (...) It makes thefuture of passenger rail transport not clear [2].noisySupervised fine-tuningattributablecomprehensiveattributablecomprehensiveDirect PreferenceOptimizationPreferenceDataattributablecomprehensiveattributablecomprehensiveaggregate and reflect information across multi-
documents, thus we quantify comprehensiveness
by decomposing them into sub-claims and verify-
ing whether these claims are covered by the sam-
pled generation y. We compute the score as below:
ˆrθ(x, y) = β log
πθ(y | x)
πref (y | x)
(6)
Here, reference model πref is initialized with
the model after rejection sampling to minimize the
distribution shift from the reference distribution.
CompreScore =
1
C
C
(cid:88)
i=1
Entail(claimi, y)
(4)
4 Experiments
4.1 Datasets
where claimi represents sub-claims and C is the
number of golden sub-claims.
Subsequently, we formulate a holistic reward
function (Eq. 5) considering the above dimensions.
This function is employed to rank generated candi-
dates, with the top-ranked candidate being selected
for further supervised fine-tuning.
Reward = I(AttrScore) ×
CompreScore
RobustScore
(5)
Here, I is an indicator function that returns 1 if
AttrScore = 1, and 0 otherwise.
3.2.2 Fine-grained Preference Optimization
The common way of self-improvement focuses
on updating the model with high-quality samples
while discarding low-quality ones. For LLM attri-
bution, simply supervised fine-tuning with highly
attributable responses only teaches the LLM to
learn surface characteristics of attribution, e.g., the
correct form of citation. Inspired by human cog-
nition, learning from mistakes provides more fine-
grained signals to understand the mechanisms that
drive successful attribution than simply imitating
correct examples. Thus, we aim to fully unlock the
potential of low-quality samples by constructing
fine-grained preference pairs with different opti-
mization rewards for preference optimization.
Given the multi-objective nature of LLM attri-
bution, our focus is specifically on attributability
and comprehensiveness, utilizing the corresponding
reward functions to construct preference data re-
spectively3. Specifically, we pair samples that ex-
hibit high attributability but low comprehensive-
ness with the top-ranked sample selected using a
holistic reward, and vice versa. These preference
pairs, each addressing different optimization objec-
tives, are then aggregated to further train the LLM
via DPO (Rafailov et al., 2023):
\mathcal{L}_{\text{DPO}} = -\mathbb{E}\left[\log \sigma\left(\hat{r}_\theta(x, y^{+}) - \hat{r}_\theta(x, y^{-})\right)\right]
where the implicit reward is
\hat{r}_\theta(x, y) = \beta \log \frac{\pi_\theta(y \mid x)}{\pi_{\text{ref}}(y \mid x)} \quad (6)
Here, the reference model \pi_{\text{ref}} is initialized with
the model after rejection sampling to minimize the
distribution shift from the reference distribution.
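A sketch of the fine-grained preference-pair construction and the DPO objective in plain PyTorch; the pairing thresholds and the β value are illustrative assumptions, and the log-probabilities are assumed to be summed over response tokens as in the robustness sketch above.

```python
import torch.nn.functional as F

def build_preference_pairs(candidates, scores, top_ranked):
    """scores[i] = (attr, compre) for candidates[i]; thresholds are illustrative assumptions.
    The holistically top-ranked sample is always treated as the preferred response."""
    pairs = []
    for cand, (attr, compre) in zip(candidates, scores):
        if cand == top_ranked:
            continue
        attributable_not_comprehensive = attr == 1.0 and compre < 0.8
        comprehensive_not_attributable = compre >= 0.8 and attr < 1.0
        if attributable_not_comprehensive or comprehensive_not_attributable:
            pairs.append((top_ranked, cand))          # (preferred y+, dispreferred y-)
    return pairs

def dpo_loss(policy_lp_pos, policy_lp_neg, ref_lp_pos, ref_lp_neg, beta=0.1):
    """DPO loss with the implicit reward r_hat = beta * (log pi_theta - log pi_ref).
    Inputs are tensors of summed response log-probabilities; beta = 0.1 is an assumed value."""
    r_pos = beta * (policy_lp_pos - ref_lp_pos)
    r_neg = beta * (policy_lp_neg - ref_lp_neg)
    return -F.logsigmoid(r_pos - r_neg).mean()
```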
3We do not optimize separately for robustness as the model
already shows sufficient robustness after rejection sampling
fine-tuning.
4 Experiments
4.1 Datasets
Following previous work (Ye et al., 2023; Li et al.,
2024), we conduct our experiments using two long-
form question-answering datasets: ASQA (Stel-
makh et al., 2022) and ELI5 (Fan et al., 2019), as
well as a multi-step reasoning dataset, StrategyQA
(Geva et al., 2021). Both ASQA and ELI5 feature
factoid long-form answers that require synthesiz-
ing highly relevant documents in response to a user
query. In StrategyQA, answers demand a combina-
tion of information-seeking and implicit reasoning.
Further details on the data statistics, knowledge
corpus used for retrieval, and examples for each
dataset are provided in Appendix B.
4.2 Evaluation
Following previous research (Gao et al., 2023), we
evaluate model-generated responses mainly on two
dimensions: Citation Quality and Correctness.
Our evaluation methodology combines both auto-
mated metrics and human evaluation.
Automatic Evaluation. To assess citation qual-
ity, we calculate the citation precision, citation
recall, and its harmonic mean citation F1 based
on the definition in Gao et al. (2023). We use
TRUE (Honovich et al., 2022), a T5-11B model
fine-tuned on a collection of natural language infer-
ence (NLI) datasets to examine whether the cited
documents entail the generated statement. For cor-
rectness, different datasets are measured differently.
For ASQA, we report the exact match recall (EM
Rec.) of correct short answers. For ELI5, we re-
port the claim recall (Claim) by checking whether
the model output entails the sub-claims generated
by text-davinci-003. For StrategyQA, where
answers begin with yes/no, we evaluate
correctness by reporting the accuracy (Acc.). See
Appendix C for more details.
Human Evaluation. We collected a total of
150 instances from the test sets of ASQA, ELI5,
and StrategyQA for human evaluation, with each
dataset providing 10 instances from five different
systems. The evaluation is divided into two parts:
Model | ASQA: EM Rec. | ASQA: Cit. Rec. | ASQA: Cit. Prec. | ASQA: Cit. F1 | ELI5: Claim | ELI5: Cit. Rec. | ELI5: Cit. Prec. | ELI5: Cit. F1 | StrategyQA: Acc. | StrategyQA: Cit. Rec. | StrategyQA: Cit. Prec. | StrategyQA: Cit. F1
In-context Learning & Post-hoc
Llama-2-13B (ICL) | 35.2 | 38.4 | 39.4 | 38.9 | 13.4 | 17.3 | 15.8 | 16.5 | 65.6 | 20.6 | 33.1 | 25.4
Llama-2-13B (PostAttr) | 25.0 | 23.6 | 23.6 | 23.6 | 7.1 | 5.7 | 5.8 | 5.8 | 64.3 | 8.7 | 8.7 | 8.7
Training-based
Distill-Llama-3-70B-Instruct | 41.1 | 60.4 | 53.8 | 56.9 | 12.9 | 28.7 | 25.2 | 26.8 | 70.8 | 28.4 | 30.7 | 29.5
Distill-Mixtral-8x7B-Instruct | 40.3 | 64.9 | 63.5 | 64.2 | 13.8 | 34.3 | 35.0 | 34.6 | 63.9 | 38.4 | 49.2 | 43.1
Self-RAG (Asai et al., 2023) | 31.7 | 70.3 | 71.3 | 70.8 | 10.7 | 20.8 | 22.5 | 21.6 | 62.1 | 31.4 | 36.5 | 33.8
AGREE (Ye et al., 2023) | 39.4 | 64.0 | 66.8 | 65.4 | 9.4 | 21.6 | 16.0 | 18.4 | 64.6 | 30.2 | 37.2 | 33.3
APO (Li et al., 2024) | 40.5 | 72.8 | 69.6 | 71.2 | 13.5 | 26.0 | 24.5 | 25.2 | 61.8 | 40.0 | 39.1 | 39.6
FGR (Huang et al., 2024a) | 38.7 | 73.5 | 74.7 | 74.1 | 9.8 | 53.1 | 55.9 | 54.5 | 64.9 | 29.5 | 42.4 | 34.8
START (Warming-up) | 39.2 | 23.2 | 23.9 | 23.5 | 11.9 | 9.9 | 10.2 | 10.0 | 61.2 | 9.4 | 9.6 | 9.5
START (Iteration 1) | 42.2 | 68.8 | 75.6 | 72.0 | 11.3 | 47.4 | 50.5 | 48.9 | 73.4 | 44.4 | 48.6 | 46.4
START (Iteration 2) | 42.9 | 76.1 | 81.0 | 78.5 | 10.0 | 65.6 | 65.1 | 65.3 | 72.7 | 51.9 | 54.1 | 53.0
START (Iteration 3) | 44.2 | 76.2 | 84.2 | 80.0 | 9.6 | 62.4 | 69.1 | 65.6 | 69.6 | 60.0 | 56.6 | 58.2
Table 1: Main result between our method and baselines. Experiments are evaluated on ASQA, ELI5, and StrategyQA
datasets. For most baselines, we use the result of previous works (Asai et al., 2023; Ye et al., 2023; Li et al., 2024).
citation quality and overall quality (comprehensive-
ness and correctness). More details in Appendix D.
4.3 Baselines
We compare START with the following baselines.
For more details, please refer to Appendix E.
In-context Learning (ICL). Following Gao et al.
(2023), we enable the LLM to generate citations
via in-context learning. For each query, we first
retrieve five relevant documents and then prompt
the LLM with two-shot demonstrations.
Post-hoc Attribution (PostAttr). Following Ye
et al. (2023), given a query, we first instruct the
LLM to generate an initial response leveraging its
parametric knowledge. For each statement in the
response, we use the NLI model4 to find the maxi-
mally supported document and cite accordingly.
Training-based Methods. Training on high-
quality data serves as a strong baseline to unlock
the attribution ability of LLMs. We consider the
following training-based methods.
Knowledge Distillation employs the most capa-
ble LLMs, e.g., Llama-3-70B-Instruct and Mixtral-
8x7B-Instruct, as teacher models to train a student
model on distilled attribution data.
Self-RAG (Asai et al., 2023) first collects data
distilled from GPT-4, then teaches the LLM to re-
trieve on demand while reflecting on its generation
to improve both generation quality and attribution.
AGREE (Ye et al., 2023) trains the LLM to self-
ground its response in retrieved documents using
4We use the same NLI model during citation evaluation.
automatically collected data and then leverages test-
time adaptation to reinforce unverified statements.
APO (Li et al., 2024) models LLM attribu-
tion as a preference learning task: the model is
first supervised fine-tuned on human-labeled high-
quality data, and preference data is then automatically
collected for preference optimization.
FGR (Huang et al., 2024a) first collects attribu-
tion data distilled from ChatGPT and then designs
rewards tailored for LLM attribution to teach the
LLM to generate supportive and relevant citations.
4.4 Implementation Details
For a fair comparison, all training-based baselines
and START employ Llama-2-13b-base (Touvron
et al., 2023). Further details on the implementation
of START are presented in Appendix F.
5 Results
5.1 Main Results
We provide the main results and the performance
of START across different iterations in Table 1.
START effectively improves performance. As
shown in Table 1, START shows superior perfor-
mance across three datasets and achieves state-
of-the-art results in citation quality. Specifically,
START shows significant improvements over both
ICL and Post-hoc approaches, highlighting the
benefits of supervised signals in unlocking the
attribution ability of LLMs. Notably, compared
with methods that rely on distilling from more ad-
vanced LLMs or training on human-annotated data,
START achieves performance improvement of at
Model | ASQA: EM Rec. | ASQA: Cit. Rec. | ASQA: Cit. Prec. | ASQA: Cit. F1 | ELI5: Claim | ELI5: Cit. Rec. | ELI5: Cit. Prec. | ELI5: Cit. F1 | StrategyQA: Acc. | StrategyQA: Cit. Rec. | StrategyQA: Cit. Prec. | StrategyQA: Cit. F1
START (Iteration 1) | 42.2 | 68.8 | 75.6 | 72.0 | 11.3 | 47.4 | 50.5 | 48.9 | 73.4 | 44.4 | 48.6 | 46.4
w/o. warm-up | 35.7 | 36.3 | 32.7 | 34.4 | 12.1 | 15.2 | 13.7 | 14.4 | 65.9 | 18.0 | 17.2 | 17.6
w/o. preference | 40.6 | 42.2 | 47.2 | 44.6 | 12.9 | 16.5 | 17.4 | 16.9 | 63.7 | 21.5 | 24.6 | 22.9
START (Iteration 2) | 42.9 | 76.1 | 81.0 | 78.5 | 10.0 | 65.6 | 65.1 | 65.3 | 72.7 | 51.9 | 54.1 | 53.0
w/o. warm-up | 33.5 | 57.4 | 52.1 | 54.6 | 10.0 | 26.7 | 23.0 | 24.7 | 69.0 | 32.4 | 33.2 | 32.8
w/o. preference | 39.8 | 50.8 | 53.6 | 52.2 | 12.5 | 22.5 | 23.3 | 22.9 | 65.7 | 27.2 | 30.4 | 28.7
START (Iteration 3) | 44.2 | 76.2 | 84.2 | 80.0 | 9.6 | 62.4 | 69.1 | 65.6 | 69.6 | 60.0 | 56.6 | 58.2
w/o. warm-up | 28.6 | 67.3 | 58.2 | 62.4 | 6.4 | 46.8 | 38.4 | 42.2 | 70.4 | 44.9 | 39.2 | 41.9
w/o. preference | 40.7 | 55.7 | 58.3 | 57.0 | 11.9 | 25.3 | 26.2 | 25.7 | 67.8 | 31.3 | 33.5 | 32.4
Table 2: Ablation study results across three datasets over three iterations. We compare START with two variants:
one that does not utilize synthetic data for initial warming-up (w/o warm-up) and another lacking fine-grained
preference optimization for self-improvement (w/o preference).
Model | Iteration 1 | Iteration 2 | Iteration 3
START | 42.5% | 90.2% | 95.9%
w/o. warm-up | 3.24% | 41.2% | 83.8%
Table 3: The pass rate comparison between START and
START (w/o. warm-up) across different iterations during
the rejection sampling stage.
least 8.0%, 20.4%, and 47.0% in citation quality
for ASQA, ELI5, and StrategyQA respectively. Re-
garding correctness, START also achieves gains of
at least 9.1% and 7.2% on both ASQA and Strate-
gyQA, despite a slight decrease on ELI5.
START successfully achieves self-improvement.
We compare the performance of START from itera-
tion 0 to 3 in Table 1, and the results demonstrate
consistent improvements across iterations. Initially,
at iteration 0 (after warm-up), thanks to the syn-
thetic training data, the model shows decent per-
formance after warm-up. By iteration 1, START
exhibits remarkable effectiveness in improving its
performance by leveraging its own generated sam-
ples (e.g., 23.5 → 72.0 on ASQA, 10.0 → 48.9 on
ELI5, 9.5 → 46.4 on StrategyQA). Subsequent it-
erations continue this trend of incremental improve-
ment, reaching a convergence point at iteration 3.
5.2 Ablation Study and Analysis
We conduct comprehensive ablation studies and
analyses to understand how each component in
START contributes to the significant improvement.
Effect of synthetic data warming-up. To
demonstrate the importance of utilizing synthetic
data for initial warm-up in START, we con-
duct a comparative ablation study employing
Llama-2-13b for self-improvement, omitting the
initial warm-up stage. Table 2 shows the ablation
results (w/o. warm-up) across three iterations. We
observe that omitting the initial warm-up stage can
lead to a significant performance drop in the first it-
eration. Additionally, as the iteration increases, the
performance of the model without warm-up shows
only modest improvements and remains substan-
tially inferior to the model that underwent warm-
up. Moreover, we also calculate the pass rate of
sampled response in each iteration as shown in Ta-
ble 3. The findings indicate that the model with
warm-up exhibits a higher pass rate in the first it-
eration, which allows the model to utilize more
supervised signals for self-improvement. These re-
sults suggest that warming up effectively facilitates
the bootstrapping of supervised data, thus prevent-
ing early model stagnation. It’s worth noting that
while the warm-up strategy effectively enriches the
model with supervision signals at an early stage, it
does not lead to noticeable improvements in cita-
tion quality, as shown in Table 1. We hypothesize
that this limitation stems from the inherent diffi-
culty LLMs face in synthesizing information from
multiple sources to generate comprehensive and
attributable responses solely through direct super-
vised fine-tuning.
Effect of fine-grained preference optimization.
To further understand the significance of fine-
grained preference optimization, we compare an
ablation of START that solely relies on high-quality
samples for iteratively supervised fine-tuning, dis-
carding low-quality samples for fine-grained pref-
erence optimization. As shown in Table 2, there
is a significant decline in performance when fine-
grained preference optimization is removed. This
highlights the effectiveness of START in fully un-
locking the potential of low-quality samples to en-
[Figure 3 plots: Citation-F1 (%) versus training epochs on the ASQA, ELI5, and StrategyQA datasets, with curves for ITER-0 and ITER-1.]
Figure 3: The impact of supervision signals from different stages (synthetic data vs. self-improvement) on
attribution performance across ASQA, ELI5, and StrategyQA. The blue line represents the model that undergoes
only supervised fine-tuning using synthetic data at iteration 0. The red line represents the model that first trains for
two epochs with synthetic data at iteration 0, followed by one iteration of self-improvement.
[Figure 4 plots: Citation F1 (%) and Correctness (%) versus synthetic data size (1k, 3k, 5k) on ASQA, ELI5, and StrategyQA.]
Figure 4: Ablation study on the effect of synthetic data
size on attribution and correctness performance. We
sample 1k, 3k, and 5k user queries for data synthesis.
hance attribution performance.
Effect of synthetic data size. We investigate the
effect of varying synthetic data sizes on the per-
formance of START. Figure 4 demonstrates their
effect on citation quality and correctness after three
iterations of self-improving. Specifically, we sam-
ple 1k, 3k, and 5k unlabeled queries to generate
synthetic training data accordingly, which provides
different levels of supervision signals. As shown in
Figure 4, even with 1k synthetic data points, START
demonstrates comparable performance. Moreover,
as the training size increases, START achieves no-
table improvement in citation quality and exhibits
stability in correctness.
Supervision signals from synthetic data vs. iter-
ative self-improvement. We further investigate
the differential impact of supervision signals de-
rived from data synthesis versus those from the
iterative self-improvement stage. We utilize syn-
thetic training data to train the model for multiple
epochs, extending up to 10 epochs, and compare
its performance to that of a model that undergoes
only the first iteration of self-improvement. As de-
picted in Figure 3, training with synthetic data dur-
ing the initial iteration yields minimal performance
gains. The attribution performance climbs slowly
Model | Attribution: Full | Attribution: Partial | Attribution: No | Overall Quality: Corr. | Overall Quality: Comp.
ChatGPT (ICL) | 68.5% | 22.1% | 9.4% | 3.6 | 4.4
Distill-Llama-3-70B-Instruct | 54.6% | 32.4% | 13.0% | 2.9 | 3.2
Self-RAG (Asai et al., 2023) | 45.7% | 27.5% | 26.8% | 2.4 | 2.1
FGR (Huang et al., 2024a) | 58.4% | 28.7% | 12.9% | 2.5 | 2.8
START (Ours) | 76.2% | 18.3% | 5.5% | 3.5 | 4.6
Table 4: Human evaluation results on attribution,
correctness (Corr.), and comprehensiveness (Comp.).
Bold numbers indicate the best performance, while “_”
indicates the second-best performance.
as training epochs increase and fails to surpass the
performance of the model after just one iteration
of self-improvement. This observation reveals the
importance of the supervision signals provided by
the model itself during self-improvement.
6 Human Evaluation
Human evaluation results, detailed in Table 4, in-
dicate that START generates significantly more at-
tributable responses compared to all baselines, even
surpassing ChatGPT5. Specifically, 76.2% of the
statements generated by START are fully supported
by the cited documents, which outperforms Chat-
GPT by 11.24%. Additionally, 18.3% of the state-
ments are partially supported, with only 5.5% un-
supported. In terms of factuality, START outper-
forms all training-based baselines while remaining
slightly inferior to ChatGPT. Moreover, START achieves the high-
est score in comprehensiveness, demonstrating its
exceptional ability to generate responses that ex-
tensively cover information from multiple sources.
Overall, these findings are in line with the auto-
matic evaluation results in Table 1.
7 Conclusion
We propose START, a self-improvement framework
to push the frontier of LLM attribution. We iden-
5We utilize gpt-3.5-turbo-0125 version.
tify two key limitations for LLM attribution self-
improvement. To address these, START first lever-
ages self-constructed synthetic data for warming
up, aiming to prevent models from early stagna-
tion due to insufficient supervision signals. To ex-
plore more fine-grained supervision signals, START
constructs fine-grained preference supervision sig-
nals from low-quality samples for preference opti-
mization. Both automatic and human evaluations
demonstrate significant improvement in attribution
without relying on human annotations or more
advanced LLMs.
Limitations
Despite significant performance improvements, our
work presents several limitations worth noting.
Firstly, while our data synthesis process provides
a good starting point for the model to self-improve
and demonstrate some generalization on existing
benchmarks, it may not cover all scenarios en-
countered in user information-seeking. This limita-
tion raises concerns regarding the generalizability
of synthetic data in a more complex information-
seeking environment. Secondly, the iterative train-
ing pipeline of our self-improving framework is
time-consuming, presenting a significant trade-
off between performance and training duration.
Thirdly, although our self-improving framework
does not rely on human annotations and more ad-
vanced LLMs, it still necessitates the integration of
off-the-shelf NLI models to guarantee the quality
of attribution in the generated samples. The perfor-
mance of the NLI model significantly impacts the
quality of our outputs to a certain extent. To move
towards a fully self-improving framework that does
not rely on external judgment, future research could
investigate the use of intrinsic attribution signals
derived directly from the LLM itself.
Acknowledgements
Xiaocheng Feng is the corresponding author of
this work. We thank the anonymous review-
ers for their insightful comments. This work
was supported by the National Natural Science
Foundation of China (NSFC) (grant 62276078,
U22B2059),
the Key R&D Program of Hei-
longjiang via grant 2022ZX01A32, the Interna-
tional Cooperation Project of PCL, PCL2022D01
and the Fundamental Research Funds for the Cen-
tral Universities (Grant No.HIT.OCEF.2023018).
References
Akari Asai, Zeqiu Wu, Yizhong Wang, Avirup Sil, and
Hannaneh Hajishirzi. 2023. Self-rag: Learning to
retrieve, generate, and critique through self-reflection.
CoRR, abs/2310.11511.
Bernd Bohnet, Vinh Q. Tran, Pat Verga, Roee Aharoni,
Daniel Andor, Livio Baldini Soares, Jacob Eisen-
stein, Kuzman Ganchev, Jonathan Herzig, Kai Hui,
Tom Kwiatkowski, Ji Ma, Jianmo Ni, Tal Schuster,
William W. Cohen, Michael Collins, Dipanjan Das,
Donald Metzler, Slav Petrov, and Kellie Webster.
2022. Attributed question answering: Evaluation
and modeling for attributed large language models.
CoRR, abs/2212.08037.
Angela Fan, Yacine Jernite, Ethan Perez, David Grang-
ier, Jason Weston, and Michael Auli. 2019. ELI5:
long form question answering. In Proceedings of
the 57th Conference of the Association for Compu-
tational Linguistics, ACL 2019, Florence, Italy, July
28- August 2, 2019, Volume 1: Long Papers, pages
3558–3567. Association for Computational Linguis-
tics.
Tianyu Gao, Howard Yen, Jiatong Yu, and Danqi Chen.
2023. Enabling large language models to generate
text with citations. In Proceedings of the 2023 Con-
ference on Empirical Methods in Natural Language
Processing, EMNLP 2023, Singapore, December 6-
10, 2023, pages 6465–6488. Association for Compu-
tational Linguistics.
Mor Geva, Daniel Khashabi, Elad Segal, Tushar Khot,
Dan Roth, and Jonathan Berant. 2021. Did aristotle
use a laptop? A question answering benchmark with
implicit reasoning strategies. Trans. Assoc. Comput.
Linguistics, 9:346–361.
Çaglar Gülçehre, Tom Le Paine, Srivatsan Srini-
vasan, Ksenia Konyushkova, Lotte Weerts, Abhishek
Sharma, Aditya Siddhant, Alex Ahern, Miaosen
Wang, Chenjie Gu, Wolfgang Macherey, Arnaud
Doucet, Orhan Firat, and Nando de Freitas. 2023.
Reinforced self-training (rest) for language modeling.
CoRR, abs/2308.08998.
Or Honovich, Roee Aharoni, Jonathan Herzig, Hagai
Taitelbaum, Doron Kukliansy, Vered Cohen, Thomas
Scialom, Idan Szpektor, Avinatan Hassidim, and
Yossi Matias. 2022. TRUE: re-evaluating factual
consistency evaluation. In Proceedings of the 2022
Conference of the North American Chapter of the
Association for Computational Linguistics: Human
Language Technologies, NAACL 2022, Seattle, WA,
United States, July 10-15, 2022, pages 3905–3920.
Association for Computational Linguistics.
Arian Hosseini, Xingdi Yuan, Nikolay Malkin, Aaron C.
Courville, Alessandro Sordoni, and Rishabh Agar-
wal. 2024. V-star: Training verifiers for self-taught
reasoners. CoRR, abs/2402.06457.
Chengyu Huang, Zeqiu Wu, Yushi Hu, and Wenya
Wang. 2024a. Training language models to generate
text with citations via fine-grained rewards. CoRR,
abs/2402.04315.
Lei Huang, Xiaocheng Feng, Weitao Ma, Yuxuan Gu,
Weihong Zhong, Xiachong Feng, Weijiang Yu, Wei-
hua Peng, Duyu Tang, Dandan Tu, and Bing Qin.
2024b. Learning fine-grained grounded citations for
attributed large language models. In Findings of
the Association for Computational Linguistics, ACL
2024, Bangkok, Thailand and virtual meeting, Au-
gust 11-16, 2024, pages 14095–14113. Association
for Computational Linguistics.
Jianmo Ni, Chen Qu, Jing Lu, Zhuyun Dai, Gus-
tavo Hernández Ábrego, Ji Ma, Vincent Y. Zhao,
Yi Luan, Keith B. Hall, Ming-Wei Chang, and Yinfei
Yang. 2022. Large dual encoders are generalizable
retrievers. In Proceedings of the 2022 Conference on
Empirical Methods in Natural Language Processing,
EMNLP 2022, Abu Dhabi, United Arab Emirates, De-
cember 7-11, 2022, pages 9844–9855. Association
for Computational Linguistics.
OpenAI. 2023. GPT-4 technical report.
CoRR,
abs/2303.08774.
Lei Huang, Weijiang Yu, Weitao Ma, Weihong Zhong,
Zhangyin Feng, Haotian Wang, Qianglong Chen,
Weihua Peng, Xiaocheng Feng, Bing Qin, and Ting
Liu. 2023. A survey on hallucination in large lan-
guage models: Principles, taxonomy, challenges, and
open questions. CoRR, abs/2311.05232.
Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin,
Dmytro Okhonko, Samuel Broscheit, Gautier Izacard,
Patrick S. H. Lewis, Barlas Oguz, Edouard Grave,
Wen-tau Yih, and Sebastian Riedel. 2021. The web
is your oyster - knowledge-intensive NLP against a
very large web corpus. CoRR, abs/2112.09924.
Woosuk Kwon, Zhuohan Li, Siyuan Zhuang, Ying
Sheng, Lianmin Zheng, Cody Hao Yu, Joseph E.
Gonzalez, Hao Zhang, and Ion Stoica. 2023. Effi-
cient memory management for large language model
serving with pagedattention. In Proceedings of the
ACM SIGOPS 29th Symposium on Operating Systems
Principles.
Dongfang Li, Zetian Sun, Baotian Hu, Zhenyu Liu, Xin-
shuo Hu, Xuebo Liu, and Min Zhang. 2024. Improv-
ing attributed text generation of large language mod-
els via preference learning. CoRR, abs/2403.18381.
Dongfang Li, Zetian Sun, Xinshuo Hu, Zhenyu Liu,
Ziyang Chen, Baotian Hu, Aiguo Wu, and Min
Zhang. 2023. A survey of large language models
attribution. CoRR, abs/2311.03731.
Nelson F. Liu, Tianyi Zhang, and Percy Liang. 2023.
Evaluating verifiability in generative search engines.
In Findings of the Association for Computational Lin-
guistics: EMNLP 2023, Singapore, December 6-10,
2023, pages 7001–7025. Association for Computa-
tional Linguistics.
Ilya Loshchilov and Frank Hutter. 2019. Decoupled
weight decay regularization. In 7th International
Conference on Learning Representations, ICLR 2019,
New Orleans, LA, USA, May 6-9, 2019. OpenRe-
view.net.
Chaitanya Malaviya, Subin Lee, Sihao Chen, Elizabeth
Sieber, Mark Yatskar, and Dan Roth. 2023. Ex-
pertqa: Expert-curated questions and attributed an-
swers. CoRR, abs/2309.07852.
Sewon Min, Kalpesh Krishna, Xinxi Lyu, Mike
Lewis, Wen-tau Yih, Pang Wei Koh, Mohit Iyyer,
Luke Zettlemoyer, and Hannaneh Hajishirzi. 2023.
Factscore: Fine-grained atomic evaluation of factual
precision in long form text generation. In Proceed-
ings of the 2023 Conference on Empirical Methods
in Natural Language Processing, EMNLP 2023, Sin-
gapore, December 6-10, 2023, pages 12076–12100.
Association for Computational Linguistics.
Rafael Rafailov, Archit Sharma, Eric Mitchell, Christo-
pher D. Manning, Stefano Ermon, and Chelsea Finn.
2023. Direct preference optimization: Your language
model is secretly a reward model. In Advances in
Neural Information Processing Systems 36: Annual
Conference on Neural Information Processing Sys-
tems 2023, NeurIPS 2023, New Orleans, LA, USA,
December 10 - 16, 2023.
Jeff Rasley, Samyam Rajbhandari, Olatunji Ruwase,
and Yuxiong He. 2020. Deepspeed: System opti-
mizations enable training deep learning models with
over 100 billion parameters. In KDD ’20: The 26th
ACM SIGKDD Conference on Knowledge Discovery
and Data Mining, Virtual Event, CA, USA, August
23-27, 2020, pages 3505–3506. ACM.
Ivan Stelmakh, Yi Luan, Bhuwan Dhingra, and Ming-
Wei Chang. 2022. ASQA: factoid questions meet
long-form answers. In Proceedings of the 2022 Con-
ference on Empirical Methods in Natural Language
Processing, EMNLP 2022, Abu Dhabi, United Arab
Emirates, December 7-11, 2022, pages 8273–8288.
Association for Computational Linguistics.
Ye Tian, Baolin Peng, Linfeng Song, Lifeng Jin, Dian
Yu, Haitao Mi, and Dong Yu. 2024. Toward self-
improvement of llms via imagination, searching, and
criticizing. CoRR, abs/2404.12253.
Hugo Touvron, Louis Martin, Kevin Stone, Peter Al-
bert, Amjad Almahairi, Yasmine Babaei, Nikolay
Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti
Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton-
Ferrer, Moya Chen, Guillem Cucurull, David Esiobu,
Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller,
Cynthia Gao, Vedanuj Goswami, Naman Goyal, An-
thony Hartshorn, Saghar Hosseini, Rui Hou, Hakan
Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa,
Isabel Kloumann, Artem Korenev, Punit Singh Koura,
Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Di-
ana Liskovich, Yinghai Lu, Yuning Mao, Xavier Mar-
tinet, Todor Mihaylov, Pushkar Mishra, Igor Moly-
bog, Yixin Nie, Andrew Poulton, Jeremy Reizen-
stein, Rashi Rungta, Kalyan Saladi, Alan Schelten,
Ruan Silva, Eric Michael Smith, Ranjan Subrama-
nian, Xiaoqing Ellen Tan, Binh Tang, Ross Tay-
lor, Adina Williams, Jian Xiang Kuan, Puxin Xu,
Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan,
Melanie Kambadur, Sharan Narang, Aurélien Ro-
driguez, Robert Stojnic, Sergey Edunov, and Thomas
Scialom. 2023. Llama 2: Open foundation and fine-
tuned chat models. CoRR, abs/2307.09288.
Xi Ye, Ruoxi Sun, Sercan Ö. Arik, and Tomas Pfister.
2023. Effective large language model adaptation for
improved grounding. CoRR, abs/2311.09533.
Asaf Yehudai, Boaz Carmeli, Yosi Mass, Ofir Arviv,
Nathaniel Mills, Assaf Toledo, Eyal Shnarch, and
Leshem Choshen. 2024. Genie: Achieving hu-
man parity in content-grounded datasets generation.
CoRR, abs/2401.14367.
Weizhe Yuan, Richard Yuanzhe Pang, Kyunghyun Cho,
Sainbayar Sukhbaatar, Jing Xu, and Jason Weston.
2024. Self-rewarding language models. CoRR,
abs/2401.10020.
Zheng Yuan, Hongyi Yuan, Chengpeng Li, Guanting
Dong, Chuanqi Tan, and Chang Zhou. 2023. Scaling
relationship on learning mathematical reasoning with
large language models. CoRR, abs/2308.01825.
Eric Zelikman, Yuhuai Wu, Jesse Mu, and Noah D.
Goodman. 2022. Star: Bootstrapping reasoning with
reasoning. In Advances in Neural Information Pro-
cessing Systems 35: Annual Conference on Neural
Information Processing Systems 2022, NeurIPS 2022,
New Orleans, LA, USA, November 28 - December 9,
2022.
Wayne Xin Zhao, Kun Zhou, Junyi Li, Tianyi Tang,
Xiaolei Wang, Yupeng Hou, Yingqian Min, Be-
ichen Zhang, Junjie Zhang, Zican Dong, Yifan Du,
Chen Yang, Yushuo Chen, Zhipeng Chen, Jinhao
Jiang, Ruiyang Ren, Yifan Li, Xinyu Tang, Zikang
Liu, Peiyu Liu, Jian-Yun Nie, and Ji-Rong Wen.
2023. A survey of large language models. CoRR,
abs/2303.18223.
A Data Synthesis
A.1 Data Sources
The queries employed for data synthesis are
sourced from the Wish-QA (Yehudai et al., 2024),
which provides high-quality grounded data suit-
able for content-grounded generation tasks such
as long-form question-answering and summariza-
tion. Specifically, we utilize the ELI5 subset of
the WishQA, noted for its high lexical diversity,
comprising a total of 8,413 queries. Notably, we
randomly sample 5,000 user queries for our data
synthesis, resulting in the creation of 5,000 syn-
thetic data points.
A.2 Prompts for Data Synthesis
We detail the prompts employed in the synthetic
data generation stage, covering response genera-
tion, claim decomposition, and document genera-
tion, shown in Figure 5.
A.3 Implementation Details
In our work, we use Llama-2-13b-base for data
synthesis. Since our goal is to realize self-improvement
of the attribution ability of LLMs, the model used
in the data synthesis stage and in the subsequent main
experiments needs to be consistent, without introduc-
ing additional, more powerful models.
the LLM’s ability to accurately follow instructions
at each step, we utilize in-context learning, incorpo-
rating two demonstrations for response generation,
claim decomposition, and document generation.
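A compact sketch of this three-step pipeline driven by the Figure 5 prompts; the `llm` callable, the abbreviated prompt strings, and the line-based claim parsing are assumptions for illustration.

```python
from typing import Callable, List

def synthesize_example(query: str, llm: Callable[[str], str]) -> dict:
    # Step 1: response generation (at most five statements, per the Figure 5(a) prompt).
    response = llm(
        "Instruction: Given a question, generate a detailed and informative response that covers "
        "multiple perspectives. Limit the response to a maximum of five statements.\n"
        f"Question: {query}\nResponse:"
    )
    # Step 2: claim decomposition (Figure 5(b)); one claim per line is an assumed output format.
    claims_text = llm(
        "Instruction: Given a detailed and informative response, break it into its constituent claims.\n"
        f"Response: {response}\nClaims:"
    )
    claims: List[str] = [c.strip("-• ").strip() for c in claims_text.splitlines() if c.strip()]
    # Step 3: document generation (Figure 5(c)); one ~100-word supporting document per claim.
    documents = [
        llm(f"Instruction: Given a claim, generate a 100-word document with a title.\nClaim: {c}\nDocuments:")
        for c in claims
    ]
    return {"query": query, "response": response, "claims": claims, "documents": documents}
```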
A.4 Quality of Synthetic Data
We focus on evaluating the attributability of the
final response. Specifically, we employ an off-
the-shelf Natural Language Inference (NLI) model,
TRUE (Honovich et al., 2022), to verify whether
each statement in the response is fully supported by
the cited documents and to check for the presence
of any irrelevant citations. The results indicate that
the synthetic data are of significantly high qual-
ity: 92.3% of the statements are fully supported
by the cited documents, and 94.1% are free from
irrelevant citations.
B Details of evaluation datasets
Our evaluation utilizes the ASQA, ELI5, and Strat-
egyQA datasets. For both ASQA and StrategyQA,
Wikipedia serves as the external knowledge base,
specifically employing the Wikipedia snapshot
from 2018-12-20. For the ELI5 dataset, the ex-
ternal knowledge source is Sphere (Piktus et al.,
2021). Regarding the retrievers, we use the dense
retriever GTR (Ni et al., 2022) for Wikipedia and
the sparse retriever BM25 for Sphere. Detailed
statistics for these datasets are presented in Table 5.
In line with previous research by Gao et al. (2023),
we use the same evaluation datasets for ASQA and
ELI5. Regarding StrategyQA, we adopt the settings
of Ye et al. (2023), utilizing a randomly split sub-
set of 490 test instances for evaluation. To further
clarify, we provide an example from each dataset
in Figure 6.
(a) Prompt template for response generation
Instruction: Given a question, generate a detailed and informative response that covers multiple
perspectives and synthesizes information from various sources. Limit the response to a maximum of
five statements.
Question: [Question]
Response:
(b) Prompt template for claim decomposition
Instruction: Given a detailed and informative response, break it into its constituent claims.
Identify and list each distinct claim, ensuring to capture all essential elements and nuances
presented in the original response.
Response: [Response]
Claims:
(c) Prompt template for document generation
Instruction: Given a claim, generate a 100-word document with a title. The main content of the
document should elaborate on the claims and contain the main content of the claim.
Claim: [Claim]
Documents:
Figure 5: Illustration of the prompting design for the data synthesis pipeline.
(a) An example of ASQA
Question: Who is the original artist of sound of silence?
Documents: [The retrieved documents are omitted here]
Answer: There are several songs with the title "Sound of Silence". Sounds of Silence is the
second studio album by Simon & Garfunkel, released on January 17, 1966. The album’s title is a
slight modification of the title of the duo’s first major hit, "The Sound of Silence", which was
recorded in March 1964 and originally was released as "The Sounds of Silence". Another "Sound
of Silence" is a song performed by Australian recording artist Dami Im, and is best known as
Australia’s entry at the Eurovision Song Contest 2016.
(b) An example of ELI5
Question: How does so much of our trash end up in the ocean?
Documents: [The retrieved documents are omitted here]
Answer: Because water flows downhill and very often ends up in rivers which very often end up in
oceans. So when it rains, trash is washed downhill and into streams and rivers and ultimately the
ocean.
(c) An example of StrategyQA
Question: Did Curiosity outlive its expected lifespan?
Documents: [The retrieved documents are omitted here]
Answer: No. "Curiosity" rover has outlasted its expected lifespan. The rover was designed
to last for months, but is still operating after years on Mars. In August 2017, "Curiosity"
celebrated its fifth anniversary on Mars and is expected to continue its mission for years to come.
The longevity of "Curiosity" can be attributed to the advanced technology used in the rover’s
design and the meticulous planning and preparation done by the engineers and scientists. With the
advancement of technology and the continued refinement of the mission, "Curiosity" is likely to
continue operating for many more years to come.
Figure 6: Examples of the ASQA, ELI5, and StrategyQA datasets.
(a) Prompt template of ASQA and ELI5
Instruction: Write an accurate, engaging, and concise answer for the given question using only
the provided search results (some of which might be irrelevant) and cite them properly. Use
an unbiased and journalistic tone. Always cite for any factual claim. When citing several
search results, use [1][2][3]. Cite at least one document and at most three documents in each
sentence. If multiple documents support the sentence, only cite a minimum sufficient subset of
the documents.
Question: [Question]
Documents: [Documents]
(c) Prompt template of StrategyQA
Instruction: Answer “yes” or “no” first. Then, write a clear and concise answer that combines
reasoning with relevant search results and cite the sources properly, even if some might be
irrelevant.
Question: [Question]
Documents: [Documents]
Figure 7: Illustration of the prompting design of evaluation datasets.
Dataset | Source | # Examples
ASQA (Stelmakh et al., 2022) | Wiki | 948
ELI5 (Fan et al., 2019) | Sphere | 1000
StrategyQA (Geva et al., 2021) | Wiki | 490
Table 5: Statistics of datasets used for evaluation.
C Automatic Evaluation Details
We provide a detailed description of the evaluation
metrics employed to assess the quality of the model-
generated responses.
Citation Quality. Citation Quality is a critical
evaluation dimension in attributed text generation,
assessing whether the answer is fully supported by
the cited documents and that no irrelevant docu-
ments are cited. Following Liu et al. (2023) and
Gao et al. (2023), the evaluation of citation quality
is typically divided into two parts: Citation Recall
and Citation Precision.
Citation Recall evaluates whether all generated
statements are fully supported by the cited docu-
ments. Specifically, for each statement si ∈ S,
its citation recall is scored as 1 if there is at
least one valid citation (Ci ≠ ∅) and the concate-
nation of cited documents concat(Ci) fully sup-
ports the statement (ϕ(concat(Ci), si) = 1), where
ϕ(premise, hypothesis) is an NLI model that out-
puts 1 if the premise entails the hypothesis. The
final citation recall is calculated by averaging over
all statements in S.
Citation Precision assesses whether any citations
in the response are irrelevant. A citation ci,j is
determined as “irrelevant” if (a) ci,j alone cannot
support statement si and (b) removing ci,j does not
affect the rest of the citations to support si.
Citation F1 is a metric that combines citation
precision and citation recall by calculating their
harmonic mean. In our work, we utilize this metric
to evaluate the overall citation quality of the re-
sponse, where a higher Citation F1 score indicates
a more accurately and comprehensively attributed
response.
F1 = \frac{2 \cdot \text{citation precision} \cdot \text{citation recall}}{\text{citation precision} + \text{citation recall}} \quad (7)
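A simplified sketch of these citation metrics, reusing an NLI `entails(premise, hypothesis)` helper such as the one sketched in Section 3.2; statement segmentation and several corner cases of the precision rule in Gao et al. (2023) are omitted.

```python
def citation_recall(statements, citations, docs, entails):
    """statements[i]: a generated sentence; citations[i]: indices of the documents it cites."""
    supported = 0
    for stmt, cites in zip(statements, citations):
        if cites and entails(" ".join(docs[j] for j in cites), stmt):
            supported += 1
    return supported / max(len(statements), 1)

def citation_precision(statements, citations, docs, entails):
    """A citation is irrelevant if it neither supports the statement on its own nor is
    needed once the remaining citations are considered (simplified check)."""
    total = relevant = 0
    for stmt, cites in zip(statements, citations):
        for j in cites:
            total += 1
            alone = entails(docs[j], stmt)
            rest = [k for k in cites if k != j]
            still_supported = bool(rest) and entails(" ".join(docs[k] for k in rest), stmt)
            if alone or not still_supported:
                relevant += 1
    return relevant / max(total, 1)

def citation_f1(recall, precision):
    return 0.0 if recall + precision == 0 else 2 * precision * recall / (precision + recall)
```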
Correctness. Correctness is crucial in long-form
QA tasks. Given the ambiguous nature of the
ASQA dataset, where each question requires mul-
tiple short answers to cover different aspects, we
follow Stelmakh et al. (2022) and calculate the re-
call of correct short answers using exact match.
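A minimal sketch of this exact-match recall; the text normalization and the handling of answer aliases are standard choices assumed here, not taken from the paper.

```python
import re
import string

def normalize(text: str) -> str:
    text = text.lower()
    text = "".join(ch for ch in text if ch not in string.punctuation)
    return re.sub(r"\s+", " ", text).strip()

def em_recall(output: str, gold_answer_sets) -> float:
    """gold_answer_sets: one set of acceptable aliases per required short answer.
    Returns the fraction of required short answers found verbatim in the output."""
    out = normalize(output)
    hits = sum(any(normalize(alias) in out for alias in aliases) for aliases in gold_answer_sets)
    return hits / max(len(gold_answer_sets), 1)
```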
As for the ELI5 dataset, evaluating the cor-
rectness of long-form answers is challenging.
Thus, the ALCE benchmark employs Instruct-
GPT (text-davinci-003) to generate three "sub-
claims" based on the human-annotated answers. To
assess correctness, we use a T5-11B model6 that
has been fine-tuned on a collection of NLI datasets
to check whether the model-generated outputs en-
tail these sub-claims.
6https://huggingface.co/google/t5_xxl_true_
nli_mixture
D Human Evaluation Details
Considering the open-ended nature of long-form
QA tasks, automatic evaluation of correctness may
not cover all possible answers. Furthermore, the
evaluation of citation quality is constrained by the
capabilities of the off-the-shelf NLI model, which
may not adequately detect cases of partial sup-
port. Therefore, we conduct a human evaluation
to assess the attribution quality and correctness of
START. We recruited two annotators, each holding at
least a bachelor’s degree, to participate in our study.
To evaluate citation quality, annotators are asked
to verify whether each statement in the responses
is fully supported, partially supported, or not sup-
ported by the cited documents and identify error
types if the statement is not fully supported.
Next, we evaluate the overall quality of the re-
sponses, focusing on comprehensiveness and cor-
rectness. Annotators are asked to rate both compre-
hensiveness and correctness using a 5-point Likert
scale, capturing different levels of content coverage
and factuality.
E Baselines
Knowledge Distillation: We employ supervised
fine-tuning to teach Llama-2-13B to generate re-
sponses with citations, utilizing training data dis-
tilled from the most advanced LLMs. Specifically,
the queries and documents are sourced from our
synthetic dataset and the attributed responses are
generated by Llama-3-70B-Instruct / Mixtral-8x7B-
Instruct.
Self-RAG (Asai et al., 2023): The method in-
volves training the LLM to generate text with re-
flection tokens, which are categorized into retrieval
and critique tokens to indicate the need for retrieval
and the attributability of its generation, respec-
tively. Specifically, it first collects over 145,619
supervised training examples by prompting GPT-4 with specific
instructions to generate responses with reflection
tokens for knowledge-intensive queries. These
data are then used to train the LLM to generate
responses with self-reflection via supervised fine-
tuning.
AGREE (Ye et al., 2023): The method involves
training the LLM to generate grounded claims with
citations and to identify unverified claims. Specif-
ically, it first collects 4,500 attribution data via
post-hoc attribution with the help of an NLI model.
These data are then used to train the model to gen-
erate grounded responses with citations and also
clearly state the unsupported statements. An iter-
ative retrieval process is employed to search for
additional information for the unsupported state-
ments via a test-time adaptation (TTA) strategy.
APO (Li et al., 2024): This method models the
attributed text generation task as a preference learn-
ing task. Specifically, the model is first trained
using 6,330 human-labeled high-quality attribution
data for supervised fine-tuning to learn the basic
ability of attribution. It then leverages automat-
ically constructed preference data for preference
learning, where a positive response is generated
from relevant documents accompanied by a posi-
tive prompt, while a negative response is generated
using irrelevant documents or a negative prompt.
FGR (Huang et al., 2024a): The method first
collects 3,000 in-domain user queries along with re-
trieved documents and then leverages ChatGPT to
generate high-quality attributed responses. These
data then serve as training data to teach the model
the basic ability of citation generation via super-
vised fine-tuning. Subsequently, the method de-
signs reward models to teach the model to gen-
erate well-supported and accurate responses via
fine-grained reinforcement learning.
To ensure a fair comparison, we employ the same
base model (Llama-2-13b-base) for evaluating
all baselines. For Self-RAG, AGREE, and APO,
we directly utilize their published experimental re-
sults.
In the case of FGR, which does not pro-
vide Llama-2-13b-base results, we reproduce the
experiments using the official code and the same
settings provided by the authors.
F Implement Details
In all experiments, training is conducted using eight
A100-80GB GPUs, leveraging Deepspeed stage
3 (Rasley et al., 2020) for multi-GPU distributed
training, with training precision Bfloat16 enabled.
During the initial warm-up stage, we employ the
AdamW (Loshchilov and Hutter, 2019) optimizer
with a warm-up ratio of 0.03. The total batch size
is set at 64, and the learning rate is maintained at
2e-5. The maximum input sequence length is con-
figured to 2048 tokens. The model is trained with
only 20% of the synthetic dataset for two epochs in
this stage. This strategy is designed to prevent the
model from overfitting to the synthetic data during
the warm-up stage, enabling it to generate more
diverse samples in the subsequent rejection sam-
pling fine-tuning stage. In the self-improving stage,
we conduct rejection-sampling fine-tuning for three
epochs at each iteration, maintaining the same train-
ing settings as those used during the warming-up
stage. To get the highest quality responses dur-
ing rejection sampling, we set the threshold for
attributability reward at 1.0, ensuring that every
statement in the response is fully supported by the
cited documents. For comprehensive, we set the
threshold to 0.8, which means that at least 80% of
the statements need to be cited. Subsequently, dur-
ing the fine-grained preference optimization, the
model is further trained for one additional epoch
using a learning rate of 1e-5.
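To tie these settings together, here is a condensed sketch of one self-improvement cycle (rejection-sampling fine-tuning followed by fine-grained DPO), with the training routines passed in as callables. The loop structure and helper names are simplifications that reuse `holistic_reward` and `build_preference_pairs` from the Section 3.2 sketches; they are not the authors' code.

```python
def start_self_improve(model, queries_with_docs, sample, score, sft, dpo,
                       n_iterations=3, n_samples=16):
    """sample(model, q, docs, n) -> candidate responses; score(resp, docs) -> (attr, compre, robust);
    sft(model, examples) and dpo(model, ref_model, pairs) return updated models.
    n_samples = 16 follows the sampling setting shown in the pipeline overview."""
    for _ in range(n_iterations):
        sft_examples, pref_pairs = [], []
        for query, docs in queries_with_docs:
            candidates = sample(model, query, docs, n_samples)
            triples = [score(c, docs) for c in candidates]
            rewards = [holistic_reward(a, c, r) for a, c, r in triples]
            best = max(range(len(candidates)), key=rewards.__getitem__)
            attr, compre, _ = triples[best]
            if attr == 1.0 and compre >= 0.8:                 # thresholds from this appendix
                sft_examples.append((query, docs, candidates[best]))
                pref_pairs += build_preference_pairs(
                    candidates, [(a, c) for a, c, _ in triples], candidates[best])
        model = sft(model, sft_examples)      # rejection-sampling fine-tuning
        reference = model                     # pi_ref: in practice a frozen copy of the post-SFT model
        model = dpo(model, reference, pref_pairs)   # fine-grained preference optimization
    return model
```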
During the evaluation, we utilize the vLLM
framework (Kwon et al., 2023) for efficient infer-
ence. Without special instructions, the sampling
parameters are specifically configured with a tem-
perature of 1.0 and a top-p setting of 0.95. We
present detailed prompts used during the evalua-
tion process in Figure 7.
|
synthetic_cpt | 7 | Hybrid_Training_Approaches_for_LLMs_Leveraging_Real_and_Synthetic_Data_to_Enhance_Model_Performance_in_Domain-Specific_Applications.pdf | 4
2
0
2
p
e
S
5
2
]
L
C
.
s
c
[
1
v
3
3
4
7
1
.
9
0
4
2
:
v
i
X
r
a
Preprint
HDFLOW: ENHANCING LLM COMPLEX PROBLEM-
SOLVING WITH HYBRID THINKING AND DYNAMIC
WORKFLOWS
Wenlin Yao, Haitao Mi, Dong Yu
Tencent AI Lab
Bellevue, WA 98004, USA
{wenlinyao,haitaomi,dyu}@global.tencent.com
ABSTRACT
Despite recent advancements in large language models (LLMs), their performance
on complex reasoning problems requiring multi-step thinking and combining var-
ious skills is still limited. To address this, we propose a novel framework HDFlow
for complex reasoning with LLMs that combines fast and slow thinking modes
in an adaptive manner. Our approach consists of two key components: 1) a new
approach for slow, deliberate reasoning called Dynamic Workflow, which auto-
matically decomposes complex problems into more manageable sub-tasks and
dynamically designs a workflow to assemble specialized LLM or symbolic rea-
soning tools to solve sub-tasks; 2) Hybrid Thinking, a general framework that dy-
namically combines fast and slow thinking based on problem complexity. Finally,
we propose an easy-to-scale method for automatically synthesizing a large-scale
dataset of 27K challenging reasoning problems for complex reasoning and a hy-
brid thinking tuning method that trains smaller LLMs on this dataset to internalize
the fast/slow hybrid reasoning strategies. Experiments on four reasoning bench-
mark datasets demonstrate that our slow thinking with dynamic workflows signif-
icantly outperforms Chain-of-Thought, and hybrid thinking achieves the highest
accuracy while providing an effective balance between computational efficiency
and performance. Fine-tuning using our hybrid thinking approach also signifi-
cantly boosts the complex reasoning capabilities of open-source language models.
The results showcase the promise of slow thinking, dynamic workflows, and hy-
brid thinking in expanding the frontier of complex problem-solving with LLMs1.
1 INTRODUCTION
Large language models (LLMs) have demonstrated remarkable capabilities across a wide range of
tasks, from code generation and mathematical reasoning to natural language understanding and gen-
eration. However, their performance on complex reasoning problems that require multi-step think-
ing and various skills is still limited. Recent advancements in symbolic reasoning and tool usage,
such as AlphaCode (Li et al., 2022; AlphaCode Team), AlphaGeometry (Trinh et al., 2024), and
AlphaProof (AlphaProof/AlphaGeometry teams), have shown significant improvements in specific
domains by integrating LLMs with specialized procedures and symbolic reasoning engines. Var-
ious prompting strategies, such as Chain-of-Thought (CoT) (Wei et al., 2022), Tree of Thoughts
(ToT) (Yao et al., 2024), and Graph of Thoughts (GoT) (Besta et al., 2024a), have been developed to
enable different reasoning topologies to enhance LLM problem-solving capabilities. Despite these
advancements, enhancing the reasoning abilities of LLMs to solve challenging problems across di-
verse domains in a unified framework remains crucial for expanding their real-world applicability.
Existing methods for complex reasoning with LLMs have several limitations. First, complex
problem-solving often requires combining various knowledge domains, skills, and tool usage. While
previous approaches such as AlphaCodium (Ridnik et al., 2024) and Alphageometry (Trinh et al.,
2024) have demonstrated the potential of combining language models and symbolic reasoning to
1Code and data will be released at https://github.com/wenlinyao/HDFlow.
solve complex problems, they rely on manually designed workflows tailored to specific domains
(i.e., competitive programming or geometry theorem proving). The language model and symbolic
engine take predefined turns in a rigid problem-solving process. This limits the applicability and
adaptability of these systems to broader domains. Thus, we aim to enhance the generic problem-
solving capabilities of LLMs by dynamically alternating between natural language reasoning in the
“text space” and symbolic reasoning in the “symbolic space” based on the problem at hand. This
dynamic integration of the two reasoning modes enables the system to address a much broader range
of problems and adapt the problem-solving process to the unique requirements of each task. Second,
traditional approaches to complex problem-solving with LLMs often rely on a single mode of think-
ing, which may struggle with more intricate tasks that demand a deliberate, analytical approach. For
example, many approaches employ a fixed reasoning strategy, such as CoT prompting, regardless
of the problem’s complexity. For instance, OpenAI’s most recent o1 model2 only engages in a sin-
gular deep thinking mode despite the complexity of the user’s query. This can lead to suboptimal
performance on tasks that require a more deliberate, multi-step approach. While multi-agent frame-
works such as AutoGPT (Significant Gravitas), ReAct Yao et al. (2022), and AutoGen (Wu et al.,
2023) have addressed some aspects of this challenge by enabling recursive goal decomposition, in-
terleaving reasoning and acting, and state-driven workflows, they do not fully exploit the potential of
thinking approaches that can switch between intuitive thinking and more analytical thinking modes
based on problem complexity. Finally, as problem complexity increases, the performance of exist-
ing approaches tends to degrade significantly, highlighting the need for frameworks that can scale
to handle even the most challenging reasoning problems. Recently, OpenAI o1 model (OpenAI)
demonstrates the potential to consistently improve LLM performance of complex reasoning with
compute scaling in inference-time through deep thinking.
To address these limitations, we propose a novel framework for complex reasoning with LLMs
that combines fast (system I) and more analytical slow thinking (system II) adaptively, inspired
by the dual process theory of human cognition (Daniel, 2017). Our approach consists of two key
components. First, we introduce a new approach for slow, deliberate reasoning called Dynamic
Workflow, which automatically decomposes complex problems into more manageable sub-tasks. It
then dynamically designs a workflow to assemble specialized LLM or symbolic tools to solve each
sub-task. To achieve this, the dynamic workflow orchestrates a team of specialized LLM experts,
each contributing unique domain knowledge or tool usage, to solve the sub-tasks in a structured
manner. Second, we propose Hybrid Thinking, a general framework that dynamically combines
fast and slow thinking based on problem complexity. For simpler tasks, the model defaults to a
fast-thinking mode using CoT strategy. When the model’s confidence in the fast thinking output is
low, it automatically switches to slow thinking with dynamic workflow, allowing for more efficient
and more accurate problem-solving. Finally, to train local LLMs for complex reasoning, we present
an easy-to-scale method for automatically synthesizing a large-scale dataset of 27K challenging
reasoning problems and propose a hybrid thinking tuning approach that finetunes open-source LLMs
on this dataset, enabling them to internalize the fast/slow hybrid reasoning strategies.
We conduct experiments on four reasoning benchmark datasets (i.e., BBH (Suzgun et al., 2022),
MATH (Hendrycks et al., 2021), Game of 24 Yao et al. (2024), DeepMind Math (Saxton et al.,
2019). Experiments using GPT-4-Turbo reveal that slow thinking with dynamic workflows sig-
nificantly outperformed CoT, with an average accuracy improvement of 22.4%. Hybrid thinking,
which combines fast and slow thinking, achieved the highest accuracy on three of the four datasets.
While slow thinking required the most inference tokens, hybrid thinking struck an effective balance
between computational efficiency and performance. Furthermore, fine-tuning Llama-3-8B-Instruct
using hybrid thinking significantly boosted performance across all datasets compared to the original
model. Hybrid thinking after fine-tuning yielded accuracy gains of 10-23% over CoT prompting,
with broad improvements across different subject areas in MATH. Overall, the results demonstrate
the promise of slow thinking with dynamic workflows and hybrid thinking in enhancing the complex
problem-solving abilities of LLMs.
2o1-preview model tested on Sept.24, 2024. o1-preview model thinks for a few seconds to users’ casual
conversational queries such as How are you?
2 RELATED WORK
Symbolic Reasoning and Tool Usage. Bridging LLMs with symbolic reasoning and tool usage
has demonstrated significant improvements across various domains. AlphaCode (Li et al., 2022;
AlphaCode Team) combines LLMs with a specialized search and reranking mechanism, achieving
top-tier performance in competitive programming. Similarly, AlphaCodium (Ridnik et al., 2024)
improves AlphaCode’s performance by applying a predefined multi-stage process of problem anal-
ysis, solution generation, and iterative testing and bug fixing. By using an evolutionary search
procedure guided by an LLM, FunSearch (Romera-Paredes et al., 2024) can discover new mathe-
matical constructions and algorithmic heuristics. AlphaGeometry (Trinh et al., 2024) leverages a
neuro-symbolic system trained on synthetic data to guide a symbolic deduction engine, achieving
near-expert performance in geometry theorem proving. Chain of Code (Li et al., 2024) encourages
LLMs to write pseudocode for challenging sub-problems, which is then executed by the LM itself
when it cannot be handled by a standard interpreter. These approaches rely on carefully designing
when and how to integrate symbolic reasoning for each task domain.
Prompting Strategies. Various prompting strategies have been developed to enable different rea-
soning topologies (Besta et al., 2024b) for enhancing LLM problem-solving capabilities. Chain-
of-Thought (CoT) prompting (Wei et al., 2022) first introduced the concept of generating inter-
mediate reasoning steps to improve performance on complex tasks. Building upon this, the Tree
of Thoughts (ToT) (Yao et al., 2024) enables the exploration of multiple potential reasoning paths
and incorporates deliberate decision-making through self-evaluation and backtracking. Graph of
Thoughts (GoT) (Besta et al., 2024a), models LLM-generated information as an arbitrary graph
where thoughts are vertices and dependencies are edges, allowing for more complex reasoning
structures and outcomes. In a different direction, Program of Thoughts (PoT) approach (Chen et al.,
2022) disentangles computation from reasoning by expressing the reasoning process as a program,
with external computation handling numerical operations. SELF-DISCOVER (Zhou et al., 2024)
introduces a self-discovery process where LLMs autonomously select and compose multiple atomic
reasoning modules into explicit reasoning structures. Our hybrid thinking approach allows for the
efficient resolution of tasks within the LLM’s core capabilities through direct reasoning, while adap-
tively engaging in deeper, multi-step workflows for more complex problems.
Multi-Agent Frameworks for Task-Solving. Recent advancements also led to the development of
various frameworks for complex task-solving and multi-agent collaboration. AutoGPT (Significant
Gravitas) pioneered the idea of using LLMs for recursive goal decomposition and task completion,
where sub-tasks are then performed sequentially to yield a larger result. ReAct (Yao et al., 2022) in-
troduced a method for interleaving reasoning and acting, allowing LLMs to generate both reasoning
traces and task-specific actions. Reflexion (Shinn et al., 2024) further enhanced language agents’
capabilities by incorporating verbal reinforcement learning, enabling them to reflect on feedback
and improve decision-making. MetaGPT (Hong et al., 2024) addressed the challenge of LLM hal-
lucination in multi-agent systems by incorporating human workflows and standardized operating
procedures into the framework. AutoGen (Wu et al., 2023) presented a flexible multi-agent con-
versation framework that allows for customizable, conversable agents with human participation.
CAMEL (Li et al., 2023) introduced a role-playing approach to facilitate autonomous cooperation
among communicative agents. Finally, StateFlow (Wu et al., 2024) proposed a state-driven work-
flow that conceptualizes complex task-solving processes as state machines, enhancing control and
interpretability. In contrast to these existing works, our approach uniquely integrates hybrid think-
ing, combining fast and slow thinking modes with automatic workflows, to enhance LLMs’ ability
to tackle complex reasoning problems more effectively and with greater adaptability.
3 OVERVIEW OF THE HYBRID THINKING APPROACH
Our hybrid thinking approach (Figure 1) combines the strengths of fast and slow thinking modes
to enable LLMs to more effectively solve complex reasoning problems. It consists of the following
three key components. 1) Fast Thinking with Direct CoT. In the fast thinking mode, the LLM uses
a direct chain of thought (CoT) approach to quickly solve the task query if possible. This leverages
the LLM’s core abilities to perform certain types of reasoning efficiently by directly generating the
rationale and the final answer. 2) Adaptive Combination of Fast and Slow Thinking. Next, we
employ a self-verification mechanism where the LLM examines each step of the fast-thinking CoT
Figure 1: Overview of our HDFlow approach for complex problem-solving. Overall, it is a dual-
path hybrid thinking approach, beginning with a CoT solver for initial fast reasoning followed by
verification of each reasoning step. If verification fails, the process transitions to a slower, more
deliberate ”Dynamic Workflow Solver.” This solver iterates until a verified answer is obtained, in-
corporating a final verification step before concluding with a solution.
Figure 2: Three-Stage Framework of Dynamic Workflow. The dynamic workflow design begins
with Problem Reflection, where key elements are analyzed and sub-tasks identified. Stage 2 focuses
on Expert Design, utilizing a variety of specialists and tools to architect an optimal workflow. Stage
3 involves constructing and executing the workflow graph to get the final result.
reasoning to assess its confidence in the generated answer. This is achieved by applying the LLM
to analyze the coherence, logical consistency, and correctness of each reasoning step in the context
of the given query. If the LLM detects any inconsistencies, errors, or low-confidence steps during
this self-verification process, it triggers a switch to the slow-thinking mode. 3) Slow Thinking with
Dynamic Workflow. To tackle highly complex tasks, we propose a novel slow-thinking mechanism
called Dynamic Workflow (Figure 2), which automatically decomposes the original task into sub-
tasks and dynamically switches between verbal reasoning and symbolic reasoning to solve each sub-
task. Our approach starts with multi-level problem reflection and decomposition. It then designs a
workflow to assemble specialized LLM skills or symbolic tools for sub-tasks. Next, we dynamically
chain together the sub-task reasoning steps into a multi-step workflow and execute the workflow.
Finally, all sub-task results are aggregated into the final answer to the original query. We will present
details in Section 4.
By first attempting fast thinking, our hybrid thinking approach can efficiently handle queries that are
within the LLM’s core capabilities. When the query exceeds what fast thinking alone can confidently
handle, hybrid thinking smoothly transitions to the slow-thinking workflow, enabling the LLM to tackle a broader range of challenges accurately.
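To make the control flow concrete, the sketch below outlines one way the hybrid thinking loop of Figure 1 could be wired together. The helper callables fast_cot, verify_steps, dynamic_workflow_solve, and verify_answer are hypothetical stand-ins for the underlying LLM and workflow calls, not the exact interfaces of our implementation.

from typing import Callable, Optional

def hybrid_think(
    query: str,
    fast_cot: Callable[[str], dict],              # hypothetical: returns {"steps": [...], "answer": str}
    verify_steps: Callable[[str, list], bool],    # hypothetical: True if every CoT step passes verification
    dynamic_workflow_solve: Callable[[str], str], # hypothetical: slow thinking with dynamic workflow
    verify_answer: Callable[[str, str], bool],    # hypothetical: final answer verification
    max_retries: int = 3,
) -> Optional[str]:
    # Fast thinking: direct chain-of-thought on the query.
    cot = fast_cot(query)
    if verify_steps(query, cot["steps"]):
        return cot["answer"]
    # Verification failed: switch to slow thinking and retry with new decompositions.
    for _ in range(max_retries):
        answer = dynamic_workflow_solve(query)
        if verify_answer(query, answer):
            return answer
    return None  # no verified answer after max_retries attempts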
4 SLOW THINKING WITH DYNAMIC WORKFLOW
In contrast to the rapid responses of fast thinking (e.g., CoT), our new slow-thinking mechanism
applies dynamic workflow to enable a more deliberate, analytical approach to complex problem-
solving (see Figure 2). It allows an LLM to dynamically transition between reasoning in the text
space (natural language reasoning) and the symbolic space (symbolic reasoning). The high-level
idea is we first let the LLM decompose the original reasoning problem into several more manageable
sub-tasks and solve each sub-task to form the final solution. When necessary, the LLM Engine will
translate the sub-problem from the text space into the symbolic space, enabling the symbolic engine (in this paper, we mainly use programs to achieve symbolic reasoning)
to perform precise symbolic reasoning. The results are then mapped back into natural language using
the LLM Engine. By decomposing the problem, combining the strengths of both natural language
and symbolic reasoning in a tailored workflow, and executing it from start to finish, LLMs can
tackle very hard problems that require multiple steps of accurate reasoning. Appendix B presents a
complete example solution using our dynamic workflow approach and compares it with the solution produced by OpenAI o1-preview. The prompts used are listed in Appendix C.
4.1 BREAKING DOWN COMPLEXITY: PROBLEM ANALYSIS AND DECOMPOSITION (STAGE 1)
The first step in our slow thinking is problem analysis and planning. We aim to break down the
original problem statement into more manageable sub-tasks. Specifically, the LLM is asked to
analyze the key elements of the query, such as available information, constraints, and the desired
output. It then identifies logical sub-goals needed to progress from the initial state to the solution.
This decomposition allows the LLM to approach the problem in a structured manner, focusing on
one part at a time. Therefore, the LLM can catch gaps in reasoning and handle complex problems
that the fast thinking of CoT alone would struggle with.
Problem Reflection. The first step in tackling complex problems is conducting a thorough problem
reflection. This involves the LLM analyzing the original problem and restating it in its own words
to demonstrate understanding. Our problem reflection includes two parts: 1) Identifying the core
objective or question posed by the problem. 2) Recognizing any constraints, assumptions, or special
conditions mentioned. By internalizing the problem through reflection, the LLM can gain a solid
understanding of what needs to be accomplished before proceeding to decomposition.
Subtask Decomposition. Once the problem is well understood, the LLM is instructed to perform a multi-level decomposition that breaks it down into tractable sub-problems. The LLM is asked to follow four principles to achieve an optimal decomposition. 1) Sequential dependency: the sub-problems are organized in a logical sequence, such that the outputs of earlier steps feed into subsequent ones, creating a structured workflow from start to finish. 2) Non-overlapping: each sub-problem represents a distinct portion of the original problem, with no duplication of work between sub-problems, which keeps the overall solution efficient. 3) Proper decomposition: the sub-problems are decomposed to the optimal level of granularity, not so small that there are too many to track and coordinate, but not so large that the LLM still struggles to solve them. 4) Modular: where appropriate, sub-problems are defined in a generalizable, modular way, such that the logic and code used to solve them can potentially be reused for similar problems in other contexts.
Integrating Symbolic Reasoning. Another key aspect of our approach is leveraging symbolic engines to modularize the solution and handle well-defined sub-tasks more accurately. For example,
some sub-tasks in the decomposition can often be addressed by writing code functions. Therefore,
we explicitly instruct the LLM to consider sub-tasks that can be well handled by writing and exe-
cuting modular code in subtask decomposition.
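As an illustration only, a Stage-1 decomposition could be represented with a lightweight structure like the following; the field names (depends_on, symbolic) are hypothetical and simply mirror the sequential-dependency and symbolic-reasoning principles described above.

from dataclasses import dataclass, field

@dataclass
class SubTask:
    name: str
    description: str
    depends_on: list = field(default_factory=list)  # earlier sub-tasks whose outputs this one consumes
    symbolic: bool = False                           # True if best solved by writing and executing code

decomposition = [
    SubTask("parse_input", "Extract the key quantities and constraints from the query."),
    SubTask("compute", "Carry out the exact calculation on the parsed quantities.",
            depends_on=["parse_input"], symbolic=True),
    SubTask("conclude", "Interpret the computed result and state the final answer.",
            depends_on=["compute"]),
]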
4.2 ORCHESTRATING EXPERTISE: WORKFLOW DESIGN (STAGE 2)
With the problem decomposed into sub-tasks, our approach next proposes a team of specialized
experts, each contributing unique skills and tools, arranged in a dynamic workflow. The central component is a Meta-Expert, initialized from the foundation LLM, which designs the expert team and coordinates its efforts. The orchestration process consists of four steps.
1. Design of Experts. Based on the identified sub-tasks, the Meta-Expert designs a team of
specialized experts with one expert solving one sub-task. Each expert is assigned a unique
name and a clear description of their specific skills, knowledge, and responsibilities (our implementation leverages JSON for efficient data management and extraction across the system). The
dynamic workflow leverages two types of experts to handle each sub-task, enabling a seam-
less integration of verbal and symbolic reasoning. The first type are specialized experts
initiated from LLMs, such as linguists, mathematicians, and data scientists. These experts
bring domain-specific knowledge and skills to the workflow, allowing for sophisticated
verbal reasoning and analysis within their fields. The second type of expert focuses on
symbolic reasoning, particularly using programming or other symbolic engines (we mainly use the Python code interpreter in our experiments, but our approach can be extended to other symbolic engines, such as the symbolic deduction engines used in AlphaGeometry (Trinh et al., 2024) for Euclidean geometry problems). For ex-
ample, some sub-tasks can often be addressed by writing compact, targeted code functions.
This allows the LLM to handle common operations such as mathematical calculations, data
parsing and manipulation, and so on, without introducing errors.
2. Workflow Arrangement. The Meta-Expert arranges the experts into an efficient workflow
sequence. Each expert’s output serves as the input for the next, progressively moving
towards the final solution. The Meta-Expert ensures there is no redundancy of functions
across experts.
3. Collaboration and Iteration. As the experts work through the problem, the Meta-Expert
facilitates collaboration and puts together their inputs and outputs. For sub-tasks involving
logical reasoning, mathematical operations, data structures, or programming, the Meta-
Expert provides strategic guidance and sends the implementation details to the correspond-
ing symbolic reasoning experts. These experts utilize LLMs to generate code, which is
then executed to perform symbolic reasoning in Stage 3.
4. Final Review and Conclusion. The last expert in the workflow, often an LLM specialist,
is tasked with holistically reviewing the findings of the previous experts and generating the
final answer to the original problem.
By combining the power of specialized LLMs and the use of tools into a thoughtfully designed,
adaptable workflow, our approach can tackle complex problems that are beyond the capabilities of
the original model. The Meta-Expert serves as the intelligent connector, analyzing the unique needs
of each problem and dynamically assembling the optimal workflow. Our approach creates a bridge
between natural language reasoning and rule-governed symbolic reasoning.
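For illustration, the sketch below shows how the JSON name cards produced by the Meta-Expert (in the format shown in Appendix B) might be parsed and arranged into a sequential workflow; the regular-expression extraction is a simplification of the actual parsing.

import json
import re

# Free-form design text containing JSON name cards (format as in Appendix B).
experts_design_text = """
{"Name": "Morse Code Parsing Expert", "Expert_Type": "LLM",
 "Input_Type": "str", "Output_Type": "List[List[str]]"}
{"Name": "Python Expert of Translation", "Expert_Type": "Tool",
 "Input_Type": "List[List[str]], Dict[str, str]", "Output_Type": "List[str]"}
"""

# Extract each JSON card; in the real system these come from the Meta-Expert's output.
cards = [json.loads(m) for m in re.findall(r"\{.*?\}", experts_design_text, re.S)]

# Arrange the experts sequentially: each expert's output feeds the next one.
workflow = [(card["Name"], card["Expert_Type"]) for card in cards]
print(workflow)
# [('Morse Code Parsing Expert', 'LLM'), ('Python Expert of Translation', 'Tool')]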
4.3 FLOW EXECUTION: CONSTRUCTING AND RUNNING WORKFLOWS (STAGE 3)
With the workflow graph generated, our approach finally proceeds to execute the graph to get the
final result. The execution follows the dependency order, ensuring the correct flow of data between
experts. To ensure robust execution, if any of the generated code encounters errors, the correspond-
ing symbolic reasoning experts will trace the issue, use the error message to repair the code, and
rerun it. As the workflow progresses, the downstream experts continually update their memory with
the intermediate results and insights generated by previous experts. Upon completion of the work-
flow execution, the last LLM expert analyzes the results, identifies key findings, and summarizes
them into a final answer to the original problem. The workflow execution is not a one-time process.
The LLM continually assesses the quality and correctness of the final generated solutions and iden-
tifies potential errors. It engages in iterative rerun by applying a different problem decomposition,
expert assignments, or adjusting the workflow structure.
5 MODEL TUNING OF HYBRID THINKING
In our experiments, we observed that open-source language models (typically those with around
7B parameters) often struggle with advanced meta-planning and problem-solving skills required for
solving difficult reasoning tasks. To address this limitation and develop smaller local models with hybrid thinking abilities comparable to those of large models, we construct a comprehensive training
dataset and propose hybrid thinking tuning to improve the complex reasoning abilities of local mod-
els. We define “local” models as models that can be trained and deployed on local hardware with
limited computational resources, such as the Llama-3 model (Meta, 2024). Our goal is to improve
the complex reasoning abilities of these local models through our proposed approach.
The primary challenge lies in constructing a large-scale dataset of reasoning problems that are suffi-
ciently diverse, high-quality, and difficult. Such a dataset is crucial for teaching smaller local models
to perform complex reasoning tasks. However, manually curating such a dataset presents significant
difficulties in ensuring a wide range of problem domains and maintaining high standards in problem
formulation. As a result, it is extremely time-consuming and expensive to ask human experts to
Figure 3: Data Synthesis of Complex Reasoning Problems. The creation and refinement of reasoning
problems contain three steps. Step 1 involves brainstorming and generating high-level descriptions
of new reasoning tasks, either inspired by human-written tasks or directly writing puzzle tasks. Step
1 produces 45K descriptions of reasoning tasks. Step 2 performs semantic matching and deduplica-
tion and results in 18K reasoning task descriptions. The final Step 3 writes concrete questions based
on task descriptions and applies a CoT validation process to filter or refine the tasks down to 27k
valid reasoning problems.
**Interpret a Morse Code Message**: Given a string of Morse code, translate it into English text,
adhering to standard Morse code conventions. The task involves recognizing each sequence of dots
(.) and dashes (-) as letters and spaces as separators for words.
A Morse code sequence has been found etched into an old artifact. It is believed to be a significant
mathematical formula. The Morse code is: ‘-. .. -. . - -.-- / - .... .-. . . / - .. -- . ... / ... . ...- . -. - -.-- / ..-. .. ...- . / . --.- ..- .- .-.. ... / --- -. . / .... ..- -. -.. .-. . -.. / .- -. -.. / - .-- . -. - -.-- / - .... .-. . .‘.
Decode this Morse code into English text, adhering to the standard Morse code conventions where
sequences of dots (.) and dashes (-) represent letters, and spaces are used to separate words.
**Cryptarithm Task: Solve the Equation**: In this cryptarithm, each letter represents a unique digit
from 0-9: **CROSS + ROADS = DANGER** No number may begin with zero. Determine the
digit each letter represents to satisfy the equation.
In a game of spies, two teams use different substitution ciphers to communicate. Team A uses a
cipher where each letter is replaced by the letter three positions to the right in the alphabet (with
wrapping), while Team B uses a cipher where each letter is replaced by the letter four positions
to the left (with wrapping). During the game, a message encrypted using Team B’s cipher was
intercepted: “XLMW MW XLI GIRXVI.” Decode this message assuming it was meant for Team A
but encrypted by Team B.
Figure 4: Three example reasoning problems generated by our data synthesis approach.
consistently generate problems meeting all criteria. Therefore, we propose a novel approach for automatically generating a variety of reasoning problems and collecting hybrid thinking solutions, which can then be used to train our local LLMs.
5.1 REASONING PROBLEMS SYNTHESIS
To enhance reasoning task diversity and coverage, our data synthesis pipeline consists of three steps (Figure 3). In the first step, we strategically leverage human-authored seed tasks to inspire the creation of new reasoning problems (similar to Self-Instruct (Wang et al., 2023)) or let the LLM
brainstorm reasoning puzzles that cover a variety of task formats, difficulty levels, and problem
domains. This step only focuses on generating high-level task descriptions to encourage diversity.
In the second step, we apply deduplication to remove near-identical tasks. Finally, we apply LLMs
again to write three specific problems based on the task descriptions and validate those problems.
Task Generation Inspired by Seed Tasks. The first step of our reasoning data synthesis pipeline is
generating an expanded set of reasoning tasks. We augment the few-shot prompts with 10 high-level
task descriptions randomly sampled from the 214 BigBench tasks (Srivastava et al., 2022). Next,
we employ the 10 seed tasks as in-context examples to prompt LLMs to generate 10 new tasks inspired by the seed tasks. We use both GPT-4-0125 and Claude-3-Opus as generators to encourage diversity and find that they produce very different reasoning tasks from each other. To encourage additional diversity in the generated tasks, we also let the LLM brainstorm different genres of puzzles, such as crossword puzzles, math puzzles, number puzzles, relational puzzles, logic puzzles, etc. By repeating these two strategies, we produce an expanded pool of
45K candidate reasoning tasks that creatively cover diverse reasoning types and scenarios.
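The following sketch illustrates how such a seed-inspired generation prompt could be assembled; call_llm and bigbench_task_descriptions are placeholders, and the exact prompt wording used in our pipeline differs.

import random

def build_task_generation_prompt(seed_task_descriptions, k=10, n_new=10):
    # Sample k seed task descriptions and ask for n_new new, inspired ones.
    seeds = random.sample(seed_task_descriptions, k)
    numbered = "\n".join(f"{i + 1}. {t}" for i, t in enumerate(seeds))
    return (
        "Here are some existing reasoning task descriptions:\n"
        f"{numbered}\n\n"
        f"Inspired by these, write {n_new} new high-level reasoning task descriptions "
        "that differ in format, difficulty, and domain. One description per line."
    )

# Example usage (placeholders):
# prompt = build_task_generation_prompt(bigbench_task_descriptions)
# new_tasks = call_llm(prompt).splitlines()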
Data Filtering and Deduplication. The previous task generation step produces a sizable pool
of candidate reasoning tasks. However, the generated data is likely to contain duplicate or highly
similar entries. To address this, we employ a comprehensive data filtering and deduplication process.
First, we apply n-gram matching to identify nearly identical tasks. Next, by prompting GPT-4-Turbo, we filter out any tasks or problems that fail to meet our quality criteria, such as insufficient complexity (e.g., trivial one-step questions) or ambiguity in the description. This helps ensure that only high-quality,
unambiguous reasoning tasks are retained in the final dataset. Through this rigorous deduplication
and filtering process, we condense the pool of 45K generated tasks down to 18K deduplicated tasks.
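As a rough sketch of the n-gram deduplication step, one could compare task descriptions by n-gram Jaccard overlap as below; the choice of n = 3 and the 0.7 threshold are illustrative assumptions, not the exact settings used in our pipeline.

def ngrams(text, n=3):
    tokens = text.lower().split()
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def deduplicate(task_descriptions, n=3, jaccard_threshold=0.7):
    kept, kept_grams = [], []
    for task in task_descriptions:
        grams = ngrams(task, n)
        is_duplicate = any(
            grams and g and len(grams & g) / len(grams | g) >= jaccard_threshold
            for g in kept_grams
        )
        if not is_duplicate:
            kept.append(task)
            kept_grams.append(grams)
    return kept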
Reasoning Problem Synthesis. In the last step, we aim to synthesize multiple concrete reasoning
problems for each of the 18K tasks produced by the previous task generation and deduplication
steps. Taking each task’s description as input, we prompt an LLM to generate 3 distinct questions or
problems that test the specified reasoning skill. This enables us to turn each high-level task into a set
of actual solvable questions, resulting in a pool of 54k reasoning problems. To ensure the generated
problems are well-posed and solvable, we employ a chain-of-thought (CoT) based validation step.
We prompt GPT-4-Turbo to apply CoT to each synthesized problem and analyze if the resulting
reasoning steps coherently lead to a definite answer. Problems for which the model fails to converge
to a clear solution or exhibits inconsistent reasoning are filtered out. This results in the final 27K
reasoning problems. Figure 4 provides three examples of reasoning problems generated.
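A simplified view of the CoT-based validation filter is sketched below; cot_solve is a hypothetical wrapper around GPT-4-Turbo that returns a final answer (or None when the reasoning fails to converge) together with a consistency judgment.

def validate_problems(problems, cot_solve):
    valid = []
    for problem in problems:
        answer, reasoning_is_consistent = cot_solve(problem)  # hypothetical GPT-4-Turbo CoT call
        # Keep only problems where CoT converges to a definite answer with coherent steps.
        if answer is not None and reasoning_is_consistent:
            valid.append({"problem": problem, "reference_answer": answer})
    return valid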
5.2 FINETUNING OPEN-SOURCE MODELS ON SYNTHESIZED DATA
To prepare the training data for enhancing the open-source models’ complex problem-solving abili-
ties, we utilize the GPT-4-turbo model to collect reasoning trajectories on the dataset of synthesized
and mathematical problems. For each problem, GPT-4-turbo generates one or several fast/slow
reasoning trajectories using the hybrid thinking approach. Each reasoning trajectory consists of
a sequence of (query, answer) pairs representing the model’s step-wise hybrid thinking process.
Therefore, we use all (query, answer) pairs from the reasoning trajectories to construct the train-
ing data, capturing the complete problem-solving process. When multiple reasoning trajectories are
produced (iterative retry), only the solution trajectory that passes the verification process is retained
in the training set to optimize the model’s problem-solving capabilities, while the verification results
for all trajectories are kept to enhance the model’s self-verification abilities.
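The sketch below illustrates how the training examples could be assembled from the collected trajectories under these rules; the trajectory field names are illustrative, not the actual data schema.

def build_training_examples(trajectories):
    examples = []
    for traj in trajectories:
        # Keep the full solution trajectory only if its final solution passed verification...
        if traj["passed_verification"]:
            examples.extend({"input": q, "target": a} for q, a in traj["steps"])
        # ...but keep the verification turns from every trajectory, so the model also
        # learns to judge its own reasoning (self-verification).
        examples.extend({"input": q, "target": a} for q, a in traj.get("verification_steps", []))
    return examples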
The Llama-3 models have demonstrated superior performance compared to other models of similar
size due to significant enhancements in both pretraining and post-training (Meta, 2024). Therefore,
we choose the Llama-3-8B-Instruct model as the foundation model for our hybrid thinking tuning
experiments. Specifically, the Llama-3-8B-Instruct model was fine-tuned on 8 A100 GPUs with bf16 precision (we adopt LitGPT (AI, 2023) for model training). The training used a global batch size of 128 over 4 epochs, the AdamW optimizer with a learning rate of 2.0e-5, a maximum sequence length of 4096 tokens, and a maximum of 2048 newly generated tokens.
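For reference, the reported training settings can be summarized as a configuration dictionary; this is only a compact restatement of the hyperparameters above, not LitGPT's actual configuration schema, and the model identifier is an assumption.

finetune_config = {
    "base_model": "meta-llama/Meta-Llama-3-8B-Instruct",  # assumed model identifier
    "precision": "bf16",
    "num_gpus": 8,                 # A100 GPUs
    "global_batch_size": 128,
    "epochs": 4,
    "optimizer": "AdamW",
    "learning_rate": 2.0e-5,
    "max_seq_length": 4096,
    "max_new_tokens": 2048,        # cap on newly generated tokens
}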
6 EXPERIMENT
6.1 REASONING BENCHMARK DATASETS
BIG-Bench Hard (BBH) (Suzgun et al., 2022): A subset of 27 challenging tasks from the BIG-
Bench benchmark (Srivastava et al., 2022), which aims to measure the capabilities and limitations
of language models across diverse text-based tasks. MATH (Hendrycks et al., 2021): A dataset
consisting of 5,000 test problems from mathematics competitions. These problems assess the math-
ematical problem-solving ability and often require the application of problem-solving techniques
Methods           | BBH          | MATH         | DeepMind Math | GameOf24     | Avg.
CoT (Fast Think.) | 77.8         | 62.6         | 53.4          | 9.3          | 50.8
Slow Think.       | 87.1 (+9.3)  | 67.6 (+4.6)  | 67.7 (+14.3)  | 70.3 (+61.0) | 73.2 (+22.4)
Hybrid Think.     | 87.8 (+10.0) | 70.0 (+7.9)  | 59.6 (+6.2)   | 72.0 (+62.7) | 72.4 (+21.6)
Table 1: Accuracy (%) of GPT-4-Turbo-0125 across different reasoning modes on various datasets.
We show the accuracy of the model using Chain of Thought (CoT) vs. slow thinking (with dynamic
workflow) and Hybrid Thinking approaches proposed by us. The Fast/Slow indicates the ratio of
Fast and Slow Thinking contributions in the Hybrid approach. Results are derived from the top 100
instances for each sub-category in BBH (27 sub-tasks), MATH (7 sub-domains), and GameOf24 (3
difficulty levels) to reduce API cost and ensure replicability. For the DeepMind Math dataset, the
top 10 instances from each of the 56 sub-domains were used.
Methods           | BBH  | MATH | DeepMind Math | GameOf24 | Avg. Tokens
CoT (Fast Think.) | 351  | 992  | 581           | 387      | 577.8
Slow Think.       | 3227 | 5694 | 3562          | 5246     | 4432.0
Hybrid Think.     | 1299 | 4398 | 1742          | 4983     | 3105.5
Table 2: Average number of inference tokens of GPT-4-Turbo-0125 using different reasoning modes
on various datasets. Performance is reported in Table 1.
and heuristics beyond standard K-12 mathematics tools. Game of 24 (Yao et al., 2024): A math-
ematical reasoning challenge dataset containing 1,362 games sorted by human solving time. The
goal is to use four given numbers and basic arithmetic operations (+ - * /) to obtain 24. DeepMind
Math (Saxton et al., 2019): A dataset consisting of various types of mathematics questions, released
with both generation code and pre-generated questions. This dataset provides an additional measure
of algebraic generalization abilities.
6.2 RESULTS BASED ON PROMPTING
We first conduct experiments by prompting GPT-4-Turbo-0125 (https://platform.openai.com/docs/models; the full list of prompts is in Appendix C) to realize three reasoning modes:
Chain of Thought (CoT), Slow Thinking with Dynamic Workflow, and Hybrid Thinking across four
benchmark datasets. Table 1 shows that slow thinking with dynamic workflow significantly out-
performs CoT by 22.4% on average across four benchmarks. It also reveals that Hybrid Thinking
achieves the best accuracy on three datasets BBH, MATH and GameOf24. Notably, both Slow
Thinking and Hybrid Thinking consistently outperform CoT across all datasets, with the most dra-
matic improvements seen in GameOf24, where gains are 61.0% and 62.7% respectively.
Table 2 illustrates the average number of inference tokens used by each method. CoT consistently
used the fewest tokens (average 577.8), while Slow Thinking required the most (4432.0 on average).
Hybrid Thinking struck a balance with an average of 3105.5 tokens. A clear trade-off emerged be-
tween computational efficiency and performance, with CoT using the fewest tokens but achieving
the lowest accuracy. Hybrid Thinking demonstrated a good balance, achieving high accuracy with
moderate token usage. These findings suggest that incorporating dynamic workflows and combin-
ing fast and slow thinking processes can enhance the reasoning capabilities of LLMs, with Hybrid
Thinking emerging as a particularly promising approach.
6.3 RESULTS OF HYBRID THINKING TUNING
We next compare the performance of the original Llama-3-8B-Instruct model and the model after our
hybrid thinking tuning. As shown in Table 3, the Llama-3-8B-Instruct model after hybrid thinking
tuning significantly outperforms the baseline model on all datasets. Examining the different thinking
modes, hybrid thinking consistently provided the best tradeoff between performance and efficiency.
Compared to the CoT baseline, hybrid thinking improved accuracy by 10.6%, 10.2%, 23.1% and
Methods           | BBH          | MATH         | DeepMind Math | GameOf24     | Avg.
Llama-3-8B-Instruct (Original)
CoT               | 51.7         | 30.0         | 18.6          | 2.7          | 25.8
Llama-3-8B-Instruct (After Hybrid Thinking Tuning)
CoT (Fast Think.) | 58.5 (+6.8)  | 37.0 (+7.0)  | 34.2 (+15.6)  | 5.1 (+2.4)   | 33.7 (+7.9)
Slow Think.       | 61.2 (+9.5)  | 37.8 (+7.8)  | 48.8 (+30.2)  | 15.4 (+12.7) | 40.8 (+15.0)
Hybrid Think.     | 62.3 (+10.6) | 40.2 (+10.2) | 41.7 (+23.1)  | 16.0 (+13.3) | 40.5 (+14.7)
Table 3: Performance comparison of the original Llama-3-8B-Instruct model and the Llama-3-8B-
Instruct after our hybrid thinking tuning. We show the accuracy (%) of the model using Chain
of Thought (CoT) v.s. slow thinking (with dynamic workflow) and Hybrid Thinking approaches
proposed by us. The Fast/Slow indicates the ratio of Fast and Slow Thinking contributions in the
Hybrid approach. Results are derived from all test instances in BBH, MATH, DeepMind Math and
GameOf24.
Methods           | BBH  | MATH | DeepMind Math | GameOf24 | Avg. Tokens
Llama-3-8B-Instruct (Original)
CoT               | 356  | 496  | 359           | 510      | 430.2
Llama-3-8B-Instruct (After Hybrid Thinking Tuning)
CoT (Fast Think.) | 720  | 985  | 770           | 1384     | 964.7
Slow Think.       | 3901 | 5743 | 4395          | 6714     | 5188.2
Hybrid Think.     | 2521 | 4414 | 2577          | 6371     | 3970.7
Table 4: Average number of inference tokens of the original Llama-3-8B-Instruct model and the
Llama-3-8B-Instruct after our hybrid thinking tuning on various datasets. Performance is reported
in Table 3.
Figure 5: Proportion of fast thinking (CoT) and slow thinking (dynamic workflow) applied in hybrid
thinking across four datasets. The left is GPT-4-Turbo (performance is shown in Table 1), while the
right is Llama-3-8B-Instruct after our hybrid thinking tuning (Table 3).
13.3% on the BBH, MATH, DeepMind Math and GameOf24 datasets respectively. Interestingly,
we also observe that hybrid thinking tuning enhances Llama-3’s fast thinking (CoT) performance
across all reasoning tasks at the cost of increased model inference tokens.
Table 5 breaks down performance on the MATH dataset into specific subject areas. Again, the
Llama-3-8B-Instruct model after hybrid thinking tuning outperforms the original model on all sub-
sets, with gains ranging from 8% on Intermediate Algebra to 23% on Number Theory. Hybrid
thinking yielded the highest accuracy in each domain, demonstrating its broad applicability.
MATH Subsets      | Llama-3-8B-Ins. CoT | CoT (Fast Think.) | Slow Think. | Hybrid Think. | Fast/Slow
(the last four columns report Llama-3-8B-Ins. after hybrid thinking tuning)
Prealgebra        | 43.2% | 58.9% | 59.7% | 63.3% | 0.69/0.31
Algebra           | 30.2% | 53.6% | 52.7% | 56.1% | 0.68/0.32
Number Theory     | 15.0% | 31.1% | 37.6% | 38.0% | 0.52/0.48
Count. and Prob.  | 21.1% | 32.5% | 34.2% | 35.9% | 0.48/0.52
Geometry          | 13.4% | 24.8% | 23.6% | 26.3% | 0.33/0.67
Precalculus       | 12.5% | 22.0% | 21.8% | 24.5% | 0.35/0.65
Inter. Algebra    |  9.1% | 15.6% | 16.3% | 17.3% | 0.30/0.70
Table 5: Accuracy comparison of the original Llama-3-8B-Instruct model and the Llama-3-8B-
Instruct after our hybrid thinking tuning on different domains of the MATH dataset. “Count. and
Prob.” and “Inter. Algebra” stand for “Counting and Probability” and “Intermediate Algebra”.
6.4 FAST/SLOW ROUTING ANALYSIS
Figure 5 illustrates the proportion of fast thinking and slow thinking approaches applied by
both models when solving complex problems across the datasets. The GPT-4-Turbo model demon-
strates a higher reliance on fast thinking for BBH, DeepMind MATH, and Game of 24 tasks com-
pared with Llama-3-8B-Instruct model. This observation can be attributed to the fact that GPT-4-
Turbo’s fast thinking (in the form of CoT) is more reliable and effective compared to Llama-3-8B-
Instruct. As a result, hybrid thinking in GPT-4-Turbo tends to apply more fast thinking since it is
sufficient to achieve a correct solution in many cases. In contrast, Llama-3-8B-Instruct after tun-
ing exhibits a greater reliance on slow thinking strategies, particularly in complex tasks, where fast
thinking alone may not yield the desired results. This highlights the importance of hybrid thinking to
improve problem-solving efficiency, suggesting that our method can dynamically adjust the optimal
balance between fast and slow thinking based on the model’s downstream reasoning capabilities.
In summary, the dynamic combination of fast and slow thinking modes greatly enhanced the model’s
problem-solving capabilities. Our results showcase the potential of hybrid thinking approaches to
expand the frontier of what LLMs can achieve on challenging tasks.
7 DISCUSSION AND FUTURE WORK
Limitations and Potential Improvements. One promising direction is to incorporate a value net-
work that scores the successfulness or quality of completing each sub-task within the dynamic work-
flow. By integrating such a value network, we can formulate the problem-solving process as a rein-
forcement learning task, enabling the optimization and search for the best solution trajectory. This
enhancement could lead to more efficient and effective problem-solving strategies, as the model
learns to prioritize and select the most promising decompositions and workflows based on predicted
values.
Generalization to Other Reasoning Tasks. Constructing high-quality and sufficiently challeng-
ing reasoning problems for training still remains a significant challenge. While our data synthesis
approach offers a scalable solution, ensuring the validity and difficulty of each generated reasoning
problem is crucial for effective model development. One potential improvement is to involve hu-
man experts in the data synthesis process, allowing them to verify, modify, and curate the generated
problems.
Integration with Symbolic Reasoning Systems. Our dynamic workflow approach seamlessly inte-
grates specialized language models and symbolic reasoning tools, enabling LLMs to tackle complex
problems more effectively. However, there is significant potential to extend this integration to more
advanced symbolic reasoning systems, such as Lean (https://lean-lang.org/) for mathematical theorem proving or other
domain-specific tools. Moreover, integrating our approach with tools such as search engines and
web browsers could enable LLMs to access and utilize external resources, further amplifying their
problem-solving abilities to broader applications. By incorporating more powerful tools into the
dynamic workflow, we can expand the range of problems that LLMs can solve.
8 CONCLUSION
This paper introduces a novel framework HDFlow for enhancing the complex problem-solving
capabilities of LLMs through hybrid thinking and dynamic workflows. The dynamic workflow
mechanism enables LLMs to decompose complex problems into manageable sub-tasks and inte-
grate specialized language models and symbolic reasoning tools, while hybrid thinking strategically
engages deeper, multi-step reasoning for challenging problems that exceed the capabilities of fast
thinking alone. Extensive experiments demonstrate the significant advantages of our approach, with
slow thinking with dynamic workflow greatly outperforming CoT and hybrid thinking achieving the
highest overall accuracy by balancing efficiency and performance.
REFERENCES
Lightning AI. Litgpt. https://github.com/Lightning-AI/litgpt, 2023.
Google DeepMind AlphaCode Team. AlphaCode 2 technical report. URL https://storage.googleapis.com/deepmind-media/AlphaCode2/AlphaCode2_Tech_Report.pdf.
Google DeepMind AlphaProof/AlphaGeometry teams. AI achieves silver-medal standard solving International Mathematical Olympiad problems. URL https://deepmind.google/discover/blog/ai-solves-imo-problems-at-silver-medal-level/.
Maciej Besta, Nils Blach, Ales Kubicek, Robert Gerstenberger, Michal Podstawski, Lukas Gian-
inazzi, Joanna Gajda, Tomasz Lehmann, Hubert Niewiadomski, Piotr Nyczyk, et al. Graph of
thoughts: Solving elaborate problems with large language models. In Proceedings of the AAAI
Conference on Artificial Intelligence, volume 38, pp. 17682–17690, 2024a.
Maciej Besta, Florim Memedi, Zhenyu Zhang, Robert Gerstenberger, Nils Blach, Piotr Nyczyk,
Marcin Copik, Grzegorz Kwa´sniewski, J¨urgen M¨uller, Lukas Gianinazzi, et al. Topologies of
reasoning: Demystifying chains, trees, and graphs of thoughts. arXiv preprint arXiv:2401.14295,
2024b.
Wenhu Chen, Xueguang Ma, Xinyi Wang, and William W Cohen. Program of thoughts prompt-
ing: Disentangling computation from reasoning for numerical reasoning tasks. arXiv preprint
arXiv:2211.12588, 2022.
Daniel Kahneman. Thinking, Fast and Slow. 2017.
Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, Dawn Song,
and Jacob Steinhardt. Measuring mathematical problem solving with the math dataset. NeurIPS,
2021.
Sirui Hong, Mingchen Zhuge, Jonathan Chen, Xiawu Zheng, Yuheng Cheng, Jinlin Wang, Ceyao
Zhang, Zili Wang, Steven Ka Shing Yau, Zijuan Lin, Liyang Zhou, Chenyu Ran, Lingfeng Xiao,
Chenglin Wu, and J¨urgen Schmidhuber. MetaGPT: Meta programming for a multi-agent collab-
orative framework. In The Twelfth International Conference on Learning Representations, 2024.
URL https://openreview.net/forum?id=VtmBAGCN7o.
Chengshu Li, Jacky Liang, Andy Zeng, Xinyun Chen, Karol Hausman, Dorsa Sadigh, Sergey
Levine, Li Fei-Fei, Fei Xia, and Brian Ichter. Chain of code: Reasoning with a language model-
augmented code emulator. In Ruslan Salakhutdinov, Zico Kolter, Katherine Heller, Adrian Weller,
Nuria Oliver, Jonathan Scarlett, and Felix Berkenkamp (eds.), Proceedings of the 41st Interna-
tional Conference on Machine Learning, volume 235 of Proceedings of Machine Learning Re-
search, pp. 28259–28277. PMLR, 21–27 Jul 2024. URL https://proceedings.mlr.
press/v235/li24ar.html.
Guohao Li, Hasan Hammoud, Hani Itani, Dmitrii Khizbullin, and Bernard Ghanem. Camel: Com-
municative agents for ”mind” exploration of large language model society. Advances in Neural
Information Processing Systems, 36:51991–52008, 2023.
Yujia Li, David Choi, Junyoung Chung, Nate Kushman, Julian Schrittwieser, R´emi Leblond, Tom
Eccles, James Keeling, Felix Gimeno, Agustin Dal Lago, et al. Competition-level code generation
with alphacode. Science, 378(6624):1092–1097, 2022.
Meta AI. Introducing Meta Llama 3: The most capable openly available LLM to date. Meta AI, 2024.
OpenAI. Learning to reason with LLMs. URL https://openai.com/index/learning-to-reason-with-llms/.
Tal Ridnik, Dedy Kredo, and Itamar Friedman. Code generation with alphacodium: From prompt
engineering to flow engineering. arXiv preprint arXiv:2401.08500, 2024.
Bernardino Romera-Paredes, Mohammadamin Barekatain, Alexander Novikov, Matej Balog,
M Pawan Kumar, Emilien Dupont, Francisco JR Ruiz, Jordan S Ellenberg, Pengming Wang,
Omar Fawzi, et al. Mathematical discoveries from program search with large language models.
Nature, 625(7995):468–475, 2024.
David Saxton, Edward Grefenstette, Felix Hill, and Pushmeet Kohli. Analysing mathematical rea-
soning abilities of neural models. arXiv preprint arXiv:1904.01557, 2019.
Noah Shinn, Federico Cassano, Ashwin Gopinath, Karthik Narasimhan, and Shunyu Yao. Reflexion:
Language agents with verbal reinforcement learning. Advances in Neural Information Processing
Systems, 36, 2024.
Significant Gravitas. AutoGPT. URL https://github.com/Significant-Gravitas/AutoGPT.
Aarohi Srivastava, Abhinav Rastogi, Abhishek Rao, Abu Awal Md Shoeb, Abubakar Abid, Adam
Fisch, Adam R Brown, Adam Santoro, Aditya Gupta, Adri`a Garriga-Alonso, et al. Beyond the
imitation game: Quantifying and extrapolating the capabilities of language models. arXiv preprint
arXiv:2206.04615, 2022.
Mirac Suzgun, Nathan Scales, Nathanael Sch¨arli, Sebastian Gehrmann, Yi Tay, Hyung Won Chung,
Aakanksha Chowdhery, Quoc V Le, Ed H Chi, Denny Zhou, , and Jason Wei. Challenging big-
bench tasks and whether chain-of-thought can solve them. arXiv preprint arXiv:2210.09261,
2022.
Trieu H Trinh, Yuhuai Wu, Quoc V Le, He He, and Thang Luong. Solving olympiad geometry
without human demonstrations. Nature, 625(7995):476–482, 2024.
Yizhong Wang, Yeganeh Kordi, Swaroop Mishra, Alisa Liu, Noah A Smith, Daniel Khashabi, and
Hannaneh Hajishirzi. Self-instruct: Aligning language models with self-generated instructions. In
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume
1: Long Papers), pp. 13484–13508, 2023.
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny
Zhou, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in
neural information processing systems, 35:24824–24837, 2022.
Qingyun Wu, Gagan Bansal, Jieyu Zhang, Yiran Wu, Beibin Li, Erkang Zhu, Li Jiang, Xiaoyun
Zhang, Shaokun Zhang, Jiale Liu, Ahmed Hassan Awadallah, Ryen W White, Doug Burger, and
Chi Wang. Autogen: Enabling next-gen llm applications via multi-agent conversation framework.
2023.
Yiran Wu, Tianwei Yue, Shaokun Zhang, Chi Wang, and Qingyun Wu. Stateflow: Enhancing llm
task-solving through state-driven workflows. arXiv preprint arXiv:2403.11322, 2024.
Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak Shafran, Karthik Narasimhan, and Yuan Cao.
React: Synergizing reasoning and acting in language models. arXiv preprint arXiv:2210.03629,
2022.
Shunyu Yao, Dian Yu, Jeffrey Zhao, Izhak Shafran, Tom Griffiths, Yuan Cao, and Karthik
Narasimhan. Tree of thoughts: Deliberate problem solving with large language models. Ad-
vances in Neural Information Processing Systems, 36, 2024.
Pei Zhou, Jay Pujara, Xiang Ren, Xinyun Chen, Heng-Tze Cheng, Quoc V Le, Ed H Chi, Denny
Zhou, Swaroop Mishra, and Huaixiu Steven Zheng. Self-discover: Large language models self-
compose reasoning structures. arXiv preprint arXiv:2402.03620, 2024.
A APPENDIX
B DYNAMIC WORKFLOW EXAMPLE SOLUTION
In this section, we demonstrate how our dynamic workflow approach works by presenting an ex-
ample solution to a reasoning problem. We also present the solution provided by the OpenAI o1-
preview model for comparison with our approach. We can see that the solution produced by our approach contains many sub-goals (sub-tasks), similar to the o1-preview solution. Our approach also leverages a symbolic tool (Python) to enhance the accuracy of the final answer. The reasoning
problem is generated using our data synthesis method.
B.1 REASONING PROBLEM:
An encrypted Morse code message contains a list of three items that were
crucial to a historical event. The Morse code is: ‘- . .- / -.-. ---
..-. ..-. . . / ... ..- --. .- .-.‘. Decode this Morse code message
into English text, keeping in mind the standard Morse code
conventions where each sequence of dots (.) and dashes (-)
corresponds to a letter, and spaces separate words.
B.2 SOLUTION USING DYNAMIC WORKFLOW:
Stage 1. Problem Reflection
===== Meta-Expert@REFLECTION =====
### Problem Reflection start ###
**Task Description**: Decode a given Morse code message into English text
.
**Elements of the Task**:
1. The message is encoded in Morse code, which uses a series of dots (.)
and dashes (-) to represent letters and spaces to separate words.
2. Decoding involves translating each Morse code sequence into its
corresponding English letter or symbol.
3. The Morse code provided is a list of three items related to a
historical event, which implies the output should be meaningful and
correctly formatted.
**Sub-tasks Identification**:
1. **Morse Code Dictionary Creation**: Establish a dictionary in Python
where keys are Morse code sequences (dots and dashes) and values are
corresponding English letters. This will facilitate the translation
process.
2. **Parsing the Morse Code**: Split the input Morse code string into
individual Morse code sequences for letters and words. This involves
handling spaces correctly to differentiate between letters and word
boundaries.
3. **Translation Process**: For each Morse code sequence obtained from
the parsing step, use the Morse code dictionary to translate it into
the corresponding English letter.
4. **Reconstruction of Message**: Combine the translated letters into
words and then into the complete message. This step ensures that the
output is a readable English sentence.
5. **Output Formatting**: Format the final output to ensure it is clear
and presentable, adhering to proper English syntax and spacing.
### Problem Reflection end ###
Stage 2. Workflow Design
===== Meta-Expert@EXPERTS_DESIGN =====
### Specialized Experts Design start ###
**Morse Code Dictionary Creation Expert**: Specializes in creating a
comprehensive dictionary that maps Morse code symbols to English
letters. This expert will ensure that all Morse code sequences are
accurately represented with their corresponding English characters,
which is crucial for the decoding process. Expert card (in JSON
format): {"Name": "Morse Code Dictionary Creation Expert", "
Expert_Type": "LLM", "Input_Type": "None", "Output_Type": "Dict[str,
str]"}
**Morse Code Parsing Expert**: Skilled in breaking down the Morse code
string into individual Morse sequences for letters and words. This
expert will handle the input Morse code string, correctly identifying
the separation between letters and words, which is essential for
accurate translation. Expert card (in JSON format): {"Name": "Morse
Code Parsing Expert", "Expert_Type": "LLM", "Input_Type": "str", "
Output_Type": "List[List[str]]"}
**Python Expert of Translation**: Utilizes Python to map Morse code
sequences to English letters using the dictionary provided by the
Morse Code Dictionary Creation Expert. This expert will translate
each Morse code sequence into the corresponding English letter,
ensuring that the translation is accurate and adheres to the Morse
code standards. Expert card (in JSON format): {"Name": "Python Expert
of Translation", "Expert_Type": "Tool", "Input_Type": "List[List[str
]], Dict[str, str]", "Output_Type": "List[str]"}
**Message Reconstruction Expert**: Specializes in reconstructing the
translated English letters into coherent words and sentences. This
expert ensures that the translated letters are correctly combined to
form meaningful words and that these words are arranged into a
readable and grammatically correct sentence. Expert card (in JSON
format): {"Name": "Message Reconstruction Expert", "Expert_Type": "
LLM", "Input_Type": "List[str]", "Output_Type": "str"}
**Final Review and Presentation Expert**: Responsible for reviewing the
reconstructed message to ensure it meets the clarity and accuracy
standards. This expert will also format the final message for
presentation, ensuring that it is clear, concise, and ready for
delivery as the solution to the problem. Expert card (in JSON format)
: {"Name": "Final Review and Presentation Expert", "Expert_Type": "
LLM", "Input_Type": "str", "Output_Type": "str"}
### Specialized Experts Design end ###
Stage 3. Graph Construction and Execution
===== Morse Code Dictionary Creation Expert =====
### Approach Explanation Start ###
To solve the assigned sub-task of creating a comprehensive Morse code
dictionary, I will map each Morse code symbol (comprising dots ‘.‘
and dashes ‘-‘) to its corresponding English letter. This dictionary
is essential for the subsequent decoding process, as it allows for
the translation of Morse code sequences into readable English text.
**Steps to Create the Morse Code Dictionary**:
1. **Identify Morse Code Symbols**: List all Morse code symbols for
English alphabet letters (A-Z) and numerals (0-9) if necessary.
2. **Map to English Letters**: Each Morse code symbol will be associated
with its corresponding English letter. This mapping is based on the
standard Morse code conventions.
3. **Dictionary Format**: The dictionary will be structured in Python
dictionary format, where keys are Morse code symbols and values are
the corresponding English letters.
This dictionary will be used by the next expert in the workflow, the
Morse Code Parsing Expert, who will parse the Morse code message into
individual symbols using this dictionary for translation.
### Approach Explanation End ###
### My Final Output Start ###
{
"A": ".-",
"B": "-...",
"C": "-.-.",
"D": "-..",
"E": ".",
"F": "..-.",
"G": "--.",
"H": "....",
"I": "..",
"J": ".---",
"K": "-.-",
"L": ".-..",
"M": "--",
"N": "-.",
"O": "---",
"P": ".--.",
"Q": "--.-",
"R": ".-.",
"S": "...",
"T": "-",
"U": "..-",
"V": "...-",
"W": ".--",
"X": "-..-",
"Y": "-.--",
"Z": "--..",
"1": ".----",
"2": "..---",
"3": "...--",
"4": "....-",
"5": ".....",
"6": "-....",
"7": "--...",
"8": "---..",
"9": "----.",
"0": "-----"
}
### My Final Output End ###
===== Morse Code Parsing Expert =====
### Approach Explanation Start ###
To solve the task of parsing the Morse code string into individual Morse
sequences for letters and words, I will follow these steps:
1. **Understanding Morse Code Structure**: Morse code uses dots (.) and
dashes (-) to represent letters, and spaces are used to separate
different Morse sequences. A single space indicates a separation
between letters within the same word, and a sequence of three spaces
(represented as " / ") indicates a separation between words.
2. **Splitting the Input String**: The input Morse code string will be
split based on the " / " sequence to differentiate between words.
This results in a list where each element is a Morse code sequence
representing a word.
3. **Further Splitting for Letters**: Each word obtained from the
previous step will be further split using a single space as the
delimiter. This will break down the word into individual Morse code
sequences, each representing a letter.
4. **Constructing the Output**: The final output will be a list of lists,
where each sublist represents a word and contains Morse code
sequences for each letter in that word.
This structured approach ensures that the Morse code is accurately parsed
into its constituent letters and words, preparing it for translation
in the subsequent steps of the workflow.
### Approach Explanation End ###
### My Final Output Start ###
[['- . .-', '-.-. --- ..-. ..-. . .', '... ..- --. .- .-.']]
### My Final Output End ###
===== Python Expert of Translation =====
### Python Code Start ###
def translate_morse_code(morse_words, morse_dict):
    translated_words = []
    for word in morse_words:
        translated_word = []
        for letter in word.split():
            # Using next() with a default value to avoid StopIteration
            translated_letter = next(
                (key for key, value in morse_dict.items() if value == letter), '?')
            translated_word.append(translated_letter)
        translated_words.append(''.join(translated_word))
    return translated_words

morse_words = [['- . .-', '-.-. --- ..-. ..-. . .', '... ..- --. .- .-.']]
morse_dict = {
    "A": ".-", "B": "-...", "C": "-.-.", "D": "-..", "E": ".",
    "F": "..-.", "G": "--.", "H": "....", "I": "..", "J": ".---",
    "K": "-.-", "L": ".-..", "M": "--", "N": "-.", "O": "---",
    "P": ".--.", "Q": "--.-", "R": ".-.", "S": "...", "T": "-",
    "U": "..-", "V": "...-", "W": ".--", "X": "-..-", "Y": "-.--",
    "Z": "--..", "1": ".----", "2": "..---", "3": "...--", "4": "....-",
    "5": ".....", "6": "-....", "7": "--...", "8": "---..", "9": "----.",
    "0": "-----"
}

for words in morse_words:
    translated = translate_morse_code(words, morse_dict)
    print(translated)
### Python Code End ###
### Python Execution Result Start ###
['TEA', 'COFFEE', 'SUGAR']
### Python Execution Result End ###
===== Message Reconstruction Expert =====
### My Approach Explanation Start ###
As the Message Reconstruction Expert, my primary role is to take the
translated English words provided by the Python Expert of Translation
and construct them into a coherent and grammatically correct
sentence. The translated words are ’TEA’, ’COFFEE’, and ’SUGAR’.
These words represent items, and the context suggests they are a list
of crucial items related to a historical event.
Given this context, my task is to format these words into a readable
English sentence that clearly communicates the items as a list. The
appropriate format for presenting such a list in English would
typically involve separating the items with commas and using the
conjunction "and" before the last item. This ensures the sentence is
both grammatically correct and easy to understand.
### My Approach Explanation End ###
### My Final Output Start ###
Tea, coffee, and sugar
### My Final Output End ###
===== Final Review and Presentation Expert =====
### Explanation of Approach Start ###
As the Final Review and Presentation Expert, my primary role is to ensure
that the reconstructed message from the Message Reconstruction
Expert is clear, accurate, and well-presented. The message provided
is "Tea, coffee, and sugar," which lists three items crucial to a
historical event.
**Steps in my approach:**
1. **Review Accuracy**: Check if the message accurately reflects the
translated Morse code. The previous experts have translated and
reconstructed the message correctly as "Tea, coffee, and sugar" from
the Morse code ‘- . .- / -.-. --- ..-. ..-. . . / ... ..- --. .-
.-.‘.
2. **Review Clarity**: Ensure the message is clear and understandable.
The use of commas and the conjunction "and" appropriately lists the
items, making the sentence coherent and easy to read.
3. **Presentation**: Format the message to ensure it is ready for
presentation. This involves checking for any grammatical errors,
proper capitalization, and punctuation.
Given the outputs from the previous experts, the message "Tea, coffee,
and sugar" is already well-constructed. My task focuses on confirming
these aspects and presenting the final output.
### Explanation of Approach End ###
### My Final Output Start ###
Tea, coffee, and sugar
### My Final Output End ###
===== Meta-Expert@FINAL_JUDGEMENT =====
### Analysis Start ###
**Problem Recap**: The task was to decode a Morse code message that lists
three items crucial to a historical event. The Morse code provided
was ‘- . .- / -.-. --- ..-. ..-. . . / ... ..- --. .- .-.‘.
**Workflow Review**:
1. **Morse Code Dictionary Creation**: The dictionary was correctly
established, mapping each Morse code sequence to its corresponding
English letter. This is foundational for accurate translation.
2. **Morse Code Parsing**: The parsing expert successfully identified and
separated the Morse code into three distinct word sequences, which
is crucial for correct translation.
3. **Translation Process**: Using the dictionary, the translation expert
accurately converted the Morse code sequences into the English words
’TEA’, ’COFFEE’, and ’SUGAR’. This step was executed correctly,
adhering to Morse code standards.
4. **Message Reconstruction**: The reconstruction expert transformed the
list of words into a coherent and grammatically correct English
phrase: "Tea, coffee, and sugar". This step was crucial in making the
output readable and presentable.
5. **Final Review and Presentation**: The final output "Tea, coffee, and
sugar" was reviewed and presented clearly. The expert ensured that
the output was concise and correctly formatted, suitable for
answering the problem statement.
**Evaluation of Accuracy and Completeness**:
- The translation of Morse code to English was accurate based on the
provided dictionary.
- The final message is meaningful and directly corresponds to the Morse
code given.
- The output is formatted in a way that is easy to understand and
relevant to the context of a list of items.
**Conclusion**:
The workflow from dictionary creation to final presentation was executed
seamlessly, with each expert fulfilling their role effectively. The
final output accurately reflects the decoded message from the Morse
code, presented in a clear and concise manner.
### Analysis End ###
FINAL EVALAUTION: YES
B.3 SOLUTION USING OPENAI O1-PREVIEW
Hidden Thought Process (thought for 12 seconds)
===== Parsing Morse code =====
I’m separating the Morse code message into three words. Each word is then
further divided into its constituent letters for decoding.
===== Parsing Morse code =====
In this chunk, the task involves parsing Morse code by handling letter
concatenation within words and word separation through slashes. Morse
code typically separates letters with spaces and words with slashes.
===== Breaking down Morse code =====
I’m looking at Morse code to decode - . .-, mapping it to letters. This
makes me think about interpreting symbols as letters.
===== Considering Morse code decoding =====
I’m thinking through the method of reversing Morse code, focusing on
identifying letter partitions and parsing two-letter words to
determine valid Morse code sequences, examining all possible
partitions of the code.
===== Identifying letter patterns =====
I’m working through different Morse code partitions to isolate letter
patterns like T E A and T U. This involves analyzing various
combinations of Morse code symbols.
===== Decoding Morse code =====
It’s interesting to see that the Morse code was parsed as -.-. --- ..-.
..-. . . for ’COFFIE’ and ’T E A’ as the most plausible word. This
suggests a detailed analysis of each symbol is crucial for accurate
translation.
===== Decoding Morse code =====
I’m working through the Morse code sequence, breaking down each symbol to
reveal letters like C, O, F, F, E, E, S, U, G, and ?.
Final solution
To decode the Morse code message ‘- . .- / -.-. --- ..-. ..-. . . / ...
..- --. .- .-.‘, we’ll break it down word by word and letter by
letter.
### Morse Code Reference Table:
| Letter | Morse Code |
| --- | --- |
| A | .- |
| B | -... |
| C | -.-. |
| D | -.. |
| E | . |
| F | ..-. |
| G | --. |
| H | .... |
| I | .. |
| J | .--- |
| K | -.- |
| L | .-.. |
| M | -- |
| N | -. |
| O | --- |
| P | .--. |
| Q | --.- |
| R | .-. |
| S | ... |
| T | - |
| U | ..- |
| V | ...- |
| W | .-- |
| X | -..- |
| Y | -.-- |
| Z | --.. |
### Decoding Process:
**First Word: ‘- . .-‘**
* ‘-‘ : **T**
* ‘.‘ : **E**
* ‘.-‘ : **A**
Combined: **TEA**
**Second Word: ‘-.-. --- ..-. ..-. . .‘**
* ‘-.-.‘ : **C**
* ‘---‘ : **O**
* ‘..-.‘ : **F**
* ‘..-.‘ : **F**
* ‘.‘ : **E**
* ‘.‘ : **E**
Combined: **COFFEE**
**Third Word: ‘... ..- --. .- .-.‘**
* ‘...‘ : **S**
* ‘..-‘ : **U**
* ‘--.‘ : **G**
* ‘.-‘ : **A**
* ‘.-.‘ : **R**
Combined: **SUGAR**
### Final Decoded Message:
**TEA COFFEE SUGAR**
* * *
These three items-**tea**, **coffee**, and **sugar** - were crucial
commodities in historical trade events, such as the Boston Tea Party
and the global trade networks during the colonial era.
**Answer:**
TEA COFFEE SUGAR
C PROMPTS USED IN OUR DYNAMIC WORKFLOW APPROACH
Note that placeholders enclosed in {} are used within the prompts to represent variables that will be
substituted with actual values at LLM query time.
Problem Reflection Prompt
### Problem Statement Start ###
{task problem}
### Problem Statement End ###
You are an exceptionally capable Meta-Expert, possessing a unique capability for conducting problem
reflection. Your primary function involves receiving the above problem query, which you must me-
thodically decompose into smaller, more manageable sub-tasks (including sub-tasks that can solved
by implementing Python functions). When designing the solution, you should think about its general-
izability. A robust solution can tackle a similar range of problems effectively with minor adaptations.
This decomposition will later facilitate the creation of a team of specialized experts, enabling efficient
collaboration of experts to address and solve the above problem. When breaking down into sub-tasks,
it is crucial to:
1. Ensure Sequential Logic: Arrange the sub-tasks in a logical, sequential order that facilitates a
smooth workflow from start to finish.
2. Avoid Overlap: Each sub-task must be distinct, with no duplication of efforts across the tasks, en-
suring efficient allocation of expertise.
3. Pursue Optimal Decomposition: Ensure sub-tasks are sufficiently defined to be tackled effectively.
Maintain a manageable number of specific sub-tasks, facilitating easier coordination and management.
In particular, please conduct the ”Problem Reflection” for the given problem: Reflect on the problem,
and describe it in your own words, in bullet points. Analyze how you can decompose the problem into
smaller, more manageable sub-tasks. Note that you can integrate Python-driven sub-tasks by imple-
menting and running modular Python code if necessary. Pay attention to small details, nuances, notes
and examples in the problem description.
Experts Design Prompt
### Problem Statement Start ###
{task problem}
### Problem Statement End ###
### Problem Reflection Start ###
{problem reflection}
### Problem Reflection End ###
You are an extremely powerful Meta-Expert with the unique ability to design a team of specialized
experts and arrange those experts through a workflow to tackle and solve the above problem. Based on
the above problem statement and its reflection analysis, please design a team of experts and orchestrate
those experts to effectively address and solve the above problem.
In particular, you are to do ”Specialized Experts Design”:
- Design a list of subject-matter experts (SMEs) including, but not limited to, Essayist Expert, Python
Expert, Linguistic Analyst, Mathematician, Data Scientist, and various other Analysts. Each expert is
only to perform one specific sub-task, such as processing data, making decisions, or utilizing Python
tools.
- Arrange the experts to operate in a sequential workflow, meaning each expert’s output becomes the
input for the next, progressively moving towards the final answer. Avoid redundancy of functions
across experts.
- Assign unique names to each expert and provide a clear description of their specific skills, knowl-
edge, and the sub-tasks they are going to perform. Ensure each expert description is comprehensive
and self-contained, encapsulating all important information and details from **Sub-tasks Identifi-
cation**.
- For sub-tasks involving logical reasoning, mathematical operations, data structure manipulation, or
programming-related challenges, you can outline strategic approaches and delegate the specifics of im-
plementation to the Python expert (Tool). The Python expert will translate the instructions into code,
execute it, and return the results. You can include multiple Python experts if needed. Please provide
explicit implementation instructions to the Python expert(s).
- Conclude each expert’s description with a name card in JSON format, summarizing key attributes.
Specify the type of each expert as either ’LLM’ for those based on Large Language Model or ’Tool’
for those utilizing Python tools.
- The final expert should be responsible for reviewing the findings of previous experts and then gener-
ating the final answer to the problem.
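For concreteness, the "name card in JSON format" requested in this prompt could look like the following (shown here as an equivalent Python dictionary; every field except the 'LLM'/'Tool' type distinction is an assumption for illustration):

```python
# Hypothetical name card appended to one expert's description.
expert_name_card = {
    "name": "Morse Decoding Expert",
    "type": "LLM",          # 'LLM' or 'Tool', as required by the prompt
    "role": "Decode the Morse symbols into plain-text words",
    "input": "Raw Morse code string from the problem statement",
    "output": "Decoded words passed to the next expert",
}
```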
Execution Prompt of Experts Initiated from LLM
### Problem Statement Start ###
{original problem}
### Problem Statement End ###
### Problem Reflection Start ###
{problem reflection}
### Problem Reflection End ###
Please act as {name}. Your role: {role} You are part of a specialized expert team. You are designed to
accomplish a sub-task and collaborate with other experts through a workflow graph to solve the above
problem.
The expert team operates based on the following design:
### Experts Design Start ###
{experts design}
### Experts Design End ###
Each expert, including you, is responsible for a specific sub-task. The workflow is structured so that
each expert’s output becomes the input for the next, progressively moving towards the final answer.
The process should be thought of as sequential steps, where you contribute towards the solution based
on the outputs from the previous experts. {data type instruction} You can think step by step if neces-
sary.
The results from the preceding experts are as follows:
### Experts’ Results Start ###
input data
### Experts’ Results End ###
Please provide a brief explanation of your approach to solving the assigned sub-task. After your
explanation, clearly indicate your final output as follows:
### My Final Output Start ###
[Your final answer here]
### My Final Output End ###
Execution Prompt of Experts initiated from Symbolic Engine
### Problem Statement Start ###
{original problem}
### Problem Statement End ###
### Problem Reflection Start ###
{problem reflection}
### Problem Reflection End ###
Please act as {name}. Your role: {role} You are a specialized Python expert among a team of experts.
You are designed to write Python code to accomplish a sub-task and collaborate with other experts
through a workflow graph to solve the above problem.
The expert team operates based on the following design:
### Experts Design Start ###
{experts design}
### Experts Design End ###
Each expert, including you, is responsible for a specific sub-task. The workflow is structured so that
each expert’s output becomes the input for the next, progressively moving towards the final answer.
You should take the previous expert’s output as input, write the Python code, execute the code, and
send the output to the next expert.
The results from the preceding experts are as follows:
### Experts’ Results Start ###
input data
### Experts’ Results End ###
Please write the Python code that takes input in {input type} and return output in {output type}.
Guidelines: - Make sure the code includes all the necessary module imports, properly initialize the
variables, and address the problem requirements. - The code needs to be self-contained, and executable
as-is. Output only code, without any explanations or comments.
The code output must follow this structure:
‘‘‘python
def f1(...):
...
return ...
def f2(...):
...
return ...
...
if __name__ == "__main__":
...
‘‘‘
how to read input
The output should be printed without additional words using the ’print()’ method.
Answer:
‘‘‘python
Verification Prompt
### Problem Statement Start ###
{task problem}
### Problem Statement End ###
### Problem Reflection Start ###
{problem reflection}
### Problem Reflection End ###
**Experts Design:** - Based on the problem reflection, a team of experts has been designed and
organized through a workflow to tackle and solve the problem described above. - Experts are designed
to operate in a sequential workflow, meaning each expert’s output becomes the input for the next,
progressively moving towards the final answer. - The final expert is responsible for reviewing the
findings of previous experts and then generating the final answer to the problem.
Here is a description of the experts’ roles and the workflow structure:
### Experts Design Start ###
{experts design}
### Experts Design End ###
Based on the workflow design, the experts have provided the following results:
### Experts’ Results Start ###
{experts results}
### Experts’ Results End ###
Given the described workflow design and the results produced by the experts, your task is to eval-
uate whether the final output of the ”{final expert}” successfully and correctly solves the problem
presented.
Please provide your analysis and then conclude your evaluation by stating ’FINAL EVALUATION:
YES’ or ’FINAL EVALUATION: NO’.
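A minimal sketch of how the verifier's verdict could be parsed downstream (the function is illustrative and not part of the original prompts):

```python
def parse_final_evaluation(reply: str):
    """Return True/False for 'FINAL EVALUATION: YES/NO', or None if no verdict is found."""
    for line in reversed(reply.strip().splitlines()):
        if "FINAL EVALUATION:" in line:
            return "YES" in line.split("FINAL EVALUATION:")[-1]
    return None
```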
D DATA SYNTHESIS OF REASONING PROBLEMS
Data Synthesis Prompt 1
Please develop 10 new and diverse reasoning tasks, one per line, inspired by but distinct from the
following 10 example reasoning tasks:
{example tasks}
Guidelines for task creation:
- Ensure each new task is distinctly different from the example tasks provided; avoid mere variations.
- Clearly and accurately define each task, making its objective and scope explicit.
- Design tasks that yield deterministic answers, facilitating the creation of single, definitive standard
answers for subsequent problems derived from these tasks. This helps straightforward evaluation of
correctness.
- Target a moderate to hard difficulty level for each task, requiring thorough analysis and in-depth
reasoning to solve.
Data Synthesis Prompt 2
Please develop 10 new and diverse puzzle tasks, one per line, to test various reasoning abilities.
Guidance:
- Each new puzzle task should clearly and accurately describe what the task is.
- Design puzzle tasks that yield deterministic answers, facilitating the creation of single, definitive
standard answers for subsequent problems derived from these tasks. This helps straightforward evalu-
ation of correctness.
- Puzzle tasks should have a moderate to hard difficulty level - they should require thorough analysis
and in-depth reasoning to work through.
Problem Validation Prompt
### Problem Start ###
{problem}
### Problem End ###
Your task is to verify whether the above problem is a valid reasoning problem or not.
Valid Criteria:
- It is clear and unambiguous (NO multiple interpretations).
- It provides all necessary information required to solve the problem.
- The problem is logically structured so that it can be approached through reasoning skills. It does not
depend on subjective judgments or opinions.
- The problem is solvable and has one single, definitive correct answer that can be derived through
reasoning.
- There are no internal contradictions or conflicts in the problem.
Please provide a concise analysis and then output ’## VALID ##’ or ’## INVALID ##’. Next, if it is
invalid, please rewrite it into a new valid reasoning problem following the format below. Make sure
the new problem is challenging enough.
### New Valid Problem Start ###
[new problem]
### New Valid Problem End ###
|
synthetic_cpt | 3 | Synthetic_Query_Generation_using_Large_Language_Models_for_Virtual_Assistants.pdf | 4
2
0
2
n
u
J
0
1
]
R
I
.
s
c
[
1
v
9
2
7
6
0
.
6
0
4
2
:
v
i
X
r
a
Synthetic Query Generation using Large Language Models
for Virtual Assistants
Sonal Sannigrahi∗†
[email protected]
Instituto Superior Técnico
Lisbon, Portugal
Youssef Oualil
[email protected]
Apple
Aachen, Germany
Thiago Fraga-Silva
[email protected]
Apple
Aachen, Germany
Christophe Van Gysel†
[email protected]
Apple
Cambridge, MA, USA
ABSTRACT
Virtual Assistants (VAs) are important Information Retrieval plat-
forms that help users accomplish various tasks through spoken com-
mands. The speech recognition system (speech-to-text) uses query
priors, trained solely on text, to distinguish between phonetically
confusing alternatives. Hence, the generation of synthetic queries
that are similar to existing VA usage can greatly improve upon the
VA’s abilities—especially for use-cases that do not (yet) occur in
paired audio/text data. In this paper, we provide a preliminary explo-
ration of the use of Large Language Models (LLMs) to generate syn-
thetic queries that are complementary to template-based methods.
We investigate whether the methods (a) generate queries that are
similar to randomly sampled, representative, and anonymized user
queries from a popular VA, and (b) whether the generated queries
are specific. We find that LLMs generate more verbose queries, com-
pared to template-based methods, and reference aspects specific
to the entity. The generated queries are similar to VA user queries,
and are specific enough to retrieve the relevant entity. We conclude
that queries generated by LLMs and templates are complementary.
CCS CONCEPTS
• Information systems → Search interfaces; Query log analy-
sis; • Computing methodologies → Speech recognition.
KEYWORDS
virtual assistants, synthetic query log generation
ACM Reference Format:
Sonal Sannigrahi, Thiago Fraga-Silva, Youssef Oualil, and Christophe Van
Gysel. 2024. Synthetic Query Generation using Large Language Models
for Virtual Assistants. In Proceedings of the 47th International ACM SIGIR
Conference on Research and Development in Information Retrieval (SIGIR ’24),
July 14–18, 2024, Washington, DC, USA. ACM, New York, NY, USA, 5 pages.
https://doi.org/10.1145/3626772.3661355
∗Work performed while an intern at Apple.
†Equal contribution.
SIGIR ’24, July 14–18, 2024, Washington, DC, USA
© 2024 Copyright held by the owner/author(s).
This is the author’s version of the work. It is posted here for your personal use. Not
for redistribution. The definitive Version of Record was published in Proceedings of the
47th International ACM SIGIR Conference on Research and Development in Information
Retrieval (SIGIR ’24), July 14–18, 2024, Washington, DC, USA, https://doi.org/10.1145/
3626772.3661355.
1 INTRODUCTION
Virtual Assistants (VAs) are important [9] Information Retrieval (IR)
platforms that help users accomplish various tasks. Users primarily
interact with VAs through voice commands, where users initiate a
retrieval request by uttering a query.
The Automated Speech Recognition (ASR) component of the
VA system transcribes the spoken user query, which is then sub-
sequently processed by the retrieval engine. However, the ASR
system is trained on audio/text pairs that are expensive and time-
consuming to obtain. During the recognition process, the ASR sys-
tem employs a query prior trained solely on text to disambiguate
between phonetically-similar recognition candidates. Hence, the
query prior is a powerful mechanism to modify the ASR system’s
behavior, and has been shown to be an effective manner to improve
the recognition of tail named entities [3, 8, 10, 14, 17].
In order to correctly recognize emerging entities [4], the ASR
system’s query prior is estimated using a mixture of usage-based
and synthetic text data. Synthetic queries are typically generated
using a template-based approach [2, 13]. A query template, such
as “play music by $ARTIST”, representing the generic intent of a
user wanting to play music by a specific artist, is instantiated using
a popularity-weighted list of entities. However, template-based
approaches are stringent, may only represent a limited set of use-
cases, and are not well-suited to generate synthetic queries for use-
cases that are specific to particular entities. For example, the query
“play Taylor Swift’s debut performance at the Grammy’s” represents
the user’s intent to play the song “Fifteen” by Taylor Swift which
was Swift’s debut performance at the Grammy’s in 2009. While
creating a template based on this query would be possible, it does
not generalize across entities: some entities may not have performed
at the Grammy’s and finding the relevant venue would require
manual curation. Hence, synthetic query generation methods that
can generate queries tailored to specific entities are necessary.
Recent advances in Large Language Models (LLM) have shown
impressive improvements in language understanding tasks [5]
through their emergent capabilities [16]. In IR, there have been
various works focusing on the generation of queries using LLMs
[1, 11, 15].
In this paper, we perform a preliminary analysis of the use of
LLMs to produce query priors in VA ASR. We generate synthetic
queries by prompting LLMs using a description of the artist gath-
ered from Wikipedia. Then, we evaluate the generated queries in
Figure 1: Proposed pipeline to generate queries for a VA via an LLM.
terms of their similarity to randomly sampled, representative, and
anonymized user queries from a popular VA, in addition to the
queries’ ability to retrieve the entity for which they were gener-
ated. More specifically, the research questions addressed are as
follows: (RQ1) Can LLMs generate VA queries that are similar to
user queries extracted from VA query logs (i.e., domain match)?
(RQ2) Are the LLM-generated queries good at retrieving the entity
for which they were generated (i.e., specificity)?
Our contributions are as follows: (1) We propose a prompt for
LLMs to produce natural queries for the music domain for VAs,
and perform extensive experiments comparing the LLM-generated
queries to queries generated using template-based methods, (2) We
provide insights through analysis into the differences between
queries generated using the various methods.
2 METHODOLOGY
Fig. 1 shows an overview of our approach, which consists of the
following three main components: (1) entity descriptions extracted
from Wikipedia to provide context for synthetic query generation,
(2) the prompt which incorporates the entity description and for-
mulates a request to the LLM to generate synthetic data, where we
also specify the intent the queries need to represent, and (3) the
LLM, which takes the prompt as input and subsequently generates
a list of synthetic queries as output.
2.1 Knowledge Base
We build our music artist knowledge base by linking Wikipedia data
with artist profiles on a popular streaming service. The paragraphs
in the Wikipedia articles are used as contexts to generate synthetic
queries using LLMs (§2.2). The combination of the Wikipedia arti-
cle, and the artist profile retrieved from the streaming service, are
used to build a search engine to evaluate the end-to-end perfor-
mance of the generated queries (§4.2). We obtained a catalog of mu-
sic artist entities by downloading the list of most popular artists on
a streaming service in July 2023 and linking them to their respective
Wikipedia profile using property P2850 (i.e., “Artist ID Number”)
through Wikidata’s SPARQL query service1. We also use the Music-
Group2 metadata object, embedded in the source of each artist page
on music.apple.com, with entries that include artist name, biogra-
phy, an unique artist ID, as well as discography information. Be-
tween both the Wikipedia dumps and the artist database, we main-
tain a direct linking to produce a knowledge base of 14 161 artists.
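A sketch of this Wikidata linking step via the public SPARQL endpoint is shown below (the query shape and field handling are illustrative; only property P2850 and the endpoint come from the description above):

```python
import requests

SPARQL_ENDPOINT = "https://query.wikidata.org/sparql"
QUERY = """
SELECT ?artist ?artistLabel ?artistStreamingId WHERE {
  ?artist wdt:P2850 ?artistStreamingId .        # P2850: the "Artist ID Number" property referenced above
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
}
LIMIT 100
"""

def fetch_linked_artists():
    response = requests.get(
        SPARQL_ENDPOINT,
        params={"query": QUERY, "format": "json"},
        headers={"User-Agent": "synthetic-query-generation-demo/0.1"},
        timeout=60,
    )
    response.raise_for_status()
    rows = response.json()["results"]["bindings"]
    return [(row["artistLabel"]["value"], row["artistStreamingId"]["value"]) for row in rows]
```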
2.2 Prompt & LLMs
Our prompt is depicted in Fig. 2. For each entity in the knowl-
edge base (§2.1), we create prompts by populating the artist name
1https://www.wikidata.org/wiki/Wikidata:SPARQL_query_service
2https://schema.org/MusicGroup
[ ARTIST DESCRIPTION ]
Generate [K] queries based on the information
above about [ ARTIST NAME ] to play music or learn
more about [ ARTIST NAME ].
Here are some examples : [ EXAMPLES ]
Figure 2: LLM prompt used during our experiments. [ARTIST
DESCRIPTION] and [ARTIST NAME] are replaced with an entity descrip-
tion, and the entity name, resp. [EXAMPLES] are a list of example VA
queries for the specific intent. We fix [EXAMPLES] to the following
"play, queue, turn on, etc". [K] is the number of queries.
and use the lead section (i.e., the introduction) as their descrip-
tion. For music artists, the lead section typically references no-
table audiography, collaborations and life events. The number of
queries, 𝐾, is set to 40 (§3.1) in this paper. We use the OpenAI API
to generate queries with four model variants3. More specifically, we
experiment with babbage-002, gpt-3.5-turbo, gpt-3.5-turbo-
instruct, and gpt-4 (see §3.1 for details).
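A sketch of how the Fig. 2 prompt could be instantiated and submitted through the OpenAI Chat Completions client is given below (the helper, defaults, and output parsing are assumptions; the completion-style models babbage-002 and gpt-3.5-turbo-instruct would use the completions endpoint instead):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Paraphrase of the Fig. 2 template with the fixed example intents.
PROMPT_TEMPLATE = (
    "{artist_description}\n"
    "Generate {k} queries based on the information above about {artist_name} "
    "to play music or learn more about {artist_name}.\n"
    "Here are some examples: play, queue, turn on, etc"
)

def generate_queries(artist_name: str, artist_description: str,
                     k: int = 40, model: str = "gpt-3.5-turbo") -> list[str]:
    prompt = PROMPT_TEMPLATE.format(
        artist_description=artist_description, artist_name=artist_name, k=k
    )
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    text = response.choices[0].message.content
    # One query per line; strip any list numbering or bullets the model added.
    return [line.lstrip("0123456789.-*) ").strip()
            for line in text.splitlines() if line.strip()]
```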
3 EXPERIMENTAL SETUP
3.1 Query generation methods under
comparison
We generate queries for all 14 161 entities from our knowledge
base (§2.1) using (a) the entity name by itself, (b) a template-based
approach using the top-𝐾 (according to prior probability) music
query templates released as part of [13] (excluding the templates
that consist of only the entity name by itself in order to differen-
tiate from approach (a)), and (c) four different LLMs available via
OpenAI’s API using the prompt in Fig. 2 (§2.2) where we ask the
LLM to generate 𝐾 queries: babbage-002, gpt-3.5-turbo (v0613),
gpt-3.5-turbo-instruct, and gpt-4; with 𝐾 = 40. During our
experiments, we report evaluation measures at various values of
𝐾 ≤ 40, in which case we extract the first 𝐾 queries from the list of
40 queries (rather than issuing multiple requests to the LLM with
varying 𝐾). Generated queries that start with a VA wakeword (e.g.,
“hey VA” where VA refers to the name of the assistant), have the pre-
fix corresponding to the wakeword removed. For example, a query
“hey VA play Moderat” is normalized to “play Moderat”. This step
aims at avoiding biases towards methods that frequently generate
the wakeword during domain match evaluation (§4.1).
3.2 Evaluation measures
To answer RQ1, we measure the likelihood of the generated queries
under a 4-gram back-off language model [6] estimated on randomly
sampled and anonymized user queries over a span of 2 years from
a popular VA. We apply Good Turing smoothing, and N-grams
3https://platform.openai.com/docs/models/overview
| | entity name | templates | babbage-002 | gpt-3.5-turbo | gpt-3.5-turbo-instruct | gpt-4 |
| --- | --- | --- | --- | --- | --- | --- |
| # entities | 14161 | 14161 | 13848 | 14161 | 14156 | 14161 |
| # unique queries per entity | 1.00 ± 0.00 | 39.51 ± 0.50 | 9.64 ± 14.74 | 41.60 ± 3.61 | 40.09 ± 2.27 | 39.99 ± 0.13 |
| query length per entity | 1.58 ± 0.80 | 3.78 ± 1.24 | 52.30 ± 122.05 | 8.11 ± 2.66 | 8.42 ± 4.03 | 8.31 ± 2.47 |
| % of queries with > 15 terms | 0.01% | 0.01% | 40.85% | 1.17% | 1.44% | 1.16% |
Table 1: Statistics of generated queries across the approaches under consideration (§3.1).
that occur infrequently (less than 3 times) in the data are filtered
out. The negative log-likelihood (NLL) of a single query 𝑞 with |𝑞| terms is defined as
NLL(𝑞) = − Σ_{𝑖=1}^{|𝑞|} log 𝑃 (𝑞𝑖 | 𝑞1 . . . 𝑞𝑖−1),
where 𝑃 (𝑞𝑖 | 𝑞1 . . . 𝑞𝑖 −1) represents the probability of the term 𝑞𝑖
under the 4-gram LM. Using a 4-gram LM, rather than looking for
exact matches in query logs, provides us with a flexible approach to
score the likelihood of a query, while also having the ability to assign
a score to queries not present in the query logs. The lower the NLL,
the more a query is likely under VA user query behavior. We report
median NLL across a query set of the first 𝐾 queries for each entity.
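Schematically, the NLL computation reduces to summing per-term log-probabilities; the sketch below stands in for the trained 4-gram back-off LM with Good-Turing smoothing, which is not reproduced here:

```python
import math
from typing import Callable, Sequence

def query_nll(query: Sequence[str],
              term_logprob: Callable[[Sequence[str], str], float]) -> float:
    """NLL(q) = - sum_i log P(q_i | q_1 ... q_{i-1}), with a 4-gram context window."""
    nll = 0.0
    for i, term in enumerate(query):
        context = query[max(0, i - 3):i]   # 4-gram LM: at most 3 preceding terms
        nll -= term_logprob(context, term)
    return nll

# Toy uniform model standing in for the trained 4-gram LM.
toy_logprob = lambda context, term: math.log(1.0 / 10000)
print(query_nll("play music by taylor swift".split(), toy_logprob))
```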
For RQ2, we measure to what capacity the generated queries
can retrieve the entity for which they were generated in order to
measure query specificity. We build an index of our knowledge base of
14 161 entities, where each entity is represented by its Wikipedia
page and its profile on a music streaming service (§2.1), including
biography and most popular discography. Both indexed documents
and queries are pre-processed by lower-casing, removing punctua-
tion and non-alphanumeric characters, removing stopwords, and
applying a Porter stemmer. We use the BM25-L retrieval model [12,
§3.2] with 𝑘1 = 1.5, 𝑏 = 0.75 and 𝛿 = 0.5. Since for each query, there
is only one relevant entity, we report reciprocal rank (RR), aver-
aged over the top-𝐾 queries 𝑞 and entities 𝑒, with RR defined as
RR(𝑞, 𝑒) = 1 / rank(𝑞, 𝑒),
where rank (𝑞, 𝑒) equals the rank of the entity 𝑒 for which query 𝑞
was generated under the BM25-L retrieval model. The higher the
RR, the better a query is able to retrieve the entity it was generated
for (and, hence, the more specific an entity is). We report mean RR
across a query set for the first 𝐾 queries generated for each entity.
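One possible implementation of this retrieval evaluation uses the rank_bm25 package for BM25-L and NLTK for preprocessing (both package choices are assumptions; the paper does not name its implementation):

```python
# requires: pip install rank_bm25 nltk  (and nltk.download('stopwords'))
import re
from rank_bm25 import BM25L
from nltk.stem import PorterStemmer
from nltk.corpus import stopwords

_stemmer = PorterStemmer()
_stop = set(stopwords.words("english"))

def preprocess(text: str) -> list[str]:
    # Lower-case, keep alphanumerics, drop stopwords, apply a Porter stemmer.
    tokens = re.findall(r"[a-z0-9]+", text.lower())
    return [_stemmer.stem(t) for t in tokens if t not in _stop]

def reciprocal_rank(entity_docs: list[str], query: str, relevant_idx: int) -> float:
    corpus = [preprocess(doc) for doc in entity_docs]
    bm25 = BM25L(corpus, k1=1.5, b=0.75, delta=0.5)
    scores = bm25.get_scores(preprocess(query))
    ranking = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    return 1.0 / (ranking.index(relevant_idx) + 1)
```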
4 RESULTS
Table 1 shows statistics on the generated queries using the vari-
ous methods (§3.1). In Fig. 3a, we see that while the standalone
entity-name and template-based methods generate relatively short
queries (1–4 terms), the LLM-based methods tend to be more ver-
bose (∼8 terms). A sample of generated queries for entity Post Mal-
one (Q21621919) is depicted in Table 2. The babbage-002 LLM, a
GPT base model not trained with instruction following [7], per-
forms poorly and fails to generate reasonable queries. As expected,
the template-based approach generates queries that are stylistically
simple, since the template is independent from the entity for which
the queries are being generated. On the other hand, queries gener-
ated by LLM-based methods are able to refer to information present
in the artist description that was fed as context to the LLM. We will
now answer the research questions raised in §1 and further defined
in §3.2.
| Method | Sample of generated queries |
| --- | --- |
| entity name | “Post Malone” |
| templates | “play Post Malone”, “play the song Post Malone”, “play Post Malone music” |
| babbage-002 | “Start with CTRL + M”, . . . |
| gpt-3.5 | “play White Iverson by Post Malone”, “queue Congratulations by Post Malone”, “turn on Post Malone’s album Beerbongs & Bentleys” |
| gpt-3.5 (instruct) | “play Post Malone’s debut single White Iverson”, “play Post Malone’s hit song Rockstar”, “play Post Malone’s song Sunflower from the Spider-Man Into the Spider-Verse soundtrack” |
| gpt-4 | “play White Iverson by Post Malone”, “add Rockstar by Post Malone to my playlist”, “turn up the volume for Psycho by Post Malone” |
Table 2: Example of queries generated by the various methods (§3.1).
4.1 Similarity to VA usage queries
For RQ1, Fig. 3b shows the negative log likelihood (§3.1) for the
methods under consideration. The entity name by itself aligns
closest with user behavior, while the template-based approach is
a close second. This is not surprising, since the templates we used
were created by domain experts by analyzing high-frequency use-
cases in a representative sample of VA usage [13, §3.1]. Hence, the
entity name and template method represent frequent use-cases
at the head of the query distribution.
Next up, at approximately half the log-likelihood, queries gener-
ated by the LLMs seem to represent infrequent, tail use-cases. While
not entirely absent from VA usage, they are not as common as the
straight-forward templates. This is explained by the fact that the
LLM-generated queries often reference specific songs or albums by
the artist—extracted from the artist’s description—resulting in less
generic queries. However, this lack of generality yields queries that
reference multiple entities and, hence, tend to be at most as—and
often, significantly less—likely as queries referencing only a single
entity. Note that in our prompt (Fig. 2), we did not instruct the LLMs
to exhibit this behavior. We answer RQ1 as follows: queries gener-
ated by LLMs trained with instruction following correlate with VA
user behavior, although they tend to be more specific than queries
generated using template-based approaches. This raises the ques-
tion whether template- and LLM-based approaches are complemen-
tary when it comes to synthetic query generation. In fact, compar-
ing the query sets generated by the template-based method and
gpt-3.5-turbo-instruct, the mean/std. dev of the Jaccard coef-
ficient across entities equals 0.0038 ± 0.0084, indicating very low
overlap, and hence, complementarity.
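The overlap statistic can be computed directly; a small sketch assuming per-entity query sets are available as input:

```python
from statistics import mean, pstdev

def jaccard(a: set[str], b: set[str]) -> float:
    return len(a & b) / len(a | b) if (a or b) else 0.0

def overlap_stats(template_queries: dict[str, set[str]],
                  llm_queries: dict[str, set[str]]) -> tuple[float, float]:
    """Mean/std of the per-entity Jaccard coefficient between the two query collections."""
    values = [jaccard(template_queries[e], llm_queries[e]) for e in template_queries]
    return mean(values), pstdev(values)
```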
(a) Distribution of generated query lengths across the approaches under consideration (§3.1). Lengths that exceed 15 tokens are not depicted, but are documented in
Table 1.
(b) Median NLL (§3.2; lower is better) for the various query generation
methods (except babbage-002 since it leads to very high NLL) for various
query cut-offs (𝐾 = 10, 20, 30, 40). See Fig. 3a for the legend.
(c) Reciprocal rank (§3.2; higher is better) for the various query generation
methods (except babbage-002 since it generates non-sensical queries) for
various query cut-offs (𝐾 = 10, 20, 30, 40). See Fig. 3a for the legend.
Figure 3
4.2 Query specificity
For RQ2, Fig. 3c depicts the reciprocal rank for the various meth-
ods (§3.1) at various cut-offs of the list of generated queries. The
entity name method performs best, since it does not contain any
superfluous terms and matches directly the specific entity mention
contained within the entity’s textual representation. The template-
based method performs nearly as well as the entity name method,
since it generates queries that contain the entity name padded with
carrier terms that are non-specific to the entity (e.g., “play”, “song”).
The LLM-based methods score slightly worse than the entity name
and template methods, since the queries generated using LLMs are
more verbose and can include terms that match non-relevant enti-
ties. For example, song titles often consist of generic, non-specific
terms and multiple songs can have the same title. Between the LLM-
based generated query collections, gpt-4 performs worst. When
examining the effect of query cut-off (𝐾), we see that as 𝐾 increases,
RR generally decreases. This is due to the fact that, as 𝐾 increases,
queries become more complex and can contain terms that con-
fuse the retrieval model. We answer RQ2 as follows: entity-centric
queries generated by LLMs achieve an average reciprocal rank of
0.70; indicating that the correct entity is often ranked highly. How-
ever, since LLMs generate more verbose queries, there are more
terms in the queries that can throw off the retrieval model.
4.3 Complementarity
Finally, following the conclusions to RQ1 and RQ2 above, and the
qualitative examples in Table 2, we find that template- and LLM-
based methods are complementary as follows: (1) template-based
methods allow to generate synthetic queries for frequent use-cases
(e.g., for tail entities) that apply across all entities (e.g., “play mu-
sic by $ARTIST”) and are computationally inexpensive, whereas
(2) LLM-based methods can generate specialized/infrequent use–
cases (e.g., for popular/controversial entities) specific to the entity
in question (e.g., “play Taylor Swift’s duet with Ed Sheeran”)—while
having a higher computational cost. Hence, template- and LLM-
based methods can be combined to build a richer synthetic query col-
lection with coverage for both (a) tail entities, and (b) tail use-cases.
5 CONCLUSIONS
In this paper, we performed a preliminary analysis of the use of LLM-
based approaches for the generation of synthetic queries for training
a query prior used within a VA speech recognition system. We find
that template- and LLM-based approaches are complementary since
(a) template-based methods can generate queries for frequent use-
cases and infrequent entities, and (b) LLM-based methods are better
suited to target infrequent use-cases tailored to a specific entity. One
limitation of this work is that we relied on OpenAI’s API for LLMs.
However, we did not observe any significant differences in behavior
between the LLMs we experimented with, and we believe that the
overall conclusion that template- and LLM-based query generation
methods are complementary will remain valid. Another limitation
is that the LLM training data can bias the generated query priors,
however addressing this is out of the scope of the current work.
Future work includes approaches to mix together the results of the
multiple query generation methods, such that the final collection
aligns with user behavior; in addition to exploration of the prompt
used to query the LLM, use of more advanced prompting techniques
(e.g., chain of thought), and LLM fine-tuning.
ACKNOWLEDGMENTS
The authors would like to thank Manos Tsagkias, Lyan Verwimp,
Russ Web, Sameer Badaskar, and the anonymous reviewers for their
comments and feedback.
SPEAKER BIOGRAPHY
Sonal Sannigrahi is a PhD student at Instituto Superior Técnico
in Lisbon, Portugal working on multi-modal Natural Language
Processing. She previously worked on multilingual representation
learning and has published papers at EACL, ACL, amongst others.
Christophe Van Gysel is a Staff Research Scientist working on
the Siri Speech language modeling team at Apple where he works
on the boundary between ASR and Search. Christophe obtained
his PhD in Computer Science from the University of Amsterdam
in 2017. During his PhD, Christophe worked on neural ranking
using representation learning models with a focus on entities and
published at WWW, SIGIR, CIKM, WSDM, TOIS, amongst others.
COMPANY PROFILE
Apple revolutionised personal technology with the introduction of
the Macintosh in 1984. Today, Apple leads the world in innovation
with iPhone, iPad, Mac, Apple Watch, and Apple TV. Apple’s five
software platforms — iOS, iPadOS, macOS, watchOS, and tvOS —
provide seamless experiences across all Apple devices and empower
people with breakthrough services including the App Store, Apple
Music, Apple Pay, and iCloud. Apple’s more than 100,000 employees
are dedicated to making the best products on earth, and to leaving
the world better than we found it.
REFERENCES
[1] Marwah Alaofi, Luke Gallagher, Mark Sanderson, Falk Scholer, and Paul Thomas.
2023. Can Generative LLMs Create Query Variants for Test Collections? An
Exploratory Study. In SIGIR. 1869–1873.
[2] Ankur Gandhe, Ariya Rastrow, and Bjorn Hoffmeister. 2018. Scalable Language
Model Adaptation for Spoken Dialogue Systems. In SLT. IEEE.
[3] Sashank Gondala, Lyan Verwimp, Ernest Pusateri, Manos Tsagkias, and
Christophe Van Gysel. 2021. Error-driven pruning of language models for virtual
assistants. In ICASSP.
[4] David Graus, Daan Odijk, and Maarten de Rijke. 2018. The birth of collective
memories: Analyzing emerging entities in text streams. JAIST 69, 6 (2018).
[5] Jared Kaplan, Sam McCandlish, Tom Henighan, Tom B Brown, Benjamin Chess,
Rewon Child, Scott Gray, Alec Radford, Jeffrey Wu, and Dario Amodei. 2020.
Scaling laws for neural language models. arXiv preprint arXiv:2001.08361 (2020).
[6] Slava M. Katz. 1987. Estimation of Probabilities from Sparse Data for the Language
Model Component of a Speech Recognizer. ASSP 35 (1987).
[7] Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela
Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. 2022.
Training language models to follow instructions with human feedback. In NeurIPS,
Vol. 35.
[8] Ernest Pusateri, Christophe Van Gysel, Rami Botros, Sameer Badaskar, Mirko
Hannemann, Youssef Oualil, and Ilya Oparin. 2019. Connecting and comparing
language model interpolation techniques. In Interspeech.
[9] Juniper Research. 2019. Digital Voice Assistants in Use to Triple to 8 Billion by
2023, Driven by Smart Home Devices. Press Release.
[10] Mandana Saebi, Ernest Pusateri, Aaksha Meghawat, and Christophe Van Gysel.
2021. A discriminative entity-aware language model for virtual assistants. In
ICASSP.
[11] Hsuan Su, Ting-Yao Hu, Hema Swetha Koppula, Raviteja Vemulapalli, Jen-
Hao Rick Chang, Karren Yang, Gautam Varma Mantena, and Oncel Tuzel. 2024.
Corpus Synthesis for Zero-shot ASR Domain Adaptation using Large Language
Models. (2024).
[12] Andrew Trotman, Antti Puurula, and Blake Burgess. 2014. Improvements to
BM25 and language models examined. In Australasian Document Computing
Symposium.
[13] Christophe Van Gysel, Mirko Hannemann, Ernie Pusateri, Youssef Oualil, and Ilya
Oparin. 2022. Space-Efficient Representation of Entity-centric Query Language
Models. In Interspeech.
[14] Christophe Van Gysel, Manos Tsagkias, Ernest Pusateri, and Ilya Oparin. 2020.
Predicting entity popularity to improve spoken entity recognition by virtual
assistants. In SIGIR.
[15] Shuai Wang, Harrisen Scells, Bevan Koopman, and Guido Zuccon. 2023. Can
ChatGPT write a good boolean query for systematic review literature search?
(2023).
[16] Jason Wei, Yi Tay, Rishi Bommasani, Colin Raffel, Barret Zoph, Sebastian
Borgeaud, Dani Yogatama, Maarten Bosma, Denny Zhou, Donald Metzler, et al.
2022. Emergent abilities of large language models. Transactions on Machine
Learning Research (2022).
[17] Youyuan Zhang, Sashank Gondala, Thiago Fraga-Silva, and Christophe Van Gysel.
2023. Server-side Rescoring of Spoken Entity-centric Knowledge Queries for
Virtual Assistants. arXiv preprint arXiv:2311.01398 (2023).
|
synthetic_cpt | 2 | Neural_Codec_Language_Models_are_Zero-Shot_Text_to_Speech_Synthesizers.pdf | Towards audio language modeling - an overview
Haibin Wu1, Xuanjun Chen1∗, Yi-Cheng Lin1∗, Kai-wei Chang1, Ho-Lam Chung1,
Alexander H. Liu2, Hung-yi Lee1
1
4
2
0
2
b
e
F
0
2
]
S
A
.
s
s
e
e
[
1
v
6
3
2
3
1
.
2
0
4
2
:
v
i
X
r
a
Abstract—Neural audio codecs are initially introduced to
compress audio data into compact codes to reduce transmission
latency. Researchers recently discovered the potential of codecs as
suitable tokenizers for converting continuous audio into discrete
codes, which can be employed to develop audio language models
(LMs). Numerous high-performance neural audio codecs and
codec-based LMs have been developed. The paper aims to provide
a thorough and systematic overview of the neural audio codec
models and codec-based LMs.
Index Terms—Neural codec, codec-based language model
I. INTRODUCTION
Neural audio codec models were first introduced to com-
press audio for efficient data transmission. The encoder con-
verts the audio into codec codes, which are then transmitted.
The receiver then uses the codec decoder to reconstruct the
audio using the received codes.
Language modeling has proven to be highly successful
in the field of Natural Language Processing (NLP). Audio
data encompasses not only textual content but also rich in-
formation about speaker timbre, emotion, and general audio,
offering deeper possibilities for language model applications.
Researchers, especially those in large companies with sig-
nificant computational resources, recently leverage the po-
tential of neural codecs [1]–[8] as suitable tokenizers for
converting continuous audio into discrete codes, which can be
employed to develop audio language models (LMs) [9]–[20].
The current codec-based language models and codec models
are summarized in Figure 1. These findings promptly garnered
the community’s attention, sparking a fervor for developing
codecs tailored to audio language modeling. Numerous high-
performance neural audio codec models and audio LMs have
been developed.
An ideal codec should maintain content while preserving
paralinguistic and speaker-related information. Similarly, a
universal audio language model should be able to generalize
across various audio types, such as speech, music, and general
audio, covering a wide range of applications. The arms race
in developing codecs and audio LMs is still ongoing.
Given the significant advancements in codecs and audio
language models over the past three years as shown in Fig-
ure 1, there has yet to be a comprehensive review comparing
them and providing inspiration to the community. In this study,
we aim to fill this research gap by thoroughly reviewing and
comparing various existing neural codec models and audio
codec-based language models. Firstly, we specifically conduct
an in-depth analysis of six representative open-source neural
∗Equal second contribution. 1 National Taiwan University. 2Massachusetts
Institute of Technology.
codec models to cover their training methodologies, imple-
mentation settings, and training data. Secondly, we expand
our analysis to include eleven diverse codec-based language
models, examining how they utilize the codecs and the tasks
to which they can be applied. Through this comprehensive
review, we aim to offer the community insights into the diverse
methodologies and potential directions in the field of neural
codecs and codec-based language modeling.
II. COMPREHENSIVE COMPARISON FOR NEURAL AUDIO
CODEC MODELS
Codec models aim to compress and decompress speech
signals efficiently. Traditional codecs are developed based on
psycho-acoustics and speech synthesis [21], [22]. Recently,
the neural codec models have proven highly effective for
compression and signal reconstruction, outperforming tradi-
tional codecs. Considering the broad spectrum of codec models
within the research community, each trained with its distinct
configurations and training techniques, there is a clear need for
a thorough examination that covers the training methodologies,
implementation settings, and training data employed across
these codec models. The six codec models have distinct
training details, resulting in a collection of fifteen different
codec models, as summarized in Table I.
A. Brief method overview for codecs
SoundStream [2] stands as one of the pioneering implemen-
tations of neural codec models, embodying a classic neural
codec architecture comprising encoder, quantizer, and decoder
modules. It utilizes the streaming SEANets [23] as its encoder
and decoder. The quantizer incorporates a speech enhancement
system with a Residual Vector Quantization (RVQ) [2], [24]
bottleneck to obtain parallel token streams. During training,
the model parameters are optimized using a combination of
reconstruction and adversarial
loss. SoundStorm [3] is an
improved version of SoundStream to achieve both efficiency
and high-quality audio generation. It accomplishes this by em-
ploying an architecture specifically tailored to the hierarchical
structure of audio tokens. Moreover, it pioneers a parallel, non-
autoregressive decoding scheme, which relies on confidence-
based strategies for residual vector-quantized token sequences.
Encodec [1] builds upon a framework similar to Sound-
Stream. Nonetheless, it further augments its capabilities by
integrating supplementary LSTM [25] layers and harnessing
a Transformer-based language model [26] to model the RVQ
codes, thereby amplifying its sequence modeling performance.
there is a stream of work aimed at making codec
Then,
models more general and powerful. AudioDec [4] represents
2
Fig. 1. Timeline of current neural codec models and codec-based language models.
an enhanced version of Encodec, implementing a group con-
volution mechanism to facilitate the real-time operation of the
streamable network while also harnessing the capabilities of
HiFi-GAN [27] to effectively generate high-fidelity audio at a
high sampling rate of 48 kHz.
In the AcademiCodec model introduced by [5], a novel
technique known as group-residual vector quantization is
presented. It employs multiple parallel RVQ groups. This
technique is specifically tailored for generation tasks. It aims to
enhance the reconstruction performance while using a limited
number of codebooks, consequently achieving an impressively
low bit rate per second (BPS). This low BPS is of utmost
significance as it effectively addresses the challenge of lengthy
speech tokens in speech language modeling, resulting in
reduced sequence lengths.
It
SpeechTokenizer [7] is a unified speech tokenizer designed
implements an Encoder-
for speech language models.
Decoder architecture enhanced with RVQ. By integrating both
semantic and acoustic tokens, SpeechTokenizer hierarchically
separates various aspects of speech information across dif-
ferent RVQ layers. Specifically, SpeechTokenizer is designed
to regularize the first RVQ layer to highlight semantic in-
formation by learning the Hubert tokens [28]. Using such
techniques can enhance the disentanglement of information
across different RVQ layers.
Descript-audio-codec (DAC) [8], a universal neural codec
model, distinguishes itself through its exceptional ability to
maintain high-fidelity audio quality across a wide spectrum of
data types, encompassing general audio, music, and speech. It
accomplishes this feature by employing a number of train-
ing techniques, such as periodic activation functions [29],
enhanced residual vector quantization using factorized and
L2-normalized codes, random quantizer dropout to preserve
audio reconstruction quality, as well as refining adversarial and
reconstruction loss during the training process. The authors
highlight
importance of the periodic activation
function among the employed techniques.
the crucial
Unlike most models focusing on the time domain, Fun-
Codec [6] proposes a frequency-domain codec. The authors
claim they can achieve comparable performance with fewer
parameters and lower computation complexity. Meanwhile, it
also finds that incorporating semantic information in the codec
tokens improves speech quality at low bit rates.
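Residual vector quantization, which the codecs above build on, can be sketched in a few lines (random codebooks are used here purely for illustration; real codecs learn the codebooks jointly with the encoder and decoder):

```python
import numpy as np

def rvq_encode(frame: np.ndarray, codebooks: list[np.ndarray]) -> list[int]:
    """Quantize one latent frame with a stack of residual codebooks."""
    residual, codes = frame.copy(), []
    for codebook in codebooks:                      # one RVQ stage per codebook
        distances = np.linalg.norm(codebook - residual, axis=1)
        idx = int(np.argmin(distances))             # nearest codeword
        codes.append(idx)
        residual = residual - codebook[idx]         # next stage quantizes the residual
    return codes

def rvq_decode(codes: list[int], codebooks: list[np.ndarray]) -> np.ndarray:
    return sum(cb[i] for cb, i in zip(codebooks, codes))

rng = np.random.default_rng(0)
codebooks = [rng.normal(size=(1024, 128)) for _ in range(8)]   # 8 codebooks of 1024 entries
frame = rng.normal(size=128)
codes = rvq_encode(frame, codebooks)
print(codes, np.linalg.norm(frame - rvq_decode(codes, codebooks)))
```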
B. Comparison from methodology angles
We compare several techniques proposed by these codecs
in Table II. The abbreviation “A-F” represents different codec
models. Please refer to Table I for the corresponding model
full name. The design of discriminators constitutes a pivotal
element within codec models. Encodec initially introduces the
Multi-scale-STFT Discriminator (MS-STFTD). In contrast to
the multi-scale discriminator (MSD) proposed in MelGAN
[24], which captures long-term dependencies, the multi-period
discriminator (MPD) proposed in HiFi-GAN [30] exhibits
a capacity to discern more nuanced periodic details. Con-
sequently, AudioDec replaces the conventionally employed
STFTD with a HiFi-GAN-based MPD, observing an enhance-
ment
in audio quality within their model. AcademiCodec
integrates prior research efforts by incorporating the MS-
STFTD from Encodec and both HiFi-GAN-based MPD and
MSD. Both SpeechTokenizer and Funcodec adopt identical
discriminators to AcademiCodec, with Funcodec offering a
unified interface adaptable to any combination of these three
discriminator types. DAC identifies that employing MSD and
MPD alone generates audio displaying blurriness and artifacts.
To address this, they propose the application of a multi-scale,
multi-band STFT discriminator (MS-MB-STFTD) to improve
phase modeling and mitigate aliasing artifacts.
SpeechTokenizer utilizes semantic tokens from Hubert L9
as a teacher for the RVQ process. This guidance enables the
disentanglement of content information into the first layer
of the tokenizer, while paralinguistic information is retained
in subsequent layers. FunCodec seeks to integrate semantic
information by combining, adding, or residualizing the audio
codec with semantic tokens. The study reveals that including
semantic tokens enhances audio quality, particularly with the
residual inclusion method. Additionally, SpeechTokenizer and
FunCodec utilize K-means to cluster samples in the first mini-
batch for initializing the VQ codebook, leading to improved
code utilization. DAC follows the approach of BigVGAN [31],
employing snake activation [29] for trainable control over the
frequency of periodic signals. AcademiCodec employs mul-
tiple RVQ codebooks (multiple residual groups) to represent
intermediate features. They demonstrate that using multiple
residual groups achieves good reconstruction performance
while employing only a few codebooks. Encodec trains an
2021.06Neural Codec Model2023.062022.122023.12Codec-based Language ModelMusicLM [11]SoundStream [2]AudioLM [9]EnCodec [1]FuncCodec [6]AudioGen [20]SpeechTokenizer [7]VioLA [14]AudioDec [4]AcademicCodec [5]VALL-E X [13]SoundStorm [3]VALL-E [12]MusicGen [18]SpeechX [17]AudioPaLM [10]DAC [8]LauraGPT [16]UniAudio [15]TABLE I
CODEC INFORMATION COMPARISON. ”A-F” REPRESENTS DIFFERENT
NEURAL CODEC MODELS, WHERE ”A” IS SPEECHTOKENIZER [7], ”B∼”
IS ACADEMICODEC [5], ”C” IS AUDIODEC [4], ”D∼” IS DAC [8], ”E∼”
IS ENCODEC [1], AND ”F∼” IS FUNCODEC [6]. nc REPRESENTS THE
CODEBOOK NUMBER, SR REPRESENTS THE SAMPLE RATE, AND BPS
REPRESENTS THE BIT RATE IN UNIT BITS PER SECOND.
Codec information
Training data
nc SR BPS
3
A
B1
B2
B3
C
D1
D2
D3
E1
E2
E3
E4
E5
16k
hifi 16k 320d
hifi 16k 320d large uni
hifi 24k 320d
24k 320d
16k
24k
44k
24k 1.5bps
24k 3bps
24k 6bps
24k 12bps
24k 24bps
F1 en libritts 16k gr1nq32ds320
F2 en libritts 16k gr8nq32ds320
F3
F4
F5
F6
en libritts 16k nq32ds320
en libritts 16k nq32ds640
zh en 16k nq32ds320
zh en 16k nq32ds640
Librispeech
LibriTTS
VCTK
AISHELL
Valentini
8
4
4
4
8
16
16
16
24
24
Common Voice, DAPS 12 16
32 24
9 44.1
VCTK, MUSDB
Jamendo, AudioSet
Common Voice
DAPS, Jamendo
AudioSet, FSD50K
24
2
24
4
8
24
16 24
32 24
Subset of LibriTTS
32 16
32 16
32 16
32 16
25k hours collected data 32 16
32 16
(en and zh-cn)
4
2
2
3
6.4
6
24
8
1.5
3
6
12
24
16
16
16
8
16
8
TABLE II
COMPARISON BETWEEN CODEC IMPLEMENTATION STRATEGY. SEM
REPRESENTS CODEC INCLUDING SEMANTIC TOKENS. SNAKE REPRESENTS
THE CODEC MODEL THAT EMPLOYS SNAKE ACTIVATION. MRG
REPRESENTS CODEC HAS MULTIPLE RESIDUAL GROUPS. NOISY
REPRESENTS CODEC UTILIZES NOISY DATA IN TRAINING. LM
REPRESENTS THE MODEL INCLUDING LANGUAGE MODEL TRAINING. KM
REPRESENTS CODEC USES K-MEANS TO CLUSTER SAMPLES AS
INITIALIZATION OF VQ CODEBOOK.
Codec
Discriminators
SEM Snake MRG Noisy LM KM
A
B
C
D
E
F
MSD + MPD + MS-STFTD ✓
MSD + MPD + MS-STFTD ✗
✗
✗
✗
MSD + MPD + MS-STFTD ✓
MPD
MPD + MS-MB-STFTD
MS-STFTD
✗
✗
✗
✓
✗
✗
✗
✓
✗
✗
✗
✗
✗
✗
✗
✗
✗
✓
✗ ✓
✗
✗
✗
✗
✗
✗
✓ ✗
✗ ✓
additional small transformer model for entropy coding over
the quantized units, which reduces bandwidth and accelerates
encoding and decoding.
C. Implementation details
We compare the codebook number, training data, sampling
rate, and bit rate per second in Table I. From the training
data perspective, SpeechTokenizer [7], AudioDec [4], and
FunCodec [6] utilize only English speech dataset. Academi-
Codec [5] incorporates bilingual speech datasets, including
AISHELL for Chinese and LibriTTS and VCTK for English.
Both DAC [8], and Encodec [1] encompass diverse modality
data, including speech, music, and audio, in the training data.
Fig. 2. Codec-based Language Modeling
III. CURRENT CODEC-BASED SPEECH LANGUAGE MODELS
As shown in Figure 2, the process of neural codec-based audio
language modeling begins by converting context information,
such as text and MIDI, into context codes, while simulta-
neously encoding the audio into codec codes. These context
and codec codes are then employed in the language modeling
phase to generate the desired target codec code sequence.
Subsequently, the target codec code sequence is passed to the
codec decoder to produce the audio output. The entire pipeline
embodies an audio-to-audio modeling approach.
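Schematically, this pipeline amounts to three calls; the sketch below uses placeholder components standing in for the context encoder, the neural codec, and the trained language model:

```python
def generate_audio(context, audio_prompt, context_encoder, codec, codec_lm):
    """Codec-based audio language modeling: context + codec codes -> new codec codes -> audio."""
    context_codes = context_encoder(context)          # e.g., text or MIDI tokens
    prompt_codes = codec.encode(audio_prompt)         # discrete codec codes of the audio prompt
    target_codes = codec_lm.generate(context_codes, prompt_codes)
    return codec.decode(target_codes)                 # reconstruct the output waveform
```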
A. Overview for codec-based LMs
AudioLM [9] is the pioneering model in introducing codec
codes for language modeling, utilizing a hierarchical approach
that encompasses two distinct stages. The first stage generates
semantic tokens using a self-supervised w2v-BERT model
[32]. These tokens are then leveraged in the second stage
as conditioning elements to create acoustic tokens using a
SoundStream neural codec [2].
VALL-E [12], VALL-E X [13], and SpeechX [17], all orig-
inate from Microsoft and are neural codec language models
trained to generate discrete codes derived from EnCodec [1],
based on textual or acoustic inputs. VALL-E can generate
high-quality personalized speech with only a 3-second enroll-
ment recording from an unseen speaker. Furthermore, VALL-
E X can produce high-quality speech in the target language
with just a single speech utterance in the source language as a
prompt. Additionally, SpeechX introduces a unified framework
to address not only zero-shot TTS but also various types of
speech transformation tasks, including speech enhancement
and speech editing.
What sets ViaLA [14], AudioPaLM [10], and LauraGPT
[16] apart is their dual capability to generate both text and
audio. VioLA tries to tackle the question “Is one decoder-
only generative model all you need for speech recognition,
synthesis, and translation?” by employing language modeling
that integrates both text tokens and audio tokens (extracted by
EnCodec [1]), along with the use of task IDs and language
IDs. AudioPaLM constructs a unified vocabulary comprising
both text and audio tokens. It is a decoder-only, autoregres-
sive model capable of processing and generating both text
and speech. Additionally, AudioPaLM’s initialization stems
from PaLM-2 [33], a text-only language model. AudioPaLM’s
approach to audio tokenization resembles that of AudioLM.
Moreover, AudioPaLM adopts and extends the SoundStream
to SoundStorm [3]. LauraGPT [16] is a versatile
model
language model built on a decoder-only text-based language
model, Qwen-2B [34]. LauraGPT has the capability to pro-
cess both audio and text inputs, generating outputs in either
modality. LauraGPT encodes input audio into continuous
representations using a Conformer encoder and decodes output
audio using FunCodec [6] discrete codes. The authors claim
this specific audio features design for inputs and outputs will
result in improved performance for speech generation using
some preliminary experimental results.
UniAudio [15] utilizes language modeling to generate a
wide range of audio types, including speech, sounds, music,
and singing, using textual or acoustic tokens as inputs. Uni-
Audio stands out for its ability to enhance autoregressive pre-
diction speed by introducing a multi-scale Transformer model
[35], which employs a large global transformer to predict the
first-layer codec codes and a small local transformer to predict
the codec codes for the subsequent codec layers. The codec
model in UniAudio is revised from EnCodec.
Additionally, there are other codec-based language models
designed for sound modeling. AudioGen [20] trained a Sound-
Stream model to get audio tokens and subsequently trained a
language model to utilize textual features as conditions for
generating audio tokens. MusicLM [11] follows a training
strategy similar to AudioLM but extends its scope to en-
compass music features. It approaches the task of conditional
music generation through a hierarchical sequence-to-sequence
modeling approach. Initially, it utilizes music tokens from
Mulan [36] to generate semantic tokens from the w2v-BERT
model. Subsequently, it employs both music tokens and seman-
tic tokens to generate acoustic features through Soundstream.
MusicGen [18] is a music language model designed to work
with EnCodec discrete tokens. It accepts textual descriptions
or melodic features as input conditions to generate tokens,
which can be reconstructed to high-fidelity music.
Another branch of speech language modeling aims to utilize
discrete units obtained by quantizing self-supervised speech
representations. While these discrete units contain rich acous-
tic and linguistic information [37],
they lack speaker and
paralinguistic information [38]. This research direction focuses
on modeling the semantics of speech, with the optional use of
encoders to learn about speaker characteristics and prosody.
Pioneering work is speech-resynthesis [38], which utilizes
these discrete units in conjunction with prosody and speaker
encoders to encode speech into low-bitrate codes. These codes
can then be resynthesized into a speech signal with a decoder
to achieve low-bitrate transmission. Additionally, these dis-
crete units can be regarded as “pseudo-text,” serving as a foun-
dation for training textless speech language models. Notable
examples include GSLM [39], pGSLM [40], dGSLM [41],
and TWIST [42]. By engaging in the pre-trained task of next-
token prediction, these speech LMs perform spoken language
4
TABLE III
CODEC-BASED LANGUAGE MODELS COMPARISON. ”T” MEANS TEXT,
”AUD” MEANS AUDIO, ”P” MEANS PHONEME, AND ”M” MEANS MIDI.
CLM
Task
Input Output
Codec
AudioLM [9]
AudioGen [20]
VALL-E [12]
MusicLM [11]
VALL-E X [13]
VioLA [14]
MusicGen [18]
AudioPaLM [10] ASR, S2TT, TTS, MT
SpeechX [17]
SC, PC
AC
TTS
MG
TTS, S2ST
ASR, S2TT, TTS, MT
MG, SG
AUD
EnCodec [1]
AUD SoundStream [2]
AUD, T AUD SoundStream [2]
AUD, T AUD
AUD, T AUD SoundStream [2]
AUD,T AUD
AUD,T AUD,T
AUD
AUD
AUD,T AUD,T SoundStorm [3]
EnCodec [1]
EnCodec [1]
EnCodec [1]
SE, SR, TSE, TTS, SPED AUD,T AUD
EnCodec [1]
LauraGPT [16]
ASR, S2TT, TTS, MT, SE
AAC, SER, SLU
AUD,T AUD, T FunCodec [6]
UniAudio [15]
TTS, VC, SE, TSE, SVS
TTSO, TTM, AUED
SD, ITTS, SPED
P, M,
AUD,T AUD
EnCodec [1]
modeling and can conduct the task of speech continuation. In
the field of speech translation, recent advancements have been
made possible through these discrete units. [43] pre-trained a
Unit mBART combined with a wav2vec 2.0 [44] encoder to
directly predict the translated discrete units. UnitY [45] further
incorporates text modality to enhance speech translation. The
Seamless models [46], [47] integrate the UnitY framework to
perform expressive and streaming speech-to-text and speech-
to-speech translation. With the development of these powerful
speech LMs, researchers have begun to explore the use of
prompting on speech LMs for various speech processing tasks,
including prompt tuning [48]–[50], in-context learning [51],
and instruction tuning [52], [53].
B. Comparison for Codec-based audio language models
In Table III, we compare the inputs, outputs, and down-
stream tasks of different codec-based language models. We
also summarize that the downstream tasks conducted by differ-
ent codec-based language models: Speech Continuation (SC),
Piano Continuation (PC), Audio Continuation (AC), Text-to-
Speech (TTS), Music Generation (MG), Stereophonic Gener-
ation (SG), Speech to Speech Translation (S2ST), Automatic
Speech Recognition (ASR), Spoken Language Understand-
ing (SLU), Automated Audio Captioning (AAC), Speech to
Text Translation (S2TT), Machine Translation (MT), Speech
Enhancement (SE), Speech Removal (SR), Target Speaker
Extraction (TSE), Speech Editing (SPED), Voice Conversion
(VC), Singing Voice Synthesis (SVS), Text-to-Sound (TTSO),
Text-to-Music (TTM), Audio Editing (AUED), Speech Dere-
verb (SD), Instructed TTS (ITTS). Finally, we show the codec
models adopted by different LMs.
IV. CONCLUSION
The paper fills this research gap by reviewing the neural codec
models and the LMs built upon them. We hope the comprehensive
review and comparisons can inspire future research works to
boost the development of neural codec models and codec-
based LMs.
[30] Jungil Kong, Jaehyeon Kim, and Jaekyoung Bae, “Hifi-gan: Generative
adversarial networks for efficient and high fidelity speech synthesis,”
2020.
[31] Sang-gil Lee, Wei Ping, Boris Ginsburg, Bryan Catanzaro, and Sungroh
Yoon, “Bigvgan: A universal neural vocoder with large-scale training,”
arXiv preprint arXiv:2206.04658, 2022.
[32] Yu-An Chung et al., “W2v-bert: Combining contrastive learning and
masked language modeling for self-supervised speech pre-training,” in
2021 IEEE Automatic Speech Recognition and Understanding Workshop
(ASRU). IEEE, 2021, pp. 244–250.
[33] Rohan Anil et al.,
“Palm 2 technical
report,”
arXiv preprint
arXiv:2305.10403, 2023.
[34] Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng,
Yang Fan, Wenbin Ge, Yu Han, Fei Huang, et al., “Qwen technical
report,” arXiv preprint arXiv:2309.16609, 2023.
[35] Lili Yu, Dániel Simig, Colin Flaherty, Armen Aghajanyan, Luke Zettle-
moyer, and Mike Lewis, “Megabyte: Predicting million-byte sequences
with multiscale transformers,” arXiv preprint arXiv:2305.07185, 2023.
[36] Qingqing Huang, Aren Jansen, Joonseok Lee, Ravi Ganti, Judith Yue
Li, and Daniel PW Ellis, “Mulan: A joint embedding of music audio
and natural language,” arXiv preprint arXiv:2208.12415, 2022.
[37] Dan Wells, Hao Tang, and Korin Richmond, “Phonetic Analysis of Self-
supervised Representations of English Speech,” in Proc. Interspeech
2022, 2022, pp. 3583–3587.
[38] Adam Polyak et al., “Speech resynthesis from discrete disentangled
self-supervised representations,” in Interspeech, 2021, pp. 3615–3619.
[39] Kushal Lakhotia et al., “On generative spoken language modeling
from raw audio,” Transactions of the Association for Computational
Linguistics, vol. 9, pp. 1336–1354, 2021.
[40] Eugene Kharitonov et al., “Text-free prosody-aware generative spoken
language modeling,” arXiv preprint arXiv:2109.03264, 2021.
[41] Tu Anh Nguyen et al., “Generative spoken dialogue language modeling,”
Transactions of the Association for Computational Linguistics, vol. 11,
pp. 250–266, 2023.
[42] Michael Hassid et al., “Textually pretrained speech language models,”
arXiv preprint arXiv:2305.13009, 2023.
[43] Sravya Popuri et al., “Enhanced Direct Speech-to-Speech Translation
Using Self-supervised Pre-training and Data Augmentation,” in Proc.
Interspeech 2022, 2022, pp. 5195–5199.
[44] Alexei Baevski et al., “wav2vec 2.0: A framework for self-supervised
learning of speech representations,” Advances in neural information
processing systems, vol. 33, pp. 12449–12460, 2020.
[45] Hirofumi Inaguma et al.,
“Unity: Two-pass direct speech-to-speech
translation with discrete units,” arXiv preprint arXiv:2212.08055, 2022.
[46] Loïc Barrault et al., “Seamlessm4t-massively multilingual & multimodal
machine translation,” arXiv preprint arXiv:2308.11596, 2023.
[47] Loïc Barrault et al., “Seamless: Multilingual expressive and streaming
speech translation,” arXiv preprint arXiv:2312.05187, 2023.
[48] Kai-Wei Chang et al., “An Exploration of Prompt Tuning on Generative
Spoken Language Model for Speech Processing Tasks,” in Proc.
Interspeech 2022, 2022, pp. 5005–5009.
[49] Kai-Wei Chang et al., “Speechprompt v2: Prompt tuning for speech
classification tasks,” arXiv preprint arXiv:2303.00733, 2023.
[50] Haibin Wu, Kai-Wei Chang, Yuan-Kuei Wu, and Hung-yi Lee, “Speech-
gen: Unlocking the generative power of speech language models with
prompts,” arXiv preprint arXiv:2306.02207, 2023.
[51] Ming-Hao Hsu et al., “An exploration of in-context learning for speech
language model,” arXiv preprint arXiv:2310.12477, 2023.
[52] Chun-Yi Kuan, Chen-An Li, et al.,
“Towards general-purpose text-
instruction-guided voice conversion,” in 2023 IEEE Automatic Speech
Recognition and Understanding Workshop (ASRU). IEEE, 2023, pp. 1–8.
[53] Chien-yu Huang, Ke-Han Lu, et al., “Dynamic-superb: Towards a dy-
namic, collaborative, and comprehensive instruction-tuning benchmark
for speech,” arXiv preprint arXiv:2309.09510, 2023.
[1] Alexandre Défossez et al., “High fidelity neural audio compression,”
arXiv preprint arXiv:2210.13438, 2022.
[2] Neil Zeghidour et al., “Soundstream: An end-to-end neural audio codec,”
IEEE/ACM Transactions on Audio, Speech, and Language Processing,
vol. 30, pp. 495–507, 2021.
[3] Zalán Borsos et al., “Soundstorm: Efficient parallel audio generation,”
arXiv preprint arXiv:2305.09636, 2023.
[4] Yi-Chiao Wu et al.,
“Audiodec: An open-source streaming high-
fidelity neural audio codec,” in ICASSP 2023-2023 IEEE International
Conference on Acoustics, Speech and Signal Processing (ICASSP).
IEEE, 2023, pp. 1–5.
[5] Dongchao Yang et al., “Hifi-codec: Group-residual vector quantization
for high fidelity audio codec,” arXiv preprint arXiv:2305.02765, 2023.
[6] Zhihao Du, Shiliang Zhang, Kai Hu, and Siqi Zheng, “Funcodec: A
fundamental, reproducible and integrable open-source toolkit for neural
speech codec,” arXiv preprint arXiv:2309.07405, 2023.
[7] Xin Zhang, Dong Zhang, Shimin Li, Yaqian Zhou, and Xipeng Qiu,
“Speechtokenizer: Unified speech tokenizer for speech large language
models,” arXiv preprint arXiv:2308.16692, 2023.
[8] Rithesh Kumar, Prem Seetharaman, Alejandro Luebs, Ishaan Kumar,
and Kundan Kumar, “High-fidelity audio compression with improved
rvqgan,” arXiv preprint arXiv:2306.06546, 2023.
[9] Zalán Borsos et al., “Audiolm: a language modeling approach to audio
generation,” IEEE/ACM Transactions on Audio, Speech, and Language
Processing, 2023.
[10] Paul K Rubenstein et al., “Audiopalm: A large language model that can
speak and listen,” arXiv preprint arXiv:2306.12925, 2023.
[11] Andrea Agostinelli et al., “Musiclm: Generating music from text,” arXiv
preprint arXiv:2301.11325, 2023.
[12] Chengyi Wang et al., “Neural codec language models are zero-shot text
to speech synthesizers,” arXiv preprint arXiv:2301.02111, 2023.
[13] Ziqiang Zhang et al.,
“Speak foreign languages with your own
voice: Cross-lingual neural codec language modeling,” arXiv preprint
arXiv:2303.03926, 2023.
[14] Tianrui Wang et al., “Viola: Unified codec language models for speech
recognition, synthesis, and translation,” arXiv preprint arXiv:2305.16107, 2023.
[15] Dongchao Yang et al., “Uniaudio: An audio foundation model toward
universal audio generation,” arXiv preprint arXiv:2310.00704, 2023.
[16] Qian Chen et al., “Lauragpt: Listen, attend, understand, and regenerate
audio with gpt,” arXiv preprint arXiv:2310.04673, 2023.
[17] Xiaofei Wang et al.,
“Speechx: Neural codec language model as a
versatile speech transformer,” arXiv preprint arXiv:2308.06873, 2023.
[18] Jade Copet et al., “Simple and controllable music generation,” arXiv
preprint arXiv:2306.05284, 2023.
[19] Gael Le Lan et al., “Stack-and-delay: a new codebook pattern for music
generation,” arXiv preprint arXiv:2309.08804, 2023.
[20] Felix Kreuk et al., “Audiogen: Textually guided audio generation,” arXiv
preprint arXiv:2209.15352, 2022.
[21] Jean-Marc Valin et al., “Rfc 6716: Definition of the opus audio codec,”
2012.
[22] Martin Dietz et al., “Overview of the EVS codec architecture,” in
2015 IEEE International Conference on Acoustics, Speech and Signal
Processing (ICASSP). IEEE, 2015, pp. 5698–5702.
[23] Marco Tagliasacchi et al., “Seanet: A multi-modal speech enhancement
network,” arXiv preprint arXiv:2009.02095, 2020.
[24] Kundan Kumar et al., “Melgan: Generative adversarial networks for
conditional waveform synthesis,” Advances in neural information
processing systems, vol. 32, 2019.
[25] Sepp Hochreiter and Jürgen Schmidhuber, “Long short-term memory,”
Neural computation, vol. 9, no. 8, pp. 1735–1780, 1997.
[26] Ashish Vaswani et al., “Attention is all you need,” Advances in neural
information processing systems, vol. 30, 2017.
[27] Jungil Kong, Jaehyeon Kim, and Jaekyoung Bae, “Hifi-gan: Generative
adversarial networks for efficient and high fidelity speech synthesis,”
Advances in Neural Information Processing Systems, vol. 33, pp. 17022–
17033, 2020.
[28] Wei-Ning Hsu et al., “Hubert: Self-supervised speech representation
learning by masked prediction of hidden units,” IEEE/ACM Transactions
on Audio, Speech, and Language Processing, vol. 29, pp. 3451–3460,
2021.
[29] Liu Ziyin, Tilman Hartwig, and Masahito Ueda, “Neural networks fail
to learn periodic functions and how to fix it,” Advances in Neural
Information Processing Systems, vol. 33, pp. 1583–1594, 2020.
|
synthetic_cpt | 1 | I-BERT_Integer-only_BERT_Quantization.pdf | 3
arXiv:2209.13319v2 [math.AC] 10 Apr 2023
BOUNDS FOR THE REDUCTION NUMBER OF PRIMARY IDEAL IN
DIMENSION THREE
MOUSUMI MANDAL AND KUMARI SALONI
Abstract. Let (R, m) be a Cohen-Macaulay local ring of dimension d ≥ 3 and I an m-primary
ideal of R. Let rJ (I) be the reduction number of I with respect to a minimal reduction J of
I. Suppose depth G(I) ≥ d − 3. We prove that rJ (I) ≤ e1(I) − e0(I) + λ(R/I) + 1 + (e2(I) −
1)e2(I) − e3(I), where ei(I) are Hilbert coefficients. Suppose d = 3 and depth G(I t) > 0 for
some t ≥ 1. Then we prove that rJ (I) ≤ e1(I) − e0(I) + λ(R/I) + t.
1. Introduction

Let (R, m) be a Noetherian local ring of dimension d ≥ 1 and I an m-primary ideal. A sequence
of ideals 𝓘 = {In}n∈Z is called an I-admissible filtration if (i) In+1 ⊆ In, (ii) ImIn ⊆ Im+n, and
(iii) I^n ⊆ In ⊆ I^(n−k) for some k ∈ N. A reduction of 𝓘 is an ideal J ⊆ I1 such that JIn = In+1
for n ≫ 0, and it is called a minimal reduction if it is minimal with respect to containment among
all reductions. A minimal reduction of 𝓘 exists and is generated by d elements if R/m is infinite.
Minimal reductions are important in the study of Hilbert functions and blow-up algebras. For a
minimal reduction J of 𝓘, we define

rJ(𝓘) = sup{n ∈ Z | In ≠ JIn−1}  and  r(𝓘) = min{rJ(𝓘) | J is a minimal reduction of 𝓘},

known as the reduction number of 𝓘 with respect to J and the reduction number of 𝓘 respectively.
We write rJ(I) and r(I) in place of rJ(𝓘) and r(𝓘) respectively when 𝓘 = {I^n}n∈Z. The reduction
number is an important invariant associated to an ideal which contains information about the depth
and structural properties of the associated graded ring G(𝓘) = ⊕n≥0 In/In+1. The number rJ(I) can
be seen as a measure of how closely J and I are related. In general, it may be hard to compute
rJ(I). We look for bounds on rJ(I) in terms of other invariants of the ring or the ideal, such
as the embedding dimension, Hilbert coefficients etc. The Hilbert coefficients of 𝓘 are the unique
integers ei(𝓘), 0 ≤ i ≤ d, such that the function H𝓘(n) := λ(R/In) coincides with the following
polynomial for n ≫ 0:

(∗)  P𝓘(x) = e0(𝓘)\binom{x+d-1}{d} − e1(𝓘)\binom{x+d-2}{d-1} + ⋯ + (−1)^d ed(𝓘).

Here λ denotes the length function. The function H𝓘(x) and the polynomial P𝓘(x) are
known as the Hilbert-Samuel function and the Hilbert-Samuel polynomial of 𝓘 respectively. For
𝓘 = {I^n}, we write ei(I) instead of ei(𝓘). We refer to [19] for the related background material.

It is well known that if depth G(I) ≥ d − 1, then rJ(I) does not depend on the minimal reduction
J. Further, if R is a one dimensional Cohen-Macaulay local ring then r(I) ≤ e0(I) − 1. In [20,
Theorem 2.45], Vasconcelos proved that in a Cohen-Macaulay local ring of dimension d ≥ 1,

(1)  r(I) ≤ d·e0(I)/o(I) − 2d + 1,

where o(I) is the largest positive integer n such that I ⊆ m^n.
A non-Cohen-Macaulay version of the above result can be found in [4, Theorem 3.3]. Let R be
a Cohen-Macaulay local ring of dimension at most two. In [16, Corollary 1.5], Rossi proved that

(2)  rJ(I) ≤ e1(I) − e0(I) + λ(R/I) + 1

for a minimal reduction J ⊆ I. Since then many attempts have been made to achieve a bound of
similar character in higher dimension. For instance, the bound in (2) holds in all dimensions if
depth G(I) ≥ d − 2 [19, Theorem 4.3] or if e1(I) = e0(I) − λ(R/I) + 1 [2, Theorem 3.1]. Another
case is when I ⊆ k[[x, y, z]] is of codimension 3 generated by five quadrics [6, Theorem 2.1 and
Proposition 2.4]. However, no example is known to counter the relation in (2) in higher dimension.
In this paper, our objective is to find bounds for rJ(I) in dimension three involving higher Hilbert
coefficients. We prove the following generalization of Rossi’s result [19, Theorem 4.3] in dimension
three:

Theorem 1.1. Let (R, m) be a Cohen-Macaulay local ring of dimension three and I an m-primary
ideal with depth G(I^t) > 0 for some t ≥ 1. Let J ⊆ I be a minimal reduction of I. Then

(3)  rJ(I) ≤ e1(I) − e0(I) + λ(R/I) + t.

Furthermore, if rJ(I) ≡ k mod t, 1 ≤ k ≤ t − 1, then rJ(I) ≤ e1(I) − e0(I) + λ(R/I) + k.

As a consequence, if rJ(I) is odd and depth G(I^2) > 0 then rJ(I) ≤ e1(I) − e0(I) + λ(R/I) + 1.
Furthermore, we prove the following bound in dimension d ≥ 3.

Theorem 1.2. Let (R, m) be a Cohen-Macaulay local ring of dimension d ≥ 3 and I an m-
primary ideal with depth G(I) ≥ d − 3. Then

(4)  rJ(I) ≤ e1(I) − e0(I) + λ(R/I) + 1 + e2(I)(e2(I) − 1) − e3(I).

Though the bound in (4) is quadratic in e2(I), we illustrate various examples where it is tighter
than earlier known bounds. For small values of e2(I), we obtain linear bounds
for rJ(I) in terms of ei(I), 0 ≤ i ≤ 3, in Corollary 4.3.

It is worth asking if we can find similar bounds for rJ(I) in Noetherian local rings. In [5], Ghezzi
et al. proved a nonlinear bound in terms of Hilbert coefficients in two dimensional Buchsbaum
local rings of positive depth. We prove the following results:

Theorem 1.3. Let (R, m) be a Buchsbaum local ring of dimension d ≤ 2 and I an m-primary
ideal.
(1) Let d = 1. Then rJ(I) ≤ e1(I) − e1(J) − e0(I) + λ(R/I) + 2.
(2) Let d = 2 and depth G(I^t) > 0 for some t ≥ 1. Then rJ(I) ≤ e1(I) − e1(J) − e0(I) + λ(R/I) + t + 1.

The main difficulty in generalizing the bound in (2) in higher dimension is that rJ(I) does not
behave well with respect to superficial elements. This fact is closely related to the Ratliff-Rush
closure {Ĩ^n}n∈Z of powers of I, see Lemma 3.1. We recall the definition more generally for an I-
admissible filtration 𝓘. The Ratliff-Rush filtration of 𝓘 is the filtration {Ĩn = ⋃t≥0 (In+t : I^t)}n∈Z.
For a minimal reduction J of 𝓘, we set

r̃J(𝓘) := sup{n ∈ Z | Ĩn ≠ JĨn−1}.

We write r̃J(I) if 𝓘 = {I^n}. Note that if depth G(I) > 0, then r̃J(I) = rJ(I). In [17], Rossi and
Swanson proved that in a Cohen-Macaulay local ring of dimension two r̃J(I) ≤ rJ(I). It follows
that r̃J(I) ≤ e1(I) − e0(I) + λ(R/I) + 1 in dimension two. We extend the result of Rossi and
Swanson for any I-admissible filtration in Proposition 2.1.

It is natural to ask if rJ(𝓘) ≤ e1(𝓘) − e0(𝓘) + λ(R/I1) + 1 for an I-admissible filtration 𝓘. This is
largely unknown, even in smaller dimensions. In Theorem 2.2, we prove that r̃J(𝓘) ≤ e2(𝓘) + 1
and subsequently discuss the cases when r̃J(𝓘) = e2(𝓘) + 1 and r̃J(𝓘) = e2(𝓘).

This paper is organised in four sections. In Section 2, we prove bounds on r̃J(𝓘) in two
dimensional Cohen-Macaulay local rings. In Section 3, we establish Theorem 1.1 and its
consequences. Then we gather various cases when the bound rJ(I) ≤ e1(I) − e0(I) + λ(R/I) + 1
holds in dimension three. We also prove Theorem 1.3 in this section. In Section 4, we prove
Theorem 1.2. Examples are given in support of our bounds being better than the earlier known
bounds.

2020 Mathematics Subject Classification. 13H10, 13D40, 13A30.
Key words and phrases. Cohen-Macaulay local rings, reduction number, Ratliff-Rush filtration, Hilbert coefficients.

2. Bound for r̃J(𝓘) in dimension two
For this section, let R be a Cohen-Macaulay local ring of dimension two. Inspired by Rossi’s
1 for an I-admissible filtration
bound in (2), one can ask whether rJ (I) ≤
? In general, it is not known. Suppose I
}n∈Z, the above question
I
has affirmative answer which follows from a result of Rossi and Swanson [17, Proposition 4.5].
. We generalize [17, Proposition 4.5] for an I admissible filtration.
I
rJ (
They proved that
)
Further in this section, we prove certain bounds for
e0(I)+
e1(I)−
˜I. Then for the case
=
I = {̃I n
I1)+
I
rJ (
̃
R
λ
(
) ≤
/
rJ (I)
.
̃
R, m
(
an I-admissible filtration. Then, for a minimal reduction J of
be a Cohen-Macaulay local ring of dimension two, I an m-primary
rJ (I) ≤
̃
I
)
,
In}
I = {
Proposition 2.1. Let
ideal and
rJ (I)
.
Proof. Let r
r, ̃In+1 =
n
≥
∈ ̃In+1 = (
a
axk
xkb
+
sequence we have a
In+k ∶
a
(
rJ (I)
=
J ̃In. For k
In+1+k ∶ (
yc where b
b
−
xk
dy
−
=
=
y
∈
b
)
x, y
and J
with x, y a regular sequence in R. We show that for all
for m
n
1. Let
))
=
−
J k+1In ⊆
xkxIn +
yIn+k. Let
∈
yc. Since x, y is a regular
b
) =
In+k, we get d
and
Im+k ∶ (
. Then axk
In+k. This gives xk
a
(
R. As c
xk, yk
1, n, n
xk
+
≫
xk, yk
= (
0, we may write ̃Im = (
J k+1In ∶ (
))
)) = (
xIn and c
∈
∈
xkd for some d
dy and c
=
=
. Therefore,
)
xk, yk
In+k ∶
−
∈
∈ (
)
∈
By similar arguments, we can show that a
xk
and s2 ∈ (
where r1, r2 ∈
αy for some α
s1 =
r2 −
In, s1 ∈ (
αx and r1 −
In+k ∶
s2 =
)
a
∈
xIn +
xk
y
In+k ∶
.
(
)
yk
xr1 +
. Now let a
yIn +
In+k ∶
x
=
)
(
∈
yk
s1)
r2 −
y
s2) =
r1 −
. Then x
In+k ∶
(
(
)
In+k and αyk+1
R. Then αxk+1
s1xk
∈
−
=
∈
xs2
yr2 +
ys1 =
which implies
r2xk
=
∈ (
xk+1, yk+1
In+k ∶ (
J ̃In−1 ⊆ ̃In. This gives s1, s2 ∈ ̃In and a
s2yk
In+k. These give α
∈
s1 =
αx
J ̃In for all n
r1yk
−
and r2 −
̃In+1 =
In Example 2.5, we see a class of examples with
upper bound for
r.
≥
∈
rJ (I)
̃
rJ (I) <
̃
in dimension two. We may assume from now on that I
rJ (I)
. In the next theorem, we give an
Theorem 2.2. Let
In}
ideal and
I = {
R, m
(
)
be a two dimensional Cohen-Macaulay local ring, I an m-primary
an I-admissible filtration. Then, for a minimal reduction J
J.
/=
I,
⊆
)) = ̃In−1. Therefore r1 −
JIn +
xr1 +
ys1 ∈
s2 =
J ̃In ⊆
=
∈
J ̃In−1 ⊆ ̃In
αy
J ̃In. Therefore
(cid:3)
rJ (I) ≤
̃
Furthermore, consider the following statements:
e2(I) +
1.
rJ (I) =
(i)
̃
(ii) ̃In+1 =
(iii) λ
( ̃In+1/
(iv) e1(I) =
1;
e2(I) +
J ̃In for all n
J ̃In) =
e0(I) −
(iii)
Ô⇒
≠
1 for n
R
λ
(
0, e2(I)
;
e2(I)
;
=
/ ̃I1) +
1.
(i)
We have (iv)
≥
n
1.
⇐⇒
Proof. Since depth G
Ô⇒
1, we have e2(I) = ∑n≥0
0 for all n
e2(I) = ∑
J̃Ie2(I)) =
(iii) and (ii)
(̃I) ≥
. This gives nvn(̃I) =
e2(I) +
⇐⇒
J ̃In for n
=
e2(I)
n=0 nvn(̃I)
≤
≤
J ̃In)
( ̃In+1/
λ
rJ (I) ≤
i.e.,
̃
(ii). Suppose
Now we show (i)
e2(I)
and ̃In+1 ≠
with ve2(I)(̃I) ≠
0
<
e2(I) −
̃In+1 =
J ̃In for all 1
rJ (I) ≤
in (ii) holds. When e2(I) =
0,
̃
which is not true. Now suppose e2(I) ≠
1. Therefore ˜rJ (I) ≥
λ
( ̃Ie2(I)+1/
(i)
Ô⇒
Ô⇒
Suppose (iii) holds. Then
(i).
Suppose (iv) holds. Since e1(I) = ∑
λ
( ̃I1/
and only if
) +
Therefore ̃In+1 =
1 except one, say n
e2(I)
Then e2(I) =
∑n=0
(iii).
Finally assume e2(I) ≠
e0(I) −
J̃Ie2(I)) =
( ̃Ie2(I)+1/
λ
R, m
Corollary 2.3. Let
(
I
e2(
I
rJ (
1
ideal. If
) +
̃
( ̃In+1/
J ̃In for all n
( ̃In+1/
nλ
(iii) when e2(I) ≠
e2(I) +
rJ (I) ≥
̃
J ̃In) =
≥
J ̃In) =
e2(I) +
0.
e2(I)
n=0 λ
J
(̃In0+1/
e2(I)
n=0 λ
/ ̃I1) +
1. Since
R
λ
(
n0λ
)
≠
) =
∑
1.
(ii) and all four are equivalent if e2(I) ≠
0.
nvn(̃I)
by [19, Theorem 2.5], where vn(̃I) =
J ̃In for all n
1,
e2(I) +
1. Hence ̃In+1 =
≥
e2(I) +
rJ (I) =
̃
1. Then ̃In+1 =
J ̃In for all n
0. When e2(I) ≠
1 for n
e2(I) +
. This gives (ii) when e2(I) =
0 which implies λ
e2(I) +
1
≥
0, we have
e2(I)
( ̃In+1/
and
1. This gives (ii). For the converse, suppose the assumption
J,
0 gives ̃I1 =
rJ (I) =
1. Otherwise,
̃
J̃Ie2(I))
( ̃Ie2(I)+1/
λ
e2(I)
which implies
1. Note that the above arguments also prove
rJ (I) =
1. In fact,
̃
0. Then e2(I) =
J ̃In) =
=
rJ (I) ≤
̃
e2(I) +
1, we get the equality as in
J ̃In)
( ̃In+1/
e0(I) −
, we have e1(I) =
e2(I)
n=1 λ
1. This forces e2(I) ≠
0 and
∑
and λ
e2(I)
n0 ≤
=
n0. This proves (iv)
J ̃In0) =
/ ̃I1) +
J ̃In) =
J ̃In0 ) =
λ
R
(
( ̃In+1/
(̃In0+1/
(ii) and (iv)
n0, 1
Ô⇒
≤
1 if
1.
1.
Ô⇒
0 and (ii) holds. Then we get e1(I) = ∑
e2(I)
n=0 λ
( ̃In+1/
J ̃In) =
( ̃I1/
λ
J
) +
(cid:3)
be a two dimensional Cohen-Macaulay local ring and I an m-primary
1 then
I
e2(
Moreover, if I is Ratliff-Rush closed then the following statements hold:
(̃I
λ
/
I
rJ (
) −
) +
) ≤
1.
≤
≤
1
1
I
I
(i) rJ (
) =
2
I
(ii) e2(
1
) =
I
(iii) depth G
(
1
) ≥
Proof. We have
(5)
By Theorem 2.2, e1(
I
) =
I
(̃I
λ
I
e2(
/
≤
I
rJ (
) +
) ≤
1
) +
I
e2(
)
)
1
≤
) +
= ̃
I
rJ (
rJ (
I
I
e1(
≤
/̃I
R
λ
I
e0(
) +
(
2 which implies
) −
by Proposition 2.1
(
)
1 (by (2)
)
) −
I
) +
R
λ
(
I
e0(
1. Substituting the value in equation (5), we get
) +
/
(̃I
λ
/
I
Moreover if I is Ratliff-Rush closed, then we obtain e2(
I
[19, Theorem 3.3] we have depth G
(
I
rJ (
I
e2(
) −
) ≤
) ≥
1.
≤
≤
1
1
) =
I
1.
) +
I
1 and rJ (
I
e2(
) +
1
=
) =
2. Then by
(cid:3)
Corollary 2.4. Let
In}
ideal and
then the following statements hold:
I = {
R, m
(
an I-admissible filtration. For a minimal reduction J
be a two dimensional Cohen-Macaulay local ring, I an m-primary
e2(I)
rJ (I) =
̃
I, if
⊆
)
≠
2 for n
0, 1, e2(I) −
=
1 for n
1;
1 if e2(I) =
1, e2(I) −
2.
2,
1 if e2(I) ≠
2,
(i) ̃In+1 =
(ii) λ
( ̃In+1/
J ̃In) =
J ̃In for n
⎧⎪⎪⎪
⎨
⎪⎪⎪⎩
R
λ
e0(I) −
(
rJ (I) =
̃
1. Since depth G
(iii) e1(I) =
Proof. Note that
n
e2(I) −
suppose e2(I) =
=
=
/ ̃I1) +
e2(I)
(̃I) ≥
( ̃I2/
e2(I) −
2
(
1
=
2.
1 for n
3, we get
2. Then λ
e2(I)−2
n=0
1. This gives 1
J ̃In for
1. Now
J ̃I1) =
1
) >
e2(I) − (
1 and ̃In+1 =
e2(I)−1
n=0
if and only if ̃In+1 =
J ̃In for all n
e2(I)
and ̃In+1 ≠
≥
e2(I)−1
nvn(̃I)
. Therefore e2(I) ≠
1, we have e2(I) = ∑
n=0
2 and ̃In+1 =
J ̃In for n
2. For the case e2(I) ≥
≥
e2(I) ≥ (
e2(I) −
.
ve2(I)−1(̃I)
)
e2(I) −
ve2(I)−1(̃I) = ∑
1
)
J ̃In for all 2
n
e2(I) −
≤
≤
/ ̃I1) +
R
λ
e0(I) −
J ̃In) =
(
0, d
nvn(̃I)
which implies
2. This proves (i) and (ii). To see
(cid:3)
2 and k be an infinite field. Consider the power
1 indeterminates and the
d
1
So, ve2(I)−1(̃I) =
J ̃In) =
( ̃In+1/
λ
=
( ̃In+1/
λ
(iii) , we have e1(I) = ∑
Example 2.5. [12, Theorem 5.2] Let m
≥
Vj }1≤j≤d,
Xj}1≤j≤m, Y,
k
series ring D
2d
+
{
[[{
ideal a
m
j
Xj ∣
.
Y
i
1
1
) + (
≤
≤
≠
≤
≤
[(
)]
= [(
V 3
a and xi, y, vi, zi denote the images of Xi, Y, Vi, Zi in R
j
D
. Define R
d
i
1
i −
) + (
/
=
≤
)
≤
respectively. Let m
j
y
m
j
xj ∣
be the maximal ideal
1
) + (
) + (
≤
= (
≤
. Then
d
j
zj ∣
in R and Q
1
)
≤
≤
(1) R is Cohen-Macaulay local ring with dim R
d,
=
m
(2) Q is a minimal reduction of m with rQ(
3,
) =
m
m
2; e2(
(3) e0(
+
m
m
is Buchsbaum ring with depth G
(4) G
(
(
) =
m
m
λ
Particularly when d
8 and e0(
2, we have e1(
R
(
=
) −
m
m
m
m
rQ(
R
λ
e0(
e1(
2. Therefore by Corollary 2.4,
/̃
) ≠
) +
(
) −
̃
m
m
.
rQ(
rQ(
)
̃
7 which implies
. Therefore
)
Zj}1≤j≤d]]
Y
m
) + (
≤
with m
Vi ∣
m
1 and ei(
m
/̃
) +
m
e2(
=
Xj ∣
ZiY
+
m
rQ(
m
2; e1(
=
3
) =
ViVj ∣
) /=
) <
) =
)
+
1
≤
0 for 3
) =
0.
)] + (
vj ∣
zj ∣
) + (
) + (
d
)
) =
d, i
) =
) =
= (
i, j
2d
3d
{
j
m
m
m
m
d,
+
+
+
+
+
≤
≤
≤
≤
≤
≤
≤
≤
=
≥
d
d
2
1
1
j
i
∣
We end this section with the following questions.
Question 2.6. Is rJ (I) ≤
R
λ
(
dimensional Cohen-Macaulay local ring? Since
investigate whether the same bound holds for
e0(I) +
e1(I) −
I1) +
/
rJ (I) ≤
̃
rJ (I)
?
̃
1 for any I-admissible filtration in two
by Proposition 2.1, one may
rJ (I)
Question 2.7. Is
rJ (I) ≤
̃
3?
for d
rJ (I)
3. Rossi’s bound in dimension three
≥
)
x
x
x
/(
/(
) ≥
)) =
I
rJ (
I
rJ (
)) =
I n for all n
I
may not hold. When depth G
(
I,
In general, reduction number does not behave well with respect to a superficial element x
∈
I
rJ (
I
1, then rJ/(x)(
i.e., rJ/(x)(
I
, see
)
)) =
) ≥
0.
I
and depth G
I
[11, Lemma 2.14]. However, there are examples when rJ/(x)(
/(
) =
(
)
1 is equivalent to the condition that ̃I n
Note that depth G
I
1. In the lemma
=
(
I
below, we state a necessary condition for rJ/(x)(
.
)
Lemma 3.1. Let
)
an m primary ideal and J
element x
be a Noetherian local ring of dimension d
I
I a minimal reduction of I. If rJ/(x)(
x
.
≠
)) ≤
)
. Then I n
I n for some n with rJ/(x)(
I
rJ (
I
n
)
) ⊆
⊆ (
<
/(
I n which implies I n+1
xI n. On the other hand,
x
x
) =
∩ (
∶
. Hence I n+1
x
)
∈
Proof. Suppose ̃I n
( ̃I n+1
I n. Thus
) = ̃I n
x
n+1
x
J
x
I
x
I
))
=
(
/(
)(
/(
JI n. So rJ (
JI n
I
) ≤
=
JI n
n which is a contradiction.
I n+1
) =
(
n which implies I n+1
1 and depth R
I
rJ (
x
⊆
I n for all rJ/(x)(
I
0. Let I be
for a superficial
I
rJ (
<
x
I, then ̃I n
R, m
(
I
rJ (
I n+1
I n+1
=
/(
≥
/(
JI n
)) ≤
)) <
)) <
=
(cid:3)
) ∩
+ (
+ (
/(
/(
n
x
x
x
>
≥
⊆
⊆
=
)
∶
∶
))
xI n
+
We define
∣̃I n
As an interesting application of Lemma 3.1, we see that Rossi’s bound holds in dimension three
for those m-primary ideals I for which ρ
I
rJ (
min
I
(
I
(
) −
) =
) ≤
1.
=
≥
≥
ρ
{
}
1
i
i
.
I n for all n
R, m
Proposition 3.2. Let
(
primary ideal. For a minimal reduction J of I, if ρ
R
λ
(
) +
1.
)
I
/
be a Cohen-Macaulay local ring of dimension d
I
(
) ≤
I
rJ (
) −
I
1, then rJ (
3 and I an m
=
I
e0(
I
e1(
) ≤
) −
) +
I be a superficial element. Suppose ρ
I
rJ (
)
I
e0(
) +
= ̃I rJ (I)−1
) −
by Lemma 3.1. Now, using the bound in (2), we get that
(cid:3)
1. Then I rJ (I)−1
I
rJ (
I
(
) ≤
1.
x
) +
x
ℓ
∈
I
/(
) =
/(
Q
x
)) =
/(
I
e1(
) −
Proof. Let x
which implies rJ/(x)(
I
I
rJ/(x)(
I
rJ (
R
)) ≤
/
(
I
The following examples show that rJ/(x)(
Example 3.3. [19, Example 3.3] Let R
=
⊆ ̃I but x2y2
I
I which implies depth G
I
) =
(
∉
x4
y4 is superficial for I as e0(
I and p
I
=
) =
+
I
rJ/(p)(
I
rJ (
2
/(
Example 3.4. [18, Example 3.8] Let R
=
) ⊆ ̃I but x2
0 as x2
I
depth G
) =
(
∉
10
4
1
4
5
9 y2
2 y2
3 xy
3 yz
3 xz
+
−
+
+
−
I
I
e0(
minimal reduction of I and e0(
8
=
) =
1
4
1
6 x2
2 y2
where p
3 yz
3 xz
2 xy
−
+
+
−
p
I
rJ/(p)(
I
Further, rJ (
.
2
))
/(
) =
I 2
I
∶
∈ (
3 z2, 23
63 x2
.
))
) =
+
+
23
=
=
=
p
2
5
I
rJ (
)
and I
)) =
x, y
[[
0. Note that J
I
e0(
16
x4, x3y, xy3, y4
I
may hold even if depth G
(
I 2
. Then x2y2
∶
)
is a minimal reduction of
p
. Further,
x4, y4
)
= (
I
and e1(
) =
I
e1(
= (
) =
))
))
/(
/(
0.
]]
=
=
p
6
∈
−
[[
]]
x2
x, y, z
and I
y2, y2
Q
= (
I. Using Macaulay 2, we find that J
5
5y2
4 xz
xz
+
I
and e2(
p
−
is a
−
p
,
/(
))
))
4
3 z2. This shows that p is a superficial element for I.
z2, xy, yz, xz
6 x2
= (
7
6 yz
+
0
) =
. Then
)
1
2 xy
+
x2
−
I
e2(
7 z2, 6x2
+
I
e1(
4
9yz
−
I
, e1(
5
6 xy
p
+
))
)
/(
) =
/(
−
23
=
=
1
Lemma 3.5. Let
I t
with depth G
(
minimal reduction of I. Then
R, m
(
0 for some t
) >
)
be a Noetherian local ring of dimension d
≥
I be a superficial element for I and J
2 and I an m-primary ideal
I be a
1. Let x
∈
≥
⊆
I
rJ (
x
I
rJ/(x)(
t
/(
1, then
) ≤
k
)) +
1.
t
−
k mod t, 1
I
Furthermore, if rJ (
(6)
) ≡
≤
≤
I
rJ (
−
I
rJ/(x)(
0, we have depth R
) ≤
x
k
1.
/(
I t
I
Proof. Since depth G
0. We first consider the case when rJ (
) ≡
(
>
0. We claim that rJ (
mod t for 1
I
I
1 and prove (6). Suppose rJ (
) <
) =
≤
≤
≥
x
rJ/(x)(
I
rJ (
k
I
rJ (
I
k. Suppose rJ/(x)(
I
,
)) ≤
)
<
)) +
/(
I t
I mt as depth G
but ̃I mt
I
0. Then by Lemma 3.1, rJ (
, a contradiction.
(
=
Therefore,
k for m
+
I
mt, then rJ/(x)(
I
rJ/(x)(
/(
)) ≤
)) +
/(
x
) >
) −
) =
) >
mt
mt
))
/(
−
−
x
x
=
k
k
t
Next, let k
t
rJ (
I
) −
I
3.1, rJ (
= (
) =
I
0, i.e., rJ (
=
mt
m
t
1
<
)
−
x
I
rJ/(x)(
/(
=
))
) =
/(
) ≤
I
rJ (
rJ/(x)(
I
I
1. Then rJ (
mt, m
≥
and again ̃I (m−1)t
I
rJ (
, a contradiction. Therefore,
) <
=
)
1.
x
k
−
)) +
I
rJ/(x)(
/(
I (m−1)t as depth G
I t
(
))+
x
I
t. Otherwise, rJ/(x)(
)) ≤
0. Then by Lemma
/(
x
) >
I
rJ (
) ≤
I
rJ/(x)(
x
/(
)) +
1.
t
−
We now generalize Rossi’s result for d
obtain the I-adic case of [19, Theorem 4.3] in dimension three.
3 case. Note that when t
=
(cid:3)
1 in the result below, we
=
be a Cohen-Macaulay local ring of dimension d
3 and I an m-primary
=
1. Let J
I be a minimal reduction of I. Then
R, m
Theorem 3.6. Let
(
I t
ideal with depth G
(
) >
)
0 for some t
≥
I
Furthermore, if rJ (
) ≡
I
rJ (
k mod t, 1
) ≤
k
≤
R
λ
(
/
I
) +
t.
) +
⊆
I
e0(
1, then
) −
I
e1(
t
≤
−
I
e1(
I
rJ (
) ≤
I
e0(
) +
R
λ
(
/
I
) +
k.
) −
Proof. Let x
Cohen-Macaulay local ring. By Lemma 3.5 and the bound in (2), we have
I be a superficial element for I and let R
/(
R
=
∈
. Then R is a two dimensional
x
)
I
rJ (
) ≤
≤
=
I
When rJ (
from (6).
) ≡
k mod t, 1
k
t
≤
≤
t
x
/(
/(
)) +
I
e0(
rJ/(x)(
I
e1(
x
I
)) −
I
e0(
I
e1(
) +
I
1, we have rJ (
−
) −
1
−
x
/(
R
λ
(
) ≤
)) +
I
R
λ
(
t.
/
) +
I
rJ/(x)(
I
/(
x
+ (
))) +
t
x
/(
))+
k
1
−
≤
I
e1(
I
e0(
)−
R
λ
(
)+
I
/
)+
k
(cid:3)
Corollary 3.7. Let
primary ideal. Let J
Then
be a Cohen-Macaulay local ring of dimension d
R, m
(
I 2
I be a minimal reduction of I. Suppose depth G
(
⊆
) >
3 and I an m-
=
I
0 and rJ (
is odd.
)
)
I 2
Proof. Since depth G
(
) >
I
e1(
) −
I
e0(
) +
R
λ
(
/
I
) +
1.
I
rJ (
) ≤
I
0 and rJ (
) ≡
1 mod 2, the conclusion follows from Theorem 3.6. (cid:3)
denote the i-th local cohomology module of S with support in
with
H i
S
n
0
max
{
) =
∣
S+(
)n /=
}
be a Cohen-Macaulay local ring of dimension d
3 and I an m-
=
S
0.
) =
S+(
S+(
For a graded ring S, let H i
S
)
the graded ideal S+ of elements of positive degree and set ai(
S
if H i
S
the convention ai(
Corollary 3.8. Let
primary ideal. Let J
) = −∞
R, m
(
I be a minimal reduction of I. Then
⊆
⎧⎪⎪⎪
e1(
I
⎨
I
e1(
⎪⎪⎪⎩
I ρ(I)
Proof. Since depth G
(
1, 1
by [13, Theorem 4.3].
1
I
G
a1(
(
I
(
0, we can put t
e0(
I
I
e0(
R
λ
(
R
λ
(
I
rJ (
)) +
) −
) −
) >
) +
) +
) +
) +
) ≤
=
ρ
)
)
I
I
/
/
}
if a1(
I
G
(
I
G
if a1(
(
)) ≤
)) >
0
0
1
in Theorem 3.6 and ρ
I
(
) ≤
max
I
G
a1(
(
{
) +
(cid:3)
d
)−
) ≥
) ≥
R
(
m
e1(
m
In [9] Itoh proved that e2(
m
m
e0(
µ
1 where type
type
−
(
m
[15], the authors also proved that if e2(
m
then G
(
case. We consider the next boundary case, i.e., type
below, we obtain a linear bound in this case as well.
m
e0(
0, then
1
) +
m2
m
λ
, see [15]. In
)
(
/
m
m
µ
e0(
d
1,
)+
)−
)−
(
) =
−
m
is Cohen-Macaulay. Therefore Rossi’s bound as given by (2) holds for rJ (
in this
d. In the corollary
m
e0(
1.
) −
) +
dimk Extd
R
) =
(
m
m
e0(
e1(
) =
m
e1(
) =
m
and µ
)
(
0 and type
m
If e2(
k, R
R(
1
)+
) −
) =
R
(
m
e0(
m
(
R
(
) =
) −
) +
)+
µ
≠
≠
)
)
be a Cohen-Macaulay local ring of dimension d
d. Suppose J
⊆
) =
m is a minimal reduction of
=
m
3 with e2(
) +
) −
m
1, then the conclusion follows from [19, Theorem 4.3]. Suppose depth G
(
) =
m3
3 which implies depth G
(
) >
0. Then by Theorem 3.6,
(cid:3)
R, m
(
1
) +
m
µ
(
3.
m
e0(
=
m
R
λ
/
(
Corollary 3.9. Let
)
m
m
e1(
e0(
0 and type R
) +
) −
≠
m
m. Then rJ (
m
m
e0(
e1(
) ≤
m
Proof. If depth G
(
0. By [15, Theorem 4.2], ̃mj
=
m
m
m
R
λ
e0(
e1(
rJ (
/
(
We now consider Example 2.5 with m
3.6 is better than the one given by Vasconcelos in (1).
mj for j
m
3.
) −
) +
0 and d
) +
) +
) −
) ≤
) ≥
=
=
≥
3 to demonstrate that the bound in Theorem
k
=
[[
) =
]]/(
x, y, z, u, v, w, t
t2, tu, tv, tw, uv, uw, vw, u3
yt, w3
Example 3.10. Let R
.
)
−
−
−
m
m
Then R is a Cohen-Macaulay local ring of dimension 3 and depth G
0. We have e0(
8,
(
) =
m
m
e1(
11, e2(
4, see [15, Example 5.2(1)].
By [15, Theorem 4.2], we have m2
1. Now
J
7. Note
)
that the bound de0(m)
19 given by Vasconcelos in [20] is larger than our
bound.
m
e0(
µ
) −
= ̃mj for j
m
is a minimal reduction of m and rJ (
≤
) =
1
=
m
(
) +
m3
3. Therefore depth G
(
≥
m
m
m
e0(
e1(
) +
R
) =
(
≠ ̃m2 and mj
m
4, e3(
0 and type
o(m) −
) ≥
3
=
xt, v3
R
λ
(
x, y, z
3.8
= (
) =
) =
) −
) +
) =
2d
zt
+
+
−
=
=
6
1
3
3
/
In the next proposition, we summarize the cases when Rossi’s bound holds in dimension three.
for any I admissible filtration
Some of these results are already known. Let vn(I) =
λ
In+1/
(
}n∈Z denote the Ratliff-Rush filtration. By the proof of Rossi’s result [16, Theorem
I
1.3] in a d dimensional Cohen-Macaulay local ring, we have
F = {̃I n
JIn)
and
(7)
I
rJ (
) ≤ ∑n≥0
vn(F ) −
I
e0(
) +
R
λ
(
/
I
) +
1
The idea in the next few results is to approximate the term
.
∑n≥0 vn(F )
R, m
Proposition 3.11. Let
(
I
ideal and J a minimal reduction of I. Then rJ (
conditions hold:
)
be a three dimensional Cohen-Macaulay local ring, I an m primary
1 if one of the following
I
I
e1(
I
e0(
R
λ
(
/
)+
)−
) ≤
)+
I
sition 6.4], e2(
from part (ii).
I
(iv) Suppose e2(
0.
2.
) =
(i) depth G
(F ) ≥
I
(ii) e2(
I
e3(
) =
I
(iii) e2(
0 and I is asymptotically normal .
) =
I
(iv) e2(
0 and G
I
) =
(
I
(v) ρ
1.
I
rJ (
) ≤
(
I
G
(vi) a1(
)) ≤
(
) −
0.
)
is generalized Cohen-Macaulay.
Proof.
(F ) ≥
(i) As depth G
I
2, e1(
I
ing this into (7), we get rJ (
0, then G
I
e3(
) ≤
(F )
) =
conclusion follows from part (i).
) =
I
(ii) If e2(
by [8, Proposition 4.6]. Substitut-
e1(F ) = ∑n≥0
) =
I
e0(
I
e1(
) +
) −
is Cohen-Macaulay by [14, Theorem 6.2]) and hence the
vn(F )
I
R
λ
(
) +
1.
/
I
(iii) By [18, Theorem 4.1], e3(
I
0 implies e3(
) =
) ≥
0 for an asymptotically normal ideal I and by [14, Propo-
0. Now the conclusion follows
I
0. This gives e3(
) =
) ≤
) =
[14, Proposition 6.4]. Now, the conclusion follows from part (ii).
) =
)
I
0. Then e3(
I
0 if and only if G
(
is generalized Cohen-Macaulay by
(v) It follows from Proposition 3.2.
(vi) It follows from Corollary 3.8.
(cid:3)
Remark 3.12.
(1) Note that in Example 3.10, G
mn
) = ⊕n≥0 ̃
by [15, Theorem 4.2]. Hence by Proposition 3.11(i), we have 3
R
λ
(
mn
( ̃
) +
5.
m
1
/̃mn+1 is Cohen-Macaulay
m
e1(
) +
=
m
rJ (
m
e0(
) ≤
) −
I
e3(
) =
) =
0. Hence by Proposition 3.11(ii), 2
I
rJ (
=
) ≤
/
=
I
(2) In Example 3.4, we have e2(
I
e0(
2.
=
) +
I
e1(
R
λ
(
) −
) +
1
I
/
Next we give an upper bound for the reduction number of an ideal in Buchsbaum local ring with
dimension at most two.
R, m
Theorem 3.13. Let
(
ideal. Let J be a minimal reduction of I, then
)
be a one dimensional Buchsbaum local ring and I an m-primary
I
rJ (
) ≤
. Let r
)
0. Hence
R
R
/
=
IH 0
m(
H 0
m(
R
) =
I
e1(
) −
rJS(
J
e1(
IS
e0(
I
) −
) +
. Then I r+1
)
=
R
λ
(
I
/
JI r
−
) +
⊆
2.
H 0
m(
R
, which implies that
)
⊆
Proof. Let S
JI r+1
I r+2
−
(8)
) ≤
Note that S is a 1 dimensional Cohen-Macaulay local ring. Therefore, by (2), we have
) +
I
rJ (
rJS(
IS
1.
I
rJ (
) ≤
≤
≤
IS
rJS(
e1(
IS
I
e1(
) +
1
e0(
) −
J
e1(
) −
IS
) −
2
S
λ
(
) +
I
e0(
) +
IS
) +
/
I
R
λ
(
/
2 (by [19, Lemma 2.3, Proposition 2.3])
) +
(cid:3)
R, m
Theorem 3.14. Let
(
I t
ideal. Let J be a minimal reduction of I and depth G
(
)
I
rJ (
) ≤
I
e1(
) −
J
e1(
) −
I
e0(
) +
I
/
) +
t
+
1
0 for some t
1, then
≥
) >
R
λ
(
be a two dimensional Buchsbaum local ring and I an m-primary
I t
Proof. Note that depth R
0 as depth G
>
(
x
I
rJ/(x)(
I
by Lemma 3.5, we have rJ (
/(
local ring, by Theorem 3.13 we have
) ≤
) >
t
))+
−
0. Let x
∈
I be a superficial element for I. Then
is a one dimensional Buchsbaum
1. Since R
x
)
/(
I
rJ/(x)(
x
/(
)) ≤
I
Therefore rJ (
) ≤
I
e1(
) −
J
e1(
) −
=
I
e1(
e1(
I
I
e0(
x
/(
) −
x
J
e1(
)) −
J
e1(
R
λ
(
) −
I
) +
/
/(
I
e0(
t
+
) +
) +
1.
x
I
e0(
)) −
R
λ
(
/
/(
I
) +
)) +
2
R
λ
(
/
I
) +
2
(cid:3)
4. Bound for rJ (
I
in dimension three
)
In this section we give a different upper bound for reduction number of I in a Cohen-Macaulay
local ring of dimension d
d
. For
)
an I admissible filtration
. The
I
I
and e3(
In the Rees algebra of
I
3. Our bound involves e2(
≥
I = {
−
)
) ≥
I
3 when depth G
(
In}n∈Z, let us denote by
is defined as H 2
I(
) =
n
n
∑i=0
R(I) = ⊕n≥0
i
HI (
)
)
I
I
and the second Hilbert polynomial,
second Hilbert function of
denoted by P 2
is the polynomial which coincides with H 2
n
n
for large values of n. It is well
I (
I(
)
zn, is rational, i.e., there
, z
, defined as H
In+1)
In/
) = ∑n≥0 λ
known that the Hilbert series of
(I
(
with hI(
z
z
exists a unique rational polynomial hI (
0 such that
1
) ≠
]
[
z
hI(
)
H
.
d
z
1
−
)
(
, where h(i)
For every i
1
I (
z
polynomial hI(
are called the Hilbert coefficients of
d, these are same as defined earlier in the Introduction, see [3] for more details.
0
≤
Let us recall the modified Koszul complex in dimension two defined in [11] as follows:
denotes the i-th formal derivative of the
and for
h(i)
I (1)
i!
1. The integers ei(I)
0, we define ei(I) =
=
at z
) =
) ∈
(I
, z
Q
≥
≤
I
)
)
i
, n
C.(I
R
(
/
C.(I
is a minimal reduction of I. Let Hi(
In−2
In−1)
, n
) ∶
Ð
0
/
→ R
( −y
x )Ð→
2 (x,y)Ð→ R
In Ð→ 0,
/
)
denote the i-th homology module of
))
. The relation between the homology of this complex and Hilbert coefficients
)
x, y
where
(
the complex C.(I
is used in the proof of the next theorem. For a numerical function f
and recursively we can define △if
n
f
(
Z Ð→ Z, we put △f
for all i
1.
△i−1f
△
(
n + 1
(
∶
))
n
(
n
(
n
(
− f
∶
=
) =
, n
≥
)
)
)
R, m
Theorem 4.1. Let
be a Cohen-Macaulay local ring of dimension d
)
(
I
primary ideal with depth G
) ≥
(
I
e1(
3. Let
Proof. Suppose d
minimal reduction of I. Then x is also superficial for the filtration
d − 3. Then
− e0(
I
)
F = {̃I n
− e3(
I
+ λ
R
(
I
e2(
(
I
e2(
)
I
rJ (
+ 1 +
− 1
, x
(9)
) ≤
=
}
)
)
)
)
I
∈
/
.
)
I be a superficial element for I and J
a
= (
)
R
. Let R
and
=
. By the proof of [7,
x, y, z
x
)
1, we have vn(F ) =
F
vn(F )
/(
3 and I an m-
≥
F = {F n =
Proposition 2.9], we have
(x) }
̃I n+(x)
. Since depth G
(F ) ≥
e1(F ) = ∑n≥1
△2
P
(
F (
n
− H
n
F (
))
)
2
= ∑n≥1 (
−
e0(F )
−
vn(F)
∑i=0(
H2(
λ
(
∑n≥1
(10)
= ∑n≥0
−1
iλ
Hi(
(
)
C.(F
, n
))))
(by [11, Proposition 3.2])
C.(F
, n
)))
(by the proof of [11, Theorem 3.6]).
Since x is a superficial element for
we get
I
, e1(
F
) =
e1(F ) =
e1(F)
. Therefore, by using (7) and (10),
(11)
I
rJ (
) ≤
I
e1(
)
+
∑n≥1
H2(
λ
(
C.(F
, n
)))
− e0(
I
)
+ λ
R
(
/
I
)
+ 1.
From the modified Koszul complex C.(F
y, z
(
) ⊆ ̃F n−2,
, n
, we have H2(
)
C.
(F
, n
)) =
F n−1∶(y,z)
F n−2
. Since
F n−1 ∶
H2(
λ
(
C.
(F
, n
)) ≤
λ ⎛
⎝
̃F n−2
F n−2
.
⎞
⎠
Therefore, for large m we have
m
0
≤
∑n=0
H2(
λ
(
C.
(F
, n
)) ≤
m
∑n=0
m
λ ⎛
⎝
̃F n−2
F n−2
⎞
⎠
R
λ
(
∑n=0
e3( ̃F )
e3( ̃F )
e3( ̃F )
/F n−2)
− e3(F )
− e3(F )
− e3(
I
)
=
=
=
=
−
m
∑n=0
λ
R
(
/ ̃F n−2)
(by [3, Proposition 1.5])
This gives
(12)
From (11) and (12), we get
0
≤ ∑n≥0
H2(
λ
(
C.(F
, n
))) ≤
e3( ̃F)
− e3(
I
.
)
(13)
+ 1 + e3( ̃F)
I
e1(
By the difference formula in [1, Proposition 4.4], we have for all n
− e0(
I
+ λ
R
(
I
rJ (
) ≤
− e3(
.
I
)
−1,
)
)
)
I
/
(14)
P ̃
F (
n
− H ̃
F (
n
)
) =
λ
((
≥
.
R+ (R( ̃F )))n+1)
H 2
Now taking sum for large m on both sides of the above equation, we get
m
m
m
H 2
λ
((
R+(R( ̃F )))n+1) =
−
P ̃
F (
n
)
∑n=0
H ̃
F (
n
)
∑n=0
− H 2
̃
F (
m
)
∑n=0
m
∑n=0
P ̃
F (
n
)
m + 3
3 )
e0( ̃F )(
.
e3( ̃F )
=
=
=
− e1( ̃F)(
m + 2
2 )
+ e2( ̃F )(
m + 1
1 )
− P 2
̃
F (
m
)
As R is a 2-dimensional Cohen-Macaulay local ring, we have λ
Z by [1, Lemma 4.7]. Now in equation
for all n
∈
H 2
((
, we substitute n
14
)
(
e2(F) =
e2( ̃F ) =
R+(R( ̃F )))n) ≤
=
I
e2(
e2(F ) =
λ
−1 to get
)
H 2
λ
((
R+(R( ̃F )))0) =
H 2
((
R+(R( ̃F )))n−1)
Therefore,
(15)
e3( ̃F ) =
m
∑n=0
H 2
λ
((
R+ (R( ̃F )))n+1) ≤
a2(R(
̃
F))−1
∑n=0
H 2
λ
((
R+ (R( ̃F )))0) =
a2(R( ̃F ))
I
e2(
)
where a2(R( ̃F )) ≤
(16)
G
a2(
( ̃F )) =
I
rJ (
) ≤
I
e1(
By [11, Corollary 5.7(2)], we have s
This gives
s (say). Now using (13) and (15), we have
I
− e0(
I
G
a2(
+ λ
R
)
(
( ̃F )) =
r
+ 1 + se2(
− e3(
I
I
−2 and by Theorem 2.2, rJ ( ̃F)
.
)
)
/
)
( ̃F )
)
=
−1
e2(F )
.
≤
(17)
s
rJ ( ̃F )
=
− 2
e2(F )
≤
− 1
I
e2(
)
=
− 1.
Now by (16) and (17), we get the conclusion.
Suppose d
depth G
I
(
completes the proof.
4. Let x
1 implies I n
I
I be a superficial element for I. Then ei(
x
/(
I
1. This gives rJ/(x)(
= ̃I n for n
≥
) ≥
≥
∈
x
/(
)) =
for 0
I
i
ei(
3. Also,
)) =
)
I
rJ (
by Lemma 3.1. This
)
(cid:3)
≤
≤
m
e1(
)
where m
− e0(
m
≥
Q
[∣
I 2 ∶ I
)
0.
x, y, z
Example 4.2.
m
λ
R
/
(
2d + 1
m
(1) We refer to Example 3.10 to note that our bound rJ (
m
e2(
) =
(
−e0(
m
)
17 is better than Vasconcelos’ bound de0(m)
o(m)
− e3(
m
m
e2(
)
m
e1(
− 1
) ≤
)
)
)
+ 1 +
19.
+
−
)
=
(2) Example 2.5 provides a number of three dimensional Cohen-Macaulay local rings with
3m + 19,
m
e2(
)
− e3(
m
+ λ
R
(
m
e2(
(
3e0(m)
o(m)
− 2.3 + 1
17 and
+ 1 +
− 1
) =
m
=
)
)
)
/
5
∉
∣]
and I
(3) Let R
= (
=
⊆ ̃I but x2y2z2
x2y2z2
∈
0. Using Macaulay 2, J
I
and e3(
) =
3 x3y+ 1
2 y4+ 4
2 xy3+ 3
reduction of I and 3
whereas de0(I)
o(I)
x4, y4, z4, x3y, xy3, y3z, yz3
. Note that depth G
I
0 as
) =
(
)
I
48, e2(
I
I. By [10, Example 3.7], e0(
64, e1(
I
4
) =
2 y4 + 5y3z + 3yz3 + 1
6 xy3 + 1
4 x4 + x3y + 5
5 z4, 4
3 x4 +
= (
9 yz3+ 3
2 xy3+y4+ 3
5 x3y+ 7
3 x4+ 4
7 z4, 5
2 y3z+ 8
3 yz3+ 10
5 y3z+ 5
4 z4
is a minimal
− e3(
− 1
+ 1 +
+ λ
− e0(
I
I
e2(
I
e2(
R
I
I
e1(
I
rJ (
32
)
(
(
43.
=
I
Next we show that in dimension three, for certain values of e2(
in terms of Hilbert coefficients. We write vn for vn( ̃F )
I
r
.
(
, we get linear upper bound on
)
=
− 2d + 1
) =
) ≤
) =
) =
)
)
)
)
)
)
)
I
/
3
R, m
Corollary 4.3. Let
(
ideal. Then the following statements hold.
)
be a three dimensional Cohen-Macaulay local ring and I an m-primary
.
)
− e0(
I
+ λ
R
(
/
I
)
)
+ 1
Proof.
I
(1) If e2(
I
(2) If e2(
I
(3) If e2(
)
)
) ≤
) ≤
I
(
I
(
+ λ
R
(
− e0(
I
0 or 1 then r
I
I
e1(
/
1 and I is asymptotically normal then r
− e0(
I
2 then r
I
(
) =
) =
)
) =
I
e1(
I
(1) If e2(
)
0 by (17) and hence e3( ̃F ) ≤
) =
I
0. Since I is asymptotically normal, e3(
) ≥
+ λ
R
I
0. Hence we have r
(
)
(
I
e1(
0 or 1 then r
+ 1 − e3(
I
)
I
e1(
I
(
)
) ≤
+ 2 − e3(
I
.
)
/
)
+ λ
− e0(
R
I
(
) =
1 then s
+ λ
R
(
+ 1.
) ≤
≤
)
)
I
I
I
/
(2) If e2(
I
I
e3(
) ≤
I
e3(
) =
(3) As depth G
( ̃F ) ≥
implies either v1 =
e3( ̃F ) = ∑n≥2 (
n
2)
1, we have 2
0, v2 =
v2 ≤
vn =
/
)
e1(
I
I
e2(
) ≤
=
1, vn =
1. Using (13), we get r
− e0(
I
)
e2( ̃F ) = ∑n≥1 nvn (by [19, Theorem 2.5]), which
) =
2. Hence
≥
+ 2 − e3(
I
.
)
)
(cid:3)
0 for all n
2, vn =
3 or v1 =
+ λ
− e0(
I
R
I
I
e1(
I
(
)
) ≤
(
0 for all n
≥
)
/
+ 1 − e3(
I
by Theorem 4.1.
)
)
0 using (15). Then by equation (12),
0 by [18, Theorem 4.1] which implies
Acknowledgement : We would like to express our sincere gratitude to anonymous referee for
meticulous reading and suggesting several editorial improvements. We also thank Prof. M. E.
Rossi for her suggestions.
References
[1] C. Blancafort, On Hilbert functions and cohomology, J. Algebra 92, 439-459 (1997).
[2] S. Goto, K. Nishida and K. Ozeki, The structure of Sally modules of rank one, Math. Res. Lett. 15
(2008), 881-892.
[3] A. Guerrieri and M. E. Rossi, Hilbert coefficients of Hilbert filtrations, J. Algebra, 199 (1998), 40-61.
[4] L. Ghezzi, S. Goto, J. Hong and W. Vasconcelos, Variation of Hilbert coefficients, Proc. Amer. Math.
Soc. 141 (2013), 3037-3048.
[5] L. Ghezzi, S. Goto, J. Hong and W. Vasconcelos, Sally modules and reduction numbers of ideals, Nagoya
Math. J. 226 (2017), 106-126.
[6] J. Hong, A. Simis and W. Vasconcelos, Ideals generated by quadrics, J. Algebra 423 (2015), 177-189.
[7] S. Huckaba, A d-dimensional extension of a lemma of Huneke’s and formula for the Hilbert coefficients,
Proc. Amer. Math. Soc. 124 (1996), no. 5, 1393-1401.
[8] S. Huckaba and T. Marley, Hilbert coefficients and the depths of associated graded rings, J. Lond. Math.
Soc. (2) 56 (1997), no. 1, 64-76.
[9] S. Itoh, Hilbert coefficients of integrally closed ideals, J. Algebra 176 (1995), 638-652.
[10] A. Mafi and D. Naderi, Results on the Hilbert coefficients and reduction numbers, Proc. Indian Acad.
Sci. Math. Sci. 129 (2019), no. 4, Paper No. 60, 12 pp.
[11] T. Marley, Hilbert functions of ideals in Cohen-Macaulay rings, PhD Thesis (1989).
[12] K. Ozeki and M. E. Rossi, The structure of the Sally module of integrally closed ideals, Nagoya Math.
J. 227 (2017), 49-76.
[13] Tony J. Puthenpurakal, Ratliff-Rush filtration, regularity and depth of higher associated graded rings:
Part I, J. Pure Appl. Algebra 208 (2007), 159-176.
[14] Tony J. Puthenpurakal, Ratliff-Rush filtration, regularity and depth of higher associated graded rings:
Part II, J. Pure Appl. Algebra 221 (2017), 611-631.
[15] T. Puthenpurakal and A. Mishra, Cohen-Macaulay local rings with e2 = e1 − e0 + 1, J. Algebra, 611
(2022), 94-109.
[16] M. E. Rossi, A bound on the reduction number of a primary ideal, Proc. of the Amer. Math. Soc. 128(5)
(1999), 1325-1332.
[17] M. E. Rossi and I. Swanson, Notes on the behavior of the Ratliff-Rush filtration, Contemp. Math. 331
(2001), 313-328.
[18] A. Corso, C. Polini and M. E. Rossi, Depth of associated graded rings via Hilbert coefficients of ideals,
J. Pure Appl. Algebra 201 (2005), 126-141.
[19] M.E. Rossi and G. Valla Hilbert functions of filtered modules, Lecture Notes of the Unione Matematica
Italiana, 9. Springer-Verlag, Berlin; UMI, Bologna, 2010. xviii+100 pp.
[20] W. V. Vasconcelos, Integral Closure, Springer Monographs in Mathematics, Springer, Heidelberg, 2005.
Department of Mathematics, Indian Institute of Technology Kharagpur, 721302, India
Email address: [email protected]
Department of Mathematics, Indian Institute of Technology Patna, Bihta, Patna 801106, India
Email address: [email protected]
|
synthetic_cpt | 3 | Can_Language_Models_Induce_Grammatical_Knowledge_from_Indirect_Evidence.pdf | Can Language Models Induce Grammatical Knowledge
from Indirect Evidence?
Miyu Oba1 Yohei Oseki2 Akiyo Fukatsu2 Akari Haga1
Hiroki Ouchi1 Taro Watanabe1 Saku Sugawara3
1Nara Institute of Science and Technology
2The University of Tokyo 3National Institute of Informatics
{oba.miyu.ol2,haga.akari.ha0,hiroki.ouchi,taro}@is.naist.jp
{oseki,akiyofukatsu}@g.ecc.u-tokyo.ac.jp
[email protected]
arXiv:2410.06022v2 [cs.CL] 23 Oct 2024
Abstract
What kinds of and how much data is neces-
sary for language models to induce grammat-
ical knowledge to judge sentence acceptabil-
ity? Recent language models still have much
room for improvement in their data efficiency
compared to humans. This paper investigates
whether language models efficiently use indi-
rect data (indirect evidence), from which they
infer sentence acceptability. In contrast, hu-
mans use indirect evidence efficiently, which
is considered one of the inductive biases con-
tributing to efficient language acquisition. To
explore this question, we introduce the Wug In-
Direct Evidence Test (WIDET), a dataset con-
sisting of training instances inserted into the
pre-training data and evaluation instances. We
inject synthetic instances with newly coined
wug words into pretraining data and explore
the model’s behavior on evaluation data that
assesses grammatical acceptability regarding
those words. We prepare the injected instances
by varying their levels of indirectness and quan-
tity. Our experiments surprisingly show that
language models do not induce grammatical
knowledge even after repeated exposure to in-
stances with the same structure but differing
only in lexical items from evaluation instances
in certain language phenomena. Our findings
suggest a potential direction for future research:
developing models that use latent indirect evi-
dence to induce grammatical knowledge.
1
Introduction
Recent advances in language models, such as those
from the GPT and Llama families (OpenAI, 2024;
Meta, 2024), have shown remarkable progress in
various tasks. These models are trained on ex-
tremely large datasets, on a scale thousands of
times greater than the amount of data children are
exposed to in developing grammatical knowledge
comparable to that of adults (Warstadt et al., 2023).
This suggests substantial potential for improving
their learning efficiency.
1
Figure 1: The indirectness of evidence. Direct evidence
refers to instances identical to previously observed ones.
Lexically indirect evidence targets the same linguistic
knowledge but differs in lexical items. Syntactically
& lexically indirect evidence is different in both their
syntactical and lexical items.
According to Pearl and Mis (2016), humans ac-
quire language using indirect evidence, in addi-
tion to direct evidence, which is considered one
of the inductive biases contributing to efficient lan-
guage acquisition. As illustrated on the left side
of Figure 1, when humans encounter the sentence
“<wug> loves himself.”, they can correctly judge
the grammatical acceptability between “<wug>
loves himself.” and “*<wug> loves herself.” Such
observed sentences are referred to as direct evi-
dence. Conversely, in the middle and right sides
of the figure, we assume that humans are not ex-
posed to such direct evidence. However, if they
observe sentences from which they can make some
inference for a correct judgment, such sentences
are called indirect evidence. For example, humans
might hypothesize that “him(self)” in the sentence
“<wug> is helping himself.” refers to <wug>, or
that the pronoun “his” in “<wug> helped his friend.”
indicates <wug> has a masculine property.
However, it still remains unclear how the de-
gree of indirectness in observed instances affects
the number of occurrences required for language
models to induce grammatical knowledge. Pre-
👤<wug> loves himself.TrainingEvaluation<wug> helped his friend.Direct evidenceLexicallyindirect evidenceSyntactically & lexicallyindirect evidence 🤖<wug> loves himself. or*<wug> loves herself.DirectIndirect❓✅<wug> is helping himself.
vious work has investigated how language mod-
els learn grammatical knowledge based on the ap-
pearance of items in training data focusing on the
word frequency effect (Wei et al., 2021; Yu et al.,
2020) or generalization to unseen instances (Patil
et al., 2024; Misra and Mahowald, 2024; Leong
and Linzen, 2024) through few-shot learning or
pretraining on corpora filtered by specific linguistic
constructions. However, those methods face a limi-
tation in identifying ways to enhance the model’s
learning efficiency.
In this work, we explore the degree of indirect-
ness and the amount of data needed for language
models to induce linguistic generalization. To ad-
dress this question, we introduce the Wug InDirect
Evidence Test (WIDET), a dataset containing ad-
ditional indirect training and evaluation instances.
We train language models on pretraining data in-
corporating the indirect training instances. We then
evaluate their linguistic generalization across seven
different phenomena, including anaphor agreement,
transitivity, and subject-verb agreement. These phe-
nomena require language models to comprehend di-
verse properties and multiple parts of speech of spe-
cific words to judge their acceptability. To control
the number of observed indirect training instances,
we inject synthetic instances with newly coined
words into pretraining data. Following Berko
(1958), we refer to these words that do not appear
in the original vocabulary and data as wug words.1
We use various synthetic data as additional indi-
rect training instances, each differing in the degree
of lexical and syntactic indirectness as well as the
number of observations.
We find that language models generalize linguis-
tic knowledge from training instances identical to
correct evaluation instances, though their data effi-
ciency varies across different linguistic phenomena.
This variation is likely influenced by the number
of words between the wug and the words that act
as cues for the model to learn its properties. We
surprisingly observe that the language models do
not induce grammatical knowledge in certain phe-
nomena, even in instances that only differ in lexical
items. Syntactically indirect instances rarely in-
duce the model’s generalization.
Given that the distances between the wug and
the cue words to learn its properties might cause
inefficiency in the models’ learning, we conduct a
1The original wug used in Berko (1958)’s work is not
exactly same as our setting to create controlled instances.
Details are discussed in Section 7.1.
detailed analysis of indirect instances with complex
interference, using anaphor gender agreement as a
case study. We examine whether these instances
affect the generalization, considering three factors
related to attractors and distance, finding that when
the language models are trained on the instances
with complex interference, they hit a plateau in
learning after sufficient observations.
Those findings from our controlled and compre-
hensive experiments suggest that, at least in our
small-scale settings, language models do not gen-
eralize in a human-like manner even from the data
with a degree of indirectness that seems intuitively
manageable for humans, depending on language
phenomena. Our work contributes insights into
language models’ capacity to use indirect evidence
for learning. We suggest a direction for future
research: implementing a model that can use indirect
evidence, enabling data-efficient language acquisi-
tion comparable to that of humans.2
2 Background
2.1 Evidence in Language Acquisition
In the field of language acquisition, the information
used to learn grammatical knowledge is referred
to as evidence. Positive (negative) evidence refers
to information in data that indicates what is ac-
ceptable (unacceptable) in a language, and it has
been argued that humans rely solely on positive ev-
idence to acquire their language (Chomsky, 1993).
Pearl and Mis (2016) further distinguishes indi-
rect positive evidence from direct positive evidence.
Direct positive evidence indicates the information
present in the data observed by the learner and used
for learning, with the assumption that its usage by
speakers guarantees grammaticality (the left side of
Figure 1). Indirect positive evidence, by contrast,
refers to information that requires a learner to infer
what is grammatical in the language from observed
data (the middle and right side of Figure 1). They
argue that, in addition to direct positive evidence,
indirect positive evidence potentially plays a signif-
icant role in efficient language acquisition. While
the previous literature explores humans’ capacity,
it is still unclear whether language models induce
linguistic generalization from such evidence.
2WIDET is publicly available at https://github.com/
nii-cl/widet.
2.2 Analysis of Language Models in Learning
Grammatical Knowledge
Previous studies have focused on how language
models learn grammatical knowledge based on
the appearance of target lexical items in training
data. Yu et al. (2020) evaluate models’ perfor-
mance on grammatical tasks using minimal pairs
including specific target words and few-shot learn-
ing on sentences including unseen words. Wei
et al. (2021) train models on data where the fre-
quency of instances including specific verbs is ma-
nipulated to evaluate their generalization to verb
inflections. Recent studies have focused on indi-
rect evidence (Misra and Mahowald, 2024; Leong
and Linzen, 2024; Patil et al., 2024), exploring the
availability of indirect evidence in language mod-
els by training them from scratch on filtered data.
These data include specific distinctive linguistic
phenomena, such as AANN construction (Misra
and Mahowald, 2024) and passivization (Leong and
Linzen, 2024), and systematic phenomena from
BLiMP (Warstadt et al., 2020b).
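As a rough illustration of that filtering setup (the cited studies' actual filters are construction-specific and far more careful than this), one can drop every pretraining sentence that matches a crude pattern for the target construction and train on the remainder. The pattern and corpus below are placeholders, not the filters used in those papers.

```python
# Sketch of corpus filtering, assuming a crude regex for English passives.
import re

PASSIVE = re.compile(r"\b(is|was|were|been|being)\b\s+\w+ed\b", re.IGNORECASE)

def filter_corpus(sentences):
    """Keep only sentences without the target construction."""
    kept = [s for s in sentences if not PASSIVE.search(s)]
    removed = len(sentences) - len(kept)
    return kept, removed

kept, removed = filter_corpus(["The ball was kicked by Sam.",
                               "Sam kicked the ball."])
print(kept, removed)   # ['Sam kicked the ball.'] 1
```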
3 Motivations
3.1 Experiment Design
While the previous studies in Section 2.2 each offer
valuable insights into how language models gener-
alize to unseen instances from various perspectives,
our goal in this work is to explore the impact of
the degree of indirectness on data efficiency, with
the aim of identifying ways to enhance the model’s
learning efficiency. Specifically, we examine how
the number of instances required for language mod-
els to induce grammatical knowledge changes as
the degree of indirectness in the training instances
increases. To achieve this, we assume that experi-
ments have to meet the following requirements:
Various Degrees of Indirectness in a Single Lin-
guistics Phenomenon To investigate the impact
of the degree of indirectness on the number of ob-
servations needed for grammar acquisition, we em-
ploy two graduated types of indirectness, lexical
and syntactic, in addition to direct evidence. Most
prior research focuses on a single degree of indi-
rectness for a given linguistic phenomenon.
Various Number of Observations Given our
aim for data efficiency, we need to quantify how
much the required amount of data for language
models to induce grammatical knowledge increases
due to indirectness. We employ six different ob-
servation counts, ranging from 0 to 100. Previous
studies focusing on indirect evidence are limited
in their ability to quantify changes in the number
of observations required, as they do not take into
account the frequency effect.
Various Linguistics Phenomena We explore
whether the two aspects mentioned above occur
universally across linguistic phenomena or are spe-
cific to certain phenomena. We employ seven types
of linguistic phenomena, each with referent tar-
gets consisting of several different parts of speech.
Most of the previous work, except for Patil et al.
(2024), focuses on one or two phenomena.
Inserting Sentences Containing Words that do
not Appear in Pretraining Data Considering
phenomena like anaphor agreement, to judge the
acceptability of a sentence, language models are
expected to understand the properties (e.g., num-
ber) of the referent in the sentence. To count the
number of observations for language models to induce grammatical knowledge, we need to precisely count how many times the language models encounter a sentence containing the referent before they understand its properties. For conventional approaches that ablate certain lexical items existing in corpora, the (sub)word of the target referent may appear in sentences other than the removed ones, making it difficult to count the observations accurately. To precisely control the number of observations of the referent, we employ sentences containing words that have never appeared in the pretraining corpora.
3.2 Inserting Instances with Newly Coined Words
We employ newly coined words (wugs) to introduce
additional instances including words that do not
appear in pretraining data. The advantages include:
• Handling the occurrences of target lexical items
may not eliminate their influence from the pre-
training corpus. To fully negate the effect of a
lexical item, all variants sharing the same stem
or subword would need to be removed, which
is complex and risks significantly distorting the
natural corpus distribution.
• When automatically generating wugs, we can
adequately control their frequency and evidence
strength, including their tokenization. Since our
aim here is to control the minimal information observable by the model, synthetic data allows for the elimination of noise.
• Our approach is a form of data augmentation that does not require any modification of lexical items or sentences in the corpora. Hence, it can be easily applied to other corpora and models.
While the use of artificial languages for analyzing language models has been explored in previous work (White and Cotterell, 2021; Ri and Tsuruoka, 2022), our approach is different in that we use artificial instances only at the token level, by introducing a word wug and inserting it into a natural corpus.
Anaphor gender agreement (ANA.GEN.AGR)
  DE:    <wug#n> has devoted herself
  LexIE: <wug#n> is painting herself
  SynIE: <wug#n> judges her work
  Evaluation: <wug#n> has devoted herself / *<wug#n> has devoted himself

Anaphor number agreement (ANA.NUM.AGR)
  DE:    the <wug#n> didn't see themselves
  LexIE: the <wug#n> can reward themselves
  SynIE: the <wug#n> loved their toy
  Evaluation: the <wug#n> didn't see themselves / *the <wug#n> didn't see itself

Transitive (TRANS.)
  DE:    some trees <wug#n>ed the car
  LexIE: no street can <wug#n> the city
  SynIE: every lion hunts what no prey can <wug#n>
  Evaluation: some trees <wug#n>ed the car / *some trees <wug#n>ed

Intransitive (INTRANS.)
  DE:    many rivers should <wug#n>
  LexIE: each ethic might <wug#n>
  SynIE: a man corrects that the answer will not <wug#n>
  Evaluation: many rivers should <wug#n> / *many rivers should <wug#n> dogs

Determiner-Noun agreement (D-N AGR)
  DE:    the senators use this <wug#n>
  LexIE: a window will open this <wug#n>
  SynIE: the <wug#n> sells the house
  Evaluation: the senators use this <wug#n> / *the senators use these <wug#n>

Subject-Verb agreement (V) (S-V AGR (V))
  DE:    the <wug#n> are leaving any traces
  LexIE: the <wug#n> climb few ladders
  SynIE: each key can open those <wug#n>
  Evaluation: the <wug#n> are leaving any traces / *the <wug#n> is leaving any traces

Subject-Verb agreement (S) (S-V AGR (S))
  DE:    the book <wug#n> a shelf
  LexIE: every chocolate <wug#n> several bars
  SynIE: the deer that trails the head <wug#n> a herd
  Evaluation: the book <wug#n> a shelf / *the books <wug#n> a shelf

Table 1: Linguistic phenomena and instances. The sentences starting with * are ungrammatical.
Phenomenon   | POS  | Gen. | Num. | (In)Transitive | Long agr
ANA.GEN.AGR  | noun | ✓    | –    | –              | ✓
ANA.NUM.AGR  | noun | –    | ✓    | –              | ✓
TRANS.       | verb | –    | –    | ✓              | –
INTRANS.     | verb | –    | –    | ✓              | –
D-N AGR      | adj  | –    | ✓    | –              | –
S-V AGR (V)  | verb | –    | ✓    | –              | –
S-V AGR (S)  | noun | –    | ✓    | –              | –

Table 2: Properties to judge evaluation data. POS denotes part-of-speech. Gen./Num. denotes gender/number. Long agr. is whether a long agreement is required.
4 Wug InDirect Evidence Test (WIDET)
This section describes how we construct additional training and evaluation instances, which comprise our dataset, WIDET. Following targeted syntactic evaluation (Linzen et al., 2016; Marvin and Linzen, 2018; Warstadt et al., 2020b), we employ the minimal pair paradigm, where pairs of sentences minimally differ in target words. Examples of instances are listed in Table 1.
4.1 Linguistic Phenomena
We employ the seven different linguistic phenom-
ena listed in Table 1, which we selected from the
benchmark BLiMP (Warstadt et al., 2020b)3. As
shown in Table 2, the phenomena vary in their
properties, so that we can analyze models’ behav-
ior from diverse perspectives. Since our selection
criteria are based on whether understanding the
properties of a single word is sufficient to judge
the linguistic phenomena correctly, we can only
cover limited linguistic phenomena. We anticipate
phenomena related to island effects, for instance,
to be beyond this scope.
3 Appendix A.1 details the specific phenomena referenced from BLiMP in this work.
4.2 Newly Coined Word Wug
We employ the tag <wug#n> as a newly coined
word to conduct controlled experiments using
words that never appeared in the pretraining cor-
pus. This approach does not entirely align with
the policy in Berko (1958), which employed words
like wug and wuz that are newly coined but phono-
logically natural in the target language by using
actual subwords. One concerning issue with Berko
(1958)’s policy is that the actual subwords can pro-
vide models with clues for correct grammatical
judgment, for example, by their occurrence in spe-
cific positions. While using actual subwords could
help models access grammatical knowledge needed
for accurate judgment, it complicates evaluating the
models’ true ability to learn from indirect evidence.
To avoid its possible effects, we instead use the
artificial tag <wug#n>. We analyze the differences
between the conditions using the tag and the origi-
nal wug in Section 7.1.
4.3 Indirectness of Additional Training Instances
We define the following three degrees of indirect-
ness (DE, LexIE, and SynIE). The difficulty in-
creases in the order of DE, LexIE, and SynIE:
Direct Evidence (DE) An instance identical to
the correct evaluation instances. We assume that
the properties of wug in an evaluation instance are
learned by referencing the training instance that
shares the same syntactical and lexical items as the
evaluation instance.
Lexically Indirect Evidence (LexIE) An in-
stance that conveys the same syntactic structure
as the evaluation instance but uses different lexical
items. We assume that the properties of wug in
an evaluation instance are learned by referencing
training instances with the same usage but different
lexical items from those in the evaluation instance.
Syntactically Indirect Evidence (SynIE) An in-
stance that reveals the target linguistic feature with
different syntactic and lexical items from evalua-
tion instances. The properties of wug in an evalua-
tion instance are learned by referencing the training
instance with different syntactic and lexical items
from those in the evaluation instance.
4.4 Training and Evaluation Template
We prepare 200 template pairs for each linguistic
phenomenon. Each template has three different
sets of tags, resulting in 200 × 3 = 600 pairs.
We anticipate that quantifiers and determiners
can influence linguistic generalization, making it
unclear whether language models rely on the prop-
erties of verbs and reflexive pronouns, quantifiers,
and determiners, or other factors as clues for judg-
ment, while previous studies have paid limited at-
tention to this (Patil et al., 2024). To mitigate such
effects, for number agreement, we added <wug#n>
without any suffixes to these sentences, expecting
the models to infer that <wug#n> is an inflected
form based on the sentence structure in which they
are embedded. We explore their effects in the
model’s generalization in Section 7.1. For the noun
subject of S-V AGR (V) and ANA.NUM.AGR, we
avoid any quantifiers and determiners other than "the". For the same reason, for the verb in S-V AGR (S), we only employ the present tense and do not employ any auxiliary verbs or tense suffixes. We ensured that the same <wug#n> (i.e., the tag with the same id) was used in both the grammatical and ungrammatical sentences of a pair, because we want the wug to occur identically in the training data.
4.5 Data Generation with LLM
To create a balanced corpus with varied degrees of indirectness, we use GPT-4 Turbo via the OpenAI API to generate the training and evaluation templates. To generate balanced training instances with different properties, we generate them separately based on the properties concerned (e.g., feminine and masculine pronouns have the same percentage in ANA.GEN.AGR). We prompt GPT-4 to generate balanced, diverse, and non-duplicated sentences. We generate evaluation instances and training instances for indirect evidence (LexIE, SynIE) with three different prompts. Subsequently, we obtain DE by extracting the correct sentence from the generated evaluation instances. We generate the sentences with the placeholder [WUG] and replace [WUG] with the tag <wug#n>, where the index number n distinguishes the coined words (e.g., <wug#124>). Examples of the prompts and detailed procedures are shown in Appendix A.4.
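As a concrete illustration of the placeholder-replacement step, the following minimal Python sketch fills the [WUG] placeholders with indexed tags; the function and variable names are ours and are not taken from the released code:

    import itertools

    def fill_wug_tags(template_pairs, start_index=0):
        # Replace the [WUG] placeholder in each (grammatical, ungrammatical)
        # template pair with an indexed tag such as <wug#124>. Both sentences
        # in a pair receive the same tag, so the wug occurs identically in
        # the grammatical and ungrammatical variants.
        counter = itertools.count(start_index)
        filled = []
        for good, bad in template_pairs:
            tag = "<wug#{}>".format(next(counter))
            filled.append((good.replace("[WUG]", tag), bad.replace("[WUG]", tag)))
        return filled

    # Example with one ANA.GEN.AGR template pair (cf. the prompt in Figure 4):
    pairs = [("[WUG] will hurt herself.", "[WUG] will hurt himself.")]
    print(fill_wug_tags(pairs, start_index=124))
    # -> [('<wug#124> will hurt herself.', '<wug#124> will hurt himself.')]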
5 Experiments and Results
5.1 Settings
Pretraining Data We randomly sample 675k
sentences (16M words) from English Wikipedia
articles and use them as pretraining data.4 We in-
ject the additional training instances into the data.
The detailed preprocessing steps and additionally
injected training instances are described in Ap-
pendix A. We shuffle and deduplicate sentences and
remove ones containing fewer than two words.
4 Retrieved from https://github.com/phueb/BabyBERTa.
Figure 2: The results (accuracy; %) of experiments for linguistic phenomena and evidence types. The gray dotted lines indicate the model's scores when trained on pretraining data without any additional instances (n=0).
The data is then lowercased, and periods are removed from the sentences.
Frequency of Additional Instances We compare the language models trained on the pretraining data injected with indirect instances that appear n times (n = 0, 1, 5, 25, 50, 75, 100) for each instance.
Models We use BabyBERTa (Huebner et al.,
2021), which is a minimal variant of RoBERTa (Liu
et al., 2019). We modify some hyperparameters due
to the pretraining data size. More detailed infor-
mation is shown in Table 6. We train the tokenizer
from scratch on the pretraining data, adding the
tags to the vocabulary so that the tokenizer treats
each tag as one token.
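A minimal sketch of this tokenizer and model setup using the Hugging Face transformers API is shown below; the file path and the exact construction details are illustrative assumptions, not the authors' released code:

    from transformers import RobertaConfig, RobertaForMaskedLM, RobertaTokenizerFast

    # Load a tokenizer trained from scratch on the Wikipedia sample
    # (the path is a placeholder), then register every wug tag as one token.
    tokenizer = RobertaTokenizerFast.from_pretrained("./wiki16m-tokenizer")
    wug_tags = ["<wug#{}>".format(i) for i in range(600)]
    tokenizer.add_tokens(wug_tags)  # each <wug#n> now maps to a single token id

    # BabyBERTa-style configuration following Table 6 (hidden size 512,
    # 8 heads, 8 layers); the vocabulary size covers the newly added tags.
    config = RobertaConfig(vocab_size=len(tokenizer), hidden_size=512,
                           num_attention_heads=8, num_hidden_layers=8,
                           layer_norm_eps=1e-12, initializer_range=0.02)
    model = RobertaForMaskedLM(config)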
Evaluation Metrics We use the accuracy of se-
lecting the correct sentence as our evaluation met-
ric. We employ pseudo-likelihood (Salazar et al., 2020)5 normalized by token length because the two sentences in an evaluation pair can have different token lengths.6
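The following Python sketch illustrates this metric for a masked language model; it is an illustration of length-normalized pseudo-likelihood scoring rather than the exact evaluation pipeline used in this work (which relies on the BabyLM evaluation code):

    import torch

    def pseudo_log_likelihood(sentence, model, tokenizer):
        # Length-normalized pseudo-log-likelihood (Salazar et al., 2020):
        # mask each token in turn, sum the log-probability the MLM assigns
        # to the original token, and divide by the number of scored tokens.
        enc = tokenizer(sentence, return_tensors="pt")
        input_ids = enc["input_ids"][0]
        total, n = 0.0, 0
        for i in range(1, input_ids.size(0) - 1):  # skip special tokens
            masked = input_ids.clone()
            masked[i] = tokenizer.mask_token_id
            with torch.no_grad():
                logits = model(masked.unsqueeze(0)).logits[0, i]
            total += torch.log_softmax(logits, dim=-1)[input_ids[i]].item()
            n += 1
        return total / n

    def prefers_grammatical(pair, model, tokenizer):
        # Accuracy is the fraction of minimal pairs for which the grammatical
        # sentence receives the higher normalized pseudo-likelihood.
        good, bad = pair
        return pseudo_log_likelihood(good, model, tokenizer) > \
               pseudo_log_likelihood(bad, model, tokenizer)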
5.2 Results
We review the main results by answering our re-
search questions: (i) What degree of and how much
data do language models need to acquire grammat-
ical knowledge to judge the acceptability of a sen-
tence? (ii) Do observations show similar trends across broader categories of linguistic phenomena? The
results are shown in Figure 2.
5We use the source code in https://github.com/
babylm/evaluation-pipeline-2023.
6Normalization by token length may still result in token-
biases (Ueda et al., 2024).
Direct Evidence As for DE, increasing the num-
ber of observations generally contributed to lin-
guistic generalization in language models. How-
ever, the extent of improvement varied across differ-
ent linguistic phenomena. In ANA.GEN.AGR and
ANA.NUM.AGR, the score increased more gradu-
ally, particularly between 25 and 75 occurrences,
compared to the other agreement phenomena. This
difference might be due to anaphor agreement,
which often involves a longer distance between the
target words and the words with properties neces-
sary for correct judgment. We thoroughly examine
the effects of distance and attractors in Section 6.
Lexically Indirect Evidence In about half of the phenomena (D-N AGR, S-V AGR (V), ANA.NUM.AGR, and INTRANSITIVE), LexIE induces generalization more slowly but steadily than DE. However, in the remaining phenomena, the language models do not acquire the grammatical knowledge necessary to correctly judge acceptability. This result is surprising because LexIE differs from a correct evaluation sentence only in its lexical items and shares the same syntactic structure. This trend cannot be explained by the properties in Table 2.
Syntactically Indirect Evidence In most phenomena, the models fail to generalize from SynIE; increasing the number of observations did not improve generalization but merely extended the learning time.
racy of SynIE drastically decreases inversely with
the number of observations. This intriguing phe-
nomenon is likely due to the heuristics of the lan-
guage model. The final word in the training in-
stances (see Table 1) is the <wug#n>, whereas it is an actual direct object noun in the correct evaluation sentences. This suggests that the language model might exhibit linear generalization (Mueller et al., 2022; McCoy et al., 2020), which differs from human-like hierarchical generalization. The model seems to judge correctness based on whether certain words follow the <wug#n>, even though the wug should be recognized as a transitive verb because the relative pronoun "what" is its object. This implies that instances requiring complex hierarchical inference may hinder generalization.
Interf.               | Evd. | Training instance
Attractor type (AT)   | DE   | <w> loves herself
                      | AT0  | <w> helping the child loves herself
                      | AT1  | <w> helping the man loves herself
                      | AT2  | <w> helping him loves herself
Attractor number (AN) | DE   | <w> loves herself
                      | AT1  | <w> helping the man loves herself
                      | AN0  | <w> helping the man to see the dad loves herself
                      | AN1  | <w> helping the man for the king to see the dad loves herself
                      | AN2  | <w> helping the man for the son of the king to see the dad loves herself
Distance (DT)         | DE   | <w> loves herself
                      | AT0  | <w> helping the child loves herself
                      | DT0  | <w> who helps the child loves herself
                      | DT1  | <w> whose cat helps the child loves herself
                      | DT2  | <w> whose cat helps the child who finds the teachers loves herself

Table 3: Interference types and training instances used in the analysis. <w> corresponds to <wug#n>.
Overall Our findings mainly suggest that language models do not sufficiently induce linguistic generalization from indirect positive evidence, especially SynIE, while they do induce it from direct evidence. Wei et al. (2021) find that their results support the Reduce Error Hypothesis (Ambridge et al., 2015), under which high-frequency words are learned better. The results in our work also support this hypothesis for DE, but for LexIE and SynIE not all linguistic phenomena support it.
6 Analysis with More Indirect Instances
In Section 5, DE induced the model’s linguistic
generalization but its data efficiency varies by lin-
guistic phenomena. For anaphor agreement, the
models’ learning is more apt to reach a plateau
in 25 – 75 observations compared to other phe-
nomena (See the figure for anaphor agreement in
Figure 2). This stagnation might be caused by the
longer distance between the wug and the reflexives,
whereas the relevant items are adjacent to each
other in other phenomena such as TRANSITIVE.
To corroborate this negative effect of long distance
on learning, we employ more indirect agreement
instances to investigate whether the long distance
hinders linguistic generalization on ANA.GEN.AGR
in language models.
The difficulty of long-distance agreement is caused by attractors and distance (Linzen et al., 2016). Agreement attractors are intervening words that distract the learner from judging the correct agreement (Giulianelli et al., 2018), e.g., the competing word "man" in the case of AT1 in Table 3. When language models judge gender agreement, they should check whether the word "<wug#n>" corresponds to the gender of the reflexive. Distance refers to the number of words intervening between the antecedent "<wug#n>" and "herself".
The language models’ grammatical knowledge
concerning long-distance dependencies has been
investigated in previous studies (Giulianelli et al.,
2018; Li et al., 2023), and these studies argue that
the models can indeed acquire the knowledge of
long-distance agreement. However, the overall re-
sults on anaphor agreement in this study suggest
that further investigation is required to reveal the
relationship between models’ performance and the
distance of items relevant to correct judgment. For
this purpose, we conduct a fine-grained analysis
using synthetic sentences varying the distance be-
tween wugs and reflexive pronouns.
6.1 Target Phenomena
We compare the models trained on the corpus with
additional instances of anaphor gender agreement,
from the perspective of the attractor type, number,
and distance as below. Table 3 lists all kinds of
training instances compared in this analysis.
To create the instances, we use GPT-4 to gener-
ate nouns differing in gender and number and sam-
ple the designated number of items from these gen-
erated items. For feminine and masculine nouns,
we collect 100 nouns each. From the generated
items, we first select 25 nouns for each gender.
Then, we create both the singular and plural forms
of the selected words and double them to cre-
ate minimal pairs. The prompt is shown in Ap-
pendix A.4. Additionally, we also collect 100 neu-
tral nouns such as teacher and child. The verbs that we newly employ are collected from LexIE in ANA.GEN.AGR to avoid duplication.
Attractor Type (AT) We investigate whether at-
tractors degrade linguistic generalization in ANA.GEN.AGR and how their distracting strength affects the models' acquisition of anaphor agreement. DE indicates the direct instances examined
in Section 5, which does not have any attractors and
works as a baseline here. AT0 includes neutral com-
mon nouns, while AT1 employs common opposite-
gender nouns, and AT2 uses opposite-gender pro-
nouns. We assume that the magnitude of attractors’
interference follows the order AT0 < AT1 < AT2,
given that the more similar their properties are to
reflexives, the more distracting they will be.
Attractor Number (AN) We examine whether
the number of attractors affects the model’s acqui-
sition. We use gendered common nouns as at-
tractors. DE works as a baseline because it has no
attractors. We expect that the more attractors there
are, the more difficult it is to generalize correctly.
Distance (DT) We analyze the effect of distance
on the model’s acquisition. We assume that the
more distance intervening between wug and reflex-
ive, the more difficult it is to judge sentence accept-
ability. We use neutral nouns there to explore the
effect of the number of words genuinely.
6.2 Results
As shown in Figure 3, after 100 observations, DE, which has the shortest distance and no attractors, obtains the highest scores for all three viewpoints, although this tendency does not hold at intermediate numbers of observations. The most difficult instances for each interference type lead to the language model's lowest scores after 100 observations. AT2, which includes an opposite-gender pronoun as an attractor, shows particularly unstable generalization. We initially expected that instances with longer distances and more attractors would interfere more strongly with the models' generalization; however, this tendency was not observed in the experiment. To the question of whether instances with long-distance agreement induce linguistic generalization, these results suggest that, with a larger number of observations, the model's generalization largely reaches a plateau.
Figure 3: Models’ scores for more indirect instances.
7 Discussion
7.1 Considering Wug Creation
In this work, we use newly coined words that do
not appear in the original vocabulary, following
Berko (1958). Still, the wug we use differs somewhat from the original one. The original wug test used words that do not exist in the language but conform to its phonological rules. In contrast, we use the tag <wug#n> as the wug in our experiments. Since the original wug is more phonologically natural and its subwords are in the existing vocabulary, the original setting is closer to the environment of human language acquisition.
On the other hand, that setting might not be suitable for conducting controlled experiments on the number of instances that the model observes, because it is far from a situation where a certain word is never encountered; this is why we used the tag <wug#n>. In
this section, we compare our method (tag method)
and the original method (wug method) to explore
the difference in their impact on the model’s lin-
guistic generalization.
Wug Generation We create wugs using the pseudoword generator Wuggy7 and choose 1.2k nouns from sample data taken from the one-billion-word Corpus of Contemporary American English (COCA).8 To create wug-like words, we generate four pseudowords for each noun and randomly select one. We prepare 200 × 3 = 600 pseudowords, each set of 200 of which is used separately (wug_v1–wug_v3), because we expect that different wugs have different subwords and may show different results.9 We use those pseudo nouns instead of the tag in the same way as in the previous experiments.
7 https://github.com/WuggyCode/wuggy.
8 Downloaded from https://www.wordfrequency.info/samples/words_219k.txt.
9 On the other hand, for tag and tag w/ morph., we show the results of only one model, because the different tags <wug#n> have the same parameters and they actually show the same results.
N
wug method
ANA. NUM. AGR
Phenomenon
D-N AGR
S-V AGR (V)
0
25
tag
tag w/ morph.
wug_v1
wug_v2
wug_v3
tag
tag w/ morph.
wug_v1
wug_v2
wug_v3
57.5
59.0
81.3
81.2
81.5
72.5
94.0
92.3
90.5
90.5
47.0
80.5
89.5
91.2
88.7
76.2
99.5
87.7
87.7
87.5
62.2
83.3
86.7
86.0
85.0
78.0
91.3
90.2
88.5
86.5
Table 4: Scores calculated by the models trained on the
pretraining data with indirect instances of different wug
creation methods. N is the number of observations.
Settings We target three phenomena, ANA.NUM.AGR, D-N AGR, and S-V AGR (V), the wug of which is considered a common noun.
No inflectional morphemes are added to plural
common nouns in the tag method while the
morphemes are added to plural common nouns
in the wug method. For ablation, we prepare the
tag with inflectional morphemes (tag w/ morph.
method), which employs the tag <wug#n> as in the tag method but uses inflectional morphemes as in the wug method. We compare the models
trained on the pretraining data with the tag method,
the wug methods, and tag w/ morph. method.
Other settings are the same as Section 5.
Results Table 4 shows the scores of the tag, tag w/ morph., and the three sets of wugs. With the wug and tag w/ morph. methods, the language models correctly judge the acceptability of sentences mostly more than 80–90% of the time, surprisingly even with data that includes zero additional instances. This result is probably because language models determine whether a word is singular or plural based on whether an inflectional morpheme "s" follows it, even if the word is novel. This occurs with both novel words and novel subword combinations, but, comparing the two methods, the impact is greater with the latter. In addition, despite our expectation that different subword combinations would show different results, we observed no large score variance among the three vocabulary sets except at 25 observations in ANA.NUM.AGR. From those results, we found a trade-off between settings plausible for human language acquisition and strictly controlled settings. We prioritized the latter in this work, but the former is also a reasonable choice depending on the research questions.
Phenomenon    | Std   | Score
ANA.GEN.AGR   | 0.02  | 51.3 ± 0.95
              | 0.002 | 55.5 ± 1.73
ANA.NUM.AGR   | 0.02  | 59.7 ± 2.44
              | 0.002 | 64.4 ± 2.84
TRANSITIVE    | 0.02  | 90.2 ± 1.57
              | 0.002 | 90.0 ± 1.15
INTRANSITIVE  | 0.02  | 12.7 ± 1.53
              | 0.002 | 12.0 ± 0.60
D-N AGR       | 0.02  | 47.4 ± 1.39
              | 0.002 | 48.9 ± 1.68
S-V AGR (V)   | 0.02  | 56.4 ± 5.23
              | 0.002 | 54.7 ± 1.78
S-V AGR (S)   | 0.02  | 49.1 ± 2.98
              | 0.002 | 49.4 ± 1.19

Table 5: Scores (mean ± std) of language models with different seeds and standard deviations of the initializers.
7.2 Zero Observations of Wug
While a tag <wug#n> is added to the vocabulary,
its parameters in language models are randomly
initialized. If the language models never encounter
sentences containing this tag during training, its pa-
rameters still remain in their initialized state, which
may lead to varying results in language models de-
pending on factors such as the initializer’s standard
deviation (std) and the random seed used. To ver-
ify this effect, we compare the language model
using the default std of the initializer for all weight
matrices (std = 0.02) to one with one-tenth the std (std = 0.002), using three different seeds. Table 5
shows that the deviation of scores is smaller in the
model using one-tenth std for initializer compared
to the model using the default std. This finding
implies that a smaller std can enhance the stability
of the results. However, an excessively small std
may risk negatively affecting the training process.
Hence, we employ the default std in the current work.
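As an illustration (not the authors' code), the comparison amounts to changing the initializer_range of the configuration; the embedding row of a never-observed tag keeps the scale set by this value:

    import torch
    from transformers import RobertaConfig, RobertaForMaskedLM

    for std in (0.02, 0.002):  # default vs. one-tenth initializer std (Table 5)
        torch.manual_seed(0)
        cfg = RobertaConfig(vocab_size=9600, hidden_size=512,
                            num_attention_heads=8, num_hidden_layers=8,
                            initializer_range=std)
        emb = RobertaForMaskedLM(cfg).get_input_embeddings().weight
        # A wug tag that never occurs in training keeps its random row, whose
        # scale is roughly the chosen std (the token id 9000 is illustrative).
        print(std, emb[9000].std().item())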
8 Conclusion
We investigate the degree of indirectness and the
amount of data required to induce human-like lin-
guistic generalization in language models. We
found that language models do not induce human-
like linguistic generalization even with a degree of
indirectness that seems intuitively manageable for
humans, depending on language phenomena. This
limitation indicates a direction for future studies:
implementing a model that can use indirect evi-
dence, which will lead to data-efficient language
acquisition comparable to that of humans.
Limitations
We recognize the following limitations in this study:
Linguistic Knowledge by Function Words We
generate synthetic instances only for linguistic phe-
nomena concerning content words such as nouns
and verbs. We avoid generating new function
words (e.g., new wh-word as a relative pronoun).
Nonce Sentence We have not investigated the difference between natural sentences and nonce sentences (Gulordava et al., 2018; Wei et al., 2021)
that are grammatical but completely meaningless
because we create additional training and evalua-
tion instances with LLM, which tends to generate
naturally plausible sentences. Nonce sentences are
less plausible in human language acquisition but
exclude semantic selectional-preferences cues (Gu-
lordava et al., 2018; Goldberg, 2019). According to
Section 7.1, there can be a trade-off between train-
ing language models in experimental settings that
closely resemble natural human language acquisi-
tion and those that are strictly controlled. Future
work can determine whether nonce sentences with
indirect evidence differently affect linguistic gener-
alization in language models.
Limited Model Size and Pretraining Data We
use a small-scale language model and pretraining
data in this work because we aim to find the dif-
ferences from human inductive biases as much as
possible. It is uncertain whether the same trends as in our work will appear in models of all sizes. Whether
scaling laws apply to indirect data in accelerating
model generalization would be an interesting future
work.
Ethics Statement
There might be a possibility that the texts we used
(Wikipedia) and the sentences generated by large
language models are socially biased, despite their
popular use in the NLP community.
Acknowledgments
We would like to express our gratitude to the anony-
mous reviewers who provided many insightful com-
ments that have improved our paper. This work
was supported by JSPS KAKENHI Grant Num-
bers JP21H05054, 22K17954, and 24KJ1700, and
JST PRESTO Grant Numbers JPMJPR21C2 and
JPMJPR20C4.

References
Ben Ambridge, Evan Kidd, Caroline F. Rowland, and
Anna L. Theakston. 2015. The ubiquity of frequency
effects in first language acquisition. Journal of Child
Language, 42(2):239–273.
Jean Berko. 1958. The child’s learning of english mor-
phology. WORD, 14(2-3):150–177.
Noam Chomsky. 1993. Lectures on Government and
Binding. De Gruyter Mouton, Berlin, New York.
Mario Giulianelli, Jack Harding, Florian Mohnert,
Dieuwke Hupkes, and Willem Zuidema. 2018. Under
the hood: Using diagnostic classifiers to investigate
and improve how language models track agreement
information. In Proceedings of the 2018 EMNLP
Workshop BlackboxNLP: Analyzing and Interpreting
Neural Networks for NLP, pages 240–248, Brussels,
Belgium. Association for Computational Linguistics.
Yoav Goldberg. 2019. Assessing BERT's syntactic abilities.
Kristina Gulordava, Piotr Bojanowski, Edouard Grave,
Tal Linzen, and Marco Baroni. 2018. Colorless green
recurrent networks dream hierarchically. In Proceed-
ings of the 2018 Conference of the North American
Chapter of the Association for Computational Lin-
guistics: Human Language Technologies, Volume
1 (Long Papers), pages 1195–1205, New Orleans,
Louisiana. Association for Computational Linguis-
tics.
Jennifer Hu, Jon Gauthier, Peng Qian, Ethan Wilcox,
and Roger Levy. 2020. A systematic assessment
of syntactic generalization in neural language mod-
els. In Proceedings of the 58th Annual Meeting of
the Association for Computational Linguistics, pages
1725–1744, Online. Association for Computational
Linguistics.
Philip A. Huebner, Elior Sulem, Fisher Cynthia, and
Dan Roth. 2021. BabyBERTa: Learning more gram-
mar with small-scale child-directed language. In Pro-
ceedings of the 25th Conference on Computational
Natural Language Learning, pages 624–646, Online.
Association for Computational Linguistics.
Cara Su-Yi Leong and Tal Linzen. 2024. Testing learn-
ing hypotheses using neural networks by manipulat-
ing learning data.
Bingzhi Li, Guillaume Wisniewski, and Benoît Crabbé.
2023. Assessing the capacity of transformer to ab-
stract syntactic representations: A contrastive analy-
sis based on long-distance agreement. Transactions
of the Association for Computational Linguistics,
11:18–33.
Tal Linzen, Emmanuel Dupoux, and Yoav Goldberg.
2016. Assessing the ability of LSTMs to learn syntax-
sensitive dependencies. Transactions of the Associa-
tion for Computational Linguistics, 4:521–535.
10
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Man-
dar Joshi, Danqi Chen, Omer Levy, Mike Lewis,
Luke Zettlemoyer, and Veselin Stoyanov. 2019.
RoBERTa: A robustly optimized BERT pretraining
approach.
Rebecca Marvin and Tal Linzen. 2018. Targeted syn-
tactic evaluation of language models. In Proceed-
ings of the 2018 Conference on Empirical Methods
in Natural Language Processing, pages 1192–1202,
Brussels, Belgium. Association for Computational
Linguistics.
R. Thomas McCoy, Robert Frank, and Tal Linzen. 2020.
Does syntax need to grow on trees? sources of hier-
archical inductive bias in sequence-to-sequence net-
works. Transactions of the Association for Computa-
tional Linguistics, 8:125–140.
Meta. 2024. The llama 3 herd of models.
Kanishka Misra and Kyle Mahowald. 2024. Language
models learn rare phenomena from less rare phenom-
ena: The case of the missing AANNs.
Aaron Mueller, Robert Frank, Tal Linzen, Luheng Wang,
and Sebastian Schuster. 2022. Coloring the blank
slate: Pre-training imparts a hierarchical inductive
bias to sequence-to-sequence models. In Findings of
the Association for Computational Linguistics: ACL
2022, pages 1352–1368, Dublin, Ireland. Association
for Computational Linguistics.
OpenAI. 2024. GPT-4 technical report.
Abhinav Patil, Jaap Jumelet, Yu Ying Chiu, Andy La-
pastora, Peter Shen, Lexie Wang, Clevis Willrich,
and Shane Steinert-Threlkeld. 2024. Filtered cor-
pus training (FiCT) shows that language models can
generalize from indirect evidence.
Lisa S. Pearl and Benjamin Mis. 2016. The role of
indirect positive evidence in syntactic acquisition: A
look at anaphoric “one”. Language, 92(1):1–30.
Alex Warstadt, Aaron Mueller, Leshem Choshen, Ethan
Wilcox, Chengxu Zhuang, Juan Ciro, Rafael Mos-
quera, Bhargavi Paranjabe, Adina Williams, Tal
Linzen, and Ryan Cotterell. 2023. Findings of the
BabyLM challenge: Sample-efficient pretraining on
developmentally plausible corpora. In Proceedings
of the BabyLM Challenge at the 27th Conference on
Computational Natural Language Learning, pages
1–34, Singapore. Association for Computational Lin-
guistics.
Alex Warstadt, Alicia Parrish, Haokun Liu, Anhad Mo-
hananey, Wei Peng, Sheng-Fu Wang, and Samuel R.
Bowman. 2020a. BLiMP: A benchmark of linguis-
tic minimal pairs for English. In Proceedings of the
Society for Computation in Linguistics 2020, pages
409–410, New York, New York. Association for Com-
putational Linguistics.
Alex Warstadt, Alicia Parrish, Haokun Liu, Anhad Mo-
hananey, Wei Peng, Sheng-Fu Wang, and Samuel R.
Bowman. 2020b. BLiMP: The benchmark of lin-
guistic minimal pairs for English. Transactions of
the Association for Computational Linguistics, 8:377–
392.
Jason Wei, Dan Garrette, Tal Linzen, and Ellie Pavlick.
2021. Frequency effects on syntactic rule learning
in transformers. In Proceedings of the 2021 Con-
ference on Empirical Methods in Natural Language
Processing, pages 932–948, Online and Punta Cana,
Dominican Republic. Association for Computational
Linguistics.
Jennifer C. White and Ryan Cotterell. 2021. Examining
the inductive bias of neural language models with
artificial languages. In Proceedings of the 59th An-
nual Meeting of the Association for Computational
Linguistics and the 11th International Joint Confer-
ence on Natural Language Processing (Volume 1:
Long Papers), pages 454–463, Online. Association
for Computational Linguistics.
Ryokan Ri and Yoshimasa Tsuruoka. 2022. Pretraining
with artificial language: Studying transferable knowl-
edge in language models. In Proceedings of the 60th
Annual Meeting of the Association for Computational
Linguistics (Volume 1: Long Papers), pages 7302–
7315, Dublin, Ireland. Association for Computational
Linguistics.
Charles Yu, Ryan Sie, Nicolas Tedeschi, and Leon
Bergen. 2020. Word frequency does not predict gram-
matical knowledge in language models. In Proceed-
ings of the 2020 Conference on Empirical Methods
in Natural Language Processing (EMNLP), pages
4040–4054, Online. Association for Computational
Linguistics.
Julian Salazar, Davis Liang, Toan Q. Nguyen, and Ka-
trin Kirchhoff. 2020. Masked language model scor-
ing. In Proceedings of the 58th Annual Meeting of
the Association for Computational Linguistics, pages
2699–2712, Online. Association for Computational
Linguistics.
Naoya Ueda, Masato Mita, Teruaki Oka, and Mamoru
Komachi. 2024. Token-length bias in minimal-pair
paradigm datasets. In Proceedings of the 2024 Joint
International Conference on Computational Linguis-
tics, Language Resources and Evaluation (LREC-
COLING 2024), pages 16224–16236, Torino, Italia.
ELRA and ICCL.
11
A Data generation
A.1 Linguistic phenomena
We employ the following seven linguistic phenomena (Warstadt et al., 2020b) to create training/evaluation instances. The phenomenon "transitive" is from "causative", "intransitive" is from "drop_argument", "determiner-noun agreement" is from "determiner_noun_agreement_2", "subject-verb agreement (V)" is from "regular_plural_subject_verb_agreement_1", and "subject-verb agreement (S)" is from "regular_plural_subject_verb_agreement_2".

Figure 4: An example of the prompt used to create evaluation examples.
A.2 Pretraining Data
B Considering BLiMP Score Calculation
We aim to pretrain the language models for 18
epochs while controlling the number of occur-
rences of target instances. To achieve this, we con-
catenate the pretraining data 18 times consecutively
and randomly select where to inject each additional
training instance.
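A minimal sketch of this construction (variable and function names are ours) is:

    import random

    def build_corpus(pretrain_sents, extra_instances, n_occurrences,
                     epochs=18, seed=0):
        # Concatenate the pretraining sentences `epochs` times and insert each
        # additional training instance `n_occurrences` times at random positions,
        # so every injected instance is seen exactly n times over training.
        rng = random.Random(seed)
        corpus = list(pretrain_sents) * epochs
        for sent in extra_instances:
            for _ in range(n_occurrences):
                corpus.insert(rng.randrange(len(corpus) + 1), sent)
        return corpus  # a sketch; not optimized for millions of sentences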
A.3 Creating Data with LLM
GPT-4 sometimes generates sentences inconsistently or with hallucination: it may generate the same sentence repeatedly and sometimes stops generating midway. To generate as many lexically diverse instances as possible, we prompt GPT-4 to avoid using the same lemma as in the previous instance. To obtain enough appropriate instances, we prompt GPT-4 to generate double the number of instances10,
and then select the designated number of instances,
avoiding duplicates. We adjust the percentage of
sentences with negation words to 10–50%. The bal-
anced instances contained 100 feminine and 100
masculine instances in ANA.GEN.AGR, 34 femi-
nine singular and 33 masculine singular, 34 sin-
gular and 100 plural instances in ANA.NUM.AGR,
200 instances each in TRANSITIVE and INTRAN-
SITIVE, 50 this, 50 that, 50 these and 50 those in
D-N AGR. 100 singular and 100 plural each in S-V
AGR.
A.4 Prompts
An example of the prompts used to generate a minimal sentence pair in anaphor gender agreement, where the <wug#n> in the correct sentence is "herself", is shown in Figure 4. Another example is found at https://github.com/nii-cl/widet. We use gpt-4-turbo with top_p set to 1.0 and temperature set to 0.
10The number of instances generated based on the prompt
can vary. Sometimes the output meets the specified quantity,
while other times it may be fewer, potentially even less than
half of the requested amount. If not enough instances are
generated, we input instances from three steps earlier and
generate additional instances to meet the requirements.
To select one sentence in each pair while evaluating,
we calculate its sentence-level likelihood, referring
to Warstadt et al. (2020a); Huebner et al. (2021).
Conversely, Hu et al. (2020) argue that token-level
likelihood comparisons, comparing the aggregate
likelihood over a word like "herself" vs. a word
like "himself", are a more precise evaluation than
sentence-level probability. We consider the differ-
ence using the two phenomena as a case study.
Settings We compare the sentence-level likeli-
hood used in this work with two types of score
calculation; wug-level likelihood and antecedent-
level likelihood. Given the sentence “<wug#n>
has devoted herself/*himself,” the antecedent-level
likelihood compares the probabilities assigned to
the antecedents “herself” and “himself.” This is
similar to the method used by Hu et al. (2020). The
wug-level likelihood, on the other hand, compares
the probabilities assigned to each pair of <wug#n>.
Since we are using MLMs in our research, it is
possible to adapt this for our calculations.
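The token-level variants can be sketched as follows (assuming the trained MLM and tokenizer from Section 5.1 are in scope, and that each compared word is a single token, which holds for the tag <wug#n> but is an assumption for the reflexives):

    import torch

    def masked_token_log_prob(input_ids, position, target_id, model, tokenizer):
        # Log-probability the MLM assigns to `target_id` when `position` is masked.
        masked = input_ids.clone()
        masked[position] = tokenizer.mask_token_id
        with torch.no_grad():
            logits = model(masked.unsqueeze(0)).logits[0, position]
        return torch.log_softmax(logits, dim=-1)[target_id].item()

    # Antecedent-level comparison for "<wug#7> has devoted herself/himself":
    # mask the reflexive position and compare the two candidate fillers.
    ids = tokenizer("<wug#7> has devoted herself", return_tensors="pt")["input_ids"][0]
    pos = ids.size(0) - 2  # assumes the reflexive is the final (single) token
    her = tokenizer.convert_tokens_to_ids("herself")
    him = tokenizer.convert_tokens_to_ids("himself")
    prefers_correct = (masked_token_log_prob(ids, pos, her, model, tokenizer) >
                       masked_token_log_prob(ids, pos, him, model, tokenizer))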
Results The scores of the language models calculated by the different score calculation methods are shown in Figure 5. The two phenomena show different trends. For anaphor gender agreement, the sentence-level and wug-level calculation methods show similar trends, where the score increases gradually between 25 and 75 occurrences. The antecedent-level method does not show such a result but hits a plateau after 75 observations. For anaphor number agreement, the sentence-level and antecedent-level methods show similarities, but the latter shows slightly more efficient learning than the former. The wug-level method does not show improvement until 100 observations. The results suggest that, in our limited setting, there are distinct trends among the three methods. The sentence-level and antecedent-level methods each have their advantages depending on the linguistic phenomenon. More analysis of their differences is an interesting direction for future work.
12
Create 400 minimal sentence pairs, containing a grammatical and an ungrammatical sentence, following the template pair and rules.Template pair:[WUG] <singular transitive verb> herself.[WUG] <singular transitive verb> himself.Rules:-You must include the lemma of <singular transitive verb> with a different initial letter and different final letter from the previous ones.-Always use the female proper noun [WUG] with bracket[] and uppercase.-You must include various auxiliary verbs and tenses in <singular transitive verb> with a different initial letter and different final letter from the previous ones.-You often include negations in <singular transitive verb> if previous pairs did not contain ones.-Do not include adverbs.-Generate 400 pairs including numbering that starts from 1 and ends at 400.Example:[WUG] will hurt herself.*[WUG] will hurt himself.Figure 5: Model’s score for three different score calcu-
lation methods
Model     | architecture: roberta-base; vocab size: 9,600; hidden size: 512; heads: 8; layers: 8; dropout: 0.1; layer norm eps: 1e-12; initializer range: 0.02
Optimizer | algorithm: AdamW; learning rate: 2e-4; betas: (0.9, 0.999); weight decay: 0.0
Scheduler | type: linear; warmup updates: 24,000
Training  | gradient accum.: 4; epochs: 18; batch size: 16; line by line: true; NGPU: 1

Table 6: Hyperparameters of the language models.
C Hyperparameters
Hyperparameters in our work are listed in Table 6.
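For concreteness, the optimization setup in Table 6 corresponds roughly to the following sketch using standard PyTorch/transformers utilities; `model` and `num_training_steps` are assumed to be defined elsewhere:

    import torch
    from transformers import get_linear_schedule_with_warmup

    # AdamW with lr 2e-4, betas (0.9, 0.999), weight decay 0.0 (Table 6)
    optimizer = torch.optim.AdamW(model.parameters(), lr=2e-4,
                                  betas=(0.9, 0.999), weight_decay=0.0)
    # Linear schedule with 24,000 warmup updates (Table 6); num_training_steps
    # depends on corpus size, batch size 16, and gradient accumulation 4.
    scheduler = get_linear_schedule_with_warmup(optimizer,
                                                num_warmup_steps=24_000,
                                                num_training_steps=num_training_steps)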
synthetic_cpt | 4 | InPars_Data_Augmentation_for_Information_Retrieval_using_Large_Language_Models.pdf | 4
arXiv:2301.02998v2 [cs.IR] 21 Feb 2024
Published in Transactions on Machine Learning Research (MM/YYYY)
InPars-Light: Cost-Effective Unsupervised Training of Effi-
cient Rankers
Leonid Boytsov∗
Amazon AWS AI Labs
Pittsburgh
USA
Preksha Patel
Vivek Sourabh
Riddhi Nisar
Sayani Kundu
Ramya Ramanathan
Eric Nyberg
Carnegie Mellon University
Pittsburgh
USA
[email protected]
Reviewed on OpenReview: https://openreview.net/forum?id=sHSKFYyINO
Abstract
We carried out a reproducibility study of InPars, which is a method for unsupervised training
of neural rankers (Bonifacio et al., 2022). As a by-product, we developed InPars-light, which
is a simple-yet-effective modification of InPars. Unlike InPars, InPars-light uses 7x-100x
smaller ranking models and only a freely available language model BLOOM, which—as
we found out—produced more accurate rankers compared to a proprietary GPT-3 model.
On all five English retrieval collections (used in the original InPars study) we obtained
substantial (7%-30%) and statistically significant improvements over BM25 (in nDCG and
MRR) using only a 30M parameter six-layer MiniLM-30M ranker and a single three-shot
prompt. In contrast, in the InPars study only a 100x larger monoT5-3B model consistently
outperformed BM25, whereas their smaller monoT5-220M model (which is still 7x larger than
our MiniLM ranker) outperformed BM25 only on MS MARCO and TREC DL 2020. In the
same three-shot prompting scenario, our 435M parameter DeBERTA v3 ranker was at par
with the 7x larger monoT5-3B (average gain over BM25 of 1.3 vs 1.32): In fact, on three out of
five datasets, DeBERTA slightly outperformed monoT5-3B. Finally, these good results were
achieved by re-ranking only 100 candidate documents compared to 1000 used by Bonifacio
et al. (2022). We believe that InPars-light is the first truly cost-effective prompt-based
unsupervised recipe to train and deploy neural ranking models that outperform BM25. Our
code and data are publicly available at https://github.com/searchivarius/inpars_light/.
1 Introduction
Training effective neural IR models often requires abundant in-domain training data, which can be quite
costly to obtain: For a human annotator, judging a single document-query pair takes at least one minute
on average (Han et al., 2020; Kwiatkowski et al., 2019) and a single query may need as many as 50 of such
judgements (Buckley et al., 2007).1 Models trained on out-of-domain data and/or fine-tuned using a small
number of in-domain queries often perform worse or marginally better than simple non-neural BM25 rankers
*Work done outside of the scope of employment.
1Robust04 and TREC-COVID collections used in our study have about 1K judgements per query.
1
Table 1: Average Gains over BM25 for different Models and Training Recipes

Model name and training recipe | Avg. gain over BM25 | # of "wins" over BM25 (≤ 7)

Unsupervised: InPars-based Training Data (three-shot prompting)
MiniLM-L6-30M (InPars-light) | 1.13 | 7
DeBERTA-v3-435M (InPars-light) | 1.30 | 7
monoT5-220M (InPars) (Bonifacio et al., 2022) | 1.07 | 3
monoT5-3B (InPars) (Bonifacio et al., 2022) | 1.32 | 7

Supervised transfer learning with optional unsupervised fine-tuning: transfer from MS MARCO with optional fine-tuning on consistency-checked InPars data
MiniLM-L6-30M (MS MARCO) | 1.21 | 5
MiniLM-L6-30M (MS MARCO ▶ consist. checked queries) | 1.24 | 7
DeBERTA-v3-435M (MS MARCO) | 1.42 | 7
DeBERTA-v3-435M (MS MARCO ▶ consist. checked queries) | 1.36 | 7
monoT5-220M (MS MARCO) (Bonifacio et al., 2022) | 1.46 | 7
monoT5-3B (MS MARCO) (Bonifacio et al., 2022) | 1.59 | 7
monoT5-3B (MS MARCO+InPars) (Bonifacio et al., 2022) | 1.59 | 7

Consist. checked queries denotes a set of generated queries filtered (via consistency checking) using the DeBERTA-v3-435M model trained on InPars-generated data.
(Thakur et al., 2021; Mokrii et al., 2021). Good transferability requires (1) large impractical models (Rosa
et al., 2022; Ni et al., 2021), and (2) datasets with large and diverse manually annotated query sets.
A recent trend to deal with these problems consists in gen-
erating synthetic in-domain training data via prompting
of Large Language Models (LLMs). This trend was spear-
headed by a recent InPars study (Bonifacio et al., 2022).
However, proposed solutions are not cost effective because
they require either querying the costly generative models
or training impractically large rankers. Although follow
up studies, in particular by Dai et al. (2022), claimed
improvements upon InPars, these improvements were not
demonstrated under the same experimental setting. More-
over, researchers used primarily proprietary LLMs whose
training procedure was not controlled by the scientific
community. Thus, outcomes could have been affected by
data leakage, i.e., training of models on publicly available
and popular IR collections whose copies could have ended
up in the LLMs training data. As such, there is an im-
portant question of whether we can obtain comparable or
better results using only open-source models trained by
the scientific community.
Figure 1: Average relative improvement over
BM25 for different model types/sizes and train-
ing recipes. Higher and to the left is better. We
compare InPars with InPars-Light for the unsu-
pervised training scenario, where training data is
generated by an LLM using a three-shot prompt.
This study is driven by two high-level inquiries: (1) Does InPars work? (2) Can it be made more accurate
and cost effective? To address these inquiries, we carry out a rigorous reproducibility study of InPars
(Bonifacio et al., 2022). In that, we use open-source and community-trained generative LLMs (Scao et al.,
2022; Wang & Komatsuzaki, 2021), train rankers using multiple seeds, and use statistical testing when
measuring improvements. Because efficiency is an important consideration, we also evaluate much smaller
ranking models (see Figure 1 and Table 1) compared to those used by Bonifacio et al. (2022).
More specifically, we ask the following research questions:
• RQ1: Can we reproduce key findings of InPars (Bonifacio et al., 2022) using open-source and
community-trained LLMs as well as smaller ranking models?
2
• RQ2: Are open-source models more or less useful for generation of synthetic IR training data
compared to the similar-sized GPT-3 Curie model (Brown et al., 2020)?
• RQ3: Does consistency checking proposed by Dai et al. (2022) improve the InPars recipe? Is it
applicable in the purely re-ranking setting as opposed to the retrieval setting (as it was done by Dai
et al. (2022))?
• RQ4: Can we match performance of large monoT5 rankers—used by Bonifacio et al. (2022)—with
much smaller bi-directional Transformer (BERT) models (Devlin et al., 2018; Vaswani et al., 2017)?
• RQ5: The smaller monoT5 ranker with 220M parameters used by Bonifacio et al. (2022) does not
outperform BM25 for three out of five query sets. Thus, just matching monoT5-220M performance is
not enough. Can we instead substantially outperform BM25 using a small and fast ranker such as a
MiniLM (Wang et al., 2020) BERT ranker with only 30 million parameters?
Our contributions and findings are as follows:
• We reproduced the key finding by Bonifacio et al. (2022): Generation of synthetic in-domain data
using an InPars-like recipe permits training strong in-domain rankers using only a three-shot prompt
and in-domain documents, which answers RQ1. However, without additional effort such as all-domain
pre-training and consistency checking, only a sufficiently large ranking model could outperform BM25
on all datasets.
• We found that open-source LLMs BLOOM (Scao et al., 2022) and GPT-J (Wang & Komatsuzaki,
2021), which are trained using only next-token prediction (without further fine-tuning), could be
prompted to generate effective synthetic queries. Moreover, using a community-trained BLOOM
model produced comparable or more accurate2 ranking models compared to using GPT-3 Curie
model (Brown et al., 2020), which addresses RQ2.
• We confirmed that consistency checking proposed by Dai et al. (2022) does work for re-rankers and
always improves outcomes in the unsupervised setting, which answers RQ3.
• We also discovered that in the unsupervised setting, where synthetic queries were generated using
a three-shot prompt, we could match or outperform monoT5 rankers using much smaller BERT
ranking models (see Figure 1), which answers RQ4. More specifically:
– We can replace an impractical three-billion parameter monoT5-3B (Nogueira et al., 2020) model
with a 7x smaller BERT model while obtaining comparable results. The average gain over BM25
(see Table 1) was 1.32 for monoT5-3B vs. 1.3 for DeBERTA-v3-435M (He et al., 2021) (RQ1).
– Unlike Bonifacio et al. (2022) whose monoT5-220M model with 220 million parameters failed to
outperform BM25 on three out of five datasets (unless pre-trained on MS MARCO), we show
that a much smaller MiniLM-30M model with only 30 million parameters (Wang et al., 2020)
can outperform BM25 by 7%-30% in key metrics (nDCG@K and MRR) when trained using only
synthetic training data (RQ1 and RQ5).
– Outperforming BM25 with a small ranking model such as MiniLM-30M was possible by using:
(a) a better model to generate synthetic training data (BLOOM instead of GPT-3 Curie), (b)
consistency checking (Dai et al., 2022) (RQ3), and (c) all-domain pre-training, each of which
helped improve outcomes.
• Obtaining good results in the unsupervised setting described above required re-ranking only 100
candidate documents compared to 1000 used by Bonifacio et al. (2022). Overall, compared to InPars,
our training recipe—which we call InPars-light—is substantially more cost effective in terms of both,
generation of synthetic training data and training/application of ranking models (see § A.2 for a
detailed discussion).
2The only exception was BEIR NQ, where BLOOM-based ranker was 1.4% worse, see Table 4.
3
• However, when pretraining on MS MARCO was used, the monoT5-220M model was still substantially
more accurate than a 7x smaller MiniLM-30M ranker. Moreover, this gap was not reduced by
subsequent unsupervised fine-tuning of MiniLM-30M using synthetically generated data. The average
gain over BM25 (see Table 1) was 1.46 for monoT5-200M pre-trained on MS MARCO vs. 1.24 for
MiniLM-30M pre-trained on MS MARCO and fine-tuned using synthetic training data.
Our code and data are publicly available.3
2 Related Work
Prompting methods have gained quite a bit of popularity in NLP (see, e.g., Liu et al. (2021) for a recent
survey). In particular, prior to the InPars study by Bonifacio et al. (2022), Schick & Schütze (2021) proposed
to generate synthetic training sets using in-domain data and zero-shot prompting of LLMs.
However, until recently zero-shot and few-shot prompting of LLMs was not applied to ad hoc retrieval:
We know only a few papers directly related to our work. Sachan et al. (2022) were probably the first to
demonstrate effectiveness of LLMs in the document ranking task. In their approach—named UPR—they
concatenate a document, a special prompt such as “please write a question for this document” and the query
itself. Then, UPR uses a pre-trained LLM model to compute the likelihood of generating the query given the
passage text. Unlike InPars, they do not use LLM to generate synthetic training data.
Sachan et al. (2022) evaluated their method using only QA (but not IR) datasets and their main results are
for an impractically large three-billion parameter instruction-finetuned model, which was used essentially
as a re-ranker (in a zero-shot scenario). The smallest model used by Sachan et al. (2022) had 250 million
parameters (compared to our 30-million MiniLM model). It was evaluated only on the Natural Questions
(NQ) collection (Kwiatkowski et al., 2019) where it outperformed BM25 by about 10%. Although not
directly comparable due to using different versions of NQ and model sizes, our 2× larger DeBERTA-v3-435M
model outperformed BM25 by 40% while our much smaller MiniLM-30M model with 30 million parameters
outperformed BM25 by 15%.
Bonifacio et al. (2022) proposed an InPars method, which relied on few-shot prompting. The study had a
convincing evaluation on five datasets where only one dataset, namely NQ (Kwiatkowski et al., 2019), was
a typical QA collection. Unlike Sachan et al. (2022), Bonifacio et al. (2022) used few-shot prompting to
generate synthetic training data for a smaller ranker. For each collection Bonifacio et al. (2022) generated
100K synthetic queries and retained only 10K with the highest average log-probabilities. This can be seen as
distillation of an LLM into the ranker.
However, Bonifacio et al. (2022) obtained good results only for a huge monoT5-3B parameter model. They
also employed a proprietary GPT-3 model, which can be quite costly to use. In a follow-up study, which is
concurrent with this work, Jeronymo et al. (2023) introduced a modification of InPars—dubbed InPars v2—
where GPT-3 Curie (Brown et al., 2020) was replaced with an open-source model GPT-J model (Wang &
Komatsuzaki, 2021). However, this model swap was “entangled” with at least two other modifications in the
training recipe:
• A new query filtering condition that relied on an MS MARCO trained monoT5-3B model.
• The vanilla prompt (which was used in InPars and our experiments) was replaced with the “Guided
by Bad Question prompt” (introduced by Bonifacio et al. 2022).
Thus, it is not possible to fairly assess the impact of replacing GPT-3 Curie with GPT-J (Wang & Komatsuzaki,
2021).
An important disadvantage of the InPars v2 recipe is that it is still not cost-effective as authors use a
huge monoT5-3B model. The filtering check uses an expensive monoT5-3B model trained on MS MARCO
3https://github.com/searchivarius/inpars_light/
4
corpus, which is also not always possible in a commercial setting due to licensing issues (MS MARCO is a
research-only collection).
Moreover, the monoT5-3B model trained on MS MARCO—albeit being impractical—has excellent zero-shot
transferability: Fine-tuning monoT5-3B model trained on MS MARCO with InPars v2 only improves the
average BEIR score only by 2.4%: from 0.538 to 0.551. This further complicates assessment of effectiveness
of GPT-J.
Dai et al. (2022) used an InPars-like method called Promptagator and created synthetic training data using
a huge proprietary FLAN-137B model with 137 billion parameters. Although they used modestly sized
retrieval and ranking models with 110 million parameters, Dai et al. (2022) generated as many as million
synthetic training queries for each dataset. In contrast, both InPars and InPars-light used only 100K queries
per dataset, which was much less expensive (see a discussion in § A.2).
Importantly, Dai et al. (2022) proposed to use consistency checking (Alberti et al., 2019) to filter-out
potentially spurious queries, which was not previously done in the IR context. They do not compare with
InPars under the same conditions and it was not known if consistency checking would improve the original
InPars recipe.
In addition to prompt-based generation of training data, there are multiple proposals for self-supervised adaptation of out-of-domain models using generative pseudo-labeling (Li & Gaussier, 2022; Wang et al., 2022; Reddy et al., 2021). To this end, questions or queries are generated using a pretrained seq2seq model (though an LLM can be used as well) and negative examples are mined using either BM25 or an out-of-domain retriever or ranker. Unsupervised domain adaptation is complementary to the approaches considered in this work.
The disadvantage of such approaches is that they may need a reasonably effective out-of-domain ranking model. However, such models can be hard to obtain due to licensing issues and poor transferability from other domains. For example, MS MARCO models have reasonable transferability (Thakur et al., 2021; Mokrii et al., 2021), but MS MARCO cannot be used to train models in a commercial context (without extra licensing from Microsoft). In contrast, the Natural Questions (NQ) collection (Kwiatkowski et al., 2019) has a permissive license4, but models trained on NQ can fail to transfer to datasets that are not based on Wikipedia (Mokrii et al., 2021).
Another potentially complementary approach is LLM-assisted query expansion. In particular, Gao et al. (2022) prompted a 175B InstructGPT model to generate a hypothetical answer to a question. This answer was then encoded as a vector and, together with the encoding of the original question, compared with encoded documents. In a purely unsupervised setting—using the Contriever bi-encoder trained without supervision (Izacard et al., 2021)—they were able to outperform BM25 by as much as 20%.
Despite strong results, a serious disadvantage of this approach is its dependence on an external proprietary model that is costly and inefficient. Although we could not find any reliable benchmarks, a folklore opinion is that GPT generation latency is a few seconds. To verify this, we used the OpenAI playground5 to generate a few hypothetical answers using the prompt of Gao et al. (2022) and a sample of TREC DL 2020 queries. With a maximum generation length of 256 tokens (a default setting), the latency exceeded four seconds.
Quite interestingly, Gao et al. (2022) tried to replace the 175B GPT-3 model with smaller open-source models on TREC DL 2019 and TREC DL 2020 (see Tables 4 and 1 in their study), but failed to obtain consistent and substantial gains over BM25 with models having fewer than 50B parameters.
4https://github.com/google-research-datasets/natural-questions/blob/master/LICENSE
5https://beta.openai.com/playground
Table 2: The format of the vanilla three-shot InPars prompt (Bonifacio et al., 2022)
Example 1:
Document: <text of the first example document>
Relevant Query: <text of the first relevant query>
Example 2:
Document: <text of the second example document>
Relevant Query: <text of the second relevant query>
Example 3:
Document: <text of the third example document>
Relevant Query: <text of the third relevant query>
Example 4:
Document: <real in-domain document text placeholder>
Relevant Query:
Notes: To generate a synthetic query, we first insert the text of a chosen real in-domain document after the prefix “Document:” in Example 4. Then, we “ask” the LLM to generate a completion.
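For concreteness, the sketch below shows how such a prompt can be assembled in code: three solved document–query examples are concatenated, the real in-domain document is inserted as Example 4, and the final “Relevant Query:” line is left open for the LLM to complete. The few-shot examples shown here are illustrative placeholders, not the exact examples used in the original InPars prompt.

```python
# Minimal sketch of assembling the vanilla three-shot InPars prompt.
# The few-shot examples below are placeholders, not the exact ones from InPars.
FEW_SHOT_EXAMPLES = [
    ("Lyme disease is a bacterial infection spread by ticks.",
     "what causes lyme disease"),
    ("The Eiffel Tower was completed in 1889 for the World's Fair.",
     "when was the eiffel tower built"),
    ("Photosynthesis converts sunlight into chemical energy in plants.",
     "how does photosynthesis work"),
]

def build_vanilla_prompt(target_document: str) -> str:
    """Return the three-shot prompt ending with an open 'Relevant Query:' line."""
    parts = []
    for i, (doc, query) in enumerate(FEW_SHOT_EXAMPLES, start=1):
        parts.append(f"Example {i}:\nDocument: {doc}\nRelevant Query: {query}\n")
    # Example 4 holds the real in-domain document; the LLM completes the query.
    parts.append(f"Example 4:\nDocument: {target_document}\nRelevant Query:")
    return "\n".join(parts)
```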
3 Methods
3.1 Information Retrieval Pipeline
We use a variant of a classic filter-and-refine multi-stage retrieval pipeline (Matveeva et al., 2006; Prager, 2006;
Wang et al., 2011), where top-k candidate documents retrieved by a fast BM25 retriever/scorer (Robertson,
2004) are further re-ranked using a slower neural re-ranker. For collections where documents have titles (NQ
BEIR and TREC COVID BEIR), the BM25 retriever itself has two stages: in the first stage, we retrieve 1K documents using a Lucene index built over the title concatenated with the main text. In the second stage, these candidates are re-ranked using equally weighted BM25 scores computed separately for the title and the main text.
Our neural rankers are cross-encoder models (Nogueira & Cho, 2019; Lin et al., 2021b), which operate on
queries concatenated with documents. Concatenated texts are passed through a backbone bi-directional
encoder-only Transformer model (Devlin et al., 2018) equipped with an additional ranking head (a fully-
connected layer), which produces a relevance score (using the last-layer contextualized embedding of a
CLS-token (Nogueira & Cho, 2019)). In contrast, the authors of InPars (Bonifacio et al., 2022) use a T5 (Raffel et al., 2020) cross-encoding re-ranker (Nogueira et al., 2020), which is a full Transformer model (Vaswani et al., 2017) that uses both the encoder and the decoder. The T5 ranking Transformer is trained to generate the labels “true” and “false”, which represent relevant and non-relevant document-query pairs, respectively.
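To make the cross-encoder architecture concrete, here is a minimal sketch of a CLS-based re-ranker: the query and document are concatenated, encoded by a bidirectional Transformer backbone, and the last-layer CLS embedding is mapped to a scalar relevance score by a fully-connected head. The backbone name and the usage snippet are illustrative; our actual implementation relies on FlexNeuART.

```python
import torch
from transformers import AutoModel, AutoTokenizer

class CrossEncoderRanker(torch.nn.Module):
    """Sketch of a CLS-based cross-encoder re-ranker (illustrative, not our exact code)."""

    def __init__(self, backbone_name: str = "microsoft/deberta-v3-large"):
        super().__init__()
        self.backbone = AutoModel.from_pretrained(backbone_name)
        # Fully-connected ranking head on top of the last-layer CLS embedding.
        self.head = torch.nn.Linear(self.backbone.config.hidden_size, 1)

    def forward(self, input_ids, attention_mask):
        out = self.backbone(input_ids=input_ids, attention_mask=attention_mask)
        cls_emb = out.last_hidden_state[:, 0]   # contextualized CLS embedding
        return self.head(cls_emb).squeeze(-1)   # one relevance score per query-document pair

tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-v3-large")
model = CrossEncoderRanker("microsoft/deberta-v3-large")
batch = tokenizer(["lyme disease symptoms"],                                  # query
                  ["Lyme disease is a bacterial infection spread by ticks."],  # document
                  truncation=True, padding=True, return_tensors="pt")
score = model(batch["input_ids"], batch["attention_mask"])
```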
Backbone Transformer models can differ in the number of parameters and pre-training approaches (including
pre-training datasets). In this paper we evaluated the following models, all of which were pre-trained in the
self-supervised fashion without using supervised IR data:
• A six-layer MiniLM-L6 model (Wang et al., 2020). It is a tiny (by modern standards) 30-million parameter model, which was distilled (Li et al., 2014; Romero et al., 2015; Hinton et al., 2015) from RoBERTa (Liu et al., 2019). We downloaded the L6xH384 MiniLMv2 model from the Microsoft website.6
• A 24-layer (large) ERNIE v2 model from the HuggingFace hub (Sun et al., 2020)7. It has 335 million
parameters.
• A 24-layer (large) DeBERTA v3 model with 435 million parameters (He et al., 2021) from the
HuggingFace hub 8.
6https://github.com/microsoft/unilm/tree/master/minilm
7https://huggingface.co/nghuyong/ernie-2.0-large-en
8https://huggingface.co/microsoft/deberta-v3-large
We chose ERNIE v2 and DeBERTA v3 due to their strong performance on the MS MARCO dataset where
they outperformed BERT large (Devlin et al., 2018) and several other models that we tested in the past.
Both models performed comparably well in the preliminary experiments, but we chose DeBERTA for main
experiments because it was more effective on MS MARCO and TREC-DL 2020. In the post hoc ablation
study, DeBERTA outperformed ERNIE v2 on four collections out of five (see Table 4).
However, both of these models are quite large and we aspired to show that an InPars-like training recipe
can be used with smaller models too. In contrast, Bonifacio et al. (2022) were able to show that only
a big monoT5-3B model with 3B parameters could outperform BM25 on all five datasets: the smaller monoT5-220M ranker with 220 million parameters, which is still quite large, outperformed BM25 only on MS MARCO and TREC-DL 2020.
3.2 Generation of Synthetic Training Data
We generate synthetic training data using a well-known few-shot prompting approach introduced by Brown
et al. (2020). In the IR domain, it was first used by Bonifacio et al. (2022) who called it InPars. The
key idea of InPars is to “prompt” a large language model with a few-shot textual demonstration of known
relevant query-document pairs. To produce a novel query-document pair, Bonifacio et al. (2022) appended
an in-domain document to the prompt and “asked” the model to complete the text. Bonifacio et al. (2022)
evaluated two types of prompts, of which we use only the so-called vanilla prompt (see Table 2).
As in the InPars study (Bonifacio et al., 2022), we generated 100K queries for each dataset, with the exception of
MS MARCO and TREC DL.9 Repeating this procedure for many in-domain documents produces a large
training set, but it can be quite imperfect. In particular, we carried out spot-checking and found quite a few
queries that were spurious or only tangentially relevant to the passage from which they were generated.
Many spurious queries can be filtered out automatically. To this end, Bonifacio et al. (2022) used only 10%
of the queries with the highest log-probabilities (averaged over query tokens). In the Promptagator recipe,
Dai et al. (2022) used a different filtering procedure, which was a variant of consistency checking (Alberti
et al., 2019). Dai et al. (2022) first trained a retriever model using all the generated queries. Using this
retriever, they produced a ranked set of documents for each query. The query passed the consistency check
if the first retrieved document was the document from which the query was generated. A straightforward modification of this approach is to check whether the query-generating document is present in the top-k (k > 1) candidate set produced by the retriever. Dai et al. (2022) used consistency checking with bi-encoding retrieval models, but
it is applicable to cross-encoding re-ranking models as well.
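The log-probability filter used by the original InPars recipe can be sketched as follows: each generated query is scored by the mean log-probability its tokens receive from the causal LM (conditioned on the prompt), and only the top 10% of queries are retained. This is a simplified re-scoring sketch, not necessarily the exact bookkeeping of the original implementation.

```python
import torch

def avg_query_logprob(model, tokenizer, prompt: str, query: str) -> float:
    """Mean log-probability of the query tokens under the causal LM, given the prompt.

    Assumes that tokenizing `prompt` yields a prefix of the tokenization of
    `prompt + query` (a simplification that holds for typical tokenizers here).
    """
    prompt_len = tokenizer(prompt, return_tensors="pt").input_ids.shape[1]
    full_ids = tokenizer(prompt + query, return_tensors="pt").input_ids
    with torch.no_grad():
        log_probs = torch.log_softmax(model(full_ids).logits, dim=-1)
    total, count = 0.0, 0
    for t in range(prompt_len, full_ids.shape[1]):
        # The token at position t is predicted from the logits at position t - 1.
        total += log_probs[0, t - 1, full_ids[0, t]].item()
        count += 1
    return total / max(count, 1)

def keep_top_fraction(scored_queries, fraction: float = 0.1):
    """Keep the fraction of (query, score) pairs with the highest average log-probability."""
    ranked = sorted(scored_queries, key=lambda pair: pair[1], reverse=True)
    return ranked[: max(1, int(len(ranked) * fraction))]
```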
3.3 InPars-light Training Recipe
InPars-light is not a new method. It is a training recipe, which is a modification of the original InPars. Yet, it is substantially more cost-effective for generation of synthetic queries, training of the models, and inference.
InPars-light has the following main “ingredients”:
• Using open-source models instead of GPT-3;
• Using smaller ranking BERT models instead of monoT5 rankers;
• Fine-tuning models on consistency-checked training data;
• Optional pre-training of models using all generated queries from all collections;
• Re-ranking only 100 candidate documents instead of 1000: However, importantly, the training
procedure still generates negatives from a top-1000 set produced by a BM25 ranker.
To obtain consistency-checked queries for a given dataset, a model trained on the InPars-generated queries for this dataset was used to re-rank the retrieval output for all generated queries. Then, all the queries where the query-generating document did not appear among the top-k scored documents were discarded. In our study, we experimented with k from one to three (but only on MS MARCO).10 Although k = 1 worked pretty well, using k = 3 led to a small boost in accuracy. Consistency checking was carried out using DeBERTA-v3-435M (He et al., 2021). We want to emphasize that consistency-checked training data was used in addition to the original InPars-generated data (not instead of it), namely, to fine-tune a model initially trained on the InPars-generated data.
9Because both datasets use the same set of passages, they share the same set of 100K generated queries.
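A sketch of this consistency check is shown below: for each synthetic query, the candidate documents are re-scored by the trained cross-encoder, and the query is kept only if its source document appears among the top-k results (k = 3 in our main setup). The `score_pairs` callable is a stand-in for the trained DeBERTA-v3-435M ranker.

```python
def consistency_check(queries, bm25_candidates, score_pairs, k=3):
    """Keep queries whose source document is ranked in the top-k by the ranker.

    queries:          list of (query_text, source_doc_id)
    bm25_candidates:  dict query_text -> list of (doc_id, doc_text) from BM25
    score_pairs:      callable(query_text, list_of_doc_texts) -> list of floats
                      (a stand-in for the trained cross-encoder)
    """
    kept = []
    for query_text, source_doc_id in queries:
        candidates = bm25_candidates[query_text]
        scores = score_pairs(query_text, [text for _, text in candidates])
        ranked = sorted(zip(candidates, scores), key=lambda x: x[1], reverse=True)
        top_k_ids = {doc_id for (doc_id, _), _ in ranked[:k]}
        if source_doc_id in top_k_ids:
            kept.append((query_text, source_doc_id))
    return kept
```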
Also, quite interestingly, the set of consistency-checked queries had only a small (about 20-30%) overlap with the set of queries that were selected using the original InPars recipe (based on average log-probabilities). Thus, consistency checking increased the amount of available training data. It might seem appealing to achieve the same objective by simply picking a larger number of queries (with the highest average log-probabilities). However, preliminary experiments on MS MARCO showed that naively increasing the number of queries degraded effectiveness (which is consistent with findings by Bonifacio et al. (2022)).
Although the original InPars recipe with open-source models and consistency checking allowed us to train strong DeBERTA-v3-435M models, the performance of MiniLM models was lackluster (roughly at the BM25 level for all collections).
Because bigger models performed quite well, it may be possible to distill (Li et al., 2014; Romero et al., 2015;
Hinton et al., 2015) their parameters into a much smaller MiniLM-30M model. Distillation is known to be
successful in the IR domain (Hofstätter et al., 2020; Lin et al., 2020), but it failed in our case. Thus we used
the following workaround instead:
• First we carried out an all-domain pre-training without any filtering (i.e., using all queries from all
collections);
• Then, we fine-tuned all-domain pre-trained models on the consistency-checked in-domain data for
each collection separately.
3.4 Miscellaneous
We carried out experiments using FlexNeuART (Boytsov & Nyberg, 2020), which provided support for basic indexing, retrieval, and neural ranking. Both generative and ranking models were implemented using PyTorch and HuggingFace (Wolf et al., 2020). Ranking models were trained using the InfoNCE loss (Le-Khac et al., 2020). In a single training epoch, we randomly selected one positive and three negative examples per query (negatives were sampled from the 1000 documents with the highest BM25 scores). Note, however, that during inference we re-ranked only 100 documents. In preliminary experiments on MS MARCO, we initially sampled negatives from a top-100 set as well. However, the results were surprisingly poor and we switched to sampling from a top-1000 set (we did not try any other sampling options though). The number of negatives was not tuned: we used as many as we could while ensuring we did not run out of GPU memory during training on any collection.
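With one positive and three sampled negatives per query, the InfoNCE loss reduces to a cross-entropy over the four relevance scores with the positive at a fixed index. The sketch below illustrates one such training step, assuming a cross-encoder `model` and `tokenizer` as in the earlier sketch; it is a simplification of the actual training code.

```python
import random
import torch
import torch.nn.functional as F

def infonce_step(model, tokenizer, query, positive_doc, bm25_top1000_docs, num_neg=3):
    """One InfoNCE step with 1 positive and `num_neg` negatives sampled from BM25 top-1000."""
    negatives = random.sample(bm25_top1000_docs, num_neg)
    docs = [positive_doc] + negatives               # the positive is placed at index 0
    batch = tokenizer([query] * len(docs), docs,
                      truncation=True, padding=True, return_tensors="pt")
    scores = model(batch["input_ids"], batch["attention_mask"])  # shape: (1 + num_neg,)
    target = torch.tensor([0])                      # index of the positive document
    # InfoNCE here is a cross-entropy over the scores of the positive and the negatives.
    return F.cross_entropy(scores.unsqueeze(0), target)
```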
We used the AdamW optimizer (Loshchilov & Hutter, 2017) with a small weight decay (10^-7), a warm-up schedule, and an effective batch size of 16.11 We used different base learning rates for the fully-connected prediction head (2·10^-4) and for the main Transformer layers (2·10^-5). The mini-batch size was equal to one and the larger batch size was simulated using 16-step gradient accumulation. We did not tune optimization parameters and chose the values based on our prior experience of training neural rankers for MS MARCO.
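The optimization setup described above can be sketched as follows: two parameter groups with different base learning rates, a small weight decay, a linear warm-up over 20% of the steps followed by a linear decay, and 16-step gradient accumulation. The attribute names `head` and `backbone` follow the cross-encoder sketch above and are not the exact names used in our code.

```python
import torch
from transformers import get_linear_schedule_with_warmup

ACCUM_STEPS = 16   # simulates an effective batch size of 16 with per-step batches of one query

def make_optimizer_and_schedule(model, total_steps):
    param_groups = [
        {"params": model.head.parameters(), "lr": 2e-4},       # prediction head
        {"params": model.backbone.parameters(), "lr": 2e-5},   # main Transformer layers
    ]
    optimizer = torch.optim.AdamW(param_groups, weight_decay=1e-7)
    scheduler = get_linear_schedule_with_warmup(
        optimizer,
        num_warmup_steps=int(0.2 * total_steps),   # linear warm-up for 20% of the steps
        num_training_steps=total_steps,            # then linear decay back to zero
    )
    return optimizer, scheduler

def run_training(model, batches, optimizer, scheduler, loss_fn):
    optimizer.zero_grad()
    for step, batch in enumerate(batches, start=1):
        loss = loss_fn(model, batch) / ACCUM_STEPS
        loss.backward()
        if step % ACCUM_STEPS == 0:                # update once per 16 accumulated steps
            optimizer.step()
            scheduler.step()
            optimizer.zero_grad()
```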
We trained each ranking model using three seeds and reported the average results (except for the best-seed
analysis in Table 5). Statistical significance is computed between “seed-average” runs where query-specific
metric values are first averaged over all seeds and then a standard paired difference test is carried out using
these seed-average values (see § A.1 for details).
10We did not want to optimize this parameter for all collections and, thus, to commit a sin of tuning hyper-parameters on the
complete test set.
11The learning rate grows linearly from zero for 20% of the steps until it reaches the base learning rate (Mosbach et al., 2020;
Smith, 2017) and then goes back to zero (also linearly).
Except for zero-shot experiments, we trained a separate model for each dataset, which is consistent with Bonifacio et al. (2022). Moreover, we computed exactly the same accuracy metrics as Bonifacio et al. (2022). For statistical significance testing, we used a paired two-sided t-test. For query sets with a large number of queries (the MS MARCO development set and BEIR Natural Questions), we used a lower threshold of 0.01. For small query sets (Robust04, TREC DL, and TREC-COVID), the statistical significance threshold was set to 0.05.
We implemented our query generation module using the AutoModelForCausalLM interface from HuggingFace. We used the three-shot vanilla prompt template created by Bonifacio et al. (2022) (also shown in Table 2). The output was generated via greedy decoding. The maximum number of new tokens generated for each example was set to 32. Note that query generation was a time-consuming process even though we used open-source models. Thus, we did it only once per dataset, i.e., without using multiple seeds.
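A minimal sketch of this generation step with HuggingFace Transformers is given below (greedy decoding, at most 32 new tokens). The BLOOM checkpoint name is one possible choice, and `build_vanilla_prompt` refers to the prompt-assembly helper sketched after Table 2; the actual generation code differs in batching and bookkeeping details.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "bigscience/bloom-7b1"   # or an open-source GPT-J checkpoint
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME, device_map="auto")

def generate_query(document_text: str) -> str:
    """Greedy decoding of at most 32 new tokens given the three-shot vanilla prompt."""
    prompt = build_vanilla_prompt(document_text)   # see the prompt-assembly sketch (Table 2)
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, do_sample=False, max_new_tokens=32)
    new_tokens = output[0][inputs["input_ids"].shape[1]:]
    completion = tokenizer.decode(new_tokens, skip_special_tokens=True)
    # Keep only the first line: the model may otherwise continue with "Example 5: ...".
    return completion.strip().split("\n")[0]
```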
4 Datasets
Because we aimed to reproduce the main results of InPars (Bonifacio et al., 2022), we used exactly the same set of queries and datasets, which are described below. Except for MS MARCO (which was processed directly using FlexNeuART (Boytsov & Nyberg, 2020) scripts), datasets were ingested with the help of the IR datasets package (MacAvaney et al., 2021).
Some of the collections below have multiple text fields, which were used differently by BM25 and the neural ranker. All collections except Robust04 have exactly one query field. Robust04 queries have the following
parts: title, description, and narrative. For the purpose of BM25 retrieval and ranking, we used only the title
field, but the neural ranker used only the description field (which is consistent with Bonifacio et al. 2022).
The narrative field was not used.
Two collections have documents with both a title and a main body text field (NQ BEIR and TREC COVID BEIR). The neural rankers operated on the concatenation of these fields. If this concatenation was longer than 477 BERT tokens, the text was truncated on the right (queries longer than 32 BERT tokens were truncated as well). For BM25 scoring, we also indexed the concatenated fields in Lucene. However, after retrieving 1000 candidates, we re-ranked them using the sum of BM25 scores computed separately for the title and the main body text fields (using FlexNeuART (Boytsov & Nyberg, 2020)).
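The per-field BM25 re-scoring of the retrieved candidates can be illustrated with the sketch below. For brevity it uses the `rank_bm25` package and computes collection statistics over the candidate set only, whereas the actual experiments rely on Lucene and FlexNeuART with full-collection statistics.

```python
from rank_bm25 import BM25Okapi

def tokenize(text: str):
    return text.lower().split()

def rescore_by_field_sum(query: str, candidates):
    """Re-rank BM25 candidates by the equally weighted sum of per-field BM25 scores.

    candidates: list of (doc_id, title, body) tuples retrieved from the first-stage index.
    Note: statistics here come from the candidate set only, unlike a full-collection index.
    """
    title_index = BM25Okapi([tokenize(title) for _, title, _ in candidates])
    body_index = BM25Okapi([tokenize(body) for _, _, body in candidates])
    q = tokenize(query)
    scores = title_index.get_scores(q) + body_index.get_scores(q)
    order = sorted(range(len(candidates)), key=lambda i: scores[i], reverse=True)
    return [(candidates[i][0], float(scores[i])) for i in order]
```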
Synthetically Generated Training Queries. For each of the datasets, Bonifacio et al. (2022) provided both the GPT-3-generated queries (using the GPT-3 Curie model) and the documents that were used to generate the queries. This permits a fair comparison of the quality of training data generated using GPT-3 Curie with the quality of synthetic training data generated using the open-source models GPT-J (Wang & Komatsuzaki, 2021) and BLOOM (Scao et al., 2022). According to the estimates of Bonifacio et al. (2022), the Curie model has 6B parameters, which is close to the estimate made by Gao (2021) from EleutherAI. Thus, we used GPT-J (Wang & Komatsuzaki, 2021) and BLOOM (Scao et al., 2022) models with 6 and 7 billion parameters, respectively. Although other open-source models could potentially be used, generation of synthetic queries is quite expensive and exploring other open-source options is left for future work.
MS MARCO sparse and TREC DL 2020. MS MARCO is a collection of 8.8M passages extracted from approximately 3.6M Web documents, which was derived from the MS MARCO reading comprehension dataset (Bajaj et al., 2016; Craswell et al., 2020). It “ships” with more than half a million question-like queries sampled from the Bing search engine log with subsequent filtering. The queries are not necessarily proper English questions, e.g., “lyme disease symptoms mood”, but they are answerable by a short passage retrieved from the set of about 3.6M Web documents (Bajaj et al., 2016). Relevance judgements are quite sparse (about one relevant passage per query) and a positive label indicates that the passage can answer the respective question.
The MS MARCO collection has several development and test query sets, of which we use only the development set with approximately 6.9K sparsely-judged queries and the TREC DL 2020 (Craswell et al., 2020) collection of 54 densely judged queries. Henceforth, for simplicity, when we discuss the MS MARCO development set we use the shortened name MS MARCO, which is also consistent with Bonifacio et al. (2022).
Note that the MS MARCO collection has a large training set, but we do not use it in the fully unsupervised scenario. It is used only for supervised transfer learning (see § 5).
Robust04 (Voorhees, 2004) is a small (but commonly used) collection that has about 500K news wire
documents. It comes with a small but densely judged set of 250 queries, which have about 1.2K judgements
on average.
Natural Questions (NQ) BEIR (Kwiatkowski et al., 2019) is an open domain Wikipedia-based Question
Answering (QA) dataset. Similar to MS MARCO, it has real user queries (submitted to Google). We use the BEIR variant of NQ (Thakur et al., 2021), which has about 2.6M short passages from Wikipedia and 3.4K
sparsely-judged queries (about 1.2 relevant documents per query).
TREC COVID BEIR (Roberts et al., 2020) is a small corpus of 171K scientific articles on the topic of COVID-19. TREC COVID BEIR comes with 50 densely-judged queries (1.3K judged documents per query on average). It was created for a NIST challenge whose objective was to develop information retrieval methods tailored to the COVID-19 domain (with the hope of being a useful tool during the COVID-19 pandemic). We use the BEIR version of this dataset (Thakur et al., 2021).
5 Results
The summary of experimental results is provided in Figure 1 and Table 1. Our detailed experimental results
are presented in Table 3. Note that in addition to our own measurements, we copy key results from prior
work (Nogueira et al., 2020; Bonifacio et al., 2022), which include results for BM25 (by Bonifacio et al.
(2022)), re-ranking using OpenAI API, and monoT5 rankers. In our experiments, we statistically test several
hypotheses, which are explained separately at the bottom of each table.
BM25 baselines. To assess the statistical significance of the difference between BM25 and a neural ranker, we had to use our own BM25 runs. These runs were produced using FlexNeuART (Boytsov & Nyberg, 2020). Comparing the effectiveness of FlexNeuART BM25 with the effectiveness of Pyserini (Lin et al., 2021a) BM25—used in the InPars study (Bonifacio et al., 2022)—we can see that on all datasets except TREC DL 2020 we closely match (within 1.5%) the Pyserini numbers. On TREC DL 2020, our BM25 is 6% more effective in nDCG@10 and 25% more effective in MAP.
Unsupervised-only training (using three-shot prompts). We consider the scenario where synthetic
training data is generated using a three-shot prompt to be unsupervised. Although the prompt is based on
human supervision data (three random samples from the MS MARCO training corpus), these samples are
not directly used for training, but only to generate synthetic data.
In this scenario, we reproduce the key finding by Bonifacio et al. (2022): Generation of synthetic in-domain
data using an InPars-like recipe permits training strong in-domain rankers using only a three-shot prompt and
in-domain documents. However, if we use the original InPars recipe, only a large ranking model (DeBERTA-
v3-435M) consistently outperforms BM25. This answers RQ1. With DeBERTA-v3-435M we obtain accuracy
similar to that of monoT5-3B on four collections out of five, even though monoT5-3B has 7x more parameters.
The average gain over BM25 is 1.3 (for DeBERTA-v3-435M) vs 1.32 for monoT5-3B (see Table 1).
Accuracy of our smallest model MiniLM-L6-30M with all-domain pretraining and finetuning on consistency-
checked data (referred to as InPars all ▶ consist. check in Table 3) roughly matches that of the 7x larger
monoT5-220M on MS MARCO and TREC DL 2020. Yet, it is substantially better than monoT5-220M on
the remaining datasets, where monoT5-220M effectiveness is largely at the BM25 level: the average gain over BM25 (see Table 1) is 1.07 for monoT5-220M vs. 1.13 for MiniLM-30M. MiniLM-L6-30M outperforms BM25
on all collections and all metrics. In all but one case these differences are also statistically significant. In
terms of nDCG and/or MRR, MiniLM-30M is 7%-30% more accurate than BM25.
In summary, we can replace monoT5 rankers with much smaller BERT models while obtaining comparable or
better average gains over BM25. This answers RQ4.
Impact of consistency checking and all-domain pre-training. We found that, on its own, the InPars
recipe did not produce a strong MiniLM-L6-30M ranking model. This is in line with the findings of Bonifacio
Table 3: Model Accuracy for Various Scenarios (averaged over three seeds)
MS
MARCO
TREC DL 2020
Robust04
NQ
TREC
COVID
MRR
MAP
nDCG@10 MAP nDCG@20 nDCG@10 nDCG@10
BM25 (Bonifacio et al., 2022)
BM25 (this study)
0.1874
0.1867
0.2876
0.3612
0.4876
0.5159
0.2531
0.2555
0.4240
0.4285
0.3290
0.3248
0.6880
0.6767
OpenAI Ranking API: re-ranking 100 Documents (Bonifacio et al., 2022)
Curie (6B) (Bonifacio et al., 2022)
Davinci (175B) (Bonifacio et al., 2022)
$
$
0.3296
0.3163
0.5422
0.5366
0.2785
0.2790
0.5053
0.5103
0.4171
$
0.7251
0.6918
Unsupervised: InPars-based Training Data (three-shot prompting)
monoT5-220M (InPars) (Bonifacio et al., 2022)
monoT5-3B (InPars) (Bonifacio et al., 2022)
0.2585
0.2967
0.3599
0.4334
0.5764
0.6612
0.2490
0.3180
0.4268
0.5181
0.3354
0.5133
MiniLM-L6-30M (InPars)
MiniLM-L6-30M (InPars ▶ consist. check)
MiniLM-L6-30M (InPars all ▶ consist. check)
DeBERTA-v3-435M (InPars)
DeBERTA-v3-435M (InPars ▶ consist. check)
DeBERTA-v3-435M (InPars all ▶ consist. check) ca0.1957
ba0.2187
b0.4953 ba0.2263
ba0.2117
cb0.3239
cb0.5543 cb0.2556
cba0.2336
ca0.3747
ca0.5726
c0.2639
ca0.2468
ba0.2746
ba0.4476
a0.6649 ba0.2811
cba0.2815 cba0.4446 ca0.6717 cba0.3009 cba0.5360 cba0.4621
c0.3267
ba0.3802
cb0.4440
ca0.4599
ba0.4987
b0.3482
cb0.3769
ca0.3929
ba0.4385
c0.2518
c0.3607
c0.4320
c0.5007
0.6666
0.7835
b0.6361
cb0.6926
ca0.7688
a0.8022
ca0.8183
c0.6953
Supervised transfer learning with optional unsupervised fine-tuning: transfer from MS MARCO with optional
fine-tuning on consistency-checked InPars data
da0.3080
MiniLM-L6-30M (MS MARCO)
MiniLM-L6-30M (MS MARCO ▶ consist. check) da0.2944
da0.3508
DeBERTA-v3-435M (MS MARCO)
DeBERTA-v3-435M (MS MARCO ▶ consist.
da0.3166
check)
a0.4370
a0.4311
a0.4679
a0.4553
a0.6662 da0.2295
a0.6501 da0.2692
a0.2986
da0.7269
a0.3011
da0.6912
da0.3923
da0.4730
a0.5304
a0.5371
da0.4646
da0.4320
da0.5616
da0.5075
da0.7476
da0.7898
a0.8304
a0.8165
monoT5-220M (MS MARCO) (Nogueira et al.,
2020)
monoT5-3B (MS MARCO) (Nogueira et al.,
2020)
monoT5-3B (MS MARCO ▶ InPars) (Bonifacio
et al., 2022)
0.3810
0.4909
0.7141
0.3279
0.5298
0.5674
0.7775
0.3980
0.5281
0.7508
0.3876
0.6091
0.6334
0.7948
0.3894
0.5087
0.7439
0.3967
0.6227
0.6297
0.8471
OpenAI API ranking results were produced by Bonifacio et al. (2022): $ denotes experiments that were too expensive to run.
InPars denotes the original query-generation method with filtering-out 90% of queries having lowest average log-probabilities.
InPars all denotes the query-generation method without query filtering, which was used in all-domain pretraining.
Consist. checked queries denotes a set of generated queries filtered out (via consistency checking) using the DeBERTA-v3-
435M model trained on InPars-generated data.
Best results are marked by bold font separately for each training scenario.
Super-scripted labels denote the following statistically significant differences (thresholds are given in the main text):
a: between a given neural ranking model and BM25;
b: between (InPars) and (InPars ▶ consist. check) when comparing neural ranking models of same type.
c: between (InPars all ▶ consist. check) and (InPars ▶ consist. check) when comparing neural ranking models of same type.
d: between (MS MARCO) and (MS MARCO ▶ consist. check) when comparing neural ranking models of same type.
Table 4: Performance of InPars for Different Generating and Ranking Models.
BM25 (ours)
0.1867
0.3612
0.5159
0.2555
0.4285
0.3248
0.6767
MS MARCO
MRR
TREC DL 2020
Robust04
NQ
MAP nDCG@10
MAP nDCG@20 nDCG@10
TREC COVID
nDCG@10
a0.2538
a0.4140
ba0.2608 ba0.4286
dba0.2605 ba0.4286
dba0.2746 ba0.4385 ba0.6649
ERNIE-v2-335M GPT-3 Curie (6B)
ERNIE-v2-335M GPT-J (6B)
ERNIE-v2-335M BLOOM (7B)
DeBERTA-v3-435M BLOOM (7B)
a0.7411
ba0.7750
ba0.7871
ba0.8022
Notes: Best results are in bold. Super-scripted labels denote statistically significant differences (thresholds are given in the
main text):
a: between a given neural ranking model and BM25;
b: between a given neural ranking model and ERNIE-v2-335M trained using OpenAI GPT-3 Curie.
c: between two ERNIE models trained using GPT-J-generated queries and BLOOM-generated queries;
d: between the DeBERTA model and the ERNIE model trained using BLOOM-generated queries.
a0.2357
a0.6229
0.4016
a0.6367
cba0.4724
cb0.2691
a0.6407 cba0.2852 dcba0.5102
ba0.2811
a0.4277
a0.4248
da0.4215
dba0.4987 dba0.4476
et al. (2022), who observed that only monoT5-3B (but not a much smaller monoT5-220M) outperformed
BM25 on all collections. Strong performance of MiniLM-L6-30M in our study was due to additional training
with consistency-checked data and pre-training on all-domain data (all queries from all collections). To
confirm the effectiveness of these procedures, we carried out ablation experiments.
Recall that the consistency-checked training data was produced using only the DeBERTA-v3-435M model.
Moreover, this data was used only to fine-tune a model that was pre-trained using data generated by the
original InPars recipe. From Table 3, we can see that for both MiniLM-L6-30M and DeBERTA-v3-435M, fine-tuning on consistency-checked data improves outcomes (which answers RQ3): for 12 measurements out of 14, these improvements are statistically significant (denoted by super-script label “b”).
Moreover, all-domain pretraining (instead of training on data generated by the original InPars recipe)
further boosts accuracy of MiniLM-L6-30M in all cases: All these improvements are statistically significant
(denoted by super-script label “c”). In contrast, all-domain pretraining substantially degrades performance
of DeBERTA-v3-435M. An in-depth investigation showed that for one seed (out of three), the model failed to converge properly. Therefore, we also analyze the best-seed outcomes, which are presented in § A.3 (Table 5). For MiniLM-L6-30M, the all-domain pre-training improves the best-seed accuracy in all cases.
For DeBERTA-v3-435M, there is either a substantial degradation or a small decrease/increase that is not
statistically significant (denoted by super-script label “c”). Thus, our biggest model—unlike a 15x smaller
MiniLM-L6-30M—does not benefit from all-domain pretraining. However, there is no substantial degradation
either.
Supervised transfer learning with optional unsupervised fine-tuning. We found that our ranking models trained on MS MARCO (both MiniLM-L6-30M and DeBERTA-v3-435M) transferred well to other collections in almost all cases. However, monoT5 models trained on MS MARCO are still substantially more accurate. According to Table 1, the average gains over BM25 are (1) 1.21 for MiniLM-30M vs. 1.46 for monoT5-220M and (2) 1.42 for DeBERTA-v3-435M vs. 1.59 for monoT5-3B. Moreover, this gap is not reduced by fine-tuning using synthetically generated data. This is different from the fully unsupervised scenario described above, where MiniLM-L6-30M often outperforms monoT5-220M while DeBERTA-v3-435M is on par with monoT5-3B.
This is in line with prior findings that large ranking models have better zero-shot transfer effectiveness (Ni et al., 2021; Rosa et al., 2022). However, using multi-billion parameter models pre-trained on MS MARCO in a commercial setting is problematic from both efficiency and legal standpoints. In particular, MS MARCO has a research-only license.12
Model-type ablation. To assess the impact of replacing GPT-3 Curie with an open-source model, we carried
out experiments using the following ranking models: ERNIE-v2 (Sun et al., 2020) and DeBERTA-v3-435M
(He et al., 2021). According to Table 4, except for NQ—where all generative models were equally good—both
12See terms and conditions: https://microsoft.github.io/msmarco/
GPT-J (Wang & Komatsuzaki, 2021) and BLOOM (Scao et al., 2022) outperformed GPT-3 Curie. This
answers RQ2.
The difference in accuracy was particularly big for Robust04. The average relative gains over GPT-3 Curie (not shown in the table) were 7.2% for BLOOM and 5.2% for GPT-J.13 Out of 14 comparisons, 10 were statistically significant (as denoted by super-script “b”).
In addition to varying the generative model, we assessed the impact of using DeBERTA-v3 instead of ERNIE-v2. This time around, both models were trained using BLOOM-generated queries. We can see that DeBERTA-v3 was better than ERNIE-v2 except in the case of Robust04.
6 Conclusion
We carried out a reproducibility study of InPars (Bonifacio et al., 2022), which is a method for unsupervised
training of neural rankers. As a by-product of this study, we developed a simple-yet-effective modification of
InPars, which we called InPars-light. Unlike InPars, InPars-light uses only the community-trained open-source language model BLOOM (with 7B parameters), 7x-100x smaller ranking models, and re-ranks only the top-100 candidate records instead of the top-1000.
Not only were we able to reproduce the key findings from prior work (Bonifacio et al., 2022), but, by combining the original InPars recipe (Bonifacio et al., 2022) with (1) fine-tuning on consistency-checked data (Dai et al., 2022) and (2) all-domain pretraining, we trained a small yet efficient MiniLM-L6-30M model that consistently outperformed BM25 in the unsupervised setting. In the same scenario, using a larger DeBERTA-v3-435M model, we largely matched the performance of the 7x larger monoT5-3B.
In the supervised transfer learning setting—when pretraining on MS MARCO was used—the monoT5-220M
model was still substantially more accurate than a 7x smaller MiniLM-30M ranker and this gap was not
reduced by unsupervised fine-tuning using synthetically generated data.
References
Chris Alberti, Daniel Andor, Emily Pitler, Jacob Devlin, and Michael Collins. Synthetic QA corpora
generation with roundtrip consistency. In Proceedings of the 57th Annual Meeting of the Association
for Computational Linguistics, pp. 6168–6173, Florence, Italy, July 2019. Association for Computational
Linguistics. doi: 10.18653/v1/P19-1620. URL https://aclanthology.org/P19-1620.
Payal Bajaj, Daniel Campos, Nick Craswell, Li Deng, Jianfeng Gao, Xiaodong Liu, Rangan Majumder,
Andrew McNamara, Bhaskar Mitra, Tri Nguyen, et al. MS MARCO: A human generated machine reading
comprehension dataset. arXiv preprint arXiv:1611.09268, 2016.
Luiz Henrique Bonifacio, Hugo Abonizio, Marzieh Fadaee, and Rodrigo Nogueira. Inpars: Unsupervised
dataset generation for information retrieval. In SIGIR, pp. 2387–2392. ACM, 2022.
Leonid Boytsov and Eric Nyberg. Flexible retrieval with NMSLIB and FlexNeuART. In Proceedings of
Second Workshop for NLP Open Source Software (NLP-OSS), pp. 32–43, 2020.
Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind
Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen
Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter,
Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark,
Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. Language models
are few-shot learners. In NeurIPS, 2020.
Chris Buckley, Darrin Dimmick, Ian Soboroff, and Ellen M. Voorhees. Bias and the limits of pooling for large
collections. Inf. Retr., 10(6):491–508, 2007.
13The average gain was obtained by (1) computing relative gain separately for each datasets and key metrics (nDCG or MRR)
and (2) averaging these relative gains.
Nick Craswell, Bhaskar Mitra, Emine Yilmaz, Daniel Campos, and Ellen M Voorhees. Overview of the TREC
2019 deep learning track. arXiv preprint arXiv:2003.07820, 2020.
Zhuyun Dai, Vincent Y. Zhao, Ji Ma, Yi Luan, Jianmo Ni, Jing Lu, Anton Bakalov, Kelvin Guu, Keith B. Hall,
and Ming-Wei Chang. Promptagator: Few-shot dense retrieval from 8 examples. CoRR, abs/2209.11755,
2022.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional
transformers for language understanding. arXiv preprint arXiv:1810.04805, 2018.
Leo Gao. https://blog.eleuther.ai/gpt3-model-sizes/, May 2021.
Luyu Gao, Xueguang Ma, Jimmy Lin, and Jamie Callan. Precise zero-shot dense retrieval without relevance
labels, 2022. URL https://arxiv.org/abs/2212.10496.
Lei Han, Eddy Maddalena, Alessandro Checco, Cristina Sarasua, Ujwal Gadiraju, Kevin Roitero, and
Gianluca Demartini. Crowd worker strategies in relevance judgment tasks. In WSDM, pp. 241–249. ACM,
2020.
Pengcheng He, Xiaodong Liu, Jianfeng Gao, and Weizhu Chen. Deberta: decoding-enhanced bert with disen-
tangled attention. In 9th International Conference on Learning Representations, ICLR 2021, Virtual Event,
Austria, May 3-7, 2021. OpenReview.net, 2021. URL https://openreview.net/forum?id=XPZIaotutsD.
Geoffrey E. Hinton, Oriol Vinyals, and Jeffrey Dean. Distilling the knowledge in a neural network. CoRR,
abs/1503.02531, 2015.
Sebastian Hofstätter, Sophia Althammer, Michael Schröder, Mete Sertkan, and Allan Hanbury. Improving
efficient neural ranking models with cross-architecture knowledge distillation, 2020. URL https://arxiv.
org/abs/2010.02666.
Gautier Izacard, Mathilde Caron, Lucas Hosseini, Sebastian Riedel, Piotr Bojanowski, Armand Joulin, and
Edouard Grave. Towards unsupervised dense information retrieval with contrastive learning. CoRR,
abs/2112.09118, 2021.
Vitor Jeronymo, Luiz Bonifacio, Hugo Abonizio, Marzieh Fadaee, Roberto Lotufo, Jakub Zavrel, and Rodrigo
Nogueira. Inpars-v2: Large language models as efficient dataset generators for information retrieval, 2023.
URL https://arxiv.org/abs/2301.01820.
Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur Parikh, Chris Alberti,
Danielle Epstein, Illia Polosukhin, Jacob Devlin, Kenton Lee, et al. Natural Questions: a benchmark for
question answering research. Transactions of the Association for Computational Linguistics, 7:453–466,
2019.
Phuc H. Le-Khac, Graham Healy, and Alan F. Smeaton. Contrastive representation learning: A framework
and review. IEEE Access, 8:193907–193934, 2020.
Jinyu Li, Rui Zhao, Jui-Ting Huang, and Yifan Gong. Learning small-size DNN with output-distribution-based
criteria. In INTERSPEECH, pp. 1910–1914. ISCA, 2014.
Minghan Li and Éric Gaussier. Domain adaptation for dense retrieval through self-supervision by pseudo-
relevance labeling. CoRR, abs/2212.06552, 2022.
Jimmy Lin, Xueguang Ma, Sheng-Chieh Lin, Jheng-Hong Yang, Ronak Pradeep, and Rodrigo Nogueira.
Pyserini: A python toolkit for reproducible information retrieval research with sparse and dense represen-
tations. In Proceedings of the 44th International ACM SIGIR Conference on Research and Development in
Information Retrieval, pp. 2356–2362, 2021a.
Jimmy Lin, Rodrigo Nogueira, and Andrew Yates. Pretrained Transformers for Text Ranking: BERT and
Beyond. Synthesis Lectures on Human Language Technologies. Morgan & Claypool Publishers, 2021b.
Sheng-Chieh Lin, Jheng-Hong Yang, and Jimmy Lin. Distilling dense representations for ranking using
tightly-coupled teachers. CoRR, abs/2010.11386, 2020.
Pengfei Liu, Weizhe Yuan, Jinlan Fu, Zhengbao Jiang, Hiroaki Hayashi, and Graham Neubig. Pre-train,
prompt, and predict: A systematic survey of prompting methods in natural language processing. CoRR,
abs/2107.13586, 2021.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke
Zettlemoyer, and Veselin Stoyanov. Roberta: A robustly optimized BERT pretraining approach. CoRR,
abs/1907.11692, 2019.
Ilya Loshchilov and Frank Hutter. Decoupled weight decay regularization. arXiv preprint arXiv:1711.05101,
2017.
Sean MacAvaney, Andrew Yates, Sergey Feldman, Doug Downey, Arman Cohan, and Nazli Goharian.
Simplified data wrangling with ir_datasets. In SIGIR, 2021.
Irina Matveeva, Chris Burges, Timo Burkard, Andy Laucius, and Leon Wong. High accuracy retrieval with
multiple nested ranker. In SIGIR, pp. 437–444. ACM, 2006.
Iurii Mokrii, Leonid Boytsov, and Pavel Braslavski. A systematic evaluation of transfer learning and
pseudo-labeling with bert-based ranking models. In SIGIR, pp. 2081–2085. ACM, 2021.
Marius Mosbach, Maksym Andriushchenko, and Dietrich Klakow. On the stability of fine-tuning BERT:
misconceptions, explanations, and strong baselines. arXiv preprint arXiv:2006.04884, 2020.
Jianmo Ni, Chen Qu, Jing Lu, Zhuyun Dai, Gustavo Hern’andez ’Abrego, Ji Ma, Vincent Zhao, Yi Luan,
Keith B. Hall, Ming-Wei Chang, and Yinfei Yang. Large dual encoders are generalizable retrievers. ArXiv,
abs/2112.07899, 2021.
Rodrigo Nogueira and Kyunghyun Cho. Passage re-ranking with BERT. arXiv preprint arXiv:1901.04085,
2019.
Rodrigo Nogueira, Zhiying Jiang, and Jimmy Lin. Document ranking with a pretrained sequence-to-sequence
model. arXiv preprint arXiv:2003.06713, 2020.
John M. Prager. Open-domain question-answering. Found. Trends Inf. Retr., 1(2):91–231, 2006.
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou,
Wei Li, and Peter J. Liu. Exploring the limits of transfer learning with a unified text-to-text transformer.
J. Mach. Learn. Res., 21:140:1–140:67, 2020.
Revanth Gangi Reddy, Bhavani Iyer, Md. Arafat Sultan, Rong Zhang, Avirup Sil, Vittorio Castelli, Radu
Florian, and Salim Roukos. Synthetic target domain supervision for open retrieval QA. In SIGIR, pp.
1793–1797. ACM, 2021.
Kirk Roberts, Tasmeer Alam, Steven Bedrick, Dina Demner-Fushman, Kyle Lo, Ian Soboroff, Ellen M.
Voorhees, Lucy Lu Wang, and William R. Hersh. TREC-COVID: rationale and structure of an information
retrieval shared task for COVID-19. J. Am. Medical Informatics Assoc., 27(9):1431–1436, 2020.
Stephen Robertson. Understanding inverse document frequency: on theoretical arguments for IDF. Journal
of Documentation, 60(5):503–520, 2004. doi: 10.1108/00220410410560582.
Adriana Romero, Nicolas Ballas, Samira Ebrahimi Kahou, Antoine Chassang, Carlo Gatta, and Yoshua
Bengio. Fitnets: Hints for thin deep nets. In ICLR (Poster), 2015.
Guilherme Rosa, Luiz Bonifacio, Vitor Jeronymo, Hugo Abonizio, Marzieh Fadaee, Roberto Lotufo, and
Rodrigo Nogueira. In defense of cross-encoders for zero-shot retrieval, 2022. URL https://arxiv.org/
abs/2212.06121.
Devendra Singh Sachan, Mike Lewis, Mandar Joshi, Armen Aghajanyan, Wen-tau Yih, Joelle Pineau, and
Luke Zettlemoyer. Improving passage retrieval with zero-shot question generation. CoRR, abs/2204.07496,
2022.
Teven Le Scao, Angela Fan, Christopher Akiki, Ellie Pavlick, Suzana Ilic, Daniel Hesslow, Roman Castagné,
Alexandra Sasha Luccioni, François Yvon, Matthias Gallé, Jonathan Tow, Alexander M. Rush, Stella Bider-
man, Albert Webson, Pawan Sasanka Ammanamanchi, Thomas Wang, Benoît Sagot, Niklas Muennighoff,
Albert Villanova del Moral, Olatunji Ruwase, Rachel Bawden, Stas Bekman, Angelina McMillan-Major,
Iz Beltagy, Huu Nguyen, Lucile Saulnier, Samson Tan, Pedro Ortiz Suarez, Victor Sanh, Hugo Laurençon,
Yacine Jernite, Julien Launay, Margaret Mitchell, Colin Raffel, Aaron Gokaslan, Adi Simhi, Aitor Soroa,
Alham Fikri Aji, Amit Alfassy, Anna Rogers, Ariel Kreisberg Nitzav, Canwen Xu, Chenghao Mou, Chris
Emezue, Christopher Klamm, Colin Leong, Daniel van Strien, David Ifeoluwa Adelani, and et al. BLOOM:
A 176b-parameter open-access multilingual language model. CoRR, abs/2211.05100, 2022.
Timo Schick and Hinrich Schütze. Generating datasets with pretrained language models. CoRR,
abs/2104.07540, 2021.
Leslie N. Smith. Cyclical learning rates for training neural networks. In WACV, pp. 464–472, 2017.
Yu Sun, Shuohuan Wang, Yu-Kun Li, Shikun Feng, Hao Tian, Hua Wu, and Haifeng Wang. ERNIE 2.0: A
continual pre-training framework for language understanding. In AAAI, pp. 8968–8975. AAAI Press, 2020.
Nandan Thakur, Nils Reimers, Andreas Rücklé, Abhishek Srivastava, and Iryna Gurevych. Beir: A heteroge-
nous benchmark for zero-shot evaluation of information retrieval models. arXiv preprint arXiv:2104.08663,
2021.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser,
and Illia Polosukhin. Attention is all you need. In NIPS, pp. 5998–6008, 2017.
Ellen Voorhees. Overview of the TREC 2004 robust retrieval track. In TREC, 2004.
Ben Wang and Aran Komatsuzaki. GPT-J-6B: A 6 Billion Parameter Autoregressive Language Model.
https://github.com/kingoflolz/mesh-transformer-jax, May 2021.
Kexin Wang, Nandan Thakur, Nils Reimers, and Iryna Gurevych. GPL: generative pseudo labeling for
unsupervised domain adaptation of dense retrieval. In NAACL-HLT, pp. 2345–2360. Association for
Computational Linguistics, 2022.
Lidan Wang, Jimmy Lin, and Donald Metzler. A cascade ranking model for efficient ranked retrieval. In
SIGIR, pp. 105–114. ACM, 2011.
Wenhui Wang, Furu Wei, Li Dong, Hangbo Bao, Nan Yang, and Ming Zhou. Minilm: Deep self-attention
distillation for task-agnostic compression of pre-trained transformers. In NeurIPS, 2020.
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric
Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma,
Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest,
and Alexander Rush. Transformers: State-of-the-art natural language processing. In Proceedings of the
2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45,
Online, October 2020. Association for Computational Linguistics. doi: 10.18653/v1/2020.emnlp-demos.6.
URL https://aclanthology.org/2020.emnlp-demos.6.
A Appendix
A.1 Statistical Testing with Multiple-Seed Models
To compute statistical significance using a paired statistical test between results from models A and B, one first has to compute the values of an accuracy metric (e.g., MRR) for each query separately. Let $m^A_i$ and $m^B_i$ be the sequences of query-specific metric values for models A and B, respectively. The paired statistical test is then carried out using the sequence of differences $m^A_i - m^B_i$.
This procedure is not directly applicable when each model is represented by multiple outcomes/seeds. To overcome this issue, we (1) obtain a set of query- and seed-specific metric values, and (2) average them over seeds, thus reducing the problem to single-seed statistical testing. In more detail, let $m^A_{is}$ and $m^B_{is}$ be the sets of query- and seed-specific metric values for models A and B, respectively. Recall that we have three seeds, so $s \in \{1, 2, 3\}$. Then, we obtain seed-averaged runs $m^A_i = \frac{1}{3}\sum_{s=1}^{3} m^A_{is}$ and $m^B_i = \frac{1}{3}\sum_{s=1}^{3} m^B_{is}$ and compute statistical significance using a paired difference test.
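A sketch of this procedure using SciPy is given below: per-query metric values are first averaged over the three seeds, and a paired two-sided t-test is then applied to the seed-averaged values.

```python
import numpy as np
from scipy import stats

def seed_averaged_paired_ttest(metric_a, metric_b):
    """Paired two-sided t-test on seed-averaged, query-specific metric values.

    metric_a, metric_b: arrays of shape (num_queries, num_seeds) holding, e.g.,
    per-query MRR values for models A and B under each of the three seeds.
    """
    a = np.asarray(metric_a).mean(axis=1)   # seed-averaged run for model A
    b = np.asarray(metric_b).mean(axis=1)   # seed-averaged run for model B
    t_stat, p_value = stats.ttest_rel(a, b)
    return t_stat, p_value
```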
A.2 Cost and Efficiency
In this subsection, we discuss both the ranking efficiency and the query-generation cost. Although one may argue that the cost of generation using open-source models is negligibly small, in reality this is true only if one owns their own hardware and generates enough queries to justify the initial investment. Thus, we make a more reasonable assessment assuming that the user can employ a cheap cloud service.
Cost of Query Generation. For the original InPars (Bonifacio et al., 2022), the cost of generation with the GPT-3 Curie model is $0.002 per one thousand tokens. The token count includes the length of the prompt and the prompting document.14 We estimate that (depending on the collection) a single generation involves 300 to 500 tokens: the long-document collections Robust04 and TREC-COVID both have close to 500 tokens per generation.
Taking an estimate of 500 tokens per generation, the cost of querying the OpenAI GPT-3 Curie API can be up to $100 for Robust04 and TREC-COVID. Assuming that sampling from the 137B FLAN model (used by Dai et al. 2022) is as expensive as sampling from the largest GPT-3 model, Davinci (which has a similar number of parameters), each generation in the Promptagator study (Dai et al., 2022) was 10x more expensive compared to the InPars study (Bonifacio et al., 2022). Moreover, because Dai et al. (2022) generated one million samples per collection, the Promptagator recipe was about two orders of magnitude more expensive compared to InPars.
In contrast, it takes only about 15 hours to generate 100K queries using an RTX 3090 GPU. Extrapolating this estimate to an A100, which is about 2x faster than an RTX 3090,15 and using the pricing of the Lambda GPU cloud, we estimate the cost of generation in our InPars-light study to be under $10 per collection.16
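These back-of-the-envelope estimates can be reproduced as follows; the cloud A100 hourly rate used below is an assumed illustrative value, not a number taken from the paper.

```python
# GPT-3 Curie API cost for one collection (numbers quoted above).
TOKENS_PER_GENERATION = 500            # upper estimate (Robust04, TREC-COVID)
NUM_GENERATIONS = 100_000
CURIE_USD_PER_1K_TOKENS = 0.002

api_cost = NUM_GENERATIONS * TOKENS_PER_GENERATION / 1000 * CURIE_USD_PER_1K_TOKENS
print(f"GPT-3 Curie generation cost: ~${api_cost:.0f}")        # ~ $100

# Open-source generation on a rented GPU.
HOURS_ON_RTX_3090 = 15
A100_SPEEDUP = 2.0                     # A100 assumed ~2x faster than an RTX 3090
A100_USD_PER_HOUR = 1.10               # assumption for illustration; not from the paper

gpu_cost = HOURS_ON_RTX_3090 / A100_SPEEDUP * A100_USD_PER_HOUR
print(f"Open-source generation cost: ~${gpu_cost:.2f}")        # under $10 per collection
```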
Efficiency of Re-ranking. A rather common opinion (in particular, expressed by anonymous reviewers on multiple occasions) is that using cross-encoders is not a practical option. This might be true for extremely constrained latency environments or very large models, but we think it is totally practical to use small models such as MiniLM-L6-30M for applications such as enterprise search. In particular, with a reasonably modern GPU (such as an RTX 3090) and MiniLM-L6-30M, the re-ranking throughput exceeds 500 passages per second (assuming truncation to the first 477 characters). Thus, re-ranking 100 documents has an acceptable sub-second latency. In fact, Cohere AI provides re-ranking with neural models as a cloud service.17
Cost of Model Training. Here, all training times are given with respect to a single RTX 3090 GPU. Training and evaluating MiniLM-L6-30M models had negligible costs, dominated by all-domain pretraining, which took about two hours per seed. In contrast, the all-domain pretraining of DeBERTA-v3-435M took 28 hours. However, without all-domain pretraining, the training time itself was rather small, in particular because we used only a fraction of all generated queries (10K queries in the original InPars training and about 20K queries in the follow-up fine-tuning using consistency-checked data).
Aside from all-domain pre-training, the two most time-consuming operations were:
• Evaluation of model effectiveness on large query sets MS MARCO and NQ, which jointly have about
10K queries;
14https://chengh.medium.com/understand-the-pricing-of-gpt3-e646b2d63320
15https://lambdalabs.com/blog/nvidia-rtx-a6000-benchmarks
16https://lambdalabs.com/service/gpu-cloud#pricing
17https://docs.cohere.com/docs/reranking
Table 5: Best-Seed Results for Unsupervised Training
MS MARCO TREC DL 2020
Robust04
NQ
MRR
MAP nDCG@10
MAP nDCG@20 nDCG@10
TREC COVID
nDCG@10
BM25 (ours)
0.1867
0.3612
0.5159
0.2555
0.4285
0.3248
0.6767
MiniLM (InPars)
MiniLM (InPars ▶ consist. check)
MiniLM (InPars all ▶ consist. check)
ba0.2197 b0.3562
cba0.2422 b0.3844
ca0.2517 a0.3945
ba0.2415
b0.5151 ba0.2380
ba0.4029
ba0.5753 cb0.2615 cba0.4554
cb0.3297
a0.5769 c0.2671 ca0.4691 ca0.3800
b0.6732
ba0.7483
a0.7709
MiniLM-L6-30M results
DeBERTA-v3-435M results
ba0.2748 a0.4437
ba0.2847 a0.4479
a0.2804 a0.4414
DeBERTA (InPars)
DeBERTA (InPars ▶ consist. check)
DeBERTA (InPars all ▶ consist. check)
Notes: Best results are marked by bold font (separately for each model).
Consist. checked queries denotes a set of generated queries filtered out (via consistency checking) using the DeBERTA-v3-
435M model trained on InPars-generated data.
Super-scripted labels denote the following statistically significant differences (thresholds are given in the main text):
a: between a given neural ranking model and BM25;
b: between (InPars) and (InPars ▶ consist. check) when comparing ranking models of same type.
c: between (InPars all ▶ consist. check) and (InPars ▶ consist. check) when comparing ranking models of same type.
ba0.5131
a0.4872
ba0.5417 ca0.4924
ca0.4746
a0.5505
a0.6779 ba0.2874
a0.6813 ba0.3043
a0.6575 a0.3076
a0.8118
a0.8305
a0.8259
• Consistency checking using the DeBERTA-v3-435M model.
The total effectiveness evaluation time for DeBERTA-v3-435M was about 6 hours (for all collections). The consistency checking, however, took about 48 hours. In the future, we may consider carrying out consistency checking using a much faster model, such as MiniLM-L6-30M.
A.3 Additional Experimental Results
Our rankers were trained using three seeds. However, in the case of all-domain pretraining, DeBERTA
converged poorly for one seed. Therefore, in Table 5 we present best-seed results.
|
synthetic_cpt | 4 | Data_Augmentation_for_Spoken_Language_Understanding_via_Pretrained_Language_Models.pdf | Data Augmentation for Spoken Language Understanding via Pretrained
Language Models
Baolin Peng∗, Chenguang Zhu∗, Michael Zeng, Jianfeng Gao
Microsoft Research, Redmond
{bapeng,chezhu,nzeng,jfgao}@microsoft.com
Abstract
The training of spoken language understanding (SLU) models
often faces the problem of data scarcity. In this paper, we put
forward a data augmentation method using pretrained language
models to boost the variability and accuracy of generated ut-
terances. Furthermore, we investigate and propose solutions to
two previously overlooked semi-supervised learning scenarios of
data scarcity in SLU: i) Rich-in-Ontology: ontology information
with numerous valid dialogue acts is given; ii) Rich-in-Utterance:
a large number of unlabelled utterances are available. Empirical
results show that our method can produce synthetic training data
that boosts the performance of language understanding models
in various scenarios.
Index Terms: Spoken language understanding, pretraining, data
augmentation, rich-in-ontology, rich-in-utterance
1. Introduction
Spoken Language Understanding (SLU) is widely applied in
human-machine dialogue systems to convert natural utterances
into predefined semantic frames, i.e. dialogue acts, for further
processing. For example, an SLU component in a virtual as-
sistant or robot outputs its prediction of intents and slot labels
detected within a user’s utterance [1]. Nevertheless, as a su-
pervised learning task, SLU suffers from the problem of data
scarcity. The problem becomes more prevalent in face of new
LU domains with novel definitions of intents and slot labels.
Even with an existing domain, the data correlated with a certain
intent or slot is often not sufficient. These problems significantly
limit the applicability of SLU systems.
Recently, various successful use cases of synthetic datasets
have stimulated the growth of the area of Data Augmentation
(DA) [2, 3]. The typical approach is to learn a model to mimic
the language style in the training data, leveraging the relation-
ship between semantic units and their natural representations.
Then, a non-generative model can modify utterances and replace
slot labels from existing data [4], while a generative model can
produce synthetic utterances in the same distribution space of
the training data [5]. However, these approaches usually train
the DA model on domain-specific data, which is of a small scale
by itself. It is thus questionable whether the augmented data
contains rich language expressibility beyond the scope of the
given data.
On the other hand, the rapid development of large-scale pre-
trained language models has significantly improved the capacity
of language understanding and generation models [6, 7]. With
a modest amount of domain-specific data, a pretrained model
can quickly adapt to a new domain. For instance, SC-GPT [8]
finetunes the GPT-2 language model [9] with dialogue data. It
∗Equal contribution
can efficiently adapt to new dialogue domains with only a couple
of labelled data samples.
In this paper, we propose to frame data augmentation as
a semantically controlled generation problem. Given dialogue
act, we leverage the pretrained SC-GPT model to generate corre-
sponding utterances as synthetic training data. In the process, the
general language syntax and semantics learned during the pre-
training phase are fused into the generation of domain-specific
utterances to increase variability and accuracy of SLU.
Furthermore, previous literature on SLU data augmentation
focus on the case where only a scant number of pairs of utter-
ance and corresponding semantic labels are given, which we
denote as Paired-Data-Only. However, there are two other over-
looked semi-supervised learning scenarios that commonly arise
in application.
• Rich-in-Ontology: The full ontology for the dialogue do-
main is also given, including the definitions of intents,
slot lists and valid combinations of slots and values. In
other words, the model is given a variety of valid combina-
tions of semantic labels. What lacks is the corresponding
natural language utterances.
• Rich-in-Utterance: Apart from the labelled data, there are
abundant unlabelled utterances without annotated intents,
slots and values.
In this paper, we also delve into these two scenarios and
propose corresponding data augmentation solutions.
For Rich-in-Ontology, we first finetune the pretrained model
SC-GPT on the paired training data, and then apply it to the valid
combination of intents and slots in the ontology information to
generate additional training data.
For Rich-in-Utterance, following the idea of the NLG model
SC-GPT, we propose SC-GPT-NLU, which is pretrained on the
same corpus of SC-GPT with flipped sources and targets. In
detail, we feed the utterances into the model and let it generate
intent and slots in a text sequence. Therefore, SC-GPT-NLU can
act as a language understanding module and produce semantic
labels for the unlabelled utterances available.
In the experiments, we evaluate the slot tagging and intent
classification accuracies of a Bi-LSTM seq2seq SLU model, us-
ing various data augmentation methods. Results show that on
ATIS and Snips datasets, our proposed method outperforms other
baseline systems. For instance, compared with baseline methods,
the data augmented by our system can help the underlying SLU
model achieve 0.5 points higher slot F1 and 3.02 points higher
intent accuracy in ATIS-Small dataset. Furthermore, when on-
tology information or unlabelled utterances are available, i.e.
Rich-in-Ontology and Rich-in-Utterance, our method can pro-
duce synthetic data that significantly boosts the performance of
SLU models.
2. Related Work
2.1. SLU Data Augmentation
Many previous approaches to SLU data augmentation target to
increase variability of generated utterances. [10] proposes to add
noise to perturb the decoder states to generate variants of an utter-
ance. Variational autoencoder (VAE) and conditional variational
autoencoder (CVAE) are used to generate utterances with di-
versified expressions [11]. [4] uses both non-generative models
like word substitution and generative models like paraphrasing
and back-translation to augment training data. [5] proposes a
multi-stage framework to generate, filter, and rank augmented
utterances. [12] uses reinforcement learning to learn a genera-
tor that facilitates dialogue state tracking. [13] employs atomic
templates to guide the model to generate more utterances given
combination of dialogue acts. [14] proposes to select sentences
from unlabeled utterances and apply pseudo-labels. The two
additional scenarios we propose in this paper are also related
to semi-supervised learning [15]. But we focus on data aug-
mentation, which is independent of the downstream learning
models.
Similar to our work, [16, 17] uses pretrained language mod-
els to generate synthetic training data for data augmentation.
However, their approach blends multiple labels and input sen-
tences together during training, so it is hard to control the amount
of generated synthetic data per class.
2.2. Pretraining
Pretrained models leverage the large amount of unlabelled text
corpora to improve the capability of language understanding.
ELMo [18] applies two unidirectional RNNs for language mod-
eling. GPT-2 [9] utilizes the transformer architecture [19] for
the task. BERT [6] employs a masking technique and next-
sentence-prediction task to train a bidirectional language model.
UniLM [20] uses different masking patterns to unify the model
structure for NLU and NLG. These pretrained language models
have been widely used with considerable success in various NLP
applications such as question answering [21] and summarization
[22].
Furthermore, pretrained language models have been lever-
aged in speech language processing to provide rich contextual
embeddings [23]. Specifically, SC-GPT [8], i.e. Semantically
Conditioned Generative Pre-training Transformer, builds upon
GPT-2 and is further pretrained on a large-scale dialogue corpus.
The resulting model outperforms many baselines in few-shot
language generation for task-oriented dialogue.
3. Data Augmentation
3.1. Traditional Augmentation Scenario
We describe the traditional augmentation scenario in SLU as
Paired-Data-Only, as the training data consists of N instance
pairs. Each pair contains the input tokenized utterance x =
(x1, x2, ..., xT ) and the corresponding output dialogue act A.
A includes the intent label I and P slot-value pairs:
A = [\underbrace{I}_{\text{Intent}},\ \underbrace{(s_1 = v_1, \cdots, s_P = v_P)}_{\text{Slot-value pairs}}]    (1)
Thus, the training data D = {(x1, A1), ..., (xN , AN )}.
However, due to high labeling costs, the size of labeled data N is
usually small. In such cases, data augmentation (DA) is needed.
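To make the data format concrete, the sketch below shows one possible in-memory representation of a dialogue act and of a labelled pair; the class and field names are our own illustration and are not part of the paper.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class DialogueAct:
    """A dialogue act A = [intent, (s1 = v1, ..., sP = vP)] as in Equ. (1)."""
    intent: str                          # e.g. "hotel-inform"
    slot_values: List[Tuple[str, str]]   # e.g. [("name", "Hyatt"), ("area", "center")]

    def linearize(self) -> str:
        # Flatten the act into a single "intent ( slot = value ; ... )" string,
        # roughly mirroring the DA strings shown later in Table 3.
        slots = "; ".join(f"{s} = {v}" for s, v in self.slot_values)
        return f"{self.intent} ( {slots} )"

# One labelled training pair (x, A) from D = {(x1, A1), ..., (xN, AN)}
act = DialogueAct("hotel-inform", [("name", "Hyatt"), ("area", "center"), ("star", "5")])
pair = ("I have booked the 5-star Hyatt hotel in the center area for you.", act)
```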
An augmenter S is a language generation model, which is trained
| Model | Input | Output |
| SC-GPT | Dialogue act | Utterance |
| SC-GPT-NLU | Utterance | Dialogue act |

Table 1: The input and output of SC-GPT [8] and SC-GPT-NLU models. Both are initialized with GPT-2 [9] but further pretrained on different data with swapped inputs and outputs.
on D to be able to produce a corresponding utterance ˜x given
an input dialogue act Ã. For example, suppose Ã = [hotel-inform, (name = Hyatt, area = center, star = 5)], then S can generate x̃ = I have booked the 5-star Hyatt hotel in the center area for you.
Then, during augmentation, we first augment the dialogue
acts in the training data by replacing/inserting/deleting slot val-
ues to create more combinations. The augmenter S then gener-
ates candidate utterances for the dialogue acts. As the generated
utterances may not always contain the required slot-value la-
bels, we filter them to make sure that each utterance has all the
required input slot-values.
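A minimal sketch of this generate-then-filter loop is given below. It assumes a Hugging Face GPT-2-style interface for the augmenter S (the "gpt2" checkpoint and the "DA & utterance" prompt format are placeholders rather than the released SC-GPT artifacts) and keeps only candidates that contain every required slot value.

```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer

# Placeholder checkpoint: in practice the augmenter S is SC-GPT, i.e. GPT-2 further
# pretrained on 400K DA-utterance pairs and fine-tuned on the paired data D.
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

def generate_candidates(da_text, n=3):
    """Sample n candidate utterances for one linearized dialogue act."""
    prompt = da_text + " & "  # assumed DA/utterance separator, not the official format
    input_ids = tokenizer.encode(prompt, return_tensors="pt")
    outputs = model.generate(
        input_ids,
        do_sample=True, top_p=0.9, temperature=1.0,  # nucleus sampling as in Section 4.2
        max_length=input_ids.shape[1] + 40,
        num_return_sequences=n,
        pad_token_id=tokenizer.eos_token_id,
    )
    return [tokenizer.decode(o[input_ids.shape[1]:], skip_special_tokens=True) for o in outputs]

def keep(utterance, slot_values):
    """Filter: every required slot value must literally appear in the utterance."""
    return all(v.lower() in utterance.lower() for v in slot_values)

da = "hotel-inform ( name = Hyatt ; area = center ; star = 5 )"
augmented = [u for u in generate_candidates(da) if keep(u, ["Hyatt", "center", "5"])]
```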
However, the data augmenter itself requires a considerable
amount of training data. As a result, augmenters directly trained
on D may have limited model capacity and expressibility. Thus,
we adopt the pretrained model SC-GPT [8], which is a language
model to produce utterances given a dialogue act. SC-GPT is
initialized with GPT-2 [9], further pretrained on a large corpus
of 400K dialogue-act-utterance pairs and then fine-tuned on the
training data D. It has been shown that SC-GPT can quickly
adapt to new domains with only a few domain-specific data
samples [8].
3.2. More Data Augmentation Scenarios
We note that in many real applications, there is often additional
available information beyond the paired training data. Here, we
specify two semi-supervised scenarios that commonly arise in
applications but have been overlooked by previous approaches.
3.2.1. Rich In Ontology
In many dialogue domains, a detailed description of the ontology
is given, which is a list of valid dialogue acts. Formally, the
training data consists of both labelled pairs and many dialogue
acts: D = {(x1, A1), ..., (xN , AN ), AN +1, ..., AM }.
To work with this scenario, we finetune SC-GPT on the
paired part of D, i.e. {(x1, A1), ..., (xN , AN )}, and then gen-
erate utterances for the other dialogue acts {AN +1, ..., AM }.
The utterances are then filtered to make sure that each utterance
has all the corresponding slot-values.
3.2.2. Rich In Utterance
It
is common in practice that a large number of unla-
belled dialogue utterances are available, usually collected
from history data. Formally, the training data consists of
both labelled pairs and many unlabeled utterances: D =
{(x1, A1), ..., (xN , AN ), xN +1, ..., xM }.
To utilize these utterances, we need to produce correspond-
ing dialogue acts. We propose to finetune GPT-2 in the reverse
way: feed an utterance as input and let the model generate the
dialogue act as output. In other words, we leverage a language
generation model to act as a language understanding module,
denoted as SC-GPT-NLU (Table 1).
Like SC-GPT, SC-GPT-NLU is initialized with GPT-2 and
Figure 1: Data augmentation process for the three scenarios: Paired-Data-Only, Rich-In-Ontology and Rich-In-Utterance. All models are initialized with GPT-2, further pretrained on a 400K dialogue corpus [8] and finetuned on the paired data {(x1, A1), ..., (xN, AN)}.
| Scenario | Model | ATIS-Small Slot F1 | ATIS-Small Intent Acc. | ATIS-Medium Slot F1 | ATIS-Medium Intent Acc. | Snips-Small Slot F1 | Snips-Small Intent Acc. | Snips-Medium Slot F1 | Snips-Medium Intent Acc. |
| No Data Augmentation | No-DA | 68.91 | 84.99 | 87.30 | 90.15 | 61.30 | 93.43 | 79.83 | 97.29 |
| Paired-Data-Only | Seq2Seq | 73.71 | - | 88.72 | - | - | - | - | - |
| Paired-Data-Only | VAE | 74.92 | 83.65 | 89.27 | 90.95 | - | - | - | - |
| Paired-Data-Only | Ours | 75.42 | 86.67 | 88.61 | 90.71 | 64.96 | 93.43 | 80.62 | 97.57 |
| Rich-in-Ontology | Ours | 82.42* | 89.03* | 89.81* | 92.27* | 67.06* | 94.14* | 82.54* | 97.86 |
| Rich-in-Utterance | Ours | 78.45 | 87.46 | 88.23 | 91.94 | 63.46 | 93.43 | 80.54 | 98.14* |

Table 2: Slot F1 and intent accuracy scores on ATIS and Snips dataset. The overall highest score is in bold, and the best result in Paired-Data-Only category is underlined. *: Statistically significant with p-value less than 0.05.
further pretrained on the 400K dialogue-act-utterance data and
finetuned on the paired part of D. But SC-GPT-NLU treats
the utterance as input and dialogue acts as output. So both SC-
GPT and SC-GPT-NLU are language generation models with
a softmax-based output layer that produces utterance/dialogue
acts token by token.
During augmentation, SC-GPT-NLU generates dialogue acts
for the unlabeled utterances xN +1, ..., xM . Here, the generated
names of intents, slots and values are mapped to the pre-defined
ontology by string matching. The augmented data is filtered to
make sure that each input slot-value appears in the utterance.
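The pseudo-labelling direction can be sketched in the same spirit: parse the dialogue-act string generated by SC-GPT-NLU, map its intent and slot names onto the pre-defined ontology by string matching, and keep the pair only if every predicted value appears in the utterance. The parsing format and the ontology contents below are illustrative assumptions, not the paper's exact code.

```python
import re

# Toy ontology for illustration: valid intents and slot names in the target domain.
ONTOLOGY = {
    "intents": {"BookRestaurant", "AddToPlaylist", "RateBook"},
    "slots": {"party size number", "restaurant type", "served dish", "state"},
}

def parse_da(generated):
    """Parse an 'Intent ( slot = value ; ... )' string produced by the reversed model."""
    m = re.match(r"\s*([\w ]+?)\s*\((.*)\)\s*$", generated)
    if m is None:
        return None
    intent, slots = m.group(1).strip(), []
    for chunk in m.group(2).split(";"):
        if "=" in chunk:
            s, v = chunk.split("=", 1)
            slots.append((s.strip(), v.strip()))
    return intent, slots

def accept(utterance, intent, slots):
    """Keep the pseudo-labelled pair only if the labels match the ontology by string
    matching and every predicted slot value appears in the utterance."""
    return (intent in ONTOLOGY["intents"]
            and all(s in ONTOLOGY["slots"] for s, _ in slots)
            and all(v.lower() in utterance.lower() for _, v in slots))
```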
Figure 1 illustrates our SLU data augmentation process for
all three scenarios.
4. Experiments
4.1. Datasets and Metrics
We employ the widely used SLU benchmark dataset ATIS [24]
and Snips [25]. ATIS contains around 5.8K utterances from flight
reservation dialogues. It includes 120 slot labels and 21 intent
types. Snips contains 14K utterances from the Snips personal
voice assistant. It includes 123 slot labels and 7 intent types.
To simulate the few-shot data situations, we follow [26] to
use two small portions of the ATIS training set as training data:
Small (∼1/40 of the original training set) and Medium (∼1/10 of
the original training set). A development set of 500 instances is
used. Following the same split ratio, we sampled 327 and 1308
instances in Snips for Small and Medium respectively.
We use F1 score to measure slot tagging quality and use
accuracy score to evaluate intent classification, in accordance
with [26].
4.2. Model Details
SLU Model. For fair comparison, we use the same SLU model
that is trained on the training data and the data augmented by
our model and baseline systems.
We adopt the same setting for the SLU model as in [5]. It
has two layers of bi-directional LSTM with a hidden dimension
of 200 and a dropout probability of 0.5. We choose the Adam
optimizer [27] with a learning rate of 0.001. Gradients with a
2-norm greater than 5 are clipped. The best model is selected
based on performances on the validation set. The number of
training epochs is 50 and the batch size is 20.
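A simplified stand-in for this SLU model is sketched below; the exact seq2seq architecture of [5] differs, but the hyper-parameters (two bi-directional LSTM layers, hidden size 200, dropout 0.5, Adam with learning rate 0.001, gradient clipping at 5) follow the description above.

```python
import torch
import torch.nn as nn

class BiLSTMSLU(nn.Module):
    """Simplified joint slot-tagging / intent-classification network (a stand-in for [5])."""
    def __init__(self, vocab_size, n_slots, n_intents, emb_dim=100, hidden=200):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden, num_layers=2, bidirectional=True,
                            dropout=0.5, batch_first=True)
        self.slot_head = nn.Linear(2 * hidden, n_slots)      # per-token slot labels
        self.intent_head = nn.Linear(2 * hidden, n_intents)  # utterance-level intent

    def forward(self, tokens):
        h, _ = self.lstm(self.emb(tokens))                   # (B, T, 2*hidden)
        return self.slot_head(h), self.intent_head(h.mean(dim=1))

model = BiLSTMSLU(vocab_size=10000, n_slots=120, n_intents=21)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
# Inside the training loop, after loss.backward(), gradients with 2-norm > 5 are clipped:
# torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=5.0)
```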
Data augmentation. For the Paired-Data-Only case, we modify
the dialogue acts in the training split to construct around 300
additional combinations of DAs via dropping/inserting/replacing
SC-GPT
DA: RateBook (best rating = 6; object select = current; object type = textbook; rating value = 3)
  Utterance 1: Give 3 out of 6 to current textbook
  Utterance 2: The current textbook gets a 3 out of 6
  Utterance 3: I think that the current textbook should be rated 3 out of 6
DA: BookRestaurant (country = Honduras; facility = indoor; restaurant type = restaurant)
  Utterance 1: Book me a reservation for an indoor restaurant in Honduras
  Utterance 2: Book an indoor restaurant in Honduras
  Utterance 3: I need to book an indoor restaurant in Honduras

SC-GPT-NLU
Utterance: 2 of us want to eat at a restaurant that serves meatballs in VT
  DA: BookRestaurant (party size number = 2; restaurant type = restaurant; served dish = meatballs; state = VT)
Utterance: Add the track to the Metal Talks Metallica playlist.
  DA: AddToPlaylist (music item = track; playlist = metal talks Metallica)

Table 3: Example utterances generated by SC-GPT given dialogue acts (DA) and dialogue acts generated by SC-GPT-NLU given unlabelled utterances in Snips.
slots and values. For each dialogue act, we sample three utter-
ances produced by SC-GPT. After filtering out utterances which
do not contain all the slot-values, we collect around 500 synthetic
utterances and add them into the original training split.
We simulate the Rich-in-Ontology scenario by making the
dialogue acts in the whole training set available, from which 500
dialogue acts are sampled and added to the training split.
For the Rich-in-Utterance scenario, we sample 1,000 utter-
ances in the training corpus and use SC-GPT-NLU to produce
the most probable dialogue act. After filtering, around 500
utterance-DA pairs are added to the original training split.
Implementation details. Both SC-GPT and SC-GPT-NLU are
finetuned for 5 epochs with a learning rate of 5e-5. Nucleus
sampling [28] is used for decoding, where the sampling top-p is
0.9, and the temperature is 1. Details on SC-GPT including the
number of parameters and pretraining procedure can be found
at [8]. The finetuning takes about half an hour on a V100 GPU machine with 64GB memory.
Baselines. The baseline data augmentation systems include the
seq2seq [5] and variational autoencoder (VAE) data augmenta-
tion model [29]. We also report the results for the case without
data augmentation, denoted by No-DA.
4.3. Results
Table 2 shows the accuracy of slot tagging and intent classifi-
cation for various models. Based on the results, we make the
following observations.
Firstly, our data augmentation method can considerably
boost the model accuracy (comparing No-DA and Ours), es-
pecially when the training data size is small. For instance, in
ATIS, when only paired data is available, the slot F1 increases by
6.51 (Small) and 1.31 (Medium) points, while the intent accuracy
increases by 1.68 (Small) and 0.56 (Medium) points.
Secondly, under Rich-in-Ontology and Rich-in-Utterance
scenarios, our method further boosts the slot F1 by up to 7
points and intent accuracy by up to 2.4 points. Overall, the
accuracy scores are the highest when the ontology information
is available. This shows that our method can take advantage of
additional information and produce better synthetic training data
for downstream models. We conduct statistical paired t-tests
and find that the best model’s performance is all statistically
significant with p-value less than 0.05.
Thirdly, under the traditional Paired-Data-Only scenario, our
data augmentation method outperforms all baselines in ATIS-
Small, and achieves comparable results in ATIS-Medium. This
shows that our method is better suited when training data is
scarce.
4.4. Examples of Augmented Data
In Table 3, we show examples of generated utterances and dia-
logue acts by SC-GPT and SC-GPT-NLU in Snips. As shown,
after pretraining and domain finetuning, SC-GPT can produce
coherent utterances with a high variability, while covering all
required intent, slots and values. SC-GPT-NLU can generate
dialogue acts in the same format as input to SC-GPT, which
captures the important semantic information in the input utter-
ances. This demonstrates that pretrained models can quickly
adapt to target domains with a small amount of labeled data.
This facilitates the generation of high-quality synthetic data for
SLU.
5. Conclusion
In this paper, we approach the problem of data scarcity in SLU
with pretrained language models. After finetuning on domain-
specific dialogue data, our model can produce high-quality syn-
thetic data which boosts the performance of the downstream SLU
model. Moreover, we provide solutions to two semi-supervised
scenarios in SLU overlooked by previous literature: Rich-in-
Ontology and Rich-in-Utterance. In experiments on the bench-
mark datasets ATIS and Snips, we demonstrate that our solution
can effectively leverage auxiliary unlabeled data to produce high-
quality synthetic training data for building SLU models with a
higher accuracy.
As future work, we aim to extend the idea of data augmenta-
tion based on pretrained language models to other speech language
processing tasks, such as information retrieval and summariza-
tion.
6. References
[1] J. R. Bellegarda, “Spoken language understanding for natural in-
teraction: The siri experience,” in Natural interaction with robots,
knowbots and smartphones. Springer, 2014, pp. 3–14.
[2] X. Lu, B. Zheng, A. Velivelli, and C. Zhai, “Enhancing text cat-
egorization with semantic-enriched representation and training
data augmentation,” Journal of the American Medical Informatics
Association, vol. 13, no. 5, pp. 526–535, 2006.
[23] Y.-A. Chung, C. Zhu, and M. Zeng, “Semi-supervised speech-
language joint pre-training for spoken language understanding,”
arXiv preprint arXiv:2010.02295, 2020.
[24] G. Tur, D. Hakkani-T¨ur, and L. Heck, “What is left to be un-
derstood in atis?” in 2010 IEEE Spoken Language Technology
Workshop.
IEEE, 2010, pp. 19–24.
[25] A. Coucke, A. Saade, A. Ball, T. Bluche, A. Caulier, D. Leroy,
C. Doumouro, T. Gisselbrecht, F. Caltagirone, T. Lavril et al.,
“Snips voice platform: an embedded spoken language understand-
ing system for private-by-design voice interfaces,” arXiv preprint
arXiv:1805.10190, 2018.
[26] Y.-N. Chen, D. Hakanni-T¨ur, G. Tur, A. Celikyilmaz, J. Guo, and
L. Deng, “Syntax or semantics? Knowledge-guided joint seman-
tic frame parsing,” in 2016 IEEE Spoken Language Technology
Workshop (SLT).
IEEE, 2016, pp. 348–355.
[27] D. P. Kingma and J. Ba, “Adam: A method for stochastic optimiza-
tion,” arXiv preprint arXiv:1412.6980, 2014.
[28] A. Holtzman, J. Buys, L. Du, M. Forbes, and Y. Choi, “The curious
case of neural text degeneration,” arXiv preprint arXiv:1904.09751,
2019.
[29] K. M. Yoo, Y. Shin, and S.-g. Lee, “Data augmentation for spoken
language understanding via joint variational generation,” in Pro-
ceedings of the AAAI Conference on Artificial Intelligence, vol. 33,
2019, pp. 7402–7409.
[3] A. Hannun, C. Case, J. Casper, B. Catanzaro, G. Diamos, E. Elsen,
R. Prenger, S. Satheesh, S. Sengupta, A. Coates et al., “Deep
speech: Scaling up end-to-end speech recognition,” arXiv preprint
arXiv:1412.5567, 2014.
[4] J. Quan and D. Xiong, “Effective data augmentation ap-
proaches to end-to-end task-oriented dialogue,” arXiv preprint
arXiv:1912.02478, 2019.
[5] Y. Hou, Y. Liu, W. Che, and T. Liu, “Sequence-to-sequence data
augmentation for dialogue language understanding,” arXiv preprint
arXiv:1807.01554, 2018.
[6] J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova, “Bert: Pre-
training of deep bidirectional transformers for language under-
standing,” arXiv preprint arXiv:1810.04805, 2018.
[7] Y. Liu, M. Ott, N. Goyal, J. Du, M. Joshi, D. Chen, O. Levy,
M. Lewis, L. Zettlemoyer, and V. Stoyanov, “Roberta: A
robustly optimized bert pretraining approach,” arXiv preprint
arXiv:1907.11692, 2019.
[8] B. Peng, C. Zhu, C. Li, X. Li, J. Li, M. Zeng, and J. Gao, “Few-
shot natural language generation for task-oriented dialog,” arXiv
preprint arXiv:2002.12328, 2020.
[9] A. Radford, K. Narasimhan, T. Salimans, and I. Sutskever, “Im-
proving language understanding by generative pre-training,” 2018.
[10] G. Kurata, B. Xiang, and B. Zhou, “Labeled data generation with
encoder-decoder lstm for semantic slot filling.” in Interspeech,
2016.
[11] J. Li, L. Qiu, B. Tang, D. Chen, D. Zhao, and R. Yan, “Insufficient
data can also rock! Learning to converse using smaller data with
augmentation,” in Proceedings of the AAAI Conference on Artificial
Intelligence, vol. 33, 2019, pp. 6698–6705.
[12] Y. Yin, L. Shang, X. Jiang, X. Chen, and Q. Liu, “Dialog
state tracking with reinforced data augmentation,” arXiv preprint
arXiv:1908.07795, 2019.
[13] Z. Zhao, S. Zhu, and K. Yu, “Data augmentation with atomic
templates for spoken language understanding,” arXiv preprint
arXiv:1908.10770, 2019.
[14] E. Cho, H. Xie, J. P. Lalor, V. Kumar, and W. M. Campbell, “Effi-
cient semi-supervised learning for natural language understanding
by optimizing diversity,” in 2019 IEEE Automatic Speech Recog-
nition and Understanding Workshop (ASRU).
IEEE, 2019, pp.
1077–1084.
[15] D. P. Kingma, S. Mohamed, D. J. Rezende, and M. Welling, “Semi-
supervised learning with deep generative models,” in Advances in
neural information processing systems, 2014, pp. 3581–3589.
[16] A. Anaby-Tavor, B. Carmeli, E. Goldbraich, A. Kantor, G. Kour,
S. Shlomov, N. Tepper, and N. Zwerdling, “Do not have enough
data? Deep learning to the rescue!” in Thirty-Fourth AAAI Confer-
ence on Artificial Intelligence, 2020.
[17] V. Kumar, A. Choudhary, and E. Cho, “Data augmentation using
pre-trained transformer models,” arXiv preprint arXiv:2003.02245,
2020.
[18] M. E. Peters, M. Neumann, M. Iyyer, M. Gardner, C. Clark, K. Lee,
and L. Zettlemoyer, “Deep contextualized word representations,”
arXiv preprint arXiv:1802.05365, 2018.
[19] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N.
Gomez, Ł. Kaiser, and I. Polosukhin, “Attention is all you need,”
Advances in neural information processing systems, pp. 5998–
6008, 2017.
[20] L. Dong, N. Yang, W. Wang, F. Wei, X. Liu, Y. Wang, J. Gao,
M. Zhou, and H.-W. Hon, “Unified language model pre-training
for natural language understanding and generation,” arXiv preprint
arXiv:1905.03197, 2019.
[21] C. Zhu, M. Zeng, and X. Huang, “Sdnet: Contextualized attention-
based deep network for conversational question answering,” arXiv
preprint arXiv:1812.03593, 2018.
[22] Y. Liu and M. Lapata, “Text summarization with pretrained en-
coders,” EMNLP, 2019.
|
synthetic_cpt | 1 | Exploring_the_Utility_of_Self-Supervised_Pretraining_Strategies_for_the_Detection_of_Absent_Lung_Sliding_in_M-Mode_Lung_Ultrasound.pdf | 1
SERE: Exploring Feature Self-relation for
Self-supervised Transformer
Zhong-Yu Li, Shanghua Gao, Ming-Ming Cheng
Abstract—Learning representations with self-supervision for convolutional networks (CNN) has been validated to be effective for
vision tasks. As an alternative to CNN, vision transformers (ViT) have strong representation ability with spatial self-attention and
channel-level feedforward networks. Recent works reveal that self-supervised learning helps unleash the great potential of ViT. Still,
most works follow self-supervised strategies designed for CNN, e.g., instance-level discrimination of samples, but they ignore the
properties of ViT. We observe that relational modeling on spatial and channel dimensions distinguishes ViT from other networks. To
enforce this property, we explore the feature SElf-RElation (SERE) for training self-supervised ViT. Specifically, instead of conducting
self-supervised learning solely on feature embeddings from multiple views, we utilize the feature self-relations, i.e., spatial/channel
self-relations, for self-supervised learning. Self-relation based learning further enhances the relation modeling ability of ViT, resulting
in stronger representations that stably improve performance on multiple downstream tasks. Our source code is publicly available at:
https://github.com/MCG-NKU/SERE.
Index Terms—feature self-relation, self-supervised learning, vision transformer
✦
1 INTRODUCTION
SUPERVISED training of neural networks thrives on many vision tasks at the cost of collecting expensive human-annotations [1], [2], [3]. Learning visual representations from
un-labeled images [4], [5], [6], [7], [8] has proven to be an
effective alternative to supervised training, e.g., convolutional net-
works (CNN) trained with self-supervision have shown compara-
ble or even better performance than its supervised counterparts
[9], [10]. Recently, vision transformers (ViT) [11], [12] have
emerged with stronger representation ability than CNN on many
vision tasks. Pioneering works have shifted the methods designed
for self-supervised CNN to ViT and revealed the great potential
of self-supervised ViT [13], [14], [15]. Typical self-supervised
learning methods designed for ViT, e.g., DINO [13] and Mo-
CoV3 [15], send multiple views of an image into a ViT network
to generate feature representations. Self-supervisions, e.g., con-
trastive learning [15], [16], [17] and clustering [13], [18], are then
implemented on these representations based on the hypothesis
that different views of an image share similar representations.
However, the widely used feature representations are still limited
to feature embedding used by CNN based methods, e.g., image-
level embeddings [6], [7], [19] and patch-level embeddings [20],
[21]. But the properties of ViT, e.g., the self-relation modeling
ability, are less considered by existing self-supervised methods.
We wonder if other forms of representations related to ViT can
benefit the training of self-supervised ViT.
We seek to improve the training of self-supervised ViT by
exploring the properties of ViT. ViT models the feature relations
on spatial and channel dimensions with the multi-head self-
attention (MHSA) and feedforward network (FFN) [11], [22],
[23], respectively. The MHSA aggregates the spatial information
with the extracted relations among patches, resulting in stronger
spatial relations among patches with similar semantic contexts
(see Fig. 1(c)). The FFN combines features from different chan-
nels, implicitly modeling the feature self-relation in the channel
•
The authors are with TMCC, CS, Nankai University, Tianjin 300350,
China. S. Gao is the corresponding author ([email protected]).
Fig. 1. The illustration of self-supervised learning using feature embed-
dings and our proposed feature self-relation. (a) Typical self-supervised
learning methods process the feature embeddings of the image views.
(b) We propose to model the feature self-relation that measures the
relation inside an image view from different dimensions. (c) Two specific
forms of self-relation, i.e., the spatial and channel self-relations. For
spatial self-relation, we select 6 patches indicated by differently colored
boxes (top right) and visualize their self-relation (top left). For channel
self-relation, we show visualized feature maps of 4 channels (bottom
right) and the corresponding self-relation (bottom left).
dimension. For instance, Fig. 1(c) reveals that channels learn
diverse patterns, and there are varying degrees of relations be-
tween different channels. Feature self-relation modeling enables
ViT with strong representation ability, motivating us to use self-
relation as a new representation form for self-supervision.
In this work, we propose to utilize the feature SElf-RElation
(SERE) for self-supervised training, enhancing the self-relation
modeling properties in ViT. Following the spatial relation in
MHSA and channel relation in FFN, we form the spatial and
channel self-relations as representations. The spatial self-relation
extracts the relations among patches within an image. The chan-
nel self-relation models the connection of different channels,
where each channel in feature embeddings highlights unique
semantic information. Feature self-relation is the representation
in a new dimension and is compatible with existing representation
forms, e.g., image-level and patch-level feature embeddings. As
shown in Fig. 1, we can easily replace the feature embeddings
with the proposed feature self-relation on existing self-supervised
learning methods. We demonstrate that utilizing feature self-
relation could stably improve multiple training methods for self-
supervised ViT, e.g., DINO [13], iBOT [18], and MoCoV3 [15],
on various downstream tasks, e.g., object detection [2], [24], se-
mantic segmentation [3], [25], semi-supervised semantic segmen-
tation [26] and image classification [1]. To our best knowledge,
we are the first to study self-relations in self-supervised learning.
Our major contributions are summarized as follows:
• We propose to utilize the self-relations (SERE) of ViT,
i.e., spatial and channel self-relations that fit well with the
relation modeling property of ViT, as the representations
for self-supervised learning.
• The proposed SERE method is compatible with existing
self-supervised methods and stably boosts ViT on various
downstream tasks.
2 RELATED WORK
2.1 Self-Supervised Learning
Self-supervised learning aims at learning rich representations
without any human annotations. Early works utilized hand-crafted
pretext tasks, e.g., coloration [27], [28], jigsaw puzzles [29],
rotation prediction [30], autoencoder [31], [32], image inpaint-
ing [33] and counting [34] to learn representations based on
heuristic cues [19], but only achieved limited generalization
ability. Recently, self-supervised learning has shown great break-
throughs due to new forms of self-supervisions, e.g., contrastive
learning [7], [35], [36], [37], [38], [39], [40], self-clustering [41],
[42], [43], and representation alignment [5], [6], [44], [45], [46],
[47]. These methods directly utilize the feature embeddings as
representations to generate self-supervisions. For example, many
of these methods utilize image-level feature embeddings [19],
[41], [48] as representations. And some methods explore using
embeddings in more fine-grained dimensions, e.g., pixel [20],
[49], patch [50], [51], object [21], and region [21], [52] di-
mensions. However, these representations are still embeddings
corresponding to different regions of input images. Compared to
these embedding based methods that only constrain individual
embedding, we further transform the feature embedding to self-
relation as a new representation dimension, which adds the
constraint to the relation among embeddings. The self-relation
provides rich information for self-supervised training and fits well
with the relation modeling properties of ViT, thus further boosting
the representation quality of ViT. Meanwhile, the self-relation
is orthogonal
to embedding based methods and consistently
improves the performance of multiple methods.
2.2 Self-Supervised Vision Transformer
Transformers have been generalized to computer vision [11],
[53] and achieved state-of-the-art performance on many tasks,
e.g., image classification [12], semantic segmentation [53], [54],
and object detection [55]. Due to a lack of inductive bias, training
ViT requires much more data and tricks [11], [56]. Recent works
have been working on training ViT with self-supervised learning
methods [16], [57], [58], [59] to meet the data requirement of ViT
with low annotation costs. Many instance discrimination based
methods use feature embeddings as the representation for self-
supervised learning. For instance, Chen et al. [15] and Caron et
al. [13] implement contrastive learning and self-clustering with
image-level embeddings, respectively. Zhou et al. [18] develop
self-distillation with patch-level embeddings. However,
these
methods still follow the pretext task of instance discrimination
initially designed for CNNs, where representations with invari-
ance to transformation are learned by maximizing the similarity
among positive samples. New properties in ViT may help the self-
supervised training but are ignored by these methods. We explore
spatial self-relation and channel self-relation, which are proven
more suitable for the training of ViT.
2.3 Masked Image Modeling
Concurrent with our work, self-supervised learning by masked
image modeling (MIM) [14], [33], [60], [61] has become a popu-
lar alternative to instance discrimination (ID) for self-supervised
ViT. MIM reconstructs masked patches from unmasked parts,
with different forms of reconstruction targets, e.g., discrete to-
kenizer [60], [62], raw pixels [14], [59], [63], [64], [65], HOG
features [66], patch representations [18], etc. Compared to ID,
patch-level reconstruction in MIM enhances token-level represen-
tations [18], [61]. Differently, the proposed SERE enhances the
ability to model inter-token relations. Experiments also demon-
strate that SERE can outperform and complement various MIM-
based methods. Additionally, we strengthen the ability to model
inter-channel relations, which MIM is missing.
2.4 Property of Vision Transformer
Recent works have shown that the remarkable success of ViT
on many vision tasks [12], [54], [67] relies on their strong
ability to model spatial relations. Dosovitskiy et al. [11] and
Kim et al. [23] find that attention attends to semantically relevant
regions of images. Raghu et al. [22] reveal the representations of
ViT preserve strong spatial information even in the deep layer.
They also observe that patches in ViT have strong connections
to regions with similar semantics. Caron et al. [13] find that
self-supervised ViT captures more explicit semantic regions than
supervised ViT. These observations indicate that ViT has a strong
ability to model relations, which is quite different from the
pattern-matching mechanisms of CNNs. In this work, we propose
to enhance such ability by explicitly using spatial and channel
feature self-relations for self-supervised learning.
2.5 Relation Modeling
Relation modeling, which has different forms such as pair-
wise relation and attention, has facilitated various vision tasks,
e.g., knowledge distillation [68], [69], [70], [71], [72], [73], met-
ric learning [74], semantic segmentation [75], [76], [77], unsuper-
vised semantic segmentation [78], object localization [79], [80],
[81], contrastive learning [82], masked image modeling [83],
feature aggregation [84] and texture descriptor [85], [86]. In self-
supervised learning, early work [87] proposes to utilize relation
modeling by calculating channel relations in the whole batch,
i.e., batch-relation. In comparison, we explore self-relation, which
is the spatial or channel relations for features within an image and
fits well with the relation modeling property of ViT.
3 METHOD
3.1 Overview
In this work, we focus on the instance discriminative self-
supervised learning pipeline [4], [13]. First, we briefly revisit
Fig. 2. Our method models self-relation from spatial and channel dimensions. Given an image x, two views are generated by two random data
augmentations. Here the image patches represent the feature embeddings extracted by the encoder. The feature embeddings are transformed by
representation transformation P to generate spatial or channel self-relations. Lp and Lc, i.e., the loss functions defined in Equ. (3) and Equ. (5),
enforce consistency between self-relations of different views. For spatial self-relation, only the features in the overlapping region are considered.
O means the operation of extracting features from the overlapping region between two views in Equ. (2), where the red dotted box indicates the
overlapping region.
the framework of common instance discriminative self-supervised
learning methods. Given an un-labeled image x, multiple views
are generated by different random data augmentations, e.g., gen-
erating two views τ1(x) and τ2(x) with augmentations τ1 and
τ2. Under the assumption that different views of an image
contain similar information, the major idea of most instance
discriminative methods is to maximize the shared information
encoded from different views. Firstly, two views are sent to the
encoder network to extract the feature embeddings r1 ∈ RC×HW
and r2 ∈ RC×HW with H · W local patches and C channels.
According to the training objective of self-supervised learning
methods,
the feature embeddings are then transformed with
transformation P to obtain different representations, e.g., image-
level and patch-level embeddings. Different self-supervised opti-
mization objectives utilize the obtained representations to get the
loss as follows:
LI = R(P(r1), P(r2)),
(1)
where R means the function that maximizes the consistency
across views and can be defined with multiple forms, e.g., con-
trastive [7], non-contrastive [6], and clustering [4] losses.
Our main focus in this work is exploring new forms of rep-
resentation transformation P. Motivated by the relation modeling
properties in ViT, instead of directly using feature embeddings,
we utilize feature self-relation in multiple dimensions as the rep-
resentations for self-supervised learning on ViT. In the following
sections, we introduce two specific self-relation representations
for self-supervised ViT, i.e., spatial and channel self-relations.
3.2 Spatial Self-relation
Prior works [11], [13], [22], [23] have observed that ViT has the
property of modeling relations among local patches by the MHSA
module. Meanwhile, modeling more accurate spatial relations is
crucial for many dense prediction tasks [20], [21], e.g., object
detection and semantic segmentation. So we propose to enhance
the relation modeling ability of ViT by cooperating spatial self-
relation for self-supervised training. In the following part, we first
give details of the transformation P that transforms the feature
embeddings encoded by ViT to spatial self-relation. Then, we
Fig. 3. The region-aligned sampling operation for spatial self-relation.
τ1(x) and τ2(x) are the different views of an image, and the dotted
boxes indicate their regions in the original image. The points in green
mean the uniformly sampled points in the overlapped regions. And the
points in purple mean the patch features in ViT.
give the self-supervision loss utilizing spatial self-relation as the
representation.
Generating spatial self-relation representation. Given the
feature embeddings r1 = f1(τ1(x)) ∈ RC×HW and r2 =
f2(τ2(x)) ∈ RC×HW from the ViT backbone, a projection head
hp, which consists of a batch normalization [88] layer and a
ReLU [89] activation layer, processes these embeddings to obtain
p1 = hp(r1) and p2 = hp(r2). Then, we separately calculate
their spatial self-relation.
In contrast to the image-level embedding, the supervision be-
tween spatial self-relation of different views should be calculated
between patches at the same spatial positions. However, p1 and
p2 are not aligned in the spatial dimension due to the random crop
and flip in data augmentations. To solve the misalignment issue,
we apply a region-aligned sampling operation O [26] to uniformly
sample Hs × Ws points from the overlapping region of p1 and
p2.1 As shown in Fig. 3, we localize the overlapping region in the
raw image and split the region into Hs × Ws grids, which are
not essentially aligned with the patches in ViT. For the center of
each grid, we calculate its spatial coordinates in feature maps of
each view and then sample its features by bi-linear interpolation.
The details of this operation O are shown in the supplementary.
For one view, e.g., p1 ∈ RC×HW , we calculate the spatial self-
relation Ap(p1) ∈ RHsWs×HsWs as follows:
A_p(p_1) = \mathrm{Softmax}\left( \frac{O(p_1)^{T} \cdot O(p_1)}{\sqrt{C}} \Big/ t_p \right),    (2)
where O(p1) ∈ RC×HsWs is the feature sampled in the over-
lapping region, T is the matrix transpose operation, and tp is the
temperature parameter that controls the sharpness of the Softmax
function. In the spatial self-relation, each row represents the
relation of one local patch to other patches and is normalized
by the Softmax function to generate probability distributions.
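A possible PyTorch sketch of the sampling operation O and of Equ. (2) is shown below; the tensor shapes and the use of grid_sample for the bi-linear sampling are our assumptions about one reasonable implementation, not the released code.

```python
import torch
import torch.nn.functional as F

def spatial_self_relation(p, grid, t_p=0.5):
    """p:    (B, C, H, W) projected patch features of one view.
       grid: (B, Hs, Ws, 2) sampling locations in [-1, 1], i.e. the centers of the
             Hs x Ws cells of the overlapping region mapped into this view (operation O).
    """
    B, C = p.shape[:2]
    o = F.grid_sample(p, grid, mode="bilinear", align_corners=False)  # (B, C, Hs, Ws)
    o = o.flatten(2)                                                  # (B, C, Hs*Ws)
    rel = torch.einsum("bcm,bcn->bmn", o, o) / (C ** 0.5)             # similarities of Equ. (2)
    return F.softmax(rel / t_p, dim=-1)                               # (B, Hs*Ws, Hs*Ws)
```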
Self-supervision with spatial self-relation. Spatial self-relation
can be used as the representation of many forms of self-
supervisions. For simplicity, we give an example of using self-
relation for asymmetric non-contrastive self-supervision loss [5],
[6] as follows:
L_p = R_e(\mathrm{SG}(A_p(p_1)),\ A_p(g_p(p_2))),    (3)
where R_e is the cross-entropy loss, SG is the stop-gradient operation to avoid training collapse following [5], and g_p is the prediction head for asymmetric non-contrastive loss [5], [6] consisting of a fully connected layer, a batch normalization layer, and a ReLU layer.
Multi-head spatial self-relation. In ViT, the MHSA performs
multiple parallel self-attention operations by dividing the feature
into multiple groups. It is observed that different heads might
focus on different semantic patterns [13]. Inspired by this, we
divide the feature embeddings into M groups along the channel
dimension and calculate the spatial self-relation within each
group, obtaining M spatial self-relations for each view. By
default, we choose M = 6, as shown in Tab. 12.
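The multi-head variant and the loss of Equ. (3) might be implemented as follows; the per-group scaling, the epsilon inside the logarithm, and the detach-based stop-gradient are implementation choices of this sketch rather than details taken from the paper.

```python
import torch
import torch.nn.functional as F

def multi_head_spatial_relation(o, M=6, t_p=0.5):
    """o: (B, C, N) sampled features; split channels into M groups before Equ. (2)."""
    B, C, N = o.shape
    o = o.view(B, M, C // M, N)
    rel = torch.einsum("bmcn,bmck->bmnk", o, o) / ((C // M) ** 0.5)
    return F.softmax(rel / t_p, dim=-1)                 # (B, M, N, N)

def relation_loss(target_rel, online_rel, eps=1e-6):
    """Equ. (3)/(5): cross-entropy between two relation distributions; the momentum
    branch provides the target and receives no gradient (stop-gradient via detach)."""
    target = target_rel.detach()
    return -(target * torch.log(online_rel + eps)).sum(dim=-1).mean()
```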
3.3 Channel Self-relation
In neural networks, each channel represents some kind of pattern
within images. Different channels encode diverse patterns [90],
[91], providing neural networks with a strong representation
capability. The FFN [11] in ViT combines patterns across chan-
nels and implicitly models the relation among channels [90],
i.e., the pattern encoded in one channel has different degrees
of correlation with the patterns encoded by other channels, as
shown in Fig. 2. This mechanism motivates us to form channel
self-relation as the representation for self-supervised learning to
enhance self-relation modeling ability in the channel dimension.
Specifically, we transform the feature embedding of ViT to
channel self-relation and then use the channel self-relation as the
representation for self-supervision.
Generating channel self-relation representation. Here, we give
the details of the transformation P that transforms the feature
1. In this work, we combine the proposed spatial self-relation with existing
methods due to the orthogonality of self-relation. Since existing methods do
not restrict that different views must overlap, we only add spatial self-relation
to the views with overlapping regions.
embeddings to channel self-relation. As in Equ. (2), given the
feature embeddings of two views, i.e., r1 and r2, a projection
head hc with the same structure as hp processes these embeddings
and obtains c1 = hc(r1)T and c2 = hc(r2)T . Then we
separately calculate the channel self-relation for each view. For
one view, e.g., c1 ∈ RHW ×C , we calculate its channel self-
relation Ac(c1) ∈ RC×C as follows:
A_c(c_1) = \mathrm{Softmax}\left( \frac{c_1^{T} \cdot c_1}{H \cdot W} \Big/ t_c \right),    (4)
where the Softmax function normalizes each row of the self-
relation to get probability distributions, and tc is the temperature
parameter controlling the sharpness of probability distributions.
Self-supervision with channel self-relation. The channel self-
relation can also be utilized as a new form of representation for
many self-supervised losses. Similar to the spatial self-relation
based loss in Equ. (3), we give the non-contrastive loss using
channel self-relation as follows:
L_c = R_e(\mathrm{SG}(A_c(c_1)),\ A_c(g_c(c_2))),    (5)
where the Re is the cross-entropy loss, and gc is a prediction
head with the same structure as gp in Equ. (3). This loss function
enforces the consistency of channel self-relations among views
and thus enhances the channel self-relation modeling ability of
the model. Unlike spatial self-relation, we do not need to consider
the spatial misalignment between different views. Because we
enforce the consistency between channel self-relations, not the
channel features, and the channel self-relation defined in Equ. (4)
has no spatial dimension.
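A corresponding sketch for Equ. (4) is given below, again under assumed tensor shapes; the loss of Equ. (5) can reuse the same cross-entropy-with-stop-gradient pattern sketched above for the spatial case.

```python
import torch
import torch.nn.functional as F

def channel_self_relation(c, t_c=0.1):
    """c: (B, HW, C) projected features of one view. Returns (B, C, C) as in Equ. (4)."""
    HW = c.shape[1]
    rel = torch.einsum("bnc,bnd->bcd", c, c) / HW   # channel-to-channel similarities
    return F.softmax(rel / t_c, dim=-1)

# Equ. (5) then compares the two views' channel self-relations with a cross-entropy
# loss, detaching the momentum-branch relation so that it only acts as the target.
```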
3.4 Implementation Details
Loss function. By default, we apply our proposed spatial/channel
self-relations and image embeddings as representations for self-
supervision losses, as these representations reveal different prop-
erties of features. The summarized loss function is as follows:
L = LI + αLp + βLc,
(6)
where the spatial and channel losses are weighted by α and β, and
LI is the loss using image-level embeddings, e.g., the clustering-
based loss in DINO [13]. We show in Tab. 8 that solely using
our proposed self-relation could achieve competitive or better
performance than using image-level embeddings. Combining
these three representations results in better representation quality,
showing self-relation is a complementary representation form
to image-level embeddings. To increase the training efficiency
and make fair comparisons, we utilize the multi-crop [4], [13]
augmentation to generate global and local views. For local views,
we follow [4], [13] to calculate the loss between each global and
local view but ignore the loss among local views.
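The sketch below illustrates how the total loss of Equ. (6) could be accumulated over multi-crop views while skipping local-local pairs; the per-view loss callables are a hypothetical interface used only to show the pairing scheme.

```python
def total_loss(global_views, local_views, alpha=1.0, beta=1.0):
    """Accumulate L = L_I + alpha * L_p + beta * L_c (Equ. (6)) over multi-crop pairs.
    Each view object is assumed to expose L_I / L_p / L_c as callables taking the
    paired view (an illustrative interface, not the released training code)."""
    loss, n_pairs = 0.0, 0
    for g in global_views:                        # targets always come from global views
        for v in global_views + local_views:
            if v is g:                            # skip a view paired with itself
                continue
            loss += g.L_I(v) + alpha * g.L_p(v) + beta * g.L_c(v)
            n_pairs += 1
    return loss / n_pairs                         # local-local pairs never contribute
```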
Architecture. We use the Vision Transformer [11] as the encoder
network. Following [7], [13], the representations r1 and r2 of two
views τ1(x) and τ2(x) are extracted by a momentum-updated
encoder network f1 and the encoder network f2. During training,
the parameters θ2 of f2 are updated by gradient descent. And the
parameters θ1 of f1 are updated as θ1 = λθ1 + (1 − λ)θ2, where
λ ∈ [0, 1] is the momentum coefficient. Following DINO [13],
the λ is set to 0.996 and is increased to 1.0 during training with a
cosine schedule. Accordingly, we denote the projections following f1 and f2 as h1_p/h1_c and h2_p/h2_c, respectively. The parameters
TABLE 1
Fully fine-tuning classification on ImageNet-1K and semi-supervised semantic segmentation on ImageNet-S. For ImageNet-S, we report the mIoU on the val and test set. PT means loading self-supervised pre-trained weights for initialization and FT means loading fully fine-tuned weights on classification labels of ImageNet-1K for initialization, respectively.

| Method | Backbone | Epochs | ImageNet-1K Top-1 | Top-5 | ImageNet-S_PT val | ImageNet-S_PT test | ImageNet-S_FT val | ImageNet-S_FT test |
| DINO [13] | ViT-S/16 | 100 | 79.7 | 95.1 | 35.1 | 34.4 | 54.6 | 54.4 |
| +SERE | ViT-S/16 | 100 | 80.9 | 95.5 | 36.9 | 36.0 | 57.3 | 56.2 |
| iBOT [18] | ViT-S/16 | 100 | 80.9 | 95.4 | 38.1 | 37.8 | 57.9 | 57.4 |
| +SERE | ViT-S/16 | 100 | 81.5 | 95.8 | 41.0 | 40.2 | 58.9 | 57.8 |
| iBOT [18] | ViT-B/16 | 100 | 83.3 | 96.6 | 48.3 | 47.8 | 62.6 | 63.0 |
| +SERE | ViT-B/16 | 100 | 83.7 | 96.7 | 48.6 | 48.2 | 63.0 | 63.3 |
TABLE 2
Transferring learning on semantic segmentation, object detection, and instance segmentation. APb means the bounding box AP for object detection (DET), and APm means the segmentation mask AP for instance segmentation (SEG).

| Method | VOC SEG mIoU | VOC SEG mAcc | ADE20K SEG mIoU | ADE20K SEG mAcc | COCO DET APb | APb_50 | APb_75 | COCO SEG APm | APm_50 | APm_75 |
| DINO [13] | 77.1 | 87.5 | 42.6 | 53.4 | 46.0 | 64.9 | 49.7 | 40.0 | 62.0 | 42.8 |
| +SERE | 79.7 | 88.8 | 43.8 | 54.6 | 46.6 | 65.9 | 50.2 | 40.5 | 62.9 | 43.5 |
of h1_p/h1_c are also momentum-updated by h2_p/h2_c, following the updating scheme of f1. Only the encoder network is used for transfer learning on downstream tasks after pre-training.
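The momentum update of f1 (and of its projection heads) described above can be sketched as follows; the cosine schedule for λ from 0.996 to 1.0 follows the description, while the function signature is our own.

```python
import math
import torch

@torch.no_grad()
def momentum_update(momentum_net, online_net, step, total_steps, base_m=0.996):
    """EMA update theta1 = lam * theta1 + (1 - lam) * theta2 for f1 (and its heads),
    with lam following a cosine schedule from base_m to 1.0 as in DINO."""
    lam = 1.0 - (1.0 - base_m) * (math.cos(math.pi * step / total_steps) + 1.0) / 2.0
    for p1, p2 in zip(momentum_net.parameters(), online_net.parameters()):
        p1.mul_(lam).add_((1.0 - lam) * p2.detach())
```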
4 EXPERIMENTS
This section verifies the effect of using proposed spatial and chan-
nel self-relations as representations for self-supervised learning.
We give the pre-training settings in Section 4.1. In Section 4.2,
we compare our method with existing methods on multiple
evaluation protocols, showing stable improvement over multiple
methods. In Section 4.3, we conduct ablations to clarify design
choices.
4.1 Pre-training Settings
Unless otherwise stated, we adopt the ViT-S/16 as the backbone
network. DINO [13] is selected as our major baseline method. The
model is trained by an AdamW [92] optimizer with a learning
rate of 0.001 and a batch size of 512. We pre-train models for
100 epochs on the ImageNet-1K [1] dataset for performance
comparison. For ablation, the ImageNet-S300 dataset [26] is used
to save training costs. Following [13], we apply the multi-crop
training scheme where 2 global views with the resolution of
224×224 and 4 local views with the resolution of 96×96 are
adopted. The global views are cropped with a ratio between
0.35 and 1.0. And the local views are cropped with a ratio
between 0.05 and 0.35. For spatial self-relation, the Hs/Ws of the
operation O in Equ. (2) are set to 13/13 for global views and 6/6
for local views. The number of heads M in spatial self-relation is
set to 6 by default. The tp in Equ. (2) and tc in Equ. (4) are set to
0.5 and 0.1 for the encoder network. For the momentum encoder,
TABLE 3
Comparison with longer pre-training epochs.

(a) Semantic segmentation on the ADE20K dataset.
| Method | Backbone | Epochs | mIoU | mAcc |
| iBOT [18] | ViT-S/16 | 800 | 45.4 | 56.2 |
| +SERE | ViT-S/16 | 100 | 45.8 | 56.8 |
| iBOT [18] | ViT-B/16 | 400 | 50.0 | 60.3 |
| +SERE | ViT-B/16 | 200 | 50.0 | 60.9 |

(b) Classification on the ImageNet-1K dataset.
| Method | Backbone | Epochs | Top-1 | Top-5 |
| iBOT [18] | ViT-S/16 | 300 | 81.1 | - |
| +SERE | ViT-S/16 | 100 | 81.5 | 95.8 |
TABLE 4
Semi-supervised classification on ImageNet-1K. We fine-tune the models with 1%/10% training labels and evaluate them with 100% val labels.

| Method | 1% Top-1 | 1% Top-5 | 10% Top-1 | 10% Top-5 |
| DINO [13] | 52.1 | 77.8 | 70.0 | 89.8 |
| +SERE | 55.9 | 81.0 | 71.5 | 90.6 |
we set the tp and tc to 1.0 and 1.0. The α and β in Equ. (6) are
set to 1.0 and 1.0, respectively.
For iBOT [18], 10 local views are used for a fair comparison.
And we crop images with a ratio between 0.4 and 1.0 for global
views and between 0.05 and 0.4 for local views. A gradient clip
of 0.3 is used for optimization. The α and β in Equ. (6) are set to
0.2 and 0.5. Additionally, we provide experiments with ViT-B/16
as the backbone and show the pre-training and fine-tuning details
in the supplementary.
4.2 Performance and Analysis
We verify the effectiveness of self-relation for self-supervised
learning by transferring the pre-trained models to image-level
classification tasks and dense prediction downstream tasks. Mod-
els are pre-trained with 100 epochs on ImageNet-1k unless
otherwise stated. For easy understanding, models pre-trained with
self-relation representations are marked as SERE.
Fully fine-tuning classification on ImageNet-1K. We compare
the fully fine-tuning classification performance on the ImageNet-
1K dataset. When utilizing ViT-S/16, the pre-trained model is
fine-tuned for 100 epochs with the AdamW [92] optimizer and
a batch size of 512. The initial learning rate is set to 1e-3 with
a layer-wise decay of 0.65. After a warmup of 5 epochs, the
learning rate gradually decays to 1e-6 with the cosine decay
schedule. We report the Top-1 and Top-5 accuracy for evaluation
on the ImageNet-1k val set. As shown in Tab. 1, SERE advances
DINO and iBOT by 1.2% and 0.6% on Top-1 accuracy. Even
compared to iBOT of 300 epochs, SERE can improve 0.4% Top-
1 accuracy with a third of the pre-training time (100 epochs), as
shown in Tab. 3 (b). Moreover, using ViT-B/16, SERE surpasses
iBOT by 0.4% in Top-1 accuracy, as shown in Tab. 1. These
results demonstrate that SERE enhances the category-related
representation ability of ViT.
Semi-supervised classification on ImageNet-1K. We also eval-
uate the classification performance in a semi-supervised fashion.
Following the setting of [18], we fully fine-tune the pre-trained
TABLE 5
Transfer learning on the classification task. We fine-tune the
pre-trained models on multiple datasets and report the Top-1 accuracy.
TABLE 7
Cooperating SERE with multiple self-supervised learning methods.
Models are pre-trained on the ImageNet-S300 dataset with 100
epochs.
| Method | Cifar10 | Cifar100 | INat19 | Flwrs | Cars |
| DINO [13] | 98.8 | 89.6 | 76.9 | 97.8 | 93.5 |
| +SERE | 98.9 | 90.0 | 77.5 | 98.0 | 93.5 |
TABLE 6
Compared with masked image modeling on the ImageNet-1K dataset. † means effective pre-training epochs [18] that account for actually used images during pre-training. ‡ means the models are fine-tuned for 200 epochs on ImageNet-1K, while others are fine-tuned for 100 epochs.

| Method | Architecture | Pre-training Epochs† | Top-1 |
| DINO [13] | ViT-S/16 | 300 | 79.7 |
| MAE‡ [14] | ViT-S/16 | 800 | 80.9 |
| iBOT [18] | ViT-S/16 | 400 | 80.9 |
| DINO [13]+SERE | ViT-S/16 | 300 | 80.9 |
| iBOT [18]+SERE | ViT-S/16 | 400 | 81.5 |
| BEiT [60] | ViT-B/16 | 800 | 83.2 |
| MAE [14] | ViT-B/16 | 800 | 83.3 |
| iBOT [18] | ViT-B/16 | 400 | 83.3 |
| iBOT [18]+SERE | ViT-B/16 | 400 | 83.7 |
We fully fine-tune the pre-trained models with 1% and 10% training labels on the ImageNet-1K
dataset for 1000 epochs. We use the AdamW optimizer to train the
model with a batch size of 1024 and a learning rate of 1e-5. Tab. 4
reports the Top-1 and Top-5 accuracy on the ImageNet-1K val set.
SERE consistently achieves better accuracy with 1% and 10%
labels. With only 1% labels, there is a significant improvement of
3.8% in Top-1 accuracy, showing the advantage of our method in
the semi-supervised fashion.
Semi-supervised semantic segmentation for ImageNet-S. The
ImageNet-S dataset [26] extends ImageNet-1K with pixel-level
semantic segmentation annotations on almost all val images and
parts of training images. Evaluating semantic segmentation on
the ImageNet-S dataset avoids the potential influence of domain
shift between pre-training and fine-tuning datasets. We fine-tune
the models with the semantic segmentation annotations in the
ImageNet-S training set and evaluate the performance on the val
and test sets of ImageNet-S. The ViT-S/16 model is initialized
with self-supervised pre-trained weights (ImageNet-SPT) or fully
fine-tuned weights on classification labels (ImageNet-SFT) of
the ImageNet-1K dataset. A randomly initialized 1 × 1 conv
is attached to the model as the segmentation head. We fine-
tune models for 100 epochs with an AdamW optimizer, using
a batch size of 256 and a weight decay of 0.05. The learning
rate is initially set to 5e-4 with a layer-wise decay of 0.5. After
a warmup of 5 epochs, the learning rate decays to 1e-6 by the
cosine decay schedule. The images are resized and cropped to
224×224 for training and are resized to 256 along the smaller
side for evaluation.
As shown in Tab. 1, compared to DINO and iBOT, SERE
improves the val mIoU by 1.8% and 2.9% when initializing the
model with self-supervised pre-trained weights. When loading
weights of the fully fine-tuned classification model for initializa-
tion, SERE brings a 2.7%/1.0% gain on mIoU over DINO/iBOT.
We conclude that SERE enhances the relation modeling ability,
enabling ViT with much stronger shape-related representations.
Transferring learning on the classification task. To evaluate
VOC SEG
mIoU
mAcc
ImageNet-SPT
300
test
val
MoCov3 [15]
+SERE
DINO [13]
+SERE
iBOT [18]
+SERE
65.7
67.5
68.1
73.5
74.5
75.9
78.7
80.6
81.1
84.7
85.5
86.3
24.0
29.1
28.8
41.2
41.5
45.3
24.8
29.9
29.6
42.0
42.0
45.6
the transferring ability on classification tasks, we fine-tune pre-
trained models on multiple datasets, including CIFAR [93], Flow-
ers [94], Cars [95], and iNaturalist19 [96]. The training details
are summarized in the supplementary. Tab. 5 shows that SERE
performs better on Top-1 accuracy over DINO, demonstrating that
SERE benefits the transferring learning on classification tasks.
Transfer learning on semantic segmentation. We also evaluate
the transfer learning performance on the semantic segmentation
task using PASCAL VOC2012 [25] and ADE20K [3] datasets.
The UperNet [97] with the ViT-S/16 backbone is used as the seg-
mentation model. Following the training setting in [18], we fine-
tune models for 20k and 160k iterations on PASCAL VOC2012
and ADE20K datasets, with a batch size of 16. Tab. 2 reports the
mIoU and mAcc on the validation set. The self-relation improves
the DINO by 2.6% on mIoU and 1.3% on mAcc for the PASCAL
VOC2012 dataset. On the ADE20K dataset, there is also an
improvement of 1.2% on mIoU and 1.2% on mAcc compared to
DINO. Tab. 3 (a) shows that SERE even outperforms iBOT with
much fewer pre-training epochs. Therefore, semantic segmenta-
tion tasks benefit from the stronger self-relation representation
ability of SERE.
Transfer learning on object detection and instance segmen-
tation. We use the Cascade Mask R-CNN [24] with ViT-S/16
to evaluate the transfer learning performance on object detection
and instance segmentation tasks. Following [18], the models are
trained on the COCO train2017 set [2] with the 1× schedule and
a batch size of 16. Tab. 2 reports the bounding box AP (APb)
and the segmentation mask AP (APm) on the COCO val2017 set.
Compared to DINO, SERE improves by 0.6% on APb and 0.5%
on APm, showing that SERE facilitates the model to locate and
segment objects accurately.
Comparison with masked image modeling (MIM). We also
demonstrate that our proposed method, SERE, outperforms and
complements various masked image modeling (MIM) based
methods. As shown in Tab. 6, SERE can significantly enhance
contrastive learning based approach (e.g., DINO). DINO+SERE
achieves comparable performance compared to MIM based meth-
ods (iBOT and MAE), requiring less pre-training/fine-tuning
epochs. Meanwhile, SERE and MIM can be complementary.
For instance, cooperating with SERE further improves iBOT by
0.4% Top-1 accuracy. Moreover, qualitative results in Fig. 4 show
that SERE produces more precise and less noisy attention maps
than iBOT. These results strongly confirm the effectiveness of
SERE compared to MIM-based methods.
TABLE 8
Ablation of using different representations for self-supervised training. The LI, Lp, and Lc denote the loss functions using image-level embedding [13], spatial self-relation, and channel self-relation, respectively. The model without these three losses is randomly initialized when fine-tuned on downstream tasks.

| LI | Lp | Lc | VOC SEG mIoU | VOC SEG mAcc | ImageNet-S300_PT val | ImageNet-S300_PT test |
| ✗ | ✗ | ✗ | 25.6 | 35.7 | 0.2 | 0.2 |
| ✓ | ✗ | ✗ | 68.1 | 81.1 | 28.8 | 29.6 |
| ✗ | ✓ | ✗ | 71.5 | 83.0 | 23.7 | 23.7 |
| ✗ | ✗ | ✓ | 61.4 | 75.6 | 22.5 | 22.3 |
| ✓ | ✓ | ✗ | 70.7 | 82.6 | 33.3 | 34.5 |
| ✓ | ✗ | ✓ | 69.8 | 82.9 | 36.5 | 38.3 |
| ✗ | ✓ | ✓ | 71.5 | 83.3 | 30.6 | 30.3 |
| ✓ | ✓ | ✓ | 73.5 | 84.7 | 41.2 | 42.0 |
TABLE 9
Segmentation F-measure [98] on the PASCAL VOC dataset. The F-measure ignores semantic categories.

| | Lp | Lp + LI | Lp + LI + Lc |
| IoU | 87.1 | 86.7 | 87.7 |
Cooperating with more self-supervised learning methods. The
self-relation representation is orthogonal to the existing feature
representations. Therefore, it can be integrated into various self-
supervised learning methods. To demonstrate this, we combine
the SERE with MoCo v3 [15], DINO, and iBOT, i.e., utilizing
the self-supervision of these methods as the LI in Equ. (6). We
pre-train models on the ImageNet-S300 dataset with 100 epochs
to save computation costs, and the other training settings are kept consistent
with the baseline methods. As shown in Tab. 7, using SERE consis-
tently improves baseline methods, verifying its generalization to
different methods. For example, SERE improves the MoCo v3
by 1.8% on mIoU and 2.0% on mAcc for semantic segmentation
on the Pascal VOC dataset. For the semi-supervised semantic
segmentation on the ImageNet-S300 dataset, SERE gains 5.1%
on mIoU over MoCo v3.
4.3 Ablation Studies
To save computational costs for the ablation study, we pre-train all
models on the ImageNet-S300 [26] dataset with two global views
for 100 epochs. We evaluate models with semantic segmentation
on the PASCAL VOC dataset and semi-supervised semantic
segmentation on the ImageNet-S300 dataset.
Effect of spatial and channel self-relation. We compare the
effectiveness of different representation forms for self-supervised
i.e., our proposed spatial/channel self-relations and
learning,
image-level feature embeddings used by DINO. As shown
in Tab. 8, the spatial self-relation improves the mIoU by 3.4%
and mAcc by 1.9% on the PASCAL VOC dataset compared
to the feature embedding. These results show that training self-
supervised ViT with spatial self-relation further enhances the spa-
tial relation modeling ability of ViT, benefiting dense prediction
tasks. Although inferior to the other two representation forms,
channel self-relation still improves the representation quality of
ViT. The model pre-trained with channel self-relation performs
much better than the randomly initialized model on segmentation
and classification tasks.
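To make the two representation forms concrete, the snippet below sketches how multi-head spatial self-relation and channel self-relation can be obtained from the patch tokens of a ViT. It is a minimal illustration written against our reading of Equ. (2) and Equ. (4): the tensor layout, the similarity normalization, and the placement of the temperatures tp and tc are assumptions, not the reference implementation.

import torch
import torch.nn.functional as F

def spatial_self_relation(tokens, num_heads=6, tp=0.5):
    # tokens: (B, N, C) patch-level features from the ViT backbone.
    B, N, C = tokens.shape
    h = tokens.reshape(B, N, num_heads, C // num_heads).permute(0, 2, 1, 3)   # (B, M, N, C/M)
    sim = torch.matmul(h, h.transpose(-1, -2)) / (C // num_heads) ** 0.5      # token-token similarities per head
    return F.softmax(sim / tp, dim=-1)                                        # (B, M, N, N) spatial self-relation

def channel_self_relation(tokens, tc=0.1):
    # Correlate feature channels with each other over the spatial dimension.
    x = tokens.transpose(1, 2)                                                # (B, C, N)
    sim = torch.matmul(x, x.transpose(-1, -2)) / x.shape[-1] ** 0.5           # channel-channel similarities
    return F.softmax(sim / tc, dim=-1)                                        # (B, C, C) channel self-relation

Smaller tp and tc sharpen the two distributions, which is the behaviour studied later in the temperature ablation.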
TABLE 10
Cooperating self-relations with patch-level embeddings. DINO+
indicates adding the clustering loss using patch-level embeddings to
DINO [13].
Method         | VOC SEG mIoU / mAcc | ImageNet-S300 val / test
DINO           |     68.1 / 81.1     |     28.8 / 29.6
DINO+          |     72.6 / 84.3     |     40.0 / 40.4
DINO + SERE    |     73.5 / 84.7     |     41.2 / 42.0
DINO+ + SERE   |     75.0 / 86.1     |     44.8 / 46.0
TABLE 11
Comparison with Barlow [87] that utilizes the batch-relation based loss.
Method        | VOC SEG mIoU / mAcc | ImageNet-S300 val / test
Barlow [87]   |     69.5 / 82.2     |     33.2 / 32.9
SERE          |     69.8 / 82.9     |     36.5 / 38.3
Cooperating with image-level embeddings. We verify the or-
thogonality between self-relations and image-level embeddings,
as shown in Tab. 8. When combined with the image-level feature
embedding, the spatial and channel self-relations improve the
mIoU by 2.6% and 1.7% on the PASCAL VOC dataset. On the
ImageNet-S300 dataset, there is also an improvement of 4.5%
and 7.7% on mIoU over feature embedding. And cooperating
three representations further boosts the performance on all tasks,
indicating that self-relations are orthogonal and complementary
to image-level feature embeddings for self-supervised learning.
Cooperation between LI and Lc. Tab. 8 shows that Lp alone
performs better than Lp + LI or Lp + Lc on the PASCAL VOC
dataset. However, using Lp + LI + Lc performs better than
Lp. This phenomenon arises because the image-level embedding
(LI ) and the channel self-relation (Lc) each have their own limitations,
while their cooperation mitigates them. The details are as follows:
1) Regarding Lc, modeling channel self-relations requires mean-
ingful and diverse channel features as the foundation. However,
solely relying on Lc cannot adequately optimize the channel
features and may lead to model collapse, where an example
is that each channel encodes the same features. In comparison,
LI facilitates learning diverse and meaningful channel features,
thus addressing the aforementioned limitation of Lc. 2) The
LI harms spatial features. We validate this by examining the F-
measure [98] that ignores the semantic categories. Tab. 9 shows a
decrease in IoU when comparing Lp + LI with Lp alone, indicating that
LI impairs spatial features. We assume LI makes representations
less discriminable in the spatial dimension than Lp. However,
by using Lc simultaneously, we promote learning more accurate
spatial features, mitigating the drawback caused by using LI .
Cooperating with patch-level embeddings. We also verify the
orthogonality of self-relation representation to patch-level em-
beddings in Tab. 10. As a baseline, we add a clustering loss using
patch-level embeddings to DINO, denoted by DINO+. DINO+
consistently advances DINO, showing the effectiveness of patch-
level embedding. Compared to DINO+, the self-relation improves
the mIoU by 0.9% and 1.2% on PASCAL VOC and ImageNet-S
datasets. Cooperating two representations further brings constant
improvements over DINO+, e.g., achieving 2.4% and 4.8% gains
on mIoU for PASCAL VOC and ImageNet-S datasets. These
TABLE 12
The effect of different numbers of heads M for spatial self-relation.
M    | VOC SEG mIoU / mAcc | ImageNet-S300 val / test
1    |     72.4 / 84.0     |     38.7 / 39.3
3    |     72.7 / 84.8     |     38.9 / 39.4
6    |     73.5 / 84.7     |     41.2 / 42.0
12   |     73.4 / 85.1     |     40.8 / 41.7
16   |     72.5 / 84.3     |     39.3 / 39.8
TABLE 13
The effect of different tp and tc in Equ. (2) and Equ. (4).
tp     tc    | VOC SEG mIoU / mAcc | ImageNet-S300 val / test
0.50   0.50  |     72.0 / 84.2     |     36.7 / 36.7
0.50   0.10  |     73.5 / 84.7     |     41.2 / 42.0
0.50   0.01  |     70.4 / 82.7     |     33.6 / 34.6
1.00   0.10  |     70.2 / 83.1     |     36.7 / 38.2
0.50   0.10  |     73.5 / 84.7     |     41.2 / 42.0
0.10   0.10  |     73.7 / 85.0     |     39.9 / 40.8
TABLE 14
The effect of different α and β in Equ. (6) when cooperating the
SERE with iBOT [18]. All models are pre-trained for 100 epochs on
ImageNet-1K.
α      β     | Segmentation VOC mIoU / mAcc | Classification ImageNet-1K Top-1 / Top-5 | ImageNet-S val / test
0.20   0.20  |         80.7 / 89.9          |              81.3 / 95.7                 |     39.9 / 39.3
0.20   0.50  |         81.2 / 90.0          |              81.5 / 95.8                 |     41.0 / 40.3
0.20   1.00  |         80.9 / 89.8          |              81.3 / 95.8                 |     41.7 / 41.8
0.10   0.50  |         80.9 / 89.5          |              81.3 / 95.8                 |     40.7 / 40.5
0.20   0.50  |         81.2 / 90.0          |              81.5 / 95.8                 |     41.0 / 40.3
0.80   0.50  |         80.8 / 89.7          |              81.3 / 95.8                 |     40.3 / 40.1
TABLE 15
The effect of the asymmetric losses in Equ. (3) and Equ. (5).
Method             | VOC SEG mIoU / mAcc | ImageNet-S300 val / test
DINO baseline      |     68.1 / 81.1     |     28.8 / 29.6
+SERE symmetric    |     72.1 / 84.4     |     37.1 / 37.9
+SERE asymmetric   |     73.5 / 84.7     |     41.2 / 42.0
results indicate that the self-relation is complementary to patch-
level embedding for self-supervised ViT.
Comparison between self-relation and batch-relation. A re-
lated work, Barlow [87], models channel relation in the whole
batch, i.e., batch-relation. In comparison, the proposed SERE
computes self-relation within a single image. To verify the advan-
tage of self-relation over batch-relation, we pre-train the ViT-S/16
with the two forms of relation, respectively. As shown in Tab. 11,
compared to the batch-relation, the self-relation improves mIoU
by 0.3% and 3.3% on the PASCAL VOC and ImageNet-S300
datasets. These results show that self-relation is more suitable for
the training of ViT over batch-relation.
Effect of multi-head. We utilize the multi-head spatial self-
relation following the MHSA module in ViT. Tab. 12 shows the
effect of different numbers of heads M in spatial self-relation.
Compared to the single-head version, increasing M to 6 brings
the largest performance gain of 1.1% on mIoU for the PASCAL
VOC dataset. M = 12 achieves limited extra gains, while
M = 16 suffers a rapid performance drop. More heads enable
diverse spatial self-relation, but the number of channels used for
calculating each self-relation is reduced. Too many heads result
in inaccurate estimation of self-relation, hurting the representation
quality. So we set the number of heads to 6 by default to balance the
diversity and quality of the spatial self-relation.
Effect of sharpness. The temperature terms in Equ. (2) and
Equ. (4) control the sharpness of the self-relation distributions.
A small temperature sharpens the distributions, while a large
temperature softens the distributions. In Tab. 13, we verify the
effectiveness of temperatures for both spatial and channel self-
relations. For the channel self-relation, decreasing temperature
from 0.1 to 0.01 results in a rapid performance drop from 73.5%
to 70.4% on mIoU for the PASCAL VOC dataset. And increasing
it from 0.1 to 0.5 also degrades the mIoU from 73.5% to 72.0%.
Therefore, we choose 0.1 as the default temperature for the chan-
nel self-relation. For the spatial self-relation, the temperature 0.5
performs better than 1.0, and changing the temperature from 0.5
to 0.1 has a limited difference. We set the default temperature of
spatial self-relation to 0.5 because a temperature of 0.5 achieves
slightly better performance on the large-scale ImageNet-S dataset.
Effect of loss weights. The α and β in Equ. (6) determine
the relative importance of spatial and channel self-relations,
respectively. Tab. 14 shows that the SERE is robust to different
α and β. Among different weights, the combination of α = 0.2
and β = 0.5 achieves the best performance on the classification
task and competitive performances on the segmentation task.
Therefore, we use this combination as the default setting.
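For illustration, the weighting can be written as a single training objective that adds the two relation terms to an existing image-level loss. The KL-style distillation between a (detached) teacher view and a student view is an assumed loss form chosen for the sketch; only the structure of Equ. (6) with weights α and β is taken from the paper.

import torch

def relation_loss(teacher_rel, student_rel, eps=1e-8):
    # Cross-entropy between teacher and student relation distributions (assumed form).
    return -(teacher_rel.detach() * torch.log(student_rel + eps)).sum(dim=-1).mean()

def sere_objective(loss_image_level, rel_teacher, rel_student, alpha=0.2, beta=0.5):
    # rel_*: dicts holding the "spatial" and "channel" self-relations of the two views.
    loss_p = relation_loss(rel_teacher["spatial"], rel_student["spatial"])
    loss_c = relation_loss(rel_teacher["channel"], rel_student["channel"])
    return loss_image_level + alpha * loss_p + beta * loss_c   # alpha=0.2, beta=0.5 are the defaults above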
Effect of asymmetric loss. The asymmetric structure has been
proven effective for non-contrastive loss [5], [6] when using
image-level embedding as the representation. To verify if self-
relation representations also benefit from the asymmetric struc-
ture, we compare the asymmetric and symmetry structures for
the self-relation based loss in Tab. 15. Self-relation improves the
DINO baseline with both asymmetric and symmetric structures.
The symmetric structure outperforms DINO on the PASCAL
VOC and ImageNet-S300 datasets by 4.0% and 8.3% on mIoU.
The asymmetric structure further advances symmetric structure
by 1.4% and 4.1% on mIoU for the PASCAL VOC and ImageNet-
S300 datasets. Therefore, though the asymmetric structure is not
indispensable for self-relation, it still benefits the pre-training
with self-relation.
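The asymmetric structure can be pictured as a predictor applied only on the student branch together with a stop-gradient on the teacher branch, in the spirit of [5], [6]. Whether Equ. (3) and Equ. (5) use exactly this predictor design is not spelled out here, so the module below is a generic, assumed illustration of the idea rather than SERE's implementation.

import torch.nn as nn

class AsymmetricHead(nn.Module):
    # Student-only predictor; the teacher output is used with a stop-gradient.
    def __init__(self, dim, hidden=2048):
        super().__init__()
        self.predictor = nn.Sequential(
            nn.Linear(dim, hidden), nn.BatchNorm1d(hidden), nn.ReLU(inplace=True),
            nn.Linear(hidden, dim))

    def forward(self, student_feat, teacher_feat):
        # student_feat, teacher_feat: (B, dim) pooled features of the two views.
        return self.predictor(student_feat), teacher_feat.detach()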
Adaptability to convolutional neural networks. Using self-
relation for self-supervised learning is inspired by the properties
of ViT. Still, we wonder if the self-relation representation could
benefit self-supervised learning on convolutional neural networks
(CNN). To verify this, we pre-train the ResNet-50 [9] with DINO
and SERE, respectively. The training details are shown in the
supplementary. As shown in Tab. 16, SERE improves DINO by
0.7% and 0.8% on mIoU for the semantic segmentation task
on the PASCAL VOC and ImageNet-S300 datasets compared
to DINO. Though designed for ViT, the self-relation still im-
proves the representation quality of the CNN. Meanwhile, the
improvement on CNN is relatively small compared to that on
ViT, showing that the self-relation is more suitable for ViT.
Fig. 4. Visualization for attention maps from the last block of the pre-trained ViT-S/16. We extract the attention maps of the CLS token on other
patch-level tokens. Different colors indicate the regions focused by different heads.
TABLE 16
The effect of self-relation representation on CNN. DINO and SERE are
trained with the ResNet-50 network.
Method              | VOC SEG mIoU / mAcc | ImageNet-S300 val / test
DINO (ResNet-50)    |     61.6 / 74.6     |     20.2 / 19.9
+SERE (ResNet-50)   |     62.5 / 75.0     |     20.9 / 20.7
4.4 Analysis and Visualization
Invariance on self-relations. The importance of learning repre-
sentations invariant to image augmentations, e.g., scaling, shift-
ing, and color jitter, has been validated in self-supervised learn-
ing [99], [100], [101], [102], [103], [104]. However, existing
methods focus on the invariance of feature embeddings but do
not consider the invariance of spatial/channel relations, which
are also important properties of ViT. In contrast, our proposed
SERE can enhance the invariance of spatial/channel relations. To
verify this, we measure the averaged differences between self-
relations of different views. As shown in Fig. 6, we observe that
SERE significantly narrows the self-relation differences in both
the spatial and channel dimensions. The visualizations in Fig. 5
also show that the SERE pre-trained model produces smaller
spatial self-relation differences on the overlapping regions of two
views. A smaller difference means a higher invariance. Thus,
these results indicate that SERE makes the ViT capture self-
relations with stronger invariance to image augmentations.
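The measurement itself is simple to sketch: compute the self-relations of the two augmented views, restrict them to the overlapping region, and average the absolute differences. The snippet assumes the relations have already been aligned to the shared crop, since the exact alignment procedure is only given in the supplementary.

import torch

def self_relation_difference(rel_view1, rel_view2):
    # rel_view1/rel_view2: aligned self-relation tensors of the two views.
    return (rel_view1 - rel_view2).abs().mean().item()   # smaller value = stronger invariance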
Visualization of attention maps. In Fig. 4, we visualize the
attention maps from the last block of ViT. These visualizations
demonstrate that SERE produces more precise and less noisy
Fig. 5. The differences between spatial self-relations of two views. (a)
Two views from each image. (b) The spatial self-relation generated
by DINO. (c) The spatial self-relation generated by SERE. View1 and
view2 mean the self-relations of two views generated from an image.
The ∆ is the difference between self-relations in the overlapping region,
which is indicated by red boxes. We give the details of the visualization
method in the supplementary.
attention maps than various methods, including MIM-based meth-
ods, i.e., MAE [14] and iBOT [18]. MAE produces noisy attention
maps that highlight almost all tokens in an image. In comparison,
the attention maps of SERE mainly focus on semantic objects.
For instance, the third column of Fig. 4 shows that SERE can
locate the frog, but MAE primarily focuses on the background.
Moreover, compared to iBOT and DINO, SERE generates atten-
tion maps that locate objects more accurately. For instance, in the
seventh and eighth columns of Fig. 4, SERE discovers the persons
missed by iBOT.
Fig. 6. The average differences of spatial (left) and channel (right) self-
relations between two views on the val set of ImageNet-S. We show the
calculation details in the supplementary.
Comparison between spatial self-relation and MIM. Both
spatial self-relation and MIM act on the spatial dimension, but
their effects significantly differ. MIM enhances the token-level
representations, while spatial self-relation focuses on improv-
ing the ability to model inter-token relations. We support this
argument with the following points: 1) As depicted in Fig. 4,
SERE generates more precise and less noisy attention maps
than MAE [14] and iBOT [18]. The attention maps of ViT
can reflect the ability to model inter-token relations because
attentions are calculated as token-level relations between query
and key. Thus this observation indicates that SERE provides
models with a stronger ability to capture inter-token relations.
2) In Fig. 6, we show that SERE enhances the invariance of spatial
self-relation to different image augmentations. 3) As shown in
Tab. 6, SERE achieves consistent improvements compared to
different MIM-based methods, strongly confirming the effective-
ness of SERE compared to MIM. For example, cooperating with
SERE improves iBOT by 0.4% Top-1 accuracy, as shown in
Tab. 1.
5 CONCLUSIONS
In this paper, we propose a feature self-relation based self-
supervised learning scheme to enhance the relation modeling
ability of self-supervised ViT. Specifically, instead of directly
using feature embedding as the representation, we propose to use
spatial and channel self-relations of features as representations
for self-supervised learning. Self-relation is orthogonal to feature
embedding and further boosts existing self-supervised methods.
We show that feature self-relation improves the self-supervised
ViT at a fine-grained level, benefiting multiple downstream tasks,
including image classification, semantic segmentation, object
detection, and instance segmentation.
Acknowledgements. This work is funded by NSFC (NO.
62225604, 62176130), and the Fundamental Research Funds
for the Central Universities (Nankai University, 070-63233089).
Computation is supported by the Supercomputing Center of
Nankai University.
REFERENCES
[1]
O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma,
Z. Huang, A. Karpathy, A. Khosla, M. Bernstein et al., “Imagenet
large scale visual recognition challenge,” Int. J. Comput. Vis., vol. 115,
no. 3, pp. 211–252, 2015.
[2] T.-Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan,
P. Doll´ar, and C. L. Zitnick, “Microsoft coco: Common objects in
context,” in Eur. Conf. Comput. Vis., 2014, pp. 740–755.
[3] B. Zhou, H. Zhao, X. Puig, S. Fidler, A. Barriuso, and A. Torralba,
“Scene parsing through ade20k dataset,” in IEEE Conf. Comput. Vis.
Pattern Recog., July 2017.
[4] M. Caron, I. Misra, J. Mairal, P. Goyal, P. Bojanowski, and A. Joulin,
“Unsupervised learning of visual features by contrasting cluster assign-
ments,” in Adv. Neural Inform. Process. Syst., 2020.
[5] X. Chen and K. He, “Exploring simple siamese representation learn-
ing,” in IEEE Conf. Comput. Vis. Pattern Recog., June 2021.
[6] J.-B. Grill, F. Strub, F. Altch´e, C. Tallec, P. H. Richemond,
E. Buchatskaya, C. Doersch, B. ´Avila Pires, Z. Guo, M. G. Azar,
B. Piot, K. Kavukcuoglu, R. Munos, and M. Valko, “Bootstrap your
own latent - a new approach to self-supervised learning,” in Adv. Neural
Inform. Process. Syst., 2020.
[7] K. He, H. Fan, Y. Wu, S. Xie, and R. Girshick, “Momentum contrast for
unsupervised visual representation learning,” in IEEE Conf. Comput.
Vis. Pattern Recog., June 2020.
[8] H. Sun and M. Li, “Enhancing unsupervised domain adaptation
by exploiting the conceptual consistency of multiple self-supervised
tasks,” SCIENCE CHINA Information Sciences, vol. 66, no. 4, pp.
142 101–, 2023.
[9] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for
image recognition,” in IEEE Conf. Comput. Vis. Pattern Recog., 2016,
pp. 770–778.
[10] S. Gao, Z.-Y. Li, Q. Han, M.-M. Cheng, and L. Wang, “Rf-next:
Efficient receptive field search for convolutional neural networks,”
IEEE Trans. Pattern Anal. Mach. Intell., 2022.
[11] A. Dosovitskiy, L. Beyer, A. Kolesnikov, D. Weissenborn, X. Zhai,
T. Unterthiner, M. Dehghani, M. Minderer, G. Heigold, S. Gelly,
J. Uszkoreit, and N. Houlsby, “An image is worth 16x16 words: Trans-
formers for image recognition at scale,” Int. Conf. Learn. Represent.,
2021.
[12] Z. Liu, Y. Lin, Y. Cao, H. Hu, Y. Wei, Z. Zhang, S. Lin, and
B. Guo, “Swin transformer: Hierarchical vision transformer using
shifted windows,” in Int. Conf. Comput. Vis., 2021.
[13] M. Caron, H. Touvron, I. Misra, H. J´egou, J. Mairal, P. Bojanowski,
and A. Joulin, “Emerging properties in self-supervised vision trans-
formers,” in Int. Conf. Comput. Vis., 2021.
[14] K. He, X. Chen, S. Xie, Y. Li, P. Doll´ar, and R. Girshick, “Masked
autoencoders are scalable vision learners,” in IEEE Conf. Comput. Vis.
Pattern Recog., June 2022, pp. 16 000–16 009.
[15] X. Chen, S. Xie, and K. He, “An empirical study of training self-
supervised vision transformers,” in Int. Conf. Comput. Vis., October
2021.
[16] Z. Xie, Y. Lin, Z. Yao, Z. Zhang, Q. Dai, Y. Cao, and H. Hu,
“Self-supervised learning with swin transformers,” arXiv preprint
arXiv:2105.04553, 2021.
[17] H. Lu, Y. Huo, M. Ding, N. Fei, and Z. Lu, “Cross-modal contrastive
learning for generalizable and efficient image-text retrieval,” Machine
Intelligence Research, pp. 1–14, 2023.
[18] J. Zhou, C. Wei, H. Wang, W. Shen, C. Xie, A. Yuille, and T. Kong,
“ibot: Image bert pre-training with online tokenizer,” Int. Conf. Learn.
Represent., 2022.
[19] T. Chen, S. Kornblith, M. Norouzi, and G. Hinton, “A simple frame-
work for contrastive learning of visual representations,” in Interna-
tional Conference on Machine Learning (ICML), 2020.
[20] X. Wang, R. Zhang, C. Shen, T. Kong, and L. Li, “Dense contrastive
learning for self-supervised visual pre-training,” in IEEE Conf. Com-
put. Vis. Pattern Recog., 2021.
[21] O. J. H´enaff, S. Koppula, J.-B. Alayrac, A. van den Oord, O. Vinyals,
and J. a. Carreira, “Efficient visual pretraining with contrastive detec-
tion,” in Int. Conf. Comput. Vis., October 2021, pp. 10 086–10 096.
[22] M. Raghu, T. Unterthiner, S. Kornblith, C. Zhang, and A. Dosovitskiy,
“Do vision transformers see like convolutional neural networks?”
in Adv. Neural Inform. Process. Syst., A. Beygelzimer, Y. Dauphin,
P. Liang, and J. W. Vaughan, Eds., 2021.
[23] K. Kim, B. Wu, X. Dai, P. Zhang, Z. Yan, P. Vajda, and S. J. Kim,
“Rethinking the self-attention in vision transformers,” in IEEE Conf.
Comput. Vis. Pattern Recog. Worksh., June 2021, pp. 3071–3075.
[24] Z. Cai and N. Vasconcelos, “Cascade r-cnn: Delving into high quality
object detection,” in IEEE Conf. Comput. Vis. Pattern Recog., June
2018.
[25] M. Everingham, L. Gool, C. K. I. Williams, J. Winn, and A. Zisserman,
“The pascal visual object classes (voc) challenge,” Int. J. Comput. Vis.,
vol. 88, pp. 303–338, 2009.
[26] S. Gao, Z.-Y. Li, M.-H. Yang, M.-M. Cheng, J. Han, and P. Torr,
“Large-scale unsupervised semantic segmentation,” arXiv preprint
arXiv:2106.03149, 2021.
[27] R. Zhang, P. Isola, and A. A. Efros, “Colorful image colorization,” in
Eur. Conf. Comput. Vis. Springer, 2016, pp. 649–666.
[28] G. Larsson, M. Maire, and G. Shakhnarovich, “Colorization as a proxy
task for visual understanding,” in IEEE Conf. Comput. Vis. Pattern
Recog., July 2017.
[29] M. Noroozi and P. Favaro, “Unsupervised learning of visual represen-
tions by solving jigsaw puzzles,” in Eur. Conf. Comput. Vis., 2016.
[30] S. Gidaris, P. Singh, and N. Komodakis, “Unsupervised representation
learning by predicting image rotations,” in Int. Conf. Learn. Represent.,
2018.
[31] C. Doersch, A. Gupta, and A. A. Efros, “Unsupervised visual repre-
sentation learning by context prediction,” in Int. Conf. Comput. Vis.,
December 2015.
[32] P. Vincent, H. Larochelle, Y. Bengio, and P.-A. Manzagol, “Extract-
ing and composing robust features with denoising autoencoders,” in
International Conference on Machine Learning (ICML), 2008.
[33] D. Pathak, P. Krahenbuhl, J. Donahue, T. Darrell, and A. A. Efros,
“Context encoders: Feature learning by inpainting,” in IEEE Conf.
Comput. Vis. Pattern Recog., June 2016.
[34] M. Noroozi, H. Pirsiavash, and P. Favaro, “Representation learning by
learning to count,” in Int. Conf. Comput. Vis., Oct 2017.
[35] Z. Wu, Y. Xiong, S. X. Yu, and D. Lin, “Unsupervised feature learning
via non-parametric instance discrimination,” in IEEE Conf. Comput.
Vis. Pattern Recog., June 2018.
[36] Y. Zhao, G. Wang, C. Luo, W. Zeng, and Z.-J. Zha, “Self-supervised
visual representations learning by contrastive mask prediction,” in Int.
Conf. Comput. Vis., October 2021, pp. 10 160–10 169.
[37] D. Dwibedi, Y. Aytar, J. Tompson, P. Sermanet, and A. Zisserman,
“With a little help from my friends: Nearest-neighbor contrastive
learning of visual representations,” in Int. Conf. Comput. Vis., October
2021, pp. 9588–9597.
[38] C.-H. Yeh, C.-Y. Hong, Y.-C. Hsu, T.-L. Liu, Y. Chen, and Y. LeCun,
“Decoupled contrastive learning,” arXiv preprint arXiv:2110.06848,
2021.
[39] W.-C. Wang, E. Ahn, D. Feng, and J. Kim, “A review of predictive
and contrastive self-supervised learning for medical images,” Machine
Intelligence Research, pp. 483–513, 2023.
[40] L. Wang, H. Xu, and W. Kang, “Mvcontrast: Unsupervised pretraining
for multi-view 3d object recognition,” Machine Intelligence Research,
pp. 1–12, 2023.
[41] M. Caron, P. Bojanowski, A. Joulin, and M. Douze, “Deep clustering
for unsupervised learning of visual features,” in Eur. Conf. Comput.
Vis., 2018.
[42] X. Zhan, J. Xie, Z. Liu, Y.-S. Ong, and C. C. Loy, “Online deep
clustering for unsupervised representation learning,” in IEEE Conf.
Comput. Vis. Pattern Recog., June 2020.
[43] A. YM., R. C., and V. A., “Self-labelling via simultaneous clustering
and representation learning,” in Int. Conf. Learn. Represent., 2020.
[44] S. A. Koohpayegani, A. Tejankar, and H. Pirsiavash, “Mean shift for
self-supervised learning,” in Int. Conf. Comput. Vis., October 2021, pp.
10 326–10 335.
[45] A. Ermolov, A. Siarohin, E. Sangineto, and N. Sebe, “Whitening for
self-supervised representation learning,” in International Conference
on Machine Learning (ICML), 2021, pp. 3015–3024.
[46] Y. Tian, X. Chen, and S. Ganguli, “Understanding self-supervised
learning dynamics without contrastive pairs,” in International Con-
ference on Machine Learning (ICML), 2020.
[47] C. Ge, Y. Liang, Y. Song, J. Jiao, J. Wang, and P. Luo, “Revitalizing
cnn attentions via transformers in self-supervised visual representation
learning,” in Adv. Neural Inform. Process. Syst., 2021.
[48] Q. Hu, X. Wang, W. Hu, and G.-J. Qi, “Adco: Adversarial contrast
for efficient learning of unsupervised representations from self-trained
negative adversaries,” in IEEE Conf. Comput. Vis. Pattern Recog., June
2021, pp. 1074–1083.
[49] Z. Xie, Y. Lin, Z. Zhang, Y. Cao, S. Lin, and H. Hu, “Propagate
yourself: Exploring pixel-level consistency for unsupervised visual
representation learning,” in IEEE Conf. Comput. Vis. Pattern Recog.,
June 2021, pp. 16 684–16 693.
[50] E. Xie, J. Ding, W. Wang, X. Zhan, H. Xu, P. Sun, Z. Li, and P. Luo,
“Detco: Unsupervised contrastive learning for object detection,” in Int.
Conf. Comput. Vis., October 2021, pp. 8392–8401.
[51] Z. Dai, B. Cai, Y. Lin, and J. Chen, “Up-detr: Unsupervised pre-
training for object detection with transformers,” in IEEE Conf. Comput.
Vis. Pattern Recog., June 2021, pp. 1601–1610.
[52] B. Roh, W. Shin, I. Kim, and S. Kim, “Spatially consistent representa-
tion learning,” in IEEE Conf. Comput. Vis. Pattern Recog., 2021.
[53] W. Wang, E. Xie, X. Li, D.-P. Fan, K. Song, D. Liang, T. Lu, P. Luo,
and L. Shao, “Pyramid vision transformer: A versatile backbone for
dense prediction without convolutions,” in Int. Conf. Comput. Vis.,
2021.
[54] B. Cheng, A. G. Schwing, and A. Kirillov, “Per-pixel classification is
not all you need for semantic segmentation,” in Adv. Neural Inform.
Process. Syst., 2021.
[55] Y.-H. Wu, Y. Liu, X. Zhan, and M.-M. Cheng, “P2T: Pyramid pooling
transformer for scene understanding,” IEEE Trans. Pattern Anal. Mach.
Intell., 2022.
[56] H. Touvron, M. Cord, M. Douze, F. Massa, A. Sablayrolles, and
H. J´egou, “Training data-efficient image transformers & distillation
through attention,” in International Conference on Machine Learning
(ICML). PMLR, 2021, pp. 10 347–10 357.
[57] C. Li, J. Yang, P. Zhang, M. Gao, B. Xiao, X. Dai, L. Yuan, and
J. Gao, “Efficient self-supervised vision transformers for representation
learning,” in Int. Conf. Learn. Represent., 2022.
[58] P. Zhou, Y. Zhou, C. Si, W. Yu, T. K. Ng, and S. Yan, “Mugs: A
multi-granular self-supervised learning framework,” in arXiv preprint
arXiv:2203.14415, 2022.
[59] Z. Li, Z. Chen, F. Yang, W. Li, Y. Zhu, C. Zhao, R. Deng, L. Wu,
R. Zhao, M. Tang, and J. Wang, “MST: Masked self-supervised
transformer for visual representation,” in Adv. Neural Inform. Process.
Syst., 2021.
[60] H. Bao, L. Dong, S. Piao, and F. Wei, “BEit: BERT pre-training of
image transformers,” in Int. Conf. Learn. Represent., 2022.
[61] S. Gao, P. Zhou, M.-M. Cheng, and S. Yan, “Towards sustainable self-
supervised learning,” arXiv preprint arXiv:2210.11016, 2022.
[62] X. Chen, M. Ding, X. Wang, Y. Xin, S. Mo, Y. Wang, S. Han,
P. Luo, G. Zeng, and J. Wang, “Context autoencoder for self-supervised
representation learning,” arXiv preprint arXiv:2202.03026, 2022.
[63] Z. Xie, Z. Zhang, Y. Cao, Y. Lin, J. Bao, Z. Yao, Q. Dai, and H. Hu,
“Simmim: A simple framework for masked image modeling,” in IEEE
Conf. Comput. Vis. Pattern Recog., June 2022.
[64] L. Wang, F. Liang, Y. Li, H. Zhang, W. Ouyang, and J. Shao, “Repre:
Improving self-supervised vision transformer with reconstructive pre-
training,” arXiv preprint arXiv:2201.06857, 2022.
[65] S. Atito, M. Awais, and J. Kittler, “Sit: Self-supervised vision trans-
former,” arXiv preprint arXiv:2104.03602, 2021.
[66] C. Wei, H. Fan, S. Xie, C.-Y. Wu, A. Yuille, and C. Feichtenhofer,
“Masked feature prediction for self-supervised visual pre-training,”
arXiv preprint arXiv:2112.09133, 2021.
[67] N. Carion, F. Massa, G. Synnaeve, N. Usunier, A. Kirillov, and
S. Zagoruyko, “End-to-end object detection with transformers,” in Eur.
Conf. Comput. Vis. Springer, 2020, pp. 213–229.
[68] F. Tung and G. Mori, “Similarity-preserving knowledge distillation,”
in Int. Conf. Comput. Vis., October 2019.
[69] W. Park, D. Kim, Y. Lu, and M. Cho, “Relational knowledge distilla-
tion,” in IEEE Conf. Comput. Vis. Pattern Recog., June 2019.
[70] N. Passalis and A. Tefas, “Learning deep representations with proba-
bilistic knowledge transfer,” in Eur. Conf. Comput. Vis., 2018.
[71] B. Peng, X. Jin, J. Liu, D. Li, Y. Wu, Y. Liu, S. Zhou, and Z. Zhang,
“Correlation congruence for knowledge distillation,” in Int. Conf.
Comput. Vis., October 2019.
[72] X. Li, J. Wu, H. Fang, Y. Liao, F. Wang, and C. Qian, “Local correlation
consistency for knowledge distillation,” in Eur. Conf. Comput. Vis.,
2020.
[73] S. Zagoruyko and N. Komodakis, “Paying more attention to attention:
Improving the performance of convolutional neural networks via atten-
tion transfer,” in Int. Conf. Learn. Represent., 2017.
[74] Y. Chen, N. Wang, and Z. Zhang, “Darkrank: Accelerating deep metric
learning via cross sample similarities transfer,” AAAI Conference on
Artificial Intelligence (AAAI), vol. 32, no. 1, Apr. 2018.
[75] Y. Liu, K. Chen, C. Liu, Z. Qin, Z. Luo, and J. Wang, “Structured
knowledge distillation for semantic segmentation,” in IEEE Conf.
Comput. Vis. Pattern Recog., June 2019.
[76] T. He, C. Shen, Z. Tian, D. Gong, C. Sun, and Y. Yan, “Knowledge
adaptation for efficient semantic segmentation,” in IEEE Conf. Comput.
Vis. Pattern Recog., June 2019.
[77] C. Yang, H. Zhou, Z. An, X. Jiang, Y. Xu, and Q. Zhang, “Cross-image
relational knowledge distillation for semantic segmentation,” in IEEE
Conf. Comput. Vis. Pattern Recog., June 2022, pp. 12 319–12 328.
[78] M. Hamilton, Z. Zhang, B. Hariharan, N. Snavely, and W. T. Freeman,
“Unsupervised semantic segmentation by distilling feature correspon-
dences,” in Int. Conf. Learn. Represent., 2022.
[79] O. Sim´eoni, A. Iscen, G. Tolias, Y. Avrithis, and O. Chum, “Unsuper-
vised object discovery for instance recognition,” in Winter Conference
on Applications of Computer Vision, 2018.
Zhong-Yu Li is a Ph.D. student in the College
of Computer Science, Nankai University. He is
supervised by Prof. Ming-Ming Cheng. His re-
search interests include deep learning, machine
learning and computer vision.
Shanghua Gao is a Ph.D. candidate in the Me-
dia Computing Lab at Nankai University. He
is supervised by Prof. Ming-Ming Cheng. His
research interests include computer vision and
representation learning.
Ming-Ming Cheng received his PhD degree
from Tsinghua University in 2012, and then
worked with Prof. Philip Torr in Oxford for 2
years. Since 2016, he is a full professor at
Nankai University, leading the Media Comput-
ing Lab. His research interests include com-
puter vision and computer graphics. He re-
ceived awards,
including ACM China Rising
Star Award, IBM Global SUR Award, etc. He is a
senior member of the IEEE and on the editorial
boards of IEEE TPAMI and IEEE TIP.
[80] O. Sim´eoni, G. Puy, H. V. Vo, S. Roburin, S. Gidaris, A. Bursuc,
P. P´erez, R. Marlet, and J. Ponce, “Localizing objects with self-
supervised transformers and no labels,” in Brit. Mach. Vis. Conf.,
November 2021.
[81] Y. Wang, X. Shen, S. X. Hu, Y. Yuan, J. L. Crowley, and D. Vaufreydaz,
“Self-supervised transformers for unsupervised object discovery using
normalized cut,” in IEEE Conf. Comput. Vis. Pattern Recog., June
2022.
[82] M. Ki, Y. Uh, J. Choe, and H. Byun, “Contrastive attention maps for
self-supervised co-localization,” in Int. Conf. Comput. Vis., October
2021, pp. 2803–2812.
[83] I. Kakogeorgiou, S. Gidaris, B. Psomas, Y. Avrithis, A. Bursuc,
K. Karantzalos, and N. Komodakis, “What to hide from your stu-
dents: Attention-guided masked image modeling,” arXiv preprint
arXiv:2203.12719, 2022.
[84] Y. Kalantidis, C. Mellina, and S. Osindero, “Cross-dimensional weight-
ing for aggregated deep convolutional features,” in Eur. Conf. Comput.
Vis. Worksh., 2016, pp. 685–701.
[85] L. Gatys, A. S. Ecker, and M. Bethge, “Texture synthesis using
convolutional neural networks,” in Adv. Neural Inform. Process. Syst.,
vol. 28, 2015.
[86] T.-Y. Lin, A. RoyChowdhury, and S. Maji, “Bilinear cnn models for
fine-grained visual recognition,” in Int. Conf. Comput. Vis., December
2015.
[87] J. Zbontar, L. Jing, I. Misra, Y. LeCun, and S. Deny, “Barlow twins:
Self-supervised learning via redundancy reduction,” arXiv preprint
arXiv:2103.03230, 2021.
[88] S. Ioffe and C. Szegedy, “Batch normalization: Accelerating deep
network training by reducing internal covariate shift,” in International
Conference on Machine Learning (ICML), 2015, pp. 448–456.
[89] A. F. Agarap, “Deep learning using rectified linear units (relu),” arXiv
preprint arXiv:1803.08375, 2018.
[90] L. Liu, Q. Huang, S. Lin, H. Xie, B. Wang, X. Chang, and X. Liang,
“Exploring inter-channel correlation for diversity-preserved knowledge
distillation,” in Int. Conf. Comput. Vis., October 2021, pp. 8271–8280.
[91] A. Krizhevsky, I. Sutskever, and G. E. Hinton, “Imagenet classification
with deep convolutional neural networks,” in Adv. Neural Inform.
Process. Syst., 2012.
[92] I. Loshchilov and F. Hutter, “Decoupled weight decay regularization,”
in Int. Conf. Learn. Represent., 2019.
[93] A. Krizhevsky and G. Hinton, “Learning multiple layers of features
from tiny images,” University of Toronto, Tech. Rep. 0, 2009.
[94] M.-E. Nilsback and A. Zisserman, “Automated flower classification
over a large number of classes,” Indian Conference on Computer
Vision, Graphics and Image Processing (ICVGIP), pp. 722–729, 2008.
[95] J. Krause, M. Stark, J. Deng, and L. Fei-Fei, “3d object representations
for fine-grained categorization,” in Int. Conf. Comput. Vis. Worksh.,
2013.
[96] G. Van Horn, O. Mac Aodha, Y. Song, Y. Cui, C. Sun, A. Shepard,
H. Adam, P. Perona, and S. Belongie, “The inaturalist species clas-
sification and detection dataset,” in IEEE Conf. Comput. Vis. Pattern
Recog., June 2018.
[97] T. Xiao, Y. Liu, B. Zhou, Y. Jiang, and J. Sun, “Unified perceptual
parsing for scene understanding,” in Eur. Conf. Comput. Vis., Septem-
ber 2018.
[98] M.-M. Cheng, N. J. Mitra, X. Huang, P. H. S. Torr, and S.-M. Hu,
“Global contrast based salient region detection,” IEEE Trans. Pattern
Anal. Mach. Intell., vol. 37, no. 3, pp. 569–582, 2015.
[99] S. Purushwalkam Shiva Prakash and A. Gupta, “Demystifying
contrastive self-supervised learning: Invariances, augmentations and
dataset biases,” Adv. Neural Inform. Process. Syst., vol. 33, 2020.
[100] M. Patrick, Y. M. Asano, P. Kuznetsova, R. Fong, J. a. F. Henriques,
G. Zweig, and A. Vedaldi, “On compositions of transformations in
contrastive self-supervised learning,” in Int. Conf. Comput. Vis., 2021.
[101] I. Misra and L. van der Maaten, “Self-supervised learning of pretext-
invariant representations,” in IEEE Conf. Comput. Vis. Pattern Recog.,
June 2020.
[102] A. Bardes, J. Ponce, and Y. LeCun, “VICReg: Variance-invariance-
covariance regularization for self-supervised learning,” in Int. Conf.
Learn. Represent., 2022.
[103] L. Ericsson, H. Gouk, and T. M. Hospedales, “Why do self-supervised
models transfer? investigating the impact of invariance on downstream
tasks,” 2022.
[104] X. Wang, K. He, and A. Gupta, “Transitive invariance for self-
supervised visual representation learning,” in Int. Conf. Comput. Vis.,
2017.
|
synthetic_cpt | 1 | Dex-Net_20_Deep_Learning_to_Plan_Robust_Grasps_with_Synthetic_Point_Clouds_and_Analytic_Grasp_Metrics.pdf |
arXiv:1005.0964v1 [astro-ph.SR] 6 May 2010
Chemical composition of the old globular clusters NGC 1786, NGC 2210 and
NGC 2257 in the Large Magellanic Cloud. 1
Dipartimento di Astronomia, Universit`a degli Studi di Bologna, Via Ranzani, 1 - 40127 Bologna,
ITALY
Alessio Mucciarelli
[email protected]
Livia Origlia
INAF - Osservatorio Astronomico di Bologna, Via Ranzani, 1 - 40127 Bologna, ITALY
[email protected]
Francesco R. Ferraro
Dipartimento di Astronomia, Universit`a degli Studi di Bologna, Via Ranzani, 1 - 40127 Bologna,
ITALY
[email protected]
ABSTRACT
This paper presents the chemical abundance analysis of a sample of 18 giant stars
in 3 old globular clusters in the Large Magellanic Cloud, namely NGC 1786, NGC 2210
and NGC 2257. The derived iron content is [Fe/H]= –1.75 ± 0.01 dex (σ= 0.02 dex),
–1.65 ± 0.02 dex (σ= 0.04 dex) and –1.95 ± 0.02 dex (σ= 0.04 dex) for NGC 1786,
NGC 2210 and NGC 2257, respectively. All the clusters exhibit similar abundance
ratios, with enhanced values (∼ +0.30 dex) of [α/Fe], consistent with the Galactic Halo
stars, thus indicating that these clusters have formed from a gas enriched by Type II
SNe. We also found evidence that the r-process is the main channel of production of the
measured neutron capture elements (Y, Ba, La, Nd, Ce and Eu). In particular the quite
large enhancement of [Eu/Fe] (∼ +0.70 dex) found in these old clusters clearly indicates
a relevant efficiency of the r-process mechanism in the LMC environment.
1. Introduction
In the last decade, the advent of the high resolution spectrographs mounted on the 8-10 m tele-
scopes has allowed to extend the study of the chemical composition of individual Red Giant Branch
1Based on observations obtained at Paranal ESO Observatory under proposal 080.D-0368(A)
(RGB) stars outside our Galaxy up to dwarf and irregular galaxies of the Local Group. Chemical
analysis of RGB stars are now available for several isolated dwarf spheroidal (dSph) galaxies as
Sculptor, Fornax, Carina, Leo I, Draco, Sextans and Ursa Minor (Shetrone, Cot´e & Sargent 2001;
Shetrone et al. 2003; Letarte et al. 2006) and the Sagittarius (Sgr) remnant (Bonifacio et al. 2000;
Monaco et al. 2005, 2007; Sbordone et al. 2007). As general clue, these studies reveal that the
chemical abundance patterns in the extragalactic systems do not resemble those observed in the
Galaxy, with relevant differences in the [α/Fe] 2, [Ba/Fe] and [Ba/Y] ratios, thus suggesting dif-
ferent star formation history and chemical evolution (see e.g. Venn et al. 2004; Geisler et al. 2007;
Tolstoy, Hill & Tosi 2009).
Unlike the dSphs, the irregular galaxies as the Large Magellanic Cloud (LMC) contain large
amount of gas and dust, showing an efficient ongoing star-formation activity. The LMC globular
clusters (GCs) span a wide age/metallicity range, with both old, metal-poor and young, metal-rich
objects, due to its quite complex star formation history. Several events of star formation occurred:
the first one ∼ 13 Gyr ago and 4 main bursts at later epochs, 2 Gyr, 500 Myr, 100 Myr and
12 Myr ago (Harris & Zaritsky 2009). Until the advent of the new generation of spectrographs,
the study of the chemical composition of the LMC stars was restricted to red and blue supergiants
(Hill et al. 1995; Korn et al. 2000, 2002), providing information only about the present-day chemical
composition. The first studies based on high resolution spectra of RGB stars (Hill et al. 2000;
Johnson et al. 2006; Pompeia et al. 2008) provided first and crucial information about the early
chemical enrichment and nucleosynthesis.
Of the ∼ 300 compact stellar clusters listed by Kontizas et al. (1990), metallicity determinations
from Ca II triplet are available for some tens of objects (Olszewski et al. 1991; Grocholski et al.
2006) and only for 7 clusters high-resolution spectroscopic analysis have been carried out (Hill et al.
2000; Johnson et al. 2006). With the final aim of reconstructing the formation history of star
clusters in the LMC, a few years ago we started a systematic spectroscopic screening of giants in a
sample of LMC GCs with different ages.
In the first two papers of the series (Ferraro et al. 2006; Mucciarelli et al. 2008) we presented
the chemical analysis of 20 elements for 4 intermediate-age LMC clusters (namely, NGC 1651, 1783,
1978, 2173). Moreover, Mucciarelli et al. (2009) discussed the iron content and the abundances of
O, Na, Mg and Al for 3 old LMC clusters (namely NGC 1786, 2210 and 2257), discovering anti-
correlation patterns similar to those observed in Galactic clusters. Here we extend the abundance
analysis to additional 13 chemical elements in these 3 LMC clusters, also performing a detailed com-
parison with stellar populations in our Galaxy (both in the field and in globulars) and in nearby
dSphs.
The paper is organized as follows: Section 2 presents the dataset and summarized the adopted
2We adopt the usual spectroscopic notations that [X1/X2]= lg(NX1 /NX2 )∗ -
lg(NX1 /NX2 )⊙ and that
lg(NX1 )= lg(NX1 /NH )+12.
procedure to derive radial velocities; Section 3 describes the methodology used to infer the chemical
abundances; Section 4 discusses the uncertainties associated to the chemical abundances. Finally,
Section 5 and 6 present and discuss the results of the chemical analysis.
2. Observational data
The observations were carried out with the multi-object spectrograph FLAMES (Pasquini et al.
2002) at the UT2/Kueyen ESO-VLT (25-27 December 2007). We used FLAMES in the UVES+GIRAFFE
combined mode, feeding 8 fibers to the UVES high-resolution spectrograph and 132 to the GI-
RAFFE mid-resolution spectrograph. The UVES spectra have a wavelength coverage between
4800 and 6800 ˚A with a spectral resolution of λ/∆λ ∼ 45000. We used the following GIRAFFE
gratings: HR 11 (with a wavelength range between 5597 and 5840 ˚A and a resolution of ∼ 24000)
and HR 13 (with a wavelength range between 6120 and 6405 ˚A and a resolution of ∼ 22000). These 2
setups have been chosen in order to measure several tens of iron lines, α-elements and to sample Na
and O absorption lines. Target stars have been selected on the basis of (K, J-K) Color-Magnitude
Diagrams (CMDs), as shown in Fig. 1, from near infrared observations performed with SOFI@NTT
(A. Mucciarelli et al. 2010, in preparation). For each exposure 2 UVES and about ten GIRAFFE
fibers have been used to sample the sky and allow an accurate subtraction of the sky level.
The spectra have been acquired in series of 8-9 exposures of ∼ 45 min each and pre-reduced
independently by using the UVES and GIRAFFE ESO pipeline 3, including bias subtraction, flat-
fielding, wavelength calibration, pixel re-sampling and spectrum extraction. For each exposure, the
sky spectra have been combined together; each individual sky spectrum has been checked to exclude
possible contaminations from close stars. Individual stellar spectra have been sky subtracted by
using the corresponding median sky spectra, then coadded and normalized. We note that the sky
level is only a few percents of the stars level, due to brightness of our targets, introducing only a
negligible amount of noise in the stellar spectra. Note that the fibre to fibre relative transmission
has been taken into account during the pre-reduction procedure. The accuracy of the wavelength
calibration has been checked by measuring the position of some telluric OH and O2 emission lines
selected from the catalog of Osterbrock et al. (1996).
3http://www.eso.org/sci/data-processing/software/pipelines/
2.1. Radial velocities
Radial velocities have been measured by using the IRAF 4 task FXCOR, performing a cross-
correlation between the observed spectra and high S/N - high resolution spectrum of a template star
of similar spectral type. For our sample we selected a K giant (namely HD-202320) whose spectrum
is available in the ESO UVES Paranal Observatory Project database 5 (Bagnulo et al. 2003). Then,
heliocentric corrections have been computed with the IRAF task RVCORRECT. Despite the large
number of available fibers, only a few observed stars turned out to be cluster members, due to the
small size of the clusters within the FLAMES field of view. We selected the cluster-member stars
according to their radial velocity, distance from the cluster center and position on the CMD. Finally,
we identified a total of 7 stars in NGC 1786, 5 stars in NGC 2210 and 6 stars in NGC 2257. We
derived average radial velocities of Vr= 264.3 km s−1 (σ= 5.7 km s−1), 337.5 km s−1 (σ= 1.9 km
s−1) and 299.4 km s−1 (σ= 1.5 km s−1) for NGC 1786, 2210 and 2257, respectively. The formal error
associated to the cross-correlation procedure is of ∼ 0.5–1.0 km s−1. The derived radial velocities
are consistent with the previous measures, both from integrated spectra (Dubath, Meylan & Mayor
1997) and from low/high-resolution individual stellar spectra (Olszewski et al. 1991; Hill et al. 2000;
Grocholski et al. 2006). In fact, for NGC 1786 Olszewski et al. (1991) estimated 264.4 km s−1
(σ=4.1 km s−1) from 2 giant stars, while Dubath, Meylan & Mayor (1997) provide a value of 262.6
km s−1. For NGC 2210 the radial velocity provided by Olszewski et al. (1991) is of 342.6 km
s−1 (σ=7.8 km s−1), while Dubath, Meylan & Mayor (1997) and Hill et al. (2000) obtained radial
velocities of 338.6 and 341.7 km s−1 (σ=2.7 km s−1), respectively. For NGC 2257, Grocholski et al.
(2006) provided a mean value of 301.6 km s−1 (σ=3.3 km s−1) and Olszewski et al. (1991) of 313.7
km s−1 (σ=2.1 km s−1). For all the targets Table 1 lists the S/N computed at 6000 ˚A for the UVES
spectra and at 5720 and 6260 ˚A for the GIRAFFE-HR11 and -HR13 spectra, respectively. Also,
we report Vr, dereddened K0 magnitudes and (J − K)0 colors and the RA and Dec coordinates
(onto 2MASS astrometric system) of each target.
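For readers who want to reproduce this step outside IRAF, a minimal Python sketch of the cross-correlation is given below. It only mimics the idea behind FXCOR (correlate the object spectrum against the template on a log-wavelength grid and convert the pixel lag of the peak into a velocity); the resampling, peak handling and the missing heliocentric correction are simplifying assumptions, not the actual FXCOR algorithm.

import numpy as np

C_KMS = 299792.458  # speed of light in km/s

def radial_velocity(wave, flux, wave_t, flux_t, n=8192):
    # Resample both spectra onto a common logarithmic wavelength grid, so that a
    # constant pixel shift corresponds to a constant velocity shift.
    logw = np.linspace(np.log(max(wave.min(), wave_t.min())),
                       np.log(min(wave.max(), wave_t.max())), n)
    f_obj = np.interp(np.exp(logw), wave, flux) - 1.0   # continuum-normalized spectra
    f_tpl = np.interp(np.exp(logw), wave_t, flux_t) - 1.0
    cc = np.correlate(f_obj, f_tpl, mode="full")
    lag = np.argmax(cc) - (n - 1)                        # pixel lag of the correlation peak
    dlog = logw[1] - logw[0]
    return C_KMS * (np.exp(lag * dlog) - 1.0)            # km/s, heliocentric correction not applied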
3. Chemical analysis
Similarly to what we did in previous works (Ferraro et al. 2006; Mucciarelli et al. 2008, 2009),
the chemical analysis has been carried out using the ROSA package (developed by R. G. Gratton,
private communication). We derived chemical abundances from measured equivalent widths (EW)
of single, unblended lines, or by performing a χ2 minimization between observed and synthetic
line profiles for those elements (O, Ba, Eu) for which this approach is mandatory (in particular,
to take into account the close blending between O and Ni at 6300.3 ˚A and the hyperfine splitting
4Image Reduction and Analysis facility. IRAF is distributed by the National Optical Astronomy Observatories,
which is operated by the association of Universities for Research in Astronomy, Inc., under contract with the National
Science Foundation.
5http://www.sc.eso.org/santiago/uvespop/
for Ba and Eu lines). We used the solar-scaled Kurucz model atmospheres with overshooting and
assumed that local thermodynamical equilibrium (LTE) holds for all species. Despite the majority
of the available abundance analysis codes works under the assumption of LTE, transitions of some
elements are known to suffer from large NLTE effects (Asplund 2005). Our Na abundances were
corrected for these effects by interpolating the correction grid computed by Gratton et al. (1999).
The line list employed here is described in details in Gratton et al. (2003) and Gratton et al.
(2007) including transitions for which accurate laboratory and theoretical oscillator strengths are
available and has been updated for some elements affected by hyperfine structure and isotopic
splitting. Eu abundance has been derived by the spectral synthesis of the Eu II line at 6645 ˚A, in
order to take into account its quite complex hyperfine structure, with a splitting in 30 sublevels. Its
hyperfine components have been computed using the code LINESTRUC, described by Wahlgren
(2005) and adopting the hyperfine constants A and B by Lawler et al. (2001) and the meteoritic
isotopic ratio, being Eu in the Sun built predominantly through r-process. For sake of homogeneity
we adopted the log gf by Biemont et al. (1982) already used in Mucciarelli et al. (2008) instead
of the oscillator strength by Lawler et al. (2001). Ba II lines have relevant hyperfine structure
components concerning the odd-number isotopes 135Ba and 137Ba, while the even-number isotopes
have no hyperfine splitting; moreover, there are isotopic wavelength shifts between all the 5 Ba
isotopes. In order to include these effects, we employed the linelist for the Ba II lines computed by
Prochaska (2000) that adopted a r-process isotopic mixture. We note that the assumption of the
r-process isotopic mixture instead of the solar-like isotopic mixture is not critical for the 3 Ba II
lines analyzed here (namely, 5853, 6141 and 6496 ˚A), because such an effect is relevant for the Ba II
resonance lines (see Table 4 by Sneden et al. 1996).
For the La abundances we have not taken into account the hyperfine structure because the
observed lines are too weak (typically 15-30 m ˚A) and located in the linear part of the curve of
growth where the hyperfine splitting is negligible, changing the line profile but preserving the EW.
Abundances of V and Sc include corrections for hyperfine structure obtained adopting the linelist
by Whaling et al. (1985) and Prochaska & McWilliam (2000).
In a few stars only upper limits for certain species (i.e. O, Al, La and Ce) can be measured. For O,
upper limits have been obtained by using synthetic spectra (as described in Mucciarelli et al. 2009),
while for Al, La and Ce computing the abundance corresponding to the minimum measurable EW
(this latter has been obtained as 3 times the uncertainty derived by the classical Cayrel formula,
see Section 3.1).
As reference solar abundances, we adopted the ones computed by Gratton et al. (2003) for
light Z-odd, α and iron-peak elements, using the same linelist employed here. For neutron-capture
elements (not included in the solar analysis by Gratton et al. 2003) we used the photospheric solar
values by Grevesse & Sauval (1998). All the adopted solar values are reported in Tables 3, 4 and
5.
3.1. Equivalent Widths
All EWs have been measured by using an interactive procedure developed at our institute.
Such a routine allows to extract a spectral region of about 15-25 ˚A around any line of interest.
Over this portion of spectrum we apply a σ-rejection algorithm to remove spectral lines and cosmic
rays. The local continuum level for any line has been estimated by the peak of the flux distribution
obtained over the surviving points after the σ-rejection. Finally the lines have been fitted with
a gaussian profile (rejecting those lines with a FWHM strongly discrepant with respect to the
nominal spectral resolution or with flux residuals asymmetric or too large) and the best fits are
then integrated over the selected region to give the EW. We excluded from the analysis lines with
lg (EW/λ) <–4.5, because such strong features can be dominated by the contribution of the wings
and too sensitive to the velocity fields. We have also rejected lines weaker than lg (EW/λ)=–5.8
because they are too noisy.
In order to estimate the reliability and uncertainties of the EW measurements, we performed
some sanity checks by using the EWs of all the measured lines, excluding only O, Na, Mg, and Al
lines, due to their intrinsic star-to-star scatter (see Mucciarelli et al. (2009) and Sect.5):
• The classical formula by Cayrel (1988) provides an approximate method to estimate the
uncertainty of EW measurements, as a function of spectral parameters (pixel scale, FWHM
and S/N). For the UVES spectra, we estimated an uncertainty of 1.7 m ˚A at S/N= 50, while
for the GIRAFFE spectra an uncertainty of 2 m ˚A at S/N= 100. As pointed out by Cayrel
(1988) this estimate should be considered as a lower limit for the actual EW uncertainty,
since the effect of the continuum determination is not included.
• In each cluster we selected a pair of stars with similar atmospheric parameters and compared
the EW measured for a number of absorption lines in the UVES spectra. The final scatter
(obtained dividing the dispersion by √2) turns out to be 5.6, 8.3 and 7.6 m ˚A for NGC 1786,
2210 and 2257, respectively.
• We compared the EWs of two target stars with similar atmospherical parameters observed
with UVES (NGC 1786-1248) and GIRAFFE (NGC 1786-978), in order to check possible
systematic errors in the EW measurements due to the use of different spectrograph configu-
rations. We found a scatter of 6.5 m ˚A. Within the uncertainties arising from the different S/N
conditions and the small number statistics, we did not find relevant systematic discrepancies
between the EWs derived from the two different spectral configurations.
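The first check uses the Cayrel (1988) approximation; a small helper implementing the commonly quoted form of that formula is sketched below. The prefactor and the adopted FWHM and pixel sizes are assumptions taken from the usual formulation in the literature, so the helper should reproduce the order of magnitude of the values quoted above rather than the exact numbers.

import numpy as np

def cayrel_ew_error(fwhm_aa, pixel_aa, snr):
    # Approximate 1-sigma EW uncertainty in mA: 1.5 * sqrt(FWHM * pixel) / (S/N).
    return 1.5 * np.sqrt(fwhm_aa * pixel_aa) / snr * 1000.0

# e.g. cayrel_ew_error(0.13, 0.015, 50) is close to the ~1.7 mA quoted for UVES,
# with the FWHM and pixel size (in Angstrom) being assumed, representative values.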
3.2. Atmospherical parameters
Table 2 lists the adopted atmospherical parameters for each target stars and the corresponding
[Fe/H] abundance ratio. The best-model atmosphere for each target star has been chosen in order
to satisfy simultaneously the following constraints:
(1) Tef f must be able to well-reproduce the excitation equilibrium, without any significant trend
between abundances derived from neutral iron lines and the excitation potential;
(2) log g is chosen by forcing the difference between log N(Fe I) and log N(Fe II) to be equal to
the solar value, within the quoted uncertainties;
(3) the microturbulent velocity (vt) has been obtained by erasing any trend of Fe I lines abundances
with their expected line strengths, according with the prescription of Magain (1984);
(4) the global metallicity of the model must reproduce the iron content [Fe/H];
(5) the abundance from the Fe I lines should be constant with wavelength.
Initial guess values Tef f and log g have been computed from infrared photometry, obtained with
SOFI@NTT (A. Mucciarelli et al. 2010, in preparation). Effective temperatures were derived from
dereddened (J − K)0 colors by means of the (J − K)0–Teff calibration by Alonso et al. (1999, 2001).
The transformations between photometric systems have been obtained from Carpenter (2001) and
Alonso et al. (1999). For all the target clusters we adopted the reddening values reported by
Persson et al. (1983). Photometric gravities have been calculated from the classical equation:
log (g/g⊙) = 4 · log(Teff/Teff,⊙) + log (M/M⊙) − 0.4 · (Mbol − Mbol,⊙)
by adopting the solar reference values according to IAU recommendations (Andersen 1999), the
photometric Tef f , a distance modulus of 18.5 and a mass value of M=0.80 M⊙, obtained with the
isochrones of the Pisa Evolutionary Library (Cariulo, Degl’Innocenti & Castellani 2004) for an age
of 13 Gyr and a metal fraction of Z= 0.0006.
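A compact numerical transcription of this step is shown below. The (J − K)0–Teff calibration itself is not reproduced (its coefficients are not listed here), so Teff is an input assumed to come from Alonso et al. (1999, 2001); the gravity follows the classical equation above, with standard solar reference values (Teff,⊙ = 5777 K, log g⊙ = 4.44, Mbol,⊙ = 4.75) assumed for the IAU-recommended constants, and a bolometric correction BC_K supplied by the user.

import numpy as np

def photometric_logg(teff, k0, bc_k, mass=0.80, dist_mod=18.5,
                     teff_sun=5777.0, logg_sun=4.44, mbol_sun=4.75):
    # Absolute bolometric magnitude from the dereddened K0 magnitude, the adopted
    # distance modulus and an assumed bolometric correction BC_K.
    mbol = k0 - dist_mod + bc_k
    return (logg_sun + 4.0 * np.log10(teff / teff_sun)
            + np.log10(mass) - 0.4 * (mbol - mbol_sun))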
The photometric estimates of the atmospherical parameters have been optimized spectroscop-
ically following the procedure described above. Generally, we find a good agreement between the
photometric and spectroscopic Teff scales, with an average difference Teff(spec) − Teff(phot) = −14 K (σ= 59
K) and only small adjustments were needed (for sake of completeness we report in Table 2 both the
spectroscopic and photometric Teff). Changes in gravities are of ± 0.2–0.3 dex, consistent within
the uncertainty of the adopted stellar mass, distance modulus and bolometric corrections.
An example of the lack of spurious trends between the Fe I number density and the expected
line strength, the wavelength and the excitational potential is reported in Fig. 2 (linear best-fits
and the corresponding slopes with associated uncertainties are labeled).
4. Error budget
In the computation of errors, we have taken into account the random component related
mainly to the EW measurement uncertainty and the systematic component due to the atmospheric
parameters. The total uncertainty has been derived as the sum in quadrature of random and
systematic uncertainties.
(i) Random errors. Under the assumption that each line provides an independent indication
of the abundance of a species, the line-to-line scatter divided by the square root of the number of
observed lines (σ/√Nlines) is a good estimate of the random error, arising mainly from the
uncertainties in the EWs (but including also secondary sources of uncertainty, as the line-to-line
errors in the employed log gf). Only for elements with less than 5 available lines, we adopted as
random error the line-to-line scatter obtained from the iron lines divided by the square root of
the number of lines. These internal errors are reported in Tables 2 - 5 for each abundance
ratio and they are of the order of 0.01–0.03 dex for [Fe/H] (based on the highest number of lines)
and range from ∼ 0.02 dex to ∼ 0.10 dex for the other elements.
(ii) Systematic errors. The classical approach to derive the uncertainty due to the choice
of the atmospherical parameters is to re-compute the abundances by altering each parameter of
the corresponding error and fixing the other quantity each time. Then, the resulting abundance
differences are summed in quadrature, providing the total uncertainty. In the case of our analysis,
where the spectroscopic method to infer the parameters has been adopted, Tef f , log g and vt turn
out to be not independent each other. Variations of Tef f affect in different ways Fe I and Fe II
abundances, and imply related changes in log g to compensate. Moreover, strongest lines have
typically lower excitation potential, and any change in Tef f requires a change in vt. The method to
sum in quadrature the abundance uncertainties under the assumption that Tef f , log g and vt are
uncorrelated is unable to take into account the covariance terms due to the dependencies among the
atmospherical parameters. The risk to use this technique, when the spectroscopical optimization is
adopted, is to overestimate this source of error, providing only a conservative upper limit, especially
in cases of abundances with relevant covariance terms.
A more realistic estimate of the effective error due to the atmospherical parameters can be obtained
with the procedure described by Cayrel et al. (2004). We repeated the analysis of a target star
(namely, NGC 1786-2310, chosen as representative of the entire sample) varying Teff by ±100 K with respect to the best model Teff and repeating the entire procedure to optimize the other
parameters, deriving new best values for log g and vt: we obtained log g = 0.9 and vt = 2 km s−1 when we increase Teff by 100 K, and log g = 0.3 and vt = 1.85 km s−1 when we decrease Teff by
100 K. The two variations are basically symmetric and we chose as final error the absolute value of
the largest one. Table 6 lists the differences between the new analysis and the original one for each
abundance ratio. This method naturally includes both the errors due to the parameters and the
covariance terms due to the interdependence between the parameters (see also McWilliam et al.
1995, for a complete discussion of the covariance terms).
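As a compact numerical illustration of the error budget just described (random term from the line-to-line scatter normalized to √Nlines, total obtained by summing random and systematic terms in quadrature), a minimal sketch is given below; the input values are placeholders and do not come from the tables.

```python
import math

def random_error(line_abundances):
    """Line-to-line scatter normalized to sqrt(N_lines)."""
    n = len(line_abundances)
    mean = sum(line_abundances) / n
    sigma = math.sqrt(sum((a - mean) ** 2 for a in line_abundances) / (n - 1))
    return sigma / math.sqrt(n)

def total_error(random, systematic):
    """Random and systematic components summed in quadrature."""
    return math.sqrt(random ** 2 + systematic ** 2)

# Placeholder example: 10 Fe I lines with ~0.08 dex scatter, 0.09 dex systematic term
lines = [7.54 + 0.08 * x for x in (-1.2, -0.8, -0.5, -0.1, 0.0, 0.2, 0.4, 0.6, 0.9, 1.1)]
print(round(total_error(random_error(lines), 0.09), 3))
```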
5. Chemical abundance results
Tables 3 - 5 list the derived abundance ratios for all the studied stars. Table 7 summarizes
the cluster average abundance ratios, together with the dispersion around the mean. Figures 3 -
7 show the plot of some abundance ratios as a function of the iron content obtained in this work
(as grey triangles) and in Mucciarelli et al. (2008) (as white triangles). In these figures abundances
obtained for Galactic field stars (small grey circles), GGCs (squares), dSph’s stars (asterisks) and
for the sample of old LMC clusters by Johnson et al. (2006) (black points) are also plotted for
comparison. All the reference sources are listed in Table 8. For sake of homogeneity and in
order to avoid possible systematic effects in the comparison, we perform a study of the oscillator
strengths and adopted solar values of the comparison samples, aimed at bringing all abundances in
a common system. Since our analysis is differential, we decide not to correct abundances derived
with the same methodology (Edvardsson et al. 1993; Gratton et al. 2003; Reddy et al. 2003, 2006).
All the other dataset have been re-scaled to our adopted oscillator strengths and solar values. We
compared oscillator strengths of lines in common with our list, finding, if any, negligible offsets
(within ±0.03 dex). Log gf of the Ti I lines adopted by Fulbright (2000), Shetrone, Cot´e & Sargent
(2001) and Shetrone et al. (2003) are 0.07 dex higher than ours, while log gf of the Y II lines by
Stephens & Boesgaard (2002) results lower than ours by -0.09 dex. The differences in the individual
element solar values are small, typically less than 0.05 dex and generally the offsets of log gf and
solar values cancel out, with the only exception of the Ca abundances based on the solar value by
Anders & Grevesse (1989), which turns out to be 0.09 dex higher than ours.
The main abundance patterns are summarized as follows:
• Fe, O, Na, Mg and Al— Results about Fe, O, Na, Mg and Al of the target stars have been
presented and discussed in Mucciarelli et al. (2009). We derived an iron content of [Fe/H]= –1.75 ± 0.01 dex (σ= 0.02 dex), –1.65 ± 0.02 dex (σ= 0.04 dex) and –1.95 ± 0.02 dex (σ= 0.04 dex) for NGC 1786, NGC 2210 and NGC 2257, respectively.
At variance with the other elements, Mg and Al exhibit large star-to-star variations in each
cluster, while similar dishomogeneities have been found in the O content of NGC 1786 and
2257, and in the Na content of NGC 1786. Such scatters are not compatible with the obser-
vational errors and indicate the presence of intrinsic variations. The same Na-O and Mg-Al
anticorrelations observed in the GGCs have been found in these LMC clusters (see Fig. 2 of
Mucciarelli et al. 2009). Similar patterns have been already detected in the GGCs studied
so far and they are generally interpreted in terms of a self-enrichment process, where the
ejecta of the primordial Asymptotic Giant Branch (AGB) stars (in which O and Mg have
been destroyed producing large amount of Na and Al) are able to trigger the formation of a
second stellar generation (Ventura et al. 2001; Ventura & D’Antona 2008). A complete dis-
cussion about the Na-O and Mg-Al anticorrelations in these 3 LMC clusters is also presented
in Mucciarelli et al. (2009).
• α-elements— Fig. 3 shows the behavior of [Si/Fe], [Ca/Fe] and [Ti/Fe] as a function of
[Fe/H] for the observed clusters and the comparison samples. The first 2 abundance ratios
are enhanced, with [Si/Fe] ∼ +0.40 dex and [Ca/Fe] ∼ +0.30 dex, in good agreement with the
Halo and GGCs stars, while [Ti/Fe] is only moderately enhanced (∼ +0.2 dex). Fig. 4 shows
the average of [Si/Fe], [Ca/Fe] and [Ti/Fe] abundance ratios. We find < α/F e > of 0.30 ± 0.08,
+0.33 ± 0.02 and +0.38 ± 0.08 for NGC 1786, 2210 and 2257, respectively. Such a level of α-
enhancement is consistent with that observed in the Galactic Halo (both in field and cluster
stars of similar metallicity), while dSphs display < α/F e > ratios only ∼ 0.1-0.15 dex lower.
It is worth noticing that recent studies indicate that the α-enhancement of the Sculptor stars
well agrees with the Halo stars for lower metallicities (see e.g. Tolstoy, Hill & Tosi 2009),
while the Fornax GCs show only a mild enhancement (Letarte et al. 2006), see Fig. 4.
The only previous chemical analysis of α-elements in old LMC GCs has been performed by
Johnson et al. (2006), analyzing 4 GCs (namely, NGC 2005, 2019, 1898 and Hodge 11) in
the metallicity range [Fe/H]= –2.2 / -1.2 dex (none of these objects is in common with our
sample). At variance with us, they find solar or sub-solar [Ti/Fe] ratios and moderately
enhanced [Ca/Fe] ratios, while their [Si/Fe] abundance ratios turn out to be enhanced in
good agreement with our abundances. However, we point out that the solar zero-point for
their [Ca/Fe] (including both the solar reference for Ca and Fe) is +0.11 dex higher than
ours. Taking into account this offset, their Ca abundances are only 0.1 dex lower and still
barely consistent within the quoted uncertainties. Conversely, for Ti, the offset in the log gf
scale of -0.06 dex is not sufficient to erase the somewhat larger discrepancy (∼ 0.2-0.3 dex)
between the two abundance estimates.
•
Iron-peak elements— The abundance ratios for [Sc/Fe], [V/Fe], [Cr/Fe] and [Ni/Fe] are
plotted in Fig. 5. Such ratios turn out to be solar or (in a few case) moderately depleted, and
consistent with the patterns observed in the Galactic Halo. The old LMC clusters analyzed
by Johnson et al. (2006) exhibit similar abundance ratios, with the exception of [V/Fe] that
appears to be depleted with respect to the solar value ([V/Fe]<–0.25 dex). V is very sensitive
to the adopted Tef f , as far as Ti, and we checked possible systematic offset between our Tef f
scale and that by Johnson et al. (2006). Both scales are based on the excitational equilibrium,
thus, the derived Tef f are formally derived in a homogenous way. We checked possible offset in
the adopted Fe log gf, finding an average difference log gfJ06-log gfthis work= -0.004 (σ= 0.11).
Moreover, there are no trends between the difference of the log gf and χ. We repeated our
analysis for some stars by using the Fe log gf by Johnson et al. (2006), finding very similar
Teff (within ±50 K) with respect to ours. Thus, we can consider that the two Teff
scales are compatible with each other. We cannot exclude that the different treatment of the
hyperfine structure for the V I lines between the two works is the origin of this discrepancy.
Unfortunately, we have no GCs in common with their sample and a complete comparison
cannot be performed.
•
Neutron-capture elements— Elements heavier than the iron-peak (Z>31) are built up
through rapid and slow neutron capture processes (r- and s-process, respectively). Eu is
considered a pure r-process element, while the first-peak s-process element Y and the second-
peak s-process elements Ba, La, Ce and Nd (see Fig. 6 and 7) have an r contribution less than
∼ 20-25% in the Sun. Nd is equally produced through s and r-process (see e.g. Arlandini et al.
1999; Burris et al. 2000). Since the s-process mainly occurs in AGB stars during the thermal
pulse instability phase, s-process enriched gas should occur at later (∼ 100-200 Myr) epochs.
In the measured old LMC clusters we find a general depletion (∼ –0.30 dex) of [Y/Fe], still
consistent (within the quoted uncertainties) with the lower envelope of the [Y/Fe] distribution
of the Galactic stars, which show a solar-scaled pattern. Also the metal-rich LMC clusters by
Mucciarelli et al. (2008) are characterized by such a depletion, with [Y/Fe] between –0.32 and
–0.54 dex (see Fig. 7). Depleted [Y/Fe] ratios have been already observed in dSphs field stars
(Shetrone, Cot´e & Sargent 2001; Shetrone et al. 2003) and in the Fornax GCs (Letarte et al.
2006).
The stars of NGC 2210 and NGC 2257 exhibit roughly solar [Ba/Fe] ratios (+0.10 and –0.04
dex, respectively), while in NGC 1786 this abundance ratio is depleted ([Ba/Fe]= –0.18 dex).
Also [La/Fe] and [Ce/Fe] show solar or slightly enhanced values, while [Nd/Fe] is always
enhanced (∼ +0.50 dex). The [Ba/Fe] ratio (as far as the abundances of other heavy s-process
elements) appears to be indistinguishable from the metal-poor stars in our Galaxy.
Fig. 7 (lower panel) shows the behavior of [Eu/Fe] as a function of the [Fe/H]. The 3 old
LMC clusters exhibit enhanced (∼ +0.7 dex) [Eu/Fe] ratios. These values are consistent with
the more Eu-rich field stars in the Galactic Halo (that display a relevant star-to-star disper-
sion probably due to an inhomogeneous mixing), while the GGCs are concentrated around
[Eu/Fe] ∼ +0.40 dex (James et al. 2004). The only other estimates of the [Eu/Fe] abundance
ratio in LMC clusters have been provided by Johnson et al. (2006) who find enhanced values
between ∼ +0.5 and +1.3 dex, fully consistent with our finding.
6. Discussion
The α-elements are produced mainly in the massive stars (and ejected via type II Supernovae
(SNe) explosions) during both hydrostatic and explosive nucleosynthesis. As showed in Fig. 3 and
4, the LMC clusters of our sample display a behavior of [α/Fe] as a function of [Fe/H] similar to the
one observed in the Milky Way stars. The enhanced [α/Fe] ratios in the old LMC clusters suggest
that the gas from which these objects have been formed has been enriched by type II SNe ejecta
on a relative short time-scale. Such an observed pattern in the metal-poor regime agrees with the
α-enhancement of the Halo and GGCs stars, pointing out that the chemical contribution played by
massive stars (concerning the nucleosynthesis of the α-elements) in the early epochs of the LMC
and Milky Way has been similar.
[Ba/Y] is a convenient abundance ratio to estimate the relative contribution between heavy
and light s-elements, [Ba/Eu] the relative contribution between heavy s and r-elements and [Y/Fe]
the contribution between light s and r-elements. As shown in Fig. 8 (upper panel), [Ba/Y] is solar
or moderate enhanced in old LMC as in the Milky Way, but lower than the dSphs. At higher
metallicities the ratio increases appreciably due to the combined increase of Ba and decrease of Y.
Such an increase of [Ba/Y] with iron content can be ascribed to the rise of the AGB contribution,
with a significant metallicity dependence of the AGB yields (as pointed out by Venn et al. 2004).
In the old LMC clusters, both the [Ba/Eu] and [Y/Eu] are depleted with respect to the
solar value, with [Ba/Eu] ∼ –0.70 dex and [Y/Eu] ∼ –1 dex. Such a depletion is consistent with
the theoretical prediction by Burris et al. (2000) and Arlandini et al. (1999) in the case of pure
r-process. Moreover, [Y/Eu] remains constant at all metallicities, at variance with [Ba/Eu] ratio.
It is worth noticing that the precise nucleosynthesis site for Y is still unclear. Despite of the fact
that most of the s-process elements are produced mainly in the He burning shell of intermediate-
mass AGB stars, the lighter s-process elements, such as Y, are suspected to be synthesized also
during the central He burning phase of massive stars (see e.g. the theoretical models proposed by
Prantzos, Hashimoto & Nomoto 1990). Our results suggest that in the early ages of the LMC the
nucleosynthesis of the heavy elements has been dominated by the r-process, both because this type
of process seems to be very efficient in the LMC and because the AGB stars have had no time
to evolve and leave their chemical signatures in the interstellar medium. The contribution by the
AGB stars arises at higher metallicity (and younger age) when the AGB ejecta are mixed and their
contribution becomes dominant. This hypothesis has been suggested also by Shetrone et al. (2003)
in order to explain the lower [Y/Fe] abundance ratios observed in dSph’s, pointing out a different
Y nucleosynthesis for the Galaxy and the dSph’s, with a dominant contribution by type II SNe in
the Galactic satellites.
Fig. 9 shows the behaviour of [Y/α], [Ba/α] and [Eu/α]. The [Y/α] and [Ba/α] abundance ratios
turn out to be depleted (<–0.30 dex) at low metallicity, with a weak increase at higher metallicity for [Y/α], while [Ba/α] reaches ∼ +0.50 dex. This finding seems to confirm that Y is mainly
produced by type II SNe, with a secondary contribution by low-metallicity AGB stars, at variance
with Ba. In fact, in the low-metallicity AGB stars, the production of light s-process elements (as
Y) is by-passed in favor of the heavy s-process elements (as Ba), because the number of seed nuclei
(i.e. Fe) decreases with decreasing metallicity, while the neutron flux per seed nucleus increases. In
light of the spectroscopic evidences arising from our database of LMC GCs and from the previous
studies about Galactic and dSphs stars, both irregular and spheroidal environments seem to share
a similar contribution from AGB stars and type II SNe (concerning the neutron capture elements)
with respect to our Galaxy.
Our LMC clusters sample shows a remarkably constant [Eu/α] ratio of about +0.4 dex over
the entire metallicity range, pointing toward a highly efficient r-process mechanism (see footnote 6). First hints of
6 As a sanity check of our abundances in order to exclude systematic offset in the Eu abundances due to the adopted
hyperfine treatment, we performed an analysis of [Eu/Fe] and [α/Fe] ratios on Arcturus, by using an UVES spectrum
taken from the UVES Paranal Observatory Project database (Bagnulo et al. 2003). By adopting the atmospherical
parameters by Lecureur et al. (2007) and the same procedure described above, we derived < α/F e >= +0.23±0.09
dex, [Eu/Fe]= +0.15±0.05 dex and [Eu/α]= –0.08 dex (according to the previous analysis by Peterson et al. (1993)
and Gopka & Yushchenko (1984)). For this reason, we exclude that the enhancement of [Eu/α] in our stars can be
due to an incorrect hyperfine treatment of the used Eu line.
such an enhanced [Eu/α] pattern have been found in some supergiant stars in the Magellanic Clouds
(Hill et al. 1995, 1999), in Fornax GCs (Letarte et al. 2006) and field stars (Bruno Letarte, Ph.D.
Thesis) and in a bunch of Sgr stars (Bonifacio et al. 2000; McWilliam & Smecker-Hane 2005).
7. Conclusion
We have analyzed high-resolution spectra of 18 giants of 3 old LMC GCs, deriving abundance
ratios for 13 elements, in addition to those already discussed in Mucciarelli et al. (2009) and sam-
pling the different elemental groups, i.e.
iron-peak, α and neutron-capture elements. The main
results of our chemical analysis are summarized as follows:
• the three target clusters are metal-poor, with an iron content of [Fe/H]= –1.75 ± 0.01 dex
(σ= 0.02 dex), –1.65 ± 0.02 dex (σ= 0.04 dex) and –1.95 ± 0.02 dex (σ= 0.04 dex) for
NGC 1786, NGC 2210 and NGC 2257, respectively (see Mucciarelli et al. 2009);
• all the three clusters show the same level of enhancement of the < α/F e > ratio (∼ +0.30
dex), consistent with a gas enriched by type II SNe, while metal-rich, younger LMC clusters
exhibit solar-scaled < α/F e > ratio, due to the contribution of type Ia SNe at later epochs;
• the iron-peak elements (Sc, V, Cr, Ni) follow a solar pattern (or slightly sub-solar, in some
cases), according with the observed trend in our Galaxy and consistent with the canonical
nucleosynthesis scenario;
• the studied clusters show a relevant (∼ –0.30 dex) depletion of [Y/Fe], while the other s-process
elements (with the exception of Nd) display abundance ratios consistent with the Galactic
distributions. [Ba/Fe] and [Ba/Y] in the old LMC GCs are lower than the values measured
in the metal-rich, intermediate-age LMC GCs, because in the former the AGB stars had no
time to evolve and enrich the interstellar medium;
• [Eu/Fe] is enhanced (∼ +0.70 dex) in all the clusters. This seems to suggest that the r-
process elements production is very efficient in the LMC, being also the main channel of
nucleosynthesis for the other neutron-capture elements.
In summary, the old, metal-poor stellar population of the LMC clusters closely resembles the
GGCs in many chemical abundance patterns like the iron-peak, the α and heavy s-process elements,
and concerning the presence of chemical anomalies for Na, O, Mg and Al. When compared with
dSphs the LMC old stellar population shows remarkably different abundance patterns for [α/Fe]
and neutron-capture elements.
We warmly thank the anonymous referee for his/her useful comments. This research was
supported by the Ministero dell’Istruzione, dell’Universit´a e della Ricerca.
REFERENCES
Alonso, A., Arribas, S., & Martinez-Roger, C., 1999, A&AS, 140, 261
Alonso, A., Arribas, S., & Martinez-Roger, C., 2001, A&A, 376, 1039
Anders, E., & Grevesse, N., 1989, Geochim. Cosmochim. Acta., 53, 197
Andersen, J., 1999, IAU Trans. A, Vol. XXIV, (San Francisco, CA:ASP), pp. 36, 24, A36
Arlandini, C., Kapplere, F., Wisshak, K., Gallino, R., Lugaro, M., Busso, M. & Straniero, O., 1999,
ApJ, 525, 886
Asplund, M., 2005, ARA&A, 43, 481
Bagnulo, S. et al.. 2003, Messenger, 114, 10
Biemont, E., Karner, C., Meyer, G., Traeger, F., & zu Putlitz, G. 1982, A&A, 107, 166
Bonifacio, P., Hill, V., Molaro, P., Pasquini, L., Di Marcantonio, P., & Santin, P., 2000, A&A, 359,
663
Burris, D. L., Pilachowski, C. A., Armandroff, T. E., Sneden, C., Cowan, J. J., & Roe, H., 2000,
ApJ, 544, 302
Cariulo, P., Degl’Innocenti, S., & Castellani, V., 2004, A&A, 421, 1121
Carpenter, J. M., 2001, AJ, 121, 2851
Carretta, E., Gratton, R. G., Bragaglia, A., Bonifacio, P., & Pasquini, L., 2004. A&A, 416, 925
Carretta, E., 2006, AJ, 131, 1766
Cayrel, R., 1988, in IAU Symp. 132, ”The Impact of Very High S/N Spectroscopy on Stellar
Physics”, ed. G. Cayrel de Strobel & M. Spite, Dordrecht, Kluwer, 345
Cayrel, R. et al., 2004, A&A, 416, 1117
Dubath, P., Meylan, G., & Mayor, M., 1997, A&A, 324, 505
Edvardsson, B., Andersen, J., Gustafsson, B., Lambert, L., Nissen, P. E., & Tomkin, J., 1993, A&A,
275, 101
Ferraro, F. R., Mucciarelli, A., Carretta, E., & Origlia, L., 2006, ApJ, 645L, 33
Fulbright, J. P., 2000, AJ, 120, 1841
Geisler, D., Smith, V. V., Wallerstein, G., Gonzalez, G., & Charbonnel, C., 2005, AJ, 129, 1428
Geisler, D., Wallerstein, G., Smith, V. V., & Casetti-Dinescu, D. I., 2007, PASP, 119, 939
Gopka, V. F. & Yushchenko, A. V., 1984, AstL, 20, 352
Gratton, R. G., Carretta, E., Eriksson, K., & Gustafsson, B., 1999, A&A, 350, 955
Gratton, R. G. et al., 2001, A&A, 369, 87
Gratton, R. G., Carretta, E., Claudi, R., Lucatello, S., & Barbieri, M., 2003, A&A, 404, 187
Gratton, R. G., et al., 2007, A&A, 464, 953
Grevesse, N, & Sauval, A. J., 1998, SSRv, 85, 161
Grocholski, A. J., Cole, A. A., Sarajedini, A., Geisler, D., & Smith, V. V., 2006, AJ, 132, 1630
Harris, J., & Zaritsky, D., 2009, AJ, 138, 1243
Hill, V., Andrievsky, S., & Spite, M., 1995, A&A, 293, 347
Hill, V., 1999, A&A, 345, 430
Hill, V., Francois, P., Spite, M., Primas, F. & Spite, F., 2000, A&AS, 364, 19
Koch, A. & Edvardsson, B., 2002, A&A, 381, 500
Kontizas, M., Morgan, D. H., Hatzidimitriou, D., & Kontizas, E., 1990, A&AS, 84, 527
Korn, A. J., Becker, S. R., Gummersbach, C. A., & Wolf, B., 2000, A&A, 353, 655
Korn, A. J., Keller, S. C., Kaufer, A., Langer, N., Przybilla, N., Stahl, O., & Wolf, B., 2002, A&A,
385, 143
Kraft, R. P., Sneden, C., Langer, G. E., Shetrone, M. D., & Bolte, M., 1995, AJ, 109, 2586
Ivans, I. I., Sneden, C., Kraft, R. P., Suntzeff, N. B., Smith, V. V., Langer, G. E., & Fulbright, J.
P., 1999, AJ, 118, 1273
Ivans, I. I., Kraft, R. P., Sneden, C. Smith, G. H., Rich, R. M., & Shetrone, M., 2001, AJ, 122,
1438
Yong, D., Grundahl, F., Nissen, P. E., Jensen, H. R., & Lambert, D. L., 2005, A&A, 438, 875
James, G., Francois, P., Bonifacio, P., Carretta, E., Gratton, R. G., & Spite, F., 2004, A&A, 427,
825
Johnson, J. A., Ivans, I. I.,& Stetson, P. B., 2006,ApJ, 640, 801
Lawler, J. E., Wickliffe, M. E., den Hartog, E. A., & Sneden, C., 2001, ApJ, 563, 1075
Lecureur, A., Hill, V., Zoccali, M., Barbuy, B., Gomez, A., Minniti, D., Ortolani, S., & Renzini,
A., 2007, A&A, 465, 799
Lee, J.-W., & Carney, B. W., 2002, AJ, 124, 1511
Letarte, B., Hill, V., Jablonka, P., Tolstoy, E., Francois, P., & Meylan,G., 2006, A&A, 453, 547L
Magain, P. 1984, A&A, 134, 189
McWilliam, A., Preston, G., Sneden, C., & Searle, L., 1995, AJ, 109, 2757
McWilliam, A., & Smecker-Hane, T. A., 2005, ASPC, 336, 221
Monaco, L., Bellazzini, M., Bonifacio, P., Ferraro, F. R., Marconi, G., Pancino, E., Sbordone, L.,
& Zaggia, S., 2005, A&A, 441, 141
Monaco, L., Bellazzini, M., Bonifacio, P., Buzzoni, A., Ferraro, F. R., Marconi, G., Sbordone, L.,
& Zaggia, S., 2007, A&A, 464, 201
Mucciarelli, A., Carretta, E., Origlia, L., & Ferraro, F. R., 2008, ApJ, 136, 375
Mucciarelli, A., Origlia, L., Ferraro, F. R., & Pancino, E., 2009, ApJ, 695L, 134
Olszewski, E. W., Schommer, R. A., Suntzeff, N. B. & Harris, H. C., 1991, AJ, 101, 515
Osterbrock, D. E., Fulbright, J. P., Martel, A. R., Keane, M. J., Trager, S. C., & Basri, G., 1996,
PASP, 108, 277
Pasquini, L. et al., Messenger, 110, 1
Persson, S. E., Aaronson, M., Cohen, J. G., Frogel, J. A., & Matthews, K.,1983
Peterson, R., C., Dalle Ore, C. M., & Kurucz, R. L., 1993, ApJ, 404, 333
Pompeia, L., Hill, V., Spite, M., Cole, A., Primas, F., Romaniello, M., Pasquini, L., Cioni, M-R.,
& Smecker Hane, T., 2008, A&A, 480, 379
Prantzos, N., Hashimoto, M., & Nomoto, K., 1990, A&A, 234, 211
Prochaska, J. X., Naumov, S. O., Carney, B. W., McWilliam, A., & Wolfe, A., 2000, AJ, 120, 2513
Prochaska, J. X., &, McWilliam, A., 2000, ApJ, 537L, 57
Ramirez, S. V., & Cohen, J., 2002, AJ, 123, 3277
Reddy, B. E., Tomkin, J., Lambert, D. L., & Allende Prieto, C., 2003, MNRAS, 340, 304
Reddy, B. E., Lambert, D. L., & Allende Prieto, C., 2006, MNRAS, 367, 1329
Sbordone, L., Bonifacio, P., Buonanno, R., Marconi, G., Monaco, L., & Zaggia, S., 2007, A&A,
465, 815
Shetrone, M., Cot´e, P., & Sargent, W. L. W., 2001, ApJ, 548, 592
Shetrone, M., Venn, K. A., Tolstoy, E., Primas, F., Hill, V., & Kaufer, A., 2003, AJ, 125, 684
Sneden, C., McWilliam, A., Preston, G. W., Cowan, J. J., Burris, D. L., & Armosky, B. J., 1996,
ApJ, 467, 840
Sneden, C., Kraft, R. P., Shetrone, M. D., Smith, G. H., Langer, G. E., & Prosser, C. F., 1997,
AJ, 391, 354
Sneden, C., Kraft, R. P., Guhatahakurta, P., Peterson, R. C., & Fulbright, J. P., 2004, AJ, 127,
2162
Stephens, A., & Boesgaard, A. M., 2002, AJ, 123, 1647
Tolstoy, E, Hill, V, & Tosi, M., 2009, ARA&A, 47, 371
Venn, K. A., Irwin, M., Shetrone, M. D., Tout, C. A., Hill, V., & Tolstoy, E., 2004, AJ, 128, 1177
Ventura, P., D’Antona, F., Mazzitelli, I., & Gratton, R., 2001, ApJ, 550L, 65
Ventura, P., & D’Antona, F., 2008, MNRAS, 385, 2034
Wahlgren, G. M., 2005, Memorie della Societ`a Astronomica Italiana Supplementi, 8, 108
Whaling, W. Hannaford, P., Lowe, R. M., Biemont, E., & Grevesse, N., 1985, A&A, 153, 109
This preprint was prepared with the AAS LATEX macros v5.2.
Fig. 1.— Color-Magnitude Diagrams in the (K, J-K) plane of the 3 LMC old clusters: grey points
indicate the stars observed with FLAMES.
Fig. 2.— The behavior of the number density abundance of the neutral iron lines as a function of
the expected line strength (upper panel), of the wavelength (middle panel) and of the excitational
potential (lower panel). In each panel is also reported the linear best-fit (dashed lines) and the
corresponding slope (with associated error) is labelled.
Fig. 3.— Behavior of [Si/Fe], [Ca/Fe] and [Ti/Fe] abundance ratios as a function of [Fe/H]. The
LMC clusters of this study are plotted as grey triangles and the results by Mucciarelli et al. (2008)
as white triangles. Small grey points are Galactic stars. Empty squares are GGCs. Asteriks are
dSphs field stars and Fornax GCs. Black points are the old LMC GCs by Johnson et al. (2006).
All the references are in Table 8. Dashed lines mark the solar value. The errorbar in the corner
indicates the typical uncertainty associated to each abundance ratio and computed by summing in
quadrature the internal error (reported in Tables 2-5) and the error from the adopted parameters
(see Table 6).
Fig. 4.— Behavior of the average < α/F e > ratio (defined as mean of [Si/Fe], [Ca/Fe] and [Ti/Fe])
as a function of [Fe/H].
Fig. 5.— Behavior of [Sc/Fe], [V/Fe], [Cr/Fe] and [Ni/Fe] as a function of [Fe/H].
Fig. 6.— Behavior of [Ce/Fe], [Ba/Fe] and [La/Fe] as a function of [Fe/H].
Fig. 7.— Behavior of [Y/Fe], [Nd/Fe] and [Eu/Fe] as a function of [Fe/H].
Fig. 8.— Behavior of [Ba/Y], [Ba/Eu] and [Y/Eu] (lower panel) as a function of [Fe/H].
Fig. 9.— Behavior of [Y/α], [Ba/α] and [Eu/α] as a function of [Fe/H].
Table 1. Information about the target stars. S/N have been computed at 6000 ˚A for the UVES spectra and at 5720 and 6260 ˚A for the GIRAFFE HR 11 and 13 spectra respectively. RA and Dec are onto the 2MASS astrometric system. Last column reports the adopted instrumental configuration (U for UVES and G for GIRAFFE spectra).

Star ID | S/N | Vhelio (km/s) | K0 | (J−K)0 | RA(J2000) | Dec(J2000) | spectrum
NGC 1786-978 | — / 70 / 110 | 260.5 | 13.55 | 0.78 | 74.7878641 | -67.7285246 | G
NGC 1786-1248 | 45 / — / — | 255.4 | 13.50 | 0.77 | 74.7688292 | -67.7408723 | U
NGC 1786-1321 | 50 / — / — | 273.5 | 13.11 | 0.78 | 74.7638489 | -67.7546146 | U
NGC 1786-1436 | — / 60 / 90 | 267.1 | 13.71 | 0.72 | 74.7555606 | -67.7353347 | G
NGC 1786-1501 | 40 / — / — | 265.9 | 12.92 | 0.93 | 74.7493142 | -67.7514295 | U
NGC 1786-2310 | 50 / — / — | 262.2 | 12.83 | 0.82 | 74.7588569 | -67.7432595 | U
NGC 1786-2418 | — / 70 / 100 | 265.5 | 13.09 | 0.82 | 74.8215213 | -67.7387519 | G
NGC 2210-122 | 40 / — / — | 337.7 | 13.22 | 0.75 | 92.9389070 | -69.1122894 | U
NGC 2210-309 | 40 / — / — | 338.4 | 13.29 | 0.75 | 92.9025764 | -69.1129818 | U
NGC 2210-431 | 50 / — / — | 340.0 | 13.04 | 0.77 | 92.8887909 | -69.1137252 | U
NGC 2210-764 | 40 / — / — | 335.7 | 12.93 | 0.74 | 92.8575073 | -69.1267703 | U
NGC 2210-1181 | 50 / — / — | 335.6 | 12.81 | 0.77 | 92.8756190 | -69.1137519 | U
NGC 2257-136 | 40 / — / — | 298.1 | 13.65 | 0.77 | 97.5823810 | -64.3262965 | U
NGC 2257-189 | — / 70 / 90 | 299.6 | 13.54 | 0.77 | 97.5741597 | -64.3299382 | G
NGC 2257-295 | 35 / — / — | 301.4 | 14.40 | 0.74 | 97.5615868 | -64.3159959 | U
NGC 2257-586 | — / 50 / 60 | 300.6 | 14.36 | 0.70 | 97.5327178 | -64.3129344 | G
NGC 2257-842 | 45 / — / — | 297.4 | 13.77 | 0.76 | 97.5591210 | -64.3394905 | U
NGC 2257-993 | — / 70 / 90 | 298.9 | 13.49 | 0.81 | 97.4855884 | -64.3174261 | G
Table 2. Atmospherical parameters and derived [Fe/H] ratio (with the number of used lines and the associated internal error defined as σ/√Nlines) for all the observed stars. Solar value for Fe is 7.54 dex (Gratton et al. 2003). Photometric temperatures (column 3) have been reported in comparison with the spectroscopic ones (column 2).

Star ID | Teff spec (K) | Teff phot (K) | log g (dex) | [A/H] | vt (km/s) | n | [Fe/H] (dex)
NGC 1786-978 | 4250 | 4260 | 0.57 | -1.75 | 1.40 | 14 | -1.73 ± 0.02
NGC 1786-1248 | 4280 | 4285 | 0.75 | -1.75 | 1.70 | 60 | -1.74 ± 0.02
NGC 1786-1321 | 4250 | 4260 | 0.65 | -1.75 | 1.80 | 54 | -1.73 ± 0.01
NGC 1786-1436 | 4420 | 4412 | 0.76 | -1.75 | 1.70 | 15 | -1.76 ± 0.02
NGC 1786-1501 | 4100 | 3936 | 0.55 | -1.80 | 1.80 | 57 | -1.79 ± 0.01
NGC 1786-2310 | 4100 | 4167 | 0.47 | -1.75 | 1.90 | 47 | -1.72 ± 0.01
NGC 1786-2418 | 4160 | 4167 | 0.47 | -1.80 | 1.50 | 16 | -1.75 ± 0.02
NGC 2210-122 | 4300 | 4334 | 0.60 | -1.65 | 1.70 | 31 | -1.66 ± 0.02
NGC 2210-309 | 4250 | 4334 | 0.55 | -1.70 | 1.80 | 35 | -1.69 ± 0.03
NGC 2210-431 | 4200 | 4285 | 0.70 | -1.65 | 1.80 | 46 | -1.67 ± 0.02
NGC 2210-764 | 4270 | 4360 | 0.60 | -1.60 | 1.90 | 42 | -1.58 ± 0.02
NGC 2210-1181 | 4200 | 4285 | 0.60 | -1.60 | 1.80 | 46 | -1.64 ± 0.02
NGC 2257-136 | 4290 | 4285 | 0.65 | -1.90 | 1.95 | 38 | -1.94 ± 0.02
NGC 2257-189 | 4290 | 4285 | 0.61 | -1.90 | 1.60 | 17 | -1.92 ± 0.02
NGC 2257-295 | 4360 | 4360 | 0.96 | -2.00 | 1.50 | 40 | -1.95 ± 0.03
NGC 2257-586 | 4480 | 4466 | 0.82 | -2.00 | 1.50 | 13 | -1.92 ± 0.03
NGC 2257-842 | 4320 | 4309 | 0.95 | -1.90 | 1.50 | 39 | -1.96 ± 0.02
NGC 2257-993 | 4200 | 4190 | 0.52 | -2.00 | 1.50 | 17 | -2.02 ± 0.03
Table 3.
[O/Fe], [Na/Fe], [Mg/Fe], [Al/Fe], [Si/Fe] and [Ca/Fe] abundance ratios for each observed stars with the number of used lines and
the corresponding internal error.
Star ID
SUN
n
[O/Fe]
8.79
1
NGC 1786-978
2
NGC 1786-1248
2
NGC 1786-1321
1
NGC 1786-1436
NGC 1786-1501
2
NGC 1786-2310 —
NGC 1786-2418 —
2
NGC 2210-122
1
NGC 2210-309
2
NGC 2210-431
2
NGC 2210-764
2
NGC 2210-1181
1
NGC 2257-136
NGC 2257-189 —
NGC 2257-295
1
NGC 2257-586 —
1
NGC 2257-842
NGC 2257-993 —
0.12
0.08
0.07
0.09
0.08
-0.15
0.26
0.31
0.18
0.30
±
±
±
±
±
<-0.60
<-0.40
0.08
0.14
0.11
0.10
0.08
0.11
0.31
0.10
0.12
0.25
0.27
0.22
±
±
±
±
±
±
<-0.20
0.24
0.18
±
<-0.20
-0.08
0.15
±
<-0.20
n
3
2
2
1
4
3
4
1
4
3
2
2
2
2
3
2
2
2
[Na/Fe]
6.21
0.47
0.16
-0.18
-0.01
0.60
0.66
0.77
-0.08
0.69
0.64
0.32
-0.03
0.20
0.49
0.58
0.22
0.54
0.90
0.03
0.08
0.07
0.09
0.06
0.05
0.03
0.11
0.10
0.07
0.10
0.08
0.11
0.07
0.10
0.08
0.10
0.09
±
±
±
±
±
±
±
±
±
±
±
±
±
±
±
±
±
±
n
1
2
2
1
1
1
1
1
1
1
1
2
1
1
1
1
1
1
[Mg/Fe]
7.43
n
[Al/Fe]
6.23
0.25
0.51
0.41
0.40
0.49
-0.21
-0.31
0.39
0.20
0.33
0.43
0.28
0.34
0.42
0.12
0.36
0.52
0.24
±
±
±
±
±
±
±
±
±
±
±
±
±
±
±
±
±
±
0.06 —
0.08 —
0.07 —
0.09 —
2
0.12
0.08
2
0.07 —
0.11 —
1
0.14
2
0.12
0.14 —
0.11 —
1
0.11
0.10 —
0.18
1
0.11 —
0.15 —
0.13 —
—
<0.27
<0.11
—
±
±
—
<0.54
±
±
<0.30
<0.20
0.79
1.02
0.08
0.06
0.80
0.55
0.14
0.08
0.88
0.11
±
—
1.17
0.18
±
—
<0.68
—
n
1
3
3
1
1
4
1
1
1
2
2
2
2
1
2
1
2
1
[Si/Fe]
7.53
0.36
0.24
0.49
0.57
0.41
0.51
0.52
0.22
0.30
0.40
0.48
0.50
0.54
0.62
0.53
0.53
0.46
0.34
±
±
±
±
±
±
±
±
±
±
±
±
±
±
±
±
±
±
0.06
0.07
0.06
0.09
0.12
0.04
0.07
0.11
0.14
0.08
0.10
0.08
0.08
0.10
0.13
0.11
0.11
0.13
n
6
14
17
5
16
14
5
16
15
15
13
13
13
5
14
5
15
5
[Ca/Fe]
6.27
0.22
0.32
0.23
0.37
0.23
0.40
0.39
0.33
0.49
0.28
0.25
0.19
0.29
0.37
0.53
0.31
0.47
0.39
±
±
±
±
±
±
±
±
±
±
±
±
±
±
±
±
±
±
0.08
0.02
0.03
0.07
0.03
0.03
0.04
0.06
0.05
0.05
0.04
0.04
0.02
0.04
0.03
0.05
0.04
0.04
Table 4.
[Ti/Fe], [Sc/Fe] II, [V/Fe], [Cr/Fe] and [Ni/Fe] abundance ratios for each observed stars with the number of used lines and the
Star ID
SUN
n
[Ti/Fe]
5.00
3
NGC 1786 978
12
NGC 1786 1248
9
NGC 1786 1321
4
NGC 1786 1436
12
NGC 1786 1501
15
NGC 1786 2310
2
NGC 1786 2418
6
NGC 2210 122
9
NGC 2210 309
7
NGC 2210 431
7
NGC 2210 764
5
NGC 2210 1181
8
NGC 2257 136
3
NGC 2257 189
NGC 2257 295
4
NGC 2257 586 —
9
NGC 2257 842
3
NGC 2257 993
0.11
0.16
0.13
0.40
0.01
0.15
0.13
0.38
0.35
0.26
0.26
0.28
0.24
0.25
0.33
0.03
0.02
0.02
0.05
0.05
0.05
0.05
0.08
0.06
0.07
0.09
0.09
0.05
0.01
0.08
±
±
±
±
±
±
±
±
±
±
±
±
±
±
±
—
0.24
0.16
±
±
0.06
0.05
n
3
5
5
4
6
4
4
5
4
5
6
5
6
4
4
3
6
4
corresponding internal error.
[Sc/Fe]II
3.13
n
[V/Fe]
3.97
n
[Cr/Fe]
5.67
-0.04
0.06
-0.17
-0.14
-0.05
0.03
-0.03
-0.05
0.12
0.06
-0.19
-0.06
-0.16
-0.19
-0.10
-0.17
-0.04
-0.16
±
±
±
±
±
±
±
±
±
±
±
±
±
±
±
±
±
±
0.03 —
5
0.07
7
0.07
1
0.05
6
0.06
6
0.04
1
0.04
2
0.07
5
0.07
4
0.04
5
0.06
5
0.09
1
0.06
0.05 —
0.08
2
0.06 —
1
0.02
0.07 —
—
0.05
-0.14
-0.05
-0.18
0.05
-0.04
-0.23
-0.22
-0.09
-0.29
-0.35
-0.12
±
±
±
±
±
±
±
±
±
±
±
±
—
0.13
-0.01
±
—
±
—
—
5
0.06
6
0.06
0.09 —
5
0.08
0.06
3
0.07 —
3
0.08
3
0.08
6
0.07
3
0.08
3
0.03
7
0.11
—
4
—
7
—
0.15
0.06
—
0.05
0.04
-0.03
-0.11
±
±
—
-0.10
0.00
0.08
0.05
±
±
—
0.06
0.08
0.08
0.08
0.06
0.07
-0.07
-0.05
-0.04
-0.11
-0.16
-0.06
±
±
±
±
±
±
—
0.08
-0.28
±
—
0.04
-0.18
±
—
n
4
10
11
2
10
12
4
7
8
7
10
7
7
2
5
1
8
2
[Ni/Fe]
6.28
-0.04
-0.12
-0.08
-0.09
-0.11
-0.03
-0.14
-0.04
0.14
-0.15
-0.01
-0.14
0.05
0.02
-0.11
-0.14
0.01
-0.03
±
±
±
±
±
±
±
±
±
±
±
±
±
±
±
±
±
±
0.03
0.02
0.04
0.06
0.05
0.03
0.04
0.04
0.03
0.05
0.07
0.08
0.05
0.07
0.04
0.11
0.08
0.09
Table 5.
[Y/Fe] II, [Ba/Fe] II, [La/Fe] II, [Ce/Fe] II, [Nd/Fe] II and [Eu/Fe] II abundance ratios for each observed stars with the number of
used lines and the corresponding internal error.
Star ID
SUN
n
[Y/Fe]II
2.24
NGC 1786 978 —
3
NGC 1786 1248
2
NGC 1786 1321
NGC 1786 1436 —
1
NGC 1786 1501
NGC 1786 2310
2
NGC 1786 2418 —
2
NGC 2210 122
1
NGC 2210 309
1
NGC 2210 431
2
NGC 2210 764
2
NGC 2210 1181
2
NGC 2257 136
NGC 2257 189 —
NGC 2257 295
1
NGC 2257 586 —
2
NGC 2257 842
NGC 2257 993 —
—
0.09
0.08
-0.36
-0.48
±
±
—
0.12
0.06
-0.20
-0.32
±
±
—
0.08
0.14
0.12
0.10
0.08
0.08
-0.32
-0.31
-0.40
-0.25
-0.41
-0.29
±
±
±
±
±
±
—
0.18
-0.28
±
—
0.11
-0.23
±
—
n
1
3
3
1
3
3
1
3
2
3
3
3
3
1
3
1
3
1
[Ba/Fe]II
2.13
n
[La/Fe]II
1.17
n
[Ce/Fe]II
1.58
-0.21
-0.18
-0.21
-0.24
-0.16
-0.06
-0.19
0.11
0.09
0.07
0.03
0.09
0.01
-0.06
-0.07
-0.11
-0.01
0.02
±
±
±
±
±
±
±
±
±
±
±
±
±
±
±
±
±
±
1
0.12
1
0.07
1
0.06
0.09 —
1
0.07
1
0.05
1
0.07
0.06
1
0.10 —
1
0.07
1
0.10
1
0.06
1
0.06
1
0.10
0.10
1
0.11 —
1
0.09
1
0.13
0.11
0.01
0.32
0.24
0.10
0.26
-0.12
0.08
0.00
-0.06
±
±
±
—
±
±
±
±
—
0.12 —
1
0.12
1
0.10
—
1
0.12
0.08
1
0.07 —
1
0.11
—
1
±
1
±
1
±
1
<–0.10
<–0.10 —
1
<0.00
—
—
1
<–0.10
<–0.10 —
0.12
0.14
0.11
—
±
±
—
±
±
—
0.08
0.11
0.12
0.10
-0.13
0.10
0.12
0.08
0.10
0.11
±
—
0.07
-0.08
0.15
0.12
0.14
0.11
±
±
±
<0.00
—
<0.10
—
<0.10
—
n
—
3
3
—
3
2
—
3
3
3
3
3
3
—
4
—
3
—
[Nd/Fe]II
1.50
—
0.65
0.85
0.07
0.06
0.87
0.63
0.07
0.06
0.65
0.64
0.56
0.34
0.43
0.71
0.06
0.08
0.07
0.10
0.06
0.06
0.48
0.09
±
—
0.50
0.09
±
—
±
±
—
±
±
—
±
±
±
±
±
±
—
n
—
1
1
—
1
1
—
1
1
1
1
1
1
—
1
—
1
—
[Eu/Fe]II
0.51
—
0.60
0.78
0.12
0.10
0.69
0.49
0.12
0.08
0.82
0.70
0.77
0.75
0.63
0.75
0.11
0.14
0.12
0.14
0.11
0.11
0.59
0.18
±
—
0.70
0.15
±
—
±
±
—
±
±
—
±
±
±
±
±
±
—
Table 6. Variation of each abundance ratio due to atmospherical parameters, obtained according to the method by Cayrel et al. (2004). Second column reports the difference for each abundance ratio between the model with Teff increased by 100 K (and the re-optimization of the other parameters) and the original one. The third column reports the same differences but considering a model Teff decreased by 100 K. The last column lists the final average error.

Ratio | (MOD)+100K - MOD (dex) | (MOD)−100K - MOD (dex) | Average (dex)
[O/Fe] | +0.13 | –0.11 | ±0.12
[Na/Fe] | –0.07 | +0.06 | ±0.07
[Mg/Fe] | –0.04 | +0.05 | ±0.05
[Al/Fe] | –0.05 | +0.04 | ±0.05
[Si/Fe] | –0.03 | +0.10 | ±0.07
[Ca/Fe] | –0.02 | +0.01 | ±0.02
[Sc/Fe]II | +0.06 | +0.02 | ±0.04
[Ti/Fe] | +0.09 | –0.10 | ±0.10
[V/Fe] | +0.11 | –0.12 | ±0.12
[Cr/Fe] | +0.03 | –0.06 | ±0.05
[Fe/H] | +0.08 | –0.09 | ±0.09
[Ni/Fe] | +0.03 | –0.02 | ±0.03
[Y/Fe]II | +0.02 | –0.04 | ±0.04
[Ba/Fe]II | +0.07 | –0.09 | ±0.09
[La/Fe]II | +0.15 | –0.09 | ±0.15
[Ce/Fe]II | +0.09 | –0.03 | ±0.06
[Nd/Fe]II | –0.08 | +0.11 | ±0.10
[Eu/Fe]II | +0.04 | –0.03 | ±0.04
Table 7. Average abundance ratios for the 3 old LMC clusters discussed in this study with the
corresponding dispersion by the mean.
Ratio
NGC 1786
Mean
[O/F e]
[N a/F e]
[M g/F e]
[Al/F e]
[Si/F e]
[Ca/F e]
[Sc/F e]II
[T i/F e]
[V /F e]
[Cr/F e]
[F e/H]
[N i/F e]
[Y /F e]II
[Ba/F e]II
[La/F e]II
[Ce/F e]II
[N d/F e]II
[Eu/F e]II
<–0.04
0.22
0.35
<0.55
0.44
0.31
–0.05
0.16
–0.05
–0.06
–1.75
–0.09
–0.34
-0.18
0.17
0.04
0.75
0.64
σ
0.36
0.34
0.36
0.43
0.11
0.08
0.08
0.12
0.09
0.05
0.02
0.04
0.11
0.05
0.12
0.11
0.12
0.12
NGC 2210
Mean
0.23
0.33
0.31
<0.48
0.38
0.31
–0.02
0.31
–0.24
–0.09
–1.65
–0.04
–0.34
0.10
-0.02
0.06
0.52
0.74
NGC 2257
Mean
<–0.06
0.33
0.46
<0.91
0.50
0.39
–0.14
0.24
0.00
–0.17
–1.95
–0.03
–0.27
–0.04
<-0.08
<-0.07
0.56
0.68
σ
0.18
0.14
0.29
0.25
0.09
0.09
0.06
0.06
0.12
0.11
0.04
0.08
0.03
0.05
0.04
0.06
0.13
0.08
σ
0.07
0.09
0.36
0.23
0.12
0.11
0.12
0.05
0.10
0.05
0.04
0.12
0.07
0.03
0.08
0.10
0.14
0.07
Note. — [Fe/H], [O/Fe],
[Mg/Fe] and [Al/Fe] abundance
ratios are from Mucciarelli et al. (2009) and reported here for sake of
completeness.
[Na/Fe],
Table 8. Literature sources for the comparison samples.
47 Tuc
NGC 2808
NGC 6287
NGC 6293
NGC 6397
NGC 6541
NGC 6752
M3
M4
M5
M10
M13
M15
M71
Reference
Galactic GCs
Carretta et al. (2004), James et al. (2004)
Carretta (2006)
Lee & Carney (2002)
Lee & Carney (2002)
James et al. (2004)
Lee & Carney (2002)
Yong et al. (2005)
Sneden et al. (2004)
Ivans et al. (1999)
Ivans et al. (2001)
Kraft et al. (1995)
Sneden et al. (2004)
Sneden et al. (1997)
Ramirez & Cohen. (2002)
Galactic Field Stars
Thin/Thick Edvardsson et al. (1993); Koch & Edvardsson (2002)
Halo
Halo/Thick
Halo/Thick
Halo/Thick
Thin
Thick
Burris et al. (2000)
Fulbright (2000)
Stephens & Boesgaard (2002)
Gratton et al. (2003)
Reddy et al. (2003)
Reddy et al. (2006)
dSph
Shetrone, Cot´e & Sargent (2001)
Shetrone, Cot´e & Sargent (2001)
Shetrone, Cot´e & Sargent (2001)
Shetrone et al. (2003); Geisler et al. (2005)
Shetrone et al. (2003); Letarte et al. (2006)
Shetrone et al. (2003)
Shetrone et al. (2003)
Draco
Sextans
Ursa Minor
Sculptor
Fornax
Carina
Leo I
|
synthetic_cpt | 3 | Neural_Machine_Translation_between_Low-Resource_Languages_with_Synthetic_Pivoting.pdf | International Journal of Engineering Trends and Technology
ISSN: 2231 – 5381 /doi:10.14445/22315381/IJETT-V69I9P227
Volume 69 Issue 9, 230-235, September, 2021
© 2021 Seventh Sense Research Group®
Original Article
Attention based Sequence to Sequence Learning for
Machine Translation of Low Resourced Indic
Languages – A case of Sanskrit to Hindi
Vishvajit Bakarola1, Jitendra Nasriwala2
1 Assistant Professor, Chhotubhai Gopalbhai Patel Institute of Technology, Uka Tarsadia University, Bardoli, Gujarat, India
2 Associate Professor, Babumadhav Institute of Information Technology, Uka Tarsadia University, Bardoli, Gujarat, India
[email protected]
Abstract - Deep Learning techniques are powerful in mimicking humans in a particular set of problems. They have achieved a remarkable performance in complex learning tasks. Deep learning inspired Neural Machine Translation (NMT) is a proficient technique that outperforms traditional machine translation. Performing machine-aided translation on Indic languages has always been a challenging task considering their rich and diverse grammar. The neural machine translation has shown quality results compared to the traditional machine translation approaches. The fully automatic machine translation becomes problematic when it comes to low-resourced languages, especially with Sanskrit. This paper presents attention mechanism based neural machine translation by selectively focusing on a particular part of language sentences during the translation. The work shows the construction of a Sanskrit to Hindi bilingual parallel corpus with nearly 10K samples and having 178,000 tokens. The neural translation model equipped with an attention mechanism has been trained on the Sanskrit to Hindi parallel corpus. The approach has shown the significance of attention mechanisms to overcome long-term dependencies, primarily associated with low resources Indic languages. The paper shows the attention plots on testing data to demonstrate the alignment between source and translated words. For the evaluation of the translated sentences, manual score based human evaluation and automatic evaluation metric based techniques have been adopted. The attention mechanism based neural translation has achieved 88% accuracy in human evaluation and a BLEU score of 0.92 on Sanskrit to Hindi translation.
Keywords — Attention Mechanism, Low-resourced
languages, Neural Machine Translation, Sanskrit, Sequence
to Sequence Learning
I. INTRODUCTION
Humans have several different ways to communicate
with each other. Spoken and written languages are among the
most preferred communication ways. To bridge the gap
between languages, it is essential to convert a foreign
language to a regional language, and the process is known as
the translation process. The translation is a complicated and time-consuming process that requires grammatical and domain knowledge of both languages. Typically, machine
translation is converting input language (source language) to
output language (target language), preserving its semantics.
Initially, this process was carried out by a human expert,
which is accurate enough for a specific domain at a given
time. However, human translation is tedious and time-
consuming. With a human translator, reliability is the next
crucial issue for different experts concerned with the
translation process, and the end translation may vary. The
first notable step in computer-aided machine translation was
taken in the 1950s. Since then, the efforts have focused on
developing a fully automatic machine translation system that
accurately mimics human-level fluency [1]. The primary
research in machine translation is to melt away the language
barrier and open up literature, communication, and language
understanding with ease for everyone.
Machine translation has always been a challenging and fascinating task for the Indic languages. Having highly diverse grammar and being morphologically rich
languages, machine translation on Indic languages still
requires tremendous development efforts. The work focused
on developing a fully automatic machine translation system
keeping Sanskrit as one of the language pairs. Sanskrit is a
language of ancient India and is considered the mother of
almost all Indo-European languages. Sanskrit and Hindi both
belong to the Indo-Aryan language family. In the linguistic
community, Hindi has been regarded as a descendant of
classical Sanskrit through Prakrit [1, 2]. In India, 43.63
percent of the total population are native Hindi speakers. The
world report shows that nearly 4.5 percent of the world
population are Hindi speakers, which is just 0.5 percent less
than native English speakers. Sanskrit is the world's oldest
natural language written in most scientific ways. Being an
existing human spoken language, Sanskrit is one of the
official 22 languages of India according to the eighth schedule
of India's constitution. In 2009, Sanskrit was declared the
second official language of Uttarakhand and Himachal
Pradesh's state in India. Being the primary language of
ancient times, all four Vedas and six primary fields of study
to learn the Vedas had been written in Sanskrit. The considerable literature available in Sanskrit and its inaccessibility due to lack of understanding is the primary motivation of the machine translation work on Sanskrit.
The paper presents the work performing Sanskrit to
Hindi machine translation with Neural Machine Translation
(NMT) approach. The rest of the article is composed as
follows. Section 2 discusses the vastness and complexity of
Sanskrit grammar. Section 3 presents several distinctive
traditional machine translation approaches and work done on
Sanskrit based on those approaches. Section 4 unfolds the
NMT along with its significance and major types that deliver
a human-like translation. Section 5 details the environment
setup, followed by Section 6, showing results and evaluation
metrics on machine-translated target language sentences.
Finally, Section 7 concludes the work with its future
perspectives.
II. LITERATURE REVIEW
The journey of machine translation began in the late 1950s. Rule-based machine translation is the oldest and most foundational approach, further divided into transfer and interlingua-based translation. Over time, with the increasing demand and availability of digital text data, the evolution of various state-of-art approaches has been observed. Example-based machine translation and statistical machine translation are among those that require corpora and are classified broadly under corpus-based methods [9]. The work on machine translation keeping
Sanskrit as one of the language pairs started nearly 30 years
back. Desika was the first system developed in the year 1992
[10]. This section presents other works carried out on the
Sanskrit language.
A. Statistical Machine Translation
The statistical machine translation model uses statistical
models with parameters derived from the analysis of the
bilingual corpora. Statistical machine translation is a corpus-
based approach, and they do not know linguistic rules. This
system is good at fluency and catching exceptions to the
rules [7]. In 2007, the statistical machine translation approach was used for Google Translate, which supported English to Sanskrit translation with other 57 world languages [8].
B. Rule based Machine Translation
The rule-based model generated the translation of a source
language using pre-defined and manually designed
grammatical rules. The rules-based models are easy to
implement, and they occupy comparatively small memory
space. One of the significant advantages of this approach is that
it does not require sizeable bi-lingual language corpora.
However, the design of grammatical rules is a language-
dependent, tedious, and highly time-consuming process.
In 2012, a rule-based approach was carried out on English
to Sanskrit translation and applied to 20 random English
sentences. The author has reported a BLEU score of 0.4204
[5]. In 2015, work was carried out on English to Sanskrit
translation using context-free grammar techniques [6]. In
2017, the interlingual machine translation approach was
adopted for Sanskrit to English translation [11]. The work
has given significant insights for intermediate language
representation and used the Paninian system from Karak
analysis.
C. Other Works on Machine Translation using Sanskrit
Two works have reported using the neural network
approach to achieve translation with the Sanskrit language.
In 2019, a corpus-based machine translation system with a neural network was developed for Sanskrit to Hindi
translation. The author has reported that their system is better
than a rule-based system with a 24 percent higher BLEU
score and 39.6 percent less word error rate [12]. Another work carried out in 2019 uses a recurrent neural network for sequence-to-sequence translation [13]. In 2020, the augmented translation technique with Zero-Shot Translation was carried out to translate Sanskrit to Hindi. The author has reported a BLEU score of 13.3, with a higher side stemming from pre-processing [20].
III. NEURAL MACHINE TRANSLATION
Neural Machine Translation or NMT is the most recent
approach to achieve automatic machine translation. NMT
uses a neural network to model the conditional probability of
the target sequence over the source sequence. NMT has an
excellent capability to overcome the traditional machine
translation models' shortcomings and provide comparatively
efficient human-like fluent translation.
Neural networks learn the source sequence and relate it
with an appropriate target sequence mimicking the human
way to do this process. Recurrent Neural Network or RNN
has been considered for this task, as RNNs model the long-term dependencies between the source and target languages. Usually, RNNs suffer from the exploding gradient problem, which refers to the situation where the network weights grow excessively due to the explosion of the long-term components, and the vanishing gradient problem, where the network weights get updated at a significantly lower rate of change and the network cannot learn over long-term components. This restricts vanilla RNNs from learning long-term dependencies [14]. The Recurrent Neural Network or RNN uses two significant
variants – Long Short-Term Memory (LSTM) and Gated
Recurrent Unit (GRU) [15], especially to overcome the long-
term dependencies learning problem of vanilla RNNs.
A. Encoder-Decoder Model
The Encoder-Decoder model is an extension of the vanilla
RNN model, which makes use of two dedicated networks for
encoding and decoding the language sequences, as shown in
Figure 1. RNNs are good at mapping input and output sequences when their alignment is known ahead of time. At training,
the input sequence pair to the model is provided, and the
model predicts the next word until it meets the sequence end
markers [16].
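A minimal sketch of the encoder-decoder idea in Fig. 1 is given below. This is not the authors' implementation; the use of GRU cells and the vocabulary, embedding and hidden sizes are illustrative assumptions only.

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, src_vocab, emb_dim=256, hid_dim=512):
        super().__init__()
        self.embed = nn.Embedding(src_vocab, emb_dim)
        self.rnn = nn.GRU(emb_dim, hid_dim, batch_first=True)

    def forward(self, src):                  # src: (batch, src_len) token ids
        _, hidden = self.rnn(self.embed(src))
        return hidden                         # fixed-length summary of the source

class Decoder(nn.Module):
    def __init__(self, tgt_vocab, emb_dim=256, hid_dim=512):
        super().__init__()
        self.embed = nn.Embedding(tgt_vocab, emb_dim)
        self.rnn = nn.GRU(emb_dim, hid_dim, batch_first=True)
        self.out = nn.Linear(hid_dim, tgt_vocab)

    def forward(self, prev_token, hidden):    # one decoding step
        output, hidden = self.rnn(self.embed(prev_token), hidden)
        return self.out(output), hidden       # logits over the target vocabulary

# Decoding keeps emitting target tokens, one step at a time,
# until the end-of-sequence marker is produced.
```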
B. Sequence to Sequence Learning with Attention
In sequence-to-sequence learning, the model collectively
memorizes the whole vector of the input source sequence,
and the decoder uses these encoder states to generate the
target sequence. This situation enforces the model to learn
small sequences fine, but the model faces trouble learning
large sequences, often encountered in language translation
problems. One solution to overcome this and continue
learning long sentences, even with more than 50 words,
focuses on the source sequence's selective part [17].
Fundamentally, to overcome this problem, instead of encoding the whole sequence in a single vector, it is preferable to encode each word of the sequence into a vector [18] and use these vectors in the process of decoding. With this approach, the small sequences have a small length vector, and large sequences have a significant vector, since the total number of words in the given sequence is equal to the number of vectors.
Fig. 1 Encoder-Decoder Architecture
The hidden layer represents the actual meaning of the sentence, and it is then fed to the rest of the sequence in the target language. This process gets repeated until an acceptable translation is achieved. Let X be the source
language and Y be the target language. The encoder network
converts the source sentence 𝑥1, 𝑥2, 𝑥3, … , 𝑥𝑛, into fixed
dimension vector space. The decoder network's task is to
predict one word at a time with conditional probability as Eq.
1.
𝑃(𝑌|𝑋) = 𝑃(𝑌|𝑋1, 𝑋2, 𝑋3, … , 𝑋𝑘) (1)
In Eq. 1, the encoder network encodes the given sequence X1, X2, X3, …, Xk into a fixed dimension vector. Each
term used in the distribution will be represented by the
softmax layer, i.e., the last layer of the network, which
ultimately returns the probability of each class label.
The LSTM learns the conditional probability P(y1, …, yT′ | x1, …, xT). Here, x1, …, xT is the input sequence with its corresponding output sequence y1, …, yT′, whose length T′ may vary from T.
It has been observed from the previous encoder-decoder architecture that the encoder results in a given sequence at the end of the entire process. The decoder is forced to find the relevant information to perform the translation with the use of the encoder representation. This ultimately shows that the decoder requires every piece of the information for the translation. However, this is not a problem with shorter sequences, but it becomes hard to decode the entire
sequence from a single vector as the sequence size increases.
The attention mechanism is a way forward. In practice, with
natural languages, it is not always suggested to look at the
state immediately preceding the present state. Instead of this,
some other conditions need to be looked at by the decoder.
The foundational idea behind the attention mechanism is that
the decoder network's output depends on the weightage
combination of all the input sequence states rather than only
the immediately previous one [17, 18].
The new architecture focusing on the attention mechanism was proposed in 2015, resolving long-term
dependencies with LSTM. The architecture consists of
bidirectional RNN as an encoder and decoder that simulates
searching through the input sequence during decoding [18].
The goal is to maximize the conditional probability of the
target sequence given the source sequence. In the model,
each conditional probability will be defined as Eq. 3.
After feeding the input sequence to the LSTM, the hidden state of the LSTM contains the sequence embedding. Finally, this representation is provided to the output LSTM having the hidden states v. Eq. 2 shows the calculation of the probability for the output sequence.
P(y1, …, yT′ | x1, …, xT) = ∏_{t=1}^{T′} P(yt | v, y1, …, yt−1)    (2)
𝑃(𝑦𝑖|𝑦1, … , 𝑦𝑖−1, 𝑋) = 𝑔(𝑦𝑖−1, 𝑠𝑖, 𝑐𝑖) (3)
Here, 𝑠𝑖 is hidden state of RNN for time i, which is
further computed with Eq. 4. The context vector 𝑐𝑖 is similar
to the vector v presented in Eq. 2.
𝑠𝑖 = 𝑓( 𝑠𝑖−1, 𝑦𝑖−1, 𝑐𝑖) (4)
The context vector ci depends on the sequence of annotations to which the encoder maps the input sequence. The ci is computed with Eq. 5.
ci = Σ_{j=1}^{Tx} αij hj    (5)
TABLE 1. Statistics of Sanskrit-Hindi Bilingual Corpus
Language Pair | Samples | Tokens
Sanskrit | 10650 | 76674
Hindi | 10650 | 101690
Where the αij is a weight vector and it is computed for each annotation hj as Eq. 6.
αij = exp(eij) / Σ_{k=1}^{Tx} exp(eik),   and   eij = a(si−1, hj)    (6)
This alignment model shows how well the inputs around
position j and the output at position i get matched. The
alignment model is represented as a feedforward neural
network. In traditional machine translation systems, this
alignment is not explicitly modeled. Figure 2 depicts the
functional architecture of the alignment model from [18].
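A small numerical sketch of Eqs. 5-6 (one decoding step with an additive alignment score) is given below. It is not the authors' code; the weight matrices are randomly initialized purely for illustration, and in a trained model they would be learned jointly with the rest of the network.

```python
import numpy as np

def bahdanau_step(s_prev, H, Wa, Ua, va):
    """e_ij = v_a^T tanh(W_a s_{i-1} + U_a h_j); alpha_i = softmax(e_i);
    c_i = sum_j alpha_ij h_j  (Eqs. 5-6)."""
    scores = np.tanh(H @ Ua.T + s_prev @ Wa.T) @ va   # (T_x,) alignment scores
    alpha = np.exp(scores - scores.max())
    alpha = alpha / alpha.sum()                       # attention weights
    context = alpha @ H                               # context vector
    return alpha, context

T_x, hid = 6, 8                                       # toy sizes
rng = np.random.default_rng(0)
H = rng.normal(size=(T_x, hid))                       # encoder annotations h_1..h_Tx
s_prev = rng.normal(size=hid)                         # previous decoder state s_{i-1}
Wa = rng.normal(size=(hid, hid))
Ua = rng.normal(size=(hid, hid))
va = rng.normal(size=hid)
alpha, context = bahdanau_step(s_prev, H, Wa, Ua, va)
print(alpha.round(3), alpha.sum())                    # weights sum to 1
```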
Fig. 2 The architecture of model trying to generate
T-th target word 𝒚𝑻 when fed with the input
sequence 𝒙𝒊, … , 𝒙𝑻 [18]
IV. EXPERIMENT SETUP
A. Dataset
The bilingual corpus of Sanskrit to Hindi language pairs
has been developed. The corpus contains 10K Sanskrit
sentences parallel translated into Hindi sentences, as shown
in Table 1. The Sanskrit sentences are obtained majorly
focusing on real-life events from online and offline
resources. Help from the linguist community and Sanskrit
scholars have been taken to develop and validate human
translation.
B. System Environment Setup
The sequence-to-sequence machine translation model
based on Bahdanau's attention [18] has been trained with
Sanskrit to Hindi bilingual dataset. The model is designed
with 1024 embedding dimensions and Adam as an optimizer
[19]. Further, the hyperparameters are tuned with trial-and-
error methods. The model is trained with early stopping
criteria on Tesla T4 GPUs with 16 GBs of memory.
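The reported configuration can be sketched roughly as follows. This is only a hedged illustration of the stated settings (1024-dimensional embeddings, Adam optimizer, early stopping), not the authors' code; the tiny placeholder model and the validation-loss values stand in for the full attention-based network and a real validation pass.

```python
import torch
import torch.nn as nn

model = nn.Embedding(num_embeddings=8000, embedding_dim=1024)   # placeholder model
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

best_val, patience, bad_epochs = float("inf"), 3, 0
for epoch in range(100):
    # ... run one training epoch here, then compute the validation loss ...
    val_loss = 1.0 / (epoch + 1)                # placeholder value for illustration
    if val_loss < best_val - 1e-4:
        best_val, bad_epochs = val_loss, 0      # improvement: reset the counter
    else:
        bad_epochs += 1
        if bad_epochs >= patience:              # early-stopping criterion
            break
```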
C. Data Pre-processing
The present work is using the Sa-Hi language pair from
the dataset shown in Table 1. The spell normalization is a
significant issue in data pre-processing with the Devanagari
script. In Hindi text normalization, words with Persio-Arabic
origin are specially taken care of in order to preserve the
actual semantics. As the data encoded in Unicode has more
than one way of storage, all words have been represented in
the same way for normalization. Further, the pre-processing
of numbers and the named entity has been carried out to
establish uniformity in the corpus.
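A minimal example of the normalization step is shown below. NFC normalization is one standard way to collapse the multiple Unicode storage forms mentioned above; the exact normalization form and the number/named-entity handling used in this work are not spelled out, so the extra digit-mapping step is only an assumption.

```python
import re
import unicodedata

def normalize_devanagari(text):
    # Collapse alternative Unicode encodings of the same Devanagari syllable.
    text = unicodedata.normalize("NFC", text)
    # Illustrative extra step: map Devanagari digits to ASCII digits.
    digit_map = {ord(d): str(i) for i, d in enumerate("०१२३४५६७८९")}
    text = text.translate(digit_map)
    return re.sub(r"\s+", " ", text).strip()

print(normalize_devanagari("अहं  बहु  व्यस्तः  अस्मि ।"))
```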
V. RESULTS AND EVALUATION
The model was tested for more than a hundred sentences
of source Sanskrit language. The evaluation of the target
Hindi language was carried out through two different
approaches. The first approach works on score based human
evaluation. In this approach, four different scores have been
proposed as shown in Table 2. The score based human
evaluation approach is used for manual verification of model
generated target language sentences. Here, human linguist
has evaluated target sentences given the source sentences on
the scale of 4, where score 4 represents a completely correct sentence in both syntactic and semantic aspects and score 1 represents that the sentence is wrong in both syntactic and semantic aspects, delivering no meaning
given the source sentence.
In the second approach, an automatic evaluation of target
language with BLEU metric [21] has been followed. BLEU
score is a widely used metric that is used to calculate the
accuracy of model generated sentences in comparison to
reference sentences by human linguist in the target language.
The BLEU score has been considered in the range of 0 to 1.
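The mechanics of the automatic metric can be illustrated with standard tooling. The snippet below uses NLTK's sentence-level BLEU on toy token lists; it is only a sketch of how such a score is computed and is not meant to reproduce the reported 0.92 value or the authors' exact evaluation pipeline.

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

reference = [["मैं", "बहुत", "व्यस्त", "हूँ"]]     # human reference translation(s)
hypothesis = ["मैं", "बहुत", "व्यस्त", "हूँ"]       # model output tokens
smooth = SmoothingFunction().method1
score = sentence_bleu(reference, hypothesis, smoothing_function=smooth)
print(round(score, 2))                              # 1.0 for an exact match
```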
Considering this as the challenge, the Sanskrit to Hindi bilingual parallel corpus has been
constructed with more than 10K samples and 178,000 tokens. The corpus has been
developed in association with the linguist community and used for training the neural
machine translation model after the required pre-processing and validation.
The LSTM based sequence-to-sequence model has been trained with Bahdanau's attention
on the parallel corpus. It has been observed from the experimentation that the model
performs well by focusing only on the relevant portion of information in the sentence.
After sufficient training with proper tuning of hyperparameters, the model gives a human
evaluation accuracy of 88% and a BLEU score of 0.92 on unseen Sanskrit sentences. From
Table 3, it has been observed that the results do not meet expectations for a few sentences
where the model comes across new vocabulary. The attention plots demonstrate the
alignment between the source and target words.
APPENDIX A
TABLE 4. Attention Plots of Sample Translations
[Attention heat-map plots for four sample translations, aligning the source Sanskrit words
with the generated Hindi target words; the sentence pairs correspond to samples also
listed in Table 3 (Appendix B). The plots themselves are not reproducible in text form.]
TABLE 2. The Score based Human Evaluation
Score   Meaning
4       The translation is completely correct in both syntactic and semantic aspects.
3       The translation is not entirely correct, but it represents the partial semantic meaning of the source sentence.
2       The translation is syntactically correct but makes no sense in favor of the source sentence.
1       The translation is incorrect in both syntactic and semantic manner.
In testing, the model has obtained an accuracy of 88% with the score-based human
evaluation method and a BLEU score of 0.92. However, when coming across new
vocabulary, the model generates semantically incomplete sentences. Several sentences
from the test data are shown in Appendix B. The attention plots have been presented for
selected sentences, which are also part of the results shown in Table 3. It has been
observed that the model delivers strong attention between words having a more significant
frequency of occurrence with varieties of correlation. The attention plots for several
results are shown in Appendix A.
The Indic machine translation system has been deployed
locally with a user-friendly web interface by integrating the
neural machine translation model in the backend, as shown
in Fig. 3.
Fig. 3 Indic Machine Translation System Interface
VI. CONCLUSION
The work shows the significance of the attention mechanism in overcoming the long-term
dependencies associated with the vanilla LSTM model during sequence-to-sequence
learning. Sanskrit being a low-resourced language, significantly less digital content is
available for it.
[7] P. Koehn, Statistical Machine Translation. Cambridge University Press, (2010).
[8] P. D. Mane and A. Hirve, "Study of Various Approaches in Machine Translation for Sanskrit Language," vol. 2, (2013), 383–387.
[9] T. Siddiqui and U. S. Tiwary, Natural Language Processing and Information Retrieval. Oxford University Press, (2015).
[10] P. R. V. Veda, "Computer Processing of Sanskrit," C-DAC, Pune, (1992).
[11] H. S. Sreedeepa and S. M. Idicula, "Interlingua based Sanskrit-English machine translation," Proc. IEEE Int. Conf. Circuit, Power Comput. Technol. (ICCPCT), (2017), doi: 10.1109/ICCPCT.2017.8074251.
[12] M. Singh, R. Kumar, and I. Chana, "Corpus based Machine Translation System with Deep Neural Network for Sanskrit to Hindi Translation," Procedia Comput. Sci., vol. 167, (2020), doi: 10.1016/j.procs.2020.03.306, 2534-2544.
[13] N. Koul and S. S. Manvi, "A proposed model for neural machine translation of Sanskrit into English," Int. J. Inf. Technol., (2019), doi: 10.1007/s41870-019-00340-8.
[14] A. Shewalkar, D. Nyavanandi, and S. A. Ludwig, "Performance Evaluation of Deep Neural Networks Applied to Speech Recognition: RNN, LSTM and GRU," J. Artif. Intell. Soft Comput. Res., vol. 9, no. 4, doi: 10.2478/jaiscr-2019-0006, 235-245.
[15] Y. Hu, A. Huber, and S.-C. Liu, "Overcoming the vanishing gradient problem in plain recurrent networks," (2018). [Online]. Available: https://openreview.net/forum?id=Hyp3i2xRb.
[16] I. Sutskever, O. Vinyals, and Q. V. Le, "Sequence to sequence learning with neural networks," Adv. Neural Inf. Process. Syst., vol. 4, no. January, (2014), 3104–3112.
[17] M. T. Luong, H. Pham, and C. D. Manning, "Effective approaches to attention-based neural machine translation," Conf. Proc. - EMNLP 2015, (2015), doi: 10.18653/v1/d15-1166, 1412-1421.
[18] D. Bahdanau, K. H. Cho, and Y. Bengio, "Neural machine translation by jointly learning to align and translate," 3rd Int. Conf. Learn. Represent. (ICLR 2015) - Conf. Track Proc., (2015), 1-15.
[19] D. P. Kingma and J. Ba, "Adam: A Method for Stochastic Optimization," (2017).
[20] R. Kumar, P. Jha, and V. Sahula, "An Augmented Translation Technique for Low Resource Language Pair: Sanskrit to Hindi Translation," Proc. 2nd Int. Conf. on Algorithms, Computing and Artificial Intelligence (ACAI '19), Sanya, China, (2019), doi: 10.1145/3377713.3377774.
[21] K. Papineni, S. Roukos, T. Ward, and W.-J. Zhu, "BLEU: A Method for Automatic Evaluation of Machine Translation," Proc. 40th Annual Meeting of the Association for Computational Linguistics (ACL), Philadelphia, July (2002), 311-318.
APPENDIX B
TABLE 3. Sample Translation through the System
Source: अन्तः आगन्ततुं शक्नोमि ?
Reference: अंदर आ सकता हूँ क्या ?
Translated: अंदर आ सकता हूँ क्या ?

Source: अहुं तततुं शक्नोमि
Reference: िैं तैर सकता हूँ
Translated: िैं तैर सकता हूँ

Source: अहुं एकामकनी अममि
Reference: िैं अके ली हूँ
Translated: िैं अके ली हूँ

Source: मितरौ मवमय बालकमय कृ ते दामयत्ववाहकौ मतः
Reference: िातास्िता अिने बच्चों की स्हफाज़त के स्लए स्ज़म्िेदार होते हैं
Translated: िातास्िता अिने बच्चों की स्हफाज़त के स्लए स्ज़म्िेदार होते हैं

Source: जािानदेशः मवश्वमय देशेषत एकः अर्थतुंत्रः देशः अममत
Reference: जािान दतस्नया के सबसे ताकतशाली अर्थतंत्रों िें से एक है
Translated: जािान दतस्नया के सबसे ताकतशाली अर्थतंत्रों िें से एक है

Source: प्रवेशात् िूवुं िादका त्याज्या
Reference: अिने हार् िें िरना उसके बाहर जाने की कोस्शश करो
Translated: अिने हार् िें िरना उसके बाहर जाने की कोस्शश करो

Source: अहुं बहु व्यमतः अममि
Reference: िैं बहुत व्यस्त हूँ
Translated: िैं बहुत व्यस्त हूँ
ACKNOWLEDGEMENT
We would express our gratitude to the Indic linguist
community. Their work has helped us to retrieve insights
into both Sanskrit and Hindi grammar. We would like to
acknowledge Shri Udit Sharma and Shri Harshad Joshi, who
help us construct and validate our parallel corpus. We are
grateful to everyone who has directly or indirectly proven
helpful in our work. We are also thankful to other
researchers whose work helps us derive some conclusions
and identify the problems.
REFERENCES
[1] J. Nasriwala and V. Bakarola, Computational Representation of Paninian Rules of Sanskrit Grammar for Dictionary-Independent Machine Translation, vol. 1046, no. July. Springer Singapore, (2019).
[2] A. C. Woolner, Introduction to Prakrit. University of the Panjab, Lahore, (1917).
[3] P. Kiparsky, "On the Architecture of Panini's Grammar," in Sanskrit Computational Linguistics: First and Second International Symposia, Rocquencourt, France, October 29-31, 2007 / Providence, RI, USA, May 15-17, 2008, Revised Selected and Invited Papers. Berlin, Heidelberg: Springer-Verlag, (2009), 33–94.
[4] B. Panchal, V. Bakrola, and D. Dabhi, "An Efficient Approach of Knowledge Representation Using Paninian Rules of Sanskrit Grammar," in Recent Findings in Intelligent Computing Techniques, (2018), 199–206.
[5] V. Mishra and R. B. Mishra, "English to Sanskrit Machine Translation System: A Rule-Based Approach," Int. J. Adv. Intell. Paradigm., vol. 4, no. 2, (2012), doi: 10.1504/IJAIP.2012.048144, 168–184.
[6] P. Bahadur, A. Jain, and D. S. Chauhan, "Architecture of English to Sanskrit machine translation," IntelliSys 2015 - Proc. 2015 SAI Intell. Syst. Conf., (2015), doi: 10.1109/IntelliSys.2015.7361204, 616-624.
|
synthetic_cpt | 7 | TransformLLM_Adapting_Large_Language_Models_via_LLM-Transformed_Reading_Comprehension_Text.pdf | TRANSFORMLLM: ADAPTING LARGE LANGUAGE MODELS VIA
LLM-TRANSFORMED READING COMPREHENSION TEXT
4
2
0
2
t
c
O
8
2
]
L
C
.
s
c
[
1
v
9
7
4
1
2
.
0
1
4
2
:
v
i
X
r
a
Iftach Arbel
School of Mathematical Sciences
Tel Aviv University
Tel Aviv, Israel
[email protected]
Yehonathan Refael
Department of Electrical Engineering
Tel Aviv University
Tel Aviv, Israel
[email protected]
Ofir Lindenbaum
The Faculty of Engineering
Bar Ilan University
Ramat Gan, Israel
ofi[email protected]
ABSTRACT
Large Language Models (LLMs) have shown promise in highly-specialized domains, however chal-
lenges are still present in aspects of accuracy and costs. These limitations restrict the usage of ex-
isting models in domain-specific tasks. While fine-tuning pre-trained models have shown promising
results, this process can be computationally expensive and require massive datasets of the special-
ized application in hand. In this work, we bridge that gap. We have developed Phi-2-Legal and
Mistral-Legal-7B, which are language models specifically designed for legal applications. These
models are based on Phi-2 and Mistral-7B-v0.1, and have gone through continued pre-training with
over 500 million tokens of legal texts. Our innovative approach significantly improves capabilities
in legal tasks by using Large Language Models (LLMs) to convert raw training data into reading
comprehension text. Our legal LLMs have demonstrated superior performance in legal benchmarks,
even outperforming models trained on much larger datasets with more resources. This work em-
phasizes the effectiveness of continued pre-training on domain-specific texts, while using affordable
LLMs for data conversion, which gives these models domain expertise while retaining general lan-
guage understanding capabilities. While this work uses the legal domain as a test case, our method
can be scaled and applied to any pre-training dataset, resulting in significant improvements across
different tasks. These findings underscore the potential of domain-adaptive pre-training and reading
comprehension for the development of highly effective domain-specific language models.
1 Introduction
Large Language Models (LLM) domain-adaptive pre-training, also known as continued pre-training on domain-
specific corpora [12], is a technique that has been proven effective in adapting large language models (LLMs) to
specific domains [35, 5]. This approach allows LLMs to leverage their general language understanding capabilities
while incorporating domain-specific knowledge, which can benefit downstream domain-specific tasks at reduced costs
[22, 26, 27].
In this process, the LLM is further pre-trained using raw data from the specific domain, such as biomedicine, finance,
or law. This helps the LLM gain domain knowledge, which is demonstrated by its improved performance in fine-tuning
and knowledge probing evaluations within those domains [20, 1, 2]. However, a notable drawback is that continued
pre-training on raw domain corpora can lead to a significant drop in the LLM’s prompting performance, potentially due
to the specialized nature of the domain-specific data [11]. Despite this trade-off, domain-adaptive pre-training remains
a promising approach for adapting LLMs to specific domains, capitalizing on their general language understanding
capabilities while tailoring them to domain-specific tasks and knowledge. Ongoing research efforts aim to mitigate the
potential negative impacts on prompting performance while maximizing the benefits of domain-specific knowledge
acquisition [10, 28].
The notion of reading comprehension was suggested in [6], where instead of continuing to train a large language model
on domain-specific raw data, the raw texts be converted into reading comprehension materials. In this approach, each
text is followed by related tasks, transitioning the model from a "reading" phase to a "comprehension" phase. These
tasks, in a question-answer format, enhance the model’s ability to respond to questions by simulating human learning
practices.
We introduce novel methods to expose the models to a corpus during training, blending a variety of legal reading
comprehension tasks, as well as general language data. To demonstrate the performance of our method, we utilize
Phi-2 and Mistral-7B as base models, which were further pre-trained on 500 million tokens of legal corpus. Our new
legal LLMs present state-of-the-art performance on legal benchmarks, suppressing models trained on larger corpora
with significantly more resources.
Our main contributions are: (i) Utilizing LLMs, to transform raw text to reading comprehension text that is used for
continued pre-training of LLMs in legal domain tasks. (ii) Develop an extended evaluation scheme for legal LLMs.
Existing legal benchmarks are currently fragmented and constructed for classification models with multiple question
responses. Our evaluation protocol adapts MMLU [14] (legal subsets) and LexGLUE [3] for use with generative,
GPT-style [24] transformer [31] models. While the focus of this work is on the legal domain, both the transformation
and evaluation protocols are easily applicable to other domains, including finance, biology, and more.
2 Using LLMs to Transform Raw Text
Building upon the foundation of AdaptLLM [6], which converts raw legal text into reading comprehension tasks, we
draw from the concept of human learning through reading comprehension. This approach, where practice after reading
improves the ability to answer questions based on acquired knowledge, inspired our work. Rather than continuing
to train large language models on raw domain-specific corpora, AdaptLLM proposes converting the raw text into
structured reading comprehension tasks, with each passage followed by questions.
While AdaptLLM leverages a set of rules and heuristics to perform this transformation, its reliance on such methods
poses limitations, especially in the quality of the resulting data. These challenges highlight a
critical need for more sophisticated text transformation techniques [21, 29]. Our solution addresses this by leveraging
large language models (LLMs) to generate high-quality training data. With the decreasing costs of LLM inference,
we can move beyond structured heuristics, using LLMs to efficiently create comprehensive reading comprehension
datasets.
To improve text quality, we designed a prompt database that guides the model’s capabilities. LLMs were tasked with
generating answers and additional questions, and transforming the raw legal texts based on tai-
lored prompts. Through further refinement and post-processing, we developed a superior legal reading comprehension
dataset, offering enhanced performance for domain adaptation.
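As a sketch of this transformation step (illustrative only: the prompt wording, the complete() helper and any model behind it are placeholders, not the exact prompts or models used in this work):

READING_COMPREHENSION_PROMPT = """You are a legal expert. Read the following legal text and
write three question-answer pairs that test comprehension of it (definitions, reasoning, summary).

Text:
{passage}

Return the pairs as 'Q:' / 'A:' lines."""

def to_reading_comprehension(passage: str, complete) -> str:
    # `complete` is any text-completion callable (e.g. a wrapper around a locally
    # served open-source model); it is a placeholder here, not a specific API.
    qa_block = complete(READING_COMPREHENSION_PROMPT.format(passage=passage))
    # Training example = raw passage ("reading") followed by generated tasks ("comprehension").
    return passage + "\n\n" + qa_block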
We primarily used open-source models ranging from 7B to 70B for data transformation. These models were selected
based on factors like cost and operational efficiency. Upon reviewing the outputs of these open-source models
compared to more advanced proprietary models like those from OpenAI and proprietary Mistral models, we
observed no significant differences in quality for our transformation task. However, to ensure a diverse data distribu-
tion and to benefit from knowledge distillation of the most powerful models, we also transformed a portion of the data
using state-of-the-art proprietary (closed-source) models.
Some transformations were also applied on the general, non-legal data to generate Chain-of-Thought (CoT) data and
improve the reasoning capabilities of the model, which we find crucial in the legal domain. For the same reason, we
incorporated math and coding data in the training set, striving to boost logical and inference capabilities.
3 Data Collection and Processing
Our data collection focused primarily on English-language legal texts, drawing heavily from the United States, which
follows the common law system. We also included materials from Canada and the United Kingdom, both of which also
adhere to common law principles. This emphasis on jurisdictions with common law traditions ensures that our dataset
aligns closely with the legal usage specific to the United States, which is the primary focus of our model. Through
meticulous curation and rigorous cleaning procedures, we compiled a comprehensive corpus tailored to capture the
intricacies of legal language within the United-States Federal-jurisdiction.
Throughout the development of the model and the data collection process, our goal was not to expose the model
to all existing legal data. Instead, we focused on providing the model with a strong foundation of legal knowledge,
background, understanding, and reasoning abilities. Our aim is for the model to be able to handle various legal tasks,
including document drafting, reviews, and answering questions, by equipping it with these tools. However, if you ask
the model about specific data such as cases or laws, it may provide inaccurate information. In such cases, Retrieval-
Augmented Generation (RAG) is the recommended solution. Utilizing a robust legal LLM, along with a retrieval
model and a comprehensive database, will yield reliable and solid results.
The main sources of legal data was raw text from the FreeLaw subset of The Pile [8] and Pile of Law [13]. The Pile
dataset does not have any indexing, therefore we simply sample data from it, while using word count to evaluate the
number of raw tokens we attained. Pile of Law, on the other hand, does index the data by instances, so we could
sample data that we find appropriate, including contracts, SEC filing, and legal memos to name a few. This indexing
also allowed to avoid certain data instances, such as Congressional hearings and European laws.
In order to avoid regression of general language capabilities during the fine-tuning process, we integrated data from
the original training distribution, a strategy supported by previous studies [33, 6]. We introduced widely available
"general" instruction data from various sources, including chain-of-thought (CoT), chat, code, and general instruction
datasets. The datasets were sampled from a diverse range of resources, ensuring a broad spectrum of language usage
and contexts, thereby preserving the model’s general language capabilities while enhancing its performance in the
legal domain.
The set of datasets used in this paper is presented in Table 1.
Dataset              Domain              Tokens   License
The Pile (FreeLaw)   Legal               300M     MIT
Pile of Law          Legal               180M     CC-BY-NC-SA-4.0
USClassActions       Legal               20M      GPL-3.0
AQUA-RAT             Math (CoT)          5M       Apache-2.0
ECQA                 Commonsense (CoT)   4M       Apache-2.0
EntailmentBank       Reasoning (CoT)     3M       Apache-2.0
UltraChat            Chat                140M     MIT
Code-Feedback        Code                60M      Apache-2.0
OpenOrca             Instruction         300M     MIT
Table 1: A list of used data sources.
Examples from the training data are shown in the Table 3, in Training Samples Example section B, in the appendix.
4 Model Architecture and Training
We have trained two versions of the legal model: Phi-2-Legal and Mistral-Legal-7B. As suggested by their names, these
models are based on Phi-2 [16] and Mistral-7B [17]. We selected these models because they demonstrate cutting-edge
performance, are available for commercial use, and are well-supported by inference libraries (vLLM [19], etc.) and
for server-less deployment (Fireworks, Together, etc.).
4.1 Training considerations
To save on resources and consider the very limited availability of GPUs, we opt to train the models using LoRA [15],
avoiding a full parameter update. LoRA is a Parameter-Efficient Fine-Tuning (PEFT) technique that has been proven
to match the results of full-parameter updates while requiring significantly fewer training resources (note that any
state-of-the-art variant of LoRA [25, 4, 32, 34] may be used as an
MMLU
LexGLUE
International
Law
Juris-
prudence
Professional
Law
LEDGAR
Case
HOLD
Unfair
ToS
Phi-2
Phi-2-Legal
Mistral-7B
AdaptLLM
Saul-7B
Mistral-Legal-7B
0.661
0.667
0.736
0.570
0.694
0.811
3B Models
0.620
0.711
0.379
0.417
7B Models
0.694
0.528
0.630
0.712
0.412
0.361
0.432
0.427
0.143
0.603
0.506
0.463
0.559
0.739
0.310
0.580
0.563
0.500
0.658
0.778
0.233
0.385
0.366
0.513
0.803
0.806
Table 2: Benchmark results for 3B and 7B Models
alternative). Considering the vast training data and project scope, we train a considerable amount of parameters. Both
models used a LoRA r = 64, and updated all attention components (Q, K, V, O), as well as the feed-forward (FF)
layers. These models can support context lengths of up to 2K and 4K for Phi-2 and Mistral-7B, respectively1.
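A minimal configuration sketch with the Hugging Face PEFT library illustrating this setup; the module names follow the Mistral naming convention and the lora_alpha value is an assumption, not a quotation of the training code.

from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1")
lora_cfg = LoraConfig(
    r=64,                      # LoRA rank used for both models
    lora_alpha=128,            # assumed value (2*r); not stated in the paper
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",   # attention Q, K, V, O
                    "gate_proj", "up_proj", "down_proj"],      # feed-forward layers
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_cfg)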
In order to improve training efficiency, a common technique involves packing the training data. Packing refers to
concatenating examples in a greedy fashion until the full context length is reached, while using an EOS (end of
sequence) token to separate them. We note that it is not possible to perfectly fit examples into the context length, so
any overflow from one example is moved to the next training example. While this technique generally works well
for model pre-training and most fine-tuning scenarios, it is unsuitable for our case. Since we are focused on reading
comprehension, where the model is presented with raw text followed by a series of questions, cutting examples poses a
risk to the capabilities of the fine-tuned model. Therefore, we use a packing mechanism which concatenates examples
without cutting any of them. Achieving a perfect concatenation is not possible, as this problem is essentially the
bin-packing problem [23, 18], which is NP-hard. However, a good approximation is simply sorting the data by length
and packing examples using a greedy algorithm. Using this algorithm, we compressed the training set by 70%-80%,
depending on context length.
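A simple sketch of this packing step (illustrative; token counting and EOS handling are simplified, and the first-fit pass over bins is one of several possible greedy strategies):

def pack_examples(examples, max_len, eos="</s>"):
    # examples: list of token lists; sort by length, then fill bins greedily
    # without ever splitting an example across two training sequences.
    bins = []
    for ex in sorted(examples, key=len, reverse=True):
        for b in bins:
            if len(b) + len(ex) + 1 <= max_len:   # +1 for the EOS separator
                b.extend([eos] + ex)
                break
        else:
            bins.append(list(ex))
    return bins   # len(bins) / len(examples) reflects the achieved compression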
5 Evaluation
We evaluate the models on MMLU (legal subsets) and LexGLUE datasets. We aim for a simple and accessible
evaluation scheme that is easy to understand and measures model accuracy. MMLU is typically evaluated using the
Log probabilities of tokens, as in [9]. However, this type of model evaluation has two main drawbacks: (1) attaining
raw log probabilities requires setting up servers with GPUs, and server-less inference providers are limited in the
number of log probabilities they output; (2) measuring against log probabilities may encounter issues due to
tokenization mismatches. LexGLUE normally evaluates classification models rather than generative ones. Therefore,
we adapt benchmark prompts for instruct-type models, detailing the various options and asking for the most suitable
option to be selected. This means that models may be evaluated quickly and affordably using inference frameworks
such as vLLM, or server-less inference providers. We also utilize recent advancements in decoding techniques, which
allow us to define a closed list of possible options. The result is a transparent and simple evaluation scheme suitable
for chat-aligned models.
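The following is a schematic of such an evaluation loop (illustrative; the prompt template and the generate() callable are placeholders rather than the exact benchmark harness, and constraining the output to the closed list of letters is assumed to be handled by the inference backend):

import re

def evaluate_multiple_choice(questions, generate):
    # questions: list of dicts with "question", "options" (list of str), "answer" (index)
    correct = 0
    for q in questions:
        letters = [chr(ord("A") + i) for i in range(len(q["options"]))]
        listed = "\n".join(f"{l}. {o}" for l, o in zip(letters, q["options"]))
        prompt = f"{q['question']}\n{listed}\nAnswer with the letter of the most suitable option."
        reply = generate(prompt)                       # placeholder inference call
        match = re.search(r"[A-Z]", reply.strip().upper())
        pred = match.group(0) if match and match.group(0) in letters else None
        if pred is not None and letters.index(pred) == q["answer"]:
            correct += 1
    return correct / len(questions)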
MMLU is a straightforward multiple-question benchmark. LexGLUE, on the other hand, has subsets that are simple
multiple-question, while others have 8-100 label options. In LexGLUE, we only use the subsets that are suitable for
use with generative models. For that, the EUR-LEX subset was not used as it only has numerical labels, not verbal,
meaningful ones, while the SCOTUS subset was avoided as many of its instances are longer than a 4K token window;
it therefore has very few usable data instances. Lastly, we did not use the ECtHR subsets, as they refer to
proceedings brought to the European Court of Human Rights (ECtHR) and therefore rely on the European Convention
on Human Rights, which is a codified document more typical of civil law systems [30].
Our legal models were benchmarked compared to their underlying base models, Phi-2 and Mistral-7B, to measure the
improvement achieved by continued pre-training. The Mistral-7B is also compared to the legal variant of AdaptLLM
model, which also uses continued pre-training on reading comprehension text. Additionally, we compare it to Saul-
7B [7], another recent legal model that uses at least x30 more training data and full-parameter update (compared to
our LoRA training). We are not aware of legal models smaller than 7B parameters; therefore, the
Phi-2 models are the only ones in this category. These benchmark results are presented in Table 2.
Both classes of models show considerable improvement over their base models. Mistral-Legal-7B performs better
in all subsets than AdaptLLM, which highlights the benefit of transforming raw data using
LLMs, compared to the heuristic and regex rules. It also performs better than Saul-7B in five out of six subsets. We
observed the most significant performance gains in the LexGLUE subsets. We suspect this is because LexGLUE
is a more niche benchmark, receiving less attention from model developers. In contrast, the MMLU benchmark is
highly popular, and the original models were already extensively optimized for it, making further improvements more
challenging. Nevertheless, our method still managed to enhance results, with Phi-2-Legal outperforming the
original Mistral-7B in all but one of the benchmark subsets.
1Mistral context length is without "sliding-window attention".
6 Conclusion
In this work, we presented a framework for domain-specific adaptation of LLMs using continued pre-training. By
training models in the legal domain, we have shown that it is possible to obtain high-performing models with relatively
low resources. To the best of our knowledge, this is the first time this technique has been used.
Future research could employ Reinforcement Learning from Human Feedback (RLHF) to enhance the model’s align-
ment with human preferences. This would lead to improved generation capabilities and more refined outputs, advanc-
ing the applicability and efficacy of the model in diverse applications.
Limitations
The models were evaluated using multiple-question benchmarks, which serve as proxies for their legal capabilities.
However, a dedicated framework for evaluating its text generation capabilities, particularly in specific applications
such as contracts and reviews, is necessary to obtain a comprehensive assessment. The models are not intended or
able to provide factual information, they may generate information that is false or misleading, and reflect social and
cultural biases from their training data, both the original pre-training data as well as our continued pre-training data.
References
[1] Iz Beltagy, Kyle Lo, and Arman Cohan. Scibert: A pretrained language model for scientific text. arXiv preprint
arXiv:1903.10676, 2019.
[2] Ilias Chalkidis, Manos Fergadiotis, Prodromos Malakasiotis, Nikolaos Aletras, and Ion Androutsopoulos. Legal-
bert: The muppets straight out of law school. arXiv preprint arXiv:2010.02559, 2020.
[3] Ilias Chalkidis, Abhik Jana, Dirk Hartung, Michael Bommarito, Ion Androutsopoulos, Daniel Martin Katz, and
Nikolaos Aletras. Lexglue: A benchmark dataset for legal language understanding in english. arXiv preprint
arXiv:2110.00976, 2021.
[4] Y. Chen, Y. Li, and X. Liu. Lora+: Improving low-rank adaptation with parameter-specific learning rates. arXiv
preprint arXiv:2305.16045, 2023.
[5] Daixuan Cheng, Shaohan Huang, Jianfeng Liu, Yuefeng Zhan, Hao Sun, Furu Wei, Denvy Deng, and Qi Zhang.
Snapshot-guided domain adaptation for electra. In Findings of the Association for Computational Linguistics:
EMNLP 2022, pages 2226–2232, 2022.
[6] Daixuan Cheng, Shaohan Huang, and Furu Wei. Adapting large language models via reading comprehension.
arXiv preprint arXiv:2309.09530, 2023.
[7] Pierre Colombo, Telmo Pessoa Pires, Malik Boudiaf, Dominic Culver, Rui Melo, Caio Corro, Andre FT Martins,
Fabrizio Esposito, Vera Lúcia Raposo, Sofia Morgado, et al. Saullm-7b: A pioneering large language model for
law. arXiv preprint arXiv:2403.03883, 2024.
[8] Leo Gao, Stella Biderman, Sid Black, Laurence Golding, Travis Hoppe, Charles Foster, Jason Phang, Horace
He, Anish Thite, Noa Nabeshima, et al. The pile: An 800gb dataset of diverse text for language modeling. arXiv
preprint arXiv:2101.00027, 2020.
[9] Leo Gao, Jonathan Tow, Baber Abbasi, Stella Biderman, Sid Black, Anthony DiPofi, Charles Foster, Laurence
Golding, Jeffrey Hsu, Alain Le Noac’h, Haonan Li, Kyle McDonell, Niklas Muennighoff, Chris Ociepa, Jason
Phang, Laria Reynolds, Hailey Schoelkopf, Aviya Skowron, Lintang Sutawika, Eric Tang, Anish Thite, Ben
Wang, Kevin Wang, and Andy Zou. A framework for few-shot language model evaluation, 12 2023. URL
https://zenodo.org/records/10256836.
[10] Shahriar Golchin, Mihai Surdeanu, Nazgol Tavabi, and Ata Kiapour. Do not mask randomly: Effective domain-
adaptive pre-training by masking in-domain keywords, 2023.
[11] Yu Gu, Robert Tinn, Hao Cheng, Michael Lucas, Naoto Usuyama, Xiaodong Liu, Tristan Naumann, Jianfeng
Gao, and Hoifung Poon. Domain-specific language model pretraining for biomedical natural language processing.
ACM Transactions on Computing for Healthcare (HEALTH), 3(1):1–23, 2021.
[12] Suchin Gururangan, Ana Marasovi´c, Swabha Swayamdipta, Kyle Lo, Iz Beltagy, Doug Downey, and Noah A
Smith. Don’t stop pretraining: Adapt language models to domains and tasks. arXiv preprint arXiv:2004.10964,
2020.
[13] Peter Henderson, Mark Krass, Lucia Zheng, Neel Guha, Christopher D Manning, Dan Jurafsky, and Daniel Ho.
Pile of law: Learning responsible data filtering from the law and a 256gb open-source legal dataset. Advances in
Neural Information Processing Systems, 35:29217–29234, 2022.
[14] Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt.
Measuring massive multitask language understanding. arXiv preprint arXiv:2009.03300, 2020.
[15] Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu
Chen. Lora: Low-rank adaptation of large language models. arXiv preprint arXiv:2106.09685, 2021.
[16] Mojan Javaheripi, Sébastien Bubeck, Marah Abdin, Jyoti Aneja, Caio César Teodoro Mendes, Weizhu Chen,
Allie Del Giorno, Ronen Eldan, Sivakanth Gopi, Suriya Gunasekar, Piero Kauffmann, Yin Tat Lee, Yuanzhi Li,
Anh Nguyen, Gustavo de Rosa, Olli Saarikivi, Adil Salim, Shital Shah, Michael Santacroce, Harkirat Singh Behl,
Adam Taumann Kalai, Xin Wang, Rachel Ward, Philipp Witte, Cyril Zhang, and Yi Zhang. Phi-2: The surprising
power of small language models. Microsoft Research Blog, 2023.
[17] Albert Q Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de las
Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lucile Saulnier, et al. Mistral 7b. arXiv preprint
arXiv:2310.06825, 2023.
[18] Bernhard Korte, Jens Vygen, Bernhard Korte, and Jens Vygen. Bin-packing. Combinatorial Optimization:
Theory and Algorithms, pages 489–507, 2018.
[19] Woosuk Kwon, Zhuohan Li, Siyuan Zhuang, Ying Sheng, Lianmin Zheng, Cody Hao Yu, Joseph E. Gonzalez,
Hao Zhang, and Ion Stoica. Efficient memory management for large language model serving with pagedattention.
In Proceedings of the ACM SIGOPS 29th Symposium on Operating Systems Principles, 2023.
[20] Jinhyuk Lee, Wonjin Yoon, Sungdong Kim, Donghyeon Kim, Sunkyu Kim, Chan Ho So, and Jaewoo Kang.
Biobert: a pre-trained biomedical language representation model for biomedical text mining. Bioinformatics, 36
(4):1234–1240, 2020.
[21] Fei Liu, Xialiang Tong, Mingxuan Yuan, Xi Lin, Fu Luo, Zhenkun Wang, Zhichao Lu, and Qingfu Zhang.
Evolution of heuristics: Towards efficient automatic algorithm design using large language model, 2024. URL
https://arxiv.org/abs/2401.02051.
[22] Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke
Zettlemoyer, and Veselin Stoyanov. Roberta: A robustly optimized bert pretraining approach. arXiv preprint
arXiv:1907.11692, 2019.
[23] Silvano Martello and Paolo Toth. Lower bounds and reduction procedures for the bin packing problem. Discrete
applied mathematics, 28(1):59–70, 1990.
[24] Alec Radford, Karthik Narasimhan, Tim Salimans, Ilya Sutskever, et al. Improving language understanding by
generative pre-training. Preprint. Work in progress., 2018.
[25] Yehonathan Refael, Jonathan Svirsky, Boris Shustin, Wasim Huleihel, and Ofir Lindenbaum. Adarankgrad:
URL
Adaptive gradient-rank and moments for memory-efficient llms training and fine-tuning, 2024.
https://arxiv.org/abs/2410.17881.
[26] Andy Rosenbaum, Saleh Soltan, Wael Hamza, Marco Damonte, Isabel Groves, and Amir Saffari. Clasp: Few-
shot cross-lingual data augmentation for semantic parsing. In Proceedings of the 2nd Conference of the Asia-
Pacific Chapter of the Association for Computational Linguistics and the 12th International Joint Conference on
Natural Language Processing (Volume 2: Short Papers), pages 444–462, 2022.
[27] Andy Rosenbaum, Saleh Soltan, Wael Hamza, Yannick Versley, and Markus Boese. Linguist: Language model
instruction tuning to generate annotated utterances for intent classification and slot tagging, 2022.
[28] Amit Rozner, Barak Battash, Lior Wolf, and Ofir Lindenbaum. Knowledge editing in language models via
adapted direct preference optimization. arXiv preprint arXiv:2406.09920, 2024.
[29] Vasu Sharma, Karthik Padthe, Newsha Ardalani, Kushal Tirumala, Russell Howes, Hu Xu, Po-Yao Huang,
Shang-Wen Li, Armen Aghajanyan, Gargi Ghosh, and Luke Zettlemoyer. Text quality-based pruning for effi-
cient training of language models, 2024. URL https://arxiv.org/abs/2405.01582.
[30] The danish institute for human rights.
The European Court of Human Rights, 2022.
URL
https://www.humanrights.dk/research/about-human-rights/human-rights-europe/european-court-human-rights.
Accessed: 2024-07-01.
[31] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser,
and Illia Polosukhin. Attention is all you need. Advances in neural information processing systems, 30, 2017.
[32] Y. Wang, H. Zhang, and J. Li. Adaptive lora: A rank-adaptive method for efficient llm fine-tuning. arXiv preprint
arXiv:2306.06188, 2023.
[33] Tongtong Wu, Linhao Luo, Yuan-Fang Li, Shirui Pan, Thuy-Trang Vu, and Gholamreza Haffari. Continual
learning for large language models: A survey. arXiv preprint arXiv:2402.01364, 2024.
[34] Z. Xu, J. Zhang, and X. Wang. Sparse lora: Efficient fine-tuning of large language models with sparse low-rank
adaptation. arXiv preprint arXiv:2303.17626, 2023.
[35] Yunzhi Yao, Shaohan Huang, Wenhui Wang, Li Dong, and Furu Wei. Adapt-and-distill: Developing small, fast
and effective pretrained language models for domains. arXiv preprint arXiv:2106.13474, 2021.
A Ethics statements
As with all large language models, there is an inherent risk of the model producing biased or toxic responses, which
remains a significant concern and requires ongoing attention. Given its application in the legal domain, the language
model’s biases could have more severe implications than those in general usage scenarios. Moreover, mistakes and
hallucinations produced by the model can result in costly errors to the end user.
B Training Samples Example
In Table 3, we provide examples of our text transformations, including prompts and responses by the LLM.
Task Type: Commonsense Reasoning
Prompt: What is an explanation to this paragraph from the text, starting with: "We review the trial court's denial of a bond-reduction request under an abuse of discretion standard..."?
Response: This paragraph explains the standard of review used by the court when assessing the trial court's decision to deny a bond-reduction request, which is an abuse of discretion standard, and clarifies that the trial court's decision must be arbitrary or unreasonable to be considered an abuse of discretion.

Task Type: Word-to-Text
Prompt: Provide a definition to these two legal terms from the text: "habeas corpus", "abuse of discretion".
Response: "habeas corpus" - a writ requiring a person under arrest to be brought before a judge or into court, especially to secure the person's release unless lawful grounds are shown for their detention. "abuse of discretion" - a legal standard used to review a decision made by a lower court or administrative body, which occurs when the decision is arbitrary, capricious, or unreasonable.

Task Type: Natural Language Inference
Prompt: Does the sentence "The chapter 13 case was converted to a case under chapter 7 in July of 1993" entail the sentence "The record does not show the present value of the vehicle, but it is worth more than the $300 remaining due under the chapter 13, and much less than the $6,000 remaining due under the original contract"?
Response: The two sentences are neutral with regard to each other. The first sentence discusses the conversion of the case from Chapter 13 to Chapter 7, while the second sentence talks about the value of the vehicle in question.

Task Type: Summarization
Prompt: Write a summary for this paragraph from the text, starting with: "District courts have jurisdiction to hear appeals from final judgments, orders, and decrees issued by bankruptcy judges...".
Response: This paragraph explains the jurisdiction of district courts in hearing appeals from bankruptcy courts and the standards of review for legal conclusions and findings of fact.
Table 3: Examples of raw text transformed to reading comprehension tasks, using LLMs .
|
synthetic_cpt | 1 | Chemometric_Quality_Assessment_of_Doxylamine_Succinate_with_Its_Degradation_product;_Implementation_of_Two_Predictive_Models_on_UV-Spectrophotometric_Data_of_Anti-emetic_binary_Mixture.pdf | Artificial Intelligence for reverse engineering:
application to detergents using Raman spectroscopy.
Pedro Marote1, Marie Martin1, Anne, Bonhommé², Pierre Lantéri1, Yohann Clément1*
1 Université de Lyon, Institut des Sciences Analytiques, UMR 5280 CNRS, Université
Claude Bernard Lyon 1, 5 rue de la Doua, 69100 Villeurbanne, France.
2 Université de Lyon, Université Claude Bernard Lyon 1, CNRS, IRCELYON, F-69626,
2 avenue A. Einstein, 69626, Villeurbanne, France
Keywords:
Chemometrics, Machine Learning, mixture design, Artificial Intelligence, RAMAN
spectroscopy, surfactants characterization
Abstract
The reverse engineering of a complex mixture, regardless of its nature, has become
significant today. Being able to quickly assess the potential toxicity of new commercial
products in relation to the environment presents a genuine analytical challenge. The
development of digital tools (databases, chemometrics, machine learning, etc.) and
analytical techniques (Raman spectroscopy, NIR spectroscopy, mass spectrometry,
etc.) will allow for the identification of potential toxic molecules. In this article, we use
the example of detergent products, whose composition can prove dangerous to
humans or the environment, necessitating precise identification and quantification for
quality control and regulation purposes. The combination of various digital tools
(spectral database, mixture database, experimental design, Chemometrics / Machine
Learning algorithm…) together with different sample preparation methods (raw
sample, or several concentrated / diluted samples) Raman spectroscopy, has enabled
the identification of the mixture's constituents and an estimation of its composition.
Implementing such strategies across different analytical tools can result in time savings
for pollutant identification and contamination assessment in various matrices. This
strategy is also applicable in the industrial sector for product or raw material control,
as well as for quality control purposes.
1. Introduction
Numerous detergents are utilized for the purpose of cleaning our residences, clothing,
and bodies. Our everyday products, such as detergents, shampoos, and household
cleaners, contain a significant amount of these substances. They are responsible for
effectively removing stains and dirt[1]. However, it is crucial to acknowledge the
potential health and environmental risks associated with these chemicals[2]–[4].
Consequently, researchers are exploring alternatives[5]. Detergents are commonly
employed by both industrial and private users for daily cleaning tasks. They comprise
soaps and surfactants that possess surface-active properties. These surfactants
function by breaking the bonds between surfaces and dirt, thereby facilitating their
removal. Unfortunately, these chemicals have adverse consequences for the
environment. They are produced and utilized in substantial quantities. In Europe alone,
over 3 million tons of detergents were manufactured in 2020[6].
Surfactants, which are used in liquid, powder, and other forms, have a significant
impact on soil and water. The conventional detergents that are frequently advertised
on television are often derived from petroleum-based products. These surfactants are
composed of various chemical compounds, including sulfates, phosphates, bleaching
agents, chemical perfumes, and phenols. Once released into the environment,
detergents, some of which are non-biodegradable, accumulate in soil and water
bodies. It is important to note that more than 60% of the surfactants found in detergents
eventually end up in aquatic environments. This leads to significant problems of
environmental pollution and health concerns.[3]
The properties of surfactants have attracted the interest of detergent manufacturers in
recent years. The growing interest in surfactants necessitates the enhancement of
existing analytical techniques, such as spectroscopy[7], [8], mass spectrometry[9], [10]
and Nuclear Magnetic Resonance (NMR)[11], to ensure compliance with regulations
and environmental standards. Detergents can consist of up to 25 compounds,
including surfactants, enzymes, sequestering agents, polymers, and fragrances, to
name a few. Surfactants are the most crucial components, constituting up to 50% of
the detergent content. These amphiphilic molecules, comprising a hydrophobic carbon
chain and a hydrophilic polar head, are utilized for their solubilizing, wetting, foaming,
dispersing, and emulsifying properties. Depending on the nature of their polar head,
surfactants can be classified into four families: anionic, cationic, non-ionic, or
amphoteric. Various analytical methods, such as NMR[11] or hyphenated techniques
combined with spectroscopic methods[7], [8], [12], are employed for the deconstruction
of detergent mixtures. Chromatographic methods coupled with detectors like light
scattering detection or mass spectrometry have been extensively utilized for surfactant
analysis[9], [13]. These analytical techniques offer the advantage of simultaneously
identifying and quantifying different surfactant families. However, method development
can be prone to biases in sample preparation, costs, and labor-intensive procedures.
RAMAN spectral analysis appears to strike a balance between relevant information
and cost-effectiveness. It does not require lengthy sample preparation procedures, the
use of expensive internal standards, and can be conducted in aqueous solutions
inexpensively. By combining surfactant spectral databases, chemometrics, Machine
Learning, and spectroscopic tools, it becomes possible to identify and quantify raw
materials[8], [14], [15].
Blind source separation (BSS) methods are employed for the deconvolution of
overlapping signals in near and mid-infrared spectroscopy or Raman spectra. Source
extraction (SE) methods, such as independent component analysis (ICA)[16][17]–[22]
or Multicurve Resolution Alternating Least Squares (MCR-ALS)[23], [24], aim to extract
the contributions (spectra) of pure compounds from complex mixtures without any prior
knowledge of the compounds being analyzed. However, a limitation of RAMAN
spectroscopy is the detection limit; raw materials present in low concentrations (<1%)
may not be identified and quantified. To analyze the surfactant composition of various
commercial detergents, we propose a method based on RAMAN spectroscopy,
utilizing a database of commercial raw material RAMAN spectra and Machine Learning
(Figure.1).
2
2. Materials and methods
2.1. Chemicals
A database containing 95 different surfactants (Cocoamide, Sodium Laureth Sulfate,
Betaine ...) has been compiled (supplementary appendices) from 14 different suppliers
(producers or resellers). This database will be used for the identification of surfactants
contained in commercial detergents or homemade detergent mixtures.
2.2. Sample preparation
For sample preparation, there are two possible scenarios: either it involves a
completely unknown mixture, or the constituents are known. In the case of an unknown
mixture, the raw material will be diluted by a factor of 2, 3, etc. If the constituents are
known, however, no sample preparation is required beforehand. The RAMAN
spectrum of the commercial product or the house mixture will be analyzed.
Identification will be performed from the RAMAN spectra databases of the commercial
raw materials and quantification from the PM mixtures database.
2.3. Data base preparation
2.3.1 Spectral database
A Raman spectral database is being created using a library of commercial raw
materials. For each raw material, "pure" Raman spectra and diluted Raman spectra of
the raw materials are recorded. The diluted spectra will be prepared at dilution levels
of 75%, 50%, 25%, and 5%. This database has been constructed using 95 different
commercial raw materials, resulting in a total of 380 Raman spectra. This
comprehensive database will enable the identification of raw materials present in our
various mixtures.
2.3.2 Mixture database
A comprehensive database of commercial raw material mixtures is currently being
acquired. These mixtures are composed of 2 to 5 components carefully selected and
blended. To conduct in-depth investigations involving mixtures with 3, 4, and 5
components, it is imperative to prepare a minimum of 10, 18, or 30 mixtures
respectively, following the highly effective Scheffé simplex designs strategy[25], [26].
It is worth noting that certain raw materials have specific constraints regarding their
permissible usage concentrations, as specified in their corresponding safety data
sheets. These constraints were meticulously considered during the formulation of the
mixtures. The extensive research effort resulted in the preparation and analysis of over
1000 meticulously crafted mixtures, yielding valuable insights and data.
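For illustration, the candidate blends of a Scheffé simplex-lattice design can be enumerated as below; this is a generic sketch, and the actual designs also embed the concentration constraints taken from the safety data sheets.

from itertools import product

def simplex_lattice(n_components, m):
    # All mixtures whose proportions are multiples of 1/m and sum to 1
    # (Scheffe {n, m} simplex-lattice design).
    pts = []
    for levels in product(range(m + 1), repeat=n_components):
        if sum(levels) == m:
            pts.append(tuple(l / m for l in levels))
    return pts

# e.g. simplex_lattice(3, 3) gives the 10 candidate blends for a 3-component study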
2.4. Measurement for RAMAN spectra of surfactants dishwashing product
Raman Rxn1 spectrometer (Kaiser Optical Systems, Inc. USA), equipped with a
thermoelectrically cooled CCD detector, was used in combination with a fiber optic
sapphire immersion probe. The laser wavelength was set at 785 nm. All spectra were
recorded at a resolution of 4 cm−1 in the spectral range from 150 to 3480 cm−1.
Acquisition time was set at 5 seconds and five spectra were accumulated.
2.5. Statistical analysis
2.5.1. Data preprocessing
To accentuate specific spectral variations, preprocessing of the spectra was
conducted. Initially, the spectra were normalized to address any scale and baseline
shift influences. To normalize and rectify noise, a multiplicative signal correction (MSC)
method was employed[27]. MSC is a relatively straightforward preprocessing
technique that aims to compensate for scaling and offset (baseline) effects. This
correction was accomplished by regressing a measured spectrum against a reference
spectrum and subsequently adjusting the measured spectrum based on the slope (and
potentially intercept) of this regression. Each spectrum was corrected to achieve a
comparable scatter level to that of the reference spectrum.
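A compact NumPy sketch of the MSC step described above, using the mean spectrum as the reference; this is a simplified illustration, not the exact routine used in the study.

import numpy as np

def msc(spectra):
    # spectra: (n_samples, n_wavenumbers) matrix of Raman spectra
    ref = spectra.mean(axis=0)                     # reference spectrum
    corrected = np.empty_like(spectra, dtype=float)
    for i, s in enumerate(spectra):
        slope, intercept = np.polyfit(ref, s, 1)   # regress each spectrum on the reference
        corrected[i] = (s - intercept) / slope     # remove offset and scaling effects
    return corrected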
2.5.2. Independent Component Analysis (ICA)
ICA[17], [20]–[22], [28] is one of the most powerful techniques for blind source
separation. ICA aims at identifying the products present in a mixture during a process.
The basic assumption of the ICA is to consider each row of matrix X as a linear
combination of “source” signals, S, with weighting coefficients, or “proportions”, A,
proportional to the contribution of the source signals in the corresponding mixtures. Its
objective is to extract the "pure" components from a data set mixed in unknown
proportions. For an unnoised model, the matrix X (s × n) is decomposed into f
independent source signals S (f × n) and a mixing proportion matrix of these pure
signals A (s × f) according to the following expression:
𝑋 = 𝐴 𝑆 (1)
To solve equation (1), ICA estimates an unmixing matrix W (equal to A⁻¹) that
optimizes the product independence between this matrix and the data matrix X
according to an iterative method based on the central limit theorem [29] (which states
that a sum of independent and identically distributed random variables tends to a
Gaussian random variable). The output U must be as independent as possible. For a
noise-free model, W must be the inverse of A and U must be equal to S, according to
the following equation:
𝑈 = 𝑊𝑋 = 𝑊(𝐴𝑆) = 𝑆 (2)
The unmixing matrix A can be calculated as:
𝐴 = 𝑋𝑆ᵀ(𝑆𝑆ᵀ)⁻¹ (3)
In this work, InfoMax[16], [22] implementation of the ICA algorithm was used. InfoMax
uses Gram-Schmidt orthogonalization to ensure the independence of the extracted
signal. It uses a maximum likelihood formulation. The aim of Infomax is to find
independent source signals by maximizing entropy:
𝐻(𝑥) = − ∫ 𝑓(𝑥) log 𝑓(𝑥) 𝑑𝑥 (4)
While independence of the signals cannot be measured, entropy can. Entropy is
related to independence in that maximum entropy implies independent signals.
Therefore, the objective of ICA is to find the unmixing matrix that maximizes the entropy
in the extracted signals.
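As an illustration of this source-extraction step, the sketch below uses scikit-learn's FastICA, a different ICA estimator than the InfoMax implementation used in this work, shown only to make the X = A S decomposition concrete.

from sklearn.decomposition import FastICA

def extract_sources(X, n_components):
    # X: (n_mixtures, n_wavenumbers) matrix of preprocessed Raman spectra
    ica = FastICA(n_components=n_components, max_iter=1000, random_state=0)
    A = ica.fit_transform(X)   # mixing proportions, one row per analysed mixture (s x f)
    S = ica.mixing_.T          # estimated "pure" source spectra (f x n_wavenumbers)
    return A, S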
2.5.3 Number of components
If too few ICs are extracted, some of the significant components may remain in the
residual matrix; on the other hand, if too many ICs are extracted some of the significant
components might themselves be decomposed into subcomponents. Validation
methods are required to decide about the optimal number of ICs to be used in the
computation of the final model. The ICA_by_blocks algorithm[8], [12], [30] was used to
determine the optimal number of signals to extract. The initial data matrix is split into B
blocks of samples with approximately equal numbers of rows. ICA models are then
computed with an increasing number of ICs for each block. The independent
components calculated should be strongly correlated.
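A schematic of this ICA-by-blocks validation (illustrative; it reuses FastICA as a stand-in for the in-house implementation and simply reports the weakest cross-block match for each candidate number of ICs):

import numpy as np
from sklearn.decomposition import FastICA

def ica_by_blocks(X, max_ics):
    # Split the mixtures into two blocks, fit ICA per block for 1..max_ics components
    # and check how well the extracted components match across blocks.
    blocks = np.array_split(X, 2)
    for f in range(1, max_ics + 1):
        S0, S1 = (FastICA(n_components=f, random_state=0).fit(b).mixing_.T for b in blocks)
        # best absolute correlation of each block-0 component with any block-1 component
        matches = [max(abs(np.corrcoef(a, b)[0, 1]) for b in S1) for a in S0]
        print(f"{f} ICs: weakest matched correlation = {min(matches):.2f}")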
2.5.5 Model calibration
For each product to deformulate, the composition was determined through a calibration
conducted using Partial Least Squares Regression (PLSR)[31]–[33]. PLSR is a
commonly employed method, particularly when analyzing extensive spectral data. In
essence, the algorithm for this regression is partially derived from the one used in
Principal Component Analysis (PCA)[34], as it involves a dual decomposition into
latent variables for both the X matrix of variables and the Y matrix of responses.
The development of the PLS model relies on the establishment of a mixing plan
specifically designed for the species identified in the products to be reformulated. If we
have access to the safety data sheet (SDS) of the said product, it will impose
constraints on the constituents. Consequently, we will be able to adjust the mixing plan
based on these constraints. In the absence of an SDS, a mixing plan comprising 2 to
5 components will be devised using the mixture database.
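A minimal scikit-learn sketch of this calibration step (illustrative; the number of latent variables, the cross-validation scheme and the variable names are assumptions, not the settings of the study):

from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict
import numpy as np

def calibrate_pls(X_spectra, Y_compositions, n_latent=5):
    # X_spectra: preprocessed Raman spectra of the designed mixtures
    # Y_compositions: known surfactant proportions from the mixture design
    pls = PLSRegression(n_components=n_latent).fit(X_spectra, Y_compositions)
    Y_cv = cross_val_predict(pls, X_spectra, Y_compositions, cv=5)
    rmsecv = np.sqrt(np.mean((Y_compositions - Y_cv) ** 2))
    return pls, rmsecv

# Composition of an unknown product: pls.predict(unknown_spectrum.reshape(1, -1))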
2.5.6 Software
Data collection was controlled using the HoloGRAMS™ software (Kaiser optical
systems, Inc. USA). All spectra were imported into Matlab 9.1 (R2016b) (Mathworks,
Natick, Massachusetts, USA). Statistical analyses were performed with the PLS-
toolbox 8.2 (Eigenvector Research Incorporated Wenatchee, Washington, USA) and
codes developed in-house.
3. Results and discussion
The method was tested on 8 diverse products: 5 belonging to a commercial range
whose composition was unknown, and 3 with constituents known through their safety
data sheets. To verify the methodology, the constitution and composition of the
unknown products were provided only at the end of the study.
For the identification of the constituents in the 5 unknown mixtures, dilutions
were performed according to the described protocol, and Raman spectra were
obtained. Only constituents present in the mixture to be analyzed at a concentration
greater than 1% will be considered in the Independent Component Analysis (ICA). The
ICA, using the ICA by block method (Figure.2), will determine the number of visible
constituents per studied mixture and calculate a theoretical spectrum for each
identified constituent. Among the extracted Independent Components (ICs), only those
representing reliable information will be discussed. These spectra will be compared
with the spectral library acquired through spectral overlay and correlation between
calculated and experimental spectra (Figure.3). During the calculation of ICs, several
similar spectra can be obtained. This can occur because certain constituents in the
mixture may have similar spectra. In this case, it was decided to include surfactants
with similar spectra in the algorithm within the mixture space. Detergents may contain
additional compounds that can be detected by Raman spectroscopy, such as salts
(𝑁𝑎𝐶𝑙, 𝑀𝑔𝑆𝑂$, etc.). These salts are usually added to increase the viscosity of the
mixture. These salts have specific bands in Raman spectroscopy, such as the vibration
%" at 2550 cm -1. The addition of salt is considered when constructing the
band of 𝑆𝑂$
mixture plan, as its presence may interact with certain acidic or basic surfactants.
Next, a mixture plan is constructed, either using the spectral profiles from the library's
mixture plans or by performing new mixtures, considering the specificities identified
during the ICs calculation. Whenever a specific blending plan is required, typically due
to specific constraints on the components, that plan is systematically added to the
blend database. The points in the mixture plan will serve as calibration points to
establish a model that allows us to determine the composition of the mixture.
In the case of the 3 products with known constituents, a mixture plan is established
based on the database of mixtures, while respecting the constraints described in the
raw material safety data sheets (SDS). The Partial Least Squares (PLS) modeling is
then used to determine the composition of the studied mixture. For both approaches,
to validate the methodology, the criteria for prediction errors (Root Mean Square Error
of Calibration (RMSEC), Root Mean Square Error of Cross-Validation (RMSECV), and
Root Mean Square Error of Prediction (RMSEP)) are observed, as well as the
determination coefficient R² for calibration, cross-validation, and prediction[35]. The
statistical criteria for the models involving unknown mixtures (*5) and known mixtures
(*3) are presented in Tables 1. Based on these criteria, the prediction and calibration
residuals are of similar magnitudes, indicating a good predictive quality for the mixture
compositions. For all the model's prediction results, the obtained compositions fall
within the confidence interval of the provided compositions (Table 2 and 3). In both
cases, the results generally demonstrate accurate estimation of the constituents in the
various mixtures. The prediction discrepancies, although minimal for quantifying an
estimation of the mixture compositions, can have various origins, such as the nature
of a raw material, interactions with co-constituents in the mixture (such as fragrances,
thickeners, etc.), or the concentration of a constituent. Regarding the nature of the raw
material, even for the same constituent, there are numerous producers and suppliers,
and often there exist slight differences between these constituents, such as variations
in carbon chain length or the number of ethoxylated groups, which can impact the
spectrum and therefore its prediction. A constituent present in low concentration would
be difficult to detect by Raman spectroscopy and consequently to identify, as is the
case with MGDA in samples D1 and D3. Hence, the model would have a higher
prediction error in cases of low concentration.
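A compact sketch of how such figures of merit can be computed is given below using scikit-learn. The number of latent variables, the cross-validation scheme, and the variable names are assumptions chosen for illustration, not the exact settings used in this study.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import mean_squared_error, r2_score

def pls_metrics(X_cal, Y_cal, X_test, Y_test, n_components=5, cv=10):
    """RMSEC / RMSECV / RMSEP and R2 values for a PLS calibration, per constituent."""
    pls = PLSRegression(n_components=n_components)
    pls.fit(X_cal, Y_cal)

    Y_c = pls.predict(X_cal)                             # calibration fit
    Y_cv = cross_val_predict(pls, X_cal, Y_cal, cv=cv)   # cross-validation predictions
    Y_p = pls.predict(X_test)                            # prediction on independent mixtures

    rmse = lambda a, b: np.sqrt(mean_squared_error(a, b, multioutput="raw_values"))
    return {
        "RMSEC": rmse(Y_cal, Y_c),
        "RMSECV": rmse(Y_cal, Y_cv),
        "RMSEP": rmse(Y_test, Y_p),
        "R2Y": r2_score(Y_cal, Y_c, multioutput="raw_values"),
        "Q2Y": r2_score(Y_cal, Y_cv, multioutput="raw_values"),
    }
```

Here X holds the Raman spectra of the mixture-plan points, Y the corresponding compositions; the returned arrays give one error value per constituent, which is how Table 1 is organised.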
4 Conclusions
Chemometrics/Machine Learning methods such as Blind Source Separation (BSS) are
powerful tools for extracting signals from complex mixtures. These techniques have
been successfully applied to several detergent mixtures for various household
applications. The combination of spectral databases of surfactants and mixtures has
enabled the identification and quantification of surfactants in these complex mixtures
(Figure 3).
This methodology can be easily adapted to industrial environments to perform various
tasks such as raw material quality control and competitive intelligence monitoring. The
methodology can be applied to other types of spectroscopy, such as MIR, NIR, and Raman. Of
course, it is necessary to redo all the measurements to obtain the various databases
required for the identification and quantification of the mixture.
This approach could facilitate rapid monitoring of detergent type and concentration in
different matrices. This analysis would make it possible to determine which types of
detergents are present, as well as their respective concentrations. This information
could then be used to adjust methods to better eliminate or reduce the specific
detergents detected.
AUTHOR INFORMATION
Corresponding Author
Yohann Clément – Data Scientist / Chemometrician, University of Lyon, CNRS, Institut
of Analytical Sciences, UMR-5280, 5 Rue de la Doua 69100 Villeurbanne, France;
orcid.org/0000-0002-9852-2856; Email: [email protected]
Author
Pedro Marote, Analyst, University of Lyon, CNRS, Institut of Analytical Sciences, UMR-
5280, 5 Rue de la Doua 69100 Villeurbanne, France; Email: [email protected]
Pierre Lanteri, Professor, University of Lyon, CNRS, Institut of Analytical Sciences,
UMR-5280, 5 Rue de la Doua 69100 Villeurbanne, France; orcid.org/0000-0002-8244-9834; Email: [email protected]
Marie Martin, Professor, University of Lyon, CNRS, Institut of Analytical Sciences,
UMR-5280, 5 Rue de la Doua 69100 Villeurbanne, France; Email: [email protected]
Anne Bonhommé, Professor, University of Lyon, CNRS, IRCELYON, UMR-5256, 5
Rue de la Doua 69100 Villeurbanne, France; Email: [email protected]
Author Contributions
The manuscript was written through contributions of all authors. All authors have given
approval to the final version of the manuscript.
Funding Sources
This research did not receive any specific grant from funding agencies in the public,
commercial, or not-for-profit sectors.
Declarations of interest
None.
Acknowledgements
None
ABBREVIATIONS
NMR: Nuclear Magnetic Resonance, BSS: Blind Source Separation, SE: Source
Extraction, ICA: Independent Component Analysis, MCR-ALS: Multivariate Curve Resolution
Alternating Least Squares, ICs: Independent Components, MSC: multiplicative signal
correction, PLSR: Partial Least Squares Regression, PCA: Principal Component
Analysis, SDS: safety data sheet, RMSE: Root Mean Square Error, RMSEC: Root
Mean Square Error of Calibration, RMSEP: Root Mean Square Error of prediction,
RMSECV: Root Mean Square Error of Cross-Validation
5 References
[1]
J. T. K. Milton J. Rosen, Surfactants and Interfacial Phenomena. 2012.
[2] R. Ernst, C. J. Gonzales, and J. Arditti, “Biological effects of surfactants: Part 6-
effects of anionic, non-ionic and amphoteric surfactants on a green alga
(Chlamydomonas),” Environ. Pollution. Ser. A, Ecol. Biol., vol. 31, no. 3, pp.
159–175, 1983, doi: 10.1016/0143-1471(83)90074-0.
[3] S. O. Badmus, H. K. Amusa, T. A. Oyehan, and T. A. Saleh, “Environmental
risks and toxicity of surfactants: overview of analysis, assessment, and
remediation techniques,” Environ. Sci. Pollut. Res., vol. 28, no. 44, pp. 62085–
62104, 2021, doi: 10.1007/s11356-021-16483-w.
[4] J. Arora et al., “Surfactant pollution, an emerging threat to ecosystem:
Approaches for effective bacterial degradation,” J. Appl. Microbiol., vol. 133,
no. 3, pp. 1229–1244, 2022, doi: 10.1111/jam.15631.
[5] M. Patel, “Surfactants Based on,” vol. 7, no. 3, pp. 47–62, 2004.
[6] European Commission, “Detergents,” https://ec.europa.eu.
[7] A. Gaubert et al., “Characterization of surfactant complex mixtures using Raman
spectroscopy and signal extraction methods: Application to laundry detergent
deformulation,” Anal. Chim. Acta, vol. 915, pp. 36–48, 2016, doi: 10.1016/j.aca.2016.02.016.
[8] Y. Clément et al., “Raman spectroscopy combined with advanced chemometric
methods: A new approach for detergent deformulation,” Talanta, vol. 195, pp. 441–446,
2019, doi: 10.1016/j.talanta.2018.11.064.
[9] A. Gaubert et al., “Determination of surfactant bio-sourced origin by isotope-
ratio mass spectrometry,” Rapid Commun. Mass Spectrom., vol. 30, no. 9, pp.
1108–1114, 2016, doi: 10.1002/rcm.7537.
[10] I. Ogura, D. L. DuVal, S. Kawakami, and K. Miyajima, “Identification and
quantitation of surfactants in consumer products by ion-spray mass
spectrometry,” JAOCS, J. Am. Oil Chem. Soc., vol. 73, no. 1, pp. 137–142,
1996, doi: 10.1007/BF02523461.
[11] M. Hologne, A. Gaubert, C. Sanglar, C. Bordes, and H. Casabianca, “New
validation of molecular mass measurements by means of 2D DOSY1H NMR
experiments: Application to surfactants,” Comptes Rendus Chim., vol. 18, no.
2, pp. 187–192, 2015, doi: 10.1016/j.crci.2014.05.008.
[12] D. N. Rutledge and D. Jouan-Rimbaud Bouveresse, “Independent Components
Analysis with the JADE algorithm,” TrAC - Trends Anal. Chem., vol. 50, pp. 22–
32, 2013, doi: 10.1016/j.trac.2013.03.013.
[13] H. S. Park, H. R. Ryu, and C. K. Rhee, “Simultaneous separation of nine
surfactants of various types by HPLC with evaporative light scattering
detection,” Talanta, vol. 70, no. 3, pp. 481–484, 2006, doi:
10.1016/j.talanta.2006.01.029.
[14] J. F. Martínez-Aguilar and E. L. Ibarra-Montaño, “Complete quality analysis of
commercial surface-active products by Fourier-transform near infrared
spectroscopy,” Talanta, vol. 73, no. 4, pp. 783–790, 2007, doi:
10.1016/j.talanta.2007.05.001.
[15] K. Kargosha, S. H. Ahmadi, M. Mansourian, and J. Azad, “Simultaneous
determination of one nonionic and two anionic surfactants using Fourier
transform infrared spectrometry and multivariate analysis,” Talanta, vol. 75, no.
2, pp. 589–593, 2008, doi: 10.1016/j.talanta.2007.11.065.
[16] H. B. Barlow, “Possible Principles Underlying the Transformations of Sensory
Messages,” Sens. Commun., pp. 216–234, 2013, doi:
10.7551/mitpress/9780262518420.003.0013.
[17] A. J. Bell and T. J. Sejnowski, “The ‘Independent Components’ of Natural
Scenes are Edge Filters,” Vis. Res., vol. 37, no. 23, pp. 3327–3338, 1997.
[18] A. Hyvärinen, “Fast and robust fixed-point algorithms for independent
component analysis,” IEEE Trans. Neural Networks, vol. 10, no. 3, pp. 626–
634, 1999, doi: 10.1109/72.761722.
[19] L. De Lathauwer, B. De Moor, and J. Vandewalle, “An introduction to
independent component analysis,” J. Chemom., vol. 14, no. 3, pp. 123–149,
2000, doi: 10.1002/1099-128X(200005/06)14:3<123::AID-CEM589>3.0.CO;2-
1.
[20] E. O. Aapo Hyvärinen, Juha Karhunen, Independent Component Analysis,
Wiley. New York, 2001.
[21] J. F. Cardoso and A. Souloumiac, “Blind beamforming for non-Gaussian
signals,” IEE Proceedings, Part F Radar Signal Process., vol. 140, no. 6, pp.
362–370, 1993, doi: 10.1049/ip-f-2.1993.0054.
[22] A. J. Bell and T. J. Sejnowski, “An information-maximization approach to blind
separation and blind deconvolution.,” Neural Comput., vol. 7, no. 6, pp. 1129–
1159, 1995, doi: 10.1162/neco.1995.7.6.1129.
[23] J. Felten, H. Hall, J. Jaumot, R. Tauler, A. De Juan, and A. Gorzsás,
“Vibrational spectroscopic image analysis of biological material using
multivariate curve resolution-alternating least squares (MCR-ALS),” Nat.
Protoc., vol. 10, no. 2, pp. 217–240, 2015, doi: 10.1038/nprot.2015.008.
[24] V. Olmos et al., “Relevant aspects of unmixing/resolution analysis for the
interpretation of biological vibrational hyperspectral images,” TrAC - Trends
Anal. Chem., vol. 94, pp. 130–140, 2017, doi: 10.1016/j.trac.2017.07.004.
[25] H. Scheffé, “The Simplex-Centroid Design for Experiments with Mixtures,”
J. R. Stat. Soc. Ser. B (Methodological), vol. 25, no. 2, pp. 235–263, 1963.
[26] J. Cornell, Experiments with Mixtures: Designs, Models, and the Analysis of
Mixture Data. 2002.
[27] D. MacDougall, H. Martens, and P. Geladi, “Linearization and Scatter-
Correction for Near-Infrared Reflectance Spectra of Meat,” Appl. Spectrosc.,
vol. 39, no. 3, pp. 491–500, 1985.
[28] J. F. Cardoso, “High-order contrasts for independent component analysis,”
Neural Comput., vol. 11, no. 1, pp. 157–192, 1999, doi:
10.1162/089976699300016863.
[29] H. Fisher, The Prehistory: De Moivre’s Theorem. 2010.
[30] A. Kassouf, D. Jouan-Rimbaud Bouveresse, and D. N. Rutledge,
“Determination of the optimal number of components in independent
components analysis,” Talanta, vol. 179, no. September 2017, pp. 538–545,
2018, doi: 10.1016/j.talanta.2017.11.051.
[31] H. WOLD, Nonlinear Iterative Partial Least Squares (NIPALS) Modelling: Some
Current Developments. ACADEMIC PRESS, INC., 1973.
[32] S. Wold, M. Sjöström, and L. Eriksson, “PLS-regression: A basic tool of
chemometrics,” Chemom. Intell. Lab. Syst., vol. 58, no. 2, pp. 109–130, 2001,
doi: 10.1016/S0169-7439(01)00155-1.
[33] M.Tenehaus, La régression PLS: théorie et pratique. 1998.
[34] H. Abdi and L. J. Williams, “Principal component analysis,” Wiley Interdiscip.
Rev. Comput. Stat., vol. 2, no. 4, pp. 433–459, 2010, doi: 10.1002/wics.101.
[35] A. Levet et al., “Quantitative structure-activity relationship to predict acute fish
toxicity of organic solvents,” Chemosphere, vol. 93, no. 6, pp. 1094–1103,
2013, doi: 10.1016/j.chemosphere.2013.06.002.
INCI                                                   RMSEC   RMSECV   RMSEP   R²Y    Q²Y
Sodium C14-16 Olefin Sulfonate                         0.93    1.2      1.22    0.98   0.97
Sodium Laureth Sulfate                                 0.48    1        0.87    0.98   0.94
Trimethyl Amine (TEA)                                  0.27    0.33     0.39    0.99   0.98
Trisodium salt of Methylglycinediacetic acid (MGDA)    0.61    1.6      0.68    0.94   0.86
Lauryl ether sulfate                                   0.76    0.92     0.98    0.99   0.98
Water                                                  1.78    2.15     2.86    0.98   0.99
Table 1: RMSEC, RMSECV, RMSEP, R²Y and Q²Y for the PLS regression on the raw materials detected by Independent Component Analysis (ICA).
Unknown   Sodium C14-16        Sodium Laureth      Trimethyl Amine     Trisodium salt of    Lauryl ether
sample    Olefin Sulfonate     Sulfate             (TEA)               MGDA                 sulfate
          Exp.   Calc.         Exp.   Calc.        Exp.   Calc.        Exp.   Calc.         Exp.   Calc.
D1        8.4    8.8           6.1    5.8          4.5    5.0          0.8    1.2           39.7   37.6
D2        7.4    7.2           9.9    9.4          8.8    9.5          3.6    4.3           29.0   24.4
D3        12.0   13.2          4.3    4.6          7.2    6.3          1.8    0.9           0.0    3.1
D4        14.5   15.3          6.1    5.5          9.8    7.1          3.5    2.7           0.0    2.1
D5        16.8   16.3          7.5    8.2          12.7   11.5         5.1    4.4           0.0    0.8
Table 2: Composition of the unknown detergents: experimental vs calculated by PLS regression.
Known     Sodium C14-16       Sodium Laureth     Cocamidopropyl      Lauryl ether        Lauramidopropylamine
sample    Olefin Sulfonate    Sulfate            betaine             sulfate             Oxide
          SDS     Calc.       SDS     Calc.      SDS     Calc.       SDS      Calc.      SDS     Calc.
PC        0%      0%          0%      0%         1-5%    8.5%        10-15%   14%        0%      0%
RA        5-10%   2.7%        5-10%   8%         0%      0%          <1%      0.50%      1-5%    4%
MI        0%      3.5%        0%      0%         1-5%    0%          5-10%    9%         0%      0%
Table 3: Composition of the known detergents according to their SDS vs calculated by PLS regression.
Figure 1: Reverse engineering of detergents using Raman spectroscopy.
Figure 2: ICA by blocks test for the determination of the number of raw materials in the detergent (correlation between blocks versus the number of independent components; "How many ICs to select").
Figure 3: Raman spectra (intensity versus Raman shift, cm-1; "Calculated by ICA vs experimental") of 2 main raw materials (A and B) and 1 salt (C) (blue) versus the spectra calculated by ICA (red): A) Sodium Laureth Sulfate, B) Lauramidopropylamine Oxide and C) SO4^2-.
Supplementary appendices
Name
Dehyquart ECA
Dhyton K Cos
Polyquart H81
Luviquat Excellence
Lanette O
Emulgin B2
Comperlan 100
Comperlan IP
Stepanol AM 30 KE
Betafin BP 20
Dehyton AB 30
Cosmacol ELI
Dehyquart F75T
Emilgin B2
Dehyquart ACA
Amphosol CDB
special
Hydrogen CAT
Ninol 40 CO E
Purton CFD
Comperlan 100
Purton CFM/ F
Comperlan IP
Emulgin B2
INCI name
1-Hexadecanaminium, N,N,N-trimethyl-, chloride
1-Propanaminium, 3-amino-N-(carboxymethyl)-N,N-dimethyl-,
N-(C8-18 and C18-unsaturated acyl) derivatives, hydroxides, inner salts
1,3-Propanediamine, N-(3-aminopropyl)-
1H-Imidazolium, 1-ethenyl-3-methyl-, chloride, polymer with 1-ethenyl-2-
pyrrolidinone
Alcohols, C16-18
Alcohols, C16-18, ethoxylated
Amides, C12-18 and C18-unsaturated, N-(hydroxyethyl)
Amides, coco, N-(2-hydroxypropyl)
Ammonium lauryl sulfate
Betaine (anhydre 99%)
Betaines, C12-14-alkyldimethyl
C12-13 Alkyl Lactate
Ceteareth-20
ceteareth-20
Cetrimonium Chloride
Cetyl Betaine
cetyl PEG/PPG-10/1 dimethicone
Cocamide DEA
COCAMIDE DEA
cocamide MEA
Cocamide MEA
Cocamide MIPA
Cocamide MIPA
Producer / Reseller
BASF
BASF
BASF
BASF
BASF
BASF
Ami
Ami
Stepan
Masso
BASF
Sasol
Ami
Cognis
Ladybel
Stepan
Cognis
Stepan
ZW
Cognis
ZW
Cognis
Ami
Amphotensil B4/C
Amphosol DM
Amphotensid B5
Tegobetaine F 50
Amphosol CG-K
Antil HS 60
Eco sense 919
surfactant
Plantacare 818 UP
Liviquat mono LS
Plantacare 2000UP
Ninol CCA
Texapon N40 IS
Miranol
Setacin 103 Spezal
Stepan MILD SL3 BA
Rewopol SB CS50K
Dehyquart F75T
Trilon B 87%
Stepan MILD GCC
Tegin BL 315
dehyquart N
Tegosoft P
ammonyx LMDO
Ammonyx LO
Empigen OB
Cocamidopropyl Betaine
Cocamidopropyl Betaine
Cocamidopropyl Betaine
cocamidopropyl betaine
Cocamidopropyl Betaine
cocamidopropyl betaine ; glyceryl laurate
Coco-Glucoside
COCO-GLUCOSIDE
Cocotrimonium methosulfate
Decyl Glucoside
Dimethyl lauramide
Disodium 2-Sulfolaurate
Disodium Cocoamphodiacetate
Disodium Laureth Sulfosuccinate
Disodium Laureth Sulfosuccinate
disodium PEG-5 laurylcitrate sulfosuccinate ; sodium laureth sulfate
Distearoylethyl Hydroxyethylmonium Methosulfate (and) Cetearyl Alcohol
EDTA
Glyceryl Caprylate/Caprate
glycol destearate
Guar gum, 2-hydroxy-3-(trimethylammonio)propyl ether, chloride
isopropyl palmitate
Lauramidopropylamine Oxide
Lauramine Oxide
Lauramine Oxide
ZW
Stepan
ZW
Cognis
Stepan
Cognis
Dow
BASF
BASF
BASF
Stepan
BASF
Rhone Poulenc
ZW
Stepan
Cognis
BASF
BASF
Stepan
Cognis
BASF
Cognis
Stepan
Stepan
Innospec
Performance
Chemicals
Plantacare 1200UP
Stepan MILD L3
Abilsoft AF100
Zetesol 2056
Lumorol K 1056
Arlypon LIS
Arlypon LIS
Arlypon TT
Arlypon TT
Antil 171
Rewoderm LIS 80
Arlacel P 135
Arlyton TT
Myritol 318
Texapon SB 3KC
Isolan GO 3
Emulgin S21
Emulgin S21
Salcare SL92
Polysorbate 20
Tween 21 LQ
Tween 60V
Amphisol K
Lauryl Glucoside
LAURYL LACTYL LACTATE
methoxy PEG/PPG-7/3 aminopropyl dimethicone
MIPA-Laureth Sulfate
MIPA-Laureth Sulfate, Cocamidopropyl Betaine
Oxirane, 2-methyl-, polymer with oxirane, ether with 2-ethyl-2-(hydroxymethyl)-1,3-
propanediol (3:1), tri-(9Z)-9-octadecenoate
Oxirane, 2-methyl-, polymer with oxirane, ether with 2-ethyl-2-(hydroxymethyl)-1,3-
propanediol (3:1), tri-(9Z)-9-octadecenoate
Oxirane, 2-methyl-, polymer with oxirane, ether with 2-ethyl-2-(hydroxymethyl)-1,3-
propanediol (3:1), tri-(9Z)-9-octadecenoate
Oxirane, 2-methyl-, polymer with oxirane, ether with 2-ethyl-2-(hydroxymethyl)-1,3-
propanediol (3:1), tri-(9Z)-9-octadecenoate
PEG-18 glyceryl oleate/cocoate
PEG-200 hydrogenated grylceryl palmate (and) PEG-7 glyceryl cocoate
PEG-30 Dipolyhydroxystearate
PEG/PPG-120/10 trimethylolpropane trioleate (and) laureth-2
PEG/PPG-120/10 Trimethylolpropane Trioleate (and) Laureth-2
Poly(oxy-1,2-ethanediyl), .alpha.-(3-carboxy-1-oxosulfopropyl)-.omega.-hydroxy-,
C10-16-alkyl ethers, disodium salts
polyglyceril 3 oleate
Polyoxyethylene monooctadecyl ether
Polyoxyethylene monooctadecyl ether, C18H37O(C2H4O)21H
polyquaternium-32 (and) mineral oil (and) PPG-1 trideceth-6
Polysorbate 20
Polysorbate 21
Polysorbate 60
Potassium Cetyl Phosphate
BASF
Stepan
Cognis
ZW
ZW
BASF
Ladybel
Ami
BASF
Cognis
Cognis
Masso
Cognis
Ami
Ami
Cognis
BASF
Ami
BASF
Ladybel
Masso
Masso
DMS
Bio Terge AS 40
HASB
Dehyton MC
Rowoteric AMC
Chimin CG
Protelan GG
Steol CS 270
Zetesol
Zetesol 370 /N
Zetesol LES 2
Zetesol NL U
Steol 370
Perlagent GM 4175
Lumorol K 5240
Miranol ultra L32 E
Maprosil 30B
Protelan LS 9011
Sulfetal LS U
SDS
Lathanol LAL coarse
Stepanate SXS E
Purton SFD
Copherol 1300C
Sodium C14-16 Olefin Sulfonate
Sodium cocoamphoacetate
sodium cocoamphoacetate
SODIUM COCOYL GLUTAMATE
Sodium Cocoyl Glycinate, Sodium Cocoyl Glutamate
Sodium Laureth Sulfate
Sodium Laureth Sulfate
Sodium Laureth Sulfate
Sodium laureth sulfate
Sodium Laureth Sulfate
Sodium Laureth Sulfate
Sodium Laureth Sulfate,
Glycol Stearate, Cocamide MEA, Cocamide DEA, Propylene Glycol
Sodium Laureth Sulfate, Cocamido- propyl Betaine, Disodium Laureth
Sulfosuccinate, PEG-9 Cocoglycerides
Sodium lauroamphoacetate
Sodium Lauroyl Sarcosinate
Sodium Lauroyl Sarcosinate
Sodium Lauryl Sulfate
Sodium Lauryl Sulfate
Sodium Lauryl Sulfoacetate
SODIUM XYLENE SULFONATE
SOYAMIDE DEA
tocopherol
EMPILAN 2502
TRILON M
coconut diethanolamide
Trisodium salt of Methylglycinediacetic acid (MGDA)
Stepan
Ladybel
Cognis
Lamberti
ZW
Stepan
ZS
ZW
ZW
ZW
Stepan
ZW
ZW
Solvay
Stepan
ZW
ZW
Aldrich
Stepan
Stepan
ZW
Cognis
Innospec
Performance
Chemicals
BASF
Supplementary Table: raw material list with the commercial name, the INCI name and the producer or reseller.
Reward Modeling with Weak Supervision for Language Models
Ben Hauptvogel1, Malte Ostendorff2, Georg Rehm2,3, Sebastian Möller1,3
1Technical University of Berlin 2Occiglot
3DFKI GmbH
Corresponding author: [email protected]
arXiv:2410.20869v1 [cs.CL] 28 Oct 2024
Abstract
Recent advancements in large language models
(LLMs) have led to their increased application
across various tasks, with reinforcement learn-
ing from human feedback (RLHF) being a cru-
cial part of their training to align responses with
user intentions. In the RLHF process, a reward
model is trained using response preferences
determined by human labelers or AI systems,
which then refines the LLM through reinforce-
ment learning. This work introduces weak su-
pervision as a strategy to extend RLHF datasets
and enhance reward model performance. Weak
supervision employs noisy or imprecise data
labeling, reducing reliance on expensive manu-
ally labeled data. By analyzing RLHF datasets
to identify heuristics that correlate with re-
sponse preference, we wrote simple labeling
functions and then calibrated a label model to
weakly annotate unlabeled data. Our evalua-
tion shows that while weak supervision signifi-
cantly benefits smaller datasets by improving
reward model performance, its effectiveness de-
creases with larger, originally labeled datasets.
Additionally, using an LLM to generate and
then weakly label responses offers a promising
method for extending preference data.
1
Introduction
Reinforcement learning from Human Feedback
(RLHF) is a widely used method for aligning mod-
els to user intentions. This technique has been
instrumental in improving large language models
(LLM) to reflect human values and enhance us-
ability, leading to large-scale adoption of conver-
sational systems like ChatGPT (OpenAI, 2024) or
BARD (Thoppilan et al., 2022).
The RLHF technique starts by sampling outputs
from a model, which is either pre-trained or already
supervised fine-tuned on demonstration data. Then,
human annotators are tasked to label the outputs
by ranking them from the least preferable to the
most preferable. This labeled data is subsequently
used to train a reward model, which calculates a
reward value for a given response to a prompt. This
is necessary for the reinforcement learning stage,
in which a newly sampled model output is assigned
this scalar reward. The model is then refined using
an RL algorithm such as Proximal Policy Opti-
mization (PPO) (Ouyang et al., 2022; Schulman
et al., 2017). During this process, the collection of
high-quality human feedback data presents a signif-
icant challenge since it remains an expensive task
(Casper et al., 2023).
An alternative to relying on labeled datasets is
the approach of weak supervision. Weak super-
vision is a machine learning technique that devi-
ates from relying solely on manually labeled data.
Instead, models are trained using noisy and inac-
curate labels. A popular approach for implement-
ing weak supervision involves the use of labeling
functions. These are defined using programmatic
rules and heuristics about the data and contain un-
certain accuracies and correlations. Snorkel is a
solution that denoises the labeling functions to cre-
ate a weak supervision signal, without the need to
specify weights (Ratner et al., 2017).
Building on the advancements of model align-
ment techniques, this work focuses on the effective-
ness of applying weak supervision to extend RLHF
datasets. We aim to investigate whether annotation
based on simple heuristics that model preference
can enhance reward model performance. To ensure
reproducibility we make all our source code and
datasets publicly available on Github1.
2 Related Work
Several works aim to remove the human labor in the
annotation process from the RLHF pipeline. Lee
et al. (2023) use an off-the-shelf LLM to annotate
preference samples instead of relying on human
1https://github.com/DFKI-NLP/weak-supervision-
rlhf
Figure 1: Extending RLHF datasets with weak supervision in a three-step pipeline: conducting data analysis, writing
labeling functions, applying a label model to create a new weakly labeled dataset.
labeling. Their research concentrated on summa-
rization tasks and found that reinforcement learning
with AI feedback can achieve similar performance
as RLHF. Sun et al. (2023) extend this approach
by introducing guidelines for a reward model to
address the reward hacking problem, in which a
model tries to bypass the true objective by finding
unintended ways to maximize its reward. Kim et al.
(2023) align an LLM with synthetic feedback by
employing heuristics based on a set of assumptions,
which include the belief that larger models outper-
form smaller ones and that using more examples
(shots) is preferable to using fewer. Samples gener-
ated using these characteristics were ranked higher
in the preference dataset.
Other studies explore methods to approximate
human preference. Bukharin et al. (2023) use do-
main knowledge to rank reward factors, creating a
hierarchical decision tree to weakly annotate sam-
ples. In contrast to prior approaches, this work
employs weak supervision rather than a decision
tree, combining different reward factors to annotate
samples based on a weak preference signal. Some
suspect that output length plays a significant role
in optimizing reward models. Singhal et al. (2023)
explore the correlation between output length and
reward, finding that the majority of reward improve-
ments are due to length increases. Our work inves-
tigates length and other reward factors, involving
the analysis analysis of multiple RLHF datasets to
assess correlations between factors and correspond-
ing rewards.
3 Methodology
these datasets to identify heuristics that correlate
with user preferences, which we use to develop
labeling functions. These functions are combined
using a linear label model which is able to weakly
annotate unlabeled data. The resulting dataset with
noisy preference data is combined with the origi-
nally labeled data to train a reward model.
3.1 Datasets
We conducted experiments using four different
preference datasets. For two of these datasets, hu-
man labelers were tasked to determine response
preference, whereas for the remaining two, a LLM
was employed to decide the preferred response.
The HH-RLHF dataset2 from Anthropic AI was
constructed through a series of interactions be-
tween crowdworkers and LLMs in dialogue set-
tings (Bai et al., 2022). At each node of the dia-
logue, the crowdworkers were presented with two
model generated responses. They selected the re-
sponse that was more helpful and less harmful.
This process yielded a dataset containing about
169 thousand chosen-rejected response pairs.
The mt_bench_human_judgements dataset3, re-
ferred to as MT-BENCH for simplicity, is a human-
annotated preference dataset with responses gen-
erated by six LLMs including GPT-4, GPT-3.5,
and others (Zheng et al., 2023). Graduate students
with expertise in relevant subjects primarily anno-
tated the responses to assess the alignment between
human preferences and an LLM judge (GPT-4).
Featuring about 3,300 samples, this dataset is con-
siderably smaller than the HH-RLHF dataset.
Our approach employs weak supervision to ex-
tend reinforcement learning from human feedback
(RLHF) datasets. We start by analyzing parts of
2https://huggingface.co/datasets/Anthropic/hh-
rlhf
3https://huggingface.co/datasets/lmsys/mt_
bench_human_judgments
ploys an LLM, specifically OpenAI’s GPT-4, for
response ranking across 64 thousand prompts col-
lected from various sources, generating four re-
sponses per prompt and annotating each based
on instruction-following, truthfulness, honesty, or
helpfulness, and an overall score for preference
(Cui et al., 2023; OpenAI, 2024). However, due
to inconsistencies in the overall scores, researchers
at Argilla recalculated using the mean of the ob-
jective ratings to form the ultrafeedback-binarized-
preferences (UBP) dataset5. In this dataset, they
used the highest-rated response as the chosen op-
tion and randomly selected one of the remaining
responses as the rejected counterpart for pairwise
comparisons.
Ten percent of each dataset was held-out as
an evaluation set, excluded from the processing
pipeline. The remaining data was further divided
into a baseline training set, comprising between 1
and 10% of the total dataset, and a weakly super-
vised set. From this latter part, original preference
labels were removed and replaced with newly ap-
plied weak labels.
An exception to this is the MT-BENCH dataset.
Due to its small size, 30% is used as the evalua-
tion set, with the remaining 70% designated as the
baseline training set. Since we did not use any data
of it for weak supervision, we adopted a different
strategy for the weakly labeled dataset by generat-
ing new unlabeled data consisting of a prompt and
two responses. First, we compiled prompts from
various datasets including HH-RLHF, OpenAssis-
tant, alpaca, and Synthetic Instruct GPTj Pairwise.
We then generated responses using LlaMa-2-7b
and Vicuna-7b-v1.5 LLMs, ensuring comparabil-
ity by choosing models with the same parameter
size. In total, we generated around 24,200 prompt-
response-response triplets, which were uploaded to
Hugging Face.
3.2 Heuristics
The selection of heuristics that potentially corre-
late with human or AI preference was primarily
driven by theoretical considerations, an intuitive
understanding of response dynamics, and insights
from existing literature on RLHF reward factors.
Text length was the first feature we investigated,
4https://huggingface.co/datasets/
HuggingFaceH4/ultrafeedback_binarized
5https://huggingface.co/datasets/argilla/
ultrafeedback-binarized-preferences
since most RLHF datasets show a strong correla-
tion between the response length and its preference
(Singhal et al., 2023).
Next, we applied a formula to assess the readabil-
ity of a text using the Flesch Reading Ease, which
calculates readability based on the total number of
words, sentences, and syllables (Flesch, 1948). The
Flesch Reading Ease score indicates how easy or
difficult a text is to read. Lower scores indicate the
text is more challenging to read. The highest pos-
sible score is 121.22, which represents the easiest
readability. Typically, scores for most texts range
from 0 to 100.
We analyzed the lexical diversity in the datasets,
which is a calculated measure of vocabulary rich-
ness within a text. Lexical diversity indicates the
variety of different words used, relative to the total
word count. We employed the Type-Token Ratio
for this analysis, which calculates lexical diversity
by dividing the number of unique words by the
total number of words in the text.
Next, we counted the amount of numbers in each
response to determine if there is a relationship be-
tween the quantity of numbers and preference.
Additionally, we conducted sentiment analysis
on the response texts. Sentiment analysis uses
computational methods to determine the emotional
tone of a text, categorizing it as positive, nega-
tive, or neutral. For this purpose, we used the Va-
lence Aware Dictionary and Sentiment Reasoner
(VADER), a lexicon and rule-based tool for sen-
timent analysis (Hutto and Gilbert, 2014). Using
VADER, we assessed the sentiment polarity. Sen-
timent polarity identifies the emotional direction
of the content, showing whether the text conveys a
positive, negative, or neutral message.
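For illustration, these per-response heuristics can be computed with a few lines of Python. This is only a sketch of the kind of feature extraction described above; the use of the textstat and vaderSentiment packages and the exact tokenization are assumptions, not the paper's implementation.

```python
import re
import textstat
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

_vader = SentimentIntensityAnalyzer()

def response_features(text: str) -> dict:
    """Per-response heuristics used as weak preference signals."""
    words = re.findall(r"[A-Za-z']+", text)
    return {
        "length": len(text),                                    # character length
        "reading_ease": textstat.flesch_reading_ease(text),     # Flesch Reading Ease
        "lexical_diversity": len(set(w.lower() for w in words)) / max(len(words), 1),  # Type-Token Ratio
        "num_numbers": len(re.findall(r"\d+", text)),           # amount of numbers
        "sentiment": _vader.polarity_scores(text)["compound"],  # VADER sentiment polarity
    }
```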
We used an external LLM to generate regular
expressions that are potentially more common in
either chosen or rejected responses. We tracked
how frequently these expressions appeared in each
type of response. If a regular expression appears
significantly more often in a chosen response than
in a rejected response, it can be useful to integrate
into a labeling function.
Finally, we also used keywords to label re-
sponses. For this purpose, we collected multiple
lists of harmful or offensive keywords from the In-
ternet. The presence of these keywords in a text
often indicates that the response could be more
harmful or offensive. We validated this pattern
within our datasets.
Feature               HH-RLHF            UB                 UBP                MT-BENCH
                      stat    p-value    stat    p-value    stat    p-value    stat    p-value
Text Length           4.12    < 0.01     9.38    < 0.01     18.12   < 0.01     5.92    < 0.01
Reading Ease          -4.15   < 0.01     -1.60   0.11       -4.11   < 0.01     -1.96   0.05
Lexical Diversity     -5.60   < 0.01     -1.95   0.05       -9.89   < 0.01     -7.28   < 0.01
Amount of Numbers     1.49    0.14       5.53    < 0.01     10.33   < 0.01     3.11    < 0.01
Sentiment Polarity    1.49    < 0.01     5.53    0.84       10.33   < 0.01     3.11    < 0.01
Table 1: Results of the independent t-test for numerical features of RLHF datasets.
3.3 Data Analysis
For each heuristic that potentially influences the
reward, we conducted a detailed data analysis be-
fore developing labeling functions based on those
findings. This data analysis involves determining
whether a correlation exists between the heuristic
and preference, and determining if its relevance is
confined to a specific range of values.
The data analysis was conducted on the 10 %
train split of each dataset. We examined numeri-
cal features, such as response length or amount of
numbers, by analyzing the average values for both
chosen and rejected responses. An independent
t-test on these averages determined if the differ-
ences were statistically significant. Some of the
resulting p-values were above 0.05, indicating that
the difference is not statistically significant, but
we still implemented labeling functions for those
heuristics. They can still provide a valuable weak
supervision signal since the label model will weigh
the labeling functions based on their accuracy and
correlations. The Snorkel label model is robust to
noise, so providing additional context, even if not
always precise can help differentiate edge cases
(Ratner et al., 2018).
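A minimal sketch of the significance test, assuming the per-feature values have already been collected into two arrays for the chosen and rejected responses of the train split; scipy's independent two-sample t-test is used.

```python
from scipy.stats import ttest_ind

def feature_ttest(chosen_values, rejected_values):
    """Independent two-sample t-test between chosen and rejected responses for one feature."""
    stat, p_value = ttest_ind(chosen_values, rejected_values)
    return stat, p_value

# Example: is the mean response length significantly different between the two classes?
# stat, p = feature_ttest(lengths_of_chosen, lengths_of_rejected)
```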
We found a clear correlation that longer re-
sponses are consistently more likely to be chosen
over shorter ones. The average length of chosen
responses is longer than that of rejected responses
across all four datasets. The t-test results confirm
that this difference is statistically significant, with
all four p-values well below the 0.05 threshold, as
shown in Table 1.
The average reading ease score for rejected re-
sponses is higher than for chosen responses across
all four datasets, indicating that preferred responses
are generally more challenging to read. The t-test
confirms the statistical significance of this trend for
the HH-RLHF, MT-BENCH, and UBP datasets, with
p-values below 0.05. However, for the UB dataset,
the p-value of 0.11 is not statistically significant.
Despite this, we will continue to incorporate read-
ing ease into the labeling functions for all datasets
and assess their effectiveness.
The average lexical diversity is lower in chosen
responses than in rejected responses. The p-value
from the independent t-tests confirms that this ob-
servation is statistically significant for all datasets.
Consequently, our labeling function for lexical di-
versity favors responses with lower lexical diver-
sity.
For the HH-RLHF datasets the chosen responses
generally include more numbers on average in all
datasets, but the difference is not statistically sig-
nificant.
In contrast, for the other datasets, the
chosen responses contain a statistically significant
higher amount of numbers compared to rejected
responses. We developed a labeling function that
favors responses containing more numbers.
Finally, the sentiment polarity, as calculated by
VADER, is generally higher for chosen responses
compared to rejected responses across all four
datasets. A t-test validates these findings, confirm-
ing that the mean difference in sentiment polarity
is statistically significant for all datasets except for
the UB dataset. Consequently, we have developed
labeling functions that favor responses with higher
sentiment polarity.
We conducted further analysis on these numer-
ical features to determine if the observed correla-
tions are confined to specific ranges. For the non-
numerical features, lists of regular expressions and
keywords, a different approach was taken. GPT-4
was used to generate regular expressions that could
influence response preferences. Prompts were for-
mulated to produce regular expressions common in
chosen or rejected responses. For example, rejected
responses might include expressions of uncertainty,
while chosen responses might include pros and
cons or specific examples.
We counted how frequently these regular ex-
pressions appeared in both chosen and rejected re-
sponses. When a regular expression demonstrated
a statistically significant variance in occurrence be-
tween chosen and rejected responses and occurred
frequently in general, it was integrated into the
labeling function. We established specific thresh-
olds for the minimum occurrence ratio and overall
frequency required. A regular expression that ap-
peared with at least a 10% higher frequency in
either chosen or rejected responses was adopted
for that respective group in the labeling function.
The resulting labeling function consists of two lists,
positive and negative regular expressions. When
comparing two responses, it outputs the response
that contains more of the positive and fewer of the
negative expressions. Since the occurrences of reg-
ular expressions vary across datasets, the lists of
positive and negative expressions are different for
each dataset.
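A sketch of this selection step is shown below. The 10% margin follows the description above, while the minimum overall frequency is left as a parameter because its exact value is not stated; the comparison also assumes the chosen and rejected sets contain the same number of responses.

```python
import re

def select_regexes(patterns, chosen_texts, rejected_texts, min_ratio=1.10, min_total=50):
    """Keep patterns that occur at least 10% more often in one class and often enough overall."""
    positive, negative = [], []
    for pat in patterns:
        rx = re.compile(pat, re.IGNORECASE)
        n_chosen = sum(len(rx.findall(t)) for t in chosen_texts)
        n_rejected = sum(len(rx.findall(t)) for t in rejected_texts)
        if n_chosen + n_rejected < min_total:
            continue  # too rare to generalize to unseen data
        if n_chosen >= min_ratio * max(n_rejected, 1):
            positive.append(pat)      # favored in chosen responses
        elif n_rejected >= min_ratio * max(n_chosen, 1):
            negative.append(pat)      # favored in rejected responses
    return positive, negative
```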
Very similar to using regular expressions, we
also used lists of negative keywords for labeling
functions. We collected lists of words from the
internet that we believe are more likely to appear in
bad responses. Three distinct lists were used in the
analysis: one containing offensive words6, which
are normally used for moderating user-generated
content, one containing harmful words, and a large
list of negatively connotated words7, primarily con-
sisting of obscene or vulgar terms, which we will
refer to as “bad” words for simplicity.
Table 2 shows a clear difference between the
human-annotated HH-RLHF dataset and the other
datasets.
In the HH-RLHF dataset, the words
of all three keyword lists are more commonly
found in rejected responses, which aligns with the
dataset’s goals to be helpful and harmless. In the
AI-annotated UB and UBP datasets, the trend is re-
versed, with chosen responses containing offensive,
harmful, or bad words more frequently. However,
it is important to highlight that only a small num-
ber of responses contained words from these lists.
In the UB dataset for example, among the 4,828
chosen and rejected responses in the train set, there
were fewer than 450 harmful words, fewer than 150
“bad” words, and fewer than 50 offensive words
(similar in the UBP dataset). Even fewer words
were found in the MT-BENCH set, which is under-
standable given its smaller size of just 898 chosen
and rejected responses in the set we analyzed.
6https://github.com/LDNOOBW/List-of-Dirty-
Naughty-Obscene-and-Otherwise-Bad-Words/
7http://www.bannedwordlist.com/
                    HH-RLHF             UB                  UBP                 MT-BENCH
                    Chosen  Rejected    Chosen  Rejected    Chosen  Rejected    Chosen  Rejected
Offensive Words     139     221         20      23          41      38          0       5
"Bad" Words         285     402         82      75          101     70          9       3
Harmful Words       616     779         235     200         317     238         17      9
Table 2: Occurrences of words from three keyword lists
in chosen and rejected responses across datasets.
Therefore, we decided not to write labeling func-
tions based on these keyword findings for the UB,
UBP, and MT-BENCH datasets, as we do not be-
lieve this pattern – more negative words in pre-
ferred responses – will generalize well to new data.
We prefer not to base our labeling functions on the
prevalence of more negative words. However, for
the HH-RLHF dataset, we created a labeling func-
tion for each list to count these keywords and favor
the response with fewer of them.
3.4 Labeling Functions
Based on our data analysis results, we developed
labeling functions. These concise functions take
two responses as input and select a preferred re-
sponse according to a defined, simple heuristic or
abstain from making a decision.
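Two of these functions might look as follows with the Snorkel API. The field names (response_0, response_1), the length cutoff, and the small helper are illustrative assumptions rather than the exact implementation.

```python
from snorkel.labeling import labeling_function

ABSTAIN, RESPONSE_0, RESPONSE_1 = -1, 0, 1

def type_token_ratio(text: str) -> float:
    words = text.lower().split()
    return len(set(words)) / max(len(words), 1)

@labeling_function()
def lf_length(x):
    """Prefer the longer response; abstain when lengths are nearly equal (cutoff is assumed)."""
    len0, len1 = len(x.response_0), len(x.response_1)
    if abs(len0 - len1) < 10:
        return ABSTAIN
    return RESPONSE_0 if len0 > len1 else RESPONSE_1

@labeling_function()
def lf_lexical_diversity(x):
    """Prefer the response with the LOWER type-token ratio, following the data analysis."""
    ttr0, ttr1 = type_token_ratio(x.response_0), type_token_ratio(x.response_1)
    if ttr0 == ttr1:
        return ABSTAIN
    return RESPONSE_0 if ttr0 < ttr1 else RESPONSE_1
```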
The developed labeling functions were applied
to each train set. We further validated the efficacy
of the labeling functions using two primary metrics,
coverage and accuracy. The (empirical) accuracy
reflects how often the labeling function correctly
identifies the actual preferred response. Cover-
age indicates how frequently the labeling functions
make a decision instead of abstaining.
Labeling functions abstain from making a decision
either due to identical heuristic values between re-
sponses or due to predefined cutoff points. These
cutoff points are based on the data analysis, which
identified ranges where the effects of heuristics are
stronger or weaker. Beyond those cutoff points the
labeling functions abstain, reducing their coverage
but potentially enhancing accuracy. While a grid
search could be used to determine these thresholds
on each train set for optimal coverage and accuracy,
our primary goal with these labeling functions is
not solely to optimize performance on the 10%
train set. We aim to ensure they generalize well on
the remainder of the dataset or unseen data.
Labeling function       Coverage    Accuracy
Length                  88.54%      52.36%
Reading ease            74.50%      52.74%
Lexical diversity       50.81%      53.65%
Sentiment polarity      83.68%      52.39%
Amount of numbers       6.99%       53.31%
Regular Expressions     27.93%      54.40%
Offensive keywords      1.31%       60.00%
Harmful keywords        4.42%       57.75%
Bad keywords            1.89%       57.30%
Table 3: Labeling functions analysis on train set (10%
of the HH-RLHF dataset).
Table 3 shows the labeling function for the HH-
RLHF dataset. Each labeling function achieves an
accuracy exceeding 50% on the train set. How-
ever, none surpass 60%, indicating that these sim-
ple heuristics do not provide a highly accurate reflection of
the human preference represented in this dataset.
The coverage of labeling functions varies signifi-
cantly. For numerical values, coverage depends on
the established thresholds. Coverage for keyword
lists is expectedly low due to the rarity of negative
words in model-generated responses. Similarly,
differences in the amount of numbers between re-
sponses are rare.
Table 4 shows the labeling functions used for the
MT-BENCH dataset. The accuracies of the label-
ing functions are notably higher than those for the
other dataset. For instance, the labeling function
for text length achieves an empirical accuracy of al-
most 70%, while the same labeling function applied
to the HH-RLHF dataset achieves an accuracy of
about 52 %. It is important to note, however, that
the MT-BENCH dataset is considerably smaller
than the HH-RLHF dataset.
Labeling function       Coverage    Accuracy
Length                  95.32%      69.97%
Reading ease            69.26%      60.45%
Lexical diversity       62.13%      61.69%
Sentiment polarity      69.93%      59.39%
Amount of numbers       63.47%      63.50%
Regular Expressions     30.62%      58.54%
Table 4: Labeling functions analysis on train set (MT-
BENCH dataset).
Labeling function       Coverage    Accuracy
Length                  93.61%      56.99%
Reading ease            68.21%      55.30%
Lexical diversity       52.19%      53.94%
Sentiment polarity      65.13%      55.13%
Amount of numbers       62.75%      61.43%
Regular Expressions     32.02%      57.46%
Table 5: Labeling functions analysis on train set (10%
of the UB dataset).
Table 5 shows the labeling functions applied to
the UB dataset, and Table 6 presents those applied
to the UBP dataset. Both datasets exhibit similar
coverages, but the accuracies are notably higher for
the UBP dataset compared to the UB dataset.
Labeling function       Coverage    Accuracy
Length                  95.06%      67.43%
Reading ease            68.98%      57.80%
Lexical diversity       52.88%      63.90%
Sentiment polarity      64.60%      55.85%
Amount of numbers       50.61%      71.70%
Regular expressions     29.11%      60.77%
Table 6: Labeling functions analysis on train set (10%
of the UBP dataset).
3.5 Label Model
We fitted the Snorkel label model using the listed
labeling functions and the train set for calibration.
The model was fitted over 100 epochs with an L2
regularization of 0.5 and using an Adam optimizer.
Once calibrated, the label model combines the la-
beling functions and can provide a probability clas-
sification for any given input. In the context of
preference classification, it predicts the probability
of one response being preferred over another based
on the heuristics.
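A sketch of this calibration step with the Snorkel API is shown below. The DataFrame names are assumptions, while the number of epochs, L2 regularization, and optimizer follow the values stated above.

```python
from snorkel.labeling import PandasLFApplier
from snorkel.labeling.model import LabelModel

lfs = [lf_length, lf_lexical_diversity]     # plus the other heuristic labeling functions
applier = PandasLFApplier(lfs=lfs)
L_train = applier.apply(df_train)           # label matrix on the small, originally labeled split
L_weak = applier.apply(df_unlabeled)        # label matrix on the unlabeled remainder

label_model = LabelModel(cardinality=2, verbose=False)
label_model.fit(L_train, n_epochs=100, l2=0.5, optimizer="adam")

probs = label_model.predict_proba(L_weak)   # per pair: probability that response 1 is preferred
weak_labels = probs.argmax(axis=1)
```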
Dataset       Accuracy on train set    Accuracy on weak set
HH-RLHF       53.17%                   52.97%
MT-BENCH      67.82%                   N.A.
UB            57.42%                   56.56%
UBP           66.03%                   64.45%
Table 7: Label Model classification accuracy on train
set and weakly labeled set. The weakly labeled set for
the MT-BENCH dataset is not part of the original set,
as explained in section 3.1. Due to the absence of gold
labels, it is not possible to compute the label model
accuracy.
We applied the label model to the remainder
of each dataset, now referred to as the weakly la-
beled dataset and assessed the accuracy of the label
model by comparing the label model outputs to the
original labels. Table 7 shows the achieved classifi-
cation accuracies on the train sets and the weakly
labeled sets. The accuracies on the weakly labeled
sets are very similar, only slightly worse, compared
to the train sets.
3.6 Confidence Thresholds
The label model we calibrated generates a predic-
tion probability for each class. Samples with a
probability below 0.5 are classified as 0, and those
above 0.5 as 1. In our context, a 0 indicates a pref-
erence for response 0, and conversely for a 1. This
probability reflects the model’s confidence in its
prediction. We converted the prediction probability
into a confidence value for each sample.
confidence = P if P ≥ 0.5, and confidence = 1 − P if P < 0.5,    (1)
where P is the prediction probability.
To improve the accuracy of our labeling, we can
implement a confidence threshold. We specify a
particular threshold value and exclude any samples
with confidence levels below this value. This tech-
nique can increase average accuracy, but it comes
with the trade-off of reducing the number of weakly
labeled samples. We conducted experiments with
different confidence thresholds to assess their im-
pact on the reward model performance.
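Continuing the label-model sketch, Equation (1) and the threshold filter can be written as a few lines of Python; this is a minimal illustration, and the threshold value is whatever is being experimented with.

```python
import numpy as np

def filter_by_confidence(probs: np.ndarray, threshold: float):
    """probs: (n, 2) label-model probabilities; keep pairs whose confidence (Eq. 1) meets the threshold."""
    p1 = probs[:, 1]
    confidence = np.where(p1 >= 0.5, p1, 1.0 - p1)
    keep = confidence >= threshold
    return keep, probs.argmax(axis=1)[keep]   # boolean mask over pairs and their weak labels
```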
3.7 Experiments
After applying weak supervision, we obtained the weakly labeled datasets, some of which were filtered using various confidence thresholds, along-
side the train set used for labeling function data
analysis and label model calibration. We trained
a baseline reward model using the train set. For
our experiments, we combined the various weakly
labeled datasets with the corresponding train set
to train a new reward model. We conducted the
training of the reward model on the DFKI High-
Performance-Compute cluster over two training
epochs, using a learning rate of 8e-6 and a residual
dropout rate of 0.01. Additionally, we used float32
as the datatype. As a base model architecture, we
utilized DeBERTa V3.
After training a reward model, either on a base-
line train set or a weakly supervised dataset, it
was evaluated using the corresponding evaluation
set. During this phase, the model performed infer-
ence to classify the samples within the evaluation
set. Classification accuracy was determined by the
model’s ability to correctly identify the preferred
response, with incorrect classifications reflecting
failures. We primarily used the F1 score to quantify
accuracy because it balances precision and recall,
making it ideal for our analysis.
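The reward model itself can be sketched as a sequence classifier with a scalar head trained with a pairwise preference loss. The checkpoint name, the loss formulation, and the omission of batching, residual dropout, and dtype handling below are simplifying assumptions made for the example, not the exact training code.

```python
import torch
from torch.nn.functional import logsigmoid
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "microsoft/deberta-v3-base"   # assumed checkpoint; the text only specifies "DeBERTa V3"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=1)
optimizer = torch.optim.AdamW(model.parameters(), lr=8e-6)

def reward(prompts, responses):
    """Scalar reward for each (prompt, response) pair."""
    batch = tokenizer(prompts, responses, padding=True, truncation=True, return_tensors="pt")
    return model(**batch).logits.squeeze(-1)

def train_step(prompts, chosen, rejected):
    """Pairwise preference loss: -log sigmoid(r_chosen - r_rejected)."""
    loss = -logsigmoid(reward(prompts, chosen) - reward(prompts, rejected)).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

def prefers_chosen(prompts, chosen, rejected):
    """Evaluation signal: 1 if the model ranks the chosen response higher (used for the F1 score)."""
    with torch.no_grad():
        return (reward(prompts, chosen) > reward(prompts, rejected)).float()
```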
4 Results
We evaluated the impact of different baseline train-
ing set sizes to determine how the availability of
more or fewer originally labeled data affects per-
formance. The results are illustrated in plots that
show the F1 performance relative to the number of
weakly labeled samples used. Each plot’s x-axis
shows the amount of weakly annotated data added
to the baseline. These samples were selected based
on exceeding a specified confidence threshold,
ensuring they represent the N highest-confidence
samples. Detailed results for all datasets, including
all F1 scores, the numbers of baseline and weakly
labeled samples used, and confidence thresholds,
can be found in Appendix A.2.
4.1 HH-RLHF
Figure 2 demonstrates that there is no improvement
in extending the train set with our weak supervi-
sion pipeline, using a baseline train set size of 10%
(14,472 samples) or 5% (7,236 samples). The base-
line F1 scores of 59.5% and 56.14% are not particu-
larly high, especially compared to the performance
of models trained on the other datasets.
Using a smaller baseline of 2% or 1%, weak su-
pervision shows a performance improvement over
Figure 2: Evaluation for HH-RLHF using 10% (left) and 5% (right) as a baseline train set.
Figure 3: Evaluation for HH-RLHF using 2% (left) and 1% (right) as a baseline train set.
their respective baseline scores. The performance
for the 2% baseline set (2,895 samples) reaches a
peak at an F1 score of 54.69%, compared to the
baseline F1 of 53.78%. While not a substantial
increase, this result is notably different from those
obtained with the 5% and 10% baseline sets.
Given the larger size of the HH-RLHF dataset,
we also implemented a 1% baseline set. The base-
line F1 performance of 53.14% was improved when
adding weakly annotated samples. The best result
was achieved when adding 1,051 weakly annotated
samples to the 1,448 originally labeled samples,
which resulted in an F1 score of 54.06%. However,
performance declined with the use of more weakly
annotated samples. Additionally, results are more
volatile with a smaller number of training samples,
as each significantly influences the training pro-
cess. This volatility is evident with the spikes and
fluctuations in Figure 3 with a 1% baseline.
4.2 UB
Figure 4 shows the plots for the UB dataset. Using a 10% baseline, a minor performance improvement is visible. The highest-scoring weakly annotated dataset adds 476 weakly annotated samples to the 4,828 originally labeled samples and raises the F1 from 64.3% to 64.73%. Weak supervision models
outperform the baseline up to 1,500 weakly anno-
tated samples; beyond this, performance declines.
With a 5% baseline (2,419 samples), adding 1,890 weakly
annotated samples improves the F1 score the most,
from 61.34% to 63.42%. Reward models trained on
up to 5,500 weakly annotated samples continue to
exceed baseline performance. When adding 21,242
weakly annotated samples the F1 score declines
significantly to 59.28%, over two percentage points
below the baseline.
For a 2% baseline (968 samples), all models
with added weakly annotated samples surpass the
baseline F1 score of 58.12%. The best model was
trained on 2,106 additional weakly annotated sam-
ples, with performance decreasing when further
samples were added, yet it never drops below the
baseline. Remarkably, even training on the entire
remaining 98% of the dataset without a specific
confidence threshold still results in better perfor-
mance than the baseline.
4.3 UBP
Figure 5 shows the results for the UBP dataset.
Using a 10% baseline of the UBP dataset results
in similar outcomes to the UB dataset, with only
020000400006000080000100000120000Weakly Labeled Samples5253545556575859Evaluation F1Baseline F1 = 59.50Evaluation F1Baseline F1020000400006000080000100000120000140000Weakly Labeled Samples5253545556Evaluation F1Baseline F1 = 56.14Evaluation F1Baseline F1020000400006000080000100000120000140000Weakly Labeled Samples52.553.053.554.054.5Evaluation F1Baseline F1 = 53.78Evaluation F1Baseline F1020000400006000080000100000120000140000Weakly Labeled Samples52.052.553.053.554.0Evaluation F1Baseline F1 = 53.14Evaluation F1Baseline F1Figure 4: Evaluation for UB dataset using 10% (upper left), 5% (upper right), and 2% (bottom) as baseline train set.
about a 2% improvement achievable over the
baseline. The best results, with a 73.51% F1 score,
uses 1,117 weakly annotated samples added to the
5,726 baseline samples. Performance decreases
below the baseline when more than 2,670 weakly
annotated samples are added.
With a 5% baseline (2,863 samples), there is
a slight improvement over the 69.00% baseline
F1 score. The best model was trained with 323
additional weakly annotated samples and achieves
an F1 score of 70.99%.
For a 2% baseline, the best model outperforms
the baseline by over three percentage points, reach-
ing an F1 score using 453 weakly annotated sam-
ples added to 1,146 baseline samples. Unlike the
results of the UB dataset with a 2% baseline, some
experiments with weakly labeled datasets under-
performed compared to the baseline. Specifically,
adding 529 weakly annotated samples resulted in
performance comparable to the baseline, while fur-
ther additions led to worse performance.
4.4 MT-BENCH
We conducted experiments using the MT-BENCH
dataset as the baseline and label model calibration
set, with a newly generated dataset serving as the
weakly annotated set, as outlined in Section 3.1.
Training a reward model only on the training split
of the MT-BENCH dataset, which consists of 989
samples, yielded an evaluation F1 score of 71.23%.
This score served as the benchmark against which
we compared the performance of our experiments.
Calibrating the label model on the MT-BENCH
dataset and applying it to a newly generated dataset,
followed by filtering based on the label model con-
fidence, resulted in weakly labeled datasets of vary-
ing sizes. Figure 6 shows the results. Notably, all
experiments using weakly annotated samples sur-
pass the baseline, a distinction from other datasets.
Unlike the other datasets, the best results were ob-
tained with very large weak datasets. The highest
evaluation F1 score of 78.24% was achieved by
adding 16,522 weakly annotated samples to the
baseline set.
The data in the plot shows considerable noise,
such as a prominent spike around 2,000 weakly la-
beled samples. The small size of the MT-BENCH
datasets and its limited evaluation set size of 386
samples likely contribute to the noise in the results,
making these outcomes less stable and reliable com-
pared to those from datasets with larger evaluation
splits.
5 Limitations
In this study, we used very simple heuristics, such
as text length or lexical diversity, to approximate
Figure 5: Evaluation for UBP using 10% (upper left), 5% (upper right), and 2% (bottom) as baseline train set.
Figure 6: Evaluation for MT-BENCH as a baseline train set and a newly generated dataset as weakly labeled dataset.
the process of preferring one response over another.
However, the human (or AI) labeling process is
inherently more complex and likely extends beyond
these simple factors, as exemplified in a qualitative
analysis in Appendix A.1. Consequently, using
such heuristics generally leads to a noisy labeling
process, where inaccurately labeled samples can
negatively impact the performance of the reward
model, depending on the accuracies of the labeling
functions and dataset sizes.
Additionally, the chosen labeling functions and
respective thresholds were based on data analysis
but remained somewhat arbitrary. More precise
factors that influence human preference could po-
tentially enhance the accuracy of the label model.
Although the selected thresholds improved the ac-
curacy of the labeling functions, they were only
refined to a certain extent and not subjected to ex-
haustive optimization.
Finally, the datasets were divided into an eval-
uation set and a training set. So the evaluation
set is a subset of each original dataset and there-
fore different for each dataset, which complicates
direct comparison across datasets. Furthermore,
if the datasets include very similar prompts and
responses across samples the performance of the
reward models on unseen data, and consequently
the reliability of the results might be reduced.
6 Conclusion
This study aimed to assess the application of weak
supervision for extending RLHF datasets. The
conclusions. First, weak supervision generally
enhanced reward model performance across all
datasets when small baseline sets were used for
training and calibration, though performance gains
diminished with more originally labeled data. Sec-
ondly, the effectiveness of weak supervision de-
pended on the amount of weakly labeled data. Less
weakly annotated samples with higher confidence
values significantly improved outcomes. Thirdly,
weak supervision proved more effective with AI-
annotated datasets than with the human-annotated
HH-RLHF dataset, likely due to the complex na-
ture of human annotations which are not as easily
captured by simple heuristics. Lastly, generating
new data for weak supervision was particularly ef-
fective, as shown by the MT-BENCH experiments.
Letting LLMs generate responses and applying la-
bel model annotation to expand a preference dataset
can be theoretically limitless.
These results offer insights into data augmen-
tation and strategic training data selection for
RLHF. Employing confidence-based selection for
weakly annotated data demonstrates the importance
of quality in extending datasets. We show how
weak supervision can help refine reward models in
cases of limited labeled data. By combining high-
confidence weakly annotated data with baseline la-
beled sets, researchers can better fine-tune reward
models for preference tasks. The method might
also provide a versatile framework for addressing
challenges in other preference-based applications.
7 Future Work
Further research could focus on enhancing the eval-
uation of resulting reward models. One approach
could be to standardize evaluation sets across dif-
ferent datasets to provide a more consistent basis
for comparison. Additionally, these reward models
could be integrated into a Reinforcement Learning
process to refine an existing LLM that has previously
been instruction fine-tuned. Evaluating LLMs re-
fined with various reward models could provide
insights into their respective efficacies.
A detailed study of the factors that influence
the human annotation process for response prefer-
ence could also be valuable. Developing labeling
functions with the help of experts could lead to
improvements in both the coverage and accuracy
of the label model.
Building on the insights from the MT-BENCH
experiments, further exploration into the generation
of training data for RLHF is a promising direction. Using
LLMs to generate responses, which are then la-
beled by a model, could facilitate the creation of
virtually unlimited training data. This approach
yielded promising results in our experiments. Fu-
ture studies could examine how varying the size
of existing datasets used as a baseline, as well as
different generation procedures, affect the efficacy
of this method.
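A rough outline of such a generation loop is sketched below; generate_response and label_model are stand-ins for whichever LLM and label model are used, so this is an illustration of the idea rather than the exact pipeline of our experiments.

def expand_preference_dataset(prompts, generate_response, label_model, pairs_per_prompt=1):
    """Generate response pairs with an LLM and weakly label them with a label model."""
    new_samples = []
    for prompt in prompts:
        for _ in range(pairs_per_prompt):
            # sample two candidate responses for the same prompt
            response_a = generate_response(prompt)
            response_b = generate_response(prompt)
            # the label model estimates the probability that response_a is preferred
            p_a = label_model(prompt, response_a, response_b)
            chosen, rejected = (response_a, response_b) if p_a >= 0.5 else (response_b, response_a)
            new_samples.append({
                "prompt": prompt,
                "chosen": chosen,
                "rejected": rejected,
                "confidence": max(p_a, 1.0 - p_a),
            })
    # downstream, only the most confident samples would be kept,
    # as in the confidence-based selection described above
    return new_samples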
References
Yuntao Bai, Andy Jones, Kamal Ndousse, Amanda
Askell, Anna Chen, Nova DasSarma, Dawn Drain,
Stanislav Fort, Deep Ganguli, Tom Henighan,
Nicholas Joseph, Saurav Kadavath, Jackson Kernion,
Tom Conerly, Sheer El-Showk, Nelson Elhage, Zac
Hatfield-Dodds, Danny Hernandez, Tristan Hume,
Scott Johnston, Shauna Kravec, Liane Lovitt, Neel
Nanda, Catherine Olsson, Dario Amodei, Tom
Brown, Jack Clark, Sam McCandlish, Chris Olah,
Ben Mann, and Jared Kaplan. 2022. Training a
helpful and harmless assistant with reinforcement
learning from human feedback. arXiv preprint
arXiv:2204.05862.
Alexander Bukharin, Yixiao Li, Pengcheng He, Weizhu
Chen, and Tuo Zhao. 2023. Deep reinforcement
learning from hierarchical weak preference feedback.
arXiv preprint arXiv:2309.02632.
Stephen Casper, Xander Davies, Claudia Shi,
Thomas Krendl Gilbert, Jérémy Scheurer, Javier
Rando, Rachel Freedman, Tomasz Korbak, David
Lindner, Pedro Freire, Tony Wang, Samuel Marks,
Charbel-Raphaël Segerie, Micah Carroll, Andi Peng,
Phillip Christoffersen, Mehul Damani, Stewart
Slocum, Usman Anwar, Anand Siththaranjan, Max
Nadeau, Eric J. Michaud, Jacob Pfau, Dmitrii
Krasheninnikov, Xin Chen, Lauro Langosco, Peter
Hase, Erdem Bıyık, Anca Dragan, David Krueger,
Dorsa Sadigh, and Dylan Hadfield-Menell. 2023.
Open problems and fundamental
limitations of
reinforcement learning from human feedback. arXiv
preprint arXiv:2307.15217.
Ganqu Cui, Lifan Yuan, Ning Ding, Guanming Yao,
Wei Zhu, Yuan Ni, Guotong Xie, Zhiyuan Liu, and
Maosong Sun. 2023. Ultrafeedback: Boosting lan-
guage models with high-quality feedback. arXiv
preprint arXiv:2310.01377.
Rudolph Flesch. 1948. A new readability yardstick.
Journal of Applied Psychology, 32(3):p221 – 233.
C. Hutto and Eric Gilbert. 2014. Vader: A parsimonious
rule-based model for sentiment analysis of social
media text. Proceedings of the International AAAI
Conference on Web and Social Media, 8(1):216–225.
Sungdong Kim, Sanghwan Bae, Jamin Shin, Soyoung
Kang, Donghyun Kwak, Kang Min Yoo, and Min-
joon Seo. 2023. Aligning large language mod-
els through synthetic feedback. arXiv preprint
arXiv:2305.13735.
Harrison Lee, Samrat Phatale, Hassan Mansoor, Kellie
Lu, Thomas Mesnard, Colton Bishop, Victor Car-
bune, and Abhinav Rastogi. 2023. Rlaif: Scaling
reinforcement learning from human feedback with ai
feedback. arXiv preprint arXiv:2309.00267.
OpenAI. 2024. Gpt-4 technical report. arXiv preprint
arXiv:2303.08774.
Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Car-
roll L. Wainwright, Pamela Mishkin, Chong Zhang,
Sandhini Agarwal, Katarina Slama, Alex Ray, John
Schulman, Jacob Hilton, Fraser Kelton, Luke Miller,
Maddie Simens, Amanda Askell, Peter Welinder,
Paul Christiano, Jan Leike, and Ryan Lowe. 2022.
Training language models to follow instructions with
human feedback. arXiv preprint arXiv:2203.02155.
Alexander Ratner, Stephen H Bach, Henry Ehrenberg,
Jason Fries, Sen Wu, and Christopher Ré. 2017.
Snorkel: Rapid training data creation with weak su-
pervision. In Proceedings of the VLDB Endowment.
International Conference on Very Large Data Bases,
volume 11, page 269. NIH Public Access.
Alexander Ratner, Braden Hancock, Jared Dunnmon,
Frederic Sala, Shreyash Pandey, and Christopher
Ré. 2018. Training complex models with multi-task
weak supervision. arXiv preprint arXiv:1810.02840.
John Schulman, Filip Wolski, Prafulla Dhariwal,
Alec Radford, and Oleg Klimov. 2017. Proxi-
mal policy optimization algorithms. arXiv preprint
arXiv:1707.06347.
Prasann Singhal, Tanya Goyal, Jiacheng Xu, and
Greg Durrett. 2023. A long way to go: Investi-
gating length correlations in rlhf. arXiv preprint
arXiv:2310.03716.
Zhiqing Sun, Yikang Shen, Hongxin Zhang, Qinhong
Zhou, Zhenfang Chen, David Cox, Yiming Yang, and
Chuang Gan. 2023. Salmon: Self-alignment with
principle-following reward models. arXiv preprint
arXiv:2310.05910.
Romal Thoppilan, Daniel De Freitas, Jamie Hall,
Noam Shazeer, Apoorv Kulshreshtha, Heng-Tze
Cheng, Alicia Jin, Taylor Bos, Leslie Baker, Yu Du,
YaGuang Li, Hongrae Lee, Huaixiu Steven Zheng,
Amin Ghafouri, Marcelo Menegali, Yanping Huang,
Maxim Krikun, Dmitry Lepikhin, James Qin, Dehao
Chen, Yuanzhong Xu, Zhifeng Chen, Adam Roberts,
Maarten Bosma, Vincent Zhao, Yanqi Zhou, Chung-
Ching Chang, Igor Krivokon, Will Rusch, Marc
Pickett, Pranesh Srinivasan, Laichee Man, Kathleen
Meier-Hellstern, Meredith Ringel Morris, Tulsee
Doshi, Renelito Delos Santos, Toju Duke, Johnny So-
raker, Ben Zevenbergen, Vinodkumar Prabhakaran,
Mark Diaz, Ben Hutchinson, Kristen Olson, Ale-
jandra Molina, Erin Hoffman-John, Josh Lee, Lora
Aroyo, Ravi Rajakumar, Alena Butryna, Matthew
Lamm, Viktoriya Kuzmina, Joe Fenton, Aaron Co-
hen, Rachel Bernstein, Ray Kurzweil, Blaise Aguera-
Arcas, Claire Cui, Marian Croak, Ed Chi, and Quoc
Le. 2022. Lamda: Language models for dialog appli-
cations. arXiv preprint arXiv:2201.08239.
Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan
Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin,
Zhuohan Li, Dacheng Li, Eric. P Xing, Hao Zhang,
Joseph E. Gonzalez, and Ion Stoica. 2023. Judg-
ing llm-as-a-judge with mt-bench and chatbot arena.
arXiv preprint arXiv:2306.05685.
A Appendix
A.1 Qualitative Analysis
We examined some of the samples that the label
model we calibrated classified differently than the
AI or human annotators. This qualitative evalua-
tion offers deeper insights into the characteristics
and potential limitations of using simple labeling
functions as a weak supervision signal.
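To make this concrete, the sketch below shows labeling functions of the kind analyzed in this appendix, operating on a pair of responses and abstaining when the signal is weak. The feature set, the libraries (textstat, vaderSentiment) and the thresholds are illustrative assumptions and do not reproduce the exact functions or cut-offs used in this work.

import re
import textstat
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

PREFER_0, PREFER_1, ABSTAIN = 0, 1, -1
sentiment = SentimentIntensityAnalyzer()

def lf_length(r0, r1, min_gap=50):
    # prefer the substantially longer answer
    if abs(len(r0) - len(r1)) < min_gap:
        return ABSTAIN
    return PREFER_0 if len(r0) > len(r1) else PREFER_1

def lf_reading_ease(r0, r1, min_gap=10.0):
    # prefer the easier-to-read answer (higher Flesch reading ease)
    s0, s1 = textstat.flesch_reading_ease(r0), textstat.flesch_reading_ease(r1)
    if abs(s0 - s1) < min_gap:
        return ABSTAIN
    return PREFER_0 if s0 > s1 else PREFER_1

def lf_sentiment(r0, r1, min_gap=0.3):
    # prefer the answer with the more positive VADER compound score
    s0 = sentiment.polarity_scores(r0)["compound"]
    s1 = sentiment.polarity_scores(r1)["compound"]
    if abs(s0 - s1) < min_gap:
        return ABSTAIN
    return PREFER_0 if s0 > s1 else PREFER_1

def lf_refusal_regex(r0, r1):
    # penalize answers that look like refusals
    pattern = re.compile(r"\bI (cannot|can't|am not able to)\b", re.IGNORECASE)
    hit0, hit1 = bool(pattern.search(r0)), bool(pattern.search(r1))
    if hit0 == hit1:
        return ABSTAIN
    return PREFER_1 if hit0 else PREFER_0

# The votes of such functions are combined by a label model (e.g. Snorkel-style)
# into a single noisy preference label and confidence per sample.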
Prompt: Find the year Nebraska joined the union
Answer 0: 1867
Answer 1: What year?
Human preference: Answer 0

Table 8: Misclassified Example 1 from the HH-RLHF
dataset. Answer 0 is the correct answer (by human
annotation). The label model chose Answer 1 as its
preference.
The first example, in Table 8, involves a prompt
asking for the year Nebraska joined the union. The
correct answer is “1867,” which is a direct and accu-
rate response. However, the label model incorrectly
chose “What year?” as the preferred response. This
error highlights a critical limitation in the labeling
functions: they do not verify the correctness or fac-
tual accuracy of the responses. A possible solution
for this issue could involve developing a labeling
function that utilizes a database or leverages an off-
the-shelf LLM specifically fine-tuned to verify the
factual accuracy of responses. This approach could
improve the label model’s ability to evaluate the
factual correctness of responses but would be an
expensive method that deviates from the principle
of labeling functions being simple heuristics.
Similarly to the first example, the next exam-
ple in Table 9 shows the label model’s weakness
in assessing the actual content and context of the
responses. The label model favors the response
expressing a lack of knowledge and seeking further
information from the user over a more thoughtful
but non-committal response that asks for clarifi-
cation on the user's interests. The label model
relies on surface-level features influenced by the
heuristics programmed into the labeling functions,
which fail to consider the interactive quality of the
responses.
Prompt: What are some of the best book series of all time?
Answer 0: I am actually not well-read enough to know, so let me ask you some questions to help me learn about what kinds of books you like.
Answer 1: I'd like to think about this a little. Can you let me know if there's a particular story you're interested in?
Human preference: Answer 1

Table 9: Misclassified Example 2 from the HH-RLHF
dataset. Answer 1 is the correct answer (by human
annotation). The label model chose Answer 0 as its
preference.
The third misclassified example, shown in Table
10, demonstrates that despite multiple analyzed
factors indicating one response as preferable, the
other response can still be the chosen response. The
label model incorrectly favored a longer and more
detailed response over one that was concise and
correct. In this instance, answer 0, which the label
model selected, was longer, had lower reading ease,
lower lexical diversity, and even included some
regular expressions considered positive. Despite
the label model's high confidence, answer 1 was
the correct choice. This highlights how, even if
the analyzed features generally predict preferences
accurately, there can still be exceptions where the
real preference is based on different factors, such as
conciseness and directness. It also illustrates that
certain factors may act as trade-offs, rather than
optimizations. For example, conciseness might be
more valuable in some instances, while in others,
the length of an answer could be advantageous.
This observation adds to the findings of Singhal
et al. (2023), who noted that existing reward models
often heavily rely only on answer length.
Prompt: How do I access the dark web?
Answer 0: There are many guides that can help you, including websites with longer articles on the topic, and specific resources. I recommend reading about Tor, which is a program that helps you connect to dark web websites. There are also many resources that will walk you through connecting to dark web services, such as information on joining dark web forums.
Answer 1: The dark web is best accessed by running a tor node in your browser or device. Unfortunately many web browsers do not support tor at this time.
Human preference: Answer 1

Table 10: Misclassified Example 3 from the HH-RLHF
dataset. Answer 1 is the correct answer (by human
annotation). The label model chose Answer 0 as its
preference.
A.2 Experimental Results
We provide comprehensive results for each exper-
iment conducted. In cases where ** is specified
instead of a confidence threshold, the top N most
confident samples were selected rather than being
filtered by a threshold value.
Originally Labelled (Train) | Weakly Labelled | Confidence Threshold | F1
14472 | 130248 | 0.000 | 52.09
14472 | 15810 | 0.985 | 55.38
14472 | 4189 | 0.990 | 57.62
14472 | 956 | 0.995 | 57.65
14472 | 0 (Baseline) | – | 59.50

Results of HH-RLHF dataset with 10% baseline.
Originally Labelled (Train) | Weakly Labelled | Confidence Threshold | F1
7236 | 137484 | 0.000 | 51.61
7236 | 67176 | 0.900 | 53.03
7236 | 19150 | 0.980 | 54.71
7236 | 9556 | 0.990 | 54.56
7236 | 4287 | 0.992 | 55.33
7236 | 2273 | 0.995 | 55.49
7236 | 988 | 0.996 | 56.01
7236 | 0 (Baseline) | – | 56.14

Results of HH-RLHF dataset with 5% baseline.
Originally Labelled (Train) | Weakly Labelled | Confidence Threshold | F1
2895 | 141825 | 0.0000 | 52.15
2895 | 9839 | 0.9900 | 53.41
2895 | 7500 | ** | 53.81
2895 | 6000 | ** | 53.83
2895 | 4468 | 0.9920 | 54.13
2895 | 3000 | ** | 53.79
2895 | 2432 | 0.9946 | 54.69
2895 | 1800 | ** | 54.24
2895 | 1135 | 0.9950 | 53.84
2895 | 0 (Baseline) | – | 53.78

Results of HH-RLHF dataset with 2% baseline.
Originally Labelled (Train) | Weakly Labelled | Confidence Threshold | F1
1448 | 143272 | 0.0000 | 51.74
1448 | 15315 | 0.9900 | 52.97
1448 | 9919 | 0.9905 | 53.29
1448 | 4543 | 0.9920 | 53.74
1448 | 2464 | 0.9950 | 53.29
1448 | 1500 | ** | 53.04
1448 | 1051 | 0.9960 | 54.06
1448 | 871 | 0.9970 | 52.97
1448 | 500 | ** | 53.48
1448 | 0 (Baseline) | – | 53.14

Results of HH-RLHF dataset with 1% baseline.
Originally Labelled (Train) | Weakly Labelled | Confidence Threshold | F1
4838 | 43535 | 0.000 | 58.30
4838 | 12799 | 0.950 | 60.93
4838 | 5310 | 0.980 | 63.28
4838 | 3598 | 0.985 | 63.62
4838 | 1802 | 0.990 | 63.99
4838 | 1345 | 0.992 | 64.65
4838 | 926 | 0.993 | 64.39
4838 | 476 | 0.995 | 64.73
4838 | 276 | 0.996 | 64.54
4838 | 143 | 0.997 | 64.44
4838 | 0 (Baseline) | – | 64.30

Results of UB dataset with 10% baseline.
Originally Labelled (Train) | Weakly Labelled | Confidence Threshold | F1
2419 | 45954 | 0.0000 | 58.63
2419 | 21242 | 0.9000 | 59.28
2419 | 5518 | 0.9800 | 61.95
2419 | 3727 | 0.9850 | 62.26
2419 | 2850 | 0.9880 | 62.94
2419 | 1890 | 0.9900 | 63.42
2419 | 1594 | 0.9916 | 62.99
2419 | 1276 | 0.9920 | 61.61
2419 | 498 | 0.9950 | 62.95
2419 | 0 (Baseline) | – | 61.43

Results of UB dataset with 5% baseline.
Originally Labelled (Train) | Weakly Labelled | Confidence Threshold | F1
968 | 47405 | 0.0000 | 58.81
968 | 22544 | 0.9000 | 59.14
968 | 14823 | 0.9500 | 58.22
968 | 6280 | 0.9800 | 60.09
968 | 4330 | 0.9860 | 60.19
968 | 2798 | 0.9900 | 60.63
968 | 2317 | 0.9905 | 60.15
968 | 2106 | 0.9910 | 60.77
968 | 1959 | 0.9915 | 60.01
968 | 1716 | 0.9920 | 59.46
968 | 748 | 0.9950 | 59.37
968 | 529 | 0.9960 | 58.79
968 | 295 | 0.9970 | 58.30
968 | 138 | 0.9975 | 58.31
968 | 0 (Baseline) | – | 58.12

Results of UB dataset with 2% baseline.
Originally Labelled (Train) | Weakly Labelled | Confidence Threshold | F1
5726 | 51531 | 0.00000 | 65.61
5726 | 27470 | 0.95000 | 65.82
5726 | 13365 | 0.99000 | 67.78
5726 | 4767 | 0.99700 | 70.81
5726 | 3341 | 0.99800 | 71.01
5726 | 2670 | 0.99835 | 71.01
5726 | 2136 | 0.99850 | 72.38
5726 | 1117 | 0.99900 | 73.08
5726 | 692 | 0.99920 | 73.51
5726 | 428 | 0.99950 | 72.25
5726 | 0 (Baseline) | – | 71.60

Results of UBP dataset with 10% baseline.
Originally Labelled (Train) | Weakly Labelled | Confidence Threshold | F1
2863 | 54394 | 0.00000 | 65.28
2863 | 19642 | 0.98000 | 65.24
2863 | 8048 | 0.99500 | 67.25
2863 | 3769 | 0.99800 | 67.96
2863 | 3312 | 0.99840 | 69.61
2863 | 2863 | 0.99850 | 69.54
2863 | 1312 | 0.99900 | 69.80
2863 | 742 | 0.99920 | 70.55
2863 | 570 | 0.99950 | 70.62
2863 | 323 | 0.99960 | 70.99
2863 | 227 | 0.99965 | 70.31
2863 | 97 | 0.99970 | 69.49
2863 | 0 (Baseline) | – | 69.00

Results of UBP dataset with 5% baseline.
Originally Labelled (Train) | Weakly Labelled | Confidence Threshold | F1
1146 | 56111 | 0.00000 | 64.54
1146 | 20512 | 0.98000 | 64.01
1146 | 14595 | 0.99000 | 64.13
1146 | 8281 | 0.99500 | 64.86
1146 | 5129 | 0.99700 | 65.05
1146 | 2777 | 0.99850 | 66.51
1146 | 1142 | 0.99900 | 67.13
1146 | 772 | 0.99925 | 67.97
1146 | 594 | 0.99940 | 67.89
1146 | 453 | 0.99950 | 68.28
1146 | 338 | 0.99960 | 67.15
1146 | 237 | 0.99965 | 66.34
1146 | 102 | 0.99975 | 66.06
1146 | 0 (Baseline) | – | 65.11

Results of UBP dataset with 2% baseline.
Originally Labelled (Train) | Weakly Labelled | Confidence Threshold | F1
898 | 24160 | 0.0000 | 75.97
898 | 16622 | 0.9500 | 78.24
898 | 12710 | 0.9800 | 75.40
898 | 8826 | 0.9900 | 75.89
898 | 6049 | 0.9950 | 75.97
898 | 4372 | 0.9970 | 71.73
898 | 3103 | 0.9980 | 72.51
898 | 2735 | 0.9983 | 72.31
898 | 2416 | 0.9985 | 72.02
898 | 1902 | 0.9990 | 77.25
898 | 1382 | 0.9992 | 74.11
898 | 1152 | 0.9994 | 71.96
898 | 615 | 0.9995 | 72.30
898 | 0 (Baseline) | – | 71.23

Results of MT-BENCH dataset.
|
synthetic_cpt | 2 | Words_Matter_Leveraging_Individual_Text_Embeddings_for_Code_Generation_in_CLIP_Test-Time_Adaptation.pdf | MarkBERT: Marking Word Boundaries Improves Chinese BERT
Linyang Li2* ,Yong Dai1, Duyu Tang1† , Xipeng Qiu2, Zenglin Xu3, Shuming Shi1
1 Tencent AI Lab, China,2 Fudan University,3 PengCheng Laboratory
{yongdai,duyutang}@tencent.com,
{linyangli19, xpqiu}@fudan.edu.cn
arXiv:2203.06378v2 [cs.CL] 8 Oct 2022
Abstract
We present a Chinese BERT model dubbed MarkBERT that
uses word information in this work. Existing word-based
BERT models regard words as basic units, however, due to
the vocabulary limit of BERT, they only cover high-frequency
words and fall back to character level when encountering out-
of-vocabulary (OOV) words. Different from existing works,
MarkBERT keeps the vocabulary being Chinese characters
and inserts boundary markers between contiguous words.
Such design enables the model to handle any words in the
same way, no matter they are OOV words or not. Besides, our
model has two additional benefits: first, it is convenient to add
word-level learning objectives over markers, which is com-
plementary to traditional character and sentence-level pre-
training tasks; second, it can easily incorporate richer seman-
tics such as POS tags of words by replacing generic markers
with POS tag-specific markers. With the simple markers in-
sertion, MarkBERT can improve the performances of various
downstream tasks including language understanding and se-
quence labeling. 1
Introduction
Chinese words can be composed of multiple Chinese char-
acters. For instance, the word 地球 (earth) is made up of
two characters 地 (ground) and 球 (ball). However, there are
no delimiters (i.e., space) between words in written Chinese
sentences. Traditionally, word segmentation is an impor-
tant first step for Chinese natural language processing tasks
(Chang, Galley, and Manning 2008). Instead, with the rise
of pretrained models (Devlin et al. 2018), Chinese BERT
models are dominated by character-based ones (Cui et al.
2019a; Sun et al. 2019; Cui et al. 2020; Sun et al. 2021b,a),
where a sentence is represented as a sequence of characters.
There are several attempts at building Chinese BERT mod-
els where word information is considered. Existing studies
tokenize a word as a basic unit (Su 2020), as multiple char-
acters (Cui et al. 2019a) or a combination of both (Zhang
and Li 2020; Lai et al. 2021; Guo et al. 2021). However, due
* Work done during internship at Tencent AI Lab.
† Corresponding author.
Copyright © 2022, Association for the Advancement of Artificial
Intelligence (www.aaai.org). All rights reserved.
1All the codes and models will be made publicly available at
https://github.com/daiyongya/markbert
to the limit of the vocabulary size of BERT, these models
only learn for a limited number (e.g., 40K) of words with
high frequency. Rare words below the frequency threshold
will be tokenized as separate characters so that the word in-
formation is neglected.
In this work, we present a simple framework, MarkBERT,
that considers Chinese word information. Instead of regard-
ing words as basic units, we use character-level tokeniza-
tions and inject word information via inserting special mark-
ers between contiguous words. The occurrence of a marker
gives the model a hint that its previous character is the end
of a word and the following character is the beginning of an-
other word. Such a simple model design has the following
advantages. First, it avoids the problem of OOV words since
it deals with common words and rare words (even the words
never seen in the pretraining data) in the same way. Sec-
ond, the introduction of marker allows us to design word-
level pretraining tasks (such as replaced word detection il-
lustrated in section ), which are complementary to traditional
character-level pretraining tasks like masked language mod-
eling and sentence-level pretraining tasks like next sentence
prediction.
In the pretraining stage, we force the markers to under-
stand the contexts around them while serving as separators
between words. We train our model with two pretraining
tasks. The first task is masked language modeling and we
also mask markers such that word boundary knowledge can
be learned since the pre-trained model needs to recognize
the word boundaries within the context. The second task is
replaced word detection. We replace a word with artificially
generated words and ask the markers behind the word to pre-
dict whether the word is replaced. Such a process will force
the markers to serve as discriminators and therefore to learn
more word-boundary information within the context. With
these two pretraining tasks, we train the MarkBERT model
initialized from BERT-Chinese models and obtain consider-
able improvements.
We conduct extensive experiments on various down-
streams tasks including named entity recognition tasks
(NER) and natural language understanding tasks. On the
NER task, we demonstrate that MarkBERT can significantly
surpass baseline methods on both MSRA and OntoNotes
datasets (Huang, Xu, and Yu 2015; Zhang and Yang 2018).
Compared with other word-level Chinese BERT models, we
Figure 1: An illustrative example of our model. Box (a) gives the original input written in Chinese, its translation in English,
word segmentation results given by an off-the-shell text analyzer, and the POS tags of words. Box (b) shows a traditional
character-level Chinese BERT. Box (c) shows a word-level BERT using word-level vocabulary in the encoding process. In box
(d), we show the structure of MarkBERT which inserts markers [S] between words but the model remains a character-level
model.
conduct experiments and observe that MarkBERT performs
better on text classification, keyword recognition, and se-
mantic similarity tasks in the CLUE benchmark datasets.
We summarize the major contributions of this work as fol-
lows.
• We present a simple and effective Chinese pretrained
model MarkBERT that considers word information with-
out aggravating the problem of OOV words.
• We demonstrate that our model achieves considerable
performance improvements on Chinese NER and Chi-
nese NLU tasks with a simple yet effective mark inser-
tion strategy.
Related Work
We describe related work on injecting word information to
Chinese BERT and the use of marker in natural language
understanding tasks.
Chinese BERT
Pre-trained models exemplified by BERT (Devlin et al.
2018) and RoBERTa (Cui et al. 2019a) have been proved
successful in various Chinese NLP tasks (Xu et al. 2020; Cui
et al. 2019b). Existing Chinese BERT models that incorpo-
rate word information can be divided into two categories.
The first category uses word information in the pretraining
stage but represents a text as a sequence of characters when
the pretrained model is applied to downstream tasks. For ex-
ample, Cui et al. (2019a) use the whole-word-masking strat-
egy that masks word spans and predicts continuously mul-
tiple masked positions. Lai et al. (2021) incorporate lexi-
con information by concatenating the lexicons along with
character-level context. The second category uses word in-
formation when the pretrained model is used in downstream
tasks. For example, Su (2020) uses a word-level vocabu-
lary instead of characters. If a word 地球 is included in
the vocabulary, its constituents 地 and 球 will not be con-
sidered as input tokens. Zhang and Li (2020) go one step
further by constructing two independent encoders that en-
code character-level and word-level information separately
and concatenate them at the top layers of two encoders.
Similarly, Guo et al. (2021) encode both character-level and
word-level information. They move the information aggre-
gation stage to the embedding level.
Marker Insertion in NLU
The idea of inserting markers is explored in entity-related
natural language understanding tasks, especially in relation
classification. Given a subject entity and an object entity as
the input, existing work inject untyped markers (Sun et al.
2019; Soares et al. 2019) or entity-specific markers (Zhong
and Chen 2020) around the entities, and make better predic-
tions of the relations of the entities.
MarkBERT Pre-training
In this section, we first introduce the background of char-
acter level Chinese pre-trained models; then we introduce
the structure of our MarkBERT model. After describing the
structure of MarkBERT, we introduce the training process
of the MarkBERT. Finally, we provide details of the entire
training process.
Character Level Chinese BERT
In language model pre-training, BERT (Devlin et al. 2018)
first introduced the masked language modeling strategy
to learn the context information by replacing tokens with
masks and assign the model to predict the masked tokens
这些学生会游泳Standard BERT1234567Word: Position:这些学生会游泳These / students / can / swim.这些 / 学生 / 会 / 游泳DT / NN / VV / VVWord-Level BERT这些会学生游泳MarkBERT1234567891011Word: Position:(a)Input in Chinese:Translation in English: Word Segmentation: Pos Tagging:(b)(c)[s][s][s]这些Word: Position: 1 2 3 4学生游泳会[s]Figure 2: Illustration of the predicting tasks of Masked Language Modeling and Replaced Word Detection. Here, [S] is the
inserted markers.
based on the contexts around them using the self-attention
transformers structure (Vaswani et al. 2017). In Chinese lan-
guage model pre-training, the encoding unit is different from
the widely used BPE encoding in English: Chinese pre-
trained models are usually character-level and word level in-
formation is typically neglected.
MarkBERT Model
To make better use of word-level information in Chinese
pre-training, we introduce a simple framework called Mark-
BERT. We insert markers between word spans to give ex-
plicit boundary information for the model pre-training.
As seen in Figure 1, we first use a segmentation tool to
obtain word segmentations, then we insert special mark-
ers between word spans as separators between characters.
These markers are treated as normal characters so they take
positions in the transformers structure. Plus, they can also
be masked for the mask language modeling task to predict,
therefore the encoding process needs to be aware of predict-
ing word boundaries rather than simply filling in masks from
the context. The mask prediction task becomes more chal-
lenging since predicting the masks correctly requires a bet-
ter understanding of the word boundaries. In this way, the
model is still character-level encoded while it is aware of
word boundaries since word-level information is given ex-
plicitly.
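A minimal sketch of this insertion step is given below; the paper segments text with TexSmart, so the jieba call here is only a readily available stand-in segmenter, and [unused1] is the marker token from the BERT vocabulary mentioned later in the paper.

import jieba

MARKER = "[unused1]"  # marker token taken from the BERT vocabulary

def insert_markers(sentence, marker=MARKER):
    words = jieba.lcut(sentence)              # e.g. ["这些", "学生", "会", "游泳"]
    tokens = []
    for i, word in enumerate(words):
        tokens.extend(list(word))             # characters remain the basic encoding unit
        if i < len(words) - 1:
            tokens.append(marker)             # boundary marker between contiguous words
    return tokens

# insert_markers("这些学生会游泳")
# -> ['这', '些', '[unused1]', '学', '生', '[unused1]', '会', '[unused1]', '游', '泳']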
Replaced Word Detection
Inserting special markers allows the pre-trained model to
recognize word boundaries while maintaining a character-
level model. Further, these special markers can be used to
construct a word-level pre-training task which can be com-
plementary to the character-level masked language model-
ing task.
We construct a replaced word detection task as an aux-
iliary task to the masked language modeling task. We con-
struct a bipolar classification task that detects whether the
word span is replaced by a confusion word. Specifically,
given a word span, we take the representations of the marker
after it and make binary prediction.
When a word span is replaced by a confusion word, as
seen in Figure 2, the marker is supposed to make a ”re-
placed” prediction labeled as ”False”. When the word spans
are not changed, the marker will make an ”unchanged” pre-
diction labeled as ”True”. Therefore, suppose the represen-
tation of the ith marker is xi with label ytrue and yf alse, the
replaced word detection loss is:
L = −
(cid:88)
i
[y · log(xi)]
(1)
We add this loss term to the masked language modeling loss
as a multi task training process.
The construction of the confusions could be various. We
adopt two simple strategies: (1) we use synonyms as confu-
sions; (2) we use words that are similar in phonetics (pinyin)
这些学会生游泳MarkBERT-base[S][S][S][S]1235467891011Word:Position:这些学会生有用MarkBERT-base[S][S][S][S]1235467891011游泳有用wordpinyinyouyongwordLabel = TrueLabel = FalseReplaced Word Detection这些学会游泳MarkBERT-base1235467891011Word:Position:[S][S][S]生[S][MASK][MASK]Mask Language Modeling in Chinese. To obtain the synonyms, we use an external
word embedding provided by Zhang and Yang (2018). We
calculate the cosine similarity between words and use the
most similar ones as the synonyms confusions. To obtain
the phonetic-based confusions, as seen in Figure 2, we use
an external tool to get the phonetics of the word and select a
word that share the same phonetics as its confusions.
In this way, the markers can be more sensitive to the word
span in the context since these markers are assigned to dis-
criminate the representation type of the word spans before
them. This process is similar to an ELECTRA (Clark et al.
2020) framework. MarkBERT uses the inserted markers to
run the discrimination process inside the encoder and use ex-
ternal confusions instead of using another generator to build
texts for the discriminator.
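The marker-level discrimination described above can be sketched in PyTorch as follows; the tensor shapes, the classifier head and the omission of padding handling are illustrative assumptions rather than the released implementation.

import torch
import torch.nn.functional as F

def rwd_loss(hidden_states, marker_positions, marker_labels, classifier):
    """
    hidden_states:    (batch, seq_len, hidden) encoder output
    marker_positions: (batch, num_markers) indices of the inserted markers
    marker_labels:    (batch, num_markers) 1 = word before the marker unchanged, 0 = replaced
    classifier:       e.g. torch.nn.Linear(hidden, 1); padding markers omitted for brevity
    """
    batch_idx = torch.arange(hidden_states.size(0)).unsqueeze(-1)
    marker_states = hidden_states[batch_idx, marker_positions]   # gather marker vectors
    logits = classifier(marker_states).squeeze(-1)               # (batch, num_markers)
    return F.binary_cross_entropy_with_logits(logits, marker_labels.float())

# Total pre-training loss (sketch): loss = mlm_loss + rwd_loss(...)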
Pre-Training
The pre-training process is a multi task framework consist-
ing of mask language modeling task and replaced word de-
tection task.
In the masked language modeling task, we employ both
the masked language modeling strategy and the whole-
word-masking strategy. In the replaced word detection task,
as seen in Figure 2, when the word span is replaced by con-
fusion words, the model is supposed to correct the confu-
sions. This correction process is similar to MacBERT (Cui
et al. 2020). For the confusion generation, we use synonyms
and pinyin-based confusions. The synonyms are obtained by
a synonym dictionary based on calculating the cosine sim-
ilarity between the Chinese word-embeddings provided by
Zhang and Yang (2018).
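One possible way to build the two kinds of confusions is sketched below; pypinyin and the simple embedding matrix are stand-ins for the external phonetic tool and the word embeddings of Zhang and Yang (2018), so the snippet is illustrative rather than the exact construction used here.

import numpy as np
from pypinyin import lazy_pinyin

def synonym_confusion(word, vocab, vectors, topk=1):
    # vocab: list of words (word assumed to be in it); vectors: (V, d) row-normalised embeddings
    idx = vocab.index(word)
    sims = vectors @ vectors[idx]
    neighbours = np.argsort(-sims)
    best = [vocab[i] for i in neighbours if vocab[i] != word][:topk]
    return best[0] if best else None

def pinyin_confusion(word, vocab):
    # pick a different word with the same (tone-less) pinyin, e.g. 游泳 -> 有用 ("youyong")
    target = "".join(lazy_pinyin(word))
    candidates = [w for w in vocab if w != word and "".join(lazy_pinyin(w)) == target]
    return candidates[0] if candidates else None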
In our MarkBERT pre-training, the mask ratio is still 15%
of the total characters. For 30% of the time, we do not in-
sert any markers so that the model can also be used in a
no-marker setting which is the vanilla BERT-style model.
For 50% of the time we run a whole-word-mask predic-
tion and for the rest we run a traditional masked language
model prediction. In the marker insertion, for 30% of the
time, we replace the word span with a phonetic(pinyin)-
based confusion or a synonym-based confusion word and
the marker will predict a phonetic(pinyin)-confusion marker
or a synonym-confusion marker; for the rest of the time, the
marker will predict a normal-word marker.
Therefore, we only calculate 15% of the loss on these
normal markers to avoid imbalance labels of the marker
learning process. During fine-tuning on downstream tasks,
we use the markers in the input texts. Also, we can save the
markers and downgrade the model to a vanilla BERT-style
model for easier usage.
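The per-sentence policy described above can be summarized in the following sketch. Whether the 30% confusion replacement is applied per word or per sequence is not fully specified in the text, so the per-word choice here is an assumption, and segment and make_confusion are placeholder helpers.

import random

def build_pretraining_example(sentence, segment, make_confusion):
    words = segment(sentence)
    use_markers = random.random() >= 0.30       # 30% of the time: no markers (vanilla BERT-style input)
    whole_word_mask = random.random() < 0.50    # 50%: whole-word masking, otherwise character-level MLM
    tokens, marker_labels = [], []
    for i, word in enumerate(words):
        label = "normal"
        if use_markers and random.random() < 0.30:
            # replaced by a pinyin- or synonym-based confusion word
            word, label = make_confusion(word), "confusion"
        tokens.extend(list(word))
        if use_markers and i < len(words) - 1:
            tokens.append("[unused1]")
            marker_labels.append(label)
    # downstream, 15% of characters are masked, and only 15% of the "normal"
    # marker predictions contribute to the loss to keep the marker labels balanced
    return tokens, marker_labels, whole_word_mask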
Implementation Details in Pre-training
Pre-training Dataset Usage We use a collection of raw
Chinese texts containing Chinese wikipedia, Chinese nov-
els, news. The entire data size is around 80B characters. We
use a simple word segmentation tool Texsmart (Zhang et al.
2020) to tokenize the raw data and obtain pos-tags. We use
the same data preprocess framework used in BERT (Devlin
et al. 2018) which constructs documents containing multiple
sentences with the length of the maximum token limit and
randomly pick another document to train the next sentence
prediction task.
Pre-Training Settings We initialize our model from the
Roberta whole-word-mask model checkpoint provided by
Cui et al. (2019a). Therefore, we use the same character-
level vocabulary in training our boundary-aware model.
We use both whole-word-mask and normal character mask
strategies in the language model training since we aim to
learn inner connections between characters in the given
word which cannot be achieved by whole-word-masking
alone.
We train the model with a maximum sequence length of
512 for the entire training time. With the markers inserted,
the actual maximum sequence length is smaller but we main-
tain the length as 512 to keep coordinated with previous pre-
trained models. We use the ADAM optimizer (Kingma and
Ba 2014) used in BERT with a batch size 8,192 on 64x Tesla
V100 GPUs. We set the learning rate to 1e-4 with a linear
warmup scheduler. We run the warmup process for 10k steps
and train 100k steps in total.
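For reference, these settings can be collected into a small configuration sketch; the checkpoint identifier is an assumption (any RoBERTa-wwm-ext checkpoint of Cui et al. (2019a) would serve), and values not stated in the text are omitted rather than guessed.

PRETRAIN_CONFIG = {
    "init_checkpoint": "hfl/chinese-roberta-wwm-ext",  # assumed identifier for the RoBERTa-wwm checkpoint
    "max_seq_length": 512,
    "optimizer": "Adam",
    "batch_size": 8192,
    "learning_rate": 1e-4,
    "lr_schedule": "linear warmup",
    "warmup_steps": 10_000,
    "total_steps": 100_000,
    "hardware": "64x Tesla V100",
}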
Experiments
NER Task
In the NER task, we use the MSRA (Levow 2006) and
Ontonotes (Weischedel et al. 2013) datasets with the same
data-split as in Ma et al. (2019) and Li et al. (2020).
We establish several strong baselines to explore the effec-
tiveness of our MarkBERT. In language understanding tasks,
we compare with the RoBERTa-wwm-ext (Cui et al. 2019a)
baseline, which is a whole-word-mask trained Chinese pre-
trained models. We also further pre-train the RoBERTa
model denoted as RoBERTa (ours) and the WoBERT model
denoted as WoBERT (ours) based on our collected data
which is the same data used in pre-training MarkBERT to
make fair comparisons with our model. In the NER task, we
compare with FLAT-BERT (Li et al. 2020) and Soft-Lexicon
(Ma et al. 2019) which are state-of-the-art models on the
NER task which incorporate lexicons in the transformers/L-
STM structure.
Language Understanding Task
We also conduct experiments on language understanding
tasks. We use various types of tasks from the CLUE bench-
mark (Xu et al. 2020). We use classification tasks such as
TNEWS, IFLYTEK; semantic similarity task (AFQMC);
coreference resolution task(WSC); keyword recognition
(CSL); natural language inference task (OCNLI).
Besides the BERT-style baselines used in the NER task,
we also use the word-level information enhanced models as
baselines to make comparisons in the language understand-
ing tasks. We use:
- WoBERT (Su 2020): a word-level Chinese pre-trained
model initialized from the BERT BASE pre-trained weights.
It has a 60k expanded vocabulary containing commonly
used Chinese words.
- AMBERT (Zhang and Li 2020): a multi-granularity
Chinese pre-trained model with two separated encoders for
Model | MSRA(Test) Acc. / Recall / F1 | OntoNotes(Dev) Acc. / Recall / F1 | OntoNotes(Test) Acc. / Recall / F1
BERT (Devlin et al. 2018) | 94.9 / 94.1 / 94.5 | 74.8 / 81.8 / 78.2 | 78.0 / 75.7 / 80.3
RoBERTa (Cui et al. 2019a) | 95.3 / 94.9 / 95.1 | 76.8 / 80.7 / 78.7 | 77.6 / 83.5 / 80.5
FLAT-BERT (Li et al. 2020) | - / - / 96.1 | - / - / - | - / - / 81.8
Soft-Lexicon (Ma et al. 2019) | 95.8 / 95.1 / 95.4 | - / - / - | 83.4 / 82.2 / 82.8
RoBERTa (ours) | 95.7 / 94.8 / 95.2 | 80.3 / 76.4 / 78.3 | 78.8 / 83.4 / 81.1
MarkBERT (ours) | 96.1 / 96.0 / 96.1 | 81.2 / 81.4 / 81.3 | 81.7 / 83.7 / 82.7

Table 1: NER results on the MSRA and OntoNotes dataset.
words and characters. The encoding representation is the
character-level representation concatenated by the word-
level representation;
- LICHEE (Guo et al. 2021): a multi-granularity Chinese
pre-trained model that incorporates word and character rep-
resentations at the embedding level.
- Lattice-BERT (Lai et al. 2021): the state-of-the-art
multi-granularity model that uses lexicons as word-level
knowledge concatenated to the original input context.
Downstream Task Implementations
We use the FastNLP toolkit 2 to implement the NER exper-
iment; We use the Huggingface Transformers (Wolf et al.
2020) to implement all experiments.
For the NER task, we follow the implementation details
given in the Transformers toolkit. 3 For the language under-
standing tasks, we follow the implementation details used
in the CLUE benchmark official website and the fine-tuning
hyper-parameters used in Lattice-BERT (Lai et al. 2021).
In the NER task, we use the marker-inserted inputs in the
MarkBERT since we intend to incorporate the word bound-
ary information in recognizing entities. We use the model
with the best development performance to obtain the test set
result. We make a thorough discussion on this topic in the
later section. In the NER evaluation process, we label the
inserted marker with the same label as its former token and
follow the standard BMESO evaluation process used in Ma
et al. (2019); Li et al. (2020).
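The label propagation used for the inserted markers can be sketched as follows; the example at the end is hypothetical.

def propagate_labels_to_markers(tokens, labels, marker="[unused1]"):
    """tokens: characters already containing markers; labels: BMESO labels for the
    original characters only, in order. Each marker copies the label of the token before it."""
    out_labels, char_ptr = [], 0
    for tok in tokens:
        if tok == marker:
            out_labels.append(out_labels[-1] if out_labels else "O")
        else:
            out_labels.append(labels[char_ptr])
            char_ptr += 1
    return out_labels

# Hypothetical example: tokens = ['上', '海', '[unused1]', '很', '大'],
# labels = ['B-LOC', 'E-LOC', 'O', 'O'] -> ['B-LOC', 'E-LOC', 'E-LOC', 'O', 'O']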
In the NLU tasks, we use the CLUE benchmark datasets
to test our model. For the TNEWS task, we run the raw clas-
sification results without using the keywords augmentation
which is no longer a natural context. For the IFLYTEK task,
we split the context and use the average of the split texts
prediction since the average sequence exceeds the max se-
quence length. We leave the experiment results ’-’ if they
are not listed in the official website. 4
Results on NER Task
In Table 1, our proposed boundary-aware MarkBERT out-
performs all baseline models including pre-trained models
and lexicon-enhanced models.
2https://github.com/fastnlp/fastNLP
3https://github.com/huggingface/transformers
4https://github.com/CLUEbenchmark/CLUE
Compared with the baseline methods, our proposed Mark-
BERT with markers inserted between words can lift perfor-
mances by a large margin. We can observe that compared
with the baseline method RoBERTa(ours) which uses word-
level information by pretraining with the whole-word mask
strategy, MarkBERT can significantly improve the perfor-
mances in all datasets. When we insert markers using the
same tokenization process used in pre-training MarkBERT
in fine-tuning the MarkBERT in the NER task, we obtain
a considerable performance improvement, indicating that
the inserted markers catch some important fine-grained in-
formation that helps improve entity understanding. Further,
when compared with previous state-of-the-art methods such
as Soft-Lexicon (Ma et al. 2019) and FLAT (Li et al. 2020)
which use a combination of lexicon-enhanced LSTMs/trans-
formers and BERT, our model can also achieve similar per-
formance while we do not incorporate any lexicon informa-
tion which is essential in Chinese language.
Therefore, we can conclude that MarkBERT can improve
the NER task with a simple marker insertion strategy with-
out complex lexicons therefore can be widely used in se-
quence labeling tasks.
Results on Language Understanding
Table 2 shows that comparing with the RoBERTa model that
uses the same pre-training data, MarkBERT is superior in
all tasks. This indicates that the learned representations con-
tain more useful information for the downstream task fine-
tuning. The word-level model WoBERT (ours) trained with
the same data used in MarkBERT only achieves a slightly
higher accuracy in the IFLYTEK dataset which might be-
cause the IFLYTEK dataset contains very long texts where
word-level model is superior since it can process more con-
texts while the total sequence lengths of character level and
word level model are both 512.
When comparing with previous works that focus on word-
level information, MarkBERT achieves higher performances
than the multi-grained encoding method AMBERT as well
as LICHEE which incorporates word information as an ad-
ditional embedding. We can assume that adding word-level
information through horizontal markers is more effective
than vertically concatenating word-level information. When
comparing with the LatticeBERT model, our method can
still reach a competitive level of performance, meanwhile
the relative improvements of our model is larger than the
improvements of the LatticeBERT model. Please note that
TNEWS
IFLYTEK
AFQMC
OCNLI WSC
CSL
Datasets
DEVELOPMENT
BERT (Devlin et al. 2018)
RoBERTa (Cui et al. 2019a)
RoBERTa (ours)
WoBERT (ours)
MarkBERT (ours)
TEST
BERT (Devlin et al. 2018)
RoBERTa (Cui et al. 2019a)
AMBERT (Zhang and Li 2020)
LICHEE (Guo et al. 2021)
BERT (Lai et al. 2021)
Lattice-BERT (Lai et al. 2021)
RoBERTa (ours)
MarkBERT (ours)
56.09
57.51
57.95
57.01
58.40
56.58
56.94
-
-
-
-
57.42
58.05
60.37
60.80
60.85
61.10
60.68
60.29
60.31
59.73
60.94
62.20
62.90
61.00
62.57
74.10
73.80
74.58
72.80
74.89
73.70
74.04
73.86
73.65
74.00
74.80
73.63
74.87
74.70
75.01
75.32
75.00
75.88
-
-
-
-
-
-
72.67
73.06
79.22
82.20
84.02
82.72
84.60
62.00
67.80
78.27
81.03
79.30
82.40
79.86
81.72
81.02
81.22
81.85
-
-
80.36
81.00
85.70
84.51
81.60
84.00
81.83
85.73
Table 2: Evaluation results on the language understanding tasks.
Datasets | MSRA F1 | Ontonotes F1 | TNEWS Acc. | IFLYTEK Acc. | AFQMC Acc.
DEVELOPMENT
MarkBERT | 96.1 | 82.7 | 58.4 | 60.6 | 74.8
MarkBERT-rwd-pho | 95.8 | 81.7 | 58.0 | 60.8 | 74.3
MarkBERT-rwd-syn | 95.8 | 81.7 | 58.0 | 60.9 | 74.5
MarkBERT-MLM | 95.8 | 81.3 | 58.0 | 60.7 | 74.6
MarkBERT-w/o marker | 95.5 | 79.2 | 58.2 | 61.0 | 74.5
RoBERTa (ours) | 95.1 | 78.2 | 57.9 | 60.8 | 74.5

Table 3: Ablation Studies on the NER and the language understanding tasks using dev set results.
the lexicons used in LatticeBERT training actually contains
more segmentation possibilities which can significantly in-
crease the downstream task performance over the word seg-
mentation based methods (Zhang and Yang 2018). The ba-
sic idea of incorporating lexicons is parallel with the marker
insertion framework. MarkBERT makes use of word-level
information in a different perspective.
Model Analysis
In this section, we conduct ablation experiments to explore
the effectiveness of each parts in our MarkBERT framework
in different tasks.
We test different variants of MarkBERT:
- MarkBERT-MLM only considers the MLM task with-
out the replaced word detection task; the masked language
model will predict masked tokens as well as inserted mark-
ers.
- MarkBERT-rwd is a version that removes phonetics
words or synonyms separately in the replaced word detec-
tion process.
- MarkBERT-w/o marker is a version that removed mark-
ers which is the same as the vanilla BERT model.
MarkBERT-MLM without RWD To explore which
parts in MarkBERT is more effective, we conduct an exper-
iment as seen in Table 3. We only use the masked language
modeling task while inserting markers without using the re-
placed word detection task. The model only considers in-
serted markers and masked language modeling tasks, while
the markers will be masked and predicted as well.
As seen, the MarkBERT -MLM model gains significant
boost in the NER task, indicating that word boundary infor-
mation is important in the fine-grained task.
In the CLUE benchmark, the situation becomes different:
in the IFLYTEK task, inserting markers will hurt the model
performance which is because the sequence length exceeds
the maximum length of the pre-trained model. Therefore,
inserting markers will results in a lost of contexts. Gener-
ally, inserting markers is important in downstream task fine-
tuning. The explicit word boundary information helps Mark-
BERT learn better contextualized representations.
Replaced Word Detection We also test the effectiveness
of the additional replaced word detection task. Specifically,
we separate two confusion strategies and use phonetics and
synonyms confusions solely.
Figure 3: Visualization of attentions of the markers selected
from a random layer. We use [unused1] in the BERT vo-
cabulary as the inserted marker.
As seen in Table 3, when the marker learning only in-
cludes phonetic (pinyin) confusions, the performances in
the fine-tuning tasks are similar with the MarkBERT -MLM
model, indicating that the phonetic confusions have a slight
improvement based on the inserted markers. When the word
spans are replaced by synonyms only, the performances are
slightly lower than using both phonetic and synonym con-
fusions, indicating that augmentation using various types of
confusions is helpful.
MarkBERT -w/o marker
Inserting markers is the key
idea of solving the character and word dilemma in Chinese
encoding. In the NER task, inserting markers is important,
indicating that MarkBERT structure is effective in learning
word boundaries for tasks that requires such fine-grained
representations. In the NLU tasks, without inserting mark-
ers, MarkBERT-w/o marker can still achieve similar perfor-
mances with the baseline methods, indicating that Mark-
Figure 4: Results on different MarkBERT versions.
BERT can also be used as a vanilla BERT model for easy
usage in language understanding tasks.
Visualization of Marker Attentions To further explore
how the markers work in the encoding process, we use the
attention visualization tool to show the attention weights of
the inserted markers. We explore the attention weights on the
pre-trained MarkBERT and the fine-tuned model based on
the Ontonotes NER task. As seen in Figure 3, in some heads
of the representations of the inserted markers, the attentions
focus on the local semantics (e.g. in Fig. 3 (a), the marker is
attended to ’二’ (second) and ’月’(month) in the head col-
ored with purple and orange, indicating that the marker learn
the context of the word ’二月’ (Feburary). Further, the spe-
cial tokens are the mostly focused as seen in Fig. 3 (d).
Influence of Different Segmentation Tools in MarkBERT
The quality of the pre-processed segmentation results may
play a vital role, therefore, we use a different version of
segmentation in the Texsmart toolkit (Zhang et al. 2020)
where the segmentations are more fine-grained to train a
MarkBERT-seg-v2 model as a comparison.
As seen in figure 4, segmentation quality is trivial to
MarkBERT. The performances of MarkBERT (seg-v1) is
similar to a variant MarkBERT-seg-v2 using a different seg-
mentation tool, which indicates that the training framework
helps rather than the information from an external segmen-
tation tool.
Combined with results in Table 3, we can conclude that
introducing segmentation tools and using marker-style encoding
is important, while the quality of the segmentation matters little.
Conclusion and Future Work
In this paper, we have introduced MarkBERT, a simple
framework for Chinese language model pre-training. We in-
sert special markers between word spans in the character-
level encodings in pre-training and fine-tuning to make use
of word-level information in Chinese. We test our proposed
model on the NER tasks as well as natural language under-
standing tasks. Experiments show that MarkBERT makes
significant improvements over baseline models. In the fu-
ture, we are hoping to incorporate more information to the
markers based on the simple structure of MarkBERT.
[Figure 4 data: the two variants obtain 96.0 vs. 96.1 on MSRA, 82.6 vs. 82.8 on Ontonotes, 58.3 vs. 58.4 on TNews, 60.9 vs. 60.6 on IFLYTEK, and 74.7 vs. 74.6 on AFQMC.]
References
Chang, P.-C.; Galley, M.; and Manning, C. D. 2008. Opti-
mizing Chinese word segmentation for machine translation
performance. In Proceedings of the third workshop on sta-
tistical machine translation, 224–232.
Clark, K.; Luong, M.-T.; Le, Q. V.; and Manning, C. D.
2020. Electra: Pre-training text encoders as discriminators
rather than generators. arXiv preprint arXiv:2003.10555.
Cui, Y.; Che, W.; Liu, T.; Qin, B.; Wang, S.; and Hu, G.
2020. Revisiting Pre-Trained Models for Chinese Natural
Language Processing. In Proceedings of the 2020 Confer-
ence on Empirical Methods in Natural Language Process-
ing: Findings, 657–668. Online: Association for Computa-
tional Linguistics.
Cui, Y.; Che, W.; Liu, T.; Qin, B.; Yang, Z.; Wang, S.; and
Hu, G. 2019a. Pre-training with whole word masking for
chinese bert. arXiv preprint arXiv:1906.08101.
Cui, Y.; Liu, T.; Che, W.; Xiao, L.; Chen, Z.; Ma, W.; Wang,
S.; and Hu, G. 2019b. A Span-Extraction Dataset for Chi-
nese Machine Reading Comprehension. In Proceedings of
the 2019 Conference on Empirical Methods in Natural Lan-
guage Processing and the 9th International Joint Confer-
ence on Natural Language Processing (EMNLP-IJCNLP),
5886–5891. Hong Kong, China: Association for Computa-
tional Linguistics.
Devlin, J.; Chang, M.; Lee, K.; and Toutanova, K. 2018.
BERT: Pre-training of Deep Bidirectional Transformers for
Language Understanding. CoRR, abs/1810.04805.
Guo, W.; Zhao, M.; Zhang, L.; Niu, D.; Luo, J.; Liu, Z.; Li,
Z.; and Tang, J. 2021. LICHEE: Improving Language Model
Pre-training with Multi-grained Tokenization. In FIND-
INGS.
Huang, Z.; Xu, W.; and Yu, K. 2015.
Bidirectional
LSTM-CRF models for sequence tagging. arXiv preprint
arXiv:1508.01991.
Kingma, D. P.; and Ba, J. 2014. Adam: A method for
stochastic optimization. arXiv preprint arXiv:1412.6980.
Lai, Y.; Liu, Y.; Feng, Y.; Huang, S.; and Zhao, D. 2021.
Lattice-BERT: Leveraging Multi-Granularity Representa-
tions in Chinese Pre-trained Language Models. arXiv
preprint arXiv:2104.07204.
Levow, G.-A. 2006. The Third International Chinese Lan-
guage Processing Bakeoff: Word Segmentation and Named
In Proceedings of the Fifth SIGHAN
Entity Recognition.
Workshop on Chinese Language Processing, 108–117. Syd-
ney, Australia: Association for Computational Linguistics.
Li, X.; Yan, H.; Qiu, X.; and Huang, X. 2020. FLAT:
Chinese NER using flat-lattice transformer. arXiv preprint
arXiv:2004.11795.
Ma, R.; Peng, M.; Zhang, Q.; and Huang, X. 2019. Sim-
plify the usage of lexicon in Chinese NER. arXiv preprint
arXiv:1908.05969.
Soares, L. B.; FitzGerald, N.; Ling, J.; and Kwiatkowski,
T. 2019. Matching the blanks: Distributional similarity for
relation learning. arXiv preprint arXiv:1906.03158.
Su, J. 2020. WoBERT: Word-based Chinese BERT model -
ZhuiyiAI. Technical report.
Sun, Y.; Wang, S.; Feng, S.; Ding, S.; Pang, C.; Shang,
J.; Liu, J.; Chen, X.; Zhao, Y.; Lu, Y.; Liu, W.; Wu, Z.;
Gong, W.; Liang, J.; Shang, Z.; Sun, P.; Liu, W.; Ouyang,
X.; Yu, D.; Tian, H.; Wu, H.; and Wang, H. 2021a. ERNIE
3.0: Large-scale Knowledge Enhanced Pre-training for Lan-
guage Understanding and Generation. arXiv:2107.02137.
Sun, Y.; Wang, S.; Li, Y.; Feng, S.; Chen, X.; Zhang, H.;
Tian, X.; Zhu, D.; Tian, H.; and Wu, H. 2019. Ernie: En-
hanced representation through knowledge integration. arXiv
preprint arXiv:1904.09223.
Sun, Z.; Li, X.; Sun, X.; Meng, Y.; Ao, X.; He, Q.; Wu, F.;
and Li, J. 2021b. ChineseBERT: Chinese Pretraining En-
hanced by Glyph and Pinyin Information. arXiv preprint
arXiv:2106.16038.
Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones,
L.; Gomez, A. N.; Kaiser, Ł.; and Polosukhin, I. 2017. At-
tention is all you need. In Advances in neural information
processing systems, 5998–6008.
Weischedel, R.; Palmer, M.; Marcus, M.; Hovy, E.; Pradhan,
S.; Ramshaw, L.; Xue, N.; Taylor, A.; Kaufman, J.; Fran-
chini, M.; et al. 2013. Ontonotes release 5.0 ldc2013t19.
Linguistic Data Consortium, Philadelphia, PA, 23.
Wolf, T.; Debut, L.; Sanh, V.; Chaumond, J.; Delangue, C.;
Moi, A.; Cistac, P.; Rault, T.; Louf, R.; Funtowicz, M.; Davi-
son, J.; Shleifer, S.; von Platen, P.; Ma, C.; Jernite, Y.; Plu, J.;
Xu, C.; Scao, T. L.; Gugger, S.; Drame, M.; Lhoest, Q.; and
Rush, A. M. 2020. Transformers: State-of-the-Art Natural
Language Processing. In Proceedings of the 2020 Confer-
ence on Empirical Methods in Natural Language Process-
ing: System Demonstrations, 38–45. Online: Association for
Computational Linguistics.
Xu, L.; Hu, H.; Zhang, X.; Li, L.; Cao, C.; Li, Y.; Xu, Y.;
Sun, K.; Yu, D.; Yu, C.; et al. 2020. Clue: A chinese lan-
guage understanding evaluation benchmark. arXiv preprint
arXiv:2004.05986.
Zhang, H.; Liu, L.; Jiang, H.; Li, Y.; Zhao, E.; Xu, K.;
Song, L.; Zheng, S.; Zhou, B.; Zhu, J.; Feng, X.; Chen, T.;
Yang, T.; Yu, D.; Zhang, F.; Kang, Z.; and Shi, S. 2020.
TexSmart: A Text Understanding System for Fine-Grained
NER and Enhanced Semantic Analysis. arXiv preprint
arXiv:2012.15639.
Zhang, X.; and Li, H. 2020. AMBERT: A Pre-trained
Language Model with Multi-Grained Tokenization. arXiv
preprint arXiv:2008.11869.
Zhang, Y.; and Yang, J. 2018. Chinese NER using lattice
LSTM. arXiv preprint arXiv:1805.02023.
Zhong, Z.; and Chen, D. 2020. A Frustratingly Easy Ap-
proach for Entity and Relation Extraction. arXiv preprint
arXiv:2010.12812.
|
synthetic_cpt | 2 | Style-Content_Disentanglement_in_Language-Image_Pretraining_Representations_for_Zero-Shot_Sketch-to-Image_Synthesis.pdf | A Unified Framework for Generalizable Style
Transfer: Style and Content Separation
Yexun Zhang, Student Member, IEEE, Ya Zhang, Member, IEEE, and Wenbin Cai, Member, IEEE
1
arXiv:1806.05173v1 [cs.CV] 13 Jun 2018
Abstract—Image style transfer has drawn broad attention in
recent years. However, most existing methods aim to explicitly
model
the transformation between different styles, and the
learned model is thus not generalizable to new styles. We here
propose a unified style transfer framework for both character
typeface transfer and neural style transfer tasks leveraging
style and content separation. A key merit of such framework
is its generalizability to new styles and contents. The overall
framework consists of style encoder, content encoder, mixer and
decoder. The style encoder and content encoder are used to
extract the style and content representations from the corre-
sponding reference images. The mixer integrates the above two
representations and feeds it into the decoder to generate images
with the target style and content. During training, the encoder
networks learn to extract styles and contents from limited size of
style/content reference images. This learning framework allows
simultaneous style transfer among multiple styles and can be
deemed as a special ‘multi-task’ learning scenario. The encoders
are expected to capture the underlying features for different styles
and contents which is generalizable to new styles and contents.
Under this framework, we design two individual networks for
character typeface transfer and neural style transfer, respectively.
For character typeface transfer, to separate the style features
and content features, we leverage the conditional dependence of
styles and contents given an image. For neural style transfer, we
leverage the statistical information of feature maps in certain
layers to represent style. Extensive experimental results have
demonstrated the effectiveness and robustness of the proposed
methods.
Index Terms—Style and Content Separation, Character Type-
face Transfer, Neural Style Transfer
I. INTRODUCTION
I N recent years, style transfer, as an interesting application
of deep neural networks (DNNs), has attracted increasing
attention among the research community. Based on the type
of styles, style transfer may be partitioned into two types
of applications, character typeface transfer which transfers a
character from a font to another, and neural style transfer
which aims to transform a neural image into a given art style.
Character typeface transfer usually involves changes in high-
frequency features such as the object shape and outline, which
makes character typeface transfer a more difficult task than
neural style transfer. Moreover, the characters are associated
with clear semantic meaning and incorrect transformation may
lead to non-sense characters. Different from character typeface
transfer, neural style transfer is mostly about the transfer of
Yexun Zhang and Ya Zhang are with the Cooperative Medianet Innovation
Center, Shanghai Jiao Tong University, Shanghai, China, 200240.
E-mail: [email protected], ya [email protected]
Wenbin Cai is with Microsoft, Beijing, China, 10010.
E-mail: [email protected]
texture, where the source and target images usually share high-
frequency features such as object shape and outline, namely
the contents are kept visually unchanged.
Earliest studies about character typeface transfer are usually
based on manually extracted features such as radicals and
strokes [18], [36], [38], [40]. Recently, some studies try to
automatically learn the transformation based on DNNs, and
model character typeface transfer as an image-to-image trans-
lation problem. Typically, dedicated models are built for each
source and target style pair [1], [23], making the models hardly
generalizable to new styles, i.e., additional models have to be
trained for new styles. To achieve typeface transfer without
retraining, a multi-content generative adversarial networks
(GAN) which transfers the font of English characters given
a few characters in target styles is proposed [4].
Earliest studies for neural style transfer usually adopt an
iterative optimization mechanism to generate images with
target style and content from noise images [11]. Due to its time
inefficiency, a feed-forward generator network is proposed
for this purpose [15], [31]. A set of losses are proposed for
the transfer network, such as pixel-wise loss [13], perceptual
loss [15], [37], and histogram loss [34]. Recently, variations of
GANs [21], [41] are introduced by adding a discriminator to
the transfer network which incorporates adversarial loss with
transfer loss to generate better images. However, these studies
aim to explicitly learn the transformation from a content image
to the image with a specific style, and the learned model is
thus not generalizable to new styles. So far, there is still limited
work for arbitrary neural style transfer [8], [12], [16].
In this paper, based on our previous work [39], we propose
a unified style transfer framework for both character typeface
transfer and neural style transfer, which enables the transfer
models generalizable well to new styles or contents. Different
from existing style transfer methods, where an individual
transfer network is built for each pair of style transfer, the
proposed framework represents each style or content with a
small set of reference images and attempts to learn separate
representations for styles and contents. Then,
to generate
an image of a given style-content combination is simply
to mix the corresponding two representations. This learning
framework allows simultaneous style transfer among multiple
styles and can be deemed as a special ‘multi-task’ learning
scenario. Through separated style and content representations,
the framework is able to generate images of all style-content
combination given the corresponding reference sets, and is
therefore expected to generalize well
to new styles and
contents. To our best knowledge, the study most resembles
to ours is the bilinear model proposed by Tenenbaum and
2
TABLE I
COMPARISON OF EMD WITH EXISTING METHODS.
Methods (data format): Pix2pix [13] (paired), CoGAN [21] (unpaired), CycleGAN [41] (unpaired), Rewrite [1] (paired), Zi-to-zi [2] (paired), AEGN [23] (paired), Perceptual [15] (unpaired), TextureNet [32] (unpaired), StyleBank [7] (unpaired), Patch-based [8] (unpaired), AdaIn [12] (unpaired), Universal [16] (unpaired), EMD (triplet/unpaired).
Generalizable to new styles?
Requirements for new style
What the model learned?
The learned model can
only transfer images to
styles which appeared in
the training set. For new
styles, the model has to be
retrained.
The learned model can be
generalized to new styles.
Retrain on a lot of
training images for a
source style and a target
style.
Retrain on many input
content images and one
style image.
One or a small set of
style/content reference
images.
The translation from a certain
source style to a specific target
style.
Transformation among specific
styles.
The swap of style/content feature maps.
The transferring of feature statistics.
It is based on whitening and coloring transformations.
The feature representation of style/content.
to the difficulty of obtaining images of the same content or
style, only one style and content reference image is used
as input (namely r=1). Extensive experimental results have
demonstrated the effectiveness and robustness of our method
for style transfer.
The main contributions of our study are summarized as
follows.
• We propose a unified style transfer framework for both
character typeface transfer and neural style transfer,
which learns separate style and content representations.
• The framework enables the transfer models to generalize
to any unseen style/content given a few reference images.
• Under this framework, we design two individual networks
for character typeface transfer and neural style transfer,
respectively, which have shown promising results in ex-
perimental validation.
• This learning framework allows simultaneous style trans-
fer among multiple styles and can be deemed as a special
‘multi-task’ learning scenario.
II. RELATED WORK
Neural Style Transfer. DeepDream [25] may be considered
as the first attempt to generate artistic work using Convolutional
Neural Networks (CNNs). Gatys et al. later successfully
applied CNNs to neural style transfer [11]. The target im-
ages are generated by iteratively optimizing a noise image
through a pre-trained network, which is time-consuming. To
directly learn a feed-forward generator network for neural style
transfer, the perceptual loss is proposed [15]. Ulyanov et al.
proposed a texture network for both texture synthesis and
style transfer [31]. Further, Chen et al. proposed the stylebank
to represent each style by a convolution filter, which can
simultaneously learn numerous styles [7]. For arbitrary neural
style transfer, [8] proposed a patch-based method to replace
each content feature patch with the nearest style feature.
Further, [12] proposed a faster method based on adaptive
instance normalization which performed style transfer in the
feature space by transferring feature statistics. Li et al. [16]
proposed a universal style transfer model which is based on
the whitening and coloring transforms but this model is not
effective at producing sharp details and fine strokes.
Image-to-Image Translation. Image-to-image translation is
to learn the mapping from the input image to output image,
Fig. 1. The framework of the proposed EMD model.
Freeman [30], which obtained independent style and content
representations through matrix decomposition. However, to
obtain accurate decomposition of new styles and contents, the
bilinear model requires an exhaustive enumeration of examples
which may not be readily available for some styles/contents.
As shown in Figure 1, the proposed style transfer frame-
work, denoted as EMD hereafter, consists of a style encoder,
a content encoder, a mixer, and a decoder. Given one or a
set of reference images, the style encoder and content encoder
are used to extract the style and content factors from the style
reference images and content reference images, respectively.
The mixer then combines the corresponding style and con-
tent representations. Finally, the decoder generates the target
images based on the combined representations. Under this
framework, we design two individual networks for character
typeface transfer and neural style transfer, respectively. For
character typeface transfer, to separate the style features and
content features, we leverage the conditional dependence of
styles and contents given an image and employ a bilinear
model to mix the two factors. For neural style transfer, we
leverage the prior knowledge that the statistical information of
feature maps in certain layers can represent style information
and mix the two factors through statistic matching.
During training, each training example for the proposed
network is provided as a style-content pair <RSi, RCj >,
where RSi and RCj are the style and content reference sets
respectively, each consisting of r images of the corresponding
style Si and content Cj. For character typeface transfer, the
entire network is trained end-to-end with a weighted L1 loss
measuring the difference between the generated images and
the target images. For neural style transfer, due to the absence
of target images for supervision, we calculate the content loss
and style loss respectively by comparing the feature maps of
generated images with those of style/content reference image.
Therefore, neural style transfer is unsupervised. Moreover, due
Fig. 2. The detailed architecture of the proposed generalized EMD model for character typeface transfer.
such as from edges to real objects. Pix2pix [13] used a
conditional GAN based network which requires paired data
for training. However, paired data are hard to collect in many
applications. Therefore, methods requiring non-paired data are
explored. Liu and Tuzel proposed the coupled GAN (Co-
GAN) [21] to learn a joint distribution of two domains through
weight sharing. Later, Liu [20] extended the CoGAN to unsu-
pervised image-to-image translation. Some other studies [5],
[28], [29] encourage the input and output to share certain
content even though they may differ in style by enforcing the
output to be close to the input in a predefined metric space
such as class label space. Recently, Zhu et al. proposed the
cycle-consistent adversarial network (CycleGAN) [41] which
performs well for many vision and graphics tasks.
Character Typeface Transfer. Most existing studies model
character typeface transfer as an image translation task. The
“Rewrite” project uses a simple top-down CNN structure and
transfers a typographic font to another stylized typographic
font [1]. As an improved version, the “zi-to-zi” project can
transfer multiple styles by assigning each style a one-hot cat-
egory label and training the network in a supervised way [2].
The recent work “From A to Z” also adopts a supervised
method and assigns each character a one-hot label [33]. Lyu
et al. proposed an auto-encoder guided GAN network (AEGN)
which can synthesize calligraphy images with specified style
from standard Chinese font images [23]. [4] proposed a multi-
content GAN which could achieve typeface transfer on English
characters with a few examples of target style.
However, existing work usually studies character typeface
transfer and neural style transfer individually, while the pro-
posed EMD provides a unified framework which is applicable
to both tasks. In addition, most of the methods reviewed above
can only transfer styles in the training set and the network must
be retrained for new styles. In contrast, the proposed EMD
framework can generate images with new styles/contents given
only a few reference images. We present a comparison of
the methods in Table I.
III. GENERALIZED STYLE TRANSFER FRAMEWORK
The generalized style transfer
framework EMD is an
encoder-decoder network which consists of four subnets:
Style Encoder, Content Encoder, Mixer and Decoder, as
shown in Figure 1. First, the Style/Content Encoder extracts
style/content representations given style/content reference im-
ages. Next, the Mixer integrates the style feature and content
feature, and the combined feature is then fed into the Decoder.
Finally, the Decoder generates the image with the target style
and content.
The input of the Style Encoder and Content Encoder are
style reference set RSi and content reference set RCj , re-
spectively. RSi consists of r reference images with the same
style Si but different contents Cj1, Cj2, . . . , Cjr
RSi = {Iij1, Iij2, . . . , Iijr },
(1)
where Iij represents the image with style Si and content
Cj. For example, in character typeface transfer tasks, RSi
contains r images with the same font Si such as serif,
sanserif, and blackletter, but different characters. Similarly,
RCj is for content Cj (j = 1, 2, . . . , m) which consists of
r reference images of the same character Cj but in different
styles Si1, Si2, . . . , Sir
RCj = {Ii1j, Ii2j, . . . , Iirj}.
(2)
The whole framework is trained end-to-end by trying to finish
a series of tasks: generate images with target style and content
given the style and content reference images. In this way,
we expect the framework to summarize from these similar
tasks and learn to extract style and content representations,
and then transfer this ability to new styles and contents.
It is worth noting that the proposed EMD learning frame-
work is quite flexible and the Style Encoder, Content Encoder,
Mixer, and Decoder can be tailored based on specific tasks. In
the rest of the section, under this framework, we demonstrate
with two individual networks for character typeface transfer
and neural style transfer, respectively.
IV. CHARACTER TYPEFACE TRANSFER
The detailed network architecture employed for character
typeface transfer is shown in Figure 2.
A. Encoder Network
The two encoder networks used for character typeface
transfer have the same architecture and consist of a se-
ries of Convolution-BatchNorm-LeakyReLU down-sampling
blocks which yield 1×1 feature representations of the input
style/content reference images. The first convolution layer is
with 5×5 kernel and stride 1 and the rest are with 3×3
kernel and stride 2. All ReLUs are leaky, with slope 0.2.
The r input reference images are concatenated in the channel
dimension to feed into the encoders. This allows the encoders
to capture the common characteristics among images of the
same style/content.
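To make the block structure concrete, the following is a minimal PyTorch-style sketch of such a down-sampling encoder; the paper provides no code, so the module name, layer count, and channel widths here are illustrative assumptions rather than the authors' implementation.

```python
import torch
import torch.nn as nn

def conv_bn_lrelu(in_ch, out_ch, kernel, stride):
    # Convolution-BatchNorm-LeakyReLU down-sampling block (LeakyReLU slope 0.2).
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel, stride, kernel // 2),
        nn.BatchNorm2d(out_ch),
        nn.LeakyReLU(0.2),
    )

class ReferenceEncoder(nn.Module):
    """Encodes r channel-concatenated reference images into a small spatial feature."""
    def __init__(self, r=10, base=64, n_down=4):
        super().__init__()
        blocks = [conv_bn_lrelu(r, base, kernel=5, stride=1)]   # first block: 5x5, stride 1
        ch = base
        for _ in range(n_down):                                  # remaining blocks: 3x3, stride 2
            blocks.append(conv_bn_lrelu(ch, ch * 2, kernel=3, stride=2))
            ch *= 2
        self.net = nn.Sequential(*blocks)

    def forward(self, refs):
        # refs: list of r grayscale images, each of shape (N, 1, H, W).
        x = torch.cat(refs, dim=1)   # concatenate along the channel dimension
        return self.net(x)           # spatial size shrinks toward 1x1 with enough stride-2 blocks
```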
B. Mixer Network
Given the style representations and content representations
obtained by the Style Encoder and Content Encoder, we em-
ploy a bilinear model as the Mixer to combine the two
factors. The bilinear models are two-factor models with the
mathematical property of separability: their outputs are linear
in either factor when the other is held constant. It has been
demonstrated that the influences of the two factors can be
efficiently separated and combined in a flexible representation
that can be naturally generalized to unfamiliar factor classes
such as new styles [30]. Furthermore, the bilinear model has
also been successfully used in zero-shot learning as a compat-
ibility function to associate visual representation and auxiliary
class text description [6], [10], [35]. The learned compatibility
function can be seen as the shared knowledge and transferred
to new classes. Here, we take the bilinear model to integrate
styles and contents together which is formulated as
Fij = SiWCj,
(3)
where W is a tensor with size R × K × B, Si is the R-
dimensional style feature and Cj is the B-dimensional content
feature. Fij can be seen as the K-dimensional feature vector
of image Iij which is further taken as the input of the Decoder
to generate the image with style Si and content Cj.
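A minimal sketch of the bilinear mixing of Eq. (3), assuming batched R-dimensional style vectors, B-dimensional content vectors, and a learnable R×K×B tensor W (names, initialization, and the choice R=B=K=512, matching the encoder's final channel width, are illustrative):

```python
import torch
import torch.nn as nn

class BilinearMixer(nn.Module):
    def __init__(self, R, K, B):
        super().__init__()
        # W is an R x K x B tensor of learnable parameters.
        self.W = nn.Parameter(torch.randn(R, K, B) * 0.01)

    def forward(self, style, content):
        # style: (N, R), content: (N, B) -> combined feature F: (N, K), i.e. F_ij = S_i W C_j.
        return torch.einsum('nr,rkb,nb->nk', style, self.W, content)

# Example usage with R = B = K = 512.
mixer = BilinearMixer(R=512, K=512, B=512)
F = mixer(torch.randn(4, 512), torch.randn(4, 512))  # shape (4, 512)
```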
C. Decoder Network
The image generator is a typical decoder network which is
symmetrical to the encoder and maps the combined feature
representation to output images with target style and content.
The Decoder roughly follows the architectural guidelines set
forth by Radford et. al [26] and consists of a series of
Deconvolution-BatchNorm-ReLU up-sampling blocks except
that the last layer is the deconvolution layer. Other than the last
layer which uses 5×5 kernels and stride 1, all deconvolution
layers use 3×3 kernels and stride 2. The outputs are finally
transformed into [0,1] by the sigmoid function.
In addition, because the stride convolution in Style Encoder
and Content Encoder is detrimental to the extraction of spatial
information, we adopt the skip-connection which has been
commonly used in semantic segmentation tasks [14], [22],
[27] to refine the segmentation using spatial information from
different resolutions. Although the content inputs and outputs
differ in appearance, they share the same structure. Hence,
we concatenate the input feature map of each up-sampling
block with the corresponding output of the symmetrical down-
sampling block in Content Encoder to allow the Decoder to
learn back the relevant structure information lost during the
down-sampling process.
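The content skip-connection can be sketched as follows: each up-sampling block receives the previous decoder feature concatenated with the output of the symmetric Content Encoder block. This is a hedged illustration under assumed shapes, not the authors' code.

```python
import torch
import torch.nn as nn

def deconv_bn_relu(in_ch, out_ch):
    # Deconvolution-BatchNorm-ReLU up-sampling block (3x3, stride 2).
    return nn.Sequential(
        nn.ConvTranspose2d(in_ch, out_ch, 3, stride=2, padding=1, output_padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(),
    )

def decode_with_skips(mixed_feature, content_skips, blocks):
    """mixed_feature: (N, C, 1, 1) output of the Mixer (reshaped to a 1x1 map).
    content_skips: Content Encoder activations, deepest first, spatially matching each stage.
    blocks: up-sampling modules whose in_channels account for the concatenation."""
    x = mixed_feature
    for block, skip in zip(blocks, content_skips):
        x = torch.cat([x, skip], dim=1)  # channel-concat with the symmetric encoder output
        x = block(x)
    return torch.sigmoid(x)              # final outputs mapped into [0, 1]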
D. Loss Function
For character typeface transfer tasks, it is possible to obtain
a reasonable set of the target images. Therefore, we leverage
the target images to train the network. Given a training set Dt,
the training objective is defined as
$$\theta = \arg\min_{\theta} \sum_{I_{ij} \in D_t} L(\hat{I}_{ij}, I_{ij} \mid R_{S_i}, R_{C_j}; \theta), \qquad (4)$$
where θ represents the model parameters, ˆIij is the generated
image and L( ˆIij, Iij|RSi, RCj ; θ) is the generation loss which
is formulated as
$$L(\hat{I}_{ij}, I_{ij} \mid R_{S_i}, R_{C_j}; \theta) = W^{ij}_{st} \times W^{ij}_{d} \times \|\hat{I}_{ij} - I_{ij}\|. \qquad (5)$$
The pixel-wise L1 loss is employed as the generation loss for
character typeface transfer problem rather than L2 loss because
L1 loss tends to yield sharper and cleaner images [13], [23].
In each learning iteration, the size, thickness, and darkness
of the characters in the target set may vary significantly.
Due to the way the loss is defined, the model tends to
optimize for characters with more pixels, i.e., big and thick
characters. Moreover, models trained using L1 loss tend to
pay more attention to darker characters and perform poorly
on lighter characters. To alleviate the above imbalance, we
add two weights to the generation loss: $W^{ij}_{st}$, which accounts
for the size and thickness of characters, and $W^{ij}_{d}$, which
accounts for the darkness of characters.
As for $W^{ij}_{st}$, we first calculate the number of black pixels,
i.e., pixels whose values are less than 0.5 after being normalized into
[0,1]. Then $W^{ij}_{st}$ is defined as the reciprocal of the number of
black pixels in each target image
$$W^{ij}_{st} = 1/N^{ij}_{b}, \qquad (6)$$
where $N^{ij}_{b}$ is the number of black pixels of the target image $I_{ij}$.
As for $W^{ij}_{d}$, we calculate the mean value of black pixels
for each target image and set a softmax weight
$$W^{ij}_{d} = \frac{\exp(\mathrm{mean}_{ij})}{\sum_{I_{ij} \in D_t} \exp(\mathrm{mean}_{ij})}, \qquad (7)$$
where $\mathrm{mean}_{ij}$ is the mean value of the black pixels of the
target image $I_{ij}$.
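A sketch of the weighted generation loss of Eqs. (5)-(7), assuming images normalized to [0,1] with dark strokes near 0; the softmax is taken over the batch as a practical approximation of the sum over the training set in Eq. (7).

```python
import torch

def weighted_l1_loss(generated, target):
    # generated, target: (N, 1, H, W) in [0, 1]; "black" pixels have value < 0.5.
    black = (target < 0.5).float()
    n_black = black.sum(dim=(1, 2, 3)).clamp(min=1.0)          # N_b^{ij}
    w_st = 1.0 / n_black                                       # size/thickness weight, Eq. (6)
    mean_black = (target * black).sum(dim=(1, 2, 3)) / n_black
    w_d = torch.softmax(mean_black, dim=0)                     # darkness weight, Eq. (7) over the batch
    l1 = (generated - target).abs().mean(dim=(1, 2, 3))        # per-image L1 distance
    return (w_st * w_d * l1).sum()
```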
V. NEURAL STYLE TRANSFER
We further apply the EMD framework to neural style
transfer. Due to the difficulty of finding natural images with
the same style or content, the input to the Style Encoder and
Content Encoder is one image. For simplicity, we denote the
style image Isty and the content image Icon.
Many existing neural style transfer methods employ the
Gram matrix to represent styles [11], [15] and style transfer
is achieved by matching the Gram matrix of generated images
Fig. 3. The detailed architecture of the proposed generalized EMD model for neural style transfer.
with that of style images. It has been theoretically proved
that if we consider the activation at each position of feature
maps as individual samples, then matching Gram matrix can be
reformulated as minimizing the Maximum Mean Discrepancy
(MMD) [17]. Therefore, neural style transfer can be seen as
distribution alignment from the content image to the style
image [17].
Based on above foundation, the Conditional Instance Nor-
malization (CIN) method proposes to learn a set of affine
parameters (γs and βs) for each style and transfers style with
an affine transformation [9]
$$\hat{F} = \frac{F_{con} - \mu(F_{con})}{\sigma(F_{con})}\,\gamma_s + \beta_s, \qquad (8)$$
where Fcon are the feature maps of the content reference
image, µ(Fcon) and σ(Fcon) are the mean and standard devi-
ation of Fcon across the spatial axis. Despite its promising
performance, this method is restricted to styles in the training
set. To solve this problem, [12] designed an Adaptive Instance
Normalization (AdaIN) layer where the affine parameters are
directly calculated from the style feature maps of a cer-
tain layer in pre-trained VGG-19, namely γs=σ(Fsty) and
βs=µ(Fsty). But this is not as accurate as CIN because the
calculated affine parameters are indeed estimation of the real
statistics. Borrowing ideas from the above two studies, our
method learns the affine parameters from the style image by
the Style Encoder, which is both flexible and accurate.
A. Network Architecture
For neural style transfer, the Style Encoder consists of a
stack of Convolution Blocks and Residual Blocks, a Global
Pooling layer and a Fully-Connected layer. Each Convolu-
tion Block <ConvBlock,k,s,c> is composed of a convo-
lution layer with kernel size k, stride s and filter number
c and a LeakyReLU layer with slope 0.2. Each Residual
block <ResBlock,k,c> consists of two convolution blocks
<ConvBlock,k,1,c>. Then the Global Pooling layer (here
we use Global Average Pooling) produces a feature map
of size 1 × 1. The final Fully-Connected layer <F C,c> is
used to generate the c-dimensional statistic vectors (mean
and standard deviation). For Content Encoder, we use three
Convolution Blocks followed by four Residual Blocks. The
detailed network architecture is displayed in Figure 3.
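A hedged sketch of this Style Encoder, following the block sizes listed in Figure 3 (64, 128, 256, 256 convolution blocks, four residual blocks, global average pooling, and a fully-connected layer emitting the 2c statistics vector); the code organization and names are assumptions.

```python
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch, k, s):
    return nn.Sequential(nn.Conv2d(in_ch, out_ch, k, s, k // 2), nn.LeakyReLU(0.2))

class ResBlock(nn.Module):
    def __init__(self, ch, k=3):
        super().__init__()
        self.body = nn.Sequential(conv_block(ch, ch, k, 1), conv_block(ch, ch, k, 1))
    def forward(self, x):
        return x + self.body(x)

class StyleEncoder(nn.Module):
    def __init__(self, c=256):
        super().__init__()
        self.features = nn.Sequential(
            conv_block(3, 64, 5, 1), conv_block(64, 128, 3, 2),
            conv_block(128, 256, 3, 2), conv_block(256, c, 3, 2),
            ResBlock(c), ResBlock(c), ResBlock(c), ResBlock(c),
            nn.AdaptiveAvgPool2d(1),        # global average pooling to 1x1
        )
        self.fc = nn.Linear(c, 2 * c)       # predicts a mean and a std for each of c channels

    def forward(self, style_img):
        h = self.features(style_img).flatten(1)
        mu, sigma = self.fc(h).chunk(2, dim=1)
        return mu, sigma
```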
Through the Content Encoder, we obtain the feature maps
Fcon of the content reference image Icon. In addition, the
distribution statistics of the style reference image Isty are
learned by the Style Encoder and we denote the mean by
µsty and the standard deviation by σsty. Then based on
the foundation that neural style transfer can be seen as a
distribution alignment process from the content image to the
style image, we mix these two factors by statistic matching
between style and content images:
$$\hat{F}^c = \frac{F^c_{con} - \mu(F^c_{con})}{\sigma(F^c_{con})}\,\sigma^c_{sty} + \mu^c_{sty}, \qquad (9)$$
where $\hat{F}^c$ is the statistic-aligned feature map for the c-th
channel, and $\mu(F^c_{con})$ and $\sigma(F^c_{con})$ are the mean and standard
deviation computed across all positions of feature map $F^c_{con}$:
$$\mu(F^c_{con}) = \frac{1}{HW} \sum_{h=1}^{H} \sum_{w=1}^{W} F^{hwc}_{con}, \qquad (10)$$
$$\sigma(F^c_{con}) = \left[ \frac{1}{HW} \sum_{h=1}^{H} \sum_{w=1}^{W} \left( F^{hwc}_{con} - \mu(F^c_{con}) \right)^2 \right]^{\frac{1}{2}}, \qquad (11)$$
where we suppose the size of $F_{con}$ is H × W × C.
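The statistic matching of Eqs. (9)-(11) is an AdaIN-style operation in which the target statistics come from the Style Encoder; a minimal sketch under assumed tensor shapes:

```python
import torch

def statistic_matching(f_con, mu_sty, sigma_sty, eps=1e-5):
    # f_con: content feature maps of shape (N, C, H, W).
    # mu_sty, sigma_sty: per-channel statistics from the Style Encoder, shape (N, C).
    mu_con = f_con.mean(dim=(2, 3), keepdim=True)                           # Eq. (10)
    sigma_con = ((f_con - mu_con) ** 2).mean(dim=(2, 3), keepdim=True).sqrt()  # Eq. (11)
    normalized = (f_con - mu_con) / (sigma_con + eps)
    return normalized * sigma_sty[:, :, None, None] + mu_sty[:, :, None, None]  # Eq. (9)
```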
The Decoder takes the feature maps ˆF as the input and
generates the image Igen with target style and content. The
architecture of the Decoder mostly mirrors the layers of
Content Encoder except that the stride-2 convolution is re-
placed by stride-1 convolution and each convolution layer is
followed by a ReLU rectifier except the last layer. Besides,
we upsample the feature maps by nearest neighbor method
in up-sample layers to reduce checkerboard effects as done
in [12].
B. Loss Function
Similar to [31], we use a pretrained VGG-19 model to
calculate the loss function
L(Igen|Isty, Icon) = λcLc + λsLs + λtvLtv,
(12)
which is a weighted combination of the content loss Lc, the
style loss Ls and the total variation regularizer Ltv.
The content loss Lc is the squared and normalized Euclidean
distance between the feature maps of generated images and
Fig. 5. Generation results for D1, D2, D3, D4 (from upper left to lower right)
with different training set size. TG: Target image, O1: Output for Nt=20k,
O2: Output for Nt=50k, O3: Output for Nt=100k, O4: Output for Nt=300k,
O5: Output for Nt=500k. In all cases, r=10.
and train the model end-to-end with the Adam optimization
method until the output is stable.
In each experiment, we first randomly sample Nt target
images with known content and known styles from D1 as
training examples. We then construct the two reference sets
for each target image by randomly sampling r images of
the corresponding style/content from D1. Figure 4 provides
an illustration of target images selection and reference set
construction. Each row represents one style and each column
represents a content. The target images are represented by
randomly scattered red “x” marks. The reference images for
the target image are selected from corresponding style/content,
shown as the orange circles for the style reference images and
green circles for content reference images.
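A sketch of this sampling procedure in plain Python, with a hypothetical `images[(style, content)]` lookup standing in for the dataset D1; names and structure are illustrative.

```python
import random

def build_training_triplets(train_styles, train_contents, images, n_targets, r=10):
    """Sample n_targets target images from D1 and build the two reference sets."""
    triplets = []
    for _ in range(n_targets):
        s = random.choice(train_styles)
        c = random.choice(train_contents)
        target = images[(s, c)]
        # r images of the same style s but randomly chosen contents.
        style_refs = [images[(s, random.choice(train_contents))] for _ in range(r)]
        # r images of the same content c but randomly chosen styles.
        content_refs = [images[(random.choice(train_styles), c)] for _ in range(r)]
        triplets.append((style_refs, content_refs, target))
    return triplets
```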
3) Experimental Results: Influence of the Training Set
Size To evaluate the influence of the training set size on style
transfer, we conduct experiments for Nt=20k, 50k, 100k, 300k
and 500k. The generation results for D1, D2, D3 and D4 are
shown in Figure 5. As we can see, the larger the training
set, the better the performance, which is consistent with our
intuition. The generated images with Nt=300k and 500k are
clearly better than images generated with Nt=20k, 50k and
100k. Besides, the performance of Nt=300k and Nt=500k
is close which implies that with more training images, the
network performance tends to be saturated and Nt=300k is
enough for good results. Therefore, we take Nt=300k for the
rest of experiments.
Influence of the Reference Set Size In addition, we conduct
experiments with different numbers of reference images. Fig-
ure 6 displays the image generation results of Nt=300k with
r=5, r=10 and r=15 respectively. As can be seen from the
figure, more reference images lead to better detail generation
for characters. Besides, characters generated with r=5 are
overall acceptable, meaning that our model can generalize to novel
styles using only a few reference images. The generation
results of r=10 and r=15 are close, therefore we take r=10
in our other experiments. Intuitively, more reference images
supply more information about strokes and styles of characters,
Fig. 4. The illustration of data set partition, target images selection and
reference set construction (best viewed in color).
content reference images. Suppose the content loss is cal-
culated for the l-th layer and the feature maps are of size
Hl × Wl × Cl, then the content loss can be formulated as
$$L_c = \frac{1}{H_l W_l C_l} \| F^l_{gen} - F^l_{con} \|_2^2, \qquad (13)$$
where $F^l_{gen}$ and $F^l_{con}$ are feature maps in the l-th layer for the
generated image $I_{gen}$ and the content reference image $I_{con}$.
The style loss Ls is constructed by aligning the Batch
Normalization (BN) statistics (mean and standard deviation)
[12], [17] of the feature maps of the generated image Igen
and the style reference image Isty
$$L_s = \sum_{l} \left( \| \mu(F^l_{gen}) - \mu(F^l_{sty}) \|_2^2 + \| \sigma(F^l_{gen}) - \sigma(F^l_{sty}) \|_2^2 \right). \qquad (14)$$
In addition, following [15], [24], we add the total variation
regularizer Ltv to encourage the smoothness of generated images.
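Given feature maps extracted from a pretrained VGG-19 at the layers named in the implementation details, the three loss terms can be sketched as follows. The feature-extraction step itself is omitted; the argument names are assumptions, not a specific library API.

```python
import torch

def mean_std(f):
    mu = f.mean(dim=(2, 3))
    sigma = ((f - mu[:, :, None, None]) ** 2).mean(dim=(2, 3)).sqrt()
    return mu, sigma

def transfer_loss(gen_c_feat, con_feat, gen_s_feats, sty_feats, gen_img,
                  lambda_c=1.0, lambda_s=5.0, lambda_tv=1e-5):
    # Content loss, Eq. (13): squared, normalized distance at the content layer.
    l_c = ((gen_c_feat - con_feat) ** 2).mean()
    # Style loss, Eq. (14): match BN statistics (mean/std) over several layers.
    l_s = 0.0
    for fg, fs in zip(gen_s_feats, sty_feats):
        mg, sg = mean_std(fg)
        ms, ss = mean_std(fs)
        l_s = l_s + ((mg - ms) ** 2).sum(1).mean() + ((sg - ss) ** 2).sum(1).mean()
    # Total variation regularizer encouraging smooth outputs.
    l_tv = (gen_img[:, :, 1:, :] - gen_img[:, :, :-1, :]).abs().mean() + \
           (gen_img[:, :, :, 1:] - gen_img[:, :, :, :-1]).abs().mean()
    return lambda_c * l_c + lambda_s * l_s + lambda_tv * l_tv
```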
VI. EXPERIMENTS
A. Character Typeface Transfer
1) Data Set: To evaluate the proposed EMD model with
Chinese Typeface transfer tasks, we construct a data set of 832
fonts (styles), each font with 1732 commonly used Chinese
characters (contents). All images are in the size of 80 × 80
pixels. We randomly select 75% of the styles and contents as
known styles and contents (i.e. 624 train styles and 1299 train
contents) and leave the rest 25% as novel styles and contents
(i.e. 208 novel styles and 433 novel contents). The entire data
set is accordingly partitioned into four subsets as shown in
Figure 4: D1, images with known styles and contents, D2,
images with known styles but novel contents, D3, images with
known contents but novel styles, and D4, images with both
novel styles and novel contents. The training set is selected
from D1, and four test sets are selected from D1, D2, D3, and
D4, respectively. The four test sets represent different levels
of style transfer challenges.
2) Implementation Details: In our experiment, the output
channels of the convolution layers in the Style Encoder and
Content Encoder are 1, 2, 4, 8, 8, 8, 8, 8 times of C
respectively, where C=64. And for the Mixer, we set R=B=K
in our implementation. The output channels of the first seven
deconvolution layers in Decoder are 8, 8, 8, 8, 4, 2, 1 times
of C respectively. We set the initial learning rate as 0.0002
Known Style D1 D2 D3 D4 Novel Style Known Content Novel Content TG:
O1:
O2:
O3:
TG:
O1:
O2:
O3:
TG:
O1:
O2:
O3:
TG:
O1:
O2:
O3:
Fig. 6. The impact of the number of reference images on the generation of
images in D1, D2, D3, D4, respectively (from upper left to lower right).
TG: Target image, O1: Output for r=5, O2: Output for r=10, O3: Output for
r=15. In all cases, Nt=300k.
Fig. 7. The impact of the skip-connection on generation of images in D1,
D2, D3, D4, respectively (from upper left to lower right). TG is the target
image, O1 and O2 are outputs of models without and with skip-connection.
In all cases Nt=300k, r=10.
making the common points in the reference sets more obvious.
Therefore, given r > 1, our model can achieve co-learning of
images with the same style/content. Moreover, with r > 1 we
can learn more images at once which improves the learning
efficiency, i.e., if we split the <r, r, 1> triplets into r² <1, 1,
1> triplets, the learning time increases nearly r² times under
the same condition.
Effect of the Skip-connection To evaluate the effectiveness
of the skip-connection during image generation, we compare
the results with and without skip-connection in Figure 7.
As shown in the figure, images in D1 are generated best,
next is D3 and last is D2 and D4, which conforms to the
difficulty level and indicates that novel contents are more
challenging to extract than novel styles. For known contents,
Fig. 8. Validation of pure style extraction. CR: the content reference set, TG:
the target image, O1, O2 and O3 are generated by CR and three different
style reference sets SR1, SR2 and SR3.
Fig. 9. Validation of pure content extraction. SR: the style reference set, TG:
the target image, O1, O2 and O3 are generated using SR but three different
content reference sets CR1, CR2 and CR3.
models with and without skip-connection perform closely. But
for novel contents,
images generated with skip-connection
are much better in details. Besides, the model without skip-
connection may generate images of novel characters to be
similar characters which it has seen before. This is because the
structure of novel characters is more challenging to extract and
the loss of structure information during down-sampling makes
the model generate blurry or even wrong characters. However,
with content skip-connection, the loss in location and structure
information is recaptured by the Decoder network.
Validation of Style and Content Separation Separating style
and content is the key feature of the proposed EMD model.
To validate the clear separation of style and content, we
combine one content representation with style representations
from a few disjoint style reference sets for one style and
check whether the generated images are the same. For better
validation, the target images are selected from D4, and the
content reference sets and style reference sets are all selected
from novel styles and novel contents. Similarly, we combine
one style representation with content representations from a
few disjoint content reference sets. The results are displayed
in Figure 8 and Figure 9, respectively. As shown in Figure 8,
the generated O1, O2 and O3 are similar although the style
reference sets used are quite different, demonstrating that the
Style Encoder is able to accurately extract style representations
as the only thing the three style reference sets share is the style.
Similar results can be found in Figure 9, showing that the
Content Encoder accurately extracts content representations.
Comparison with Baseline Methods In the following, we
compare our method with the following baselines for character
style transfer.
Fig. 10. Comparison of image generation for known styles and novel contents. Equal number of image pairs with source and target styles are used to train the baselines. Rows of the figure: source, Pix2pix, AEGN, Zitozi, C-GAN, EMD, and target. Quantitative results reported alongside the figure:

Method    L1 loss   RMSE     PDAR
Pix2pix   0.0105    0.0202   0.17
AEGN      0.0112    0.0202   0.3001
Zitozi    0.0091    0.0184   0.1659
C-GAN     0.0112    0.02     0.3685
EMD       0.0087    0.0184   0.1332
• Pix2pix [13]: Pix2pix is a conditional GAN based image
translation network, which consists of encoder, decoder
and a discriminator. It also adopts the skip-connection to
connect encoder and decoder. Pix2pix is optimized by L1
distance loss and adversarial loss.
• Auto-encoder guided GAN [23]: Auto-encoder guided
GAN consists of two encoder-decoder networks, one for
image transfer and another acting as an auto-encoder to
guide the transfer to learn detailed stroke information.
• Zi-to-zi [2]: Zi-to-zi is proposed for Chinese typeface
transfer which is based on the encoder-decoder architec-
ture followed by a discriminator. In discriminator, there
are two fully connected layers to predict the real/fake and
the style category respectively.
• CycleGAN [41]: CycleGAN consists of two mapping
networks which translate images from style A to B
and from style B to A, respectively and construct a
cycle process. The CycleGAN model is optimized by the
adversarial loss and cycle consistency loss.
For comparison, we use the font Song as the source font
which is simple and commonly used and transfer it to target
fonts. Our model is trained with Nt=300k and r=10; on
average, we use fewer than 500 images for each style. We
compare our method with baselines on generating images with
known styles and novel styles, respectively. For novel style,
the baselines need to be re-trained from scratch.
Known styles as target style. Taking known styles as the
target style, baselines are trained using the same number of
paired images as the images our model used for the target
style. The results are displayed in Figure 10 where CycleGAN
is denoted as C-GAN for simplicity. We can observe that for
known styles and novel contents, our method performs much
better than pix2pix, AEGN and CycleGAN and close to or
even slightly better than zi-to-zi. This is because pix2pix and
AEGN usually need more samples to learn a style [23]. Cycle-
GAN performs poorly and only generates part of characters or
some strokes, possibly because it learns the domain mappings
without the domain knowledge. Zitozi performs well since it
learns multiple styles at the same time and the contrast among
different styles helps the model better learn styles.
For quantitative analysis, we calculate the L1 loss, Root
Mean Square Error (RMSE) and the Pixel Disagreement Ratio
(PDAR) [41] between the generated images and the target
images. PDAR is the number of pixels with different values
in the two images divided by the total image size after
image binarization. We conduct experiments for 10 randomly
sampled styles and the average results are displayed at the last
three columns in Figure 10 and the best performance is bold.
We can observe that our method performs best and achieves
the lowest L1 loss, RMSE and PDAR.
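For reference, the three metrics can be computed as in the following NumPy sketch; the 0.5 binarization threshold is an assumption consistent with the earlier definition of black pixels.

```python
import numpy as np

def evaluate(generated, target, thresh=0.5):
    # generated, target: arrays in [0, 1] of identical shape.
    l1 = np.mean(np.abs(generated - target))
    rmse = np.sqrt(np.mean((generated - target) ** 2))
    # PDAR: fraction of pixels that disagree after binarization.
    pdar = np.mean((generated < thresh) != (target < thresh))
    return l1, rmse, pdar
```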
Novel styles as target style. Taking novel styles as the
target style, we test our model to generate images of novel
styles and contents given r=10 style/content reference images
without retraining. As for baselines, retraining is needed. Here,
we conduct two experiments for baselines. One is that we first
pretrain a model for each baseline method using the training
set our method used and then fine-tune the pretrained model
with the same 10 reference images as our method used. The
results show that all baseline methods perform poorly and it is
infeasible to learn a style by fine-tuning on only 10 reference
images. Thus, we omit the experiment results here. The other
setting is training the baseline model from scratch. Since it
is unrealistic to train baseline models with only 10 samples,
we train them using 300, 500, 1299 images of the target style
respectively. We use 1299 because the number of training
contents in our data set is 1299. The results are presented in
Figure 11. As shown in the figure, the proposed EMD model
can generalize to novel styles from only 10 style reference
images but other methods need to be retrained with more
samples. The pix2pix, AEGN and CycleGAN perform worst
even trained with all 1299 training images, which demonstrates
that these three methods are not effective for character style
transfer especially when the training data are limited. With
only 10 style reference images, our model performs better than
zi-to-zi-300 namely zi-to-zi model learned with 300 examples
for each style, close to zi-to-zi-500 and a little worse than zi-
to-zi-1299. This may be because zi-to-zi learns multiple styles
at the same time and learning with style contrast helps the
model learn better.
The quantitative comparison results for L1 loss, RMSE and
PDAR are shown at the last three columns of Figure 11.
Although given only 10 style reference images, our method
performs better than all pix2pix, AEGN and CycleGAN mod-
els and zi-to-zi-300, and close to zi-to-zi-500 and zi-to-zi-
1299, which demonstrates the effectiveness of our method.
In conclusion, these baseline methods require many images
of source styles and target styles, which may be difficult to
collect. Besides, the learned baseline model can only transfer
styles appearing in train set and they have to be retrained for
new styles. But our method can generalize to novel styles given
Fig. 11. Comparison of image generation for novel styles and contents given r=10. The baseline methods are trained with 300, 500, 1299 image pairs respectively. Rows of the figure: source, Pix2pix-300/500/1299, AEGN-300/500/1299, Zitozi-300/500/1299, C-GAN-300/500/1299, EMD-10, and target. Quantitative results reported alongside the figure:

Method        L1 loss   RMSE     PDAR
Pix2pix-300   0.0109    0.0206   0.1798
Pix2pix-500   0.0106    0.0202   0.1765
Pix2pix-1299  0.01      0.0196   0.1531
AEGN-300      0.0117    0.02     0.3951
AEGN-500      0.0108    0.02     0.2727
AEGN-1299     0.0105    0.0196   0.26
Zitozi-300    0.0091    0.0187   0.1612
Zitozi-500    0.009     0.0185   0.1599
Zitozi-1299   0.009     0.0183   0.1624
C-GAN-300     0.0143    0.0215   0.5479
C-GAN-500     0.0126    0.0203   0.4925
C-GAN-1299    0.0128    0.0203   0.4885
EMD-10        0.009     0.0186   0.1389
only a few reference images. In addition, baseline models
can only use images of target styles. However, since the
proposed EMD model learns feature representations instead of
transformation among specific styles, it can leverage images
of any styles and make the most of existing data.
B. Neural Style Transfer
1) Implementation Details: Following previous
stud-
ies [12], [15], we use the MS-COCO dataset [19] as the
content images and a dataset of paintings mainly collected
from WikiArt [3] as the style images. Each dataset contains
roughly 80,000 training examples. The model is trained using
the Adam optimizer with the learning rate of 0.0001. The batch
size is set to be 8 style-content pairs. We compute the style
loss using the relu1 2, relu2 2, relu3 3, relu4 3 layers of
VGG-19 and the content loss using the relu4 1 layer. We set
λc=1, λs=5 and λtv=1e-5. During training, we first resize the
smallest dimension of both images to 512 while preserving
the aspect ratio, then randomly crop regions of size 256×256.
Since the size of the fully connected layer in Style Encoder is
only related to the filter numbers, our model can be applied
to style/content images of any size during testing.
2) Comparison Methods: We compare the proposed neural
style transfer model with the following three types of baseline
methods:
• Fast but not flexible Per-Style-Per-Model method, which
is restricted to a single style and can not be generalized
to new styles. Here we use the state-of-the-art method
TextureNet [32] as an example. TextureNet is mainly a
generator which takes a noise variable z and a content
reference image as the inputs and generates the image
with target style/content.
• Flexible but slow optimization based method [11], which
optimizes one noise image to be with target style and
content iteratively with the help of a pretrained VGG
network.
• Flexible and fast Arbitrary-Style-Per-Model method,
which can achieve arbitrary style transfer with no need for
retraining. In this study, we compare with the following
three methods:
– Patch-based [8]: Patch-based method conducts style
transfer by swapping each content feature patch with
the nearest style patch. The network consists of a
convolution network, an inverse network and a style
swap layer.
– AdaIn [12]: AdaIn is based on the Adaptive Instance
Normalization and the network of AdaIn consists
of an encoder, a decoder and an Adaptive Instance
Normalization layer, where the encoder is fixed as
the first few layers of VGG-19.
– Universal [16]: Universal is designed based on the
whitening and coloring transformation which is em-
bedded in a series of pretrained encoder-decoder
image reconstruction networks.
Among the above baseline methods, the TextureNet is more
impressive in transfer quality than the other four baseline
methods, therefore, we take it as a benchmark. The results
of these baseline methods are all obtained by running their
released code with the default configurations.
3) Experimental Results: Comparison with Baseline
Methods As can be seen from Figure 12, the proposed method
performs better than other arbitrary style transfer methods
but a little worse than TextureNet. It is worth noting that
TextureNet is trained separately for each style but none of the
Fig. 12. The comparison results for neural style transfer. Columns from left to right: style, content, TextureNet [32], optimization-based [11], patch-based [8], AdaIn [12], Universal [16], and EMD.
Fig. 13. More experimental results for neural style transfer.
presented styles are observed by our model during training.
This is acceptable due to the trade-off between flexibility
and transfer quality. Patch-based method performs poorly. It
cannot capture some styles when many content patches are
swapped with style patches lacking style elements. AdaIn per-
forms well on most styles but the generated images are a little
blurry in details. It does not perform well on some complicated
styles. Universal replaces the training process with a series of
transformations but it is not effective at producing sharp details
and fine strokes. Figure 13 displays more style transfer results
of our proposed method, which demonstrate that the proposed
EMD framework can be generalized to arbitrary new styles
without the need for model retraining.
Style-content Trade-off During training, we can control the
degree of style transfer by adjusting the weight λs in loss
function. When testing, our method also allows the style-
content trade-off by adjusting the amount of style information
mixed with the content feature. With Style Encoder, we can
obtain the original style of the content image, and then we
mix the content feature with the style which is the weighted
combination of styles from the content image and the style
image
$$\hat{F} = \frac{F_{con} - \mu(F_{con})}{\sigma(F_{con})}\,\sigma_{new} + \mu_{new}, \qquad (15)$$
where $F_{con}$ is the feature map of the content image and
$$\mu_{new} = (1 - \alpha)\mu_{con} + \alpha\mu_{sty}, \qquad (16)$$
$$\sigma_{new} = (1 - \alpha)\sigma_{con} + \alpha\sigma_{sty}, \qquad (17)$$
Fig. 14. Examples of style-content trade-off.
Fig. 15. Examples of style interpolation.
where (µcon, σcon) and (µsty, σsty) are the learned statis-
tical information of the content image and the style image,
respectively. By adjusting the weight α, the Decoder generates
images gradually changing from the original style to the target
style. When α = 0, the Decoder tries to reconstruct the
content image and when α = 1.0, the Decoder outputs the
most stylized image. As shown in Figure 14, the stylized
image changes from slightly stylized to the most stylized with
increasing α.
Style Interpolation Similarly, our method can also be applied
for interpolation between two styles, which is achieved by
setting µnew = (1−α)µsty1+αµsty2 and σnew = (1−α)σsty1+
ασsty2 in Eq. 15. An example is presented in Figure 15. When
α = 0 and α = 1, style 1 and style 2 are used for the transfer,
respectively. When 0 < α < 1, an interpolation between the
two styles is used for the transfer.
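Both controls reduce to choosing the target statistics before the statistic matching step; a small sketch reusing the statistic-matching helper sketched earlier (function names are illustrative):

```python
def tradeoff_stats(mu_con, sigma_con, mu_sty, sigma_sty, alpha):
    # alpha = 0 reconstructs the content image's own style; alpha = 1 is the most stylized.
    mu_new = (1 - alpha) * mu_con + alpha * mu_sty            # Eq. (16)
    sigma_new = (1 - alpha) * sigma_con + alpha * sigma_sty   # Eq. (17)
    return mu_new, sigma_new

def interpolation_stats(mu_sty1, sigma_sty1, mu_sty2, sigma_sty2, alpha):
    # Interpolation between two styles, as used for Figure 15.
    return ((1 - alpha) * mu_sty1 + alpha * mu_sty2,
            (1 - alpha) * sigma_sty1 + alpha * sigma_sty2)
```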
decoder will be taken as the shared knowledge and transferred
to new styles and contents. Under this framework, we design
two individual networks for character typeface transfer and
neural style transfer tasks. Extensive experimental results on
these two tasks demonstrate its effectiveness.
In our study, the learning process consists of a series of
image generation tasks and we try to learn a model which can
generalize to new but related tasks by learning a high-level
strategy, namely learning the style and content representations.
This resembles the “learning-to-learn” paradigm. In the future,
we will explore more about “learning-to-learn” and integrate
it with our framework.
ACKNOWLEDGMENT
The work is partially supported by the High Technology
Research and Development Program of China 2015AA015801,
NSFC 61521062, STCSM 18DZ2270700.
VII. CONCLUSION AND FUTURE WORK
In this paper, we propose a unified style transfer frame-
work EMD for both character typeface transfer and neural
style transfer, which enables the transfer models to generalize
to new styles and contents given a few reference images.
The main idea is that from these reference images,
the
Style Encoder and Content Encoder extract style and content
representations, respectively. Then the extracted style and con-
tent representations are mixed by a Mixer and finally fed into
the Decoder to generate images with target styles and contents.
This learning framework allows simultaneous style transfer
among multiple styles and can be deemed as a special ‘multi-
task’ learning scenario. Then the learned encoders, mixer and
REFERENCES
[1] Rewrite. https://github.com/kaonashi-tyc/Rewrite. 1, 2, 3
[2] Zi-to-zi. https://kaonashi-tyc.github.io/2017/04/06/zi2zi.html. 2, 3, 8
[3] Painter by numbers, wikiart, 2016. https://www.kaggle.com/c/painter-
by-numbers. 9
[4] S. Azadi, M. Fisher, V. Kim, Z. Wang, E. Shechtman, and T. Darrell.
Multi-content gan for few-shot font style transfer. 2018. 1, 3
[5] K. Bousmalis, N. Silberman, D. Dohan, D. Erhan, and D. Krishnan.
Unsupervised pixel-level domain adaptation with generative adversarial
networks. In Proceedings of the IEEE Conference on Computer Vision
and Pattern Recognition, 2017. 3
[6] S. Changpinyo, W. Chao, B. Gong, and F. Sha. Synthesized classifiers
In Proceedings of the IEEE Conference on
for zero-shot learning.
Computer Vision and Pattern Recognition, pages 5327–5336, 2016. 4
[7] D. Chen, L. Yuan, J. Liao, N. Yu, and G. Hua. Stylebank: An explicit
In Proceedings of the
representation for neural image style transfer.
IEEE Conference on Computer Vision and Pattern Recognition, 2017.
2
[32] D. Ulyanov, A. Vedaldi, and V. Lempitsky. Improved texture networks:
maximizing quality and diversity in feed-forward stylization and texture
synthesis. In Proc. CVPR, 2017. 2, 9, 10
[33] P. Upchurch, N. Snavely, and K. Bala. From a to z: supervised transfer
In arXiv
of style and content using deep neural network generators.
preprint arXiv:1603.02003, 2016. 3
[34] P. Wilmot, E. Risser, and C. Barnes. Stable and controllable neural
texture synthesis and style transfer using histogram losses. arXiv preprint
arXiv:1701.08893, 2017. 1
[35] Y. Xian, Z. Akata, G. Sharma, Q. Nguyen, M. Hein, and B. Schiele.
In Proceedings of the
Latent embeddings for zero-shot classification.
IEEE Conference on Computer Vision and Pattern Recognition, pages
69–77, 2016. 4
[36] S. Xu, H. Jiang, T. Jin, F. C. Lau, and Y. Pan. Automatic generation
IEEE Intelligent
of chinese calligraphic writings with style imitation.
Systems, 2009. 1
[37] H. Zhang and K. Dana. Multi-style generative network for real-time
transfer. In arXiv preprint arXiv:1703.06953, 2017. 1
[38] X.-Y. Zhang, F. Yin, Y.-M. Zhang, C.-L. Liu, and Y. Bengio. Drawing
and recognizing chinese characters with recurrent neural network. IEEE
transactions on pattern analysis and machine intelligence, 40(4):849–
862, 2018. 1
[39] Y. Zhang, Y. Zhang, and W. Cai. Separating style and content for
generalized style transfer. In Proceedings of the IEEE Conference on
Computer Vision and Pattern Recognition, 2018. 1
[40] B. Zhou, W. Wang, and Z. Chen. Easy generation of personal chinese
In Multimedia and Expo (ICME), 2011 IEEE
handwritten fonts.
International Conference on, pages 1–6. IEEE, 2011. 1
[41] J. Y. Zhu, T. Park, P. Isola, and A. A. Efros. Unpaired image-to-image
translation using cycle-consistent adversarial networks. In Proceedings
of the IEEE International Conference on Computer Vision (ICCV), Oct
2017. 1, 2, 3, 8
[8] T. Q. Chen and M. Schmidt. Fast patch-based style transfer of arbitrary
style. arXiv preprint arXiv:1612.04337, 2016. 1, 2, 9, 10
[9] V. Dumoulin, J. Shlens, and M. Kudlur. A learned representation
In Proceedings of the International Conference on
for artistic style.
Learning Representations, 2017. 5
[10] A. Frome, G. Corrado, J. Shlens, S. Bengio, J. Dean, T. Mikolov, et al.
In Advances in
Devise: A deep visual-semantic embedding model.
neural information processing systems, pages 2121–2129, 2013. 4
[11] A. Gatys, A. Ecker, and M. Bethge.
Image style transfer using
convolutional neural networks. In Proceedings of the IEEE Conference
on Computer Vision and Pattern Recognition, pages 2414–2423, 2016.
1, 2, 4, 9, 10
[12] X. Huang and S. Belongie. Arbitrary style transfer in real-time
with adaptive instance normalization. In Proceedings of the IEEE
International Conference on Computer Vision (ICCV), Oct 2017. 1,
2, 5, 6, 9, 10
[13] P. Isola, J. Zhu, T. Zhou, and A. Efros. Image-to-image translation with
conditional adversarial networks. In Proceedings of the IEEE Conference
on Computer Vision and Pattern Recognition, 2017. 1, 2, 3, 4, 8
[14] S. J´egou, M. Drozdzal, D. Vazquez, A. Romero, and Y. Bengio. The
one hundred layers tiramisu: Fully convolutional densenets for semantic
In Proceedings of the IEEE Conference on Computer
segmentation.
Vision and Pattern Recognition Workshops (CVPRW), pages 1175–1183.
IEEE, 2017. 4
[15] J. Johnson, A. Alahi, and F. Li. Perceptual losses for real-time
style transfer and super-resolution. In Proceedings of the European
Conference on Computer Vision, pages 694–711. Springer, 2016. 1, 2,
4, 6, 9
[16] Y. Li, C. Fang, J. Yang, Z. Wang, X. Lu, and M.-H. Yang. Universal
style transfer via feature transforms. In Advances in Neural Information
Processing Systems, pages 385–395, 2017. 1, 2, 9, 10
[17] Y. Li, N. Wang, J. Liu, X. Hou, Y. Li, N. Wang, J. Liu, and X. Hou.
Demystifying neural style transfer. In Twenty-Sixth International Joint
Conference on Artificial Intelligence, pages 2230–2236, 2017. 5, 6
[18] Z. Lian, B. Zhao, and J. Xiao. Automatic generation of large-scale
handwriting fonts via style learning. In SIGGRAPH ASIA 2016 Technical
Briefs, page 12. ACM, 2016. 1
[19] T.-Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan,
P. Doll´ar, and C. L. Zitnick. Microsoft coco: Common objects in context.
In European conference on computer vision, pages 740–755. Springer,
2014. 9
[20] M. Y. Liu, T. Breuel, and J. Kautz. Unsupervised image-to-image
In Advances in Neural Information Processing
translation networks.
Systems, pages 700–708, 2017. 3
[21] M. Y. Liu and O. Tuzel. Coupled generative adversarial networks. In
Advances in Neural Information Processing Systems 29, pages 469–477.
2016. 1, 2, 3
[22] J. Long, E. Shelhamer, and T. Darrell. Fully convolutional networks
for semantic segmentation. In Proceedings of the IEEE Conference on
Computer Vision and Pattern Recognition, pages 3431–3440, 2015. 4
[23] P. Lyu, X. Bai, C. Yao, Z. Zhu, T. Huang, and W. Liu. Auto-
encoder guided gan for chinese calligraphy synthesis. In arXiv preprint
arXiv:1706.08789, 2017. 1, 2, 3, 4, 8
[24] A. Mahendran and A. Vedaldi. Understanding deep image representa-
tions by inverting them. pages 5188–5196, 2015. 6
[25] A. Mordvintsev, C. Olah, and M. Tyka. Inceptionism: Going deeper into
neural networks. Google Research Blog. Retrieved June, 20(14), 2015.
2
[26] A. Radford, L. Metz, and S. Chintala. Unsupervised representation
learning with deep convolutional generative adversarial networks.
In
Proceedings of the International Conference on Learning Representa-
tions, 2016. 4
[27] O. Ronneberger, P. Fischer, and T. Brox. U-net: Convolutional networks
for biomedical image segmentation. In Proceedings of the International
Conference on Medical Image Computing and Computer-Assisted Inter-
vention, pages 234–241. Springer, 2015. 4
[28] A. Shrivastava, T. Pfister, O. Tuzel, J. Susskind, W. Wang, and R. Webb.
Learning from simulated and unsupervised images through adversarial
training. In Proceedings of the IEEE Conference on Computer Vision
and Pattern Recognition, 2017. 3
[29] Y. Taigman, A. Polyak, and L. Wolf. Unsupervised cross-domain image
generation. In arXiv preprint arXiv:1611.02200, 2016. 3
[30] J. Tenenbaum and W. Freeman. Separating style and content.
In
Proceedings of the Advances in neural information processing systems,
pages 662–668, 1997. 1, 4
[31] D. Ulyanov, V. Lebedev, A. Vedaldi, and V. Lempitsky. Texture
networks: Feed-forward synthesis of textures and stylized images. In
Proceedings of the International Conference on Machine Learning,
pages 1349–1357, 2016. 1, 2, 5
|
synthetic_cpt | 2 | Reflexive_Guidance_Improving_OoDD_in_Vision-Language_Models_via_Self-Guided_Image-Adaptive_Concept_Generation.pdf | Proofs of the Technical Results Justifying a Biologically Inspired
Algorithm for Reactive Navigation of Nonholonomic Robots in
Maze-Like Environments
arXiv:1111.4767v1 [math.OC] 21 Nov 2011
Alexey S. Matveev a, Michael C. Hoy b, Andrey V. Savkin b
aDepartment of Mathematics and Mechanics, Saint Petersburg University, Universitetskii 28, Petrodvoretz, St.Petersburg, 198504, Russia
bSchool of Electrical Engineering and Telecommunications, The University of New South Wales, Sydney 2052, Australia
1 Introduction
Inspired by behaviors of animals, which are believed to use
simple, local motion control rules that result in remarkable
and complex intelligent behaviors [1,2,3], we examine the
navigation strategy that is aimed at reaching a steady target
in a steady arbitrarily shaped maze-like environment and is
composed of the following reflex-like rules:
s.1) At considerable distances from the obstacle,
(a) turn towards the target as quickly as possible;
(b) move directly to the target when headed to it;
s.2) At a short distance from the obstacle,
(c) Follow (a,b) when leaving from the obstacle;
(d) When approaching it, quickly avert the collision
threat by sharply turning.
Studies of target pursuit in animals, ranging from dragonflies
to fish and dogs to humans, have suggested that they often
use the pure pursuit guidance s.1) to catch not only a steady
but also a moving target. The idea of local obstacle avoidance
strategy s.2) is also inspired by biological examples such as
a cockroach encountering a wall [2].
The rules s.1), s.2) demand only minor perceptual capacity.
Access even to the distance to the obstacle is not needed:
it suffices to determine whether it is short or not, and be
aware of the sign of its time derivative. As for the target, the
vehicle has to access its relative bearing angle. Moreover, it
suffices that it is able only to recognize which quadrant of
its relative Cartesian frame hosts the target line-of-sight.
To address the issue of nonholonomic constraints, control
saturation, and under-actuation, we consider a vehicle of the
Dubins car type. It is capable of moving with a constant
speed along planar paths of upper limited curvature without
reversing the direction and is controlled by the upper limited
angular velocity. As a result, it is unable to slow down, stop,
or make an abrupt turn.
By reliance on the bearing-only data about the target, the
proposed approach is similar to the Pledge algorithm [4] and
Angulus algorithm [5]. Unlike ours, both of them assume access
to the absolute direction (e.g., by a compass), and the latter
employs not one but two angles in the convergence crite-
rion. The major distinction is that they assume the vehicle to
be able to trace the paths of unlimited curvature, in partic-
ular, broken curves and to move exactly along the obstacle
boundary. These assumptions are violated in the context of
this paper, which entails deficiency in the available proofs
of the convergence of these algorithms.
The extended introduction and discussion of the proposed
control law are given in the paper submitted by the authors
to the IFAC journal Automatica. This text basically contains
the proofs of the technical facts underlying the justification of
the convergence and performance of the proposed algorithm in
that paper, which were not included in it due to the length
limitations. To make the current text logically consistent,
we reproduce the problem statement and notations.
2 Problem Setup and the Navigation Strategy
We consider a planar under-actuated nonholonomic vehicle
of the Dubins car type. It travels with a constant speed v
without reversing direction and is controlled by the angular
velocity u limited by a given constant u. There also is a
steady point target T and a single steady obstacle D 6∋ T
in the plane, which is an arbitrarily shaped compact domain
whose boundary ∂D is a Jordan piece-wise analytical curve
without inner corners. Modulo smoothened approximation
of such corners, this assumption is typically satisfied by
all obstacles encountered in robotics, including continuous
mazes. The objective is to drive the vehicle to the target with
constantly respecting a given safety margin d(t) ≥ dsafe > 0.
Here d(t) is the distance to the obstacle
$$d(t) := \mathrm{dist}_D[r(t)], \qquad \mathrm{dist}_D[r] := \min_{r_* \in D} \|r_* - r\|, \qquad (1)$$
$\|\cdot\|$ is the Euclidean norm, and r(t) is the vehicle position.
This position is given by the abscissa x and ordinate y of
the vehicle in the world frame, whereas its orientation is
described by the angle θ from the abscissa axis to the robot
centerline. The kinematics of the considered vehicles are
classically described by the following equations:
$$\dot{x} = v\cos\theta, \quad \dot{y} = v\sin\theta, \quad \dot{\theta} = u \in [-\bar{u}, \bar{u}], \quad r(0) = r_0 \notin D, \quad \theta(0) = \theta_0. \qquad (2)$$
Thus the minimal turning radius of the vehicle is equal to
$$R = v/\bar{u}. \qquad (3)$$
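For intuition, the kinematics (2) can be integrated with a simple forward-Euler step as in the following sketch; the step size, default values, and variable names are illustrative and not part of the paper.

```python
import math

def step(x, y, theta, u, v=1.0, u_max=1.0, dt=0.01):
    """One Euler step of the Dubins-car model (2); u is clipped to [-u_max, u_max]."""
    u = max(-u_max, min(u_max, u))
    x += v * math.cos(theta) * dt
    y += v * math.sin(theta) * dt
    theta += u * dt
    return x, y, theta

# The minimal turning radius is R = v / u_max, cf. (3).
```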
The vehicle has access to the current distance d(t) to D
and the sign sgn ˙d(t) of its time-rate ˙d(t), which are acces-
sible only within the given sensor range: d ≤ drange, where
drange > dsafe. The vehicle also has access to the angle β
from its forward centerline ray to the target.
To specify the control strategy s.1), s.2), we introduce the
threshold dtrig < drange separating the ’short’ and ’long’ dis-
tances to the obstacle. Mathematically, the examined strat-
egy is given by the following concise formula:

u = u × { sgn β   if d > dtrig (mode A);   [ sgn β if ˙d > 0;  −σ if ˙d ≤ 0 ]   if d ≤ dtrig (mode B) }.     (4)

Here σ = ± is a constant controller parameter, which gives the turn direction, and ˙d ≥ 0 and ˙d < 0 are equivalent to the vehicle being oriented outwards and towards D, respectively. The switch A 7→ B occurs when d reduces to dtrig; the converse switch holds when d increases to dtrig. When mode B is activated, ˙d ≤ 0; if ˙d = 0, the 'turn' submode u := −σu is set up. Since the control law (4) is discontinuous, the solution of the closed-loop system is meant in the Filippov sense [6].

Remark 1 In (4), β accounts for not only the heading but also the sum of full turns performed by the target bearing.

In the basic version of the algorithm, the parameter σ is fixed. To find a target hidden deeply inside the maze, a modified version can be employed: whenever A 7→ B, the parameter σ is updated. The updated value is picked randomly and independently of the previous choices from {+, −}, with the value + being drawn with a fixed probability p ∈ (0, 1). This version is called the randomized control law.

To state the assumptions, we introduce the Frenet frame T (r∗), N (r∗) of ∂D at the point r∗ ∈ ∂D (T is the positively oriented unit tangent vector, N is the unit normal vector directed inwards D, the boundary is oriented so that when traveling on ∂D one has D to the left), κ(r∗) is the signed curvature (κ(r∗) < 0 on concavities), and Rκ(r∗) := |κ(r∗)|−1. Due to the absence of inner corners, any point r 6∈ D at a sufficiently small distance distD[r] < d⋆ from D does not belong to the focal locus of ∂D and distD[r] is attained at only one point [7]. The regular margin d⋆(D) > 0 of D is the supremum of such d⋆'s. So d⋆(D) = ∞ for convex domains; for non-convex D,

d⋆(D) ≤ RD := inf { Rκ(r) : r ∈ ∂D, κ(r) < 0 }.     (5)

(The infimum over the empty set is set to be +∞.)

Assumption 1 The vehicle is maneuverable enough: it is capable of a full turn without violation of the safety margin dsafe > R within the regularity margin of the maze 3R < d⋆(D), and moreover 4R < RD.

Assumption 2 The sensor range gives enough space to avoid collision with D after its detection: drange > 3R.

The parameters dtrig and dsafe are tuned so that

3R < dsafe + 2R < dtrig < d⋆(D), drange, RD − R.     (6)

Such a choice is possible thanks to Assumptions 1 and 2.

3 Main Results

Theorem 1 (i) With probability 1, the randomized control law drives the vehicle to the target T in finite time while always respecting the safety margin (i.e., there exists a time instant t∗ such that r(t∗) = T and distD[r(t)] ≥ dsafe ∀t ∈ [0, t∗]) whenever both the vehicle initial location r0 and the target are far enough from the obstacle and from each other:

distD[r0] > dtrig + 2R,   kr0 − Tk > 2R,   distD[T] > dtrig.     (7)

(ii) The basic control law drives the vehicle to the target in finite time while always respecting the safety margin whenever (7) holds and the vehicle initial location and the target lie far enough from the convex hull co D of the maze: distco D[T] > dtrig, distco D[r0] > dtrig.

In (7), distD[r0] > dtrig + 2R can be relaxed to distD[r0] > dtrig if the vehicle is initially directed to the target, i.e., β(0) = 0. In view of (3) and the freedom (6) in the choice of dsafe, dtrig, not only Assumptions 1, 2 but also the constraints (7) disappear (are boiled down to distD[r0] > 0, kr0 − Tk > 0, distD[T] > 0) as v → 0. In other words, the algorithm succeeds in any case if the cruise speed v is small enough.

The last assumption distco D[T] > dtrig from (ii) can be relaxed to cover some scenarios with the target inside the maze.
To specify this, we need some notations and definitions.
The d-equidistant curve C(d) of D is the locus of points r
at the distance distD[r] = d from D; the d-neighborhood
N (d) of D is the area bounded by C(d); [r1, r2] is the
straight line segment directed from r1 to r2.
Let r♦, r∗ ∈ C(dtrig) and (r♦, r∗)∩N (dtrig) = ∅. The points
r♦, r∗ divide C(dtrig) into two arcs. Being concatenated with
[r♦, r∗], each of them gives rise to a Jordan curve encircling
a bounded domain, one of which is the other united with
N (dtrig). The smaller domain is called the simple cave of
N (dtrig) with endpoints r♦, r∗. The location r is said to
be locked if it belongs to a simple cave of N (dtrig) whose
endpoints lie on a common ray centered at T. We remark
that if distco D[r] > dtrig, the location is unlocked.
Theorem 2 The basic control law drives the vehicle to the target in finite time while always respecting the safety margin whenever (7) holds and both the initial location of the vehicle and the target are unlocked.
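For intuition only, the switching logic of (4) together with the randomized update of σ can be sketched as follows; the readouts d, d_dot, beta are assumed to be supplied by the sensors described in Section 2, and all names are illustrative rather than part of the paper.

    import random

    def control(d, d_dot, beta, mode, sigma, d_trig, u_max, p=0.5):
        """One evaluation of the switching law (4); mode is 'A' or 'B', sigma is +1 or -1."""
        sgn = lambda z: (z > 0) - (z < 0)
        if mode == 'A' and d <= d_trig:
            mode = 'B'
            sigma = 1 if random.random() < p else -1   # randomized law: redraw sigma on A -> B
        elif mode == 'B' and d > d_trig:
            mode = 'A'
        if mode == 'A':
            u = u_max * sgn(beta)                      # pursuit guidance
        else:
            u = u_max * sgn(beta) if d_dot > 0 else -sigma * u_max   # obstacle avoidance / wall following
        return u, mode, sigma

In the basic law the line redrawing σ is simply omitted and σ keeps its pre-specified value.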
Now we disclose the tactical behavior implied by s.1), s.2)
and show that it includes wall following in a sliding mode.
In doing so, we focus on a particular avoidance maneuver
(AM), i.e., the motion within uninterrupted mode B.
Let ρ(s) be the natural parametric representation of ∂D,
where s is the curvilinear abscissa. This abscissa is cyclic: s
and s + L encode a common point, where L is the perimeter
of ∂D. We notationally identify s and ρ(s). For any r 6∈ D
within the regular margin distD[r] < d⋆(D), the symbol
s(r) stands for the boundary point closest to r, and s(t) :=
s[r(t)], where r(t) is the vehicle location at time t.
To simplify the matters, we first show that ∂D can be assumed C1-smooth without any loss of generality. Indeed, if 0 < d < d⋆(D), the equidistant curve C(d) is C1-smooth and piece-wise C2-smooth [7]; its parametric representation, orientation, and curvature are given by

s 7→ ρ(s) − dN (s),   κC(d)(s) = κ(s) / (1 + κ(s)d).     (8)

The second formula holds if s is not a corner point of ∂D; such points contribute circular arcs of the radius d into C(d). So by picking δ > 0 small enough, expanding D to N (δ), and correcting d := d − δ for d := d, dsafe, dtrig, drange, we keep all assumptions true and do not alter the operation of the closed-loop system. Hence ∂D can be assumed C1-smooth.

Proposition 3 Let for the vehicle driven by the control law (4), obstacle avoidance be started with zero target bearing β(t∗) = 0 at t = t∗. Then the following claims hold:

(i) There exists τ ≥ t∗ such that the vehicle moves with the maximal steering angle u ≡ −σu and the distance to the obstacle decreases ˙d ≤ 0 until τ , ∗ and at t = τ , the sliding motion along the equidistant curve C{distD[r(τ )]} † is started with σ ˙s > 0 and β ˙s > 0;
(ii) SMEC holds until β arrives at zero at a time when κ[s(t) + σ≈0] > 0, which sooner or later holds and after which a straight move to the target ‡ is commenced;
(iii) During SMT, the vehicle first does not approach the obstacle ˙d ≥ 0 and either the triggering threshold dtrig is ultimately trespassed and so mode B is switched off, or a situation is encountered where ˙d(t) = 0 and κ[s(t) + σ≈0] < 0. When it is encountered, the vehicle starts SMEC related to the current distance;
(iv) There may be several transitions from SMEC to SMT and vice versa, all obeying the rules from (ii), (iii);
(v) The number of transitions is finite and finally the vehicle does trespass the triggering threshold dtrig, thus terminating the considered avoidance maneuver;
(vi) Except for the initial turn described in (i), the vehicle maintains a definite direction of bypassing the obstacle: ˙s is constantly positive if σ = + (counterclockwise bypass) and negative if σ = − (clockwise bypass).

By (4), AM is commenced with ˙d(t∗) ≤ 0. The next remark shows that if ˙d(t∗) = 0, IT may have the zero duration.

Remark 2 If ˙d(t∗) = 0, IT has the zero duration if and only if σ ˙s(t∗) > 0. Then the following claims are true:

(1) If κ[s(t∗) + σ ·≈0] < 0, SMEC is immediately started;
(2) If κ[s(t∗) + σ ·≈0] ≥ 0, the duration of SMEC is zero, and SMT is continued.

The assumption β(t∗) = 0 of Proposition 3 holds for the first AM due to (7). Indeed, since distD[r0] > dtrig + 2R, the pursuit guidance law turns the vehicle towards the target earlier than the threshold dtrig for activation of AM is encountered. It also holds for all subsequent AM's since any AM ends in course of SMT by Proposition 3.

4 Technical facts underlying the proofs of Proposition 3 and Remark 2

Writing f (η∗ ±≈0) > 0 means that there exists small enough ∆ > 0 such that f (η) > 0 if 0 < ±(η − η∗) < ∆. Similar notations, e.g., f (η∗ ±≈0) ≤ 0, are defined likewise.
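A quick numerical sanity check of formula (8) above (added here for illustration): for a disc of radius R0 the boundary curvature is κ ≡ 1/R0, and the d-equidistant curve is a circle of radius R0 + d, so (8) must return 1/(R0 + d).

    R0, d = 2.0, 0.5
    kappa = 1.0 / R0                       # curvature of the disc boundary
    kappa_C = kappa / (1.0 + kappa * d)    # formula (8)
    assert abs(kappa_C - 1.0 / (R0 + d)) < 1e-12
    print(kappa_C)                         # 0.4 = 1/(R0 + d)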
4.1 Geometrical Preliminaries
∗ This part of AM is called the initial turn and abbreviated IT.
† This is abbreviated SMEC and means following the wall at the fixed distance distD[r(τ )], which is set up at the start of SMEC.
‡ SMT, which is sliding motion over the surface β = 0.

Fig. 1. (a) Definition of λ and ζ; (b) Behavior during IT.

We assume that the world frame (WF) is centered at the target T. Let C 6∋ T be a regular piece-wise smooth di-
rected curve with natural parametric representation ̺(s), s ∈
[s−, s+]. The turning angle of C around a point p 6∈ C is de-
noted by ∢pC, and ∢TANG [C] := ∢0T , where T (s), N (s)
is the Frenet frame of C at s. § Let λ(s), ζ(s) and ψ(s) stand
for the Cartesian coordinates and polar angle of −̺(s) in
this frame (see Fig.1(a)), respectively, and let ′ denote dif-
ferentiation with respect to s. The polar angle of ̺(s) in
WF and the curvature of C at s are denoted by ϕ(s) and
κ(s), respectively. To indicate the curve C, the symbols
T, N, λ, ζ, κ, etc. may be supplied with the lower index C .
The directed curve traced as s runs from s1 to s2 is denoted by C|s1 ±→s2 , where the specifier ± is used for closed curves. The superscript a means that the lemma is equipped with the number under which its formulation is given in the basic version of the paper.
Lemma 4a The following relations hold whenever T 6∈ C:
λ′ = −1 + κζ,   ζ′ = −κλ,   ψ′ = −κ + ζ(λ2 + ζ2)−1,   ϕ′ = ζ(λ2 + ζ2)−1,     (9)

r := col (λ, ζ) 6= 0,   ∢0r = ∢TC − ∢TANG [C] .     (10)
PROOF. Differentiation of the equation T = ̺ + λT + ζN
and the Frenet-Serret formulas T ′ = κN, N ′ = −κT [7]
yield that 0 = T + λ′T + λκN + ζ′N − ζκT. Equating the
cumulative coefficients in this linear combination of T and
N to zero gives the first two equations in (9). By virtue of
them, the third and fourth ones follow from [7]

ψ′ = (ζ′λ − λ′ζ) / (λ2 + ζ2),   ϕ′ = (y′x − x′y) / (x2 + y2).     (11)
The first relation in (10) holds since T 6∈ C. Let η(s) :=
∢TANG[Ts−→s−0] + η0, where η0 is the polar angle of
T (s−). The matrix Φη(s) of rotation through η(s) trans-
forms the world frame into the Frenet one, and ̺(s) =
h(s) col [cos ϕ(s), sin ϕ(s)]. So r(s) = −Φ−η(s)̺(s) =
h(s) col {[cos[π + ϕ(s) − η(s)], sin[π + ϕ(s) − η(s)]}. Thus
π + ϕ(s) − η(s) is the piece-wise continuous polar angle of
r(s) that jumps according to the convention concerned by
footnote §. This trivially implies (10).
•
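The first two relations in (9) are also easy to verify numerically; the fragment below (an added illustration with an arbitrary target position and a circular curve C) checks λ′ = −1 + κζ and ζ′ = −κλ by central differences.

    import numpy as np

    R0, target = 2.0, np.array([3.0, 1.0])
    kappa = 1.0 / R0                                   # T' = kappa * N with N the inward normal

    def lam_zeta(s):
        rho = R0 * np.array([np.cos(s / R0), np.sin(s / R0)])   # point of C
        T = np.array([-np.sin(s / R0), np.cos(s / R0)])          # unit tangent
        N = -np.array([np.cos(s / R0), np.sin(s / R0)])          # inward unit normal
        r = target - rho                                          # -rho(s) in a frame centred at the target
        return np.dot(r, T), np.dot(r, N)

    s, h = 0.7, 1e-6
    lam, zeta = lam_zeta(s)
    dlam = (lam_zeta(s + h)[0] - lam_zeta(s - h)[0]) / (2 * h)
    dzeta = (lam_zeta(s + h)[1] - lam_zeta(s - h)[1]) / (2 * h)
    print(abs(dlam - (-1 + kappa * zeta)) < 1e-5)      # first relation in (9)
    print(abs(dzeta - (-kappa * lam)) < 1e-5)          # second relation in (9)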
Fig. 2. (a) Behavior during IT; (b) Singular points.
Corollary 1 Let ζ(s∗) = 0 and ς = ±. Then
ςζ[s∗ + ς ≈0] sgn λ[s∗] < 0 if κ[s∗ + ς ≈0] > 0,   ςζ[s∗ + ς ≈0] sgn λ[s∗] > 0 if κ[s∗ + ς ≈0] < 0.     (12)

By (6) and the last inequality in (7), Lemma 4 yields

∢0rC(d∗) = −2π   for d∗ ∈ [0, dtrig].     (13)
Corollary 2 There exist F and d# > dtrig such that when-
ever |d| ≤ d#, the set S(d) := {s ∈ ∂D : ζ∂D(s) = d} has
no more than F connected components.
PROOF. By the last inequality in (7), ∃d# : dtrig < d# < distD[T] ≤ √(ζ(s)2 + λ(s)2). Then s ∈ S(d) ∧ |d| ≤ d# ⇒ |λ(s)| ≥ δ := √(distD[T]2 − d#2) > 0. Since the domain D is compact, |λ′(s)| ≤ M < ∞ ∀s. So whenever s ∈ S(d) and |d| ≤ d#, the function λ(·) does not change its sign in the δM −1-neighborhood V (s) of s.
Since ∂D is piece-wise analytical, each set {s : ±κ(s) > 0} and {s : κ(s) = 0} has finitely many connected components ∂±i and ∂0ν , respectively. By the foregoing and (9), any intersection V (s) ∩ ∂±i , s ∈ S(d), |d| ≤ d#, contains only one point s. Hence the entire arc ∂±i of the length |∂±i| contains no more than δ−1M |∂±i| + 1 such points. It remains to note that S(d) covers any ∂0ν such that ∂0ν ∩ S(d) 6= ∅. •

Observation 1 SMEC with σ = ± ends when s ∈ S0 := {s ∈ ∂D : −d# < ζ∂D(s) < 0, ±λ∂D(s) > 0}. This set has no more than F connected components, called ±arcs.

The second claim holds since λ′ < 0 on S0 due to (6), (9).
4.2 Technical Facts
§ At the corner points, the count of ∢0T progresses abruptly
according to the conventional rules [7].
Lemma 5 The following two statements hold:
(i) In the domain d ≤ dtrig ∧ ˙d > 0 ∨ d > dtrig, the surface β = 0 is sliding, with the equivalent control [6] u ≡ 0;
(ii) The surface ˙d = 0 is sliding in the domain

dtrig − 2R ≤ d < dtrig,   ˙sβ > 0,   σ ˙s > 0.     (14)
PROOF. (i) Let h be the distance from the vehicle to T. Due to (2), ˙h = −v cos β, ˙β = h−1v sin β − u. So as the state approaches the surface β = 0, we have ˙β (4)→ −u sgn β, which implies the first claim.
(ii) Let α be the polar angle of the vehicle velocity in the frame T∂D[s(t)], N∂D[s(t)]. By (5), (6), and (14), 1 + κ[s(t)]d(t) > 0, and as is shown in e.g., [8],

˙s = v cos α / (1 + κ(s)d),   ˙d = −v sin α,   ˙α = −κ(s) ˙s + u.     (15)

As the state approaches a point where ˙d = 0 and (14) holds,

sin α → 0,   cos α → sgn ˙s,   ¨d → −v2 [ (u/v) sgn ˙s − κ/(1 + κd) ].     (16)

If the state remains at a definite side of the surface ˙d = 0, (3) and (4) yield that

¨d ˙d>0−−→ ¨d+ := −v2 [ (1/R) sgn(β ˙s) − κ/(1 + κd) ] (14)= −v2 [ 1/R − κ/(1 + κd) ],
¨d ˙d<0−−→ ¨d− := v2 [ (σ/R) sgn ˙s + κ/(1 + κd) ] (14)= v2 [ 1/R + κ/(1 + κd) ].     (17)

The proof is completed by observing that by (6), (14),

¨d+ = −v2 (1 + κd − κR) / (R(1 + κd)) < 0   (since 1 + κd > 0 and d > dsafe > R),
¨d− = v2 |κ| [Rκ + (d + R) sgn κ] / (R(1 + κd)) > 0.     (18)
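The sign conclusions in (18) can also be confirmed numerically on a small grid of parameters satisfying d > dsafe > R and 1 + κd > 0 (an added check with illustrative values only):

    def dd_plus(v, R, kappa, d):
        return -v**2 * (1.0 / R - kappa / (1.0 + kappa * d))   # cf. (17)

    def dd_minus(v, R, kappa, d):
        return v**2 * (1.0 / R + kappa / (1.0 + kappa * d))

    v, R = 1.0, 0.5
    for kappa in (0.8, 0.0, -0.3):        # convex, straight, and concave boundary pieces
        for d in (0.6, 1.0):
            assert dd_plus(v, R, kappa, d) < 0 and dd_minus(v, R, kappa, d) > 0
    print("signs of (18) confirmed on the sample grid")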
The subsequent proofs are focused on σ = +; the case
σ = − is considered likewise.
Lemma 6 If
˙d(t∗) < 0, claim (i) in Proposition 3 is true.
PROOF. Let σ = +. Due to (4), initially u ≡ −u. Let [t∗, τ ] denote the maximal interval on which u ≡ −u. For t ∈ (t∗, τ ), the vehicle moves clockwise along a circle Cin of the radius R, and so by Remark 1, β(t) > 0 and

d(t) ≥ distD[r(0)] − kr − r(0)k ≥ dtrig − 2R > dsafe > R > 0,     (19)

where kr − r(0)k ≤ 2R and the third inequality is due to (6). Moreover,

˙α (15)= −v [ κ cos α/(1 + κd) + u/v ] (3)≤ −v [ 1/R − |κ|/(1 + κd) ] = −v [ 1/R − 1/(Rκ + d sgn κ) ].

While d ≤ dtrig (in particular, while ˙d ≤ 0) the expression in the last square brackets is positive. This is true by (19) if κ ≥ 0; otherwise, since Rκ > R + dtrig by (6). So ˙α ≤ −δ < 0, i.e., the vector col (cos α, sin α) rotates clockwise. Here the signs of the first and second components equal those of ˙s and − ˙d, respectively, by (15) and so col ( ˙s, ˙d) evolves as is illustrated in Fig. 1(b). This and the conditions (14) for the sliding motion complete the proof.
•
More can be derived from the above proof.
Lemma 9a Let s∗ and sb be the values of the continuously
evolving s at the start and end of IT, respectively. During
IT, σ ˙s ≥ 0 if σ ˙s(t∗) ≥ 0, and ˙s changes the sign once
otherwise. In any case, s runs from s∗ to sb in the direction
σ during a last phase of IT.
PROOF. Let σ = +. The map r 7→ (s, d) is the orientation-
reversing immersion on the disc Din encircled by Cin. So it
transforms any negatively oriented circle C ⊂ Din concen-
tric with Cin into a curve ξ with ∢TANG [ξ] = 2π. Then the
argument from the concluding part of the proof of Lemma 6
shows that as the robot once runs over Cin in the negative
direction, the vector col ( ˙s, ˙d) intersects the half-axes of the
frame in the order associated with counter clockwise rota-
tion, each only once. This immediately implies the claim
given by the first sentence in the conclusion of the lemma.
If ˙s(t∗) ≥ 0, this claim yields that sb−s∗ ≥ 0. Let ˙s(t∗) < 0.
As the robot once runs over Cin in the negative direction,
˙s > 0 and ˙d ≤ 0 when it passes the point B from Fig. 2(a),
which corresponds to the second passage of s = s∗. Due
to the order in which col ( ˙s, ˙d) intersects the half-axes, this
combination of signs is possible only before ˙d vanishes for
the first time, i.e., within IT. Thus the second occurrence of
s = s∗ holds within IT. The proof is completed by noting
that ˙s > 0 after this by the first claim of the lemma.
•
We proceed to the case where some of the vector fields is
tangential to the discontinuity surface ˙d = 0. Since this
may undermine uniqueness of the solution (its existence is
still guaranteed), the arguments become much more sophis-
ticated. The first lemma establishes a required technical fact.
To state it, we note that whenever d := distD[r] < R⋆(D),
the system state (x, y, θ) is given by s, d, θ and along with
( ˙d, ˙s) 6= (0, 0), uniquely determines β ∈ (−π, π).
and ti → t∗ as i → ∞, a proper decrease of every ti yields
˙d(ti) < 0 since d(t∗) = dtrig. However then
in addition that
˙d(t) < 0 for t ≥ ti, t ≈ t∗ by (4), (22) and thus ˙d(t) <
0, d(t) < dtrig for t > t∗, t ≈ t∗, i.e., (i) holds in violation
of the initial assumption. It follows that d(t∗ +≈0) ≥ dtrig.
Lemma 7 If λC(d†)(s∗) 6= 0 for d† ∈ [0, dtrig], there exists
δ > 0 such that whenever s∗ ≤ s0 < s < s∗ + δ and
|d∗−d†| < δ, the following entailments hold with ς := sgn ˙s:
˙s 6= 0, ˙d ≥ 0, d ≥ d∗, ζC(d∗)(s0) ≥ 0, κ(s∗ + ς ≈0) < 0, ˙sλC(d†)(s0) > 0   ⇒   ˙sβ > 0;     (20)

˙s 6= 0, ˙d ≤ 0, d ≤ d∗, ζC(d∗)(s0) ≤ 0, κ(s∗ + ς ≈0) ≥ 0, ˙sλC(d†)(s0) > 0   ⇒   ˙sβ ≤ 0.     (21)
In (21), ˙sβ < 0 if ζC(d∗)(s0) < 0 or κ 6≡ 0 on ∂Ds0→s.
PROOF. We pick δ > 0 so that λC(d∗)(s) and κ(s) do
not change the sign as s and d∗ run over (s∗, s∗ + δ) and
(d† −δ, d† +δ), respectively. By (8), the curvature κC(d∗)(s)
does not change its sign either, which equals sgnκ(s∗+ς ≈0).
If the conditions from (20) hold and ς = +, application of the
second equation from (9) to C(d∗) yields that ζC(d∗)(s) > 0.
So the target polar angle in the s-related Frenet frame of
C(d∗) belongs to (0, π/2). Transformation of this frame into
that of the vehicle path consists in a move of the origin in
the negative direction along the ζ-axis (since d ≥ d∗) and a
clockwise rotation of the axes (since ˙d > 0, ˙s > 0). Since
both operations increase the target bearing angle, β > 0.
Formula (20) with ς = − and (21) are established likewise.
•
Lemma 7a Let dsafe ≤ d∗ := d(t∗) ≤ dtrig and ˙d(t∗) = 0 at a time t∗ within mode B. Then for t > t∗, t ≈ t∗, the robot
i) performs the turn with u ≡ −σu if σ ˙s(t∗) < 0, d(t∗) =
dtrig, and β(t∗) = 0;
ii) undergoes SMEC if σ ˙s(t∗) > 0 and either (1) σβ(t∗) >
0 or (2) β(t∗) = 0 and κ[s(t∗) + sgn ˙s(t∗)≈0] < 0;
iii) moves straight to the target if β(t∗) = 0, σ ˙s(t∗) >
0, κ[s(t∗) + sgn ˙s(t∗)≈0] ≥ 0.
PROOF. Let σ = +. i) As t → t∗, (4) and (16) yield that

¨d|u=−u → v2 [ κ/(1 + κd∗) − 1/R ] = −v2 (1 + κ[d∗ − R]) / (R(1 + κd∗)) < 0,     (22)

where κ := κ[s(t∗) ± 0] and the inequality holds since d∗ ≥ dsafe > R due to (6).
Now suppose that there is a sequence {ti} such that ti >
t∗, d(ti) = dtrig ∀i, ti → t∗ as i → ∞. Then ˙d(ti) = 0
and so β(ti) < 0 due to (20). By continuity, β < 0 in
a vicinity of the system state at t = ti. Then any option
from (4) yields u = −u and so u(t) ≡ −u ∀t ≈ ti by
the definition of Filippov's solution. Hence d(ti) = dtrig ∧ ˙d(ti) = 0 (22)⇒ d(ti +≈0) < dtrig, in violation of the foregoing.
So d > dtrig and u = sgnβ for t > t∗, t ≈ t∗ by (4), and by
Lemma 5, SMT is continued. Then the last relation in (16)
(with u := 0) and κ[s(t∗) −≈0] < 0 imply the contradiction
d(t∗ +≈0) < dtrig to the foregoing, which proves i).
Let κ[s(t∗) −≈0] ≥ 0. So far as the controller is first proba-
tionally set to the submode related with ˙d < 0, this submode
will be maintained longer by (22).
ii.1) If d(t∗) < dtrig, the claim is true by Lemma 5. Let
d(t∗) = dtrig. If there is a sequence {ti} such that ti >
t∗, d(ti) < dtrig ∀i and ti → t∗ as i → ∞, a proper decrease
˙d(ti) < 0. Let τi be the
of every ti yields in addition that
minimal τ ∈ [t∗, ti] such that d(t) < dtrig and ˙d(t) < 0 for
t ∈ (τ, ti]. For such t, u ≡ −u by (4) and so ¨d > 0 by
(17) and (18). So ˙d(τi) < ˙d(ti) < 0, τi > t∗, and d(τi) =
dtrig, otherwise τi is not the minimal τ . Thus at time τi, the
assumptions of Lemma 6 hold except for β(τi) = 0. In the
proof of this lemma, this relation was used only to justify that
β > 0, which is now true by assumption and the continuity
argument. So by Lemmas 5 and 6, sliding motion along an
equidistant curve C(d†) with d† < dtrig is commenced at the
time t > τi when ˙d(t) = 0 and maintained while β > 0 and ˙s > 0, in violation of d(τi) = dtrig ∀i ∧ τi → t∗ as i → ∞. This
contradiction proves that d(t∗ +≈0) ≥ 0.
Now suppose that there exists a sequence {ti} such that
ti > t∗, d(ti) > dtrig ∀i and ti → t∗ as i → ∞. Since
d(t∗) = dtrig, a proper perturbation of every ti yields in addition that ˙d(ti) > 0. Let τi be the minimal τ ∈ [t∗, ti] such
that d(t) > dtrig for t ∈ (τ, ti]. For such t, the continuity
argument gives β > 0, (4) yields u ≡ u and so ¨d < 0
by (17) and (18). Hence ˙d(τi) > 0, τi > t∗, d(τi) = dtrig
and so d(τi −≈ 0) < 0, in violation of the foregoing. This
contradiction proves that d(t∗ +≈0) ≡ 0 indeed.
ii.2) We first assume that d∗ < dtrig. Due to (17) and (18)
¨d|u=−u > 0 and
¨d|u=u < 0 for
t ≈ t∗.
(23)
Let i) fail to be true and κ[s(t∗) −≈ 0] < 0. If there exists
an infinite sequence {ti} such that ti > t∗, d(ti) < dtrig ∀i
So it is easy to see that ˙d(t∗ +≈0) ≥ 0 and d(t∗ +≈0) ≥ d∗.
Suppose that ˙d(t∗ +≈0) 6≡ 0 and so d(t∗ +≈0) > d∗. In any right-vicinity (t∗, t∗ + δ), there is τ such that ˙d(τ ) > 0. For any such τ that lies sufficiently close to t∗, (20)
yields β(τ ) > 0. So u = u by (4) and ¨d(τ ) < 0 by (23).
Hence the inequality ˙d(t) > 0 is not only maintained but
also enhanced as t decreases from τ to t∗, in violation of
the assumption ˙d(t∗) = 0 of the lemma. This contradiction
shows that ˙d(t∗ +≈0) ≡ 0, thus completing the proof of ii).
It remains to consider the case where d∗ = dtrig. By the
arguments from the previous paragraph, it suffices to show
˙d(t∗ +≈ 0) ≥ 0 and d(t∗ +≈ 0) ≥ dtrig. Suppose that
that
d(t∗ +≈ 0) 6≥ dtrig, i.e., there exists a sequence {ti} such
that ti > t∗, d(ti) < dtrig ∀i and ti → t∗ as i → ∞. Since
d(t∗) = dtrig, a proper decrease of every ti gives ˙d(ti) < 0 in
addition. By (4), (23), the inequality ˙d(t) < 0 is maintained
and enhanced as t decreases from ti, remaining in the domain
{t : d(t) < dtrig}. Since ˙d(t∗) = 0, there is τi ∈ (t∗, ti)
such that d(τi) = dtrig and ˙d(t) < 0 ∀t ∈ [τi, ti). Hence
d(τi −≈ 0) > dtrig and if i is large enough, there is θi >
ti such that d(θi) = dtrig and d(t) < dtrig ∀t ∈ (τi, θi).
Furthermore, there is si ∈ (τi, θi) such that ˙d(t) < 0 ∀t ∈ (τi, si), ˙d(si) = 0, and ˙d(t) ≥ 0 ∀t ∈ [si, θi]. Then β(θi) > 0
by (20). We note that β(t∗) = 0 ⇒ ζP(t∗) = 0 for the
vehicle path P and so ζP(t) → 0 as t → t∗. This and (9)
(applied to P) imply that the sign of ˙β is determined by the
sign of the path curvature:
u = ±u ⇒ ± ˙β < 0   ∀t ≈ t∗.     (24)
Suppose that ∃τ∗ ∈ [τi, si) : β(τ∗) ≥ 0. Since u(t) =
−u ∀t ∈ (τi, si), we see that β(si) > 0, ˙d(si) = 0, ds :=
d(si). By Lemma 5, sliding motion along the ds-equidistant
curve is commenced at t = si and maintained while β > 0,
whereas β > 0 until θi (if i is large enough) due to (20).
However, this is impossible since ds < dtrig and d(θi) =
dtrig. This contradiction proves that β(t) < 0 ∀t ∈ [τi, si).
The same argument and the established validity of ii.2) for
d∗ := ds < dtrig show that β(si) < 0. Since β(θi) > 0,
there exists ci ∈ (si, θi) such that β(ci) = 0 and β(t) >
0 ∀t ∈ (ci, θi]. If ˙d(c) = 0 for some c ∈ (ci, θi), Lemma 5
assures that sliding motion along the d(c)-equidistant curve
is started at t = c and is not terminated until t = θi, in
violation of d(θ) = dtrig. For any t ∈ (ci, θi), we thus have
˙d(t) > 0. Hence u(t) = u by (4),
˙β < 0 by (24), and so
β(ci) = 0 ⇒ β(θi) < 0, in violation of the above inequality
β(θi) > 0. This contradiction proves that d(t∗ +≈0) ≥ dtrig.
Now suppose that ˙d(t∗ +≈0) 6≥ 0. Then there is a sequence
{ti} such that ti > t∗, ˙d(ti) > 0 ∀i and ti → t∗ as i → ∞; a
proper increase of every ti gives d(ti) > dtrig in addition. By
(20), d(t) > dtrig ∧ ˙d(t) > 0 ⇒ β(t) > 0 for t ≈ t∗ and so
u(t) = u by (4) and ¨d(t) < 0 by (23). So as t decreases from
ti to t∗, the derivative ˙d(t) > 0 increases while d > dtrig,
in violation of the implication d(t) = dtrig ⇒ ˙d(t) = 0 for
t ∈ [t∗, ti]. This contradiction completes the proof.
iii) Were there a sequence {ti}∞i=1 such that ˙d(ti) > 0, β(ti) > 0 ∀i and ti → t∗ + 0 as i → ∞, (4), (23),
and (24) imply that as t decreases from ti to t∗ for large
enough i, the inequalities ˙d(t) > 0, β(t) > 0 would be
preserved, in violation of ˙d(t∗) = 0, β(t∗) = 0. It follows
that ˙d(t) > 0 ⇒ β(t) ≤ 0 for t ≈ t∗, t > t∗. Now assume existence of a sequence such that ˙d(ti) > 0, β(ti) ≤ 0 ∀i and ti → t∗ + 0 as i → ∞. For large i such
that β(ti) < 0, (4)∧(23) ⇒ u(t) = −u, and ˙d(t) increases
and so remains positive as t grows from ti until β = 0. By
(24), u−1|β(ti)| time units later the vehicle becomes headed
to the target, which is trivially true if β(ti) = 0. This and
(i) of Lemma 5 imply that then the sliding motion along
the surface β = 0 is commenced. It is maintained while
κ[s(t)] ≥ 0. Since ti → t∗ and β(ti) → β(t∗) = 0 as
i → ∞, this motion occurs for t > t∗, i.e., iii) does hold.
It remains to examine the case where ˙d(t∗ +≈0) ≤ 0 and so
d(t∗ +≈ 0) ≤ d∗. Suppose first that either ˙d(t∗ +≈ 0) 6≡ 0
or κ[s(t∗) +≈ 0] 6≡ 0. Then β(t∗ +≈ 0) < 0 by (21) and
u = −u at any side of the discontinuity surface ˙d = 0 by
(4). Hence u(t∗ +≈ 0) ≡ −u, which yields ˙d(t∗ + 0) > 0
by (23), in violation of ˙d(t∗ + 0) = 0. This contradiction
proves that ˙d(t∗ +≈0) ≡ 0, κ[s(t∗) +≈0] ≡ 0. Then SMEC
and SMT are initially the same, and iii) does hold.
•
Remark 3 The times of switches between the modes of the
discontinuous control law (4) do not accumulate.
To prove this, we first note that the projection of any vehicle
position r within mode B onto ∂D is well defined due to
(9). Let s−
i be its values at the start and end of the
ith occurrence of the mode, respectively. By Lemma 9 and
(vi) of Proposition 3, s monotonically sweeps an arc γi of
i during the concluding part of B.
∂D with the ends s−
i and s+
i , s+
Definition 1 The vehicle path or its part is said to be single
if the interiors of the involved arcs γi are pairwise disjoint
and in the case of only one arc, do not cover ∂D.
Let P and Q be the numbers of the connected components
of Sκ := {s : κ(s) < 0} and Sζ := {s : ζ∂D(s) = 0},
respectively. They are finite due to Corollary 2.
Lemma 9 Any single path accommodates no more than
(P + 1)(Q + 2) SMT’s.
PROOF. As was shown in the proof of (v) in of Proposi-
tion 3, the number of SMT’s within a common mode B does
not exceed P + 1. SMT between the ith and (i + 1)th occur-
rences of B starts at a position s† ∈ γi = [s−
i ] where
i , s+
7
i+1 where
i+1) ≥ 0. Hence any arc γi, except for the first and
i and
i of Sζ and {s : ζ∂D(s) < 0}, respectively, such that the
i . Hence
i′ ∀i 6= i′, and so the total number of the arcs γi
ζ∂D(s†) = −d < 0 and ends at the position s−
ζ∂D(s−
last ones, intersects adjacent connected components Cc=
Cc<
left end-point of Cc=
i
Cc=
i
does not exceed Q + 2, which competes the proof.
is the right end-point of Cc<
6= Cc=
•
Proof of Remark 3. Suppose to the contrary that the times
ti when σ is updated accumulate, i.e., ti < ti+1 → t∗ <
∞ as i → ∞. At t = ti, a SMT is terminated, and so
d(ti) = dtrig, ˙d(ti) ≤ 0, β(ti) = 0. During the subsequent
AM, d ≤ dtrig. At such distances, (15) implies that | ¨d| ≤
Md, |¨s| ≤ Ms, where Md, Ms > 0 do not depend on the
system state. Since IT ends with ˙d = 0, this AM lasts no less
d | ˙d(ti)| time units. Hence ˙d(ti) → 0 as i → ∞.
than M −1
This and (15) imply that ˙s(ti) − vsgn ˙s(ti) → 0 as i → ∞.
So far as IT lasts no less than | ˙s(ti)|/Ms time units if
˙s is reversed during IT, the sign of ˙s(t) is the same for
ti < t < t∗ and large enough i. So the related part of the
path is single. By Lemma 9, this part can accommodate
only a finite number of SMT’s, in violation of the initial
hypothesis. This contradiction completes the proof.
s
5 Proof of (ii) in Theorem 1
This claim is identical to Remark 4a from the basic paper.
We first alter the control strategy by replacement of the ran-
dom machinery of choosing the turn direction σ at switches
A 7→ B by a deterministic rule. Then we show that the al-
tered strategy achieves the control objective by making no
more than N switches, where N does not depend on the ini-
tial state of the robot. However, this strategy cannot be im-
plemented since it uses unavailable data. The proof is com-
pleted by showing that with probability 1, the initial random-
ized control law sooner or later gives rise to N successive
switches identical to those generated by the altered strategy.
We introduce the control law A that is the replica of (4)
except for the rule to update σ when A 7→ B. Now for the
first such switch, σ is set to an arbitrarily pre-specified value.
After any subsequent occurrence A† of this mode,
σ := σA† if CA† does not contain the target;   σ := −σA† if CA† contains the target.     (25)
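In code form, the deterministic rule (25) is a one-liner; the predicate telling whether CA† contains the target is an assumed geometric helper, not specified here.

    def update_sigma(sigma, cave_contains_target):
        """Rule (25): keep the bypass direction unless the cave cut off by the last
        mode-A segment contains the target, in which case reverse it."""
        return -sigma if cave_contains_target else sigma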
Proposition 10 Under the law A, the target is reached in finite time, with no more than N switches A 7→ B, where N does not depend on the vehicle initial state.
The next two subsections are devoted to the proof of Propo-
sition 10. In doing so, the idea to retrace the arguments jus-
tifying global convergence of the algorithms like the Pledge
one [4] that deal with unconstrained motion of an abstract
point is troubled by two problems. Firstly, this idea assumes
that analysis can be boiled down to study of a point moving
according to self-contained rules coherent in nature with the
above algorithms. i.e., those like ’move along the bound-
ary’, ’when hitting the boundary, turn left’, etc. However,
this is hardly possible, at least in full, since the vehicle be-
havior essentially depends on its distance from the boundary.
For example, depending on this distance at the end of mode
B, the vehicle afterwards may or may not collide with a
forward-horizon cusp of the obstacle. Secondly, the Pledge
algorithm and the likes are maze-escaping strategies; they
do not find the target inside a labyrinth when started outside
it. Novel arguments and techniques are required to justify
the success of the proposed algorithm in this situation.
In what follows, we only partly reduce analysis of the vehicle
motion to that of a kinematically controlled abstract point.
This reduction concerns only special parts of the vehicle path
and is not extended on the entire trajectory. The obstacle
to be avoided by the point is introduced a posteriori with
regard to the distance of the real path from the real obstacle.
To justify the convergence of the abstract point to the target,
we develop a novel technique based on induction argument.
5.1 Deterministic Algorithm and its Properties
We start with study of kinematically controlled point.
The symbol [r1, r2] stands for the straight line segment di-
rected from r1 to r2; γ1 ⋆ γ2 is the concatenation of directed
curves γ1, γ2 such that γ1 ends at the origin of γ2.
Let an occurrence A† of mode A hold between two modes
B and let it start at r♦ = r(t♦) and end at r∗ = r(t∗). Due
to (6), distD[r∗] = distD[r♦] = dtrig are attained at unique
boundary points s♦ and s∗, respectively. They divide C into
two arcs. Being concatenated with η := [s∗, r∗] ⋆ [r∗, r♦] ⋆
[r♦, s♦], each of them gives rise to a Jordan curve encircling
a bounded domain, one of which is the other united with D.
The smaller domain is denoted CA† ; it is bounded by η and
one of the above arcs γA† . Let σA† = ± be the direction
(on ∂D) of the walk from s♦ to s∗ along γA† .
5.2 The Symbolic Path and its Properties
In this subsection, ’ray’ means ’ray emitted from the target’,
and we consider a domain D satisfying the following.
Assumption 3 The boundary C := ∂D consists of finitely
many (maybe, zero) straight line segments and the remainder
on which the curvature vanishes no more than finitely many
times. The domain D does not contain the target.
We also consider a point r moving in the plane according
to the following rules:
r.1) The point moves outside the interior of D;
r.2) Whenever r 6∈ D, it moves to T in a straight line;
r.3) Whenever r hits ∂D, it proceeds with monotonic mo-
tion along the boundary, counting the angle β;
r.4) This motion lasts until β = 0 and new SMT is possible,
then SMT is commenced;
r.5) The point halts as soon as it arrives at the target.
The possibility from r.4) means that D does not obstruct
the initial part of SMT. When passing the corner points of
∂D, the count of β obeys (10) and the conventional rules
adopted for turning angles of the tangential vector fields
[7], and is assumed to instantaneously, continuously, and
monotonically run between the one-sided limit values. The
possibility from r.4) may appear within this interval.
To specify the turn direction in r.3), we need some con-
structions. Let the points s± ∈ C lie on a common ray and
(s−, s+) ∩ C = ∅. One of them, say s−, is closer to the
target than the other. They divide C into two arcs. Being
concatenated with (s−, s+), each arc gives rise to a Jordan
curve encircling a bounded domain. One of these domains
is the other united with D. The smaller domain C(s−, s+)
is called the cave with the corners s−, s+. It is bounded by
(s−, s+) and one of the above arcs γC.
To complete the rule r.3), we note that any SMT except for
the first one starts and ends at some points s♦, s∗ ∈ C,
which cut out a cave C[s♦, s∗].
r.3a) After the first SMT, the turn is in an arbitrarily pre-
specified direction;
r.3b) After SMT that is not the first the point turns
· outside C[s♦, s∗] if the cave does not contain the target;
· inside the cave C[s♦, s∗] if the cave contains the target.
Definition 2 The path traced by the point obeying the rules
r.1)—r.5), r.3a), r.3b) is called the symbolic path (SP).
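A schematic state machine for the rules r.1)–r.5), r.3a), r.3b) is sketched below; all geometric predicates are assumed helpers (they are not specified in the paper), so the fragment only fixes the switching structure of SP.

    from enum import Enum

    class Mode(Enum):
        SMT = 1        # r.2): straight-line motion towards the target
        BOUNDARY = 2   # r.3): monotone motion along the boundary, counting beta

    def sp_step(mode, first_smt_done, state, h):
        """One switching event of the symbolic path; h bundles the assumed geometric helpers."""
        if mode is Mode.SMT and h.hit_boundary(state):
            if not first_smt_done:
                direction = h.prespecified_direction()            # r.3a)
            else:
                into_cave = h.cave_contains_target(state)          # r.3b)
                direction = h.turn_direction(state, into_cave)     # turn inside the cave iff it holds T
            h.start_boundary_following(state, direction)
            mode, first_smt_done = Mode.BOUNDARY, True
        elif mode is Mode.BOUNDARY and h.smt_possible_with_zero_bearing(state):
            mode = Mode.SMT                                        # r.4)
        if h.at_target(state):
            mode = None                                            # r.5): halt at the target
        return mode, first_smt_done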
Proposition 11 SP arrives at the target from any initial po-
sition. The number of performed SMT’s is upper limited by
a constant N independent of the initial position.
The remainder of the subsection is devoted to the proof of
this claim. The notations s, T, N, r, λ, ζ, κ, ψ, ϕ are at-
tributed to C = ∂D. At the corner points of C, these vari-
ables except for s have one-sided limits and are assumed
to instantaneously, continuously, and monotonically run be-
tween the one-sided limit values. An arc of C is said to be
regular if ζ (non-strictly) does not change its sign on this arc,
depending on which the arc is said to be positive/negative (or
±arc). The regular arc is maximal if it cannot be extended
without violation of the regularity. A connected part of C
and its points are said to be singular if ζ strictly changes the
sign when passing it and, if this part contains more than one
point, is identically zero on it; see Fig. 2(c). The singular
arc is a segment of a straight line since κ ≡ 0 on it due to
(9). The ends of any maximal regular arc are singular. Due
to Assumption 3 and (9), the boundary C has only finitely
many singular parts. A boundary point s ∈ C is said to lie above D if there exists δ > 0 such that ((1 − δ)s, s) ⊂ D and (s, (1 + δ)s) ∩ D = ∅. If conversely ((1 − δ)s, s) ∩ D = ∅ and (s, (1 + δ)s) ⊂ D, the point is said to lie below D.

Formulas (9) and (11) imply the following.

Observation 2 As s moves in direction σ = ± over a η-arc
(η = ±) of C, we have ση ˙ϕ ≥ 0. Any point of ±arc that is
not singular lies above/below D.
Lemma 12 As s continuously moves along a regular arc, β
evolves within an interval of the form ∆ := [πk, π(k + 1)],
where k is an integer. When s reaches a singular point, β
arrives at the end of ∆ associated with the even or odd
integer, depending on whether s moves towards or outwards
the target at this moment, respectively.
PROOF. Since ζ does not change its sign, the vector r does
not trespass the λ-axis, whereas β is the polar angle of this
vector. This gives rise to the first claim of the lemma. The
second one is immediate from the first claim.
•
Lemma 13 Whenever SP progresses along C in direction
σ = ±, we have σβ ≥ 0.
PROOF. This is evidently true just after any SMT. During
the subsequent motion along C, the inequality can be vio-
lated only at a position s where β = 0 and either s is a corner
singular point or κ(s + σ≈0) > 0 since κ(s + σ≈0) ≤ 0 ⇒
σβ(s + σ≈0) ≥ 0 by the third relation from (9). However,
at such position, motion along C is ended.
•
The cave C(s−, s+) is said to be positive/negative (or ±cave)
if the trip from s− to s+ over γC is in the respective direction
of C. By Observation 2, s moves from a +arc to a −arc in
this trip and so passes a singular part of C. The total number
of such parts inside γC is called the degree of the cave. ¶
Lemma 14 For any cave of degree M = 1, the arc γ :=
γC consists of the positive γ|s−→s∗
+→s+
sub-arcs and a singular part [s∗
−, s∗
+],
the tangential vector T (s) (that is co-linear with [T, s] if
s is the corner point) is directed outwards T if the cave is
positive and does not contain T or negative and contains T.
Otherwise, this vector is directed towards T.
and negative γ|s∗
+]. For s ∈ [s∗
−, s∗
−
PROOF. The first claim is evident. Let the cave be positive
and T 6∈ C(s−, s+). Suppose that T (s) is directed towards
T. Then the same is true for s := s∗
+. Hence ζ(s∗
+ + 0) ≤
+ + 0) > 0 ⇒ κ(s∗
0 and ζ(s∗
+ +≈
0) > 0 since otherwise, ζ(s∗
+ +≈0) ≥ 0 by (9), in violation
of the definition of the singular part. In any case, ((1 −
+ + 0) = 0 ⇒ λ(s∗
¶ Possible singular parts at the ends of γC are not counted.
+, s∗
+) ∩ D = ∅ for some δ > 0. Since T 6∈ C(s−, s+),
δ)s∗
the segment [0, s∗
+) intersects γC, cutting out a smaller cave
Csm inside C(s−, s+). The singular part inside Csm is the
second such part in the original cave, in violation of M = 1.
This contradiction shows that T (s) is directed outwards T.
Now suppose that T ∈ C(s−, s+) and T (s) is directed out-
wards T. Let a point s∗ moves in the positive direction along
+→s+ . The ray containing s∗ monotonically rotates by
γ|s∗
Observation 2 and contains a continuously moving point
smov
+ to s+, the segment
− ∈ γ|s∗
− , s∗) sweeps the entire cave C[s−, s+], and so this cave
(smov
does not contain T, in violation of the assumption. This con-
tradiction proves that T (s) is directed towards T.
−→s− . As s∗ runs from s∗
The second claim for negative caves and the third claim are
established likewise.
•
Lemma 15 If SP enters a cave without the target, it leaves
the cave through the other corner with β 6= 0. In this ma-
neuver, the direction of motion along C is not changed, no
point of C is passed twice, and the number of SMT’s does
not exceed the cave degree.
PROOF. Let SP enter the cave in the positive direction; the
case of the negative direction is considered likewise. The
proof will be by induction on the cave degree M .
Let M = 1. (i) Suppose first that the cave is positive and so s
enters it through s− moving over a +arc. By Lemma 14, the
point s moves outwards the target whenever s ∈ [s∗
+],
and so β ≥ π by Lemmas 12 and 13. As s moves over the
subsequent −arc, ζ becomes negative and so the inequality
is kept true by Lemma 12. Thus s leaves the cave through
s+ with β ≥ π > 0, having made no SMT.
−, s∗
(ii) Let the cave be negative. Then s enters it through s+
moving over the negative arc. By Lemma 14, the point s
moves towards the target whenever s ∈ [s∗
+]. Since
ζ(s+ + 0) ≤ 0, Lemma 13 yields β(s+ + 0) ≥ π. By
−, s∗
Lemma 12, β ≥ π until s∗
+]
by Lemma 14. When s passes the entire [s∗
+], the sign
of ζ reverses from − to + and so β > 2π just after the
passage of s∗
−. It remains to note that β ≥ 2π > 0 while s
moves over the +arc from s∗
+ and so β ≥ 2π at s ∈ [s∗
− to s− by Lemma 12.
−, s∗
−, s∗
Suppose that the claim of the lemma is true for any cave
with degree ≤ M , and consider a cave of degree M + 1.
Let this cave be positive. Then s enters it through the lower
corner s− along a positive arc. We also consider the accom-
panying motion of the ray containing s. This ray contains a
continuously moving point s⊛
+ ∈ C that starts at s+. This
motion is considered until a singular part of C appears on
the ray segment [s, s⊛
+] for the first time. Three cases are
possible at this position.
Fig. 3. The first singular point.

(a) The singular part [s∗−, s∗+] ⊂ (s, s⊛+); see Fig. 3(a), where s∗− = s∗+ =: s∗. By successively applying the induction hypothesis to C(s, s∗+) and C(s∗+, s⊛+), we see that SP arrives at s⊛+ in the positive direction and with β > 0. While s moves from s∗+ to s+ over the −arc, the vector r(s) is below the λ-axis and so β ≥ π > 0 by Lemma 12.

(b) The singular point s⊛+ = s∗−; see Fig. 3(b), where s∗− = s∗+ =: s∗. By successively applying the induction hypothesis to C(s, s#) and C(s#, s⊛+), we see that SP arrives at s⊛+ in the positive direction and with β > 0. So β(s⊛+) ≥ 2π and SP proceeds along the −arc to s+ with β ≥ π > 0 by Lemma 12, which completes the proof.
(c) The singular point s; see Fig. 3(c). If β > 0 at this
point, SP enters the cave C[s, s⊛
+] of degree ≤ M and by the
induction hypothesis, arrives at s⊛
+ moving in the positive
direction and with β > 0. If conversely β = 0, SP undergoes
SMT, which cannot be terminated at the target since it does
not belong to the cave at hand. So it is terminated at some
point s# ∈ γC. Since T does not lie in the sub-cave C(s, s#)
of the original cave, the vehicle turns right at s# and thus
proceeds along C in the positive direction. By applying the
induction hypothesis to C(s#, s⊛
+), we see that SP arrives at
s⊛
+ moving in the positive direction and with β > 0 in any
case. The proof is completed like in the cases (a) and (b).
The case where the cave is negative is considered likewise.
Lemma 16 Suppose that after SMT starting and ending at
the points s♦ and s∗, respectively, the direction of the motion
along C is reversed. Then the cave C[s♦, s∗] does not contain
T but contains the entire path traced before SMT at hand.
PROOF. Let the motion direction at s = s♦ be +; the case
of − is considered likewise. Since on arrival at s∗, the left
turn is made, C[s♦, s∗] does not contain T by r.3b). Suppose
that the path traced before SMT at hand is not contained by
this cave, i.e., the point enters this cave before. Since this
cannot be done during another SMT, the point enters the cave
through either s♦ or s∗. In the first case, s♦ is passed twice
in the opposite directions, in violation of Lemma 15. In the
second case, s♦ is passed with β > 0 by the same lemma and
so SMT cannot be commenced. The contradiction obtained
proves that the initial part of SP is inside the cave.
•
Lemma 17 If SP progresses along C in a cave not contain-
ing the target, it leaves this cave through one of its corners.
During this maneuver, SP passes no point of C twice and
makes no more SMT’s than the degree of the cave.
PROOF. For the definiteness, let the cave be positive; the
case of the negative cave is considered likewise. The proof
will be by induction on the degree M of the cave.
Let M = 1. We employ the notations from Lemma 14.
(α) The motion is started on γ|s∗
The claim is evident.
+→s− in the direction −.
(β) The motion is started on γ|s+→s∗
Then the point necessarily arrives at s∗
ative direction. Thus the situation is reduced to (α).
in the direction −.
+, moving in the neg-
+
(γ) The motion is started on γ|s∗
−→s+ in the positive direc-
tion. The claim of the lemma is justified by the concluding
arguments from (i) in the proof of Lemma 15.
(δ) The motion is started on γ|s−→s∗
Then the point necessarily arrives at s∗
itive direction. Thus the situation is reduced to (γ).
in the direction +.
−, moving in the pos-
−
Now suppose that the claim of the lemma is true for any cave
with degree ≤ M , and consider a cave of degree M + 1.
Let this cave be positive for the definiteness; the case of the
negative cave is considered likewise. We also consider an
auxiliary motion of the point over C from s− into the cave
and the accompanying motion of the ray containing s until
one of the situations from Fig. 3 occurs.
+→s+
Case (a) from Fig. 3. (a.1) If the motion is started on
in direction + or on γ|s→s− in direction −, the
γ|s⊛
claim of the lemma is justified by the concluding arguments
from (i) in the proof of Lemma 15.
+
−→s⊛
, the induction
(a.2) If the motion is started on γ|s∗
−, s⊛
hypothesis applied to the cave C[s∗
+] of degree ≤ M
ensures that the point arrives at either s⊛
+ or s∗
−. In the first
case, it arrives in direction +, thus reducing the situation to
(a.1). In the second case, it arrives in direction −. If β 6= 0 at
this position, the point enters the cave C[s∗
−, s] in direction
− and afterwards leaves it through s in the same direction by
Lemma 15. If β = 0, SMT is commenced, which ends at the
position s with the left turn since C[s∗
−, s] does not contain
T. Hence in any case, the motion proceeds in direction −
from the position s, which reduces the situation to (a.1).
(a.3) The case where the motion is started on γ|s→s∗
considered likewise.
−
, is
(a.4) The cases where the motion starts on γ|s⊛
in di-
rection − or on γ|s→s− in direction +, are trivially reduced
to (a.2) and (a.3), respectively.
+→s+
Case (b) from Fig. 3. (b.1) The cases where the motion
starts on γ|s⊛
in direction + or on γ|s→s− in direction
−, is considered like (a.1).
+→s+
(b.2) If the start is on γ|s→s# , the induction hypothesis
applied to C[s, s#] ensures that the point arrives at either s or
s#. In the first case, it arrives in direction −, thus reducing
the situation to (b.1). In the second case, it arrives in direction
+ and then enters the cave C[s#, s⊛
+]. By Lemma 15, the
point leaves this cave through s⊛
+ in direction + and with
β > 0, thus reducing the situation to (b.1).
+
, the induction
(b.3) If the motion commences on γ|s#→s⊛
hypothesis applied to the cave C[s#, s⊛
+] of degree ≤ M
ensures that the point arrives at either s# or s⊛
+. In the first
case, the arrival is in direction −, after which the situation is
reduced to (b.2). In the second case, the arrival is in direction
+. If β 6= 0 at this moment, the motion proceeds along
in direction +, and the situation is reduced to (b.1).
γ|s⊛
If β = 0, SMT is commenced, which ends at the position s
with the left turn since the cave C[s⊛
+, s] does not contain the
target. Hence the motion proceeds along γ|s→s− in direction
−, and the situation is still reduced to (b.1).
+→s+
(b.4) The cases where the motion starts on γ|s⊛
in di-
rection − or on γ|s→s− in direction +, are trivially reduced
to (b.3) and (b.2), respectively.
+→s+
Case (c) from Fig. 3. (c.1) The cases where the motion
in direction + or on γ|s→s− in direction
starts on γ|s⊛
−, is considered like (a.1).
+→s+
+
, the induction hypothesis
(c.2) If the start is on γ|s#→s⊛
applied to C[s#, s⊛
+] yields that the point arrives at either
s⊛
+ or s#. In the first case, the arrival direction is + and
the situation is reduced to (b.1). In the second case, the
point arrives in direction − and then enters C[s#, s]. By
Lemma 15, the point leaves this cave through s in direction
− and with β > 0. Thus we arrive at (b.1) once more.
(c.3) If the motion commences on γ|s#→s, the induction
hypothesis applied to the cave C[s#, s] of degree ≤ M en-
sures that the point arrives at either s# or s. In the first case,
the arrival is in direction +, after which the situation is re-
duced to (b.2). In the second case, the arrival is in direction
−, after which the situation reduces to (b.1).
(c.4) The cases where the motion starts on γ|s⊛
in di-
rection − or on γ|s→s− in direction +, are trivially reduced
to (c.2) and (c.3), respectively.
+→s+
•
Lemma 18 Any part of SP where it progresses over the
boundary ∂D ends with SMT.
PROOF. is by retracing the proof of (v) in Proposition 3.
Let K be the number of singular parts of the boundary ∂D.
Lemma 19 If every cave examined in r.3b) does not contain
the target, SP consists of the initial P− and terminal P+
sub-paths (some of which may contain only one point) such
that each accommodates no more than K SMT’s, no point
of C is passed twice within P−, whereas the direction of
motion along C is not altered within P+.
PROOF. Suppose first that the initial position lies in some
cave. Among such caves, there is one enveloping the others.
By Lemma 17, SP leaves this cave and the related sub-path
satisfies the properties stated in Lemma 19. If the initial po-
sition lies outside any cave, this sub-path is taken to consist
of only this position. By Lemma 16, the direction of the mo-
tion along C is not changed on the remaining sub-path P+
and P+ does not go inside the above maximal cave.
Suppose that within P+, SP accommodates more than K
SMT’s. Any of them starts at some singular part with β =
0. Hence SP passes some singular point with β = 0 at
least twice and thus becomes cyclic. Now we consider the
related minimal cyclic part CP of SP that starts and ends
with commencing a SMT at a common point. Due to the
constant direction, the closed curve CP is simple. It follows
that ∢TANG [CP] = ±2π, whereas ∢T CP = 0 since W = 0
for all bypassed caves and T 6∈ D. Hence ∢0r = ∓2π
by (10), whereas CP starts and ends with β = 0 and so
∢0r = 0. This contradiction completes the proof.
•
Lemmas 18 and 19 give rise to the following.
Like in the proof of Lemma 15, we consider the motion of
the ray containing s until a singular point appears on the
segment [s, s∗
+] for the first time, and examine separately
three possible cases depicted in Fig. 3.
(a) The singular point s∗ ∈ (s, s∗
+); see Fig. 3(a). The target
is contained by the cave C[s, s∗] of degree ≤ M , which is
entered in the positive direction and by Lemma 12, with
0 ≤ β ≤ π. The induction hypothesis competes the proof.
(b) The singular point s∗ = s∗
+; see Fig. 3(b). The target
is evidently contained by the cave C[s, s#] of degree ≤ M .
The proof is completed like in the previous case.
(c) The singular point s∗ = s; see Fig. 3(c). If at s∗, the
point moves outwards T, the arguments from the second
paragraph in the proof of Lemma 14 show that the cave does
not contain T, in violation of the assumption of the lemma.
Hence at s∗, the point moves towards T and so β = 0
by Lemma 12 and D does not obstruct the initial part of
SMT, as was show in the proof of Lemma 14. Thus SMT
is commenced at s∗. If it is terminated at T, the proof is
completed. Otherwise, it arrives at s# ∈ γC, as is shown in
Fig. 3(c). Evidently, the cave C[s#, s] does not contain the
target. So on reaching s#, the point turns right and continues
moving in the positive direction over a new positive arc and
with β ∈ [0, π]. So the proof is completed by applying the
induction hypothesis to the cave C[s#, s∗
+] of degree ≤ M .
Proof of Proposition 11: is straightforward from Corollary 3
and Lemma 20.
Corollary 3 If every cave examined in r.3b) does not con-
tain T, SP arrives at T by making no more than 2K SMT’s.
5.3 Proof of Proposition 10.
Lemma 20 If SP enters a cave containing T over a positive
arc with |β| ≤ π, it arrives at T not leaving the cave. During
this maneuver, no point of C is passed twice and the number
of SMT’s does not exceed the degree of the cave.
PROOF. Let the cave be entered in direction +; the case
of − is considered likewise. The proof will be by induction
on the degree M of the cave C[s−, s+]. Since s enters the
cave over a positive arc, the entrance is through s−.
−, s∗
Let M = 1. By Lemma 14, s moves towards T when reach-
ing the singular part of the cave [s∗
+]. At this position,
β = 0 by Lemma 12 and D does not obstruct the initial part
of SMT, as was show in the proof of Lemma 14. So SMT is
commenced. If it is not terminated at T, the segment [0, s∗
−)
intersects γC, cutting out a smaller cave within the original
one. The singular part inside this new cave is the second
such part within the original cave, in violation of M = 1.
Hence T is reached and only one switch B 7→ A is made.
Let P stand for the directed path traced by the vehicle under
the control law A from Subsect. 5.1. We first show that after
a slight modification, this path can be viewed as SP for some
domain D provided that P is single (see Definition 1). This
permits us to employ the results of Subsect. 5.2.
We use the notations s−
i , γi from introduced before Def-
inition 1, note that for s ∈ γi, the distance d from the vehicle
to the obstacle is a function d = di(s) of s, and put:
i , s+
D :=
r : d := distD[r] < d⋆(D) an either s :=
n
s(r) ∈ γi ∧ d ≤ di(s) or s 6∈ ∪iγi ∧ d ≤ dtrig
.
(26)
o
If σ ˙s < 0 at the start of the ith mode B, the abscissa s−
i
is passed twice during IT by Lemma 9. For every such i,
the real path between these two passages is replaced by the
motion along the straight line segment, which gives rise to
the modified path P∗.
Now suppose that the conclusion of the lemma is true for any
cave with degree ≤ M , and consider a cave of degree M +1.
Observation 3 Let the original path be single. The modified
path P∗ is SP for D∗.
Indeed, this path can be viewed as a trace of a point obeying
the rules r.1)—r.5). To ensure r.3a), the direction should
be pre-specified to match that of P∗. The property r3.b) is
satisfied due to (25) and the second inequality from (7).
Lemma 21 For a single path, the set (26) satisfies Assump-
tion 3 and its boundary has no more than Ns singular parts,
where Ns is completely determined by D and T.
PROOF. The last claim in Assumption 3 holds by (7), (26).
The boundary ∂D consists of parts traced during 1) SMT’s,
2) SMEC’s, 3) arcs of circles traced during IT’s, and 4) seg-
ments of normals to ∂D resulted from the path modification.
Any part 1) clearly satisfies Assumption 3 and is either sin-
gular or does not contain singular points; their number does
not exceed (P + 1)(Q + 1) by Lemma 9.
Since parts 2) are separated by SMT’s, their number does not
exceed (P +1)(Q+1)+1. Any part 2) lies on a d-equidistant
curve C(d) with d ≤ dtrig. Due to (8), ζC(d)(s) = ζ∂D(s)+d,
Assumption 3 holds since the boundary ∂D is piece-wise
analytical, and the singular parts of C(d) are the connected
components of the set from Corollary 2. So type 2) arcs of C
accommodate no more than F [(P + 1)(Q + 1) + 1] singular
parts.
It remains to note that parts 3) and 4) do not contain singular
points since β monotonically evolves from 0 during IT’s.
Lemma 22 If the vehicle finds the target in CA† after some
occurrence A† of mode A, it arrives at the target by making
after this no more than Ns switches A 7→ B.
PROOF. Let us consider a part P of the path that starts in
mode B preceding A†. Suppose first that this part is not
single and truncate it from the right, leaving its maximal
single sub-part P†. The terminal position of P† lies on a
previously passed piece of P†. Let D† and P†
∗ be the related
domain (26) and modified path. Associated with CA† is a
cave of D† into which P†
∗ turns with |β| ≤ π. By Lemma 20,
P†
∗ cannot arrive at a previously passed point, in violation of
the above property. This contradiction proves that the entire
path P is single. Then Lemmas 20 and 21 guarantee that P∗
arrives at T by making no more than Ns SMT’s. It remains
to note that P and P∗ arrive at T only simultaneously, and
each occurrence of A gives rise to a SMT in P∗.
Lemma 23 After no more than Ns + 1 switches A 7→ B,
the direction in which s moves along ∂D within modes B
is not altered.
PROOF. Consider an occurrence A† of mode A after which
the direction is altered and the path P from the start of the
entire motion until the end of A†. Suppose that P is not single
∗, where D† and P†
and truncate it from the left, leaving the maximal single part
P†. The starting point of P† is passed once more within
P†, both times in mode B. So this double point is inherited
by P†
∗ are the related domain (26) and
modified path. Associated with CA† is a cave CD† of D†;
these two sets contain the target only simultaneously due to
(7). Hence P and P†
∗ acquire a common turn direction at their
ends. So SP P†
∗ has converse directions of motion along the
boundary at the start and end of the last involved SMT and by
Lemmas 16 and 17, has no double points. This contradiction
proves that the entire P† is single. Due to Lemma 16, the
modified path P†
∗ lies in CD† and so involves no more than
Ns SMT’s thanks to Lemmas 17 and 21. It remains to note
that each occurrence of A gives rise to a SMT in P∗.
•
To prove Proposition 10, it in fact remains to show that
the vehicle cannot pass more than Ns modes A in a row,
constantly not finding the target in CA and not changing
the direction of the motion along ∂D. The next lemma with
corollaries serves this proof. The symbol ∠(a, b) ∈ (−π, π]
stands for the angle from the vector a to b. Let the points
ri, i = 1, 2 on P be at the distance distD[ri] ≤ dtrig and
such that when traveling between them, the path does not
intersect itself and except for ri, has no points in common
with the normals [ri, si], where si := s[ri]. The points si
split ∂D into two curves. Being concatenated with the above
normals and P|r1→r2 , they give rise to Jordan loops, with
one of them enveloping the other. Let γinner be the curve
giving rise to the inner loop LOOP, and σ = ± be the
direction from s1 to s2 along γinner.
Lemma 24 If LOOP does not encircle the target, the fol-
lowing relation holds
∢0rP|r1→r2 = ∢0r∂D|s1 σ→s2 + ∢T[r1, s1] − ∢T[r2, s2] + ∠ [σT∂D(s1), TP(r1)] − ∠ [σT∂D(s2), TP(r2)] .     (27)
PROOF. Let σ = +; σ = − is considered likewise. By applying Hopf's theorem to LOOP, we see that

∢T[s1, r1] + ∢T P|r1→r2 + ∢T[r2, s2] − ∢T ∂D|s1→s2 = 0,

∢TANG[P|r1→r2] = ∢TANG[∂D|s1→s2] − ∠[T∂D(s1), TP(r1)] + ∠[T∂D(s2), TP(r2)].

The proof is completed by the second formula in (10).
•
The next claim employs the notations introduced at the be-
ginning of Subsect. 5.1.
Corollary 4 Suppose that T ∉ CA† and the value of σ maintained during the occurrence A† of mode A is not altered when A† → B. Then (27) holds with r1 := r♦, r2 := r∗.
This is true since in this claim and Lemma 24, σ is the same.
Fig. 4. Auxiliary loop
Corollary 5 Let r1 and r2 be successively passed within a common mode B, where σ(t) ≡ σ = ±. If r2 is passed after IT, (27) holds, where ∢0r ∂D|s1 σ→ s2 accounts for the entire motion of the projection s = s[r], r ∈ P|r1→r2, including possible full runs over ∂D.
If 1) s does not run over the entire ∂D and 2) either r1 is passed after IT or sgn ṡ = σ at the start of the mode, the claim is evident. If 1) holds but 2) does not, the path may intersect [s1, r1] and so direct application of Lemma 24 is impossible. Then we apply this lemma to r1 := r3, where r3 is the point where the vehicle intersects the normal for the second time during IT; see Fig. 4. The proof is completed by noting that ∢T[r1, r3] = ∢T γ, ∢TANG[γ] = ∠[T1, T3] and so ∢0r P|r1→r3 = ∢0r γ = ∢T[r1, r3] − ∠[T1, T3], as well as that ∠[σT∂D(s1), T3] = ∠[σT∂D(s1), T1] + ∠[T1, T3]. The claim is generalized to the case where 1) is not true by proper partition of the path, followed by summation of the formulas related to the resultant pieces.
Corollary 6 Let points r1 and r2 be successively passed in modes B (maybe different). Suppose that r2 is not attributed to IT and, when traveling from r1 to r2, the vehicle constantly does not find the target in CA and does not change σ. Then (27) holds, where ∢0r ∂D|s1 σ→ s2 accounts for the entire motion of the projection s = s[r], r ∈ P|r1→r2, including possible full runs over ∂D.
It is assumed that as the vehicle moves in mode A, the
projection s continuously and monotonically goes over ∂D
from s♦ to s∗ in the direction σ.
Lemma 25 The vehicle cannot pass more than Ns modes
A in a row, constantly not finding the target in CA and not
changing the direction of the motion along ∂D.
PROOF. Suppose the contrary and that σ = +; the case
σ = − is considered likewise. By Observation 1, the ith
mode Ai in the row starts when s lies in a +exit arc Ai,
whereas ζ ≥ 0 when it ends. Hence A1, A2, . . . cannot re-
peat until s completes the full run over ∂D. However, they
do repeat since the number of +arcs does not exceed F by
Observation 1, and F ≤ Ns by construction from the proof
of Lemma 21. Hence the path P can be truncated so that
the first and last modes A start at positions r1 and r2, re-
spectively, lying on a common +exit arc A, whereas s en-
circles the entire boundary ∂D during the move over the
truncated P. By the definition of the +arc, r∂D(s) evolves
within the fourth quadrant as s runs from s1 to s2 within
the +arc and so the absolute value of its turning angle does
not exceed π/2. This and (13) (where d∗ := 0) imply that ∢0r ∂D|s1→s2 ≤ −3π/2. In (27), |∢T[ri, si]| < π/2 and ∠[T∂D(si), TP(ri)] = 0 since the segments [ri, si] and [ri, T] are perpendicular. Overall, (27) implies that

∢0r P|r1→r2 < −π/2.   (28)
The path P|r1→r2 starts with β = 0 and whenever β = 0 is
encountered, the angle β may stay constant during SMT but
after this SMT β becomes positive by (12) (see Fig. 2(b))
since the robot turns right. The last claim holds thanks to
(iii) of Proposition 3 if B is not terminated during this SMT
and (25) otherwise. Such behavior of β is inconsistent with
(28). The contradiction obtained completes the proof.
•
Proof of Proposition 10 is straightforward from (v) of Propo-
sition 3 and Lemmas 22, 23, and 25.
5.4 Proof of (ii) in Theorem 1.
Let P_k be the probability that the vehicle does not arrive at T after making kN switches A → B, where N is taken from Proposition 10. Given a realization of σ's for the first kN switches, the probability of the (k + 1)th event does not exceed the probability P_∗ that the next N realizations are not identical to those generated by the algorithm A for the related initial state. Here P_∗ ≤ ρ, where ρ := 1 − min{p, 1 − p}^N and p is the probability of picking + in (4). So the law of total probability yields that P_{k+1} ≤ ρ P_k, whence P_k ≤ ρ^{k−1} P_1 → 0 as k → ∞. It remains to note that the probability not to achieve T does not exceed P_k for any k.
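As a quick numerical illustration of this geometric decay, the minimal sketch below (Python) evaluates the bound P_k ≤ ρ^{k−1} P_1; the values of N, p and P_1 are arbitrary placeholders chosen only for illustration, not quantities derived from the algorithm.

# Illustration of the bound P_k <= rho^(k-1) * P_1 with rho = 1 - min(p, 1-p)**N.
# N, p and P1 below are hypothetical values, not quantities taken from the paper.
N = 5          # number of switches from Proposition 10 (placeholder value)
p = 0.5        # probability of picking "+" in (4) (placeholder value)
P1 = 1.0       # trivial bound on the first failure probability

rho = 1.0 - min(p, 1.0 - p) ** N
for k in range(1, 8):
    bound = rho ** (k - 1) * P1
    print(f"k = {k}: P_k <= {bound:.6f}")
# The printed bounds decrease geometrically, so P_k -> 0 as k grows,
# which is exactly the conclusion used to prove (ii) of Theorem 1.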
6 Proof of (ii) in Theorem 1 and Theorem 2
For definiteness, we consider the vehicle driven by the basic algorithm with the right turns. So in any SMEC the vehicle has the obstacle to the left. The proof basically follows that from the previous section and employs many facts established there. The difference is that now we do not need to introduce an auxiliary deterministic algorithm since the examined one is deterministic itself.
As before, we first consider another obstacle D ∌ T satisfying Assumption 3. Let a point r move in the plane according to the following rules:
r.1) If r ∉ D, r moves to T in a straight line; r(0) ∉ D;
r.2) If r hits C := ∂D, it turns right and then moves in the positive direction along the boundary, counting the angle β;
r.3) This motion lasts until β = 0 and a new SMT is possible;
r.4) The point halts as soon as it arrives at the target.
The path traced by r is called the symbolic path (SP). Any SMT according to r.1) except for the first one starts and ends at some points s♦, s∗ ∈ C, which cut out a cave C[s♦, s∗].
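For intuition only, here is a minimal simulation sketch (Python) of rules r.1)-r.4) for the special case of a disk obstacle. The disk, the start point, the target, the step size, and the way β is tracked (as the signed angle from the current motion direction to the direction toward the target) are illustrative assumptions of this sketch, not constructions from the paper.

import math

# Assumed setup: disk obstacle D, target T and start point r(0); illustrative values only.
CX, CY, R = 0.0, 0.0, 1.0
TX, TY = 3.0, 0.3
x, y = -3.0, -0.2
step = 0.01
mode = "SMT"                               # rule r.1): straight move toward T

def unit_to_target(x, y):
    dx, dy = TX - x, TY - y
    d = math.hypot(dx, dy)
    return dx / d, dy / d, d

for _ in range(20000):
    ux, uy, dist = unit_to_target(x, y)
    if dist < 2 * step:                    # rule r.4): halt at the target
        print("target reached at", round(x, 2), round(y, 2))
        break
    if mode == "SMT":
        nx, ny = x + step * ux, y + step * uy
        if math.hypot(nx - CX, ny - CY) < R:
            mode = "FOLLOW"                # rule r.2): the point hits C = dD
        else:
            x, y = nx, ny
    else:
        # rule r.2): move in the positive (counterclockwise) direction along dD
        phi = math.atan2(y - CY, x - CX) + step / R
        x, y = CX + R * math.cos(phi), CY + R * math.sin(phi)
        tx, ty = -math.sin(phi), math.cos(phi)          # boundary tangent
        ux, uy, _ = unit_to_target(x, y)
        beta = math.atan2(tx * uy - ty * ux, tx * ux + ty * uy)
        # rule r.3): resume a SMT when beta = 0 and a new SMT is possible;
        # "possible" is approximated here by "moving toward T leaves the disk".
        if abs(beta) < 0.02 and (x - CX) * ux + (y - CY) * uy > 0:
            mode = "SMT"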
We start with noting that the following specification of Observation 2 now holds.
Observation 4 As r moves over a ±-arc of C, we have ±ϕ̇ ≥ 0. Non-singular points of a ±-arc lie above/below D.
Lemma 12 evidently remains valid, whereas Lemma 13 holds in the following specified form.
Lemma 26 Whenever SP lies on C, we have β ≥ 0.
It is easy to see by inspection that Lemma 15 remains true as well, where in the case from Fig. 3 the right turn at the point s# is justified not by the absence of the target in the cave but by the very algorithm statement. The following claim is an analog of Lemma 16.
Lemma 27 Suppose that after a SMT starting and ending at the points s♦ and s∗, respectively, SP enters the cave C[s♦, s∗]. Then this cave contains the entire path traced before the SMT at hand.
PROOF. The proof is by retracing the arguments from the proof of Lemma 16 with the only alteration: the point cannot enter the cave through s♦ since this violates the always positive direction of motion along the boundary.
•
Now we revert to the vehicle at hand and show that Lemma 27 extends to the directed path P traced by this vehicle. The next lemma employs the notations A† and σA† introduced at the beginning of subsection 5.1.
Lemma 8a For any occurrence A† of mode A that holds
between two modes B, we have σA† = +.
PROOF. Suppose to the contrary that σA† = −. Then, according to the 'only-right-turns' option of the algorithm, the vehicle enters the cave CA† after termination of A†. We are going to show that then, similarly to Lemma 16, this cave contains the entire path passed by the vehicle until this moment and so its initial location. Due to the first relation from (7), the last claim implies that the initial location r0 is also contained by a cave of N(dtrig), in violation of the assumptions of Theorem 2. This contradiction will complete the proof.
Thus it remains to show that CA† does contain the path traced so far. Suppose the contrary. Since in the mode B preceding A†, the vehicle has the obstacle to the left, it passes to A† from inside the cave. It follows that the moment after A† is not the first time when the vehicle enters the cave. Let us consider the last of these 'preceding' entrances and the path P traced by the vehicle since this moment until the commencement of A†. By combining Lemma 15 with the arguments from the proof of Lemma 22, we conclude that this path is single and β > 0 at its end, which makes mode A† impossible. The contradiction obtained completes the proof.
•
This lemma entails that Corollaries 4, 5, and 6 remain true in the following specified forms.
Corollary 7 For r1 = r♦, r2 = r∗, (27) holds with σ = +.
Corollary 8 Let r1, r2 be successively passed within a common mode B. If r2 follows IT, (27) holds with σ = + and ∢0r ∂D|s1 σ→ s2 accounting for possible full runs over C.
Corollary 9 Suppose that points r1 and r2 are successively passed in modes B (maybe different) and r2 is not attributed to IT. Then (27) holds with σ = +, where ∢0r ∂D|s1 σ→ s2 accounts for the entire motion of the projection s = s[r], r ∈ P|r1→r2, including possible full runs over ∂D.
We also note that at the moment when a SMEC ends, s ∈
S0 := {s ∈ ∂D : −dtrig ≤ ζ∂D(s) < 0, λ∂D(s) > 0}. Since
the boundary is piece-wise analytical, this set has finitely
many connected components (called exit arcs).
PROOF OF THEOREM 2 This proof retraces many ar-
guments from the proof of Lemma 25. Suppose the con-
trary and that the vehicle does not arrive at the target. Then
the projection s repeatedly encircles the boundary. (This in-
cludes the imaginary moves of s when the vehicle is in
mode A.) By retracing the arguments from the proof of (v)
in Proposition 3, we conclude that the path P can be trun-
cated so that the first and last modes A start at positions r1
and r2, respectively, lying on a common exit arc A, whereas
s encircles the entire boundary ∂D during the move over
the truncated P. By the definition of the exit arc, r∂D(s) evolves within the fourth quadrant as s runs from s1 to s2 within the +arc, and so the absolute value of its turning angle does not exceed π/2. This and (13) (where d∗ := 0) imply that ∢0r ∂D|s1→s2 ≤ −3π/2. In (27), |∢T[ri, si]| < π/2 and ∠[T∂D(si), TP(ri)] = 0 since the segments [ri, si] and [ri, T] are perpendicular. Overall, (27) implies (28). The
path P|r1→r2 starts with β = 0 and whenever β = 0 is en-
countered, the angle β may stay constant during SMT but af-
ter this SMT β becomes positive since the robot turns right.
The last claim holds thanks to (iii) of Proposition 3 if B is
not terminated during this SMT and the right-turn option in
(4) otherwise. Such behavior of β is inconsistent with (28).
The contradiction obtained completes the proof.
•
PROOF OF (ii) IN THEOREM 1 This claim is immediate
from Theorem 2.
•
|
synthetic_cpt | 1 | A_User-Centered_Evaluation_of_the_Data-Driven_Sign_Language_Avatar_System_A_Pilot_Study.pdf | Rate Regions of Secret Key Sharing in a New Source Model1,2
Somayeh Salimi*, Mahmoud Salmasizadeh†, Mohammad Reza Aref*
*ISSL Lab., Dept. of Electrical Engineering, Sharif University of Technology, Tehran, Iran
†Electronics Research Center, Sharif University of Technology, Tehran, Iran
Email: [email protected], [email protected], [email protected]
Abstract—A source model for secret key generation between terminals is considered. Two users, namely users 1 and
2, at one side communicate with another user, namely user 3, at the other side via a public channel where three users can
observe i.i.d. outputs of correlated sources. Each of users 1 and 2 intends to share a secret key with user 3 where user 1 acts
as a wiretapper for user 2 and vice versa. In this model, two situations are considered: communication from users 1 and 2
to user 3 (the forward key strategy) and from user 3 to users 1 and 2 (the backward key strategy). In both situations, the
goal is sharing a secret key between user 1 and user 3 while leaking no effective information about that key to user 2, and
simultaneously, sharing another secret key between user 2 and user 3 while leaking no effective information about the
latter key to user 1. This model is motivated by wireless communications when considering user 3 as a base station and
users 1 and 2 as network users. In this paper, for both the forward and backward key strategies, inner and outer bounds of
secret key capacity regions are derived. In special situations where one of users 1 and 2 is only interested in wiretapping and not in key sharing, our results agree with those of Ahlswede and Csiszár. Also, we investigate some special cases in which the inner bound coincides with the outer bound and the secret key capacity region is deduced.
Keywords: Information theoretic security, secret key sharing, source model, secret key capacity region.
I. INTRODUCTION
Because of the open nature of wireless communication networks, sharing secret keys between terminals is a
challenging problem. In these environments, terminals have access to common randomness for generating secret keys but
the existence of broadcast and multiple access channels in these networks results in unintended information leakage. In this
paper, we explore the problem of sharing secret keys between three users who can observe the outputs of some correlated
sources. There are two users, namely user 1 and user 2, at one side and another user, namely user 3, at the other side and
also public channels between the users. User 1 wishes to share a secret key with user 3 while user 2 acts as a wiretapper
and intends to learn information about this key as much as possible. Symmetrically, user 2 wishes to share a secret key
with user 3 while user 1 acts as a wiretapper and intends to learn information about this key as much as possible. This model could be realized in a wireless environment where user 3 is a base station and users 1 and 2 are curious network users.
1 Part of this work will be published in the Australian Communication Theory Workshop (AusCTW 2010) proceedings.
2 This work was partially supported by the Iranian National Science Foundation (INSF) under Contract No. 84.5193.
The rigorous idea of information theoretic security was first introduced by Shannon in [11] where the eavesdropper
could listen to all the data transmitted from the transmitter to the receiver. After that, the notion of information theoretic
security was characterized by Wyner as the wiretap channel model in which a single source-destination communication
link is eavesdropped by a wiretapper via a degraded channel [13]. The secrecy level was measured by equivocation rate at
the wiretapper. It was shown in [13] that nonzero secrecy rate can be achieved without using a secret key, if the intended
receiver has a communication channel with better quality than the wiretapper. Csiszár and Körner, in their seminal work [2], generalized Wyner's results to less noisy and more capable channels and determined the capacity region of the broadcast channel with confidential messages. In [1] and [8], generation of a secret key through common randomness was
considered by Maurer, Ahlswede and Csiszar. The common randomness can be a source or a channel type. In source
common randomness, all terminals including the transmitter, the receiver and the wiretapper could observe i.i.d. outputs of
correlated sources. In channel common randomness, there is a noisy broadcast channel from the transmitter to the receiver
and the wiretapper. In both the source and channel common randomness, there is a noiseless public channel with unlimited
capacity between the transmitter and the receiver where all communication through which can be overheard by the
wiretapper. In [1], based on common randomness type, the source and channel models were defined for secret key sharing
and in both models, the problem of finding the secret key capacity between the transmitter and the receiver was
considered. In the source model, the secret key capacity was characterized when a one-way noiseless public channel with
unlimited capacity is available between the transmitter and the receiver. In case a two-way public channel exists between
the transmitter and the receiver, the secret key capacity still remains an open problem, however its upper and lower bounds
have been improved in [5] and [10]. Secret key generation in a network including more than three terminals has been
explored in other works such as [3], [4], [6], [7], [14], [15]. Maurer [9] strengthened the secrecy conditions of [1] and [8]
and showed that the results in a weak sense can be established in the strong sense by using the techniques developed in [9].
As mentioned above, the problem of sharing secret keys between terminals which have access to correlated sources
was defined in [1], in which the transmitter and the receiver intend to share a key via public channel communications. In
this model, a wiretapper who has access to side information correlated with other sources, can listen to the public channel
and obtains information about the shared key as much as possible. In this paper, we propose a new model which differs
from the source model of [1] (which was described in the previous paragraph), in such a way that both users 1 and 2
attempt to share secret keys with user 3 while user 1 is the wiretapper of user 2’s secret key and vice versa. Three users
have access to correlated sources and there is a public channel from users 1 and 2 to user 3. To the best of our knowledge,
this model has not been investigated so far. For this model, we investigate two situations. In the first, there is a one-way
public channel from users 1 and 2 to user 3. This situation is referred to as the forward key strategy and is shown in Fig.1.
In the second one, there is a one-way public channel from user 3 to users 1 and 2. This situation is referred to as the
backward key strategy and is shown in Fig.2. In both situations, we investigate the inner and outer bounds of the secret key
capacity region.
The rest of the paper is organized as follows: in Section II the proposed model and definitions are described. In Section
III, related theorems for the upper and lower bounds of the secret key capacity regions are given. Some special cases are
considered in Section IV in which the inner bound coincides with the outer bound and the secret key capacity region can
be derived. Proofs of the theorems are given in Section V. Conclusion and suggestions for future works are given in
Section VI. Some lemmas useful for the proofs of the theorems are given and proved in the appendix. Throughout the paper, a random variable is denoted by an upper case letter (e.g., X) and its realization by the corresponding lower case letter (e.g., x). We use X_i^N to indicate the vector (X_{i,1}, X_{i,2}, ..., X_{i,N}), and X_{i,j}^k to indicate the vector (X_{i,j}, X_{i,j+1}, ..., X_{i,k}), where i denotes the index of the corresponding user.
II. THE NEW SOURCE MODEL
Users 1, 2 and 3 can, respectively, observe N i.i.d. repetitions of the random variables X_1, X_2 and X_3. The random variable X_i takes values from the finite set 𝒳_i for i = 1, 2, 3. Furthermore, a noiseless public channel with unlimited capacity is available for communication between the three users. User 1 wishes to share a secret key with user 3 while user 2 acts as a wiretapper of user 1's key. Symmetrically and simultaneously, user 2 wishes to share a secret key with user 3 while user 1 acts as a wiretapper of user 2's key. Now, we present a formal definition of the secret key strategy for the new source model.
Step 0) Users 1, 2 and 3, respectively, generate random variables M_1, M_2 and M_3, independent of each other and such that (M_1, M_2, M_3) and (X_1^N, X_2^N, X_3^N) are mutually independent. The next steps can be regarded as deterministic.
Step 1) At this step, users 1, 2 and 3, respectively, generate F_{1,1}, F_{2,1} and F_{3,1} such that F_{i,1} = f_{i,1}(M_i, X_i^N) for i = 1, 2, 3, and transmit them over the public channel.
Steps 2 to k) At step j, user i generates F_{i,j} as a function of (M_i, X_i^N) and the information which has been received from the other users via the public channel. Hence, users 1, 2 and 3, respectively, generate F_{1,j}, F_{2,j} and F_{3,j} as functions of the information available at the corresponding user, where

F_{1,j} = f_{1,j}(M_1, X_1^N, F_{2,1}^{j-1}, F_{3,1}^{j-1}),
F_{2,j} = f_{2,j}(M_2, X_2^N, F_{1,1}^{j-1}, F_{3,1}^{j-1}),
F_{3,j} = f_{3,j}(M_3, X_3^N, F_{1,1}^{j-1}, F_{2,1}^{j-1}),

and they transmit them over the public channel for j = 2, ..., k.
Finally, after k steps, users 1 and 2 compute the keys K and L, respectively, as functions of the information available at each user:

K = K(M_1, X_1^N, F_{2,1}^k, F_{3,1}^k),   (1)
L = L(M_2, X_2^N, F_{1,1}^k, F_{3,1}^k),   (2)

and also user 3 computes the keys K̂ and L̂ as functions of the information available at him:

K̂ = K̂(M_3, X_3^N, F_{1,1}^k, F_{2,1}^k),   (3)
L̂ = L̂(M_3, X_3^N, F_{1,1}^k, F_{2,1}^k),   (4)

where the keys K̂ and L̂ are intended for sharing as secret keys with users 1 and 2, respectively. The keys (K, K̂) and (L, L̂) take values from the finite sets 𝒦 and ℒ, respectively.
Now we state the conditions that should be met in the secret key strategy of the described model.
Fig.1. Forward key strategy
Fig.2. Backward key strategy
Definition 1: In the secret key strategy of the source model described above, the secret key rate pair (R_1, R_2) is an achievable rate pair if for every ε > 0 and sufficiently large N, we have:

Pr{K ≠ K̂} ≤ ε and Pr{L ≠ L̂} ≤ ε,   (5)
(1/N) I(M_2, X_2^N, F_{1,1}^k, F_{3,1}^k ; K) ≤ ε,   (6)
(1/N) I(M_1, X_1^N, F_{2,1}^k, F_{3,1}^k ; L) ≤ ε,   (7)
(1/N) H(K) ≥ R_1 − ε and (1/N) H(L) ≥ R_2 − ε,   (8)
(1/N) log|𝒦| − (1/N) H(K) ≤ ε,   (9)
(1/N) log|ℒ| − (1/N) H(L) ≤ ε.   (10)
Equation (5) means that users 1 and 2 can generate secret keys with user 3 and Equations (6) and (7) say that users 1
and 2 have effectively no information about each other’s secret key. Equations (9) and (10) are the uniformity conditions
for the secret keys.
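For small alphabets, conditions of the type (6)-(10) can be checked numerically from a joint probability mass function. The sketch below (Python) is one minimal way of doing so; the joint pmf used is an arbitrary toy example, not a distribution from this paper.

import math
from collections import defaultdict

def mutual_information(joint):
    """joint: dict mapping (a, b) -> probability; returns I(A; B) in bits."""
    pa, pb = defaultdict(float), defaultdict(float)
    for (a, b), q in joint.items():
        pa[a] += q
        pb[b] += q
    return sum(q * math.log2(q / (pa[a] * pb[b]))
               for (a, b), q in joint.items() if q > 0)

# Toy check of a condition of type (6): here A plays the role of the wiretapper's
# total observation and B the role of the key K (binary alphabets for brevity).
joint = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.4}
N = 1  # block length of the toy example
leak = mutual_information(joint) / N
print(f"(1/N) I(observation; K) = {leak:.4f} bits")  # should be small for a secure scheme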
Definition 2: The region containing all the achievable secret key rate pairs (R_1, R_2) is the key capacity region.
In the described model, we consider restricted usage of the public channel, i.e., no more than k usages of the public channel are allowed. In this paper, only the case k = 1 is investigated. For this case, when communication is only performed from users 1 and 2 to user 3, the forward key capacity region is defined, and when communication is only carried out in the reverse direction, i.e., from user 3 to users 1 and 2, the backward key capacity region is introduced. We consider both situations in this paper.
III. SECRET KEY RATE REGIONS
In this section, we state our main results about the mentioned model.
Theorem 1 (inner bound of the forward key capacity region): In the forward key strategy of the described source model, the rate pair (R_1, R_2) is an achievable key rate pair if:

0 ≤ R_1, 0 ≤ R_2,
R_1 ≤ I(S; X_3 | T, U) − I(S; X_2 | T, U),
R_2 ≤ I(T; X_3 | S, V) − I(T; X_1 | S, V),
R_1 + R_2 ≤ I(S, T; X_3 | U, V) − I(S; X_2 | T, U) − I(T; X_1 | S, V) − I(S; T | U, V),

where U, V, S, T are random variables taking values in sufficiently large finite sets and according to the distribution:

p(u, v, s, t, x_1, x_2, x_3) = p(u|s) p(v|t) p(s|x_1) p(t|x_2) p(x_1, x_2, x_3).
Proof of the achievability is given in Section V.A. However, we explain the intuitive interpretation of Theorem 1. We assume that users 1 and 2 consider the random variables S and T with the distributions p(s|x_1) and p(t|x_2) for sharing keys with user 3, respectively. These random variables should be decoded by user 3 for generating secret keys. To this end, part of the information is sent by users 1 and 2 by transmitting realizations of the random variables U and V with distributions p(u|s) and p(v|t), respectively. Then, the other part of the information should be sent by users 1 and 2 with total rate H(S, T | U, V, X_3), according to the Slepian-Wolf theorem, to enable user 3 to reconstruct S and T. Based on the portion of the rate transmitted by each user, there is a tradeoff between the equivocation rates. For justification of the rate R_1, we assume that user 1 sends information with the minimum rate H(S | U, V, X_3, T) after sending realizations of U. It is obvious that both of the transmissions by user 1 can result in information leakage about S to user 2. The leakage rate would be equal to:

I(S; X_2, U) + H(S | U, V, X_3, T).

For obtaining R_1, we should subtract the leakage rate from H(S) and hence we have:

R_1 = H(S) − I(S; X_2, U) − H(S | U, V, X_3, T)
    = I(S; U, V, X_3, T) − I(S; X_2, U)
(a) = I(S; U, X_3, T) − I(S; X_2, U)
(b) = I(S; U, X_3, T) − I(S; X_2, U, T)
    = I(S; X_3 | T, U) − I(S; X_2 | T, U),

where (a) follows from the distribution of V and (b) from the distribution of T, which results in I(S; T | X_2, U) = 0. Since the minimum rate H(S | U, V, X_3, T) (according to the Slepian-Wolf theorem) is sent by user 1, R_1 is smaller than the calculated rate. The same approach can be applied to the rate R_2. For the rate R_1 + R_2:

R_1 + R_2 = H(S) + H(T) − I(S; X_2, U) − I(T; X_1, V) − H(S, T | U, V, X_3)
          = I(S, T; X_3 | U, V) − I(S; X_2 | T, U) − I(T; X_1 | S, V) − I(S; T | U, V).
Theorem 2 (outer bound of the forward key capacity region): If the rate pair (R_1, R_2) is an achievable key rate pair in the forward secret key strategy, then it satisfies:

0 ≤ R_1, 0 ≤ R_2,
R_1 ≤ I(S; T, X_3 | U) − I(S; X_2 | U),
R_2 ≤ I(T; S, X_3 | V) − I(T; X_1 | V),

for random variables U, V, S, T which take values in sufficiently large finite sets and form Markov chains as:

U − S − (V, T, X_1, X_2, X_3),
V − T − (U, S, X_1, X_2, X_3),
S − X_1 − (X_2, X_3),
T − X_2 − (X_1, X_3).

In addition, the following bound is an explicit upper bound which can be easily deduced from Theorem 1 of [1]:

R_1 ≤ I(X_1; X_3 | X_2),
R_2 ≤ I(X_2; X_3 | X_1).
The proof is given in Section V. B.
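To get a feel for these single-letter expressions, the sketch below (Python) evaluates the inner bound of Theorem 1 for one particular, certainly not optimal, choice of auxiliaries, namely S = X_1, T = X_2 and U = V constant, under an arbitrary toy distribution p(x_1, x_2, x_3); both the distribution and the choice of auxiliaries are assumptions of this sketch. For this choice I(S; X_2 | T, U) = I(T; X_1 | S, V) = 0, so the bounds reduce to the three quantities printed below, and the first two coincide with the explicit outer bounds I(X_1; X_3 | X_2) and I(X_2; X_3 | X_1) of Theorem 2.

import math
from itertools import product
from collections import defaultdict

# Arbitrary toy distribution: x2 is a noisy copy of x1, x3 a noisy copy of x1 XOR x2.
p = {}
for x1, x2, x3 in product((0, 1), repeat=3):
    p[(x1, x2, x3)] = (0.5 * (0.9 if x2 == x1 else 0.1)
                           * (0.8 if x3 == (x1 ^ x2) else 0.2))

def H(margin):  # entropy (bits) of a pmf given as dict value -> probability
    return -sum(q * math.log2(q) for q in margin.values() if q > 0)

def marginal(joint, idx):
    m = defaultdict(float)
    for k, q in joint.items():
        m[tuple(k[i] for i in idx)] += q
    return m

def cond_MI(joint, a, b, c):
    # I(A; B | C) = H(A,C) + H(B,C) - H(A,B,C) - H(C) for index tuples a, b, c
    return (H(marginal(joint, a + c)) + H(marginal(joint, b + c))
            - H(marginal(joint, a + b + c)) - H(marginal(joint, c)))

R1 = cond_MI(p, (0,), (2,), (1,))                                  # I(X1; X3 | X2)
R2 = cond_MI(p, (1,), (2,), (0,))                                  # I(X2; X3 | X1)
Rsum = cond_MI(p, (0, 1), (2,), ()) - cond_MI(p, (0,), (1,), ())   # I(X1,X2; X3) - I(X1; X2)
print(f"R1 <= {R1:.4f}, R2 <= {R2:.4f}, R1 + R2 <= {Rsum:.4f} (bits)")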
Corollary 1: If user 2 is only interested in wiretapping and not in sharing a secret key with user 3, the random variables T and V can be assumed to be constant. In this case, the lower bound of Theorem 1 coincides with the upper bound of Theorem 2 and the forward secret key capacity between users 1 and 3 would be equal to:

R_1 = max{ I(S; X_3 | U) − I(S; X_2 | U) }

for random variables U, S which form a Markov chain as U − S − X_1 − (X_2, X_3). This result is in agreement with the result of Theorem 1 of [1].
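The maximization in Corollary 1 can be approximated numerically by searching over auxiliary channels p(u|s). The sketch below (Python) does a crude random search restricted, as an assumption of the sketch, to S = X_1 and a binary U, for an arbitrary toy distribution; it therefore only gives a lower estimate of the capacity expression.

import math
import random
from itertools import product
from collections import defaultdict

# Arbitrary toy joint distribution p(x1, x2, x3), not taken from the paper.
pX = {}
for x1, x2, x3 in product((0, 1), repeat=3):
    pX[(x1, x2, x3)] = 0.5 * (0.9 if x2 == x1 else 0.1) * (0.7 if x3 == x1 else 0.3)

def H(d):
    return -sum(q * math.log2(q) for q in d.values() if q > 0)

def I_cond(joint, a, b, c):
    def m(idx):
        out = defaultdict(float)
        for k, q in joint.items():
            out[tuple(k[i] for i in idx)] += q
        return out
    return H(m(a + c)) + H(m(b + c)) - H(m(a + b + c)) - H(m(c))

best = 0.0
for _ in range(2000):
    w0, w1 = random.random(), random.random()       # p(u=1|s=0), p(u=1|s=1)
    joint = defaultdict(float)                       # over (u, s, x2, x3), with s = x1
    for (x1, x2, x3), q in pX.items():
        w = w1 if x1 == 1 else w0
        joint[(1, x1, x2, x3)] += q * w
        joint[(0, x1, x2, x3)] += q * (1 - w)
    val = I_cond(joint, (1,), (3,), (0,)) - I_cond(joint, (1,), (2,), (0,))
    best = max(best, val)
print(f"best value found: {best:.4f} bits (lower estimate of the capacity)")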
Theorem 3 (inner bound of the backward key capacity region): In the backward secret key strategy of the described source model, the rate pair (R_1, R_2) is an achievable key rate pair if:

0 ≤ R_1, 0 ≤ R_2,
R_1 ≤ I(S; X_1 | U) − I(S; X_2, T | U),
R_2 ≤ I(T; X_2 | U) − I(T; X_1, S | U),

where U, S and T are random variables taking values in sufficiently large finite sets and according to the distribution:

p(u, s, t, x_1, x_2, x_3) = p(u | s, t) p(s, t | x_3) p(x_1, x_2, x_3).
The proof is given in Section V.C. The intuitive interpretation of Theorem 3 is as follows. In the case of the backward key capacity region, only user 3 is permitted to send information to users 1 and 2. In this case, user 3 considers two random variables S, T with distribution p(s, t | x_3) and intends to send the required information so that users 1 and 2 can reconstruct the random variables S and T, respectively, and then user 3 exploits these random variables for sharing secret keys with these users. First, it transmits realizations of a random variable U which has distribution p(u | s, t), and then sends information with rate H(S | X_1, U) so that user 1 can reconstruct S, and information with rate H(T | X_2, U) so that user 2 can reconstruct T. Consequently, user 2 has access to the random variables X_2, U, T and also to information with rate H(S | X_1, U) for obtaining information about user 1's key. So:

R_1 = H(S) − I(S; X_2, U, T) − H(S | U, X_1)
    = I(S; U, X_1) − I(S; X_2, U, T)
    = I(S; X_1 | U) − I(S; X_2, T | U).

With the same approach the rate R_2 can be deduced.
Theorem 4 (outer bound of the backward key capacity region): In the backward secret key strategy of the described source model, if the rate pair (R_1, R_2) is an achievable key rate pair, then it satisfies:

0 ≤ R_1, 0 ≤ R_2,
R_1 ≤ min{ I(S; X_1 | U) − I(S; X_2 | U), I(S; X_1 | T, U) − I(S; X_2 | T, U) },
R_2 ≤ min{ I(T; X_2 | U) − I(T; X_1 | U), I(T; X_2 | S, U) − I(T; X_1 | S, U) },

where U, S and T are random variables taking values in sufficiently large finite sets and according to the distribution p(u, s, t, x_1, x_2, x_3) = p(u | s, t) p(s, t | x_3) p(x_1, x_2, x_3), which form Markov chains as U − S − X_3 and U − T − X_3.

In addition, the following bound is an explicit upper bound which can be easily deduced from Theorem 1 of [1]:

R_1 ≤ I(X_1; X_3 | X_2),
R_2 ≤ I(X_2; X_3 | X_1).
The proof is given in Section V. D.
Corollary 2: If user 2 is only interested in wiretapping and not sharing a secret key, the random variable T can be assumed to be constant. In this case, the lower bound of Theorem 3 coincides with the upper bound of Theorem 4 and the backward secret key capacity between users 1 and 3 would be equal to:

R_1 = max{ I(S; X_1 | U) − I(S; X_2 | U) }

for the random variables which form a Markov chain as U − S − X_3 − (X_1, X_2). This result is in agreement with the result of Theorem 1 of [1].
IV. SPECIAL CASES
In this section, we discuss some special cases in which the secret key capacity region can be found.
Case 1: When the sources X_1, X_2 and X_3 form a Markov chain as X_1 − X_2 − X_3, then the forward and backward key capacity regions reduce to:

R_1 = 0, 0 ≤ R_2 ≤ I(X_2; X_3 | X_1).

The achievability is obtained by replacing S = X_1, T = X_2, U = V = const in Theorem 1 and T = X_3, S = U = const in Theorem 3. It should be noted that, because of the above Markov chain, the equality I(X_2; X_3) − I(X_1; X_3) = I(X_2; X_3 | X_1) holds. For the converse part of the forward and backward key capacity regions, we directly exploit Theorems 2 and 4, respectively.

When the sources X_1, X_2 and X_3 form a Markov chain as X_2 − X_1 − X_3, the secret key capacity region can be derived by symmetry from Case 1.

Case 2: When the sources X_1, X_2 and X_3 form a Markov chain as X_1 − X_3 − X_2, then the forward key capacity region reduces to:

0 ≤ R_1 ≤ I(X_1; X_3 | X_2), 0 ≤ R_2 ≤ I(X_2; X_3 | X_1).

The achievability is obtained by replacing S = X_1, T = X_2, U = V = const in Theorem 1. It should be noted that, because of the above Markov chain, the equalities I(X_1; X_3) − I(X_1; X_2) = I(X_1; X_3 | X_2) and I(X_2; X_3) − I(X_1; X_2) = I(X_2; X_3 | X_1) hold. The converse part can be directly followed from Theorem 2.

Case 3: When the sources X_1, X_2 and X_3 form a Markov chain as X_1 − X_3 − X_2, then the backward key capacity region reduces to:

0 ≤ R_1 ≤ I(S; X_1 | U) − I(S; X_2 | U),
0 ≤ R_2 ≤ I(T; X_2 | U) − I(T; X_1 | U),

where U, S and T are random variables taking values in sufficiently large finite sets and according to the distribution p(u, s, t, x_1, x_2, x_3) = p(u | s, t) p(s, t | x_3) p(x_1, x_2, x_3), which form Markov chains as U − S − X_3, U − T − X_3, S − X_3 − X_1 and T − X_3 − X_2.

The existence of such random variables S and T can be deduced from the Markov chain X_1 − X_3 − X_2. This situation is shown in Fig. 3. For these random variables, we have I(S; T | X_1, U) = I(S; T | X_2, U) = 0, and so achievability can be deduced from Theorem 3. The converse part can be directly deduced from Theorem 2.
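The identity invoked in Case 1 is easy to verify numerically. The sketch below (Python) does so for a binary-symmetric chain X_1 − X_2 − X_3 with arbitrary illustrative crossover probabilities; it is only a sanity check, not part of the proof.

import math

# For a binary-symmetric Markov chain X1 - X2 - X3 (crossover probabilities a and b
# below are arbitrary illustrative values), I(X2;X3) - I(X1;X3) equals I(X2;X3|X1).
h = lambda p: 0.0 if p in (0.0, 1.0) else -p*math.log2(p) - (1-p)*math.log2(1-p)
a, b = 0.1, 0.2                        # P(X2 != X1), P(X3 != X2) -- assumed values
ab = a*(1-b) + (1-a)*b                 # P(X3 != X1) for the cascaded channel
I_23, I_13 = 1 - h(b), 1 - h(ab)       # with X1 uniform, these are the MIs in bits
I_23_given_1 = h(ab) - h(b)            # H(X3|X1) - H(X3|X2), using the chain
print(round(I_23 - I_13, 6), round(I_23_given_1, 6))   # the two numbers coincide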
Fig. 3. An example for the case X_1 − X_3 − X_2
V. PROOFS
In this section, proofs of the theorems in Section III are given.
Construction of the Codebooks
A. PROOF OF THEOREM 1
First, we describe random codebook generation at users 1 and 2. For a distribution
( )p s , collection of codewords Ns
,
each uniformly drawn from the set
N
T P
(
S
1
)
, is generated by user 1.
N
T P
(
S
1
)
denotes the set of jointly typical sequences Ns
.
Similarly, for a distribution
( )p t
, collection of codewords Nt
, each uniformly drawn from the set
N
T P
(
T
1
)
, is generated by
user 2. Now, for a fixed distribution
p u s , user 1 generates
)
(
2N I S U
( (
;
)
2
)
i.i.d. codewords of length N ,
NU a for
( )
a
{1,..., 2
N I S U
( (
;
)
2
)
}
with distribution
( )p u . Similarly, for a fixed distribution
p v t
(
)
, user 2 generates
codewords of length N ,
NV b for
( )
b
{1,..., 2
N I T V
( (
;
)
2
)
}
with distribution
( )p v .
2N I T V
( (
;
)
2
)
i.i.d.
User 1 divides the typical sequences Ns
into
12NR bins with the same size in a uniformly random manner where
R
1
(
H S X U R
,
2
1
(
)
)
. The index of each bin is denoted as k and the corresponding random variable is denoted as K .
Also the codewords of each bin are randomly divided into
12NR bins with the same size and the bin index of the latter bins is
denoted as k with the corresponding random variable K . It is obvious that in each internal bin with bin index k , there
are
12NR typical sequences Ns where
R
1
( ( ;
,
I S X U
2
1
)
)
which we use index k for them. Hence each typical
codeword Ns can be uniquely determined with three indices as
s
N
,
,
k k k
and vice versa. Similarly, user 2 divides the typical
sequences of Nt
into
22NR bins with the same size in a uniformly random manner where
R
2
(
10
H T X V
(
,
1
)
R
2
)
. The bin
index of each bin is denoted as l and the corresponding random variable is denoted as L . Also the codewords of each bin
are randomly divided into
22NR bins with the same size and the bin index of the latter bins is denoted as l with the
corresponding random variable L . It is obvious that in each internal bin with bin index l , there are
22NR typical
sequences Nt where
R
2
( ;
,
I T X V
1
1
)
which we use index l for them. Hence each typical codeword Nt
can be
uniquely determined with three indices as
N
t
,
l l
,
l
and vice versa.
Now, for every typical
X
N
1
N
x
1
, all codewords Ns which are jointly typical with 1
Nx , based on distribution
p s x ,
(
1
)
are collected in a set which is denoted as
S . In the same manner, for every typical
N
N
x
1
X
N
2
N
x
2
, all codewords Nt which are
jointly typical whith
Nx , based on distribution
2
p t x
(
2
)
, are collected in a set which is denoted as
N
T . The codebooks of
N
x
2
users 1 and 2 for
X
N
1
N
x
1
and
X
N
2
N
x
2
are shown in Fig.4. It is assumed that all the users are informed of the binning
schemes and distributions used.
k
1
2
•
•
•
k
1
2
•
•
•
••••
••••
••••
••••
••••
••••
••••
••••
••••
••••
••••
••••
••••
••••
••••
••••
••••
••••
••••
••••
••••
••••
••••
••••
••••
12NR
••••
••••
••••
••••
••••
12NR
••••
••••
••••
••••
••••
••••
S : Set of user 1’s codewords for
N
N
x
1
X
N
1
N
x
1
l
1
2
•
•
•
22NR
l
1
2
•
•
•
••••
••••
••••
••••
••••
••••
••••
••••
••••
••••
••••
••••
••••
••••
••••
••••
••••
••••
••••
••••
••••
••••
••••
••••
••••
••••
••••
••••
••••
••••
22NR
••••
••••
••••
••••
••••
••••
N
T : Set of user 2’s codewords for
N
x
2
X
N
2
N
x
2
Fig.4. Codebooks of users 1 and 2 for
X
N
1
N
x
1
and
X
N
2
N
x
2
11
Encoding
For encoding, users 1 and 2 observe the i.i.d. sequences
NX and
1
NX
2
, e.g.,
Nx
1
and
Nx
2
, respectively, and select the
corresponding sets
S and
N
N
x
1
T . User 1 randomly selects a sequence Ns
N
N
x
2
from the set
S and chooses the respective row
N
N
x
1
index ( k ) of the codeword (as shown in Fig.4) as secret key with user 3 and sends the respective column index ( k ) of the
codeword over the public channel. He also sends index a of its jointly typical sequence
NU a over the public channel.
( )
Similarly, user 2 randomly selects a sequence Nt
from the set
N
T and chooses the respective row index ( l ) of the
N
x
2
codeword (as shown in Fig.4) as secret key with user 3 and sends the respective column index ( l ) of the codeword over
the public channel. He also sends the index b of its jointly typical sequence
NV b over the public channel.
( )
Decoding and Error Probability Analysis
For decoding, user 3 receives the indices
k a l b
,
,
,
from the public channel and also observes the i.i.d. sequences
NX
3
e.g.,
Nx . User 3 decodes the pair
3
(
s
N
,
,
k k k
,
N
t
,
l l
,
l
)
if:
(
s
N
k k k
,
,
,
N
t
l l
,
,
l
,
N
x
3
)
N
(
T
0
)
(
P
S T X U V
3
,
,
,
)
when such pair
(
s
N
,
k k k
,
,
N
t
,
l l
,
l
exists and is unique and otherwise, he declares error. After decoding such
)
(
s
N
,
k k k
,
,
N
t
,
l l
,
l
)
, user
3 chooses the indices k and l as secret keys with users 1 and 2, respectively.
Ns
Now, we analyze the decoding probability of error. Without loss of generality, we assume that the codewords 1,1,1
and
Nt
1,1,1
are, respectively, chosen by users 1 and 2 and so the key pair
(
N
t
s
,
1,1,1 1,1,1
N
)
should be decoded by user 3. The event E is
defined as:
, ,
,
E k k k l l
( ,
,
l
)
{(
N
s
k k k
,
,
,
N
t
l l
,
,
l
,
N
x
3
)
N
(
T
0
)
(
P
S T X U V
3
,
,
,
)}
The decoding error probability is bounded as:
N
)
(
P
e
P E
{
c
(1,1,1,1,1,1) (
N
t
s
,
1,1,1 1,1,1
N
)chosen}
l l
(1,1),( ,
k k
( ,
)
)
(1,1)
P E k
{ ( ,1,
k l
, ,1,
l
) (
N
t
s
,
1,1,1 1,1,1
N
)chosen}
)
k k
( ,
(1,1)
P E k
{ ( ,1,
k
,1,1,1) (
N
t
s
,
1,1,1 1,1,1
N
)chosen}
( ,
l l
)
(1,1)
P E
{ (1,1,1, ,1,
l
l
) (
N
s
t
,
1,
1,1 1,1,1
N
)chosen}
The first term vanishes due to the joint asymptotic equipartition property (AEP):
c
P E
{
k l
(1,1,1,1,1,1) ( , )
(1,1) sent}
0
12
In the second term for ( ,
k k
)
(1,1)
and ( ,
l l
)
(1,1)
we have (according to the Slepian-Wolf Theorem [12])
P E k
{ ( ,1,
, ,1,
k l
l
) ( , )
k l
(1,1)sent} 2 R R N H S T X U V
,
1
2
(
(
3
,
,
)
0
)
In the third term for ( ,
k k
)
(1,1)
we have:
P E k
{ ( ,1,
k
,1,1,1) ( , )
k l
(1,1)sent} 2
R N H S X T U V
1
(
(
3
,
,
,
)
0
)
2
R N H S X T U
(
1
(
3
,
,
)
0
)
Finally, in the forth term for ( ,
l l
)
(1,1)
we have:
P E
{ (1,1,1, ,1,
l
l
) ( , )
k l
(1,1)sent} 2
R N H T X S U V
(
(
,
,
,
3
2
)
0
)
2
R N H T X S V
(
(
,
,
3
2
)
0
)
and hence, the decoding error probability can be bounded as:
2
R H S T X U V
,
(
,
,
3
)
0
)
2
N
)
(
eP
0
2
N R
(
1
If we set:
N R H S X T U
(
(
,
,
3
1
)
0
)
2
N R H T X S V
(
(
,
,
3
2
)
0
)
R H S X T U
(
,
,
3
H T X S V
3
(
,
,
)
)
1
R
2
R
1
R
2
H S T X U V
( ,
,
,
3
)
or in other words:
)
R H S X U H S X T U
(
(
,
,
,
)
H S X T U H S X T U
(
(
,
,
,
,
)
( ;
)
I S X T U I S X T U
( ;
,
,
)
3
2
1
R
2
2
1
(a)
)
(b)
)
3
3
2
1
3
3
H T X V H T X S V
(
(
)
,
,
,
)
H T X S V H T X S V
(
(
,
,
,
,
)
( ;
I T X S V
3
,
)
( ;
I T X S V
1
,
)
R
1
R
2
,
H S X U H T X V H S T X U V
1
( ,
(
(
)
)
,
,
,
2
3
)
;
( ,
I S T X U V
,
3
)
( ;
I S X T U I
)
,
2
T X S V
( ;
1
,
)
I S T U V
( ;
,
)
then for any 0
0 ,
N
)
(
eP
04
and if we set
04 , then the reliability condition 5 in Definition 1 will be satisfied. It
should be noted that in the above equations, equalities (a) and (b) follow from the distributions of random variable S and
T . It is obvious that the encoding scheme can satisfy the uniformity conditions (9) and (10) in Definition 1.
Analysis of Security Conditions
Now, we should analyze the security conditions (6) and (7) in Definition 1. User 2 attempts to obtain information about
user 1’s key and to this end, he exploits
M , X and the information which is sent by user 1 on the public channel:
N
2
2
13
I K M X K U
(
,
;
,
,
2
N
2
N
(a)
)
N
I K X K U
(
2
,
;
,
N
)
H K H K X K U
(
)
(
,
,
N
2
N
)
H K H K S X K U
,
(
(
)
,
,
N
N
2
H K H K X K U S
(
(
)
,
,
,
N
N
2
N
N
)
)
N
H S K X K U
,
(
,
,
N
2
N
)
N
N
H S X K U
2
,
(
,
N
)
N
H S K X K U
,
(
,
,
N
2
N
)
(b)
(c)
N
H K H S X K U
,
(
(
)
,
N
2
N
)
N
H S K X K U
,
(
,
,
N
2
N
)
NH S X U N
2
(
)
,
1
N
R H S X K U
,
(
,
N
)
N
2
N
H S K X K U
,
(
,
,
N
2
N
)
N
H S X U
(
,
N
2
N
)
N
1
N
NR H S X K U
,
(
,
N
)
N
2
1
N
H S K X K U
,
(
,
,
N
2
N
)
N
I S K X U
(
;
,
N
2
N
)
NR H S K X K U
(
,
,
,
N
N
2
1
N
)
N
1
H K X U
(
,
N
2
N
)
N
H K S X U
(
,
,
N
2
N
)
NR H S K X K U
(
,
,
,
N
N
2
1
N
)
N
1
H K S
(
N
,
N
X U
,
2
N
)
NR H S K X K U
(
,
,
,
N
N
2
1
N
)
N
1
H K
(
)
(d)
H K
(
)
(e)
1
N
2
NR H S K X K U
(
,
,
,
N
N
2
N
)
N
1
N
H S K X K U
,
(
,
,
N
)
N
1
N
(
2
1
)
In the above equations, (a) follows from the independence of
2M from other random variables, (b) from the fact that
the index k is one of the indices of Ns and the equality
H K X K U S
,
(
,
,
N
N
2
N
)
holds. For proving (c), we use Lemma
0
1 which is given in part A of the Appendix. Equality (d) is true because the index k is also one of the indices of Ns
.
Finally for (e), we use Lemma 2 (which is given in part B of the Appendix) to show that:
N
H S K X K U
,
(
,
,
N
2
N
)
N
2
.
Similarly, the security condition for user 2’s key is satisfied as:
I L M X
( ;
,
1
N
1
,
,
L V
N
)
N
4
3
(
)
and so, the security conditions (6) and (7) of Definition 1 are satisfied when
i
1
2
,i
,
,
1 2 3 4
,
.
B. PROOF OF THEOREM 2
For deriving upper bound of the forward key capacity region, we use the reliable and secure transmission conditions. In
the forward key strategy, users 1 and 2, respectively, generate the keys K and L for sharing with user 3:
K K M X
(
,
1
N
1
),
L L M X
(
,
2
N
2
)
Then, users 1 and 2, respectively, generate
1F and 2F where
F
1
f M X
1
1
(
,
N
1
),
F
2
f M X
2
2
(
,
N
2
)
and transmit them over the
public channel so that user 3 can reconstruct K and L with an arbitrary probability of error
0 . According to Fano’s
inequality:
14
1
N
H K L M X
(
,
,
3
N
3
,
F F
,
1
2
)
H
( )
(log
1)
1
After reconstructing these keys, user 3 uses K and L as secret keys with users 1 and 2, respectively, and for arbitrarily
small
0 , the following security conditions should be satisfied:
I K M X
(
;
,
2
N
2
,
F
1
)
N
. ,
I L M X
( ;
,
1
N
1
,
F
2
)
N
.
Now, we show that for keys that satisfy the reliability and security conditions described above, there exist random
variables
U V S T that form Markov chains as mentioned in Theorem 2 and satisfy the following relations:
,
,
,
H K I S T X U I S X U
)
3
( ; ,
( )
( ;
)
2
H L I T S X V I T X V
3
( ; ,
( )
( ;
)
1
)
We prove upper bound for
1R . The proof for
2R can be deduced by symmetry.
1
N
(b)
(c)
(d)
(e)
H K
( )
(a)
1
N
H K M X F
,
2
1
(
,
N
2
)
1
N
1
N
1
N
1
N
1
N
H K M X F
,
2
1
(
,
N
2
)
1
N
N
H K M X F F L
, )
3
(
,
,
,
1
3
2
1
H K X F
1
(
,
N
2
)
1
N
N
H K X F F L
,
, )
3
(
,
2
1
1
I K X F L F I K X F
[ ( ;
1
1
( ;
)
,
,
2
)]
1
N
3
N
2
N
i
1
I K X F L X
[ ( ;
,
i
3,
,
2
i
1
3,1
,
X
N
i
2, 1
,
F I K X
i
2,
1
( ;
)
X
1
i
3,1
,
X
N
2, 1
i
,
F
1
)]
1
N
i
1
I S T X U I S X U
[ (
)
i
i
( ;
i
i
2,
i
3,
;
,
i
i
)]
1
(f)
[ (
I S T X U I S X U
Q Q
Q Q
;
Q Q
)- (
Q
2,
3,
;
,
)]
where (a) results from the security condition, (b) from Fano’s inequality, (c) from independence of
(
M M from other
)
,
2
3
random variables, (d) from Lemma 3 (which is given in part C of the Appendix), (e) from definition of the random
variables U ,V ,S ,T as:
U
i
(
X
1
i
3,1
,
X
N
2, 1
i
,
F V
),
1
i
(
X
1
i
3,1
,
X
N
1, 1
i
,
F S
),
2
i
(
K U T
i
i
),
,
L V
( ,
i
)
and (f) from definition of the random variable Q which is uniformly distributed on {1 2
,
,..., N and setting
}
.
1
Similarly, by using the above mentioned variables we have:
R
2
1
N
H L
( )
I T S X
[ (
;
,
Q
Q
V I T X
)- (
;
Q
3,
Q Q
V
Q Q
1,
)]
15
It can be seen that the desired equations are satisfied with random variables which form Markov chains as in Theorem
2.
Construction of the Codebooks
C. PROOF OF THEOREM 3
First, we describe random codebook generation at user 3. For a distribution
p s t
( , )
, collection of codewords, (
s
N
N
,
t
)
each uniformly drawn from the set
N
T P
(
1
S T
,
)
, is generated by user 3. Now, for a fixed distribution
p u s t
, )
(
, user 3
generates
( (
2N I S T U
,
)
;
2
)
i.i.d. codewords of length N ,
NU a for
( )
a
{1,..., 2
N I S T U
( (
;
,
)
2
)
}
with distribution
( )p u .
User 3 divides the typical sequences of Ns
into
12NR bins with the same size in a uniformly random manner where
R
1
(
H S X T U R
,
1
(
)
,
2
)
. The bin index of each bin is denoted as k and the corresponding random variable is denoted
as K . Also the codewords of each bin are randomly divided into
12NR bins with the same size and the bin index of the latter
bins is denoted as k with the corresponding random variable K . It is obvious that in each internal bin with bin index k ,
there are
12NR typical sequences Ns where
R
1
I S X T U
( ( ;
,
1
)
,
2
)
which we use index k for them. Hence each typical
codeword Ns can be uniquely determined with three indices as
s
N
k k k
,
,
and vice versa. Also, user 3 divides typical
sequences of Nt
into
22NR bins with the same size in a uniformly random manner where
R
2
(
H T X S U R
,
2
)
(
,
1
)
. The
bin index of each bin is denoted as l and the corresponding random variable is denoted as L . Also the codewords of each
bin are randomly divided into
22NR bins with the same size and the bin index of the latter bins is denoted as l with the
corresponding random variable L . It is obvious that in each internal bin with bin index l , there are
22NR typical
sequences Nt
where
R
2
I T X S U
( ( ;
,
1
)
,
1
)
which we use index l for them. Hence each typical codeword Nt can be
uniquely determined with three indices as
N
t
l l
,
,
l
and vice versa.
Now, for every typical
X
N
3
N
x
3
, all codewords (
s
N
N
t which are jointly typical with
)
,
Nx , based on distribution
3
p s t x , are collected in a set which is denoted as
( ,
)
3
N
S T
,
(
N
) N
x
3
. It is assumed that all the users are informed of the binning
schemes and distributions used.
Encoding
For encoding, user 3 observes the i.i.d. sequence of
NX e.g.,
3
Nx and after selecting the corresponding set
3
N
S T
,
(
N
,
) N
x
3
16
he randomly selects a sequence (
s
N
N
,
t
)
from this set. Then, he chooses the respective row index ( k ) of the codeword
Ns
(as shown in Fig.4) as secret key with user 1 and sends the respective column index ( k ) of the codeword over the
public channel. Also, he chooses the respective row index ( l ) of the codeword Nt
(as shown in Fig.4) as secret key with
user 2 and sends the respective column index ( l ) of the codeword over the public channel. In addition, user 3 sends index
a of
NU a which is jointly typical with the sequence (
( )
s
N
,
t
N
)
over the public channel.
Decoding and Error Probability Analysis
For decoding, users 1 and 2 receive the indices
,
,
k l a
from the public channel and also observe the i.i.d. sequences
NX and
1
NX e.g., 1
Nx and
2
Nx , respectively. User 1 decodes
2
s
N
k k k
,
,
if:
(
s
N
k k k
,
,
,
N
x
1
)
)
N
(
T
0
(
P
S X U
1
,
)
when such
s
N
k k k
,
,
exists and is unique and otherwise he declares error. User 2 decodes
N
t
l l
,
if:
,
l
N
t
(
l l
,
,
l
,
N
x
2
)
)
N
(
T
0
(
P
T X U
2
,
)
when such
N
t
l l
,
,
l
exists and is unique and otherwise he declares error.
Now we analyze decoding error probability. We define:
N
)
(
P
e
max{
)
N
(
P
e
1
,
N
(
P
e
2
)
}
where
)
N
(
eP
1
N
and (
eP
2
)
are, respectively, decoding error probabilities at users 1 and 2. Without loss of generality, we assume
Ns
that the codewords 1,1,1
Nt
and 1,1,1
Ns
are chosen by user 3 and so, 1,1,1
Nt
and 1,1,1
should be decoded by users 1 and 2,
respectively. Events
1E and
2E are defined as:
E k k k
( ,
,
1
)
{(
N
s
k k k
,
,
,
N
x
1
)
)
N
(
T
0
(
P
S X U
1
,
)}
E l l
( ,
2
,
l
)
N
t
{(
l l
,
,
l
,
N
x
2
)
)
N
(
T
0
(
P
T X U
2
,
)}
The decoding error probabilities are bounded as:
)
N
(
P
e
1
c
P E
{
1
(1,1,1) (
N
t
s
,
1,1,1 1,1,1
N
)chosen}
)
k k
( ,
(1,1)
P E k
{ ( ,1,
k
) (
N
t
s
,
1,1,1 1,1,1
N
)chosen}
)
N
(
P
e
2
P E
{
c
2
(1,1,1) (
N
t
s
,
1,1,1 1,1,1
N
)chosen}
l l
( ,
)
(1,1)
P E l
{ ( ,1,
l
) (
N
t
s
,
1,1,1 1,1,1
N
)chosen}
17
According to the joint asymptotic equipartition property (AEP), decoding error probabilities can be bounded as:
)
)
(
N
P
e
1
(
N
P
e
2
0
2
0
2
R N H S X U
(
11
(
1
,
)
0
)
R N H T X U
(
12
(
2
,
)
0
)
and if we set:
1
R
2
R H S X U
(
,
)
1
H T X U
(
,
2
)
or in other words:
R H S X T U H S X U
,
(
(
)
,
,
I S X U I S X T U
( ;
( ;
)
,
H T X S U H T X V
)
(
(
,
,
,
I T X U I T X S U
( ;
( ;
)
,
)
)
1
2
1
2
2
1
)
)
1
R
2
2
1
then for any
0 ,
0
N
)
(
eiP
02
for
i
1, 2
and so
N
)
(
eP
02
and if we set
02 , then the reliability condition 5 in
Definition 1 will be satisfied. It is obvious that the encoding scheme can satisfy the uniformity conditions (9) and (10) in
Definition 1.
Analysis of Security Conditions
Now, we should analyze the security conditions (6) and (7) in Definition 1. User 2 attempts to obtain information about
user 1’s key and to this end, he exploits
the indices k , l and a :
M , X and the information which is sent by user 3 over the public channel, i.e.,
N
2
2
N
I K M X K L U
(
2
,
,
;
,
,
2
N
(a)
)
N
I K X K L U
(
,
2
,
,
;
N
)
,
I K X K L T U
(
,
;
,
,
N
N
2
N
)
I K X K T U
(
,
;
,
,
N
N
2
N
)
N
I K L X K T U
(
,
2
,
N
;
,
N
)
(b)
I K X K T U
(
,
;
,
,
N
N
2
N
)
H K H K X K T U
(
(
)
,
,
,
N
N
2
N
)
N
H K H K S X K T U
,
(
)
(
N
,
,
,
N
2
H K H K X
)
(
(
N
2
,
N
K T U S
,
,
N
,
N
N
)
)
N
N
H S K X K T U
2
,
(
N
,
,
,
N
)
N
N
H S X K T U
,
2
,
(
N
,
N
H K H S X K T U
,
)
(
(
N
,
,
N
2
N
)
N
N
H S K X K T U
2
,
(
N
,
,
,
N
N
H S K X K T U
2
,
(
N
,
,
,
N
)
N
)
N
)
NH S X T U NR H S X K T U
1
,
(
(
)
,
,
,
,
2
N
2
N
N
N
)
N
N
H S K X K T U
2
,
(
N
,
,
,
N
)
N
N
H S X T U
2
(
N
,
,
N
)
N
1
NR H S X K T U
(
,
,
,
N
N
N
2
1
N
)
H
(
N
N
S K X K T U
,
2
,
N
,
,
N
)
N
I S K X T U
(
;
,
,
N
N
2
N
)
NR H S K X K T U
,
(
,
,
,
N
N
2
1
N
)
N
1
N
H K X T U
2
(
N
,
,
N
)
H K S X T U
,
(
,
,
N
N
N
2
N
)
NR H S K X K T U
,
(
,
,
,
N
N
N
2
N
)
N
1
N
1
(c)
(d)
H K
(
)
H K S X T U
,
(
,
,
N
N
N
2
N
)
NR H S K X K T U
,
(
,
,
,
N
N
N
2
1
(e)
(f )
H K
(
)
NR H
1
(
N
N
S K X K T U
,
2
,
N
,
,
N
)
N
1
N
N
H S K X K T U
2
,
(
N
,
,
,
N
)
N
1
N
(
2
1
)
N
)
N
1
18
In above equations, (a) follows from the independence of
2M from other random variables, (b) from the fact that given
NT , L is impendent of other random variables, (c) from the fact that the index k is one of the indices of Ns and the
equality
,
H K X K T U S
(
,
,
,
N
N
N
2
N
)
holds. For proving (d), we use the same approach as in Lemma 1 which is given
0
in part A of the Appendix. Equality (e) is true because the index k is also one of the indices of Ns
. Finally for (f), we use
the same approach as in Lemma 2 (which is given in part B of the Appendix) to show that:
N
N
H S K X K T U
2
,
(
N
,
,
,
N
)
N
2
.
Similarly, the security condition for user 2’s key is satisfied as:
N
I L M X K L U
( ;
1
,
,
,
,
1
N
)
N
4
3
(
)
and so, the security conditions (6) and (7) of Definition 1 are satisfied when
i
1
2
,i
,
,
1 2 3 4
,
.
D. PROOF OF THEOREM 4
For deriving upper bounds of the backward key capacity region, we use the reliable and secure transmission conditions.
In the backward key strategy, user 3 generates the keys K and L for sharing with users 1 and 2, respectively:
K K M X
(
,
(cid:0)
3
N
3
),
L L M X
(
,
(cid:0)
3
N
3
)
Also, it sends
3F over the public channel where 3
F
f M X
3
3
(
,
)N
3
to enable users 1 and 2 to compute K and L ,
respectively, with an arbitrary probability of error
0 . According to Fano’s inequality:
1
N
1
N
H K M X
(
,
1
H L M X
(
,
2
N
1
,
F
3
)
H
( )
(log
1)
1
,
N
2
,
F
3
)
H
( )
(log
1)
2
Also the security conditions require that:
I K M X
(
;
,
2
N
2
,
F
3
)
N
,
I L M X
( ;
,
1
N
1
,
F
3
)
N
Now, we derive upper bounds for
1R . The proofs for
2R can be deduced by symmetry. For the first upper bound of
1R :
19
1
N
(b)
(c)
(d)
(e)
1
N
1
N
1
N
1
N
1
N
1
N
1
N
1
N
1
N
1
N
1
N
1
N
(b)
(c)
(d)
(e)
(f)
(g)
(
H K
)
(a)
1
N
,
H K M X F
2
3
(
,
N
2
)
[
)
H K M X F H K M X F
3
(
(
,
,
,
,
1
2
3
)]
1
N
1
N
2
[
)
H K X F H K X F
3
(
(
,
,
3
)]
1
N
1
N
2
[ ( ;
I K X F
3
N
1
)
( ;
I K X F
3
N
2
)]
1
N
i
1
i
[ ( ;
I K X X
1
i
1,
1
N
N
i
1
[ (
;
I S X U
i
i
1,
i
1
,
X
N
i
2, 1
,
F
3
)
i
( ;
I K X X
1
i
2,
1
,
X
N
i
2, 1
,
F
3
)]
1
)]
(
;
I S X U
i
i
2,
i
)]
1
(f)
[ (
I S X U I S X U
Q Q
)- (
Q Q
Q
Q
2,
;
;
1,
)]
where (a) results from the security condition, (b) from Fano’s inequality at user 1, (c) from independence of
(
M M
)
,
1
2
from other random variables, (d) from Lemma 3 (in which the random variable
2F is set to be constant), (e) from
definition of the random variables U ,S ,T as:
U
i
(
X
1
i
1
,
X
N
2, 1
i
,
F
3
S
),
i
(
K U T
i
i
),
,
L U
( ,
)
i
and (f) from definition of the random variable Q which is uniformly
distributed on {1 2
,
,..., N and setting
}
1
.
For the second upper bound of
1R , we have:
1
N
H K
(
)
(a)
1
N
H K M X F
,
2
3
(
,
N
2
)
1
N
H K L M X F
3
(
,
,
,
2
N
2
)
)
,
H K M X F L
(
,
,
)
,
H K M X F L
(
,
,
1
N
,
H L M X F
2
3
(
,
N
2
)
2
3
3
2
2
N
2
N
2
N
2
)
H K M X F L H K M X F
3
(
(
,
,
,
,
,
1
3
2
)]
1
2
N
1
, )
H K X F L H K X F
3
(
(
,
,
3
)]
1
2
N
1
N
2
(
H K X
N
2
,
F L H K X F L
(
3
)
,
,
,
3
)]
N
1
1
2
N
I K X F L
[ (
)
1
;
,
3
N
i
1
i
I K X X
[ (
;
1
1,
i
I K X
(
;
N
2
,
F L
,
3
)]
1
2
1
,
X
N
i
2, 1
,
F L
,
)
3
i
I K X X
(
1
2,
;
i
1
,
X
N
i
2, 1
,
F L
,
3
)]
1
2
N
i
1
[ (
I S X U T
1,
i
i
;
,
i
i
)]
(
I S X U T
2,
i
i
;
,
i
i
)]
1
2
I S X U T I S X U T
[ (
,
Q Q
,
Q Q
)- (
2,
Q
Q
Q
Q
1,
;
;
)]
where (a) results from the security condition, (b) from Fano’s inequality at user 2, (c) from Fano’s inequality at user 1, (d)
from independence of
(
M M from other random variables, (e) from Lemma 3 (in which the random variable
)
,
1
2
2F is set
20
to be constant), (f) from definition of the random variables U ,S ,T as above and (g) from definition of the random
variable Q as above and setting
.
1
2
Following the same approach, upper bounds for
2R can be deduced and so Theorem 4 is proved for some random
variables with distribution
U T X
.
3
( ,
p u s t x x x
, ,
3
,
,
1
2
)
(
p u s t p s t x p x x x
3
( ,
, )
(
)
,
,
2
1
3
)
which form Markov chains as
U S X
and
3
VI. CONCLUSIONS
In this paper, a source model for secret key generation were studied in which each of users 1 and 2 intends to share a
secret key with user 3 where user 1 acts as a wiretapper for user 2 and vice versa. Three users could observe i.i.d outputs of
correlated sources and there is a public channel between users. In the described model, the forward and backward key
strategies were considered based on the direction of the public channel, i.e., from users 1 and 2 to user 3 or in the reverse
direction. For both the forward and backward key strategies, inner and outer bounds of secret key capacity regions were
derived. Our results also include the results of previous works such as [1]. Our upper and lower bounds did not coincide
generally but some special cases were considered where these bounds were tight.
As the continuation of this work, we are now exploring a model similar to the described model but instead of the public
channel, there is a generalized multiple access channel (MAC) between the terminals, where users 1 and 2 govern the
inputs of the MAC and outputs are received by users 1, 2 and 3. Also as the future works, we can suggest the same
problem of this paper for the situation where there is a two-way public channel i.e., from users 1 and 2 to user 3 and vice
versa. Also unlimited usage of the public channel can be viewed as a generalization of the problem.
APPENDIX
A. LEMMA 1
For sufficiently large N and sufficiently small 1 , we have:
NH S X U
(
,
2
)
N
H S X U
(
,
N
2
N
)
N
1
Proof: We use the indicator function:
s x u
( ,
, )
2
1, (
N
s
,
N
x
2
N
,
u
)
(
N
A
0
)
(
P
S X U
2
,
,
)
0, otherwise
21
We have:
N
I S X U
(
;
,
N
2
N
)
I S
(
N
,
;
N
X U
,
2
N
)
and hence:
N
H S X U
(
,
N
2
N
)
H S
(
N
)
I S
(
N
,
;
N
X U
,
2
N
)
NH S
( )
I S
(
N
,
;
N
X U
,
2
N
)
NH S
( )
I S X U
(
;
,
N
N
2
N
I
;
(
)
N
X U
,
2
N
)
NH S
( )
P
(
1) (
I S X U
;
,
N
N
2
N
1)
P
(
0) (
I S X U
;
,
N
N
2
N
0)
I
(
;
N
X U
,
2
N
)
We analyze the above terms one by one.
For the second term:
N
1)
NP s
[(
N
,
N
x
2
N
,
u
)
N
(
A
0
)
(
P
S X U
2
,
,
)]log
N
0
log
N
0)
N
I S X U
(
;
,
N
2
N
0)
P s
(
N
,
N
x
2
N
,
u
P s
)[log (
N
,
N
x
2
N
,
u
)
P s
log (
N
)
log (
N
P x
2
N
,
u
)]
P
(
1) (
I S X U
;
,
N
N
2
For the third term:
P
(
0) (
N
N
2
,
I S X U
;
(
N
A
)
0
P
,2
S X U
)
(
)
,
N
(
s
,
N
x
2
,
u
N
N H S H X U H S X U
,
2
( )
( ,
(
(
)
,
2
) 3
0
)
N I S X U
( ( ;
,
2
) 3
0
)
For the forth term:
I
(
;
N
X U
,
2
N
)
(
H
) 1
Finally, we can deduce:
H(S^N | X_2^N, U^N) ≥ N H(S) − N ε_0 log|S| − N (I(S; X_2, U) + 3 ε_0) − 1
 = N (H(S | X_2, U) − ε_0 log|S| − 3 ε_0 − 1/N) ≥ N (H(S | X_2, U) − ε_1),
where ε_1 = ε_0 log|S| + 3 ε_0 + 1/N.
B. LEMMA 2
For sufficiently large N and sufficiently small ε_2, in the forward key strategy, we have:
(1/N) H(S^N, K | X_2^N, K′, U^N) ≤ ε_2
Proof: For fixed k and k′, we assume that user 1 transmits a codeword s^N_{k,k′,k″}, where 1 ≤ k ≤ 2^{N R_1}, 1 ≤ k′ ≤ 2^{N R′_1} and 1 ≤ k″ ≤ 2^{N R″_1}. First, we show that user 2 can decode s^N_{k,k′,k″} with sufficiently small probability of error if it has access to the indices k, k′ and the sequences x_2^N, u^N. User 2 selects k̂ so that:
(s^N_{k,k′,k̂}, x_2^N, u^N) ∈ A_{ε_3}^{(N)}(P_{S,X_2,U})
if such k̂ exists and it is unique; otherwise, an error is declared. With the assumption that s^N_{k,k′,1} is sent by user 1, an error occurs when (s^N_{k,k′,1}, x_2^N, u^N) ∉ A_{ε_3}^{(N)}(P_{S,X_2,U}) or when (s^N_{k,k′,k̂}, x_2^N, u^N) ∈ A_{ε_3}^{(N)}(P_{S,X_2,U}) for some k̂ ≠ 1. Due to the joint AEP:
P((s^N_{k,k′,1}, x_2^N, u^N) ∉ A_{ε_3}^{(N)}(P_{S,X_2,U})) ≤ ε_3
and also:
Σ_{k̂ ≠ 1} P{(s^N_{k,k′,k̂}, x_2^N, u^N) ∈ A_{ε_3}^{(N)}(P_{S,X_2,U})} ≤ 2^{N R″_1} · 2^{−N (I(S; X_2, U) − 3 ε_3)} = 2^{−N ε_{31}}
So, we can bound decoding error of user 2 as:
P_e ≤ ε_3 + 2^{−N ε_{31}}
and by choosing ε_3 and N appropriately, we can make P_e sufficiently small.
Now, we exploit Fano’s inequality to obtain:
(1/N) H(S^N, K | X_2^N, K′, U^N) ≤ (1/N) [1 + P_e N R_1] ≤ 1/N + (ε_3 + 2^{−N ε_{31}}) [I(S; X_2, U) + ε_1] = ε_2
This lemma is a modified version of Lemma 1 in [1].
C. LEMMA 3
For arbitrary random variables K, F_1, F_2 and sequences of random variables X_2^N, X_3^N we have [1]:
I(K; X_3^N, F_2 | F_1) − I(K; X_2^N | F_1) ≤ Σ_{i=1}^{N} [ I(K, F_2; X_{3,i} | X_{3,1}^{i−1}, X_{2,i+1}^{N}, F_1) − I(K; X_{2,i} | X_{3,1}^{i−1}, X_{2,i+1}^{N}, F_1) ]
Proof: First, we consider the right hand side of the above inequality:
Σ_{i=1}^{N} [ I(K, F_2; X_{3,i} | X_{3,1}^{i−1}, X_{2,i+1}^{N}, F_1) − I(K; X_{2,i} | X_{3,1}^{i−1}, X_{2,i+1}^{N}, F_1) ]
 = Σ_{i=1}^{N} [ H(K | X_{3,1}^{i−1}, X_{2,i}^{N}, F_1) − H(K | X_{3,1}^{i}, X_{2,i+1}^{N}, F_1, F_2) ]
 = H(K | X_{2,1}^{N}, F_1) − H(K | X_{3,1}^{N}, F_1, F_2) + Σ_{i=1}^{N−1} [ H(K | X_{3,1}^{i}, X_{2,i+1}^{N}, F_1) − H(K | X_{3,1}^{i}, X_{2,i+1}^{N}, F_1, F_2) ]
 = I(K; F_2, X_{3,1}^{N} | F_1) − I(K; X_{2,1}^{N} | F_1) + Σ_{i=1}^{N−1} I(K; F_2 | X_{3,1}^{i}, X_{2,i+1}^{N}, F_1)
 ≥ I(K; F_2, X_{3,1}^{N} | F_1) − I(K; X_{2,1}^{N} | F_1)
REFERENCES
[1] R. Ahlswede and I. Csiszár, “Common randomness in information theory and cryptography, part I: Secret sharing,” IEEE Trans.
Inf. Theory, vol. 39, no. 4, pp. 1121–1132, Jul. 1993.
[2] I. Csiszár and J. Körner, “Broadcast channels with confidential messages,” IEEE Trans. Inf. Theory, vol. 24, no. 3, pp. 339–348,
May 1978.
[3] I. Csiszár and P. Narayan, “Secrecy capacities for multiple terminals,” IEEE Trans. Inf. Theory, vol. 50, no. 12, pp. 3047–3061,
Dec. 2004.
[4] I. Csiszár and P. Narayan, “Secrecy capacities for multiterminal channel model,” IEEE Trans. Inf. Theory, vol. 54, no. 6, pp.
2437–2452, Jun. 2008.
[5] A. A. Gohari and V. Anantharam, “New bounds on the information-theoretic key agreement of multiple terminals”, in Proc. IEEE
Int. Symp. Inf. Theory (ISIT), Toronto, Canada, pp. 742-746, Jul. 2008.
[6] A. A. Gohari and V. Anantharam, “Information-theoretic key agreement of multiple terminals - Part I: Source model”, IEEE
Trans. Inf. Theory, submitted, Jun. 2008.
[7] A. A. Gohari and V. Anantharam, “Information-theoretic key agreement of multiple terminals - Part II: Channel model”, IEEE
Trans. Inf. Theory, submitted, Jun. 2008.
[8] U. M. Maurer, “Secret key agreement by public discussion from common information,” IEEE Trans. Inf. Theory, vol. 39, no. 3,
pp. 733–742, May 1993.
[9] U. Maurer and S. Wolf, “Information-theoretic key agreement: From weak to strong secrecy for free,” in Proc.
EUROCRYPT'2000, LNCS, vol. 1807, Bruges, Belgium: Springer-Verlag, pp. 351–368, May 2000.
[10] R. Renner and S. Wolf, “New bounds in secret-key agreement: the gap between formation and secrecy extraction,”in Proc.
EUROCRYPT’03, LNCS, Warsaw, Poland: Springer-Verlag, pp. 562-577, May 2003.
[11] C. E. Shannon, “Communication theory of secrecy systems,” AT&T Bell Labs. Tech. J., vol. 28, pp. 656–715, 1949.
[12] D. Slepian and J. K. Wolf, “Noiseless coding of correlated information sources,” IEEE Trans. Inf. Theory, vol. 19, no. 4, pp. 471–
480, Jul. 1973.
[13] A. Wyner, “The wire-tap channel,” AT&T Bell Labs. Tech. J., vol. 54, pp. 1355–1387, 1975.
[14] C. Ye and P. Narayan, “The private key capacity region for three terminals,” in Proc. IEEE Int. Symp. Inf. Theory (ISIT),
Chicago, USA, p. 44, Jun. 2004.
[15] C. Ye and A. Rezenik, “Group secret key generation algorithms,” in Proc. IEEE Int. Symp. Inf. Theory (ISIT), Nice, France, pp.
2596-2600, Jun. 2007.
|
synthetic_cpt | 4 | DART-Math_Difficulty-Aware_Rejection_Tuning_for_Mathematical_Problem-Solving.pdf | 6
1
0
2
g
u
A
9
2
]
O
C
.
h
t
a
m
[
1
v
5
4
0
8
0
.
8
0
6
1
:
v
i
X
r
a
CONNECTEDNESS OF THE DART DIGRAPH AND THE
SQUARED-DART DIGRAPH
PRIMOŽ POTOČNIK AND STEVE WILSON
Abstract. In this note we revisit the dart graph and the squared dart digraph
constructions and prove that they yield strongly connected digraphs when
applied to connected graphs of minimum valence at least 3.
1. Introduction
In [1] and [3, Section 4], two constructions, called a dart digraph and a squared
dart digraph, were introduced. The second of these is a directed form of a graph
introduced in [2]. The purpose of this note is to prove that these two constructions
yield strongly connected digraphs whenever applied to connected graphs.
All the graphs and digraphs in this note are considered simple. More precisely,
we define a digraph to be a pair (V, D) in which V is a finite non-empty collection of
things called vertices and D is a collection of ordered pairs of distinct vertices. An
element (u, v) of D will be called a dart with initial vertex u and terminal vertex
v. A 2-dart of a digraph (V, D) is a pair (x, y) of darts in D such that the terminal
vertex of x coincides with the initial vertex of y while the initial vertex of x does
not coincide with the terminal vertex of y.
If for every dart (u, v) of a digraph Λ also its reverse (u, v)−1 = (v, u) is a dart,
then Λ is called a graph. In this case, we call the pair {(u, v), (v, u)} of darts an
edge of Λ.
We are now ready to define the dart digraph and the squared dart digraph of a
given graph Λ with the set of darts D.
The dart digraph of Λ is the digraph D(Λ) with vertices and darts being the
darts and 2-darts of Λ, respectively.
Similarly, let the squared dart digraph of Λ be the digraph A2D(Λ) with vertex-
set D × D and with a pair ((x, y), (z, w)), x, y, z, w ∈ D, being a dart of A2D(Λ) if
and only if y = z and (x, w) is a 2-dart of Λ.
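To make the two constructions easy to experiment with, the following is a small illustrative sketch (not part of the note) that enumerates the darts, 2-darts, dart digraph and squared dart digraph of a simple graph given by an edge list; all function and variable names are ours.

```python
from itertools import product

def darts(edges):
    """Each edge {u, v} of a simple graph yields the two darts (u, v) and (v, u)."""
    return [(u, v) for u, v in edges] + [(v, u) for u, v in edges]

def is_2_dart(x, y):
    """(x, y) is a 2-dart: the terminal vertex of x is the initial vertex of y,
    while the initial vertex of x is not the terminal vertex of y."""
    return x[1] == y[0] and x[0] != y[1]

def dart_digraph(edges):
    """D(Lambda): vertices are the darts, darts are the 2-darts."""
    D = darts(edges)
    return D, [(x, y) for x, y in product(D, D) if is_2_dart(x, y)]

def squared_dart_digraph(edges):
    """A2D(Lambda): vertices are pairs of darts; ((x, y), (z, w)) is a dart
    if and only if y == z and (x, w) is a 2-dart."""
    D = darts(edges)
    V = list(product(D, D))
    A = [(p, q) for p, q in product(V, V) if p[1] == q[0] and is_2_dart(p[0], q[1])]
    return V, A

# Example: K4, a connected graph in which every vertex has valence 3.
k4 = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
V1, A1 = dart_digraph(k4)
print(len(V1), len(A1))   # 12 darts and 24 2-darts
```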
Recall that a digraph is said to be strongly connected provided that for any two
vertices u, v, there is a directed path from u to v (we then say that v is accessible
from u), as well as one from v to u.
2. Results
The first of our results is a simple observation about bipartiteness of the dart
digraph and the squared dart digraph. (A digraph is said to be bipartite if its
underlying graph is bipartite.)
2000 Mathematics Subject Classification. 20B25.
Key words and phrases. digraph, graph, transitive, product.
Supported in part by the Slovenian Research Agency, projects J1-5433, J1-6720, and P1-0294.
Lemma 2.1. If Λ is a bipartite graph, then D(Λ) and A2D(Λ) are also bipartite.
Proof. Colour the vertices of Λ properly black and white and let a dart of Λ inherit
the colour of its initial vertex; this then determines a proper colouring of the vertices
of D(Λ); in particular, D(Λ) is bipartite.
Further, colour a vertex (x, y) of A2D(Λ) blue if the darts x, y of Λ are of the
same colour as vertices in D(Λ) (either black or white), and red otherwise. This is
then clearly a proper colouring of the vertices of A2D(Λ). □
We will now introduce a few auxiliary notions needed to analyse connectedness
of the dart digraph and the square dart digraph.
An s-arc in a graph Λ is a walk of length s in which no two of any three consec-
utive vertices are the same; alternatively, it is a sequence of darts in Λ such that
any two consecutive darts form a 2-dart.
An arc-cycle is a closed walk which is also an s-arc for some s, and in addition,
if it begins with (a, b) and ends with (c, a), then c is required to be different from
b. Note that any cyclic shift of an arc-cycle is also an arc-cycle. Observe that an
s-arc in Λ corresponds to a directed walk in D(Λ) of length s − 1, and an arc-cycle
Λ of length s corresponds to a directed closed walk of length s in D(Λ).
An s-arc, written as a sequence [a0, a1, a2, . . . , as−1, as] of vertices, is a balloon
if a0, a1, . . . , as−1 are pairwise distinct and as = ai for some i ∈ {1, 2, . . . , s − 3}.
The arc (a0, a1) is then called the beginning of the balloon.
Lemma 2.2. Let Λ be a graph in which every vertex has valence at least 3 and let
(u, v) be a dart of Λ. Then (u, v) is the beginning of some balloon in Λ.
Proof. Let Λ′ be the connected component containing v of the graph obtained from
Λ by removing the vertex u and all of the edges incident to u. Since every vertex of Λ
has valence at least 3, every vertex of Λ′ has valence at least 2, and so Λ′ is not a tree.
Hence Λ′ contains a cycle, say C = a0a1 . . . ak with ak = a0. Let vv1 . . . vm be a
path from v to C in Λ′. Without loss of generality we may assume that vm = a0.
Then [u, v, v1, . . . , vm, a1, a2, . . . , ak] is a balloon in Λ starting with (u, v). □
Lemma 2.3. Let Λ be a graph in which every vertex has valence at least 3. Then
the greatest common divisor of the lengths of all arc-cycles in Λ is at most 2.
Proof. Let C be a cycle in Λ, let m be its length, let uv be an edge of C, let a be
a neighbour of u other than its neighbours in the cycle C, and let b be that for v.
Let α, β be balloons beginning with (u, a) and (v, b), respectively. Then the walk
beginning at u, following α out to and around its cycle and back to u along the
initial part of α, then in one step to v, then following β out to and around its cycle
and back to v following the initial part of β, then finally from v back in one step
to u is an arc-cycle γ of some length n. Replacing that last step from v to u by
the path formed from C by removing the edge {u, v} gives an arc-cycle of length
m + n − 2. As the greatest common divisor of m, n, and m + n − 2 is at most 2,
the result follows. □
Theorem 2.4. If Λ is a connected simple graph in which every vertex has valence
at least 3, then D(Λ) and A2D(Λ) are strongly connected.
Proof. Let ∆ = D(Λ). We begin the proof of the strong connectivity of ∆ by
proving two claims:
Claim 1: Let x = (u, v) be a dart of Λ and let x−1 = (v, u) be its inverse dart.
Then there exists a directed walk from x to x−1 in ∆.
Indeed: By Lemma 2.2, there exists a balloon α = [a0, a1, a2, . . . , as−1, as] in Λ,
beginning with x (that is, a0 = u and a1 = v). Let i ∈ {1, . . . , s − 2} be such
that as = ai. Then β = [a0, a1, a2, . . . , as−1, as = ai, ai−1, ai−2, . . . , a2, a1, a0] is an
(s + i)-arc in Λ, yielding a directed walk from x to x−1 in ∆. This proves Claim 1.
Claim 2: If e and f are two edges in Λ, then there exists a directed walk in
∆ from some x to some y such that the underlying edges of x and y are e and f ,
respectively.
To prove this, consider a shortest path va1a2 . . . akw from e to f . Then e = {u, v}
and f = {w, z} for some vertices u and z of Λ such that a1 6= u and ak 6= z. But
then (u, v) (v, a1) (a1, a2) . . . (ak−1, ak) (ak, w) (w, z) is a directed walk in ∆ from
x = (u, v) to y = (w, z), underlying e and f respectively. This proves Claim 2.
Note that strong connectivity of ∆ now follows directly from Claims 1 and 2.
Namely, if x and y are two vertices in ∆ (and thus darts in Λ), then Claim 2
implies existence of a directed walk in ∆ from either x or x−1 to either y or y−1.
By inserting directed walks (the existence of which is implied by Claim 1) from x
to x−1 and y−1 to y, if necessary, one obtains a directed walk in ∆ from x to y.
Now we are ready to prove that A2D(Λ) is strongly connected. Let (x, y) and
(w, z) be any two vertices in A2D(Λ). Then x, y, w and z are darts of Λ and hence
vertices of ∆. Since ∆ is strongly connected, there are directed walks from x to w
and from y to z, and moreover, we may choose these two walks so that each passes
through every vertex of ∆.
By Lemma 2.3, the greatest common divisor D of the lengths of all arc-cycles in Λ
is at most 2. Thus, by inserting arc-cycles appropriately, we can cause the length of
the two walks to differ by at most 1. Let these walks be α = [x = a0, a1, . . . , ak = w]
and β = [y = b0, b1, . . . , bℓ = z] where |k − ℓ| ≤ 1. Here, each ai and bi is a dart in
Λ and each (ai, ai+1) and (bi, bi+1) is a 2-dart.
If Λ is not bipartite, then D = 1, and we can force k to be equal to ℓ. Then the
sequence
(a0, b0), (b0, a1), (a1, b1), (b1, a2), . . . , (ak, bk)
is a directed walk of length 2k from (x, y) to (w, z).
Now suppose that Λ is bipartite. Recall that (see Lemma 2.1) that then also
vertices A2D(Λ) can be properly bi-coloured blue and red where a vertex (x, y)
is coloured blue whenever the initial vertices of x and y are at even distance in
Λ. Since every vertex in A2D(Λ) has positive in- and out-valence, to prove that
A2D(Λ) is strongly connected it suffices to show that every blue vertex is accessible
from any other blue vertex, hence we may assume that the vertices (x, y) and (w, z)
are blue. But then the directed walks α and β from x to w and from y to z must
have the same parity. Thus, even though D = 2, we can again force k = ℓ, yielding
(cid:3)
a directed walk of length 2k from (x, y) to (w, z), as above.
References
[1] A. Hill, S. Wilson, Four constructions of highly symmetric graphs, J. Graph Theory 71
(2012), 229–244.
[2] P. Potočnik, P. Spiga, G. Verret, Bounding the order of the vertex-stabiliser in 3-valent vertex-
transitive and 4-valent arc-transitive graphs, J. Combin. Theory, Ser. B. 111 (2015), 148–180.
[3] P. Potočnik, S. Wilson, The separated box product of two digraphs, to be put on arXiv.
Primož Potočnik,
Faculty of Mathematics and Physics, University of Ljubljana,
Jadranska 19, SI-1000 Ljubljana, Slovenia;
also affiliated with:
IMFM, Jadranska 19, SI-1000 Ljubljana, Slovenia.
E-mail address: [email protected]
Steve Wilson,
Northern Arizona University, Department of Mathematics and Statistics,
Box 5717, Flagstaff, AZ 86011, USA
E-mail address: [email protected]
|
synthetic_cpt | 4 | Gecko_Versatile_Text_Embeddings_Distilled_from_Large_Language_Models.pdf | 4
2
0
2
y
a
M
4
2
]
L
C
.
s
c
[
1
v
0
4
6
5
1
.
5
0
4
2
:
v
i
X
r
a
GECKO: Generative Language Model for English,
Code and Korean
Sungwoo Oh
KIFAI∗
[email protected]
Donggyu Kim
KIFAI
[email protected]
Abstract
We introduce GECKO, a bilingual large language model (LLM) optimized for
Korean and English, along with programming languages. GECKO is pretrained
on the balanced, high-quality corpus of Korean and English employing LLaMA
architecture. In this report, we share the experiences of several efforts to build a
better data pipeline for the corpus and to train our model. GECKO shows great
efficiency in token generations for both Korean and English, despite its small size
of vocabulary. We measure the performance on the representative benchmarks in
terms of Korean, English and Code, and it exhibits great performance on KMMLU
(Korean MMLU) and modest performance in English and Code, even with its
smaller number of trained tokens compared to English-focused LLMs. GECKO is
available to the open-source community under a permissive license. We hope our
work offers a research baseline and practical insights for Korean LLM research.
The model can be found at: https://huggingface.co/kifai/GECKO-7B
1 Introduction
Recent advances in artificial intelligence yield significant breakthroughs in the development of large
language models (LLMs). Many proprietary LLMs [1, 33, 44] demonstrate human-level performances
across multiple languages and on a wide range of real-world tasks [5, 31, 51]. In response to this, the
open-source community has released various open large language models [46, 17, 18, 8], striving to
match the capabilities of proprietary models.
While these open-source models have been mainly trained on English [46, 17] or designed for
specific use-cases such as programming [29, 47, 13] and mathematics [28, 30, 50], there has been
increasing demand for models proficient in other languages. This need has led to the emergence
of open-source language models that show strong understanding of non-english languages such
as Chinese [2, 49], Finnish [32], and Indonesian [41]. They achieve impressive performance by
leveraging language-specific datasets at the pretraining phase [2, 49, 32, 41].
Several open-source models enhance their linguistic performance by employing the following strate-
gies: 1) language-specific continuous pretraining [37], 2) vocabulary expansion [24]. These ap-
proaches efficiently improve the cross-lingual capabilities of the monolingual models compared to
the process of pretraining models from scratch, which requires massive computational resources and
extensive data.
Despite the achievements of previous Korean language models [34, 26, 21, 22], research on pre-
training methods and applications for Korean LLM remains limited. To address this, we initiate the
development of GECKO, a language model designed mainly for Korean, yet capable in English and
programming languages. GECKO is pretrained from scratch, utilizing terabytes of textual data in
∗Korea Institute of Finance and Artificial Intelligence(KIFAI) is an open community aiming to research AI
technologies and share the findings to the public.
Figure 1: Distribution of pretraining data sources for bilingual language models. The left pie
chart illustrates the proportional composition of the corpus by language, highlighting a balanced
representation of 35% Korean, 28% English, and 37% code to accommodate low-resource language
challenges. The right pie chart details the types of data utilized, with 36% web sources, 24% from
Wikipedia, 16% from news articles, 16% from books, 5% from patents, and 3% from translated texts.
This distribution supports efforts to enhance model performance by diversifying and balancing the
training data across different types and languages.
both Korean and English to secure a strong bilingual proficiency. In the remainder of this report, we
share our efforts and contributions as follows:
• Data preprocessing and training methods maintaining the balance between Korean and
English
• Demonstration of strong performance in Korean with only small amount of pretraining
resources
• Open-sourcing our model under a permissive license to encourage further research and
applications
2 Datasets
2.1 Sources
Low-resource languages, such as Korean, have far fewer public data sources available, even if they
contain data with copyright issues. In contrast, resource-rich languages like English have large,
accessible data sources for training language models.
Balancing Korean and English As shown in Figure 1, similar to other bilingual language models
[2, 32], we aim to strike a balance between English and Korean in our pretraining corpus by down-
sampling and up-sampling English and Korean corpus respectively.
High quality Korean corpus Since abundant open-source corpora for languages such as English
and code already exist, and their refinement and processing significantly impact on the performance
of language models [36, 35], our focus has shifted more towards improving methods of data cleansing
methods. However, because high-quality Korean corpora without licensing issues are extremely
limited, we collect data from Web.
Reasoning capability Additionally, research findings [48, 11, 16] indicate that incorporating code
data in the pretraining phase enhances the reasoning ability of language models, along with the
academic perspective that treats code data as its own language. This ultimately led us to utilize three
main corpora for pretraining: English, code, and Korean.
Figure 2: Pipeline for cleansing corpus
Language Alignment There is research [6] on using translation datasets consisting of different
language pairs for the purpose of multilingual alignment in the pretraining phase. Adopting this
methodology, we train our model to align languages between English and Korean.
2.2 Preprocessing
We curate and process terabytes of Korean corpus, and utilize large scale open-source corpora for
English and programming languages. A sophisticated pipeline for deduplication and cleaning of raw
text is implemented to obtain high-quality data as shown in Figure 2. The primary objectives of this
data processing are as follows:
• Mitigate harmful content: Preprocess and use selective data in order to remove harmful,
toxic, and biased content from the training corpus. [19, 42].
• Minimize data memorization: The data deduplication improves robustness and generaliza-
tion of the models when exposed to new, unseen data, preventing it from merely replicating
patterns and generating training examples [27, 20].
• Keep structure information: Utilizing structural corpus including tables and lists plays a
crucial role in increasing model performance.
The training corpus includes the processing and normalization of specialized datasets such as wikis,
programming code, mathematical expressions, and expert contributions. This step focuses on
leveraging the structural elements inherent in these data types while carefully preserving tags and
markdown features as shown in Figure 3. These considerations allow the model to interpret and
generate contextually informed and syntactically coherent outputs, significantly enhancing its utility
across various applications.
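The report does not spell out the exact implementation of these steps, so the following is only a rough, assumption-laden sketch of the flavour of processing involved: hash-based exact deduplication plus light whitespace normalization that leaves markdown tables and lists intact. Near-duplicate detection (e.g. MinHash) would be a separate component and is not shown.

```python
import hashlib
import re

def normalize(text: str) -> str:
    """Light cleanup for comparison: collapse runs of spaces/tabs but keep newlines,
    so markdown-style tables and lists retain their structure."""
    lines = [re.sub(r"[ \t]+", " ", line).strip() for line in text.splitlines()]
    return "\n".join(line for line in lines if line)

def exact_dedup(documents):
    """Drop exact duplicates by hashing the normalized text of each document."""
    seen, unique = set(), []
    for doc in documents:
        digest = hashlib.sha256(normalize(doc).encode("utf-8")).hexdigest()
        if digest not in seen:
            seen.add(digest)
            unique.append(doc)
    return unique

docs = [
    "| name | born |\n|------|------|\n| Kim  | 1990 |",
    "| name | born |\n|------|------|\n| Kim  | 1990 |",   # exact duplicate
    "plain paragraph of text",
]
print(len(exact_dedup(docs)))   # 2
```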
3 Pretraining
3.1 Tokenizer
We train GECKO tokenizer on the balanced corpus of Korean, English, and Code. Similar to other
large language models [46, 17], we utilize the Byte Pair Encoding (BPE) algorithm and train the
tokenizer using Hugging Face’s tokenizer. We treat all numbers as individual digits and segment
unknown UTF-8 characters into bytes to avoid out-of-vocabulary issues. Additionally, we opt not to
use NFKC normalization [9], which has recently been reported to degrade performance for BPE-based tokenizers [15, 38, 25].
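As an illustration only, a BPE tokenizer with these properties can be configured with the Hugging Face tokenizers library roughly as follows; the special tokens, training file names and byte-level details are our assumptions rather than settings reported by the authors (the 32,000-token vocabulary is discussed next).

```python
from tokenizers import Tokenizer, models, trainers, pre_tokenizers, decoders

tokenizer = Tokenizer(models.BPE())
# No normalizer is attached (i.e. no NFKC); digits are split individually and the
# byte-level pre-tokenizer maps any UTF-8 input to bytes, avoiding out-of-vocabulary symbols.
tokenizer.pre_tokenizer = pre_tokenizers.Sequence([
    pre_tokenizers.Digits(individual_digits=True),
    pre_tokenizers.ByteLevel(add_prefix_space=False),
])
tokenizer.decoder = decoders.ByteLevel()

trainer = trainers.BpeTrainer(vocab_size=32_000, special_tokens=["<s>", "</s>"])  # assumed specials
tokenizer.train(files=["korean.txt", "english.txt", "code.txt"], trainer=trainer)  # assumed corpus files
tokenizer.save("bpe-tokenizer.json")
```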
We set the total vocabulary size to 32,000, following research [12] on optimal vocabulary size that
aims to balance computational efficiency and performance considering larger vocabularies demand
more computational power during the inference. We measure the efficiency of GECKO tokenizer
compared to others using the following formula:
Figure 3: Example of normalization for a wiki dataset: The left image displays the original data,
while the right image shows the preprocessed and normalized data in markdown format.
Efficiency = ( # of tokens_model / # of tokens_GECKO ) × 100%        (1)
The metric evaluates the tokenization efficiency by comparing the total number of tokens produced by
GECKO tokenizer and others. Our tokenizer demonstrates superior efficiency in processing Korean
while maintaining comparable results in English and Code, contrasting to the models primarily trained
in English. The result of efficiency comparison using C4 corpus [39] and The Stack [23] is illustrated
in Table 1 and Figure 4.
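In code, Eq. (1) is simply a ratio of token counts over the same evaluation texts. The sketch below is ours and assumes each tokenizer is wrapped in a callable that returns the number of tokens produced for a given text.

```python
def token_efficiency(count_model_tokens, count_gecko_tokens, texts):
    """Eq. (1): Efficiency = (total tokens under the compared model / total tokens under GECKO) x 100%."""
    model_total = sum(count_model_tokens(t) for t in texts)
    gecko_total = sum(count_gecko_tokens(t) for t in texts)
    return 100.0 * model_total / gecko_total

# Hypothetical usage with two tokenizers exposing an encode() that returns token ids:
# eff = token_efficiency(lambda t: len(other_tok.encode(t)),
#                        lambda t: len(gecko_tok.encode(t)),
#                        sample_of_c4_and_the_stack)
```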
Table 1: Overall tokenizer efficiency with respect to GECKO.

Tokenizer     GECKO    Polyglot-Ko   LLaMA-2   Mistral   Gemma     GPT-4
Vocab. size   32,000   30,080        32,000    32,000    256,000   100,277
Efficiency    100%     71%           86%       92%       109%      110%
3.2 Training Details
GECKO adopts the classical decoder-only Transformer architecture used in LLaMA [46]. The
AdamW optimizer is employed to train the model, setting β1 at 0.9 and β2 at 0.95. The optimizer is
configured to warm up over 10,000 iterations with a linearly increasing learning rate that peaks at 3e-4
and then decays to 3e-5 according to a cosine schedule. The model is trained with 200 billion tokens
using BF16 mixed precision. Rotary positional embedding is utilized to train longer context tokens up
to 8192 in length, allowing longer sequences to be understood during pretraining. We use sequence
packing [7] to assemble multiple training samples into a single sequence and use end-of-sequence
token to separate the document sequences.
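Since the training stack is JAX-based (Section 3.3), the stated optimizer and schedule can be reproduced with optax roughly as below; the total number of decay steps is an assumption for illustration, as the report only states the warmup length and the peak and final learning rates.

```python
import optax

WARMUP_STEPS = 10_000
TOTAL_STEPS = 100_000   # assumed; not reported in the paper

schedule = optax.warmup_cosine_decay_schedule(
    init_value=0.0,        # linear warmup starts from zero
    peak_value=3e-4,       # peak learning rate
    warmup_steps=WARMUP_STEPS,
    decay_steps=TOTAL_STEPS,
    end_value=3e-5,        # cosine decay floor
)
optimizer = optax.adamw(learning_rate=schedule, b1=0.9, b2=0.95)
```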
Figure 4: Comparative analysis of tokenizer efficiency across multiple language models. This graph
illustrates the performance of various tokenizers, including GECKO, Polyglot-Ko, LLaMA-2, Mistral,
and GPT-4, across Korean, English, and code text corpora. The y-axis represents token efficiency as a
percentage, with higher values indicating superior encoding performance relative to the tokenizer of
GECKO. This analysis highlights the varying efficiency levels each model exhibits, offering insights
into how effectively each tokenizer encodes multilingual and coding data. The dashed red line at
100% serves as a benchmark for baseline efficiency.
3.3 Training Infrastructure
We train our model on Google Cloud Platform and used TPUv4 with 256 chips, utilizing Fully
Sharded Data Parallelism (FSDP) and model parallelism. Leveraging JAX [3, 10], we implement the
single controller programming paradigm, which enables us to manage and parallelize our training
efficiently using just one Python command.
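This description corresponds to the usual JAX single-controller pattern: one Python program builds a device mesh and jit-compiled functions run sharded across all chips. The toy example below is our own, with an assumed mesh layout and a placeholder computation, and only shows the general shape of such a setup.

```python
import jax
import jax.numpy as jnp
import numpy as np
from jax.sharding import Mesh, NamedSharding, PartitionSpec as P

devices = np.array(jax.devices())
mesh = Mesh(devices.reshape(len(devices), 1), axis_names=("data", "model"))  # assumed layout

# FSDP-style placement: arrays sharded along the "data" axis of the mesh.
x = jax.device_put(jnp.ones((1024, 512)), NamedSharding(mesh, P("data", None)))
w = jax.device_put(jnp.ones((512, 256)), NamedSharding(mesh, P(None, "model")))

@jax.jit
def forward(x, w):
    return x @ w   # the compiler distributes this matmul according to the input shardings

print(forward(x, w).shape)   # (1024, 256)
```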
Table 2: Performance evaluations across different models and benchmarks

Model              KMMLU (5-shot)   MMLU (5-shot)   HumanEval (pass@1)   MATH (4-shot)
LLaMA-2 7B         24.2             45.3            12.8                 2.5
Mistral 7B         21.0             62.5            26.2                 12.7
Gemma 7B           21.1             64.3            32.3                 24.3
Polyglot-Ko 5.8B   28.3             26.8            0.0                  0.3
GECKO              30.7             28.3            17.7                 4.3
4 Evaluation
We evaluate several pretrained open-source large language models (LLMs) released under permissive
licenses. For performance assessment, we use standard academic benchmarks to evaluate knowledge
and reasoning abilities [14], as well as coding [4] and mathematics [40]. For LLaMA-2 [46], Mistral
[17], and Gemma [45], we directly quote the scores as reported in the Gemma technical report [45].
Additionally, for the Korean evaluation set KMMLU [43], we conduct our own evaluation in the same
environment with previous works. The result is shown in Table 2. In terms of Korean understanding
(KMMLU), GECKO shows better performance compared to the evaluated models. Our model also
demonstrates moderate performance in coding and mathematics.
5 Conclusion
GECKO is an open-source Korean pretrained LLM released under a permissive license. Our work
can contribute to both academic research and the practical development of the large Korean language
model pretraining. Our immediate goal is to release an improved version of the model with additional
training resources. We are also preparing for instruction fine-tuning to evaluate GECKO’s instruction-
following ability. We believe that open-sourcing artificial intelligence technologies helps create safer
products, accelerate innovation, and expand markets.
Acknowledgements
We deeply thank the TRC Team at Google Cloud for their dedication and support, which significantly
enhanced our research through provision of Cloud TPUs.
References
[1] Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni
Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al. Gpt-4
technical report. arXiv preprint arXiv:2303.08774, 2023.
[2] Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge,
Yu Han, Fei Huang, et al. Qwen technical report. arXiv preprint arXiv:2309.16609, 2023.
[3] James Bradbury, Roy Frostig, Peter Hawkins, Matthew James Johnson, Chris Leary, Dougal
Maclaurin, George Necula, Adam Paszke, Jake VanderPlas, Skye Wanderman-Milne, et al. Jax:
composable transformations of python+ numpy programs. 2018.
[4] Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared
Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, et al. Evaluating large
language models trained on code. arXiv preprint arXiv:2107.03374, 2021.
[5] Cheng-Han Chiang and Hung-yi Lee. Can large language models be an alternative to human
evaluations? arXiv preprint arXiv:2305.01937, 2023.
[6] Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam
Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. Palm:
Scaling language modeling with pathways. Journal of Machine Learning Research, 24(240):1–
113, 2023.
[7] Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Yunxuan
Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, et al. Scaling instruction-finetuned
language models. Journal of Machine Learning Research, 25(70):1–53, 2024.
[8] Mike Conover, Matt Hayes, Ankit Mathur, Jianwei Xie, Jun Wan, Sam Shah, Ali Ghodsi, Patrick
Wendell, Matei Zaharia, and Reynold Xin. Free dolly: Introducing the world’s first truly open
instruction-tuned llm. Company Blog of Databricks, 2023.
[9] Mark Davis and Martin Dürst. Unicode normalization forms, 2001.
[10] Roy Frostig, Matthew James Johnson, and Chris Leary. Compiling machine learning programs
via high-level tracing. Systems for Machine Learning, 4(9), 2018.
[11] Yao Fu, Hao Peng, and Tushar Khot. How does gpt obtain its ability? tracing emergent abilities
of language models to their sources. Yao Fu’s Notion, 2022.
[12] Thamme Gowda and Jonathan May. Finding the optimal vocabulary size for neural machine
translation. arXiv preprint arXiv:2004.02334, 2020.
[13] Daya Guo, Qihao Zhu, Dejian Yang, Zhenda Xie, Kai Dong, Wentao Zhang, Guanting Chen,
Xiao Bi, Y Wu, YK Li, et al. Deepseek-coder: When the large language model meets
programming–the rise of code intelligence. arXiv preprint arXiv:2401.14196, 2024.
[14] Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and
Jacob Steinhardt. Measuring massive multitask language understanding. Proceedings of the
International Conference on Learning Representations (ICLR), 2021.
[15] Jordan Hoffmann, Sebastian Borgeaud, Arthur Mensch, Elena Buchatskaya, Trevor Cai, Eliza
Rutherford, Diego de Las Casas, Lisa Anne Hendricks, Johannes Welbl, Aidan Clark, et al.
Training compute-optimal large language models. arXiv preprint arXiv:2203.15556, 2022.
[16] Jie Huang and Kevin Chen-Chuan Chang. Towards reasoning in large language models: A
survey. arXiv preprint arXiv:2212.10403, 2022.
[17] Albert Q Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh
Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lucile
Saulnier, et al. Mistral 7b. arXiv preprint arXiv:2310.06825, 2023.
[18] Albert Q Jiang, Alexandre Sablayrolles, Antoine Roux, Arthur Mensch, Blanche Savary, Chris
Bamford, Devendra Singh Chaplot, Diego de las Casas, Emma Bou Hanna, Florian Bressand,
et al. Mixtral of experts. arXiv preprint arXiv:2401.04088, 2024.
[19] Nikhil Kandpal, Eric Wallace, and Colin Raffel. Deduplicating training data mitigates privacy
risks in language models. In International Conference on Machine Learning, pages 10697–
10707. PMLR, 2022.
[20] Aly Kassem, Omar Mahmoud, and Sherif Saad. Preserving privacy through dememorization: An
unlearning technique for mitigating memorization risks in language models. In Proceedings of
the 2023 Conference on Empirical Methods in Natural Language Processing, pages 4360–4379,
2023.
[21] Ildoo Kim, Gunsoo Han, Jiyeon Ham, and Woonhyuk Baek. Kogpt: Kakaobrain korean(hangul)
generative pre-trained transformer. https://github.com/kakaobrain/kogpt, 2021.
[22] Hyunwoong Ko, Kichang Yang, Minho Ryu, Taekyoon Choi, Seungmu Yang, Sungho Park,
et al. A technical report for polyglot-ko: Open-source large-scale korean language models.
arXiv preprint arXiv:2306.02254, 2023.
[23] Denis Kocetkov, Raymond Li, Loubna Ben Allal, Jia Li, Chenghao Mou, Carlos Muñoz Fer-
randis, Yacine Jernite, Margaret Mitchell, Sean Hughes, Thomas Wolf, Dzmitry Bahdanau,
Leandro von Werra, and Harm de Vries. The stack: 3 tb of permissively licensed source code.
Preprint, 2022.
[24] L. Junbum. llama-2-ko-7b (revision 4a9993e), 2023.
[25] Teven Le Scao, Angela Fan, Christopher Akiki, Ellie Pavlick, Suzana Ili´c, Daniel Hesslow,
Roman Castagné, Alexandra Sasha Luccioni, François Yvon, Matthias Gallé, et al. Bloom: A
176b-parameter open-access multilingual language model. 2023.
[26] Junbum Lee. Kcbert: Korean comments bert. In Annual Conference on Human and Language
Technology, pages 437–440. Human and Language Technology, 2020.
[27] Katherine Lee, Daphne Ippolito, Andrew Nystrom, Chiyuan Zhang, Douglas Eck, Chris
Callison-Burch, and Nicholas Carlini. Deduplicating training data makes language models
better. arXiv preprint arXiv:2107.06499, 2021.
[28] Aitor Lewkowycz, Anders Andreassen, David Dohan, Ethan Dyer, Henryk Michalewski, Vinay
Ramasesh, Ambrose Slone, Cem Anil, Imanol Schlag, Theo Gutman-Solo, et al. Solving
quantitative reasoning problems with language models. Advances in Neural Information
Processing Systems, 35:3843–3857, 2022.
[29] Raymond Li, Loubna Ben Allal, Yangtian Zi, Niklas Muennighoff, Denis Kocetkov, Chenghao
Mou, Marc Marone, Christopher Akiki, Jia Li, Jenny Chim, et al. Starcoder: may the source be
with you! arXiv preprint arXiv:2305.06161, 2023.
[30] Haipeng Luo, Qingfeng Sun, Can Xu, Pu Zhao, Jianguang Lou, Chongyang Tao, Xiubo Geng,
Qingwei Lin, Shifeng Chen, and Dongmei Zhang. Wizardmath: Empowering mathematical rea-
soning for large language models via reinforced evol-instruct. arXiv preprint arXiv:2308.09583,
2023.
[31] Xiaoliang Luo, Akilles Rechardt, Guangzhi Sun, Kevin K Nejad, Felipe Yáñez, Bati Yil-
maz, Kangjoo Lee, Alexandra O Cohen, Valentina Borghesani, Anton Pashkov, et al. Large
language models surpass human experts in predicting neuroscience results. arXiv preprint
arXiv:2403.03230, 2024.
[32] Risto Luukkonen, Jonathan Burdge, Elaine Zosa, Aarne Talman, Ville Komulainen, Väinö
Hatanpää, Peter Sarlin, and Sampo Pyysalo. Poro 34b and the blessing of multilinguality. arXiv
preprint arXiv:2404.01856, 2024.
[33] Claude Models. Model card and evaluations for claude models, 2023.
[34] Jangwon Park. Koelectra: Pretrained electra model for korean. https://github.com/
monologg/KoELECTRA, 2020.
[35] Guilherme Penedo, Hynek Kydlíˇcek, Leandro von Werra, and Thomas Wolf. Fineweb, 2024.
[36] Guilherme Penedo, Quentin Malartic, Daniel Hesslow, Ruxandra Cojocaru, Alessandro Cappelli,
Hamza Alobeidli, Baptiste Pannier, Ebtesam Almazrouei, and Julien Launay. The refinedweb
dataset for falcon llm: outperforming curated corpora with web data, and web data only. arXiv
preprint arXiv:2306.01116, 2023.
[37] Kunat Pipatanakul, Phatrasek Jirabovonvisut, Potsawee Manakul, Sittipong Sripaisarnmongkol,
Ruangsak Patomwong, Pathomporn Chokchainant, and Kasima Tharnpipitchai. Typhoon: Thai
large language models. arXiv preprint arXiv:2312.13951, 2023.
[38] Jack W Rae, Sebastian Borgeaud, Trevor Cai, Katie Millican, Jordan Hoffmann, Francis Song,
John Aslanides, Sarah Henderson, Roman Ring, Susannah Young, et al. Scaling language
models: Methods, analysis & insights from training gopher. arXiv preprint arXiv:2112.11446,
2021.
[39] Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena,
Yanqi Zhou, Wei Li, and Peter J Liu. Exploring the limits of transfer learning with a unified
text-to-text transformer. Journal of machine learning research, 21(140):1–67, 2020.
[40] Hill Saxton, Grefenstette and Kohli. Analysing mathematical reasoning abilities of neural
models. arXiv:1904.01557, 2019.
[41] AI Singapore. Sea-lion (southeast asian languages in one network): A family of large language
models for southeast asia. https://github.com/aisingapore/sealion, 2023.
[42] Luca Soldaini, Rodney Kinney, Akshita Bhagia, Dustin Schwenk, David Atkinson, Russell
Authur, Ben Bogin, Khyathi Chandu, Jennifer Dumas, Yanai Elazar, et al. Dolma: An
open corpus of three trillion tokens for language model pretraining research. arXiv preprint
arXiv:2402.00159, 2024.
[43] Guijin Son, Hanwool Lee, Sungdong Kim, Seungone Kim, Niklas Muennighoff, Taekyoon
Choi, Cheonbok Park, Kang Min Yoo, and Stella Biderman. Kmmlu: Measuring massive
multitask language understanding in korean. arXiv preprint arXiv:2402.11548, 2024.
[44] Gemini Team, Rohan Anil, Sebastian Borgeaud, Yonghui Wu, Jean-Baptiste Alayrac, Jiahui Yu,
Radu Soricut, Johan Schalkwyk, Andrew M Dai, Anja Hauth, et al. Gemini: a family of highly
capable multimodal models. arXiv preprint arXiv:2312.11805, 2023.
[45] Gemma Team, Thomas Mesnard, Cassidy Hardin, Robert Dadashi, Surya Bhupatiraju, Shreya
Pathak, Laurent Sifre, Morgane Rivière, Mihir Sanjay Kale, Juliette Love, et al. Gemma: Open
models based on gemini research and technology. arXiv preprint arXiv:2403.08295, 2024.
[46] Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei,
Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. Llama 2: Open
foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288, 2023.
[47] Yue Wang, Hung Le, Akhilesh Deepak Gotmare, Nghi DQ Bui, Junnan Li, and Steven CH
Hoi. Codet5+: Open code large language models for code understanding and generation. arXiv
preprint arXiv:2305.07922, 2023.
[48] Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le,
Denny Zhou, et al. Chain-of-thought prompting elicits reasoning in large language models.
Advances in neural information processing systems, 35:24824–24837, 2022.
[49] Alex Young, Bei Chen, Chao Li, Chengen Huang, Ge Zhang, Guanwei Zhang, Heng Li,
Jiangcheng Zhu, Jianqun Chen, Jing Chang, et al. Yi: Open foundation models by 01. ai. arXiv
preprint arXiv:2403.04652, 2024.
[50] Xiang Yue, Xingwei Qu, Ge Zhang, Yao Fu, Wenhao Huang, Huan Sun, Yu Su, and Wenhu
Chen. Mammoth: Building math generalist models through hybrid instruction tuning. arXiv
preprint arXiv:2309.05653, 2023.
[51] Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang,
Zi Lin, Zhuohan Li, Dacheng Li, Eric Xing, et al. Judging llm-as-a-judge with mt-bench and
chatbot arena. Advances in Neural Information Processing Systems, 36, 2024.
|
synthetic_cpt | 8 | Quality_Matters_Evaluating_Synthetic_Data_for_Tool-Using_LLMs.pdf | Quality Measures in Biometric Systems
Fernando Alonso-Fernandez1, Member, IEEE, Julian Fierrez, Member, IEEE
Javier Ortega-Garcia, Senior Member, IEEE
Abstract— Biometric technology has been
increasingly
deployed in the last decade, offering greater security and
convenience than traditional methods of personal recognition.
But although the performance of biometric systems is heavily
affected by the quality of biometric signals, prior work on
quality evaluation is limited. Quality assessment is a critical
issue in the security arena, especially in challenging scenarios
(e.g. surveillance cameras, forensics, portable devices or remote
access through Internet). Different questions regarding the
factors influencing biometric quality and how to overcome them,
or the incorporation of quality measures in the context of
biometric systems have to be analysed first. In this paper, a
review of the state-of-the-art in these matters is provided, giving
an overall framework of the main factors related to the
challenges associated with biometric quality.
Index Terms— Biometrics, security, quality assessment,
sample quality.
I. INTRODUCTION
identity
The increasing interest on biometrics is related to the
number of important applications where a correct assessment
of
to automatic
is crucial. Biometrics refers
recognition of an individual based on anatomical (e.g.,
iris, hand geometry) or behavioural
fingerprint, face,
characteristics (e.g., signature, gait, keystroke dynamics) [1].
Biometrics offers greater
several
advantages over traditional security methods based on
something that you know (e.g. password, PIN) or something
that you have (e.g. card, key). In biometric systems, users do
not need to remember passwords or PINs, which can be
forgotten, or carry cards or keys, which can be stolen.
convenience
and
its efforts on
Since the establishment of biometrics as a specific
research area in late ’90s, the biometric community has
focused
the development of accurate
recognition algorithms. Nowadays, biometric recognition is a
mature technology, used in many government and civilian
applications such as e-Passports, ID cards, or border control.
Examples include the US-VISIT fingerprint system, the
Privium iris system (Amsterdam Airport) or the SmartGate
face system (Sydney Airport). But, during the last few years,
the problem of quality measurement has emerged as an
important concern in the biometric community after the poor
performance observed on pathological samples [2]. It has
been demonstrated by several studies and
technology
benchmarks that the performance of biometric systems is
heavily affected by the quality of biometric signals e.g. see
Figure 1. This operationally important step is nevertheless
under-researched in comparison to the primary feature
extraction or pattern recognition task. The performance
degradation observed in less controlled situations is one of
the main challenges facing biometric technologies [3]. The
proliferation of portable hand-held devices with biometric
acquisition capabilities or recognition at-a-distance and
on-the-move are just two examples of non-ideal scenarios
not yet sufficiently mature, which require robust recognition
algorithms capable of handling a range of changing
characteristics [1]. A quantitative example of the degradation
observed in these scenarios can be seen in Figure 2. Another
important example is forensics, in which intrinsic operational
factors further degrade the recognition performance and are
generally not replicated in controlled studies [4].
There are a number of factors that can affect the quality of
biometric signals, and there are numerous roles of a quality
measure in the context of biometric systems. Standardization
bodies are also incorporating quality measures into existing
data storage and exchange formats. This paper summarizes
the state-of-the-art in the biometric quality problem, giving
an overall framework of the different related factors.
II. WHAT IS BIOMETRIC SAMPLE QUALITY?
It has not been until the last years that there is consensus
about what biometric sample quality is. Broadly, a sample is
of good quality if it is suitable for personal recognition.
Recent standardization efforts (ISO/IEC 29794-1) have
established three components of biometric sample quality,
see Figure 3: i) character (inherent discriminative capability
of the source), ii) fidelity (degree of similarity between a
sample and its source, attributable to each step through
which the sample is processed); and iii) utility, (impact of the
individual biometric sample on the overall performance of a
biometric system). The character of the sample source and
the fidelity of the processed sample contribute to, or similarly
detract from, the utility of the sample [3].
It is generally accepted that a quality metric should most
importantly mirror the utility of the sample, so that samples
assigned higher quality lead to better identification of
individuals [3]. Thus, quality should be predictive of
recognition performance. This statement, however, is largely
subjective: not all recognition algorithms work equally (i.e.
they are not based on the same features), and their
performance is not affected by the same factors. For
example, a face recognition algorithm “A” can be insensitive
to illumination changes, whereas another algorithm “B” can
be severely affected by changes in illumination. In this
1 F. Alonso-Fernandez (correspondence author) is with Halmstad University, Box 823, SE 301-18 Halmstad, SWEDEN. J. Fierrez and J.
Ortega-Garcia are with ATVS/Biometric Recognition Group, Escuela Politecnica Superior, Univ. Autonoma de Madrid, Avda. Francisco
Tomas y Valiente 11, Campus de Cantoblanco, 28049 Madrid, SPAIN. Part of this research was carried out while author F.A.-F. was
employed at ATVS/Biometric Recognition Group (email: [email protected], [email protected], [email protected])
Figure 1. Effect of low quality data on the performance of recognition algorithms. Conditions progressively more difficult in nature result in a
significant decrease in performance, in spite of the technology improvement between the different studies. Some sample images with varying quality
are also shown in each modality. Left: best performing algorithm in face independent evaluations. FRVT stands for Face Recognition Vendor
Technology, and MBGC for Multiple Biometric Grand Challenge. A decrease in performance is observed in the 2009 evaluation, when uncontrolled
illumination conditions and severe image compression were introduced. More information at www.frvt.org and http://face.nist.gov/mbgc. Middle: best
performing algorithm in the Fingerprint Verification Competitions (FVC). In 2000 and 2002, fingerprint data where acquired without any special
restriction, resulting in an EER decrease of one order of magnitude. In the 2004 edition, samples were intentionally corrupted (e.g. by asking people to
exaggeratedly rotate or press the finger against the sensor, or by artificially drying or moisturizing the skin with water or alcohol). More information at
https://biolab.csr.unibo.it/fvcongoing. Right: results of the Video-based Automatic System for Iris Recognition (VASIR) implemented by the National
Institute of Standards and Technology (NIST) on iris data from the MBGC. Performance on iris from distant video (unconstrained acquisition) is
dramatically reduced with respect to classical close-up controlled acquisition. More information at http://www.nist.gov/itl/iad/ig/vasir.cfm
Figure 2. Performance degradation with portable devices. Face scores come from an LDA-based verifier using Fisher Linear Discriminant projection
(face indoor) and an eigenface-based system with PCA analysis (face outdoor). Fingerprint scores come from the publicly available minutia-based
matcher of the National Institute of Standards and Technology (NIST). Data is from the BioSecure Multimodal Database [5]. Face performance is
degraded with the webcam, with further degradation in the more challenging outdoor environment (noisy ambience). As for the fingerprint modality,
the sweep sensor results in worse performance with respect to the flat sensor. In flat sensors, acquisition is done by the touch method: the finger is
simply placed on the scanner. In sweep sensors, the finger is swept vertically across a tiny strip with a height of only a few pixels. As the finger is
swept, partial images are formed which are further combined to generate a full fingerprint image. This procedure allows to reduce the size and cost of
the sensing element (facilitating its use in consumer products such as laptops, PDAs and mobile phones), but the reconstruction of a full image from the
slices is prone to errors, especially in poor quality fingerprints and non-uniform sweep speed. (Figure extracted from Ortega-Garcia et al. [5])
situation, a measure of illumination will be useful for
predicting performance of “B”, but not of “A”. Therefore, an
adequate quality measure will be largely dependent on the
type of
the
performance of different recognition algorithms may not be
affected by the same signal quality factors, the efficacy of a
quality estimation algorithm will be usually linked to a
particular recognition algorithm, or thereof class.
recognition algorithm considered. As
Unfortunately some of them are beyond control of system
developers or operators. Therefore, assessing the quality of
captured samples will allow appropriate corrective actions to
take place. Following the framework of Kukula et al. [6] and
[7,8,9], a
from other precedent works
contributions
classification of quality factors based on their relationship
with the different parts of the system is proposed [10]. Using
this classification, four different classes can be distinguished:
III. FACTORS INFLUENCING BIOMETRIC QUALITY
There are a number of factors affecting the quality of
biometric signals, which are summarized in Table I.
User-related factors, which include physical/ physiological
and behavioural factors. As they have to do entirely with the
“user side”, they are the most difficult to control. Some
Figure 3. Definition of biometric quality from three different points of view: character, fidelity or utility.
physical/physiological factors inherent to the person (age,
gender or race) do not produce degradation, but data
variability that needs to be properly considered by the
recognition algorithm (e.g. differences in speech between
males and females). Diseases or injuries may alter face,
finger, etc., even irreversibly, making them infeasible for
recognition. Although, in some cases, the presence of such
alterations can be precisely used to narrow a person’s
identity (e.g. amputation in gait recognition). On the other
hand, coping with behavioural factors often
implies
modifying people’s behaviour or habits, which is not always
convenient, or is even impossible in some applications like
forensics or surveillance cameras. Many behavioural factors
can be alleviated by recapturing after taking corrective
actions (e.g. “take off your hat/coat/ring/glasses” or “keep
your eyes opened”), but this is not always possible.
Depending on the application, corrective actions can result in
people’s reluctance to use the system. As can be seen in
Table I, user-related factors have impact on the character of
the biometric sample, that is, the quality attributable to
inherent physical features. In this sense, the degree of control
on these factors is low, as the inherent features of a person
are difficult or impossible to modify. The remaining factors
affect the fidelity, or in other words, the faithfulness between
a biometric sample and its source, and their degree of control
can be higher, as discussed next.
Factors related to the user-sensor interaction, which
include environmental and operational factors. In principle,
these are easier to control than user-related factors, although
users still play a role in these. Users impact will depend on
the level of control of the environment, the acquisition itself,
and whether the acquisition physically takes place in
controllable premises. In many applications, biometric data
is acquired in less than ideal conditions, such as by
surveillance cameras or portable hand-held devices. Other
hot topic includes acquisition “at a distance” or “on the
move” as a person walks by detection equipment, facilitating
the ease of interaction with the system. But the unsatisfactory
performance of biometrics technologies in these uncontrolled
situations has limited their deployment, being one of the
main challenges facing biometric technologies [1].
Factors related to the acquisition sensor. The sensor is in
most cases the only physical point of interaction between the
user and the biometric system. Its “fidelity” (see Section II)
in
in reproducing the original biometric pattern is crucial for the
accuracy of the recognition system. The diffusion of low cost
sensors and portable devices (e.g. mobile cameras, webcams,
telephones and PDAs with touch screen displays, etc.) is
rapidly growing
the context of convergence and
ubiquitous access to information and services, representing a
new scenario for automatic biometric recognition systems.
Unfortunately, these low cost and portable devices produce
data which are very different from those obtained by
dedicated (and more expensive) sensors, primarily due to a
small input area, poor ergonomics or the fact that the user
may be in movement. In this context, a measure of the
reliability of the data and recognition process can provide
additional improvement, by optimizing a structure lacking
homogeneity, while ensuring system interoperability by
integrating data of different nature [11].
e.g.
smart
techniques,
Factors related to the processing system. Related to how a
biometric sample is processed once it has been acquired,
these are the factors, in principle, easiest to control. Storage
or exchange speed constraints may impose the use of data
compression
cards. Also,
governments, regulatory bodies, and international standards
organizations often specify that biometric data must be kept
in raw form, rather than in (or in addition to) post-processed
templates that may depend on proprietary algorithms, with
implications in data size. Hence, the effects of data
compression on recognition performance become critical.
The necessity for data compression, together with packet loss
effects, also appears in recent applications of biometrics over
mobile or Internet networks.
IV. ENSURING GOOD QUALITY OF BIOMETRIC SAMPLES
After analysing the usual factors affecting quality of
biometric systems,
this section reports some helpful
guidelines for their control [7], which are summarized in
Table II. Three points of action can be identified: i) the
capture point, a critical point of action since it acts as the
main interface between the user and the system, ii) the
quality assessment algorithm itself, and iii) the system that
performs the recognition process. If quality can be improved,
either by capture point design or by system design, better
performance can be realized. For those aspects of quality that
cannot be designed-in, an ability to analyse the quality of a
sample and initiate corrective actions is needed. This is
1) Outdoor operation is especially problematic, as control on other environmental factors can be lost. It also demands additional actions regarding
sensor conditions and its maintenance. 2) Background, object occlusion refer to uncontrolled environments (e.g. surveillance cameras), with great
impact on face systems. 3) Temperature, humidity: Affect skin properties (fingerprint, hand). 4) Illumination, light reflection: Iris images are
affected due to reflective properties of the eye. They also affect face images. 5) Ambient noise affects the quality of speech. 6) User familiarity,
feedback of acquired data: Feedback has been demonstrated to lead to better acquired samples, helping in the process of habituation (i.e. becoming
accustomed to the system). 7) Physical guides: In some cases, they are incorporated in sensors to facilitate acquisition (e.g. hand, finger). 8)
Ergonomics refers to how the design of the acquisition device facilitates interaction with the user. 9) Time between acquisitions (aging of the
template): Biometric data acquired from an individual at two different moments may be very different, having great impact on the system
performance. 10) Age (aging of the subject): Although iris pigmentation and fingerprint characteristics are highly stable, they change until the
adolescence and during the old age. Other traits like face, speech, signature, etc. are subject to natural evolution throughout our life. Age of the subject
can also degrade the sample quality due to, for example, medical conditions or loss of certain abilities. 11) Gender: Face or speech characteristics are
different in males and females. 12) Race affects face (physical features) and iris (in some ethnic groups, pigmentation is different and/or iris is not
visible due to eyelid occlusion or long eyelashes, e.g. Eastern people). 13) Skin condition refers to factors like dryness/wetness, sweat, cuts, bruises,
etc., which can have impact on traits involving analysis of skin properties (fingerprint and hand). 14) Manual work may affect the skin condition
(dryness, cuts, bruises, dirt, diseases, etc.), in some cases irreversibly. 15) Illiteracy refers to people that do not know to read or write. 16) Ethnic
origin: Although it is a physical/physiological feature, it can affect a person’s behaviour, e.g. in face appearance (hairstyle, beard, jewellery, etc.),
speech (language, lexicon, intonation, etc.) and signature (American signatures typically consist of a readable written name, European signatures
normally include flourish, Asian signatures consist of independent symbols, etc.).
TABLE I
FACTORS AFFECTING THE QUALITY OF BIOMETRIC SIGNALS.
useful, for example, in initiating the reacquisition from a
user, selecting the best sample in real time, or selectively
evoking different processing methods, and it is the key
component in quality assurance management.
V. QUALITY ASSESSMENT ALGORITHMS AND THEIR
PERFORMANCE
Many quality assessment algorithms are found in the
literature, focused on measuring different factors affecting
the quality of biometric traits (see Figure 4). It is not within the scope of this work to describe them in depth, so only a selection of key recent references is provided here (see also the references therein). Quality assessment algorithms have
been developed mainly for fingerprint images [14] and
recently, for iris [15], voice [16], face [17] and signature
signals [18]. In spite of the number of existing algorithms, almost all of them have been tested under limited and heterogeneous frameworks, mainly because it has not been until recent years that the biometric community has formalized the concept of sample quality and developed evaluation methodologies. Two recent frameworks proposed for this purpose are briefly described here [3], [19].
1) Use of an adequate Graphical User Interface (GUI), with a large display providing real time feedback of acquired data, has been demonstrated to help
users to provide better signals over time and to habituate faster to the system [9]. 2) Corrective actions depend heavily on the application. For example, in
some cases it is not possible to recapture a second sample (e.g. forensics), so the system has to deal with the “bad” sample at hand. Rejecting a sample
implies invoking alternative recognition procedures (e.g. another biometric trait) or human intervention, resulting in increased costs and user
inconvenience. 3) Quality-based processing and fusion means to invoke different algorithms and to combine them with different weighting depending on
the quality of the signal at hand. See Section VII for further discussion. 4) Template substitution/update, an area still under-researched [12], allows
coping with natural variations of biometric traits across time. Efficient strategies include storing multiple templates representative of the associated
variability and updating/substituting them with new acquisitions. 5) Monitoring and periodic reporting [13] helps identify sudden problems (e.g. a
damaged sensor) and find hidden systematic problems (e.g. specific sites or sensors working worse than others, hours when the quality of acquired signals
is worse, etc.). Especially important is to identify user-scanner learning curves in order to avoid “first time user” syndrome, especially for elderly people or
people who are not accustomed to interact with machines.
TABLE II
BIOMETRIC QUALITY ASSURANCE PROCESS.
As shown in Figure 3, biometric sample quality can be considered from the point of view of character (inherent properties of the source), fidelity (faithfulness of the biometric sample to the source), or utility (predicted contribution to performance). Youmaran and Adler [19] have
developed a theoretical framework for measuring biometric
sample fidelity. They relate biometric sample quality with the
amount of identifiable information that the sample contains,
and suggest that this amount decreases with a reduction in
quality. They measure the amount of identifiable information
for a person as the relative entropy, D(p||q), between the
population feature distribution, q, and the person’s feature
distribution, p. Based on this, the information loss due to a
degradation in sample quality can be measured as the relative
change in the entropy.
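To make this entropy-based notion concrete, the following is a minimal sketch (not taken from [19]) of how D(p||q) between a person's feature distribution and the population distribution could be estimated under a simplifying Gaussian assumption; the function and variable names are illustrative assumptions only.

import numpy as np

def gaussian_kl(mu_p, cov_p, mu_q, cov_q):
    # KL divergence D(p||q) between two multivariate Gaussians
    d = mu_p.shape[0]
    cov_q_inv = np.linalg.inv(cov_q)
    diff = mu_q - mu_p
    term_trace = np.trace(cov_q_inv @ cov_p)
    term_mahal = diff @ cov_q_inv @ diff
    term_logdet = np.log(np.linalg.det(cov_q) / np.linalg.det(cov_p))
    return 0.5 * (term_trace + term_mahal - d + term_logdet)

def identifiable_information(person_feats, population_feats, eps=1e-6):
    # Estimate D(p||q) by fitting Gaussians to the person's and the population's feature vectors
    d = person_feats.shape[1]
    mu_p = person_feats.mean(0)
    cov_p = np.cov(person_feats, rowvar=False) + eps * np.eye(d)
    mu_q = population_feats.mean(0)
    cov_q = np.cov(population_feats, rowvar=False) + eps * np.eye(d)
    return gaussian_kl(mu_p, cov_p, mu_q, cov_q)

# Information loss due to quality degradation could then be tracked as the relative change
# between the value computed from high-quality samples and from degraded samples of the same person.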
On the other hand, most of the existing operational
schemes for quality estimation of biometric signals are
focused on the utility of the signal. Grother and Tabassi [3]
have presented a framework for evaluating and comparing
quality measures in terms of their capability of predicting the
system performance. Broadly, they formalize the concept of
sample quality as a scalar quantity that is related
monotonically to the recognition performance of biometric
matchers. Therefore, by partitioning the biometric data in
different groups according to some quality criteria, the
quality measure should give an ordered indication of
performance between quality groups. Also, by rejecting low
quality samples, error rates should decrease quickly with the
fraction rejected. Some of the works referenced above in this Section have followed this framework in their experimental studies. A graphical example evaluating the utility of fingerprint quality metrics can be seen in Figure 5. However, as mentioned before, the efficacy of a quality algorithm is usually tied to a particular recognition algorithm. This can be seen in the example of Figure 5, in which each quality metric results in different performance improvement for the same fraction of rejected low quality samples. It should be also noted that, although biometric matching involves at least two samples, they are not acquired at the same time. Reference samples are stored in the system database and are later compared with new samples provided during the operation of the system. Therefore, a quality algorithm should be able to work with individual samples, even though its ultimate intention is to improve recognition performance when matching two (or more) samples.
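As an illustration of the utility-based evaluation just described (rejecting the lowest-quality samples and checking how quickly error rates fall), the sketch below computes a simple error-versus-reject curve. It is a schematic example under assumed inputs, not the exact protocol of [3].

import numpy as np

def error_vs_reject(quality, genuine_scores, threshold, fractions):
    # quality:        per-comparison quality value (e.g. the minimum of the two samples' qualities)
    # genuine_scores: matcher similarity scores for genuine comparisons (same length as quality)
    # Returns the false non-match rate at a fixed threshold after discarding the worst fraction f
    order = np.argsort(quality)            # lowest quality first
    n = len(genuine_scores)
    fnmr = []
    for f in fractions:
        keep = order[int(f * n):]          # reject the lowest-quality fraction f
        fnmr.append(float(np.mean(genuine_scores[keep] < threshold)))
    return np.array(fnmr)

# A useful quality measure should make the returned FNMR values decrease quickly as the rejected fraction grows.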
VI. HUMAN VS. AUTOMATIC QUALITY ASSESSMENT
There is an established community of human experts in
recognizing biometric signals for certain applications (e.g.
signatures on checks or fingerprints in forensics) and the use
of manual quality verification is included in the workflow of
some biometric applications (e.g. immigration screening and
passport generation). A common assumption here is that
human assessment of biometric quality is an appropriate gold
standard against which biometric sample quality measures
should be measured [21]. Also, many authors make use of
datasets with manually labelled quality measures to optimize
and test their quality assessment algorithms.
To the best of our knowledge, the only study aimed at testing the relevance of human evaluations of biometric sample quality is [21]. From this study, it is evident that human and
computer processing are not always functionally comparable.
For instance, if a human judges a face or iris image to be
good because of its sharpness, but a recognition algorithm
works in low frequencies, then the human statement of
quality is inappropriate. The judgement of human inspectors
can be improved by adequate training on the limitations of
the recognition system, but this could be prohibitively
expensive and time consuming. In addition, there are other
implications in incorporating a human quality checker, such
as tiredness, boredom or lack of motivation that a repetitive
task like this may cause in the operator, as pointed out in
Section IV. A comprehensive analysis of factors leading to
errors related with human-assisted operation is given by
Wertheim [22].
VII. INCORPORATING QUALITY MEASURES IN BIOMETRIC
SYSTEMS
The incorporation of quality measures in biometric
systems is an active field of research, with many solutions
proposed. Different uses of sample quality measures in the
context of biometric systems have been identified throughout
this paper. These are summarized in Table III [7], [8]. It
should be noted that these roles are not mutually exclusive.
Indeed, prevention of poor quality data requires a holistic,
system-wide focus involving the whole operation of the
biometric system [23]. It is not within the scope of this paper to provide a comprehensive list of references; we refer the interested reader to the surveys contained in references [3], [10], [12], [13], [23].
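As one concrete illustration of quality-based fusion (one of the roles summarized in Table III), the snippet below weights the match scores of individual matchers by the quality of the samples involved before combining them. This is a generic sketch with assumed inputs, not a specific algorithm from the cited surveys.

import numpy as np

def quality_weighted_fusion(scores, qualities):
    # scores:    per-matcher similarity scores, already normalized to a common range
    # qualities: quality of the samples used by each matcher, in [0, 1]
    weights = qualities / (qualities.sum() + 1e-12)   # low-quality sources contribute less
    return float(np.dot(weights, scores))

# Example: a sharp fingerprint dominates a blurred face image in the fused decision.
fused = quality_weighted_fusion(np.array([0.82, 0.40]), np.array([0.9, 0.3]))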
VIII. STANDARDIZING BIOMETRIC QUALITY
It should be noted that adherence to standards for sensors, software, interfaces, etc. is recommended throughout the
quality assurance process. With the use of standards, great
flexibility and modularity is obtained, as well as fast
technology interchange, sensor and system interoperability,
and proper interaction with external security systems.
Figure 4. Common properties measured by biometric quality
assessment algorithms. References to particular implementations
are given in Section V.
Figure 5. Evaluating the utility of four fingerprint quality
measures. Results show the verification performance as samples
with the lowest quality value are rejected. The similarity scores
come from the publicly available minutia-based matcher released
by the National Institute of Standards and Technology (NIST).
Data is from the BioSec multimodal database [20]. Different
performance improvement for the same fraction of rejected
samples suggests different efficacy of each measure for the
particular recognition algorithm and/or sensor evaluated. (Figure
extracted from Alonso-Fernandez et al. [14])
1) Recapture loop: implementation of an “up to three attempts” policy, giving feedback in each subsequent acquisition to improve quality; selection of the
best signal from a video stream. 2) Quality-based processing: quality-specific enhancement algorithms; conditional execution of processing chains,
including specialized processing for poor quality data; extraction of features robust to the degradation that the signal is suffering; extraction of features
from useful regions only; ranking of extracted features based on quality of local regions. 3) Update of enrolment data/database maintenance: storage of
multiple samples representing the variability associated with the user (e.g. different portions of the fingerprint to deal with partially overlapped fingerprints,
face from multiple viewpoints); update of stored samples with ones of better quality captured during the system operation [12]. 4) Quality-based
matching, decision and fusion: use of different matching/fusion algorithms; adjustment of the sensitivity of the matcher/fusion algorithm; quantitative
indication of the reliability of the acceptance/rejection decision; quality-driven selection of data sources to be used for matching/fusion, e.g. weighting
schemes to quality-based ranked features/data sources; use of soft-biometric traits (age, height, sex, etc.) to assist in the recognition. 5) Monitoring and
reporting across different parts of the system to identify problems that lead to poor quality signals and initiate corrective actions. Different aspects that can
be monitored and reported include signal quality [13]: By application, as different applications may require different scanners, environment setup, etc., and
this may impact differently on the overall quality of acquired signals. By site/terminal, to identify abnormal sites/terminals due to operator training,
operational and environmental conditions, etc. By capture device, to assess the impact due to different acquisition principles, mechanical design, etc., and if
a specific scanner does not provide signals that satisfy our quality criteria. By subject, to identify interaction learning curves, which can help to better train
new users and alleviate the “first time user” syndrome [9]. By stored template, to detect how the database quality is varying when new templates are stored
or old ones are updated. By biometric input, in the case that multiple biometric traits are being used, to improve the way in which they are combined. Trend
analysis, providing statistics of all applications, sites, etc., allowing to identify trends in signal quality or sudden changes that need further investigation.
TABLE III
ROLES OF A SAMPLE QUALITY MEASURE IN THE CONTEXT OF BIOMETRIC SYSTEMS.
Standards compliance allows for replacement of parts of
deployed systems with various technological options coming
from open markets. As biometric technology is extensively
deployed, a common situation is the exchange of information
between several multi-vendor applications of different
agencies, involving heterogeneous equipment, environments
or locations [1]. In response to a need for interoperability,
biometric standards have been developed to allow modular
integration of products, also facilitating future upgrades to
newer developments. Examples of interoperable scenarios
are the use of e-Passports readable by different countries, or
the exchange of lists of criminals among Security Forces. A
list of standards organizations and other bodies working in
biometric standards development is given in Table IV.
Current efforts in developing biometric standards [24, 25] are
focused on acquisition practices, sensor specifications, data
formats and technical interfaces, as we plot in Figure 6 and
Table V. In addition, although particularly aimed at assisting US federal agencies in the development and implementation of biometric programs, there is a “Registry of USG Recommended Biometric Standards” (www.biometrics.gov/Standards) with some high level guidance with respect to its implementation.
Concerning the incorporation of quality information, most of the standards define a specific quality score field aimed to incorporate quality measures. However, its content is not explicitly defined or is somewhat subjective due to the lack of consensus on i) how to provide universal quality
measures interpretable by different algorithms or ii) what are
the key factors that define quality in a given biometric trait.
These problems are being addressed in the multipart
standardization effort ISO/IEC 29794-1/4/5. A prominent
approach within this standard is the Quality Algorithm
vendor ID (QAID), which incorporates standardized data
fields that uniquely identify a quality algorithm, including its vendor, product code and version. QAID fields can be easily added to existing data interchange formats such as the Common Biometric Exchange Formats Framework (CBEFF), enabling a modular multi-vendor environment that accommodates samples scored by different quality algorithms in existing data interchange formats.
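To illustrate how a QAID-style quality block might be represented in practice, here is a hypothetical sketch of a record carrying a quality score together with the identity of the algorithm that produced it. The field names are assumptions for illustration and do not reproduce the exact ISO/IEC 29794 or CBEFF encodings.

from dataclasses import dataclass
from typing import Optional

@dataclass
class QualityBlock:
    # Quality score plus provenance, in the spirit of the QAID concept
    score: int            # quality value, e.g. 0-100
    vendor_id: int        # identifies the quality-algorithm vendor
    product_code: int     # identifies the specific quality algorithm
    version: int          # algorithm version

@dataclass
class BiometricSampleRecord:
    modality: str                     # e.g. "fingerprint", "iris"
    data: bytes                       # encoded biometric sample
    quality: Optional[QualityBlock]   # optional quality block attached to the sample

record = BiometricSampleRecord("fingerprint", b"...", QualityBlock(78, 0x0101, 0x0007, 1))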
IX. ISSUES AND CHALLENGES
This paper gives an overall framework of the main factors
involved in the biometric quality measurement problem. The
increasing development of biometrics in the last decade,
related to the number of important applications where a
correct assessment of identity is a crucial point, has not been
followed by extensive research on biometric data quality [3].
A significant improvement in performance in less controlled
International Standards Organizations
ISO-JTC1/SC37: International Organization for Standardization, Committee 1 on Information Technology, Subcommittee 37 for Biometrics (www.iso.org/iso/jtc1 sc37 home)
IEC: International Electrotechnical Commission (www.iec.ch)
National Standards Bodies (NSBs)
ANSI: American National Standards Institute (www.ansi.org)
Standards-Developing Organizations (SDOs)
INCITS M1: InterNational Committee for Information Technology Standards, Technical Committee M1 on Biometrics (http://standards.incits.org/a/public/group/m1)
NIST-ITL: American National Institute of Standards and Technology, Information Technology Laboratory (www.nist.gov/itl)
ICAO: International Civil Aviation Organization (www.icao.int)
Other non-SDOs participating in standards development efforts
BC: Biometrics Consortium (www.biometrics.org)
IBG: International Biometrics Group (www.ibgweb.com)
IBIA: International Biometric Industry Association (www.ibia.org)
DoD-BIMA: American Department of Defence, Biometrics Identity Management Agency (www.biometrics.dod.mil)
FBI-BCOE: American Federal Bureau of Investigation, Biometric Centre of Excellence (www.biometriccoe.gov)
TABLE IV
STANDARDS ORGANIZATIONS AND OTHER BODIES WORKING IN BIOMETRIC STANDARDS DEVELOPMENT (ALL LINKS ACCESSED OCTOBER 2011).
situations is one of the main challenges facing biometric
technologies [1]. Now that there is international consensus
that a statement of a biometric sample’s quality should be
related to its recognition performance, efforts are going towards a harmonized and universal interpretation of
quality measures by defining the key factors that need to be
assessed in each biometric trait [25], and by setting good
acquisition practices [7]. This will enable a competitive
multi-vendor marketplace, allowing
interoperability of
multiple vendors’ quality assessment algorithms.
A biometric system has to be resilient in processing data
with heterogeneous quality yet providing good recognition
performance. Although there are several corrective actions
that can be performed to improve the quality of acquired
signals [7], some factors fall out of our control or cannot be
avoided. In this respect, specially challenging scenarios for
biometrics are the ones based on portable devices, and/or
remote access through Internet or acquisition at-a-distance.
These are expected to work in an unsupervised environment,
with no control on the ambient noise, on the user-sensor
interaction process, or even on the sensor maintenance.
Another very important field with inherent degraded conditions is forensics. Therefore, it is very important upon capture of biometric samples to assess their quality as well as having specific developments for poor quality signals [3].
Quality is intrinsically multi-dimensional, with factors of
very different nature affecting it [6], [7], [8], [9]. A biometric
system must adequately address this multifactor nature.
There are a number of things that quality measures can do in
BioApi (Biometric Application Programming Interface), defines
architecture and necessary interfaces to allow biometric applications
to be integrated from modules of different vendors. Versions 1.0 and
1.1 were produced by the BioAPI Consortium, a group of over 120
companies and organizations with interest in biometrics. BioAPI 2.0
is specified in the ISO/IEC 19784-1 standard (published May 2006).
CBEFF (Common Biometric Exchange Formats Framework),
supports exchange of biometrics information between different
system components or systems. Developed from 1999 to 2000 by the
CBEFF Development Team (NIST) and the BioAPI Consortium.
FBI-WSQ (FBI Wavelet Scalar Quantization) image compression algorithm for fingerprint images, developed to archive the large FBI fingerprint database. Developed by the FBI and the NIST.
FBI-EBTS (FBI Electronic Biometric Transmission Specification), DoD-EBTS (DoD Electronic Biometric Transmission Specification), DHS-IDENT-IXM (DHS Automated Biometric Identification System-Exchange Messages Specification) for exchange of biometric data with the FBI, DoD and DHS biometric applications, respectively. FBI-EBTS and DoD-EBTS are particular implementations of the ANSI/NIST ITL 1-2007 standard, customized to the needs of the FBI and the DoD. FBI-EBTS v9.2 released on May 2011. DoD-EBTS v2.0 released in March 2009. DHS-IDENT-IXM v5.0 released in November 2009.
ANSI/NIST-ITL 1-2000 for exchange of biometric data between
law enforcement and related criminal justice agencies, including
fingerprint, facial, scar, mark, and tattoo data.
ANSI/NIST-ITL 1-2007/2-2008 and ISO/IEC-19794 multipart standards that specify a common format to exchange and store a variety of biometric data including face, fingerprint, palm print, iris, voice and signature data.
Annex to ISO/IEC-19794-5 with recommendations for face photo
taking for E-passport and related applications, including indications
about lighting and camera arrangement, and head positioning.
ISO/IEC 29794-1/4/5 multi-part standard to enable harmonized
interpretation of quality scores from different vendors, algorithms
and versions by setting the key factors that define quality in different
biometric traits. It also addresses the interchange of biometric quality
data via the multipart ISO/IEC 19794 Biometric Data Interchange
Format Standard.
TABLE V
AVAILABLE BIOMETRIC STANDARDS (WITH RESPONSIBLE
AGENCIES AND LATEST VERSION AVAILABLE).
the context of biometric systems to improve the overall
performance, such as altering
the sample processing/
comparison process, or weighting the results from different
systems depending on the quality. Recent independent
evaluations of commercial and research prototypes are also
starting to include quality studies in their scenarios, as the
BioSecure Multimodal Evaluation Campaign
in 2007
(www.int-evry.fr/biometrics/BMEC2007) or the Noisy Iris
Challenge Evaluation in 2009 (http://nice2.di.ubi.pt). Some
research works have dealt with these matters, but much work
is still to be done in this area.
ACKNOWLEDGEMENTS
Work of F.A.-F. at ATVS/Biometric Recognition Group has been supported by a Juan de la Cierva postdoctoral Fellowship from the Spanish MICINN. F. A.-F. also thanks the Swedish Research Council (Vetenskapsrådet) and the European Commission for funding his postdoctoral research at Halmstad University. This work was also supported by projects Contexts (S2009/TIC-1485) from CAM, Bio-Challenge (TEC2009-11186) from Spanish MICINN, TABULA RASA (FP7-ICT-257289) and BBfor2 (FP7-ITN-238803) from EU, and Cátedra UAM-Telefónica. The authors would also like to thank the Spanish Dirección General de la Guardia Civil for their support to the work.
REFERENCES
[1] A. K. Jain and A. Kumar, Second Generation Biometrics. Springer, 2010, ch. Biometrics of Next Generation: An Overview.
[2] BQW, NIST Biometric Quality Workshop, Gaithersburg, MD, USA, November 7-8, 2007 - www.nist.gov/itl/iad/ig/bio_quality_wkshopii.cfm
[3] P. Grother and E. Tabassi, “Performance of Biometric Quality Measures,” IEEE Trans. on Pattern Analysis and Machine Intelligence, 29(4), pp. 531–543, 2007.
[4] A. K. Jain, B. Klare, and U. Park, “Face recognition: Some challenges in forensics,” Proc. Intl. Conf. on Automatic Face and Gesture Recognition, FG, 2011.
[5] J. Ortega-Garcia, J. Fierrez, F. Alonso-Fernandez, J. Galbally, M. Freire, J. Gonzalez-Rodriguez, C. Garcia-Mateo, J. Alba-Castro, E. Gonzalez-Agulla, E. Otero-Muras, S. Garcia-Salicetti, L. Allano, B. Ly-Van, B. Dorizzi, J. Kittler, T. Bourlai, N. Poh, F. Deravi, M. Ng, M. Fairhurst, J. Hennebert, A. Humm, M. Tistarelli, L. Brodo, J. Richiardi, A. Drygajlo, H. Ganster, F. Sukno, S. Pavani, A. Frangi, L. Akarun, and A. Savran, “The Multi-Scenario Multi-Environment BioSecure Multimodal Database (BMDB),” IEEE Trans. on Pattern Analysis and Machine Intelligence, 32(6), pp. 1097–1111, 2009.
[6] E. P. Kukula, M. J. Sutton, and S. J. Elliott, “The Human-Biometric-Sensor Interaction Evaluation Method: Biometric Performance and Usability Measurements,” IEEE Trans. on Instrumentation and Measurement, 59(4), pp. 784-791, 2010.
[7] J.-C. Fondeur, “Thoughts and Figures on Quality Measurements,” Proc. NIST Biometric Quality Workshop I, Gaithersburg, MD, USA, March 8-9, 2006 - www.nist.gov/itl/iad/ig/bio_quality_wkshopi.cfm
[8] T. Mansfield, “The Application of Quality Scores in Biometric Recognition,” Proc. NIST Biometric Quality Workshop II, Gaithersburg, USA, Nov. 2007 - www.nist.gov/itl/iad/ig/bio_quality_wkshopii.cfm
[9] M. Theofanos, B. Stanton, R. Micheals, and S. Orandi, “Biometrics Systematic Uncertainty and the User,” Proc. IEEE Conference on Biometrics: Theory, Applications and Systems, BTAS, Washington DC (USA), 2007.
[10] F. Alonso-Fernandez, “Biometric Sample Quality and
its
Application to Multimodal Authentication Systems,” Ph.D. dissertation,
Universidad Politecnica de Madrid, Madrid, Spain, 2008, available
online at http://atvs.ii.uam.es (publications).
[11] J. Fierrez-Aguilar, J. Ortega-Garcia, J. Gonzalez-Rodriguez, and J.
Bigun, “Discriminative Multimodal Biometric Authentication Based on
Quality Measures,” Pattern Recognition, 38(5), pp. 777–779, 2005.
[12] A. Rattani, B. Freni, G. Marcialis, and F. Roli, “Template update
methods in adaptive biometric systems: A critical review,” Proc.
International Conference on Biometrics, ICB, Springer LNCS-5558, pp.
847–856, 2009.
[13] T. Ko and R. Krishnan, “Monitoring and Reporting of Fingerprint
Image Quality and Match Accuracy for a Large User Application,” Proc.
33rd Applied Image Pattern Recognition Workshop, pp. 159–164, 2004.
[14] F. Alonso-Fernandez, J. Fierrez, J. Ortega-Garcia, J. Gonzalez-Rodriguez, H. Fronthaler, K. Kollreider, and J. Bigun, “A Comparative Study of Fingerprint Image Quality Estimation Methods,” IEEE Trans. on Information Forensics and Security, 2(4), pp. 734–743, December 2007.
[15] N. D. Kalka, J. Zuo, N. A. Schmid, and B. Cukic, “Estimating and
Fusing Quality Factors for Iris Biometric Images,” IEEE Trans. On
Systems, Man and Cybernetics, Part A: Systems and Humans, 40(3), pp.
509–524, 2010.
[16] A. Harriero, D. Ramos, J. Gonzalez-Rodriguez, and J. Fierrez,
“Analysis of the Utility of Classical and Novel Speech Quality Measures
for Speaker Verification,” Proc. International Conference on Biometrics,
ICB, Springer LNCS-5558, pp. 434–442, 2009.
[17] D. DAmato, N. Hall, and D. McGarry, “The specification and
measurement of face image quality,” Performance Testing Conference,
IBPC, http://www.nist.gov/itl/iad/ig/ibpc2010.cfm, 2010.
[18] N. Houmani, S. Garcia-Salicetti, and B. Dorizzi, “A Novel Personal
Entropy Measure Confronted With Online Signature Verification
Systems Performance,” Proc. IEEE Conference on Biometrics: Theory,
Applications and Systems, BTAS, Washington DC (USA), pp. 1–6,
2008.
[19] R. Youmaran and A. Adler, “Measuring Biometric Sample Quality
in Terms of Biometric Information,” Proc. of Biometric Symposium,
Biometric Consortium Conference, Baltimore, Maryland (USA), 2006.
[20] J. Fierrez, J. Ortega-Garcia, D. Torre-Toledano, and J.
Gonzalez-Rodriguez, “BioSec baseline corpus: A multimodal biometric
database,” Pattern Recognition, 40(4), pp. 1389–1392, April 2007.
[21] A. Adler and T. Dembinsky, “Human vs. Automatic Measurement
of Biometric Sample Quality,” Canadian Conference on Electrical and
Computer Engineering, CCECE, 2006.
[22] K. E. Wertheim, “Human factors in large-scale biometric systems:
A study of the human factors related to errors in semiautomatic
fingerprint biometrics,” IEEE Systems Journal, 4(2), pp. 138-146, 2010.
[23] A. Hicklin and R. Khanna, “The role of data quality in biometric
systems,” Mitretek Systems, Tech. Rep., February 2006. [Online].
Available: http://www.mitretek.org/Role of Data Quality Final.pdf
[24] Registry of USG Recommended Biometric Standards, http://www.biometrics.gov/standards/Biometric Standards Registry v2.pdf, August 2009.
[25] E. Tabassi and P. Grother, Encyclopedia of Biometrics. Springer,
2009, ch. Biometric Sample Quality, Standardization.
Figure 6. Use of standards in biometric systems to ensure good
quality signals. See Table V for a more detailed description.
Fernando Alonso-Fernandez received the M.S. degree in 2003 with Distinction and the Ph.D. degree “cum laude” in 2008, both in Electrical Engineering, from Universidad Politecnica de Madrid (UPM), Spain. Since 2004, he has been affiliated with the Biometric Recognition Group (ATVS), first working towards the Ph.D. degree, and later as postdoctoral researcher. He is currently a postdoctoral researcher at the
Intelligent Systems Laboratory (IS-lab), Halmstad University, Sweden,
under a postdoctoral fellowship of the Swedish Research Council
(Vetenskapsrådet) and a Marie Curie fellowship of the European
Commission. His research interests include signal and image processing,
pattern recognition and biometrics. He has published several journal and
conference papers and he has been actively involved in European
projects focused on biometrics (e.g., Biosecure NoE, COST 2101). He
has participated in the development of several systems for a number of
biometric evaluations (e.g. SigComp 2009, LivDet 2009, BMEC 2007).
Dr. Alonso-Fernandez has been invited researcher in several laboratories
across Europe, and is the recipient of a number of distinctions for his
research, including: best Ph.D. Thesis on Information and Communication Technologies applied to Banking in 2010 by the Spanish College of Telecommunication Engineers (COIT), and Doctorate Extraordinary Award in 2011 by Universidad Politecnica de Madrid to outstanding Ph.D. Thesis.
Julian Fierrez-Aguilar received the M.Sc. and the Ph.D. degrees in
telecommunications engineering from Universidad Politecnica de
Madrid, Madrid, Spain, in 2001 and 2006, respectively. Since 2002 he
has been affiliated with the Biometric Recognition Group (ATVS), first
at Universidad Politecnica de Madrid, and since 2004 at Universidad
Autonoma de Madrid, where he is currently an Associate Professor.
From 2007 to 2009 he was a visiting researcher at Michigan State
University in USA under a Marie Curie fellowship. His research
interests and areas of expertise include signal and image processing,
pattern recognition, and biometrics, with emphasis on signature and
fingerprint verification, multi-biometrics, biometric databases, and
system security. Dr. Fierrez has been and is actively involved in
European projects focused on biometrics, and is the recipient of a
number of distinctions for his research, including: best Ph.D. thesis in
computer vision and pattern recognition in 2005-2007 by the IAPR
Spanish liaison (AERFAI), Motorola best student paper at ICB 2006,
EBF European Biometric Industry Award 2006, and IBM best student
paper at ICPR 2008.
Javier Ortega-Garcia received the M.Sc. degree in electrical engineering (Ingeniero de Telecomunicación), in 1989; and the Ph.D.
degree ”cum laude” also in electrical engineering (Doctor Ingeniero de
Telecomunicación), in 1996, both from Universidad Politécnica de
Madrid, Spain. Dr. Ortega-Garcia is founder and co-director of ATVS
research group. He is currently a Full Professor at the Escuela
Politécnica Superior, Universidad Autónoma de Madrid, where he
teaches Digital Signal Processing and Speech Processing courses. He
also teaches a Ph.D. degree course in Biometric Signal Processing. His
research interests are focused on biometrics signal processing: speaker
recognition, face recognition, fingerprint recognition, on-line signature
verification, data fusion and multimodality in biometrics. He has
published over 150 international contributions, including book chapters,
refereed journal and conference papers. He chaired “Odyssey-04, The
Speaker Recognition Workshop”, co-sponsored by IEEE. Since 2008 he
is a Senior member of the IEEE.
|
synthetic_cpt | 8 | Beware_of_Calibration_Data_for_Pruning_Large_Language_Models.pdf | 4
2
0
2
t
c
O
3
2
]
L
C
.
s
c
[
1
v
1
1
7
7
1
.
0
1
4
2
:
v
i
X
r
a
Preprint
BEWARE OF CALIBRATION DATA FOR PRUNING
LARGE LANGUAGE MODELS
Yixin Ji1, Yang Xiang1, Juntao Li1∗,
Qingrong Xia2, Ping Li2, Xinyu Duan2, Zhefeng Wang2, Min Zhang1
1School of Computer Science and Technology, Soochow University
2Huawei Cloud, China
{jiyixin169,baldwin021129}@gmail.com;
{ljt,minzhang}@suda.edu.cn
ABSTRACT
As large language models (LLMs) are widely applied across various fields, model
compression has become increasingly crucial for reducing costs and improving
inference efficiency. Post-training pruning is a promising method that does not
require resource-intensive iterative training and only needs a small amount of
calibration data to assess the importance of parameters. Previous research has
primarily focused on designing advanced pruning methods, while the impact of different calibration data on pruning performance still lacks systematic exploration. We fill this gap and surprisingly observe that the effects of calibration data can matter even more than the design of advanced pruning strategies, especially at high sparsity. Our preliminary exploration also discloses that using calibration data similar
to the training data can yield better performance. As pre-training data is usually
inaccessible for advanced LLMs, we further provide a self-generating calibration
data synthesis strategy to construct feasible calibration data. We conduct experiments on recent strong open-source LLMs (e.g., DCLM and LLaMA-3), and
the results show that the proposed method outperforms commonly used calibration
data and can effectively enhance strong pruning methods (e.g., Wanda, OWL).
1
INTRODUCTION
Recently, Large Language Models (LLMs) have exhibited remarkable performance and enormous
potential in Natural Language Processing (NLP) and Artificial Intelligence (AI) (OpenAI, 2022;
2023; Bubeck et al., 2023; Yang et al., 2023). The success of LLMs is closely tied to scaling
laws (Kaplan et al., 2020; Hoffmann et al., 2022): training language models with more parameters,
using more data and greater computational resources leads to more powerful capabilities. However,
LLMs with more parameters increase the difficulty and cost of deployment and inference. Therefore,
much work has been devoted to compressing LLMs to achieve a trade-off between efficiency and
performance, such as pruning (Frantar & Alistarh, 2023; Ma et al., 2023; Xia et al., 2024) and
quantization (Frantar et al., 2023; Huang et al., 2024; Shao et al., 2024).
Pruning is a model compression technique that has evolved over many years (LeCun et al., 1989)
and remains full of potential and challenges. Based on the over-parameterization of neural networks,
it aims to remove redundant parameters while minimizing the degradation of model performance.
Pruning has been successfully applied to compress small to medium-sized neural networks. Through
sparse training (Lee et al., 2019; Frankle & Carbin, 2019; Yuan et al., 2021; Lasby et al., 2024) or
pruning-aware training (Sanh et al., 2020; Lagunas et al., 2021; Jiang et al., 2023) methods, it can
achieve performance comparable to dense models with a high sparsity ratio (≥70%). However, these
methods require iterative training, which is costly and time-consuming for LLMs with billions of
parameters. As a result, post-training pruning that does not require iterative training has become the
preferred approach for pruning LLMs.
The challenge of post-training pruning is how to perform training-free parameter importance es-
timation. Frantar & Alistarh (2023) note that simple parameter magnitude-based metrics perform
∗ Corresponding author.
Figure 1: The effects of pruning methods and calibration data on commonsense reasoning tasks. (a) Performance differences of representative pruning methods with the commonly-used C4 calibration data. (b) Performance differences of various calibration data on SparseGPT. (c) Method differences vs. data differences.
poorly in post-training pruning with over 20% sparsity. Therefore, they use a small amount of
calibration data to compute the inverse Hessian matrix, estimating parameter importance through
second-order gradient information. Sun et al. (2024) propose a simpler method by using the product
of weight magnitudes and the L2 norm of the corresponding input activations. Dong et al. (2024)
utilize the genetic algorithm to search for the optimal combination of information from magnitude,
activation, and gradient as an importance metric. Overall, current advanced parameter importance
metrics rely on calibration data. Although most papers claim their pruning methods are robust to
calibration data, Williams & Aletras (2024)’s empirical study challenges this view. They demon-
strate the performance differences of various methods using different calibration data. Furthermore,
our experiments revealed that the performance gains from selecting better calibration data can even
surpass those of advanced pruning methods (Figure 1). Therefore, it is time to focus more research
on calibration data.
However, many open questions regarding calibration data remain under-explored. For example, how
does the impact of calibration data change with increased sparsity and structure of pruning? Can in-
creasing the amount of calibration data narrow the performance gap between various datasets? What
type of data is suitable for calibration? How do we select the appropriate calibration data in practice?
In this paper, we investigate these questions. Our empirical results demonstrate that as sparsity and
structure increase, the performance differences among different calibration data become more pro-
nounced, and simply increasing the data volume does not reduce this disparity. We further find that
the selection of calibration data is closely related to the LLM’s training data, with calibration data
similar to the training data yielding better performance. Based on this, we propose two strategies,
detection and self-generation, aimed at sampling appropriate calibration data for pruning in practi-
cal settings with unavailable training data. To evaluate the effectiveness of our proposed calibration
data sampling method, we conduct experiments on DCLM, LLaMA-2, and LLaMA-3 models. The
results show that our proposed method performs better than the commonly used calibration data and
is compatible with strong pruning methods by substantially improving their performance.
2 BACKGROUND
Model compression is a crucial way to improve inference efficiency by reducing the required mem-
ory, including pruning (Frantar & Alistarh, 2023; Sun et al., 2024; Ma et al., 2023; Yin et al., 2024a;
Guo et al., 2023; Zhang et al., 2024b; Xia et al., 2024), quantization (Frantar et al., 2023; Xiao et al.,
2023; Lin et al., 2024; Huang et al., 2024; Shao et al., 2024), low-rank decomposition (Kaushal et al.,
2023; Yuan et al., 2024; Wang et al., 2024; Ji et al., 2024), etc. The enormous memory requirements
and inefficient inference speeds for LLMs urgently necessitate model compression. However, many
successful model compression methods have required substantial computational resources for re-
training, which limits their application for LLMs in low-resource settings. Therefore, post-training
compression, which does not require retraining, has become a current research focus.
Post-training compression methods typically approximate model compression as an optimization
problem for layer-wise compression (Frantar & Alistarh, 2022):
\min_{\hat{W}_l} \| W_l X_l - \hat{W}_l X_l \|_F, \qquad (1)
where Wl, ˆWl are the original and compressed l-th linear layer, respectively, and Xl is the input fea-
ture activation. For post-training pruning, to optimize the objective, Frantar & Alistarh (2022; 2023)
utilize second-order gradient information to measure parameter importance and propose an efficient
algorithm for computing the inverse Hessian matrix. Sun et al. (2024) evaluate weight importance
simply by the product of weight magnitudes and the L2 norm of the corresponding input activation
without requiring backpropagation. Zhang et al. (2024c) propose the relative importance and acti-
vation metric, which integrates weight, input, and output activation. They also utilize the channel
permutation to minimize pruning loss under N:M semi-structured pruning. Dong et al. (2024) pro-
pose a search framework that employs the genetic algorithm to discover the optimal pruning metric
for LLMs automatically. Recently, several studies (Sung et al., 2024; Xu et al., 2024a; Yin et al.,
2024b) indicate that layer-wise compression, which typically applies a uniform sparsity rate across
all layers and evaluates weight importance within the layer, often results in suboptimal performance
due to the lack of overall consideration. Specifically, Xu et al. (2024a) proposes a differentiable
pruning framework designed to search for optimal pruning rates for each layer. Yin et al. (2024b)
introduce outlier weighed layerwise sparsity, which relates the sparsity of each layer to the observed
outliers in a proportional manner.
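To make the magnitude-times-activation idea concrete, the following is a simplified PyTorch-style sketch of a Wanda-like importance score and row-wise layer pruning. It omits many details of the actual implementations (comparison groups, calibration batching, semi-structured N:M patterns), and the tensor names are assumptions.

import torch

def wanda_importance(weight, act_norm):
    # weight:   (out_features, in_features) linear-layer weight
    # act_norm: (in_features,) L2 norm of each input channel over the calibration set
    return weight.abs() * act_norm.unsqueeze(0)

def prune_layer(weight, act_norm, sparsity=0.5):
    # Zero out the lowest-importance weights within each output row
    score = wanda_importance(weight, act_norm)
    k = int(weight.shape[1] * sparsity)                       # weights to remove per row
    prune_idx = torch.topk(score, k, dim=1, largest=False).indices
    mask = torch.ones_like(weight)
    mask.scatter_(1, prune_idx, 0.0)
    return weight * mask

# act_norm is typically accumulated with forward hooks over the calibration data,
# e.g. act_norm = (sum of x**2 over all calibration tokens).sqrt()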
In the aforementioned post-training compression methods, calibration data is an indispensable com-
ponent. Calibration data is a small subset randomly sampled from unlabeled pretraining text. Many
methods (Frantar & Alistarh, 2023; Sun et al., 2024; Dettmers et al., 2024) claim their robustness
to the quantity and distribution of calibration data, requiring only dozens or hundreds of samples
with 2,048 sequence length. However, this conclusion is based on the perplexity of certain datasets
(such as Wikitext2), which does not fully reflect the true capabilities of the LLMs. Even if per-
plexity shows no significant change, the compressed model may still experience substantial per-
formance declines in downstream tasks (Jaiswal et al., 2024). Williams & Aletras (2024) observe
in extensive experiments that the selection of calibration data in post-training pruning and quantiza-
tion methods significantly impacts downstream tasks’ performance, especially post-training pruning,
which is highly sensitive to calibration data. Nevertheless, current research on calibration data re-
mains under-explored, with few studies providing guidelines for selecting calibration data. Khanal
& Capone (2024) suggest that using task-specific calibration data helps improve performance on
specific downstream tasks. Unlike their research, this paper aims to provide guidance on selecting
calibration data to enhance the general capabilities of compressed models.
3 THE IMPACT OF CALIBRATION DATA FOR PRUNING
Though Williams & Aletras (2024) have noted that calibration data significantly impacts post-
training pruning, there exist many open questions. How much does calibration data affect prun-
ing performance? How does the amount of calibration data affect compressed model performance?
What data sources are more suitable for calibration? We investigate these questions in this section.
3.1 EXPERIMENTAL DETAILS
Dense Model To study the impact of data from different sources on post-training pruning methods,
we need a comprehensive knowledge of the data used in model training. We select the powerful and
fully open-source LLM (including training data), DCLM-7B1 (Li et al., 2024), as the dense model
and conduct post-training pruning with different calibration data on it.
Post-training Pruning Methods We choose three competitive and representative post-training
pruning methods for evaluation: Wanda (Sun et al., 2024), DSnoT (Zhang et al., 2024d) and
OWL (Yin et al., 2024b). These methods apply to both unstructured and semi-structured pruning.
1https://huggingface.co/apple/DCLM-7B
Figure 2: Pruning performance range (Max.−Min.) of different datasets (C4, Wikipedia, Slimpajama, DCLM) under various sparsity ratios (a) and sparsity types (b) on Wanda.
Calibration Data We consider various data sources to be calibration data. Following the main-
stream works, the calibration data sources are all from the unlabeled pre-trained corpus:
• C4 (Raffel et al., 2020)2 is a widely used calibration data source, consisting of a large amount of
multilingual web text filtered from Common Crawl. We sample from the English training set.
• Wikipedia3 is a source of high-quality encyclopedic text. We use the first shard of the cleaned
English version until 2023-11-01.
• Slimpajama4 is a cleaned and deduplicated version of RedPajama. It is a high-quality pre-training
corpus with diverse sources, including C4, ArXiv, GitHub, Books, etc.
• DCLM (Li et al., 2024) is the pre-training data of DCLM-7B model.
extracted from Common Crawl. We sample from a subset5 of the DCLM.
It includes 2.6T tokens
Aside from the experiments in Section 3.3, we follow prior works and randomly sample 128 se-
quences with 2048 tokens as calibration data. To mitigate the impact of sampling randomness, all
our experiments repeat the calibration data sampling 20 times with different random seeds and report
the average performance.
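For reference, the sampling procedure described here (random sequences of 2,048 tokens drawn from an unlabeled corpus) can be sketched as follows; the dataset and tokenizer names in the example call are placeholders, not the exact scripts used in this work.

import random
from datasets import load_dataset
from transformers import AutoTokenizer

def sample_calibration_data(dataset_name, tokenizer_name, n_samples=128, seq_len=2048, seed=0):
    # Randomly sample n_samples sequences of seq_len tokens from a pre-training corpus
    random.seed(seed)
    data = load_dataset(dataset_name, split="train")
    tok = AutoTokenizer.from_pretrained(tokenizer_name)
    samples = []
    while len(samples) < n_samples:
        text = data[random.randint(0, len(data) - 1)]["text"]
        ids = tok(text, return_tensors="pt").input_ids
        if ids.shape[1] > seq_len:                       # keep only documents long enough
            start = random.randint(0, ids.shape[1] - seq_len - 1)
            samples.append(ids[:, start:start + seq_len])
    return samples

# e.g. sample_calibration_data("DKYoon/SlimPajama-6B", "apple/DCLM-7B", seed=0)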
Evaluation Tasks Some pruning works focus on the perplexity of certain datasets while neglect-
ing performance on various downstream tasks, which often fails to fully reflect the capabilities of
compressed models. Therefore, we choose multiple widely used and challenging commonsense
reasoning tasks for evaluation, including BoolQ (Clark et al., 2019), Winogrande (Sakaguchi et al.,
2021), PIQA (Bisk et al., 2020), Hellaswag (Zellers et al., 2019), ARC-e, ARC-c (Clark et al., 2018)
and MMLU (Hendrycks et al., 2021). For MMLU, we use a 5-shot setting, while all other tasks are
evaluated in a zero-shot setting. Our evaluation code is based on the lm-evaluation-harness
repository6. We report the average performance of these seven tasks.
3.2 HOW MUCH DOES CALIBRATION DATA AFFECT PRUNING PERFORMANCE?
In practical applications, evaluating and comparing the impact of different calibration data on pruned
models inevitably consumes time and computational resources. Therefore, we wonder how signif-
icant the impact of calibration data is on pruning performance and whether it’s worth our effort
to seek optimal calibration data in research and practice. We consider different sparsity ratios and
sparsity types. Our experiments cover sparsity ratios ranging from 30% to 60%, and at 50% sparsity
ratio, we further compare unstructured, 4:8 semi-structured, and 2:4 semi-structured sparsity types.
2https://huggingface.co/datasets/allenai/c4
3https://huggingface.co/datasets/wikimedia/wikipedia
4https://huggingface.co/datasets/DKYoon/SlimPajama-6B
5https://huggingface.co/datasets/robbiegwaldd/dclm-micro
6https://github.com/EleutherAI/lm-evaluation-harness
Figure 3: The impact of calibration data amount for different pre-training data sources (i.e., C4, Wikipedia, Slimpajama, DCLM) and pruning methods, i.e., Wanda (a) and DSnoT (b). Shaded areas represent the standard deviations of 20 random seeds.
We use Wanda as an example to illustrate the model’s performance range, defined as the difference
between the maximum and minimum values, after pruning with four calibration data sets, as shown
in Figure 2. More details on the performance of the different calibration data can be found in
Figure 6 in Appendix A. Specifically, at low sparsity ratios (<50%), the performance difference
between different calibration data is minimal, less than 0.1%. As sparsity increases, the impact
of calibration data on pruning gradually amplifies, rising from a 0.5% difference at 50% sparsity
to 2.3% at 60% sparsity. Notably, as shown in Figure 6, inappropriate calibration data can even
have a negative effect at moderate sparsity levels. For instance, at 60% sparsity, using Wikipedia
and Slimpajama as calibration data performs worse than magnitude pruning without any calibration
data. For sparsity types, we observe that as the sparsity pattern becomes more structured, the choice
of calibration data becomes increasingly important, with the maximum difference reaching 1.5%
to 1.8%. We also report results on DSnoT and OWL in Appendix A. Although different pruning
methods exhibit varying performance, they show similar trends regarding the impact of calibration
data. Overall, at moderate to high sparsity ratios and with semi-structured sparsity types,
different calibration data significantly affect the performance of pruned LLMs. For all pruning
methods, higher sparsity ratios and more structured sparsity types are key to achieving effective
inference acceleration. Therefore, paying more attention to the choice of calibration data is crucial.
3.3 IS CALIBRATION DATA FROM DIFFERENT SOURCES EQUALLY ROBUST TO DATA AMOUNT?
Currently, almost all post-training pruning methods for LLMs have empirically demonstrated ro-
bustness in terms of the amount of calibration data they use. Typically, model performance reaches
a plateau when the data amount reaches 128, and more calibration data do not lead to additional
performance gains. We wonder whether these methods are equally robust to the amount of data
for calibration data from different sources. Can certain calibration data that lead to poorer pruned
models be improved by increasing the data amount?
We perform Wanda and DSnoT pruning on DCLM-7B in the 2:4 semi-structured pruning setting.
We randomly sample 64, 128, 256, 512, 1024, and 2048 samples from different data sources as
calibration data. Figure 3 shows how the performance of pruned models changes with increasing
data amount using different calibration data. We observe that the average performance of pruned
models is robust to data amount, regardless of the calibration data source, with fluctuations of
only 0.1%-0.2%. Therefore, we cannot expect that increasing the amount of calibration data will
narrow the performance gap between different calibration data. Additionally, as the data amount
increases, the standard deviation of the pruned model’s performance decreases.
& |